\section{Introduction}\label{sec:introduction} Blockchain systems use various leader election schemes for consensus-driven ledger updates. In Proof-of-Work (PoW) applications, for example, the protocol requires network entities to produce a valid solution to a challenge to be able to publish an acceptable block. To avoid concurrent solutions, the challenge complexity is periodically calibrated to comply with the dynamically changing hash rate. An instance of this protocol can be observed in Bitcoin, where the challenge includes producing a \textsl{nonce} string which, when hashed with the block header, produces a value lower than the {\em Target} threshold set by the system. As a reward for the effort, the block releases coins to the miner's account. Once a block is published, all its transactions are approved and the block is added to the blockchain. In cryptocurrencies, miners use sophisticated hardware to increase their chances of solving the proof-of-work. Mining hardware consumes significant electricity to find the correct hash value, and only one solution is considered valid per block. Therefore, the energy used by all the ``losing'' miners is wasted. 
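To make the challenge concrete, the nonce search described above can be sketched in a few lines of Python; the header bytes and the 12-bit difficulty are illustrative assumptions, not Bitcoin's actual parameters:

```python
import hashlib

def mine(header: bytes, target: int) -> int:
    """Search for a nonce such that SHA-256(header || nonce) < target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Illustrative difficulty: the hash must start with 12 zero bits.
target = 1 << (256 - 12)
nonce = mine(b"block-header", target)
digest = hashlib.sha256(b"block-header" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < target
```

Lowering the target raises the expected number of hash evaluations, which is exactly the knob the periodic difficulty calibration turns.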
To make matters worse, multi-homed mining pools have been set up worldwide to facilitate collaborative mining. These mining pools are similar to data center networks, with their own mining rigs, power houses, and cooling systems. Such processing-intensive operations consume large amounts of electricity. Bitcoin alone consumes more than 70 Terawatt-hours per year, which is more than the electricity consumed by countries such as Colombia and Switzerland \cite{DEVRIES2018801,Digiconomist18}. The Bitcoin community has been criticized for enabling the exhaustion of valuable energy resources, and has been encouraged to adopt green mining practices \cite{JacquetM18,XuWLGLYG19}. An energy-efficient consensus alternative is Proof-of-Stake (PoS), which relies on the stake of participants rather than their processing power. In PoS, miners are selected based on their balance (stake), and the network conforms to the view of those miners. In most PoS-based cryptocurrencies, coins are generated at the inception of the cryptocurrency itself. Therefore, there are no block rewards for miners; instead, miners earn by collecting fees from transactions. The selection of miners in PoS is similar to an auction process whereby miners bid for the next block and the miner who places the highest bid wins the auction~\cite{DeuberDMMT18,JiaoWNS19}. However, there is a major caveat in this scheme: a rich miner is likely to win more auctions, which in turn makes this miner richer and increases his chances of winning subsequent auctions. Therefore, in the long term, PoS-based applications tend to become more centralized in favor of a few miners. Another challenge in PoS-based protocols is the lack of \textsl{fairness} policies when the system is under attack. For instance, if an adversary carries out fraudulent activities, such as double-spending, a blockchain fork, or a majority attack, the existing systems do not have policies to penalize the adversary and compensate the victims. 
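The rich-get-richer dynamic of a highest-bid PoS auction can be illustrated with a minimal simulation; the balances and the per-block fee are hypothetical values chosen for illustration:

```python
# Hypothetical balances; the fee value is an assumption for illustration.
balances = {"alice": 100.0, "bob": 60.0, "carol": 40.0}
FEE = 5.0  # reward collected by the winning miner each block

def auction_round(balances):
    """Each miner bids its full balance; the highest bid wins the block."""
    winner = max(balances, key=balances.get)
    balances[winner] += FEE  # the winner collects the transaction fees
    return winner

wins = [auction_round(balances) for _ in range(10)]
# The richest miner wins every round and only grows richer.
assert all(w == "alice" for w in wins)
```

Every round widens the gap between the richest bidder and the rest, which is the centralization tendency the text describes.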
To address the aforementioned challenges, we introduce a variant of PoS, called {{\em e-PoS}}\xspace, that resists network centralization and promotes fair mining. We revisit the notions of decentralization and fairness to set baseline conditions to be met. While decentralization guarantees that the auction process in {{\em e-PoS}}\xspace is made available to a wider set of stakeholders, fairness ensures that, during an attack, malicious entities are penalized and the affected parties are compensated. \BfPara{Contributions and Roadmap} Towards the broader goal of decentralization and fairness in PoS-based blockchain systems, we make the following key contributions: \begin{enumerate}[label=\protect\circled{\arabic*}, font=\small\bfseries ] \item We propose a consensus protocol, called the extended PoS ({{\em e-PoS}}\xspace), which addresses the limitations of PoW and PoS and enables decentralized, energy-efficient, and fair mining in blockchain systems. \item We present the design constructs of {{\em e-PoS}}\xspace and its translation into the blockchain framework using theoretical primitives and engineering requirements. \item We perform feasibility and security analyses of {{\em e-PoS}}\xspace, and show its merits through simulations. \item Finally, to highlight structural and functional composability, we show how {{\em e-PoS}}\xspace can be applied to existing cryptocurrencies, such as Bitcoin and Ethereum. \end{enumerate} Additionally, the paper includes background and motivation in section~\ref{sec:bg}, design primitives in section~\ref{sec:pof}, simulations in section~\ref{sec:sm}, and concluding remarks in section~\ref{sec:con}. 
\begin{figure}[t] \hfill \begin{subfigure}[Bitcoin\label{fig:crypto}]{\includegraphics[width=0.23\textwidth]{fig/epa.pdf}} \hfill \end{subfigure} \begin{subfigure}[Ethereum\label{fig:cryptotwo}]{\includegraphics[width=0.23\textwidth]{fig/epb.pdf}} \end{subfigure} \caption{\cref{fig:crypto} shows Bitcoin hash rate (EH/s) and difficulty, while \cref{fig:cryptotwo} shows Ethereum hash rate (PH/s) and difficulty. } \label{fig:nlabel}\vspace{-3mm} \end{figure} \section{Background and Motivation} \label{sec:bg} \BfPara{Proof-of-Work Limitations} \label{sec:pow} PoW involves solving a mathematical challenge set by the network in order to produce a valid block. Nowadays, mining is carried out as a joint effort by mining pools. As mining pools expand, the aggregate hashing power of the system increases, as shown in \cref{fig:crypto} and \cref{fig:cryptotwo}, and the competition for mining blocks increases as well. The excessive use of electricity in cryptocurrencies has raised major concerns. In 2019, Bitcoin consumed 69.97~TWh of electricity with a 32.95~Megaton carbon footprint~\cite{Digiconomist18}. The carbon footprint is the amount of carbon dioxide released due to electricity generation. Similarly, in 2019, Ethereum consumed 10.1~TWh of electricity with a 4.78~Megaton carbon footprint~\cite{digiconomist_20}. In~\cref{fig:ec} and \cref{fig:ectwo}, we present the energy profile of Bitcoin and Ethereum, respectively. Note that their energy profile is comparable to that of several countries, reflecting the threat they pose to the environment. \BfPara{Proof-of-Stake Limitations} \label{sec:pos} A popular alternative to PoW is Proof-of-Stake (PoS), which overcomes the shortcomings of PoW while achieving similar objectives \cite{BadertscherGKRZ18}. In PoS, a candidate miner is selected to mine the next expected block, and the candidate miner has to put his balance up as a stake. 
Peercoin, launched in 2012, is the first {cryptocurrency}\xspace that used PoS for block mining \cite{king2012ppcoin}. Later, more cryptocurrencies were launched, including Nxt and BlackCoin, which used variants of PoS. The two popular techniques for miner selection are called ``randomized block selection'' and ``coin age-based selection.'' In randomized block selection, a combination of the lowest hash value and the size of a candidate's stake is used to select the miner for the next block. In coin age-based selection, an auction process selects the candidate miners based on the coin age. \begin{figure}[t] \hfill \begin{subfigure}[Bitcoin\label{fig:ec}]{\includegraphics[width=0.23\textwidth]{fig/fig3.pdf}} \hfill \end{subfigure} \begin{subfigure}[Ethereum\label{fig:ectwo}]{\includegraphics[width=0.23\textwidth]{fig/fig4.pdf}} \end{subfigure} \caption{Energy profile of Bitcoin and Ethereum. Note that Bitcoin's and Ethereum's electricity consumption is comparable to that of several countries.} \label{fig:energy}\vspace{-3mm} \end{figure} Although PoS is energy-efficient, it introduces centralization~\cite{FantiKORVW18,Poelstra14,LiWYLZHLXD19}. In randomized block selection, the deserving candidates with high stakes are often ignored, and a random candidate is selected. In the open stake auction and age-based selection, a rich candidate is able to win every auction and get richer~\cite{buterin_2018, FantiKORVW18}. This creates a skew in the network, challenging the guarantees of network decentralization, as generally expected in a blockchain application. Furthermore, public exposure of stakes reveals the balance of candidate miners, thereby compromising their anonymity and privacy. Moreover, in PoS, it is easy to fork the blockchain since anyone can produce a block without much \textsl{effort} and fork the main chain. 
Such an attacker can exploit network latency and churn to identify nodes that lag behind the main chain, and partition them with his version of the blockchain~\cite{SaadCLTM19,SaadSNKSNM20}. \BfPara{Motivation} \label{sec:moti} Considering the aforementioned limitations, we anticipate a revision to the existing consensus schemes to improve decentralization and fairness. Naturally, we expect a variant of PoS or an energy-efficient hybrid model that does not rely on compute-intensive mining. For decentralization, the protocol should offer mining opportunities to a wider set of stakeholders. For fairness, the protocol must prevent fraudulent activities and fairly compensate affected parties if attacked. While ensuring fairness, the protocol must also have a reward mechanism that incentivizes honest mining. Additionally, it would be useful if the protocol complements the existing PoW-based cryptocurrencies to facilitate their transition to the proposed scheme and avoid bootstrapping complexities. With this motivation, we sketch the design of an algorithm that addresses these challenges and provides a blueprint for future applications. \section{Extended Proof-of-Stake} \label{sec:pof} Our protocol, introduced in this section, is called the ``Extended Proof-of-Stake'' (or {{\em e-PoS}}\xspace), which builds on the PoS protocol and offers fairness, security, and decentralization. \subsection{Design Overview} \label{sec:do} The abstract implementation of {{\em e-PoS}}\xspace includes a set of miners executing an immutable smart contract that implements the rules of a PoS auction. The smart contract in {{\em e-PoS}}\xspace makes modular adjustments to the conventional PoS design to acquire the desirable properties, including decentralization and fairness. Its correct execution, on the other hand, requires consensus among multiple parties (miners in a {cryptocurrency}\xspace). For that, we use the practical Byzantine fault tolerance (PBFT) protocol \cite{Castro00}. 
It can be argued that PBFT, in general, could provide an energy-efficient consensus scheme for distributed systems, although it suffers from a high message complexity, which restricts its scalability for a {cryptocurrency}\xspace network comprising thousands of nodes \cite{ChondrosKR11}. As we later show in \tsref{sec:app}, {{\em e-PoS}}\xspace fragments the {cryptocurrency}\xspace network in a way that the selected number of miners can easily execute PBFT. However, for now, we focus on the rules of the smart contract in {{\em e-PoS}}\xspace. In {{\em e-PoS}}\xspace, block mining is fragmented into a series of epochs $\mathcal{E}_{1},\ldots, \mathcal{E}_{j}$. In each epoch, the smart contract produces a sequence of blocks $\mathcal{B}_{1}, \ldots, \mathcal{B}_{l}$. The smart contract computes a baseline stake $\mathcal{ST}_{1}, \ldots, \mathcal{ST}_{l}$ for each block and announces them to the network. Candidate miners $\mathcal{C}(m_{c},b_{c})$ willing to participate in the block auction compare the baseline stake with their balance, and place a bid on their block of choice. The smart contract selects the final list of miners $\mathcal{K}(m_{k}, b_{k})$ based on their bids. For each epoch $\mathcal{E}_{j}$, the miners of the previous epoch $\mathcal{E}_{j-1}$ execute the smart contract using PBFT and transfer control to the miners of the next epoch. Assuming at most $f$ Byzantine miners among the $3f+1$ miners in $\mathcal{K}(m_{k}, b_{k})$, {{\em e-PoS}}\xspace continues to function correctly~\cite{Castro00}. \BfPara{Decentralization in {{\em e-PoS}}\xspace} \label{sec:decnetralization} In PoS, decentralization can be attributed to the extension of mining opportunities to a wider set of stakeholders~\cite{KwonLKSK19}. Roughly speaking, a PoS-based blockchain application is centralized if only the rich miners have the ability to mine a majority of the blocks. 
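The fault-tolerance arithmetic assumed here follows the standard PBFT bound (at most $f$ faulty replicas among $n = 3f+1$); a minimal sketch, with helper names of our own choosing:

```python
def max_faulty(n: int) -> int:
    """Largest f such that n >= 3f + 1 (the standard PBFT bound)."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Matching votes needed for a PBFT quorum: 2f + 1."""
    return 2 * max_faulty(n) + 1

# A miner set of 10 tolerates 3 Byzantine miners with a quorum of 7.
assert max_faulty(10) == 3
assert quorum(10) == 7
```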
For a given epoch of length $l$, let $n$ be the total number of participants in the network, $c<n$ be the number of candidate miners, and $k<c$ be the final list of miners in {{\em e-PoS}}\xspace. For simplicity, and without loss of generality, we assume that two independent and concurrent execution fragments of PoS have been implemented, from which we obtain $k$ miners in {{\em e-PoS}}\xspace and $k'$ in the conventional PoS. Decentralization (or the lack thereof) is expressed as a parameter $\beta$, where $0<\beta\leq 1$. More precisely, $\beta_{e} = \frac{k}{n}$ is used for expressing the decentralization level in {{\em e-PoS}}\xspace, while $\beta_{c} = \frac{k'}{n}$ is used for expressing the decentralization in the conventional PoS. We note that a scheme is more decentralized if $\beta$ is closer to 1, and centralized otherwise. We use this notion of decentralization to compare different schemes. Let the difference between the two schemes above be $\gamma = \beta_{e} - \beta_{c}$, which represents the gap in decentralization between the two schemes. Such a gap is used to compare {{\em e-PoS}}\xspace with the conventional PoS as follows: \begin{align} \label{eq:decen} \gamma = \beta_{e} - \beta_{c}: \quad \begin{cases} {\gamma<0} & \text{{{\em e-PoS}}\xspace is less decentralized}\\ {\gamma=0} &\text{equally decentralized} \\ {\gamma>0} &\text{{{\em e-PoS}}\xspace is more decentralized} \end{cases} \end{align} The closer $\gamma$ is to 1, the more decentralization {{\em e-PoS}}\xspace brings over the conventional PoS (and vice versa). To affirm that over a long run of the system, we say that one scheme is better than another (with respect to the decentralization notion) if the overwhelming majority of the observations of $\gamma$ over a large number of epochs is positive. 
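The comparison above reduces to simple arithmetic; the following sketch evaluates $\gamma$ for one hypothetical epoch (the counts are illustrative):

```python
def decentralization(k: int, n: int) -> float:
    """beta = k / n: the fraction of the network that mined in the epoch."""
    return k / n

# Hypothetical epoch: n = 1000 peers, k = 50 e-PoS miners, k' = 10 PoS miners.
beta_e = decentralization(50, 1000)
beta_c = decentralization(10, 1000)
gamma = beta_e - beta_c
assert gamma > 0  # e-PoS is more decentralized in this epoch
```

Repeating this per epoch and checking the sign of $\gamma$ over many epochs gives the long-run comparison described in the text.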
\BfPara{Fairness in {{\em e-PoS}}\xspace} \label{sec:fairness} In blockchain applications, fairness is defined as a model that forces an adversary to pay back a penalty upon deviating from the protocol by, for example, prematurely aborting the agreed-upon protocol or carrying out fraudulent activities. This notion of fairness is defined in the prior work to explore fair mining \cite{KiayiasZZ16,PassS17}. In general, an adversary may carry out fraudulent activities if that adversary has a significant advantage over other peers. In PoW, the advantage is the mining power. If the adversary holds more than 50\% of the hash rate, he can double-spend by mining a longer private chain and forcing the network to switch to the longer chain~\cite{nakamoto2008bitcoin}. In PoS, the same applies if the adversary holds more than 50\% of all coins. In the following, we characterize this advantage of the adversary. Our objective is to show that the adversary's success probability in {{\em e-PoS}}\xspace (even with 50\% of the coins) is lower than in the conventional PoS, thereby leading to a fair system with {{\em e-PoS}}\xspace. To formally analyze this aspect, let $B_N$ denote the total coins in the system, where $B_{N} = \sum_{i=1}^{n} b_{i}$. Let $\alpha B_{N}$ be the fraction of coins owned by an adversary $\mathcal{A}$, and $\beta B_{N}$ be the coins owned by the honest miners (with $\alpha + \beta = 1$). 
For $m$ consecutive blocks, the probability that the adversary successfully produces a longer private chain becomes: \begin{align} \label{eq:adv2} \text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~double-spends in PoS}) &= \begin{cases} \bigg(\frac{\alpha}{\beta} \bigg)^{m}&, {\alpha<0.5} \\ 1 &, {\alpha \geq 0.5} \end{cases} \end{align} \begin{align} \label{eq:nadv2} \text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~mines the next block}) &= \begin{cases} \bigg(\frac{\alpha}{\beta}\bigg)&, {\alpha<0.5} \\ 1 &, {\alpha \geq 0.5} \end{cases} \end{align} \autoref{eq:adv2} shows that when $\alpha<0.5$, the adversary's success probability decreases exponentially in $m$. However, when $\alpha\geq 0.5$ ($\mathcal{A}$ acquires more than 50\% of the coins), the adversary eventually produces a longer private chain and double-spends. Moreover,~\autoref{eq:nadv2} shows that when $\alpha\geq 0.5$, the adversary mines the next block with probability 1. This is due to the fact that no other honest miner has a higher stake in the network. For any block, the adversary can place a higher bid than any other miner and win the auction. In summary, by acquiring more than 50\% of the coins, the adversary can (1) double-spend with probability 1, and (2) win each successive auction and mine subsequent blocks. This prevents honest miners from producing blocks and allows the adversary to violate the blockchain safety properties~\cite{GarayKL17}. In other words, the adversary breaks the notion of fairness that we aim for in a PoS-based blockchain system. In {{\em e-PoS}}\xspace, we propose that even with 50\% of the coins, the adversary's success probability of (1) double-spending is negligible (a small value $\epsilon$), and (2) mining the next block is less than 1. 
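A quick numeric check of \autoref{eq:adv2}, assuming $\beta = 1 - \alpha$ as above:

```python
def pr_double_spend(alpha: float, m: int) -> float:
    """Pr[A privately mines m consecutive blocks], per Eq. (2);
    beta = 1 - alpha is the honest coin fraction."""
    if alpha >= 0.5:
        return 1.0
    beta = 1.0 - alpha
    return (alpha / beta) ** m

# With 30% of the coins, a 6-block private chain succeeds rarely:
# (0.3/0.7)^6 is below 1%.
assert pr_double_spend(0.3, 6) < 0.01
# At 50% of the coins, success is certain.
assert pr_double_spend(0.5, 6) == 1.0
```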
\begin{align} \label{eq:adv3}\text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~double-spends in {{\em e-PoS}}\xspace} ~|~\alpha\geq 0.5) =\epsilon\\ \label{eq:nadv3}\text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~mines the next block in {{\em e-PoS}}\xspace} ~|~\alpha\geq 0.5) < 1 \end{align} \subsection{Epoch Length} \label{sec:eps} The first challenge in {{\em e-PoS}}\xspace is to determine the length of an epoch, $\mathcal{E}_{j}$, i.e., the expected number of blocks $l$ to be computed in it. For that, we use the memory pool (mempool) of cryptocurrencies. The mempool is a repository of unconfirmed transactions used by miners to select transactions for blocks. The mempool size is usually greater than the average size of a block \cite{SaadTM18,SaadNKKNM19}. Miners prioritize transactions based on their fees, and select the high-valued transactions from the mempool to mine in blocks. The mining process can be viewed from two perspectives: 1) the miner's aim is to select highly paying transactions, and 2) in ideal conditions, the network requires an empty mempool to prevent a transaction backlog. With a fixed block size $\mathcal{B}_{s}$ and a mempool size $\mathcal{M}_{s}$, we can compute the number of blocks required to empty the mempool (i.e., to make $\mathcal{M}_{s} \approx 0$). It is desirable to completely fill the blocks with transactions such that the total size of each block is equal to the standard block size of the cryptocurrency ($ |\mathcal{B}_{i}| \approx \mathcal{B}_{s}$). A complete block also means that the miner is able to obtain the maximum reward from the mining fee paid by each transaction. In light of these design restrictions, the number of blocks in each epoch can be computed using \autoref{eq2}. \begin{align} \label{eq2} l= \begin{cases} {0} & \text{, $\mathcal{B}_{s} > \mathcal{M}_{s}$}\\ {\frac{\mathcal{M}_{s} }{\mathcal{B}_{s}}} &\text{, $\mathcal{B}_{s} \leq \mathcal{M}_{s}$} \end{cases}. 
\end{align} The number of blocks is computed by dividing the mempool size by the standard block size in \autoref{eq2}. \BfPara{Epoch Duration} \label{sec:ed} The epoch duration $\mathcal{D}_{\mathcal{E}}$ is the product of the number of blocks produced in the epoch and the standard block time. Furthermore, the difference in the publishing time between two subsequent blocks should be equal to the standard block time to avoid undesirable delays in the block generation or the ledger update \cite{GarayKL17}. \subsection{Baseline Stake} \label{sec:bs} In the next phase, the smart contract computes a baseline stake $\mathcal{ST}_{i}$ for each block $\mathcal{B}_{i}$ in the epoch. In light of the decentralization and fairness objectives outlined in \textsection\ref{sec:fairness}, the baseline stakes must satisfy the following principal objectives: 1) The baseline stakes must be small enough to allow mining opportunities to a large subset of miners ($c \approx n$). 2) The baseline stakes must also be large enough to prevent fraudulent activities and compensate victim parties. 3) The outcome of each mined block must proportionally benefit the candidate miners according to their commitments. Theoretically, the subset of candidate miners can be maximized by setting the baseline stakes to a negligible value ($\mathcal{ST}_{i} \simeq 0, \forall i$), such that the entire network becomes eligible for mining. However, this may challenge the two subsequent principal objectives that prevent fraud and promote incentivized reward distribution. The challenge is to construct a scheme that meets all three principal objectives. In \autoref{algo:sorttx}, we present an efficient design for computing baseline stakes that meets these requirements. First, we sort all transactions in the mempool in descending order based on their transaction fees. Next, we fill the first block with the transactions from the mempool while monitoring the size of the block. 
Once the block size reaches the standard block size ($ \mathcal{SB}_{i} = \mathcal{B}_{s}$), the baseline stake is computed as the sum of the fees of all the transactions in the block. The procedure is repeated for the subsequent $l-1$ blocks. All of the blocks computed in the cycle are equal to the standard block in size, and their stakes are the sum of the transaction fees in the block. The remaining transactions are then put into the last block ($\mathcal{B}_{i} = \mathcal{B}_{l}$). Once the algorithm finishes the execution, the smart contract will have a set of blocks and their corresponding baseline stakes ($\mathcal{ST}_{i} \in \mathcal{B}_{i}$). Since the baseline stake for a block is the sum of the transaction fees, a miner's stake can be used to reimburse victims if he cheats. \setlength{\textfloatsep}{1pt} \begin{algorithm}[t] \SetAlgoLined\SetArgSty{} \SetKwInOut{Input}{Input} \Input{$l, \mathcal{E}_{j},t_{sort} \leftarrow \text{Sorted } \mathcal{T}(id,f,s)$ \\ } \textbf{Initialize:} $size = 0, fee = 0$\\ \For{$i = 1$ ... 
$l- 1$ } { \ForEach{$ \mathcal{T}(id,f,s) \in t_{sort}$ } { $\mathcal{B}_{i} \leftarrow \mathcal{B}_{i} \cup \mathcal{T}(id,f,s)$ \\ $size = size + (s \in \mathcal{T}(id,f,s))$ \\ $fee = fee + (f \in \mathcal{T}(id,f,s))$ \\ \textbf{remove} $\mathcal{T}(id,f,s)$ \textbf{from} $t_{sort}$ \\ \eIf{($size \leq \mathcal{B}_{s}$)}{ \textbf{continue} } { $\mathcal{ST}_{i} = fee$ \\ $fee = 0$; $size = 0$ \\ } } } \For{$i = l $ } { \ForEach{$ \mathcal{T}(id,f,s) \in t_{sort}$ } { $\mathcal{B}_{i} \leftarrow \mathcal{T}(id,f,s)$ \\ $fee = fee + (f \in \mathcal{T}(id,f,s))$ \\ \textbf{remove} $\mathcal{T}(id,f,s)$ \textbf{from} $t_{sort}$ \If{($t_{sort} = \emptyset$)}{ $\mathcal{ST}_{i} = fee$ \\ } } } \SetKwInput{KwData}{return} \KwData{ $\mathcal{B}_{i}, \mathcal{ST}_{i} $} \caption{Computing Baseline Stake} \label{algo:sorttx} \end{algorithm} \subsection{Block Auction} \label{sec:ba} Once the baseline stakes and epoch length are determined, the smart contract announces them to the network. The nodes query the baseline stakes corresponding to each block and compare them with their balance. A node is qualified for mining only if its balance exceeds the baseline stake. To extend mining opportunities to more peers, the smart contract restricts each peer to a single bid. It is possible that a node qualifies for multiple blocks as a candidate miner; however, a node can only place a bid on one block. Since the blocks are sorted in descending order based on their transaction fees, a greedy candidate can select a block based on the reward incentive. The process of block auction is carried out in multiple phases. In the first phase, the qualified peers notify the smart contract and place their bids on their chosen blocks. As a result, the smart contract obtains a list of candidate miners from the network ($c \subset n$) with a balance exceeding the baseline stake. 
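Returning to \autoref{algo:sorttx}, the baseline-stake computation can be sketched as follows; the sketch closes a block just before it would exceed the standard size (a minor variation on the algorithm's add-then-check loop), and the transaction values are hypothetical:

```python
def baseline_stakes(txs, block_size):
    """txs: list of (tx_id, fee, size) tuples; returns [(block, stake), ...]."""
    txs = sorted(txs, key=lambda t: t[1], reverse=True)  # sort by fee, descending
    blocks, current, size, fee = [], [], 0, 0
    for tx_id, tx_fee, tx_size in txs:
        if size + tx_size > block_size and current:
            blocks.append((current, fee))  # close the full block; stake = total fee
            current, size, fee = [], 0, 0
        current.append(tx_id)
        size += tx_size
        fee += tx_fee
    if current:
        blocks.append((current, fee))  # the last block takes the remainder
    return blocks

# Hypothetical mempool: three unit-size transactions, block size of 2 units.
txs = [("a", 3, 1), ("b", 5, 1), ("c", 1, 1)]
blocks = baseline_stakes(txs, block_size=2)
assert blocks == [(["b", "a"], 8), (["c"], 1)]
```

Each block's baseline stake equals the fees it carries, so a misbehaving miner's locked stake always covers the block's value.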
In the next phase, the smart contract selects the final list of miners ($k \subset c$) and asks them to make a final commitment for their selected block. After all the commitments are locked, the smart contract assigns blocks to the corresponding miners; the miners verify each transaction and receive the rewards. Below, we outline the details of each auction phase. \begin{algorithm}[t] \SetAlgoLined\SetArgSty{} \SetKwInOut{Input}{Input} \Input{$\mathcal{ST}_{i}, \mathcal{B}_{i}$} \textbf{Initialize:} $pBlock$ \tcp*[l]{block list} \For{$m \in N(m_{n},b_{n}) $}{ \ForEach{ $i = 1 ... l$}{ \If{$b_{n} > \mathcal{ST}_{i}$}{ $pBlock \leftarrow \mathcal{B}_{i} $ \tcp*[l]{potential block} }} \SetKwInput{KwData}{Output} \KwData{$pBlock$ } \textbf{Select:} $\mathcal{B}_{i} \in pBlock$ \tcp*[l]{select block} } \For{$\mathcal{ST}_{i} \in \mathcal{B}_{i}$}{ $rBalance = b_{n} - \mathcal{ST}_{i}$ \tcp*[l]{balance left} $\mathcal{S}_{c} = \frac{(x) \times rBalance}{100}$ \tcp*[l]{stake committed} } $\mathcal{P}_{c} \leftarrow \mathcal{S}_{c}$; \\ \SetKwInput{KwData}{return} \KwData{$C(m_{c},b_{c}, \mathcal{P}_{c} ) $} \caption{Selecting Candidate Miners} \label{algo:sk} \end{algorithm} \subsubsection{Selecting Candidate Miners} \label{sec:cm} In the first phase, the peers compare their balance with the baseline stakes of all blocks published by the smart contract. The peers then select a block of their choice for which their balance exceeds the baseline stake, and place a stake bid. In \autoref{algo:sk}, we show how an individual peer surveys the network conditions and places his bid on a certain block. The bid commitment consists of a percentage of the balance remaining after the baseline stake is subtracted from the current balance (lines 8--9). We take this approach for two reasons. First, the balance committed beyond the baseline stake can later be used to additionally reimburse the victim in case the miner misbehaves. 
This also increases the fraud penalty for malicious miners and enforces fair mining practices. Second, the percentage commitment is useful in determining the risk factor associated with the commitment of the miner. If a candidate miner $x$ has a higher balance $b_{x}$ than a competing miner $y$ with balance $b_{y}$, and $x$ makes a 15\% commitment of ($b_{x} - \mathcal{ST}_{i}$), while $y$ makes a 20\% commitment of ($b_{y}-\mathcal{ST}_{i}$), then the preference is given to $y$ for showing a higher commitment. This method also helps in democratizing the network since the percentage stakes are always in the range [0--100], so all miners have equal opportunities to participate in the block auction. This also challenges the hegemony of rich stakeholders and decentralizes the network by offering mining opportunities to more peers. \subsubsection{Finalizing Miners} \label{sec:fm} Once the smart contract receives the percentage stakes committed by the candidate miners, it finalizes the subset of miners ($ k \subset c$) designated to process each block. For each block, it sorts the percentage bids committed by all candidate miners, and assigns the blocks based on the winning bids. The winning bid for each block is the highest percentage stake $\mathcal{P}_{c}$ committed by a candidate miner. However, there are cases that may cause conflicts in the final selection process. In the following, we present those cases and remedies to address them. \BfPara{Case 1: Equal bids} It is possible to have two or more equally high percentage stakes committed by the candidate miners. For example, a candidate miner $x$ commits 30\% of his balance for a block $\mathcal{B}_{i}$, while another candidate miner $y$ also commits 30\% of his balance for the same block. Since only one miner is selected per block, the smart contract has to pick one of the two candidates. 
To resolve this conflict, the smart contract keeps a record $M(m_{j},h_{j})$ of all the previous blocks computed and their respective miners. It then queries that record and compares how many blocks $x$ and $y$ have computed prior to the current epoch, and the one with fewer prior blocks is selected (\autoref{algo:sv}). \begin{algorithm}[t] \SetAlgoLined\SetArgSty{} \SetKwInput{KwData}{Input} \KwData{$C(m_{c},b_{c}, \mathcal{P}_{c} ), M(m_{j},h_{j}) $ } \textbf{Initialize:} $cList$\\ \ForEach{$\mathcal{B}_{i}$}{ \eIf{$\mathcal{P}$ \text{for first item in Sorted} $C(m_{c},b_{c}, \mathcal{P}_{c})$ \text{is unique}}{ \textbf{Assign:} \text{Block to the miner at first index} \\ } { \text{Put all} $C(m_{c},b_{c}, \mathcal{P}_{c})$ \text{with same} $\mathcal{P}_{c}$ \text{in list} $cList$ \\ } \textbf{Query:} $(M(m_{j},h_{j}) )$\\ \text{sort} $cList$ {\text{based on the value of} $m_{c}$ \text{in} $(M(m_{j},h_{j}) )$}\\ \textbf{Select} $m_{c}$ \text{with minimum value of} $h_{j}$ \\ \textbf{Cold Start} \text{if} $h_{j}$ \text{= 0, select one with highest bid} \\ \textbf{Assign:} \text{Block to the Selected Miner} \\ } \caption{Final Selection of Miners} \label{algo:sv} \end{algorithm} \BfPara{Case 2: Cold start} Assuming a case where both $x$ and $y$ have not computed any block before, the contract checks the balance of both candidate miners and selects the one with a greater balance. This fulfills the third objective defined in \textsection\ref{sec:bs}, whereby miners with a higher balance benefit from their overall stake. In \autoref{algo:sv}, we describe how the smart contract selects the final list of miners, resolves conflicts, and assigns blocks to the selected miners. In \cref{fig:abs}, we provide an abstraction of the network's state after the smart contract finalizes the list of miners for each block. \subsection{Block Mining} \label{sec:bm} Once the miners are finalized, the smart contract initiates a request for the final stake commitment. 
The final commitment ensures that a candidate miner has not spent his balance after making the initial commitment. In response, the selected miners confirm their final bids and deposit their balance to the smart contract. The smart contract locks their balance and assigns the respective blocks to each miner. The miners consult the blockchain and the ``UTXO'' set to validate the authenticity of each transaction. If a transaction is fraudulent, the miner notifies the smart contract and removes the transaction from the block. When all transactions are verified, the smart contract employs a signature scheme to prevent blockchain forks and assert ownership of the blocks. The signature scheme used by the smart contract is a tuple $(\mathsf{KeyGen}, \mathsf{Sign}, \mathsf{Verify})$, where each algorithm is defined as follows. \begin{enumerate*}[label=\textbf{\arabic*)},start=1] \item $\mathsf{KeyGen}(1^u)$: given a security parameter $u \in \mathbb{U}$, the key generation algorithm generates a pair of private and public keys, namely $sk$ and $pk$. \item $\mathsf{Sign}(sk, m)$: a deterministic algorithm that takes as an input a message $m\in\mathcal{M}$ and the private key $sk$, and generates a signature $\sigma$ corresponding to $m$. The message $m$ here is the block produced by the smart contract. \item $\mathsf{Verify}(pk, m, \sigma)$: a verification algorithm that takes the public key $pk$, the message $m$, and the signature $\sigma$ as inputs, and outputs 0 if $\sigma$ is an invalid signature of $m$ and $1$ otherwise. This algorithm is used by network peers to verify the block authenticity. \end{enumerate*} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/epos-abs.pdf} \caption{An overview of the {{\em e-PoS}}\xspace-based blockchain system consisting of network peers, candidate miners, and finalized miners. The shades of blocks represent their stakes and rewards in descending order. 
} \label{fig:abs} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig/newpbft.pdf} \caption{PBFT protocol overview, where a client issues a request to the primary replica. The primary broadcasts the transaction to all the other replicas. The replicas validate the transaction and share their view with each other. The process of transaction verification follows four stages, namely Pre-Prepare, Prepare, Commit, and Reply. The transaction, in this case, will be the mempool view issued by the primary itself.} \label{fig:PBFT} \end{center} \end{figure} \begin{figure*} \begin{adjustbox}{minipage=\linewidth,scale = 0.9 } \[ \begin{array}{l c l} \text{\textsf{Smart Contract by Miners $K$}} & & \text{\textsf{Rest of the Network}} \\ \textbf{Initialize: } C(m_{c},b_{c}) := 0, K(m_{k},b_{k}) := 0 & & \textbf{Network Size: } \mathcal{N}(m_{n},b_{n})\\ \textbf{Execute PBFT and Compute: } l, \mathcal{B}_{i}, \mathcal{SB}_{i} & & \textbf{Await: } \textit{Baseline stakes for next epoch} \\ \textit{Fill blocks, Compute baseline stakes, epoch length} & & \\ \textbf{Send: } \textit{Baseline stakes to the network}& \xrightarrow{\hspace{1em} \mathcal{SB}_{i}\hspace{1em}} & \textbf{Receive: } \textit{Baseline stakes from the smart contract}\\ ------------------------ & & ----------------------- \\ \text{\textsf{Smart Contract by Miners $K$}} & & \text{\textsf{Candidate Miners}} \\ & & \textbf{Compare: } \textit{Baseline stakes with balance $b_{n}$} \\ \textbf{Await: } \textit{List of candidate miners } C(m_{c},b_{c}, \mathcal{P}_{c} ) & & \textbf{Select: } \textit{Target block to be mined } \mathcal{B}_{i} \\ \textbf{Receive: } \textit{Bids from network nodes} & \xleftarrow{\hspace{1em} \mathcal{P}_{c}\hspace{1em}} & \textbf{Send: } \textit{Bid on the target block } \mathcal{P}_{c} \\ \textbf{Compute: } \textit{List of candidate miners and bids } C(m_{c},b_{c}, \mathcal{P}_{c} ) & & \\ ------------------------ & & ----------------------- \\
\text{\textsf{Smart Contract}} & & \text{\textsf{Final List}} \\ \textbf{Sort: } \textit{Bids for each block } \mathcal{P}_{c} \in \mathcal{B}_{i} & & \textbf{Await: } \textit{Final list of miners} \\ \textbf{Compute: } \textit{Final list of miners }K(m_{k},b_{k}) & & \\ \textbf{Send: } \textit{Final list } K(m_{k},b_{k}) & \xrightarrow{\hspace{1em} K(m_{k},b_{k})\hspace{1em}} & \textbf{Receive: } \textit{Final list of miners} \\ \textbf{Receive: } \textit{Final commitment} & \xleftarrow{\hspace{1em} \mathcal{P}_{k}\hspace{1em}}& \textbf{Send: } \textit{Final commitment } \mathcal{P}_{k} \\ \textbf{Assign: } \textit{Blocks to each miner} & \xrightarrow{\hspace{1em} K(m_{k},b_{k}), \mathcal{B}_{i}\hspace{1em}}& \textbf{Verify: } \textit{Transactions in the blocks } \mathcal{T}(id,f,s) \\ \textbf{Verify: } \textit{Signatures } \mathsf{Verify}(pk, m, \sigma) & \xleftarrow{\hspace{1em} \mathcal{B}_{i}, \mathsf{Sign}(sk, \mathcal{B}_{i})\hspace{1em}} & \textbf{Generate and Send: } \textit{Signatures } \mathsf{Sign}(sk, \mathcal{B}_{i}) \\ ------------------------ & & ----------------------- \\ \end{array} \] \end{adjustbox} \caption{Information workflow between the smart contract executed by $K$ and the rest of the network for the next epoch. The smart contract computes the baseline stakes and receives bids for each block. It then selects the miner list $K$ for the next round. Rewards are released only when the next epoch ends successfully. } \label{fig:protocol} \end{figure*} Once a miner verifies all transactions, he generates a public/private key pair, signs the message $\mathcal{B}_{i}$ with his private key $\mathsf{Sign}(sk, \mathcal{B}_{i})$, computes the hash of the block $H(\mathcal{B}_{i})$, and sends the block to the smart contract. The smart contract authenticates the block by using the public key $pk$ of the miner and releases the block to the network. Upon receiving the block, nodes in the network also validate the miner's signatures.
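The tuple $(\mathsf{KeyGen}, \mathsf{Sign}, \mathsf{Verify})$ can be instantiated with any standard signature scheme. As a purely illustrative, self-contained sketch (our choice; the protocol does not prescribe a concrete scheme), a hash-based Lamport one-time signature can be written as:

```python
import os
import hashlib

H = lambda b: hashlib.sha256(b).digest()
BITS = 256  # bits in the message digest

def keygen():
    """KeyGen: one random 32-byte preimage per (bit position, bit value);
    the public key consists of their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(BITS)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def sign(sk, m: bytes):
    """Sign: reveal, for each bit of H(m), the preimage matching that bit."""
    d = int.from_bytes(H(m), "big")
    return [sk[i][(d >> (BITS - 1 - i)) & 1] for i in range(BITS)]

def verify(pk, m: bytes, sig) -> bool:
    """Verify: each revealed preimage must hash to the committed public value."""
    d = int.from_bytes(H(m), "big")
    return all(H(sig[i]) == pk[i][(d >> (BITS - 1 - i)) & 1] for i in range(BITS))
```

Note that Lamport keys must not be reused across blocks; a production deployment would instead use a conventional scheme such as ECDSA or Ed25519, as in existing blockchains.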
This process prevents forks in case an adversary publishes a counterfeit block to the network. When an epoch ends after $\mathcal{D}_{ep}$, the mempools at each node are filled with transactions exchanged during the epoch. The probability of $z$ transactions arriving at a mempool during the epoch can be computed using \autoref{eq5}, and the new size of the mempool $\mathcal{M}_{s}$ can be calculated using \autoref{eq6}. \begin{align} \label{eq5} P (z \text{ arrivals in } T) &= \frac{(\lambda T)^{z} e^{-\lambda T} }{z\,!} = \frac{(\lambda \mathcal{D}_{ep})^{z} e^{-\lambda \mathcal{D}_{ep}} }{z\,!} \\ \label{eq6} \mathcal{M}_{s} = \lambda \times T &= \lambda \times \mathcal{B}_{i} \times \mathcal{B}_{t} = \lambda \times \mathcal{D}_{ep} \end{align} In the next step, the miners $k\subset c$ calculate the parameters for the next epoch. For parameter calculation, the miners must agree upon the mempool state so that their baseline stakes, block size, and epoch lengths are consistent. In blockchain systems, mempool states can vary across nodes. In Bitcoin, for example, the mempool state can change due to the varying mempool size limits specified by each node or the network asynchrony. In Bitcoin, nodes can arbitrarily set the mempool size to prevent RAM overloading. Therefore, nodes with different RAM sizes have different mempool sizes, which can cause inconsistency in our model. The mempool size can also vary due to network asynchrony, typically created by (1) the transaction propagation delay, and (2) the network topology. Since Bitcoin nodes are spread across autonomous systems~\cite{SaadCLTM19,SaadAAAYM19}, the transaction exchange among nodes can experience varying delays. As a result, nodes that experience a high transaction propagation delay may not receive a transaction in their mempool prior to executing {{\em e-PoS}}\xspace. Moreover, at any time, there are $\approx$10K Bitcoin nodes, and the default connectivity limit is 125~\cite{SaadCLTM19}.
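The arrival model of \autoref{eq5} and \autoref{eq6} can be evaluated numerically; a minimal sketch (the arrival rate $\lambda$ and epoch duration used in the example are hypothetical inputs):

```python
from math import exp, factorial

def p_arrivals(z: int, lam: float, d_ep: float) -> float:
    """Poisson probability of exactly z transaction arrivals during an epoch
    of duration d_ep with arrival rate lam (Eq. 5)."""
    return (lam * d_ep) ** z * exp(-lam * d_ep) / factorial(z)

def mempool_size(lam: float, d_ep: float) -> float:
    """Expected mempool size at the end of the epoch (Eq. 6): M_s = lam * D_ep."""
    return lam * d_ep
```

For instance, at $\lambda = 3$ transactions per second over a 600-second epoch, the expected backlog is $3 \times 600 = 1800$ transactions.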
Given this limited connectivity, the network does not form a completely connected topology, and transaction propagation does not follow the broadcast model~\cite{GarayKL17}. As a result, the mempool state can vary across nodes at any time. To prevent this inconsistency, $K$ must agree upon a common mempool state to output consistent parameters for the next epoch. To agree on a common mempool state, the miners execute the PBFT protocol. In \cref{fig:PBFT}, we give an overview of the PBFT-based consensus, and for more details we refer the reader to \cite{Castro00}. In a {cryptocurrency}\xspace network, where the number of peers can be in the thousands, applying PBFT can be infeasible. However, in {{\em e-PoS}}\xspace, PBFT is executed only among the miners. Since $k$ in an epoch is much smaller than $n$, PBFT can be applied efficiently. The smart contract randomly specifies a primary replica in $k$, which shares its mempool view with the other replicas and obtains consensus. To ensure that all replicas faithfully execute the protocol, we create a dependency between epochs. Miners' rewards are not released by the contract until the next epoch is successfully initiated. Therefore, in their self-interest, miners will initiate the next epoch and its smart contract. In~\cref{fig:protocol}, we provide a complete workflow of {{\em e-PoS}}\xspace, starting from computing the baseline stake to block mining by the final list of miners $K$. \BfPara{Benefits and Limitation of Using Mempool} In {{\em e-PoS}}\xspace, we use the mempool to derive the epoch length and the baseline stakes. Our mempool-based approach is novel, since it has not been adopted by existing PoS models. In this section, we succinctly summarize the key aspects of our technique. In blockchain systems, the mempool is a repository of unconfirmed transactions from which miners select transactions for blocks.
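The mempool agreement that the $k$ miners reach via PBFT can be caricatured as a quorum check on the reported views; this is a toy sketch of the end result only (real PBFT involves the authenticated Pre-Prepare, Prepare, and Commit phases of \cref{fig:PBFT}):

```python
from collections import Counter

def agree_on_view(views, f):
    """Toy agreement on a common mempool view: with at least 3f+1 replicas,
    accept a view only if 2f+1 or more replicas report it; otherwise no
    agreement is reached in this round."""
    assert len(views) >= 3 * f + 1, "need at least 3f+1 replicas to tolerate f faults"
    view, count = Counter(views).most_common(1)[0]
    return view if count >= 2 * f + 1 else None
```

For example, with $f=1$ and four replicas, three matching views suffice (`agree_on_view(["v1", "v1", "v1", "v2"], 1)` yields `"v1"`), whereas a two-two split yields no agreement.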
Ideally, if the block size is equal to the mempool size, there is no transaction backlog, and all transactions are processed within the expected time. However, in practice, there is usually a transaction backlog, which can harm the blockchain system, as shown in~\cite{SaadTM18}. Acknowledging the role of the mempool in transaction processing, we use it to determine the epoch length; to the best of our knowledge, existing PoS-based schemes do not use the mempool for this purpose. Moreover, this approach gives us two additional benefits. The first benefit is that, based on the epoch length and duration, {{\em e-PoS}}\xspace can guarantee a new block at the average block time. In existing blockchain systems ({\em e.g.,}\xspace Bitcoin), while the expected time for a new block is $\approx$10 minutes, the inter-arrival time between two blocks cannot be guaranteed to follow the expected time. As a result, transaction confirmation can be delayed. In {{\em e-PoS}}\xspace, we address this problem by fixing the epoch duration. The second benefit is that the mempool allows us to determine the baseline stake for each block. In existing blockchain systems, miners tend to prioritize transactions based on the transaction fee. Similarly, in {{\em e-PoS}}\xspace, we sort transactions into different blocks based on the transaction fee. Each block has a different transaction fee, and a candidate miner with a higher balance can bid on a block with a higher fee. This bidding process ensures that miners with a higher balance are rewarded proportionally to the stake that they commit. If any miner misbehaves during the protocol, his stake is used to compensate the affected parties. While the mempool-based approach has several benefits, it has a limitation as well. As mentioned earlier, the mempool state can vary across miners, due to which they may compute different baseline stakes and epoch lengths.
To prevent that, we use PBFT to ensure that miners agree upon consistent values of the baseline stakes and epoch lengths. \section{Feasibility Analysis of {{\em e-PoS}}\xspace} \label{sec:analysis} \subsection{Meeting Requirements} \BfPara{Decentralization} To achieve decentralization and overcome the skews associated with conventional PoS, {{\em e-PoS}}\xspace applies two conditions: percentage stake commitment and a one-block-per-miner approach during an epoch. The percentage stake provides equal opportunities to miners to place a bid on the block of their choice, thereby increasing decentralization. One block per epoch ensures that an almost unique set of miners benefits from the fee rewards during the epoch. This also contributes to decentralization and encourages fair mining. Another feature of {{\em e-PoS}}\xspace is conflict resolution when more than one miner meets the criteria. For instance, if all miners put 99\% of their balance as stake beyond the baseline stake, the smart contract consults its mining repository to select the miner who has mined fewer blocks in the past. \BfPara{Relative Reward} Another objective is the relative reward of miners with respect to their stakes. If a miner puts up higher stakes than others, then he must also receive higher mining rewards. We meet that condition by sorting transactions based on their mining fee and calculating the baseline stakes accordingly. In {{\em e-PoS}}\xspace, the first block $\mathcal{B}_{1}$ and its corresponding baseline stake $\mathcal{SB}_{1}$ are greater than the subsequent blocks and their baseline stakes. Furthermore, the last block $\mathcal{B}_{l}$ and its baseline stake $\mathcal{SB}_{l}$ are smaller in size and reward than any other block in the epoch. Therefore, a rich miner can always opt for the blocks higher in the hierarchy of the chain to obtain more rewards, while a poor miner can select blocks from a lower hierarchy of the chain.
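The fee-based ordering behind the relative-reward property can be sketched as follows (illustrative; the transaction representation is our own):

```python
def baseline_stakes(mempool, block_size):
    """Sort transactions by fee (highest first), pack them into fixed-size
    blocks, and set each block's baseline stake to the sum of its transaction
    fees, yielding SB_1 >= SB_2 >= ... >= SB_l."""
    txs = sorted(mempool, key=lambda tx: tx["fee"], reverse=True)
    blocks = [txs[i:i + block_size] for i in range(0, len(txs), block_size)]
    return [sum(tx["fee"] for tx in block) for block in blocks]
```

For six transactions with fees 5, 1, 4, 2, 3, 6 and two transactions per block, the baseline stakes come out as 11, 7, 3, matching the descending hierarchy described above.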
The same stake-based preference is applied during a conflict between two miners who put equal stakes on a target block. If the two miners have no history in the mining repository of the smart contract, the miner with the higher balance is selected. Therefore, {{\em e-PoS}}\xspace also considers each miner based on his risk potential. In contrast to other notable schemes~\cite{KiayiasRDO17,ChenM19,DaianPS19}, where miners are selected at random, in {{\em e-PoS}}\xspace, we reward each miner based on the relative stake that he commits. \subsection{Security Analysis of {{\em e-PoS}}\xspace} \label{sec:sa} In this section, we evaluate {{\em e-PoS}}\xspace under notable PoS-based attacks. Previously, in \autoref{sec:bm}, we showed that the signature scheme prevents blockchain forks. The other attacks discussed in this section are majority attacks, stake theft, and nothing-at-stake. In the following, we analyze each attack in an {{\em e-PoS}}\xspace-based system. For each attack, we assume a polynomial-time adversary $\mathcal{A}$ with knowledge of {{\em e-PoS}}\xspace. For the majority attacks, we assume that $\mathcal{A}$ has accumulated more than 50\% of the stake in the system. Majority attacks are naturally less probable in PoS than in PoW, since accumulating 50\% of the stake in a system is more difficult than acquiring 50\% of the hash rate. Nevertheless, assuming the worst case, we will demonstrate that {{\em e-PoS}}\xspace makes the attack infeasible even for an adversary with 50\% of the stake. For stake theft and nothing-at-stake, we assume that the adversary knows the operations and communication model of {{\em e-PoS}}\xspace and tries to cheat or halt the auction process. \BfPara{Majority Attack} First, we analyze the notion of fairness defined in \tsref{sec:fairness}, in which the adversary with 50\% of the stake (1) launches a majority attack to double-spend, and (2) wins each subsequent block auction.
We show that in {{\em e-PoS}}\xspace, the probability of successfully launching a majority attack is negligible (\autoref{eq:adv2}), and the probability of mining consecutive blocks with 50\% of the stake is less than 1 (\autoref{eq:nadv3}). To show that {{\em e-PoS}}\xspace achieves fairness, assume $n$ peers in the system with a combined balance of $\alpha+\beta$ coins. Further assume that the adversary acquires ($\alpha$=0.51) coins, and $\beta$ is uniformly divided among the remaining $n-1$ peers. In the conventional design, \autoref{eq:nadv2} will be 1, and the adversary is guaranteed to win the next block. In comparison, assume that all peers in {{\em e-PoS}}\xspace are greedy, making equal bids of 99\% of their balance. In that case, the probability that the adversary mines the next block becomes: \begin{align} \label{eq:nadv5}\text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~mines the next block in {{\em e-PoS}}\xspace} ~|~\alpha\geq 0.5) = \frac{1}{n} \end{align} Note that if the adversary splits his stake among $p$ nodes and each node places a bid of 99\%, then the success probability can increase from $1/n$ to $p/n$. Nevertheless, as long as there is one honest party in the system, $p<n$, and the probability remains less than 1. In our system, $p=n$ means that the adversary controls all nodes in the system, which is an infeasible assumption in practice. Another way to express this is that as long as the adversary does not hold 100\% of the coins in the system, his probability of winning the next block auction remains less than 1. In contrast, in the conventional PoS models, with 51\% of the coins, the adversary wins the next auction with probability 1 (\autoref{eq:nadv2}). Next, we analyze the success probability of launching a majority attack ({\em i.e.,}\xspace mining a longer private chain with $m$ consecutive blocks) to replace the public chain. Note that in {{\em e-PoS}}\xspace, the smart contract maintains a record of the miners who mine blocks.
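These auction-winning probabilities can be checked numerically; a sketch using exact rational arithmetic (function names are ours):

```python
from fractions import Fraction

def pr_win_next(p: int, n: int) -> Fraction:
    """Probability that an adversary controlling p of n greedy, equal-bidding
    nodes wins the next block auction in e-PoS."""
    return Fraction(p, n)

def pr_consecutive(p: int, n: int, m: int) -> Fraction:
    """Probability of winning the auctions for consecutive blocks,
    prod_{i=0}^{m} (p-i)/(n-i), given one block per miner per epoch."""
    prob = Fraction(1)
    for i in range(m + 1):
        prob *= Fraction(p - i, n - i)
    return prob
```

For example, `pr_win_next(1, 1000)` is $1/1000$, and `pr_consecutive(p, n, m)` equals 1 only when $p = n$, reflecting that the attack succeeds with certainty only if the adversary controls every node.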
Given this record, we present two cases in which the adversary launches a majority attack. In the first case, we assume that the adversary does not control any other node in the system. In the second case, we assume that the adversary controls $p$ nodes in the system, divides his balance among them, and lets each of them place a maximum bid on each block. For the first case, the probability that the adversary mines two successive blocks with $\alpha\geq0.5$ becomes $(\frac{1}{n})\times (\frac{1}{n-1})$. Therefore, the probability that the adversary mines $m$ subsequent blocks to double-spend becomes: \begin{align} \label{eq:adv7}\text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~double-spends in {{\em e-PoS}}\xspace} ~|~\alpha\geq 0.5) = \prod_{i=0}^{m} \frac{1}{n-i}. \end{align} \autoref{eq:adv7} shows that for a large value of $n$ (a large network size), the probability of double-spending through a majority attack becomes negligible. For the second case, the adversary's probability of mining the first block becomes $p/n$, and the probability of mining the subsequent block becomes $(p-1)/(n-1)$. As such, the probability that the adversary mines $m$ subsequent blocks to double-spend becomes: \begin{align} \label{eq:nadv7}\text{\normalfont Pr} (~\mathcal{A} \text{\normalfont ~double-spends in {{\em e-PoS}}\xspace} ~|~\alpha\geq 0.5) = \prod_{i=0}^{m} \frac{p-i}{n-i}. \end{align} In \autoref{eq:nadv7}, under the realistic assumption that $p\neq n$, the adversary cannot successfully double-spend with probability 1. In other PoS models, with 51\% of the coins, the adversary could double-spend with probability 1 (\autoref{eq:adv2}). Our analysis shows that {{\em e-PoS}}\xspace achieves more fairness than conventional PoS models. \BfPara{Stake Theft} If the adversary is a malicious candidate miner who commits a very high stake $\mathcal{P}_{c}$ to the smart contract and later spends it, he might be able to game the system.
However, to counter that, we introduced the final stake commitment in \textsection\ref{sec:bm}. The smart contract confirms that the final stake commitment is equal to the initial commitment before proceeding with the block assignment. Therefore, it prevents a malicious candidate miner from gaming the system. If the adversary is among the selected miners and invalidates one or more transactions in a block, the smart contract punishes the adversary by awarding his stake to the victim. Since the baseline stake for each block ($\mathcal{ST}_{i} \in \mathcal{B}_{i} $) is the cumulative sum of the transaction fees in the block, the baseline stake acts as insurance. If one or more transactions are compromised, the baseline stake covers them by reimbursing their fees. The penalty is derived from the final stake committed by the selected miner. Since the final stake is the percentage balance remaining after the baseline stake, the difference ($\mathcal{P}_{c} - \mathcal{ST}_{i}$) is used as the penalty. \BfPara{Nothing at Stake} An argument against the $\textsl{non-blocking}$ progress of PoS-based applications is called $\textsl{nothing-at-stake}$ \cite{martinez18}. The argument states that a PoS application would halt if no miner puts anything at stake. In {{\em e-PoS}}\xspace, this condition is less likely to arise for the following reasons. First, the problem of nothing-at-stake occurs due to the centralization of mining. If there are fewer miners in the system, they can collude to halt the progress. In {{\em e-PoS}}\xspace, our main goal is to break that collusion and centralization. As such, and since the mining opportunity is offered to a wider set of stakeholders (see \autoref{tab:data}), {{\em e-PoS}}\xspace will provide equal or better $\textsl{non-blocking}$ progress than other PoS schemes. Additionally, the issuer or recipient of a transaction would naturally want non-blocking progress of the system so that their transactions are confirmed in time.
As such, if there is no miner participating in the auction process, the smart contract can simply lower the threshold for the baseline stake to allow the sender or the receiver of the transaction to mine the block. \BfPara{PBFT Fault Tolerance} As mentioned in \autoref{sec:bm}, the $k$ miners in an epoch execute PBFT to obtain a common mempool view. In PBFT, tolerating $f$ faulty replicas requires a total of $3f+1$ replicas, of which at least $2f+1$ must behave honestly \cite{Castro00}. In other words, roughly two-thirds of the miners are expected to behave honestly for the successful execution of {{\em e-PoS}}\xspace. To motivate honest mining, we create a dependency among epochs such that the smart contract does not release rewards to miners until the next epoch is executed correctly. Moreover, if any faulty replica is detected, its rewards can be withheld as a penalty. \begin{figure*} \hfill \begin{subfigure}[Random selection, epoch length = 200 \label{fig:rs50}]{\includegraphics[width=5.5cm]{fig/r-200.pdf}} \hfill \end{subfigure} \begin{subfigure}[Priority selection, epoch length = 200 \label{fig:rs100}]{\includegraphics[width=5.5cm]{fig/p-200.pdf}} \hfill \end{subfigure} \begin{subfigure}[{{\em e-PoS}}\xspace, epoch length = 200 \label{fig:rs200}]{\includegraphics[width=5.5cm]{fig/e-200.pdf}} \end{subfigure}\vspace{-3mm} \caption{Simulation results. Original is the balance distribution prior to the execution. Accordingly, Random, Priority, and {{\em e-PoS}}\xspace are the balances after executing the three schemes. A wider gap on the x-axis means that the overall stake in the system has skewed after the execution.
} \label{fig:LSTMP}\vspace{-3mm} \end{figure*} \begin{figure*}[!t] \hfill \begin{subfigure}[Random Selection \label{fig:rs}]{\includegraphics[width=0.3\textwidth]{fig/rd.pdf}} \hfill \end{subfigure} \begin{subfigure}[Priority Selection \label{fig:ps}]{\includegraphics[width=0.3\textwidth]{fig/pr.pdf}} \hfill \end{subfigure} \begin{subfigure}[{{\em e-PoS}}\xspace Selection \label{fig:ep}]{\includegraphics[width=0.3\textwidth]{fig/ep.pdf}} \hfill \end{subfigure}\vspace{-3mm} \caption{Comparison of the baseline stake $\mathcal{ST}_{i}$ and the balance in the three schemes. In the random selection, half of the miners have a balance less than $\mathcal{ST}_{i} \in \mathcal{B}_{i}$. This scheme is unfair, since miners can easily cheat. In {{\em e-PoS}}\xspace, the balances are marginally greater than the baseline stake. }\vspace{-5mm} \label{fig:bsc} \end{figure*} \section{Simulations and Results} \label{sec:sm} We now present the simulation results to validate the performance of {{\em e-PoS}}\xspace compared to conventional PoS schemes. We construct three PoS models, namely the random selection, the priority selection, and the {{\em e-PoS}}\xspace selection. In the random selection, a miner is randomly chosen from the network to mine the next block. The miner selects the highest-paying transactions from the mempool and publishes the block. In the priority selection, following the conventional PoS scheme, the miner with the highest balance is selected for the next block. This design is similar to the existing PoS models applied in cryptocurrencies such as Peercoin, where preference is given to rich miners. To capture that, we assume a subset of greedy miners in the priority selection that participate in every block auction and try to win using their high balance. Finally, in the {{\em e-PoS}}\xspace selection, we implement the extended PoS scheme as described in \textsection\ref{sec:pof}.
Miners with a balance greater than the baseline stake are allowed to place a percentage bid in the block auction. \subsection{Network Overview} \label{sec:no} To faithfully model a system close to the existing blockchain applications, we randomly select a network size in the range of 8,000--9,000 nodes. This is close to the average size of the Bitcoin network~\cite{bitnodes_18}. For each node, we assign a balance $b_{n}$ randomly distributed between 0 and 20,000 coins. In Bitcoin, the average number of transactions per block is 2,000. To reflect that, we fix the standard block size $\mathcal{B}_{s}$ to accommodate only 2,000 transactions. Next, we randomly select the mempool size $\mathcal{M}_{s}$ between 1--100 blocks to obtain the epoch length as defined in \autoref{eq2}. For each scheme in the experiment, we run the simulation for $l$ = 200 and evaluate the balance of nodes towards the end of the epoch. \subsection{Evaluation Parameters} \label{sec:ep} We evaluate each design based on its capability of providing decentralization and fairness guarantees. Decentralization is measured by $\beta_{e}$ and $\beta_{c}$ from \autoref{eq:decen}. Additionally, to evaluate decentralization in the case of random selection, we use $\beta_{r}$. As shown in \autoref{eq:decen}, the greater the value of $\gamma$, the more decentralized a scheme is compared to the others. Furthermore, the fairness and security are measured by comparing the balance of the miners with the baseline stake. If the balance before the auction is greater than or equal to the baseline stake ($b_{k} \geq \mathcal{ST}_{i}$ for $b_{k} \in K(m_{k},b_{k})$), the system is considered secure. We evaluate each design under these conditions. \subsection{Results and Evaluation} \label{sec:se} In \cref{fig:LSTMP} and \cref{fig:bsc}, we report the simulation results. In \cref{fig:LSTMP}, we cluster the balance of peers at the start and end of each epoch to capture the distribution of rewards among peers.
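The network setup of \textsection\ref{sec:no} can be reproduced with a short sketch (parameter ranges follow the text above; the random seed and helper names are our own):

```python
import random

def sample_network(seed=0):
    """Sample a network per the simulation setup: 8,000-9,000 nodes, balances
    uniform in [0, 20000] coins, 2,000-transaction blocks, and a mempool of
    1-100 blocks that fixes the epoch length l."""
    rng = random.Random(seed)
    n = rng.randint(8000, 9000)
    balances = [rng.uniform(0, 20000) for _ in range(n)]
    block_txs = 2000                     # standard block size B_s
    epoch_length = rng.randint(1, 100)   # l = M_s / B_s, with M_s in blocks
    mempool_txs = epoch_length * block_txs
    return n, balances, epoch_length, mempool_txs
```

Each simulated run then replays the three selection schemes over $l$ auctions drawn from this sampled state.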
Clustering the balances into histogram bins presents the overall state of the network without showing the balance of each node. The figure shows that in the priority selection, a subset of rich miners is able to win each auction and obtain all the fee rewards. The accumulation of rewards among a few miners creates a skew, thereby impacting the decentralization property of the scheme. As shown on the x-axis, the balance of these miners increases after the epoch, and there is little to no change in the balance of the other miners. The random selection scheme achieves higher decentralization by distributing block rewards among a randomly selected set of peers in the system. The maximum balance of peers, shown on the x-axis, is lower than in the priority selection scheme, showing the reward distribution among a larger set of miners. The x-axis shows that the maximum balance achieved by a cluster of miners is much lower (25,000) than in the priority selection scheme (40,000). Moreover, the y-axis shows the number of peers in each cluster, indicating that the lower the height of the histogram on the y-axis, the more distributed the block reward. \cref{fig:LSTMP} also shows that, compared to the other two schemes, {{\em e-PoS}}\xspace is more decentralized and distributes rewards to the maximum set of candidate miners. It can be observed that the range of the x-axis, denoting the maximum balance acquired by any user, is smaller than in the two other schemes. To evaluate the security, we plot the baseline stake of each block in the first epoch and the respective balance of each miner. As defined in \textsection\ref{sec:ep}, if the balance of the miner $b_{k} \in K(m_{k},b_{k})$ is greater than the baseline stake $\mathcal{ST}_{i}$, the miner will be disincentivized from cheating. Our plot in \cref{fig:bsc} shows that the random selection scheme is unfair and vulnerable to attacks, since half of the miners had a lower balance than the baseline stake.
In comparison, the priority selection and {{\em e-PoS}}\xspace are resilient against attacks, since each miner had a balance higher than the baseline stake. In {{\em e-PoS}}\xspace, the balance of the miner is marginally greater than the required threshold, showing that {{\em e-PoS}}\xspace balances the trade-off between decentralization and security and meets both requirements. In \cref{tab:data}, we report the empirical results obtained from the simulations. In particular, we highlight the decentralization parameter $\beta$ obtained from \autoref{eq:decen}, the number of unique miners, the mean baseline stake, and the mean balance of miners prior to block mining. It can be observed that as the number of blocks per epoch increases, the decentralization parameter of {{\em e-PoS}}\xspace increases more than those of the random selection and priority selection. More specifically, by deriving $\gamma$ from \autoref{eq:decen}, we observe that $\beta_{e} - \beta_{c}$= 0.005 and $\beta_{e} - \beta_{r}$= 0.001 after the third epoch. We asserted that if the value of $\gamma$ is greater than 0, then {{\em e-PoS}}\xspace achieves higher decentralization than the competing schemes; the simulation results support this assertion. \section{Applications of {{\em e-PoS}}\xspace} \label{sec:app} \begin{table}[t] \centering \caption{Simulation results recorded for each scheme. Mean $\mathcal{ST}_{i}$ is the average of the baseline stakes of all the blocks in the epoch. Mean $b_{k}$ is the average balance of all the miners before mining. Unq $k_{i}$ is the number of unique miners selected.
} \vspace{-3mm} \scalebox{0.8}{ \begin{tabular}{|l|c|c|c|c|l|c|} \Xhline{3\arrayrulewidth} Scheme & $l$ & Mean $\mathcal{ST}_{i}$ & Mean $b_{k}$ & Unq $k_{i}$ & $\ \ \ \beta$ & Mean $\gamma$ \\ \Xhline{2\arrayrulewidth} \multirow{ 3}{*}{Random} & 50 & 8626.2 & 10056.9 & 50 & 0.0062 & 0 \\ \cline{2-7} & 100 & 7346.2 & 9778.3 & 96 & 0.0120 & 0.0005 \\ \cline{2-7} & 200 & 4892.7 & 9834.7 & 192 & 0.0240 & 0.0010 \\ \Xhline{2\arrayrulewidth} \multirow{ 3}{*}{Priority} & 50 & 8626.2 & 53680.7 & 10 & 0.0012 & 0.0005 \\ \cline{2-7} & 100 & 7346.2 & 34614.7 & 60 & 0.0075 & 0.005 \\ \cline{2-7} & 200 & 4892.7 & 23327.9 & 160 & 0.0200 & 0.005 \\ \Xhline{2\arrayrulewidth} \multirow{ 3}{*}{{{\em e-PoS}}\xspace} & 50 & 8626.2 & 8783.7 & 50 & 0.0062 & 0 \\ \cline{2-7} & 100 & 7346.2 & 7435.1 & 100 & 0.0125 & 0 \\ \cline{2-7} & 200 & 4892.7 & 4948.2 & 200 & 0.0250 & 0 \\ \Xhline{3\arrayrulewidth} \end{tabular}} \label{tab:data} \end{table} By replacing the actual stake with a percentage stake, a {cryptocurrency}\xspace can mitigate the centralization of miners. Also, by deriving the baseline stakes and the number of blocks from the mempool, the security and efficiency of the system can be significantly improved. However, as with any new scheme, there are application-oriented challenges to address before {{\em e-PoS}}\xspace can be adopted by a {cryptocurrency}\xspace such as Bitcoin. The Bitcoin network consists of nodes running a software client (Bitcoin Core) that implements the Bitcoin protocol. To incorporate {{\em e-PoS}}\xspace, the Bitcoin community would have to release a new update that implements {{\em e-PoS}}\xspace atop the existing rules set by the system. Below, we mention some other changes that may be required if Bitcoin switches to {{\em e-PoS}}\xspace. \BfPara{Bootstrapping} In {{\em e-PoS}}\xspace, each epoch has a dependency on the previous epoch, where miners are required to achieve PBFT-based consensus over the smart contract input values.
To bootstrap {{\em e-PoS}}\xspace, the first epoch needs to be independent, since it will not have a dependency on any prior execution fragment. To achieve that, we propose an intuitive solution, where any set of $k$ miners in a cryptocurrency can launch a hard fork on the {cryptocurrency}\xspace blockchain. Next, they can execute PBFT for the mempool state and launch the first epoch. Once the epoch is launched, the resulting blocks will be announced to the network, and all peers can switch to the new fork. All subsequent rounds can follow the same procedure outlined in this paper. After the hard fork and the bootstrap phase, any {cryptocurrency}\xspace can switch to {{\em e-PoS}}\xspace. \BfPara{Baseline Stake and Epoch Length} \label{sec:bsn} In Bitcoin, 12.5 new coins (the coinbase reward) are released when a new block is computed. The reward of miners includes those newly created coins and the mining fees of the transactions. As such, the baseline stake in {{\em e-PoS}}\xspace-based Bitcoin would require adding the coinbase reward to the baseline stake of each block ($\mathcal{ST}_{i} = \mathcal{ST}_{i} + 12.5$ BTC). Moreover, in Bitcoin, the standard block size is 1MB, and there is no cap on the mempool size. However, due to selective mining, there is often a transaction backlog that reduces the system's efficiency. Moreover, the varying hash rate often increases the time between the publication of two subsequent blocks, which at times exceeds 50 minutes. If Bitcoin employs {{\em e-PoS}}\xspace, all blocks will be published at the expected time, and in each epoch all transactions will be processed. \BfPara{Mining Record} \label{sec:epln} In {{\em e-PoS}}\xspace, the smart contract maintains a mining record, $M(m_{j},h_{j})$, in which the history of each mining node is maintained for future reference. This record is consulted if there is a conflict in the block auction, and preference is given to the miner with fewer blocks.
In Bitcoin, the smart contract can achieve this by mapping the mining record to the IP addresses of nodes. Nodes in Bitcoin connect using IP addresses, and these IP addresses are also used as reputation indicators for the node~\cite{bitnodes_18}. The mining record can also be maintained with respect to the accounts held by each user. However, we observe that in Bitcoin and Ethereum, a node with a single IP address can easily generate multiple accounts. This allows an adversary to split his balance into multiple accounts using the same node, and place multiple bids in each epoch. In blockchain systems, creating new accounts is easier than creating a new IP address. For example, if the adversary wants to create $p$ unique IP addresses to influence the mining record, he must either launch $p$ unique nodes or shut down his existing node and restart with a new IP address. We note that both methods are costly. If the adversary launches $p$ unique blockchain nodes, he must download the entire blockchain ledger at each node and concurrently manage them for each auction. In contrast, if he restarts his existing machine to acquire a new IP address, his node can take up to several minutes to synchronize with the network~\cite{BitcoinCore_18,bitnodes_18}. If the next epoch starts while the adversary's nodes are synchronizing, then they will not be able to participate in the auction. Therefore, using IP addresses for the mining record is a more useful approach than using accounts since switching IP addresses can be more expensive than switching multiple accounts. It can be argued that a powerful adversary with more than 50\% coins can afford to launch $p$ machines, each with a unique IP address. In that case, if the adversary splits his balance among $p$ nodes, then each node will have less balance than the aggregate balance of the adversary before splitting. As such, if $p$ is large, the balance maintained by each node in $p$ will be smaller.
Therefore, in an auction, an honest miner with a higher balance can beat the dishonest miner in $p$ with lower balance. To avoid that, the adversary needs to know the actual balance of each honest party in the system and select the appropriate value of $p$. Since the balance of an honest party is always anonymous, the adversary cannot compute the precise value of $p$ to dominate the auction. In summary, using IP addresses to maintain a mining record helps in disincentivizing an adversary. \BfPara{Limitations and Future Directions} The smart contract rules can be encoded in the software clients such as Bitcoin Core. Bitcoin Core is itself a smart contract with rules to generate, validate, and broadcast a transaction. Similarly, {{\em e-PoS}}\xspace rules can be encoded in Bitcoin Core to operate the blockchain application. Moreover, while {{\em e-PoS}}\xspace provides several features for a decentralized and fair blockchain system, it can be further improved to address the following limitations. The software clients of blockchain applications are vulnerable to attacks \cite{SaadCLTM19}. The security of software clients is highly critical for {{\em e-PoS}}\xspace, and within the scope of this work, we do not cover this aspect. Our simulations were executed on a prototype smart contract in which we did not face the threat of external attackers. However, part of our ongoing work is to securely integrate {{\em e-PoS}}\xspace in a modified version of Bitcoin Core and perform an end-to-end security analysis. {{\em e-PoS}}\xspace is a multi-phased protocol involving block auction, miner selection, and mempool agreement. The message exchange in all these phases adds a time penalty to block mining. To analyze the time penalty, we use data from the Bitcoin network and try to map it on the {{\em e-PoS}}\xspace model. On average, a Bitcoin transaction takes 2.6 seconds to reach 1000 nodes~\cite{bitnodes_18}. 
In some cases, the delay can be as high as 8 seconds~\cite{bitnodes_18}. If we assume $c$=1000 candidate miners, the primary replica will receive 1000 bids in parallel for the next epoch. The primary will then execute PBFT among $K$ miners to set the parameters for the next epoch. Since the average Bitcoin mempool size is $\approx$6--10 times the block size, we can assume $K$=10 miners. Prior work~\cite{SukhwaniMCTR17} shows that in a network size of 10 nodes, PBFT takes $\approx$17 milliseconds. Therefore, the PBFT execution will be quick and the bottleneck will be created by the bid transmission from the $c$ candidates to the primary replica in $K$. Logically, if there are more than 1000 candidate miners ({\em i.e.,}\xspace 10K in Bitcoin~\cite{bitnodes_18}), the bid transmission can be delayed by more than 8 seconds. As a result, block mining will also be delayed. In some cryptocurrencies such as Bitcoin and Litecoin, this delay can be easily tolerated since their block mining time is 10 minutes and 2.5 minutes, respectively. However, in Ethereum the block mining time is $\approx$20 seconds, and the delay caused by the auction in {{\em e-PoS}}\xspace may not be tolerable. We acknowledge this limitation and note that the constraint is due to the network size and block mining policy. If the Ethereum network switches to {{\em e-PoS}}\xspace, it will have to increase the block mining time or limit the network size. Another limitation of our work is the marginal sacrifice on the fault tolerance of the system. While the smart contract can prevent majority attacks, the key security bottleneck is PBFT's fault tolerance prior to the execution of the smart contract. PBFT requires at least 70\% of miners to behave honestly in an epoch. In other words, the system can only tolerate up to 30\% malicious replicas. If an adversary controls more than 30\% of the miners in an epoch, then {{\em e-PoS}}\xspace may halt due to diverging mempool views.
Therefore, the fault tolerance in {{\em e-PoS}}\xspace is low compared to the other PoW and PoS-based cryptocurrencies, where it is 50\%. An alternative would be to eliminate PBFT from {{\em e-PoS}}\xspace. However, as a requirement, we need a consensus over the mempool size. In a synchronous environment, as originally considered in Bitcoin \cite{nakamoto2008bitcoin}, all nodes have the same mempool state at any given moment. In practice, {cryptocurrency}\xspace networks are asynchronous~\cite{SaadCLTM19}, and the mempool views may not be consistent at all times. To achieve consensus over the mempool, we incorporated the PBFT consensus protocol. Finally, another limitation of {{\em e-PoS}}\xspace is the marginal sacrifice of anonymity due to the historical dependency on IP addresses. In {{\em e-PoS}}\xspace, the smart contract maintains a history of each miner through their IP addresses. Since the blockchain in {{\em e-PoS}}\xspace is a public ledger, anyone from outside the network can query and obtain the IP addresses that have mined the most blocks. Accordingly, the balance of the IP address can also be calculated. Although the IP addresses of Bitcoin and Ethereum nodes are currently public~\cite{bitnodes_18}, it is perhaps worthwhile to increase the network anonymity by masking those addresses. Our future work involves using techniques to replace IP addresses with a new identity scheme in the mining repository. Moreover, besides {{\em e-PoS}}\xspace, if IP usage is replaced by secure yet anonymous identity schemes, it would benefit existing blockchain applications in general by hardening their security against routing attacks \cite{apostolaki2017hijacking}. \vspace{-4mm} \section{Related Work} \label{sec:rw} \BfPara{PoW Limitations} The energy inefficiency of PoW-based blockchain applications has been extensively highlighted before \cite{BahriG18,Dwyer14,GiungatoRTT17,SYMITSI2018127}.
Dwyer and Malone \cite{Dwyer14} first observed the energy-intensive computation performed by Bitcoin miners, and postulated it to become a major problem upon Bitcoin expansion. Bahri and Girdzijauskas \cite{BahriG18} analyzed the energy consumed by Bitcoin to achieve its core consensus protocol. Their estimates show that Bitcoin consumes 39.5 TWh of electricity annually, on average. Harald Vranken \cite{VRANKEN20171} looked into the sustainability of Bitcoin and blockchains by performing a resource profiling of major PoW-based blockchain applications. His work also attributes the growing energy footprint of the Bitcoin {cryptocurrency}\xspace to the endogenous PoW requirements. \BfPara{PoS Limitations} Although PoS addresses the growing energy problems of PoW \cite{Spasovski17,BartolettiLP17,FanZ17}, it is vulnerable to a series of attacks and introduces unfairness in the system. The phenomenon of ``{\em the rich get richer}'' in mining has been reported \cite{ZhengXDCW17,Ren14,BentovGM16}. In PoS, the baseline stakes create a partition in the network in which the peers above the baseline continuously benefit from the fee rewards. This keeps increasing the threshold for the baseline stake and the margins of partition between the wealthy nodes and the rest of the network. Zheng {\em et al.}\xspace noticed this limitation of PoS, and highlighted the need for a new PoS algorithm. Towards that, Kiayias {\em et al.}\xspace~\cite{KiayiasRDO17} performed a formal analysis of PoS-based blockchains using the PoW-based theoretical model proposed by Garay {\em et al.}\xspace~\cite{GarayKL15}. They formally specified the desirable security properties for a PoS blockchain system, and using those properties they presented a model called ``Ouroboros.'' Ouroboros uses a unique reward mechanism to incentivize PoS-based systems and neutralize selfish mining attacks. In Ouroboros, the authors use a coin-flipping technique to introduce randomness in miner selection.
However, as we have shown in \autoref{tab:data}, the random selection, despite its benefits, may sacrifice fairness. To ensure fairness, it is important that the balance of each stakeholder be above the baseline stake. In random selection (see \autoref{fig:rs}), the balance of a miner was often below the baseline stake, which can compromise the security. Therefore, specific to the {{\em e-PoS}}\xspace design, random selection of miners can be counterproductive since it sidesteps the security requirements of the baseline stake. In a similar context, Daian {\em et al.}\xspace~\cite{DaianPS19} presented \textsc{Snow White}, a variant of the PoS-based consensus protocol that realizes a secure application of PoS in {\em permissionless} blockchain systems. Similar to the work of \cite{KiayiasRDO17}, \textsc{Snow White} also utilizes random selection of miners to achieve decentralization. However, instead of the coin-flipping method in~\cite{KiayiasRDO17}, \textsc{Snow White} is an extension of the ``Sleepy Consensus''~\cite{PassS17} and introduces a dependency among the block headers to achieve the notion of randomness. Another notable work is by Chen {\em et al.}\xspace~\cite{ChenM19}, called Algorand, which uses message-passing Byzantine Agreement to achieve consensus in large-scale distributed systems. Algorand requires significantly less energy to operate and it is currently being deployed in a few blockchain systems. Finally, two other notable PoS-based protocols are Delegated Proof-of-Stake (DPoS) and Supernode Proof-of-Stake (SPoS). In DPoS, the network participants vote to elect a team of witnesses that mine blocks~\cite{WangLL20}. SPoS is an extension of DPoS in which supernodes are elected to mine blocks~\cite{blockchain_community_20}. Compared to DPoS, SPoS guarantees a constant inter-arrival block time and optimized data storage. In both protocols, miners commit their stakes before mining a block. If miners misbehave, their stakes are confiscated.
The key problem with this approach is that if the reward of misbehavior is greater than the stake, a miner can easily cheat, violating the fairness property. No mechanism binds stakes with fee rewards to prevent malicious behavior. In {{\em e-PoS}}\xspace, we overcome this problem by using the concept of the baseline stake derived from the mempool. The mempool allows us to precisely calculate the baseline stakes for each block in an epoch and set the lower-bound commitment. As a result, the miner's stake is always greater than or equal to the baseline stake for each block. If a miner cheats, all victims can be compensated. This feature enables fairness in {{\em e-PoS}}\xspace and distinguishes it from both SPoS and DPoS. Another limitation of DPoS and SPoS is that they cannot work in an asynchronous network. For instance, in SPoS, all witnesses are expected to have a consistent network view while mining blocks. However, as we extensively discuss in Section 3.5, when the blockchain network size increases, the network can exhibit asynchrony which eventually leads to inconsistency among miners. Both DPoS and SPoS ignore the effect of network asynchrony. In {{\em e-PoS}}\xspace, we address asynchrony by using the PBFT consensus protocol that synchronizes miners during epochs. \BfPara{Hybrid Designs} To address the limitations of these consensus schemes, hybrid approaches have been adopted to converge their benefits and reduce the attack surface. Duong {\em et al.}\xspace \cite{DuongCFZ18} proposed {\em TwinsCoin}, a secure and scalable hybrid blockchain protocol. Bentov {\em et al.}\xspace \cite{BentovLMR14} proposed a hybrid blockchain protocol called Proof-of-Activity (PoA), which upgrades the defense measures of blockchains with low penalty on network communications and storage space. Some notable attempts have been made to secure blockchains against centralization and majority attacks \cite{DuongCZ18,LiABK17,SpasovskiE17,rocket2018snowflake}.
Despite these commendable efforts, there is still a need for a practical solution that can be applied to existing cryptocurrencies to enable them to easily transition to PoS. Our proposed model addresses these limitations, and presents a solution that can be used in existing blockchain systems with various desirable properties. \vspace{-4mm} \section{Conclusion} \label{sec:con} Due to the massive energy footprint and heavy computation requirements, PoW-based blockchain applications are becoming infeasible. In contrast, the popular alternative known as Proof-of-Stake (PoS) suffers from network centralization and unfairness. In this paper, we have introduced an extended form of PoS, termed {{\em e-PoS}}\xspace, which introduces fairness in the blockchain network and resists centralization. We introduce a smart contract that runs atop the blockchain and carries out a blind block auction. The smart contract applies policies to extend mining opportunities to a wider set of network peers, and ensures fair reward distribution. The execution of the smart contract is facilitated by miners that run PBFT-based consensus over the mempool state. We show, by simulation, that {{\em e-PoS}}\xspace meets its objectives and can be applied to existing blockchain systems, such as Bitcoin. \vspace{-3mm} \balance \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The standardization activities in the integrated 5G networks and low earth orbit (LEO) mobile satellite communication (SatCom) systems, and the emerging plans for LEO constellations, introduce a renewed research interest in the performance of inter-satellite links (ISLs) \cite{LEO_5G,ultraLEO,LEO_beyond5G}. With the technological advances in small satellites, such as cubesats, ISLs will serve as the essential links that enable satellites to work together on the same task and to carry out satellite formation flying missions \cite{surveyISL,cubesat,interSatSynch}. Orthogonal frequency division multiplexing (OFDM) emerges as the leading waveform candidate for ISLs due to its inherent ability to provide high data rates at relatively low complexity. OFDM is a multicarrier modulation technique widely adopted by modern mobile communication systems such as WiFi networks (IEEE 802.11a-802.11be), Long Term Evolution (LTE), and LTE-Advanced. It is also considered the basic waveform for the 5G New Radio (NR). Although OFDM has very desirable traits including high spectral efficiency, achievable high data rates, and robustness in the presence of multipath fading, it is also very sensitive to synchronization errors such as the carrier frequency offset (CFO) and frame misalignments \cite{phyLayerOFDM}. Due to the high-speed movement of LEO satellites and the resulting high Doppler shifts, synchronization is expected to be a major challenge in ISLs \cite{LEO_5G,LEODoppler}. In the presence of a high Doppler shift, the subcarriers of an OFDM system with CFO may no longer be orthogonal and the resulting inter-carrier interference (ICI) may degrade the performance of the system. Another synchronization issue arises when the starting position of the discrete Fourier transform (DFT) window at the receiver is misaligned.
Depending on the position of the misalignment, the effects of the frame misalignment may range from a simple phase offset to ICI. Although the research on carrier and frame timing synchronization for OFDM systems is very mature \cite{optimalMaxLikely,timingFreqSynch,jointCarrSamplHarmonic,espritCFO,zadoffChu,LEOSynch}, in this paper, we aim to provide a fresh perspective on OFDM's synchronization problem by also aiming to combat the residual Doppler shifts that may be encountered in ISLs due to their high mobility. Estimating the Doppler shifts is not only important for dealing with CFO effects, but the Doppler shifts can also be used for securing the ISLs between spacecraft, by using them as a mutually shared secret \cite{ozan}. The proposed method transforms the estimation of the CFO and the frame misalignment in an OFDM-based inter-satellite link into a 2-D harmonic retrieval problem. The representation is in the frequency domain and this allows the estimation of the frame misalignment, unlike \cite{jointCarrSamplHarmonic,espritCFO} where the frame misalignment is assumed to be corrected beforehand. When compared to the methods that utilize the cyclic prefix (CP) portion of the OFDM symbol \cite{optimalMaxLikely,LEOSynch}, the proposed method relies on the use of pilot symbols. Furthermore, the proposed method does not require the knowledge of the noise statistics, unlike the methods that use the CP \cite{optimalMaxLikely,LEOSynch}. We also derive the Cram\'er-Rao lower bound (CRLB) for the joint estimation of the CFO and the frame misalignment. Numerical results show that for the error range of $[10^{-2},10^{-4}]$, the difference between the proposed 2-D ESPRIT based method and the PSS method \cite{LEOSynch} is less than 5 dB at its worst. The contributions of the paper can be summarized as follows: \begin{itemize} \item The proposed method represents the estimation of the CFO and the frame misalignment in an OFDM-based ISL as a 2-D harmonic retrieval problem.
Unlike \cite{jointCarrSamplHarmonic}, the representation is in the frequency domain and this allows the estimation of the frame misalignment. \item When compared to the methods that utilize the cyclic prefix symbols \cite{optimalMaxLikely,LEOSynch}, the proposed method relies on the use of pilot symbols. So the length of the CP symbols is not a factor that affects the estimation performance of the proposed method. \item The proposed method does not require knowledge of noise statistics, unlike the methods that use cyclic prefix symbols \cite{optimalMaxLikely,LEOSynch}. \item A disadvantage of the proposed method is that, due to relying on sending constant pilot symbols, the peak-to-average power ratio is high. \end{itemize} The rest of this paper is organized as follows. Section 2 introduces the signal model for the OFDM-based ISLs. Section 3 reformulates the signal model as a 2-D estimation problem. Section 4 shows the CRLB of the parameters in the reformulated model. We compare the simulation results of the proposed method against the state-of-the-art methods in Section 5. Finally, Section 6 presents the conclusions and directions for future work. \section{Signal Model} \label{sec:signal} In an OFDM system, the symbols are transmitted with the sampling period $T_{s}$ in a series of frames denoted by $X_{m}[k]$ where $m$ indicates the $m$-th OFDM frame and $k$ is the subcarrier index. The time-domain symbols are modulated by applying the inverse DFT to the frequency-domain symbols for $k=0,\ldots,N-1$ where $N$ is the total number of subcarriers. Then $N_{g}$ number of CP samples are appended to the front of the time-domain frame as in \begin{equation} x_{m}[n]=\frac{1}{N}\sum_{k=0}^{N-1}X_{m}[k]e^{i2\pi k(n-N_{g})/N} \label{eq:OFDMblock} \end{equation} where $0\le n \le N_{t}-1$ and $N_{t}=N+N_{g}$.
The discrete-time received frames in baseband can be written as \begin{equation} r_{m}[n]=e^{i2\pi \epsilon mN_{t}/N}e^{i2\pi \epsilon n/N}(h_{m}\circledast\tau_{p}x_{m})[n]+z_{m}[n] \label{eq:ISI} \end{equation} where $\circledast$ denotes the cyclic convolution operation, $h_{m}[n]$ is the overall channel that is the result of the convolution of the multipath fading channel and the pulse shaping filters of both the transmitter and the receiver, the starting position of the DFT window at the receiver is shown as $\tau_{p}x_{m}[n]=x_{m}[n+p]$, and the additive white Gaussian noise (AWGN), $z_{m}[n]$, is modeled as a circularly symmetric Gaussian random variable, i.e. $z_{m}[n]\sim\mathcal{CN}(0,N_{0})$. Due to the clock inaccuracies of the transmitter and the receiver oscillators, and the Doppler spread caused by the mobility of the satellites, the conversion from passband to baseband generates the unwanted multiplicative terms, $e^{i2\pi \epsilon mN_{t}/N}e^{i2\pi \epsilon n/N}$, in \eqref{eq:ISI} where $\epsilon$ is the total CFO term normalized by the subcarrier spacing, $1/NT_{s}$, as \begin{equation} \epsilon=NT_{s}(f_{d}-\Delta_{f_{c}}). \label{eq:totalCFO} \end{equation} $\Delta_{f_{c}}$ and $f_{d}$ in \eqref{eq:totalCFO} denote the clock difference and the Doppler spread, respectively. Since the free-space loss and the thermal noise of the electronics are enough to characterize the ISLs, an AWGN channel model, i.e. $h_{m}[n]=\delta_{0}[n]$, can be used in \eqref{eq:ISI} \cite{cubesat}. Thus the received signal for the discrete baseband equivalent model can be written as \begin{equation} r_{m}[n]= e^{i2\pi \epsilon m(1+\alpha)}e^{i2\pi \epsilon n/N}x_{m}[n+p]+z_{m}[n] \label{eq:AWGN} \end{equation} where $\alpha=N_{g}/N$. The normalized CFO, $\epsilon=\varepsilon+\ell$, consists of a fractional part, i.e. $|\varepsilon|\le 0.5$, and an integer part $\ell\in\mathbb{Z}^{+}$.
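The model in \eqref{eq:OFDMblock} and \eqref{eq:AWGN} can be sketched numerically as follows. The parameter values ($N$, $N_{g}$, $\epsilon$, $p$, SNR) are illustrative assumptions, not the simulation settings used later, and the cyclic shift is a simplification of the misaligned DFT window.

```python
import numpy as np

# Illustrative sketch of the AWGN received-signal model: an OFDM frame
# (IDFT plus cyclic prefix) impaired by a normalized CFO 'eps', a DFT-window
# misalignment 'p' (modeled here as a cyclic shift), and complex Gaussian
# noise. N, Ng, eps, p, and the SNR are assumed values for the example.
N, Ng = 64, 16
alpha = Ng / N
eps, p = 0.22, 2
rng = np.random.default_rng(0)

def ofdm_frame(X):
    """IDFT of the frequency-domain symbols with an Ng-sample cyclic prefix."""
    x = np.fft.ifft(X, N)
    return np.concatenate([x[-Ng:], x])  # prepend the CP

def receive(x, m, snr_db=20):
    """Apply the two CFO phase ramps, the shift x[n+p], and AWGN."""
    n = np.arange(len(x))
    cfo = np.exp(2j*np.pi*eps*m*(1+alpha)) * np.exp(2j*np.pi*eps*n/N)
    noise_std = np.sqrt(10**(-snr_db/10) / 2)
    noise = noise_std * (rng.standard_normal(len(x))
                         + 1j*rng.standard_normal(len(x)))
    return cfo * np.roll(x, -p) + noise
```

Here \texttt{np.roll} emulates the misaligned window $x_{m}[n+p]$ as a cyclic shift; in an actual receiver the shift arises from where the DFT window is placed within the sample stream.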
The OFDM receiver first removes the cyclic prefix samples and then applies the DFT to the remaining samples resulting in \begin{equation} R_{m}[k]=C_{m}[k]\circledast e^{i2\pi(\ell-k)p/N}X_{m}[k-\ell]+Z_{m}[k], \label{eq:DFTapplied} \end{equation} where $C_{m}[k]$ is given as \begin{IEEEeqnarray}{rCl} C_{m}[k]&=&\frac{\sin(\pi[\varepsilon-k])}{N\sin(\pi[\varepsilon-k]/N)}e^{i\pi[\varepsilon-k](N-1)/N} \nonumber \\ &\times& e^{i2\pi m[\varepsilon(1+\alpha)+\ell\alpha]}e^{i2\pi (\varepsilon+\ell) \alpha} \end{IEEEeqnarray} and $Z_{m}[k]$ is also circularly symmetric Gaussian that is $Z_{m}[k]\sim\mathcal{CN}(0,NN_{0})$ since the DFT is a linear transformation and the circular symmetry is invariant to linear transformations. \section{Estimation of CFO and Frame Misalignment Using 2-D ESPRIT} \label{sec:2D_ESPRIT} The DFT of the received samples, $R_{m}[k]$, in \eqref{eq:DFTapplied} can be rewritten as a 2-D signal model by using the OFDM frame index, $m$, as a second dimension taking values in the range $0\le m\le M-1$ \begin{IEEEeqnarray}{rCl} R[m,k]&=& e^{i2\pi f_{1}m}e^{i2\pi f_{2}k}e^{i\psi}\sum_{r=0}^{N-1}\frac{\sin(\pi(\varepsilon-r))}{N\sin(\pi[\varepsilon-r]/N)} \nonumber \\ && e^{i2\pi r \left( \frac{1-N}{2N}+\frac{p}{N} \right)}X[m,k-\ell-r]+Z[m,k], \label{eq:DFTappliedRewrite1} \end{IEEEeqnarray} where the frequencies and the phase terms are respectively \begin{IEEEeqnarray}{rCl} f_{1}&=&\varepsilon(1+\alpha)+\ell\alpha \label{eq:freq1} \\ f_{2}&=&-p/N \label{eq:freq2} \\ \psi&=&2\pi\left\lbrack (\varepsilon+\ell) \alpha+\frac{\ell p}{N}+\varepsilon\frac{N-1}{2N} \label{eq:phase} \right\rbrack. \end{IEEEeqnarray} The transmitted OFDM symbols in \eqref{eq:DFTappliedRewrite1}, $X[m,k-\ell-r]$, depend on both the frame index, $m$, and the subcarrier index, $k$. 
This dependency can be removed by sending the same pilot symbol on each subcarrier that is $X[m,k]=X$ for all $m$ and $k$, and then the DFT of the received samples \eqref{eq:DFTappliedRewrite1} can be formulated as a harmonic retrieval problem \begin{equation} R[m,k]=c \, \phi^{m}\theta^{k}+Z[m,k] , \label{eq:2DParameterEstimation} \end{equation} where $\phi=e^{i2\pi f_{1}}$, $\theta=e^{i2\pi f_{2}}$ and $c$ is a complex coefficient \begin{equation} c=e^{i\psi}X\sum_{r=0}^{N-1}\frac{\sin(\pi(\varepsilon-r))}{N\sin(\pi[\varepsilon-r]/N)}e^{i2\pi r \left( \frac{1-N}{2N}+\frac{p}{N} \right)}. \end{equation} The signal in \eqref{eq:2DParameterEstimation} has a single 2-D mode defined by the frequencies $\lbrace f_{1},f_{2} \rbrace$, and a complex coefficient $c=\lambda e^{i\varphi}$ where $\lambda=|c|$ and $\varphi=\angle c$. The ESPRIT-based estimation methods identify the pairs $\lbrace f_{1},f_{2} \rbrace$ from the observed data $R[m,k]$ by turning the 2-D estimation problem into two 1-D estimation problems and exploiting the shift-invariance structure of the signal subspace for each 1-D problem \cite{2D_ESPRIT}. Since the unknown 2-D signal mode is undamped, the forward-backward prediction \cite{2D_ESPRIT} can be applied to increase the estimation accuracy by using an extended data matrix $\mathbf{R}_{ee}$ as \begin{equation} \mathbf{R}_{ee}=[\mathbf{R}_{e} \quad \mathbf{\Pi}\,\mathbf{R}^{*}_{e}\,\mathbf{\Pi}], \label{eq:extendedDataMatrix} \end{equation} where complex conjugation without transposition is shown by the $(\cdot)^{*}$ symbol, and $\mathbf{\Pi}$ is a permutation matrix with ones on its antidiagonal and zeroes elsewhere. $\mathbf{R}_{e}$ \eqref{eq:extendedDataMatrix} is the enhanced Hankel block structured matrix that is constructed by applying an observation window of size $P\times Q$ through the rows of the noisy received samples of which one of the dimensions is fixed. 
The $\mathbf{R}_{e}$ matrix for the first dimension is given as \begin{equation} \mathbf{R}_{e1}=\left\lbrack \begin{array}{cccc} \mathbf{R}_{(0)} & \mathbf{R}_{(1)} & \cdots & \mathbf{R}_{(M-P)} \\ \mathbf{R}_{(1)} & \mathbf{R}_{(2)} & \cdots & \mathbf{R}_{(M-P+1)} \\ \vdots & \ddots & \ddots & \vdots \\ \mathbf{R}_{(P-1)} & \mathbf{R}_{(P)} & \cdots & \mathbf{R}_{(M-1)} \end{array} \right\rbrack, \end{equation} where $\mathbf{R}_{(m)}$ is a Hankel matrix of size $Q\times (N-Q+1)$ as given below \begin{equation} \mathbf{R}_{(m)}=\left\lbrack \begin{array}{ccc} R[m,0] & \cdots & R[m,N-Q] \\ R[m,1] & \cdots & R[m,N-Q+1] \\ \vdots & \ddots & \vdots \\ R[m,Q-1] & \cdots & R[m,N-1] \end{array} \right\rbrack. \end{equation} The extended data matrix constructed according to \eqref{eq:extendedDataMatrix} for the first dimension, $\mathbf{R}_{ee1}$, can be decomposed in terms of a signal and a noise subspace as \begin{equation} \mathbf{R}_{ee1}=c\,\mathbf{s}_{L1}\mathbf{s}^{T}_{R1}+\mathbf{Z}_{1}, \label{eq:extendedData} \end{equation} where $\mathbf{Z}_{1}$ is the Hankel block structured matrix constructed from the noise samples $Z[m,k]$ in the same way as $\mathbf{R}_{e1}$ from $R[m,k]$. $\mathbf{s}_{L1}$ is a vector of size $PQ\times 1$ given as $\mathbf{s}_{L1}=[\boldsymbol{\theta}_{1} \, \boldsymbol{\theta}_{1}\phi \, \ldots \boldsymbol{\theta}_{1}\phi^{P-1}]^{T}$ where $\boldsymbol{\theta}_{1}$ is $\boldsymbol{\theta}_{1}=\lbrack 1 \,\,\theta \,\ldots\,\theta^{Q-1} \rbrack^{T}$. The construction of the vector $\mathbf{s}_{R1}$ is similar to that of $\mathbf{s}_{L1}$. The rank of the signal subspace, $c\,\mathbf{s}_{L1}\mathbf{s}^{T}_{R1}$, must be equal to the number of signal components, which is equal to one since there is only one user. In order for the rank of the signal part to be at least equal to one, $P$ and $Q$ must satisfy the following inequalities \cite{2D_ESPRIT} \begin{equation} M \ge P \ge 1, \qquad N \ge Q \ge 1.
\label{eq:obsWindSize} \end{equation} The singular value decomposition (SVD) of $\mathbf{R}_{ee1}$ yields \begin{equation} \mathbf{R}_{ee1}=\mathbf{U}_{S1}\mathbf{\Sigma}_{S1}\mathbf{V}^{H}_{S1}+\mathbf{U}_{Z1}\mathbf{\Sigma}_{Z1}\mathbf{V}^{H}_{Z1}. \label{eq:SVD} \end{equation} While the singular vectors and the singular value of the signal mode are contained, respectively, in $\mathbf{U}_{S1}$, $\mathbf{V}^{H}_{S1}$, and $\mathbf{\Sigma}_{S1}$, the singular vectors and the singular values of the remaining noise components are respectively in $\mathbf{U}_{Z1}$, $\mathbf{V}^{H}_{Z1}$, and $\mathbf{\Sigma}_{Z1}$ \eqref{eq:SVD}. The frequency related to the first dimension, $f_{1}$, can be estimated from the $\mathbf{F}_{1}$ matrix \begin{equation} \mathbf{F}_{1}=\underline{\mathbf{U}}^{\dagger}_{S1}\overline{\mathbf{U}}_{S1} \label{eq:eigs1stDim} \end{equation} where $\underline{\mathbf{U}}_{S1}$ (resp. $\overline{\mathbf{U}}_{S1}$) is constructed with the first (resp. last) $(P-1)Q$ rows of the matrix $\mathbf{U}_{S1}$. $\underline{\mathbf{U}}^{\dagger}_{S1}$ denotes the pseudo-inverse matrix of $\underline{\mathbf{U}}_{S1}$ and total least squares can be applied to calculate $\mathbf{F}_{1}$ in \eqref{eq:eigs1stDim}. The frequency $f_{1}$ is the eigenvalue of the matrix $\mathbf{F}_{1}$ that is \begin{equation} f_{1}=\mathbf{T}_{1}\mathbf{F}_{1}\mathbf{T}^{-1}_{1}, \end{equation} where $\mathbf{T}_{1}$ is the eigenvector matrix that diagonalizes $\mathbf{F}_{1}$. For the frequency related to the second dimension, $f_{2}$, the $\mathbf{F}_{2}$ matrix can be obtained by using the extended data matrix for the second dimension, $\mathbf{R}_{ee2}$, and following the same steps.
Finally the pairing method of \cite{2D_ESPRIT} must also be employed to calculate the eigenvalue decomposition of a linear combination of $\mathbf{F}_{1}$ and $\mathbf{F}_{2}$ \begin{equation} \beta \mathbf{F}_{1}+(1-\beta)\mathbf{F}_{2}=\mathbf{T}\mathbf{\Sigma}\mathbf{T}^{-1}, \end{equation} where $\beta$ is a scalar. The diagonalizing transformation $\mathbf{T}$ is applied to both $\mathbf{F}_{1}$ and $\mathbf{F}_{2}$ \begin{IEEEeqnarray}{rCl} f_{1}&=&\mathbf{T}\mathbf{F}_{1}\mathbf{T}^{-1} \\ f_{2}&=&\mathbf{T}\mathbf{F}_{2}\mathbf{T}^{-1} \end{IEEEeqnarray} which yields the ordered frequencies. \section{Cram\'er-Rao lower bound (CRLB) Analysis} \label{sec:CRLB} The unknown parameters of \eqref{eq:2DParameterEstimation} can be collected in a vector of size $4\times 1$ as \begin{equation} \boldsymbol{\vartheta}=\left\lbrack \omega_{1} \, \omega_{2} \,\, \lambda \,\, \varphi \right\rbrack^{T}. \end{equation} where $\omega_{1}=2\pi f_{1}$ and $\omega_{2}=2\pi f_{2}$ represent the angular frequencies. If the DFT of the received signal samples \eqref{eq:2DParameterEstimation} are written as an $MN\times 1$ column vector \begin{eqnarray} \mathbf{r}=\lbrack R[0,0],\ldots,R[0,N-1],\ldots \nonumber \\ R[M-1,0],\ldots R[M-1,N-1] \rbrack^{T} \end{eqnarray} then the joint probability density function (PDF) of the multivariate circularly symmetric Gaussian random vector $\mathbf{r}\sim\mathcal{CN}(\boldsymbol{\mu},NN_{0}\mathbf{I})$ is given as \begin{equation} f(\mathbf{r})=\frac{1}{(NN_{0}\pi)^{MN}}\exp\left\lbrace -\frac{1}{NN_{0}}\left(\mathbf{r}-\boldsymbol{\mu}\right)^{H}\left(\mathbf{r}-\boldsymbol{\mu}\right) \right\rbrace \label{eq:jointPDF} \end{equation} where $\mathbf{I}$ is an identity matrix of size $MN\times MN$. 
The $j$-th entry of the mean vector $\boldsymbol{\mu}$ \eqref{eq:jointPDF} is \begin{equation} \boldsymbol{\mu}_{j}=c\,\phi^{m'_{j}}\,\theta^{k'_{j}}, \qquad j=1,\ldots,MN, \end{equation} where \begin{IEEEeqnarray}{rCl} m'_{j}&=&\left\lfloor{(j-1)/N}\right\rfloor\mod M \\ k'_{j}&=&(j-1)\mod N. \end{IEEEeqnarray} The $(j,j')$ entry of the Fisher information matrix for the multivariate Gaussian PDF \eqref{eq:jointPDF} is \begin{equation} \mathbf{W}_{j,j'}=\frac{2}{NN_{0}}\Re\left\lbrace\left\lbrack\frac{\partial\boldsymbol{\mu}}{\partial\boldsymbol{\vartheta}_{j}} \right\rbrack^{H} \left\lbrack \frac{\partial\boldsymbol{\mu}}{\partial\boldsymbol{\vartheta}_{j'}} \right\rbrack \right\rbrace, \end{equation} where $\Re\lbrace \cdot \rbrace$ takes the real parts of the entries of the matrix \cite{SSA_Multidim}. The diagonal entries of the inverse of the Fisher matrix are the CRLB of the parameters. The CRLB for $\omega_{1}$ and $\omega_{2}$ are given in \eqref{eq:CRBfreq1} and \eqref{eq:CRBfreq2}, respectively. \begin{IEEEeqnarray}{rCl} \text{CRLB}(\omega_{1})&=& \frac{6NN_{0}}{c^{2}MN(M^2-1)} \label{eq:CRBfreq1} \\ \text{CRLB}(\omega_{2})&=& \frac{6NN_{0}}{c^{2}MN(N^{2}-1)} \label{eq:CRBfreq2} \end{IEEEeqnarray} \section{Numerical Results} \label{sec:numRes} The simulated inter-satellite communication system uses OFDM modulation, which consists of $N=64$ subcarriers and $N_{g}=16$ CP samples. The CFO is generated independently for each OFDM frame as a uniform random variable $\epsilon \sim \mathcal{U}[0.2,0.25]$ and the frame misalignment is fixed at $p=2$. The signal-to-noise ratio (SNR) is defined as $\text{SNR}=E_{s}/N_{0}$ where the symbol energy is normalized to unity, $E_{s}=1$. The performance of the proposed method is measured by calculating the mean squared error (MSE) for both the CFO and frame misalignment estimations and a total of 2000 Monte Carlo simulations are run for each SNR value.
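The shift-invariance principle behind the estimator can be illustrated with a minimal single-mode sketch. This is a simplified variant (not the forward-backward Hankel construction of Section 3): for the rank-1 model $R[m,k]=c\,\phi^{m}\theta^{k}$, the dominant singular vectors of the raw data matrix are Vandermonde in $\phi$ and $\theta$, so the frequencies follow from the ratio between consecutive entries. The values $M=8$, $f_{1}=0.27$, and the noise level are assumptions for the example; the paper's simulations use $M=2$ pilots.

```python
import numpy as np

# Simplified single-mode, ESPRIT-style sketch (an assumption-laden variant,
# not the full forward-backward 2-D algorithm): the dominant singular
# vectors u and w of R are Vandermonde in phi and theta, so the
# shift-invariance u[1:] ~ phi*u[:-1] yields f1 and f2.
def estimate_mode(R):
    U, _, Vh = np.linalg.svd(R)
    u = U[:, 0]       # Vandermonde in phi (up to a scalar)
    w = Vh[0, :]      # Vandermonde in theta (up to a scalar)
    f1 = np.angle(np.vdot(u[:-1], u[1:])) / (2*np.pi)
    f2 = np.angle(np.vdot(w[:-1], w[1:])) / (2*np.pi)
    return f1, f2

# Assumed example values: M = 8 pilot frames, N = 64 subcarriers, p = 2.
M, N = 8, 64
f1_true, f2_true = 0.27, -2/64            # f2 = -p/N
m, k = np.arange(M)[:, None], np.arange(N)[None, :]
rng = np.random.default_rng(1)
R = np.exp(2j*np.pi*(f1_true*m + f2_true*k)) \
    + 0.01*(rng.standard_normal((M, N)) + 1j*rng.standard_normal((M, N)))

f1_hat, f2_hat = estimate_mode(R)
p_hat = round(-N * f2_hat)                # invert f2 = -p/N
```

Once $f_{1}$ and $f_{2}$ are recovered, the misalignment follows from $p=-Nf_{2}$ and, when $\ell=0$, the fractional CFO from $\varepsilon=f_{1}/(1+\alpha)$.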
The selected benchmark methods are the maximum likelihood estimator, also known as Beek's method \cite{optimalMaxLikely}, which allows the transmission of data symbols by using the CP samples for estimation; Minn's improved Schmidl \& Cox estimator \cite{timingFreqSynch}; and the cross-correlation based method that utilizes primary synchronization symbols (the PSS method) \cite{LEOSynch}. The parameters for the proposed method are as follows: the number of pilot symbols is $M=2$, the size of the observation window applied to the received samples is chosen according to \eqref{eq:obsWindSize} as $P=Q=2$, and the pairing coefficient is $\beta=8$. The results of the frame misalignment estimation (Figure \ref{fig:MSE_STO}) show that while Beek's and Minn's methods perform worse than the proposed method at every SNR value, the PSS method surpasses the proposed method. \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.8cm]{MSE_STO_icassp21.eps}} \end{minipage} \caption{Comparison of the frame misalignment estimation performance of the proposed 2-D ESPRIT method with Beek's, Minn's, and the PSS methods in the SNR range from -10 dB to 20 dB.} \label{fig:MSE_STO} \end{figure} The MSE of the CFO estimates is shown in Figure \ref{fig:MSE_CFO}. The MSE of the proposed 2-D ESPRIT method is closest to the CRLB at 0 dB SNR, and for SNR values below 10 dB the proposed method outperforms the rest of the methods. The PSS method is the second-best performer for SNR values less than 5 dB. 
\begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.8cm]{MSE_CFO_icassp21.eps}} \end{minipage} \caption{Comparison of the CFO estimation performance of the proposed 2-D ESPRIT method with Beek's, Minn's, and the PSS methods in the SNR range from -10 dB to 20 dB.} \label{fig:MSE_CFO} \end{figure} The computational complexities of all the methods and their numerical evaluations for the parameter values used in the simulations are listed in Table \ref{tab:complexityComparison}. The complexities are in terms of real floating-point operations (flops). One complex multiplication is counted as 6 real flops, while one complex addition is counted as 2 real flops. The SVD step determines the complexity of the proposed 2-D ESPRIT estimation method, and the complexity is expressed in terms of the size of the extended data matrix, $\mathbf{R}_{ee}\in\mathbb{C}^{L\times K}$, where $L=PQ=4$ and $K=2(N-Q+1)(M-P+1)=126$. The symbol $G$ used in the complexity of the PSS method denotes the size of the search grid used in the CFO estimation step. $D$, the number of repetitions of the training symbol used in Minn's method, is $D=4$. In the numerical results, $M$ and $G$ are chosen as $M=2$ and $G=500$, respectively. While the complexity of the proposed method is lower than that of the PSS method, the complexities of both Beek's and Minn's methods are lower still. 
\begin{table}[tbp] \renewcommand{\arraystretch}{1.3} \caption{Complexity Comparison of the Methods} \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Method}& \textbf{Complexity} & \textbf{Number} \\ \hline Proposed Harmonic Retrieval & $2LK^{2}+K^{3}+K+LK$ & 2127384 \\ \hline PSS \cite{LEOSynch} & $(NK+G)(15N+5)$ & 729540 \\ \hline Beek \cite{optimalMaxLikely} & $24N_{t}N_{g}+10N_{t}$ & 31520 \\ \hline Minn's \cite{timingFreqSynch} & $36(N^{2}/D)+6N$ & 36480 \\ \hline \end{tabular} \label{tab:complexityComparison} \end{center} \end{table} \section{Conclusion} \label{sec:con} Synchronization in the ISLs of LEO systems is a critical issue due to the Doppler spread caused by the high mobility of the satellites. We presented a novel method for the estimation of the CFO and frame misalignment in OFDM-based ISLs. We reformulated the synchronization problem as a 2-D harmonic retrieval problem in the frequency domain and applied the 2-D ESPRIT method to estimate the parameters. Since this new approach operates in the frequency domain, unlike the previous harmonic retrieval approach, it allows the joint estimation of the CFO and the frame misalignment. Like other synchronization methods that rely on pilot symbols, such as the well-known Schmidl \& Cox method and the PSS method, the proposed method requires transmitting frames of pilot symbols for the reformulation to work. The CRLB for the joint estimation of the CFO and the frame misalignment is derived, and the performance of the proposed method is compared against Beek's, Schmidl \& Cox's, and the PSS methods. Numerical results show that respectable estimation performance can be achieved by using the proposed method with only two consecutive pilot frames. \balance \bibliographystyle{IEEEbib}
\section{Introduction} \label{s1} All matrices and vector spaces are considered over the field $\mathbb C$ of complex numbers. By the theorem on pencils of matrices (see \cite[Sect. XII]{gan}), every pair of $p\times q$ matrices reduces by transformations of simultaneous equivalence \begin{equation}\label{1.a} (A_1,\,A_2)\mapsto (R^{-1}A_1S,\,R^{-1}A_2S) \end{equation} ($R$ and $S$ are arbitrary nonsingular matrices) to a direct sum, determined uniquely up to permutation of summands, of pairs of the form \begin{equation}\label{1.3'} (I_n,J_n(\lambda)),\ (J_n(0),I_n),\ (F_n,G_n),\ (F_n^T,G_n^T), \end{equation} where \begin{equation}\label{1.4} F_n=\begin{bmatrix} 1&0&&0\\&\ddots&\ddots&\\0&&1&0 \end{bmatrix},\quad G_n=\begin{bmatrix} 0&1&&0\\&\ddots&\ddots&\\0&&0&1 \end{bmatrix}, \quad n\ge 1, \end{equation} are $(n-1)\times n$ matrices, and $J_n(\lambda)$ is a Jordan block. The direct sum of pairs is defined by \[ (A,B)\oplus(C,D)= (A\oplus C,\,B\oplus D)=\left(\begin{bmatrix} A & 0 \\ 0 & C \end{bmatrix},\: \begin{bmatrix} B & 0 \\ 0 & D \end{bmatrix}\right). \] Note that $F_1$ and $G_1$ in \eqref{1.4} have size $0\times 1$. It is agreed that there exists exactly one matrix, denoted by $0_{n0}$, of size $n\times 0$ and there exists exactly one matrix, denoted by $0_{0n}$, of size $0\times n$ for every nonnegative integer $n$; they represent the linear mappings $0\to {\mathbb C}^n$ and ${\mathbb C}^n\to 0$ and are considered as zero matrices. Then $$ M_{pq}\oplus 0_{m0}=\begin{bmatrix} M_{pq} & 0 \\ 0 &0_{m0} \end{bmatrix}=\begin{bmatrix} M_{pq}& 0_{p0} \\ 0_{mq}& 0_{m0} \end{bmatrix}=\begin{bmatrix} M_{pq} \\ 0_{mq} \end{bmatrix} $$ and $$ M_{pq}\oplus 0_{0n}=\begin{bmatrix} M_{pq} & 0 \\ 0 & 0_{0n} \end{bmatrix}=\begin{bmatrix} M_{pq}& 0_{pn} \\ 0_{0q}& 0_{0n} \end{bmatrix}=\begin{bmatrix} M_{pq} & 0_{pn} \end{bmatrix} $$ for every $p\times q$ matrix $M_{pq}$. P. 
Van Dooren \cite{doo} constructed an algorithm that for every pair $(A,B)$ of $p\times q$ matrices calculates a simultaneously equivalent pair \begin{equation*}\label{1.4a} (A_1,B_1)\oplus\dots\oplus (A_r,B_r) \oplus (C,D), \end{equation*} where all $(A_i,B_i)$ are of the form \[ (I_n,J_n(0)),\ (J_n(0),I_n),\ (F_n,G_n),\ (F_n^T,G_n^T), \] and the matrices $C$ and $D$ are nonsingular. The pair $(C,D)$ is called a {\it regular part} of $(A,B)$ and is simultaneously equivalent to a direct sum of pairs of the form $(I_n,J_n(\lambda))$ with $\lambda\ne 0$. This algorithm uses only transformations \eqref{1.a} with {\it unitary} $R$ and $S$, which is important for its numerical stability. In this article we construct a unitary algorithm for computing the canonical form of the system of matrices of a chain of linear mappings \begin{equation}\label{aa1.1} V_1\ \frac{{\cal{A}}_1}{\qquad}\ V_2\ \frac{{\cal{A}}_2}{\qquad}\ \cdots\ \frac{{\cal{A}}_{t-1}}{\qquad}\ V_t \end{equation} (see Proposition \ref{t2.a}) and extend Van Dooren's algorithm to the matrices of a cycle of linear mappings\\[3mm] \begin{equation}\label{1.1} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(95.00,5.00)(0,-3) \put(-15.00,0.00){\makebox(0,0)[cc]{$\cal A$:}} \put(0.00,0.00){\makebox(0,0)[cc]{$V_1$}} \put(40.00,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(60.00,0.00){\makebox(0,0)[cc]{$V_{t-1}$}} \put(80.00,0.00){\makebox(0,0)[cc]{$V_t$}} \put(20.00,0.00){\makebox(0,0)[cc]{$V_2$}} \put(5.00,0.00){\line(1,0){10.00}} \put(25.00,0.00){\line(1,0){10.00}} \put(45.00,0.00){\line(1,0){10.00}} \put(65.00,0.00){\line(1,0){10.00}} \put(10.00,5.00){\makebox(0,0)[ct]{${\cal A}_1$}} \put(30.00,5.00){\makebox(0,0)[ct]{${\cal A}_2$}} \put(50.00,5.00){\makebox(0,0)[ct]{${\cal A}_{t-2}$}} \put(70.00,5.00){\makebox(0,0)[ct]{${\cal A}_{t-1}$}} \bezier{300}(4.00,-2.00)(40.00,-12.00)(76.00,-2.00) \put(40.00,-12.00){\makebox(0,0)[cb]{${\cal A}_t$}} \put(83.67,-0.00){\makebox(0,0)[lc]{$,\quad t\ge 2\,,$}} 
\end{picture} \end{equation} \\[13pt] (see Theorem \ref{th}), where each line is the arrow $\longrightarrow$ or the arrow $\longleftarrow$ and $V_1,\dots,V_t$ are vector spaces. For instance, the linear mappings ${\cal A}_1$ and ${\cal A}_2$ of a cycle \[ \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(90.00,5.00)(-29,0) \put(-1.00,0.00){\makebox(0,0)[cc]{$V_1$}} \put(2.00,1.00){\vector(1,0){26.00}} \put(2.00,-1.00){\vector(1,0){26.00}} \put(31.00,0.00){\makebox(0,0)[cc]{$V_2$}} \put(15.00,4.00){\makebox(0,0)[cc]{${\cal A}_1$}} \put(15.00,-4.5){\makebox(0,0)[cc]{${\cal A}_2$}} \end{picture}\\[1em] \] are represented by a pair of matrices $(A_1,A_2)$ with respect to bases in $V_1$ and $V_2$, and a change of the bases reduces this pair by transformations of simultaneous equivalence \eqref{1.a}; in this case our algorithm coincides with Van Dooren's algorithm. Similarly, the linear mappings ${\cal A}_1$ and ${\cal A}_2$ of a cycle \[ \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(90.00,5.00)(30,0) \put(59.00,0.00){\makebox(0,0)[cc]{$V_1$}} \put(62.00,1.00){\vector(1,0){26.00}} \put(88.00,-1.00){\vector(-1,0){26.00}} \put(91.00,0.00){\makebox(0,0)[cc]{$V_2$}} \put(75.00,4.00){\makebox(0,0)[cc]{${\cal A}_1$}} \put(75.00,-4.5){\makebox(0,0)[cc]{${\cal A}_2$}} \end{picture}\\*[1em] \] are represented by a pair $(A_1,A_2)$, and a change of the bases in $V_1$ and $V_2$ reduces this pair by transformations of {\it contragredient equivalence} \begin{equation*}\label{1c} (A_1,\,A_2)\mapsto (R^{-1}A_1S,\,S^{-1}A_2R). 
\end{equation*} The {\it direct sum} of the cycle \eqref{1.1} and a cycle \begin{equation*} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(95.00,5.00)(-15,0) \put(-15.00,0.00){\makebox(0,0)[cc]{${\cal A}'$:}} \put(0.00,0.00){\makebox(0,0)[cc]{$V'_1$}} \put(40.00,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(60.00,0.00){\makebox(0,0)[cc]{$V'_{t-1}$}} \put(80.00,0.00){\makebox(0,0)[cc]{$V'_t$}} \put(20.00,0.00){\makebox(0,0)[cc]{$V'_2$}} \put(5.00,0.00){\line(1,0){10.00}} \put(25.00,0.00){\line(1,0){10.00}} \put(45.00,0.00){\line(1,0){10.00}} \put(65.00,0.00){\line(1,0){10.00}} \put(10.00,5.00){\makebox(0,0)[ct]{${\cal A}'_1$}} \put(30.00,5.00){\makebox(0,0)[ct]{${\cal A}'_2$}} \put(50.00,5.00){\makebox(0,0)[ct]{${\cal A}'_{t-2}$}} \put(70.00,5.00){\makebox(0,0)[ct]{${\cal A}'_{t-1}$}} \bezier{300}(4.00,-2.00)(40.00,-12.00)(76.00,-2.00) \put(40.00,-12.00){\makebox(0,0)[cb]{${\cal A}'_t$}} \end{picture}\\*[30pt] \end{equation*} with the same orientation of arrows is the cycle ${\cal A}\oplus{\cal A}'$: $$ \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(135.00,15.00)(13,-8) \put(25.00,0.00){\makebox(0,0)[cc]{$V_1\oplus V_1'$}} \put(65.00,0.00){\makebox(0,0)[cc]{$V_2\oplus V_2'$}} \put(45.00,3.00){\makebox(0,0)[cc]{${\cal A}_1\oplus{\cal A}'_1$}} \put(35.00,0.00){\line(1,0){20.00}} \put(75.00,0.00){\line(1,0){20.00}} \put(85.00,3.00){\makebox(0,0)[cc]{${\cal A}_2\oplus{\cal A}_2'$}} \put(115.00,3.00){\makebox(0,0)[cc]{${\cal A}_{t-1}\oplus{\cal A}_{t-1}'$}} \put(100.00,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(135.00,0.00) {\makebox(0,0)[cc]{$V_t\oplus V_t'$}} \put(105.00,0.00){\line(1,0){20.00}} \bezier{372}(34.33,-3.67)(80.00,-13.00)(125.67,-3.67) \put(80.00,-12.0){\makebox(0,0)[cc] {${\cal A}_t\oplus{\cal A}'_t$}} \end{picture} $$\\[-10pt] A cycle ${\cal A}$ of the form \eqref{1.1} is called {\it regular} if all ${\cal A}_i$ are bijections; otherwise it is called {\it singular}. 
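In matrix terms, a cycle given by matrices $(A_1,\dots,A_t)$ is regular exactly when every $A_i$ is square and nonsingular. A minimal numerical sketch of this test (the function name and the rank tolerance are ours, not part of the text):

```python
import numpy as np

def is_regular(matrices, tol=1e-10):
    """A cycle is regular iff every mapping is a bijection, i.e. every
    matrix A_i is square and of full rank (smallest singular value > tol)."""
    return all(
        A.shape[0] == A.shape[1]
        and (A.size == 0 or np.linalg.svd(A, compute_uv=False).min() > tol)
        for A in matrices
    )
```

Any cycle containing a rectangular or rank-deficient matrix is singular by this criterion.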
By a {\it regularizing decomposition} of ${\cal A}$, we mean a decomposition \begin{equation}\label{00} {\cal A}={\cal D}\oplus\dots \oplus {\cal G}\oplus{\cal P}, \end{equation} where $\cal{D},\dots,\cal{G}$ are direct-sum-indecomposable singular cycles and ${\cal P}$ is a regular cycle. In Section \ref{s1.1} we recall notions of quiver representations; they allow us to formulate our algorithms pictorially. In Section \ref{s1.2} we recall the classification of chains \eqref{aa1.1} and cycles \eqref{1.1} of linear mappings. The classification of cycles of linear mappings was obtained by Nazarova \cite{naz} and, independently, by Donovan and Freislich \cite{don} (see also \cite{gab+roi}, Theorem 11.1). In Section \ref{s4} we construct an algorithm that computes the canonical form of the matrices of a chain of linear mappings using only unitary transformations. In Sections \ref{s1.3} and \ref{s_main} we construct an algorithm that computes a regularizing decomposition \eqref{00} of a cycle of linear mappings using only unitary transformations% \footnote{This improves the numerical stability of the algorithms. Nevertheless, this does not guarantee that the computed structure of the cycle coincides with its original structure.}. The singular summands $\cal{D},\dots,\cal{G}$ will be obtained in canonical form. 
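The canonical blocks $F_n$ and $G_n$ of \eqref{1.4}, together with the Jordan block $J_n(\lambda)$, are the ingredients of all the canonical forms used below. A minimal numpy sketch generating them (upper-triangular Jordan convention assumed; function names are ours):

```python
import numpy as np

def F(n):
    """F_n of eq. (1.4): the (n-1) x n block [I_{n-1} | 0]."""
    return np.eye(n - 1, n)

def G(n):
    """G_n of eq. (1.4): the (n-1) x n block [0 | I_{n-1}]."""
    return np.eye(n - 1, n, k=1)

def J(n, lam):
    """The n x n Jordan block J_n(lambda): lambda on the diagonal,
    1s on the superdiagonal."""
    return lam * np.eye(n) + np.eye(n, k=1)
```

For $n=1$ this correctly yields the empty $0\times 1$ blocks $F_1$ and $G_1$ agreed upon above.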
The canonical form of the (nonsingular) matrices $P_1,\dots,P_t$ of the regular summand \begin{equation*} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(95.00,5.00)(-15,0) \put(-15.00,0.00){\makebox(0,0)[cc]{$\cal P$:}} \put(0.00,0.00){\makebox(0,0)[cc]{$U_1$}} \put(40.00,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(60.00,0.00){\makebox(0,0)[cc]{$U_{t-1}$}} \put(80.00,0.00){\makebox(0,0)[cc]{$U_t$}} \put(20.00,0.00){\makebox(0,0)[cc]{$U_2$}} \put(5.00,0.00){\line(1,0){10.00}} \put(25.00,0.00){\line(1,0){10.00}} \put(45.00,0.00){\line(1,0){10.00}} \put(65.00,0.00){\line(1,0){10.00}} \put(10.00,5.00){\makebox(0,0)[ct]{${\cal P}_1$}} \put(30.00,5.00){\makebox(0,0)[ct]{${\cal P}_2$}} \put(50.00,5.00){\makebox(0,0)[ct]{${\cal P}_{t-2}$}} \put(70.00,5.00){\makebox(0,0)[ct]{${\cal P}_{t-1}$}} \bezier{300}(4.00,-2.00)(40.00,-12.00)(76.00,-2.00) \put(40.00,-12.00){\makebox(0,0)[cb]{${\cal P}_t$}} \end{picture} \end{equation*} \\[13pt] in \eqref{00} is not determined by this algorithm. We may compute it as follows. We first reduce $P_1$ to the identity matrix by changing the basis in the space $U_2$. Then we reduce $P_2$ to the identity matrix by changing the basis in the space $U_3$, and so on until we obtain \begin{equation}\label{yy} P_1=\dots=P_{t-1}=I_n. \end{equation} Finally, changing the bases of all spaces $U_1,\dots,U_t$ by the same transition matrix $S$ (this preserves the matrices \eqref{yy}), we can reduce the remaining matrix $P_t$ to a nonsingular Jordan canonical matrix $\Phi$ by similarity transformations $S^{-1}P_tS$. Clearly, the obtained sequence \[ (I_n,\,\dots,\,I_n,\,\Phi) \] is the canonical form of the matrices of $\cal P$. \section{Terminology of quiver representations} \label{s1.1} The notion of a quiver and its representations was introduced by Gabriel \cite{gab} (see also \cite[Section 7]{gab+roi}) and makes it possible to formulate classification problems for systems of linear mappings. A {\it quiver} is a directed graph; loops and multiple arrows are allowed. 
Its {\it representation} ${\cal A}$ over $\mathbb C$ is given by assigning to each vertex $v$ a complex vector space $V_v$ and to each arrow $\alpha:u\to v$ a linear mapping ${\cal A}_{\alpha}:V_u\to V_v$ of the corresponding vector spaces. For instance, a representation of the quiver \[ \unitlength 0.60mm \linethickness{0.4pt} \begin{picture}(139.67,30.67)(0,5) \put(28.67,6.67){\makebox(0,0)[cc]{1}} \put(70.67,30.67){\makebox(0,0)[cc]{2}} \put(1.67,6.67){\makebox(0,0)[cc]{$\alpha$}} \put(70.67,10.67){\makebox(0,0)[cc]{$\gamma$}} \put(46.67,21.67){\makebox(0,0)[cc]{$\beta$}} \put(95.67,23.67){\makebox(0,0)[cc]{$\varepsilon $}} \put(70.67,-2.33){\makebox(0,0)[cc]{$\delta$}} \put(32.67,6.67){\vector(1,0){76.00}} \put(32.67,2.67){\vector(1,0){76.00}} \put(32.67,10.67){\vector(2,1){34.00}} \put(76.67,27.67){\vector(2,-1){32.00}} \put(112.67,6.67){\makebox(0,0)[cc]{3}} \put(139.67,6.67){\makebox(0,0)[cc]{$\zeta$}} \bezier{112}(116.00,7.33)(134.67,14.33)(136.00,6.67) \bezier{112}(116.00,6.00)(134.67,-1.00)(136.00,6.67) \bezier{112}(24.67,7.33)(6.00,14.33)(4.67,6.67) \bezier{112}(24.67,6.00)(6.00,-1.00)(4.67,6.67) \put(24.67,6.00){\vector(2,1){1.00}} \put(117.5,5.50){\vector(-2,1){2.00}} \end{picture}\\*[6mm] \] is a system of linear mappings \[ \unitlength 0.70mm \linethickness{0.4pt} \begin{picture}(139.67,34.67) \put(27.67,6.67){\makebox(0,0)[cc]{$V_{1}$}} \put(71.67,30.67){\makebox(0,0)[cc]{$V_{2}$}} \put(-2.67,6.67){\makebox(0,0)[cc]{${\cal A}_{\alpha}$}} \put(70.67,10.67){\makebox(0,0)[cc]{${\cal A}_{\gamma}$}} \put(45.0,21.67){\makebox(0,0)[cc]{${\cal A}_{\beta}$}} \put(95.67,23.67){\makebox(0,0)[cc]{${\cal A}_{\varepsilon} $}} \put(70.67,-1.33){\makebox(0,0)[cc]{${\cal A}_{\delta}$}} \put(32.67,6.67){\vector(1,0){76.00}} \put(32.67,3.67){\vector(1,0){76.00}} \put(32.67,10.67){\vector(2,1){34.00}} \put(76.67,27.67){\vector(2,-1){32.00}} \put(113.67,6.67){\makebox(0,0)[cc]{$V_{3}$}} \put(144.67,6.67){\makebox(0,0)[cc]{${\cal A}_{\zeta}$}} 
\bezier{112}(119.00,7.33)(137.67,14.33)(139.00,6.67) \bezier{112}(119.00,6.00)(137.67,-1.00)(139.00,6.67) \bezier{112}(21.67,7.33)(3.00,14.33)(1.67,6.67) \bezier{112}(21.67,6.00)(3.00,-1.00)(1.67,6.67) \put(21.67,6.00){\vector(2,1){1.00}} \put(120.5,5.50){\vector(-2,1){2.00}} \end{picture}\\*[4mm] \] The number \begin{equation*}\label{1.2c} \dim_v{\cal A}:=\dim V_v \end{equation*} is called the {\it dimension of $\cal A$ at the vertex $v$}, and the set of these numbers \[ \dim{\cal A}:=\{\dim V_v\}_v \] is called the {\it dimension of $\cal A$}. Two representations $\cal A$ and ${\cal A}'$ are called {\it isomorphic} if there exists a set $\cal S$ of linear bijections ${\cal S}_v: V_v\to V'_v$ (assigned to all vertices $v$) transforming $\cal A$ to ${\cal A}'$. That is, the diagram \begin{equation}\label{1.2aa} \begin{CD} V_u @>{\cal A}_{\alpha}>> V_v\\ @V{\cal S}_uVV @VV{\cal S}_vV\\ V'_u @>{\cal A}'_{\alpha}>> V'_v \end{CD} \end{equation} must be commutative ($ {\cal A}'_{\alpha}{\cal S}_u={\cal S}_v{\cal A}_{\alpha}$) for every arrow $\alpha: u\longrightarrow v$. In this case we write \begin{equation}\label{1.3} {\cal S}=\{{\cal S}_v\}: {\cal A}\is {\cal A}' \qquad \text{and} \qquad {\cal A}\simeq {\cal A}'. \end{equation} The {\it direct sum} of ${\cal A}$ and ${\cal A}'$ is the representation ${\cal A}\oplus{\cal A}'$ formed by $V_v\oplus V'_v$ and ${\cal A}_{\alpha}\oplus{\cal A}'_{\alpha}$. The following theorem is a well-known corollary of the Krull--Schmidt theorem \cite[Theorem I.3.6]{bas} and holds for representations over an arbitrary field. \begin{theorem} \label{t.0} Every representation of a quiver decomposes into a direct sum of indecomposable representations uniquely, up to isomorphism of summands. \end{theorem} Every representation of a quiver over ${\mathbb C}$ is isomorphic to a representation in which the vector spaces $V_v$ assigned to the vertices all have the form ${\mathbb C}\oplus\dots\oplus {\mathbb C}$. 
Such a representation of dimension $\{d_v\}$ with $d_v\in\{0,1,2,\ldots\}$ is called a {\it matrix representation}% \footnote{A matrix representation also arises when we fix bases in all the spaces of a representation. As follows from \eqref{1.2aa}, two matrix representations are isomorphic if and only if they give the same representation but in possibly different bases.} and is given by a set $\mathbb A$ of matrices ${\mathbb A}_{\alpha}\in {\mathbb C}^{d_v\times d_u}$ assigned to the arrows $\alpha:u\longrightarrow v$. We will consider mainly matrix representations. For every matrix representation $\mathbb{A} =\{A_{\alpha}\}$ of a quiver $\cal Q$, we define the {\it transpose matrix representation} \begin{equation}\label{1.xy} \mathbb{A}^T=\{A_{\alpha}^T\} \end{equation} of the quiver ${\cal Q}^T$ obtained from $\cal Q$ by changing the direction of each arrow. Clearly, \begin{equation}\label{1.yx} {\mathbb S}=\{S_v\}:\,\mathbb{A}\is\mathbb{B} \qquad\text{implies}\qquad {\mathbb S}^T=\{S_v^T\}:\,\mathbb{B}^T\is \mathbb{A}^T. \end{equation} The systems of linear mappings \eqref{aa1.1} and \eqref{1.1} may be considered as representations of the quivers \begin{equation} \label{x1.5} {\cal{L}} :\qquad 1\ \frac{{\alpha}_1}{\qquad}\ 2\ \frac{{\alpha}_2}{\qquad}\ \cdots\ \frac{{\alpha}_{t-2}}{\qquad}\ {(t-1)} \frac{{\alpha}_{t-1}}{\qquad}\ t \end{equation} and\\ \begin{equation}\label{1.2} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(90.00,4.00) (-7,-1) \put(-13.00,0.00){\makebox(0,0)[cc]{${\cal C}:$}} \put(0.00,0.00){\makebox(0,0)[cc]{$1$}} \put(40.00,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(65.00,0.00){\makebox(0,0)[cc]{$(t-1)$}} \put(90.00,0.00){\makebox(0,0)[cc]{$t$}} \put(20.00,0.00){\makebox(0,0)[cc]{$2$}} \put(5.00,0.00){\line(1,0){10.00}} \put(25.00,0.00){\line(1,0){10.00}} \put(10.00,4.00){\makebox(0,0)[ct]{${\alpha}_1$}} \put(30.00,4.00){\makebox(0,0)[ct]{${\alpha}_2$}} \put(50.00,4.00){\makebox(0,0)[ct]{${\alpha}_{t-2}$}} 
\put(80.00,4.00){\makebox(0,0)[ct]{${\alpha}_{t-1}$}} \put(45.00,-11.00){\makebox(0,0)[cb]{${\alpha}_t$}} \put(45.00,0.00){\line(1,0){10.00}} \put(75.00,0.00){\line(1,0){10.00}} \bezier{336}(86.00,-2.33)(45.00,-10.33)(4.00,-2.33) \end{picture}\\*[35pt] \end{equation} with the same orientations of arrows as in \eqref{aa1.1} and \eqref{1.1}. The quiver \eqref{1.2} will be called a {\it cycle}; the symbol $\cal C$ will always denote the cycle \eqref{1.2}. If $\mathbb A$ is a matrix representation of a quiver with an indexed set of arrows $\{\alpha_i\,|\,i\in I\}$, we will write $A_i$ instead of $\mathbb A_{\alpha_i}$. So a matrix representation $\mathbb A$ of the cycle $\cal C$ is given by a sequence of matrices \begin{equation*}\label{1.3a} \mathbb A=(A_1,\dots,A_t). \end{equation*} \section{Classification theorems} \label{s1.2} In this section, we recall the classification of representations of the quivers \eqref{x1.5} and \eqref{1.2}, and mention articles considering special cases. Some of these articles are little known outside of representation theory. We first consider the cycles of length 2. The representations of the cycle $1\rightrightarrows 2$ were classified by Kronecker \cite{kro} in 1890 (see also \cite[Sect. V]{gan} or \cite[Sect. 1.8]{gab+roi}): every pair of $p\times q$ matrices is simultaneously equivalent to a direct sum of pairs of the form \eqref{1.3'}. A simple and short proof of this result was obtained by Nazarova and Roiter \cite{naz+roi}. A classification of representations of the cycle $1\rightleftarrows 2$ was obtained by Dobrovol$'$skaya and Ponomarev \cite{dob+pon} in 1965: every matrix representation is isomorphic to a direct sum, determined uniquely up to permutation of summands, of matrix representations of the form \begin{equation}\label{1.5} (I_n,J_n(\lambda)),\ (J_n(0),I_n),\ (F_n,G_n^T),\ (F_n^T,G_n) \end{equation} (see \eqref{1.4}). 
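A quick numerical sanity check on this classification: under the contragredient equivalence $(A_1,A_2)\mapsto (R^{-1}A_1S,\,S^{-1}A_2R)$, the product $A_1A_2$ is transformed into $R^{-1}(A_1A_2)R$, so its spectrum, which governs the summands $(I_n,J_n(\lambda))$, is an invariant. A minimal sketch verifying this (the random parameter choices are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 3))
# Generic transition matrices, shifted by 3*I to stay safely invertible.
R = rng.standard_normal((3, 3)) + 3 * np.eye(3)
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)

# Contragredient equivalence transformation of the pair (A1, A2).
B1 = np.linalg.inv(R) @ A1 @ S
B2 = np.linalg.inv(S) @ A2 @ R

# The products A1 A2 and B1 B2 are similar, hence share eigenvalues.
ev_before = np.sort_complex(np.linalg.eigvals(A1 @ A2))
ev_after = np.sort_complex(np.linalg.eigvals(B1 @ B2))
print(np.allclose(ev_before, ev_after, atol=1e-8))
```

The same invariance underlies why only the nonsingular part of $A_1A_2$ carries the Jordan data $J_n(\lambda)$.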
Over an arbitrary field, the Jordan block $J_n(\lambda)$ is replaced by a Frobenius block $$ \Phi_n=\begin{bmatrix} 0&1&&\\&\ddots&\ddots&\\&&0&1\\ -\alpha_n& -\alpha_{n-1} &\cdots& -\alpha_1 \end{bmatrix}, $$ where \[ x^n+\alpha_1 x^{n-1}+\dots+ \alpha_{n-1}x+\alpha_n= p(x)^t \] for some irreducible polynomial $p(x)$ and some integer $t$. This result was proved again by Rubi\'{o} and Gelonch \cite{rub+gel} in 1992, Olga Holtz \cite{hol} in 2000, and Horn and Merino \cite{hor+mer} in 1995; the last article also contains many applications of this classification. A classification of systems of linear mappings of the form $$ \begin{array}{ccc} V_1 & \longrightarrow & V_2\\ \downarrow& &\uparrow \\ V_3 & \longleftarrow & V_4 \end{array} $$ was given by Nazarova \cite{naz1} in 1961 over the field with two elements, and by Nazarova \cite{naz2} in 1967 over an arbitrary field. A quiver is said to be of {\it tame type} if the problem of classifying its representations does not contain the problem of classifying pairs of matrices up to simultaneous similarity. If a quiver $\cal Q$ is not of tame type, then a full classification of its representations is impossible since it must contain a classification of representations of all quivers, see \cite[Sect. 3.1]{ser} or \cite[Sect. 2]{bel-ser}. Nevertheless, each particular representation of $\cal Q$ can be reduced to canonical form, see \cite{bel1} or \cite[Sect. 1.4]{ser}. Nazarova \cite{naz} and, independently, Donovan and Freislich \cite{don} in 1973 classified representations of all quivers of tame type (see also \cite[Sect. 11]{gab+roi}). In particular, they classified representations of the cycle \eqref{1.2}, which is of tame type (see this classification also in \cite[Theorem 11.1]{gab+roi}). 
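The Frobenius block $\Phi_n$ defined above is the companion matrix of the polynomial $x^n+\alpha_1x^{n-1}+\dots+\alpha_n$. A minimal numpy sketch building it from the coefficients (the function name is ours):

```python
import numpy as np

def frobenius_block(alpha):
    """Companion (Frobenius) block of x^n + a_1 x^{n-1} + ... + a_n,
    with alpha = [a_1, ..., a_n]: 1s on the superdiagonal and last
    row (-a_n, -a_{n-1}, ..., -a_1), as in the displayed matrix."""
    n = len(alpha)
    Phi = np.eye(n, k=1)
    Phi[-1, :] = -np.asarray(alpha, dtype=float)[::-1]
    return Phi
```

By construction, the characteristic polynomial of `frobenius_block(alpha)` has exactly the coefficients `alpha`, so the construction and the characteristic polynomial are inverse to each other.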
This classification is not mentioned in many articles on linear algebra and system theory that study its special cases (for instance, in the article by Gelonch \cite{gel} containing the classification of representations of the cycle \eqref{1.2} with orientation $1\to 2\to \dots \to t\to 1$). Gabriel \cite{gab} (see also \cite[Sect. 11]{gab+roi}) classified representations of all quivers having a finite number of nonisomorphic indecomposable representations. In particular, he classified representations of the quiver \eqref{x1.5}. \medskip Now we formulate theorems that classify representations of the quivers \eqref{x1.5} and \eqref{1.2}. For every pair of integers $(i,j)$ such that $1\le i\le j\le t$, we define the matrix representation \begin{equation}\label{x1.4} {\mathbb{L}}_{ij} :\qquad 1\ \frac{0}{\qquad}\ \cdots \ \frac{0}{\qquad}\ i\ \frac{I_{1}}{\qquad}\ \cdots\ \frac{I_{1}}{\qquad}\ j\ \frac{0}{\qquad}\ \cdots\ \frac{0}{\qquad}\ t \end{equation} of dimension $(0,\dots,0,1,\dots,1,0,\dots,0)$ of the quiver \eqref{x1.5}. By the next theorem, which holds over an arbitrary field, the representations ${\mathbb{L}}_{ij}$ form a full set of nonisomorphic indecomposable matrix representations of \eqref{x1.5}. \begin{theorem}[see \cite{gab}] \label{xt1.1} For every system of linear mappings \eqref{aa1.1}, there are bases of the spaces $V_1,\dots,V_t$, in which the sequence of matrices of ${\cal{A}}_{1},\dots,{\cal{A}}_{t-1}$ is a direct sum of sequences $(0,\dots,0,I_{1},\dots,I_{1},0,\dots,0)$ of dimension $(0,\dots,0,1,\dots,1,0,\dots,0)$. This sum is determined by the system \eqref{aa1.1} uniquely up to permutation of summands. \end{theorem} The classification of representations of a cycle \eqref{1.2} follows from Theorem \ref{t.0} and the next fact: if a matrix representation of this cycle is direct-sum-indecomposable, then at least $t-2$ of its matrices are nonsingular. 
Clearly, these $t-2$ matrices reduce to the identity matrices and the remaining two matrices reduce to the form \eqref{1.3'} or \eqref{1.5} depending on the orientation of their arrows. This gives the following theorem. \begin{theorem}[see \cite{don} or \cite{naz}] \label{t1.1} For every system of linear mappings \eqref{1.1}, there are bases in the spaces $V_1,\dots,V_t$, in which the sequence of matrices of ${\cal{A}}_{1},\dots,{\cal{A}}_{t}$ is a direct sum, determined by \eqref{1.1} uniquely up to permutation of summands, of sequences of the following form $($the points denote sequences of identity matrices or $0_{00})$: \begin{itemize} \item[\rm{(i)}] $(J_n(\lambda),\ldots)$ with $\lambda\ne 0$; \item[\rm{(ii)}] $(\ldots, J_n(0), \ldots)$ with $J_n(0)$ at the place $i\in \{1,\ldots, t\}$; \item[\rm{(iii)}] $(\ldots, A_i,\ldots, A_j,\ldots)$, where $A_i$ and $A_j$ depend on the direction of the mappings ${\cal A}_i$ and ${\cal A}_j$ in the sequence \begin{equation*}\label{1.6} V_1\ \frac{{\cal A}_1}{\qquad}\ V_2\ \frac{{\cal A}_2}{\qquad}\ \cdots\ \frac{{\cal A}_{t-1}}{\qquad}\ V_t\ \frac{{\cal A}_t}{\qquad}\ V_1 \end{equation*} $($see \eqref{1.1}$)$ as follows: \end{itemize} \[ (A_i,A_j)= \begin{cases} (F_n,G_n) \text{ or } (F_n^T,G_n^T) & \text{if ${\cal A}_i$ and ${\cal A}_j$ have opposite directions}, \\ (F_n,G_n^T)\text{ or } (F_n^T,G_n) & \text{otherwise}. \end{cases} \] \end{theorem} This theorem, with a nonsingular Frobenius block instead of $J_n(\lambda)$ in (i), holds over an arbitrary field.\medskip In the remaining part of this section, we recall Gabriel and Roiter's construction \cite[Sect. 11.1]{gab+roi} of summands (ii) and (iii). For every integer $n$, denote by $[n]$ the natural number such that \begin{equation*}\label{1'.3} 1\le [n]\le t\quad \text{and} \quad [n]\equiv n \bmod t\,. 
\end{equation*} Let \begin{equation}\label{1'.1} l\,\frac{}{\quad\ }\, (l+1)\,\frac{}{\quad\ }\, (l+2)\, \,\frac{}{\quad\ }\,\cdots\,\frac{}{\quad\ }\,\, r, \qquad 1\le l\le t, \end{equation} be a ``clockwise walk'' on the cycle \eqref{1.2} that starts at the vertex $l$, passes through the vertices \[ [l+1],\ [l+2],\: \dots\:,\ [r-1], \] and stops at the vertex $[r]$. This walk determines the representation $\cal A$ of $\cal C$ in which each space ${V}_v$ is spanned by all $i\in\{l,l+1,\dots,r\}$ such that $[i]=v$: $$ {V}_v=\langle i\,|\,l\le i\le r,\ [i]=v \rangle, $$ and all the nonzero actions of linear mappings ${\cal A}_{\alpha_1},\dots,{\cal A}_{\alpha_t}$ on the basis vectors are given by \eqref{1'.1}. The matrices of ${\cal A}_{\alpha_1},\dots,{\cal A}_{\alpha_t}$ in these bases form a matrix representation denoted by \begin{equation}\label{ppp} \mathbb G_{lr}. \end{equation} \begin{example}\label{e1.a} The walk $$ \unitlength 0.50mm \linethickness{0.4pt} \begin{picture}(83.00,36.67)(-9,5) \put(17.00,27.67){\makebox(0,0)[cc]{1}} \put(22.00,29.67){\vector(2,1){10.00}} \put(37.00,36.67){\makebox(0,0)[cc]{2}} \put(57.00,36.67){\vector(-1,0){15.00}} \put(62.00,36.67){\makebox(0,0)[cc]{3}} \put(83.00,22.67){\makebox(0,0)[cc]{4}} \put(79.00,26.00){\vector(-3,2){13.33}} \put(79.00,20.00){\vector(-3,-2){13.33}} \put(17.00,17.34){\makebox(0,0)[cc]{$7$}} \put(22.00,20.34){\vector(2,1){10.00}} \put(37.00,27.33){\makebox(0,0)[cc]{$8$}} \put(22.00,15.33){\vector(2,-1){10.00}} \put(37.00,8.00){\makebox(0,0)[cc]{6}} \put(57.00,8.00){\vector(-1,0){15.00}} \put(62.00,8.00){\makebox(0,0)[cc]{5}} \put(57.00,27.33){\vector(-1,0){15.00}} \put(62.00,27.33){\makebox(0,0)[cc]{$9$}} \end{picture} $$ on the cycle $$ \unitlength 0.5mm \linethickness{0.4pt} \begin{picture}(95.33,35.00)(0,10) \put(30.00,24.67){\makebox(0,0)[cc]{1}} \put(35.00,27.67){\vector(2,1){10.00}} \put(50.00,35.00){\makebox(0,0)[cc]{2}} \put(70.00,35.00){\vector(-1,0){15.00}} 
\put(75.00,35.00){\makebox(0,0)[cc]{3}} \put(95.33,24.67){\makebox(0,0)[cc]{4}} \put(90.33,27.67){\vector(-2,1){10.00}} \put(35.00,22.33){\vector(2,-1){10.00}} \put(50.00,15.00){\makebox(0,0)[cc]{6}} \put(70.00,15.00){\vector(-1,0){15.00}} \put(75.00,15.00){\makebox(0,0)[cc]{5}} \put(90.33,22.33){\vector(-2,-1){10.00}} \put(11.33,25.00){\makebox(0,0)[cc]{$\cal C$:}} \end{picture} $$ determines the representation $$ \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(99.33,39.00)(0,5) \put(26.00,24.67){\makebox(0,0)[cc] {$\langle{}1,7\rangle$}} \put(32.33,27.67){\vector(2,1){10.00}} \put(48.67,35.00){\makebox(0,0)[cc] {$\langle{}2,8\rangle$}} \put(70.00,35.00){\vector(-1,0){15.00}} \put(76.33,35.00){\makebox(0,0)[cc] {$\langle{}3,9\rangle$}} \put(99.33,24.67){\makebox(0,0)[cc] {$\langle{}4\rangle$}} \put(93.00,27.67){\vector(-2,1){10.00}} \put(32.33,22.33){\vector(2,-1){10.00}} \put(48.67,15.00){\makebox(0,0)[cc] {$\langle{}6\rangle$}} \put(70.00,15.00){\vector(-1,0){15.00}} \put(76.33,15.00){\makebox(0,0)[cc] {$\langle{}5\rangle$}} \put(93.00,22.33){\vector(-2,-1){10.00}} \put(7.33,25.00){\makebox(0,0)[cc] {${\mathbb{G}}_{1,9}$:}} \put(33.00,32.00){\makebox(0,0)[cc]{$I_2$}} \put(63.00,38.00){\makebox(0,0)[cc]{$I_2$}} \put(93.00,35.00){\makebox(0,0)[cc]{$ \begin{bmatrix} 1\\0\end{bmatrix} $}} \put(93.00,19.00){\makebox(0,0)[cc]{$I_1$}} \put(63.00,12.00){\makebox(0,0)[cc]{$I_1$}} \put(32.00,17.00){\makebox(0,0)[cc]{$\begin{bmatrix} 0&\!\!\! 1\end{bmatrix}$}} \end{picture}\\*[-3mm] $$ \end{example} \begin{lemma}[see {\cite[Sect. 11.1]{gab+roi}}] \label{l00} The set of all\/ $\mathbb G_{lr}$ coincides with the set of matrix representations of the form {\rm(ii)} and {\rm(iii)}: \begin{itemize} \item[{\rm(a)}] $\mathbb G_{lr}$ with $r\not\equiv l-1 \bmod t$ is the matrix representation {\rm(iii)} of dimension $(d_1,\dots,d_t)$, where $d_i$ is the number of $n\in\{l, l+1, \dots,r\}$ such that $[n]=i$. 
{\rm(}Note that all representations of the form {\rm(iii)} have distinct dimensions and so they are determined by their dimensions.{\rm)} \item[{\rm(b)}] $\mathbb G_{l,l-1+pt}= (I_p,\dots,I_p, J_p(0), I_p,\dots,I_p)$, where $J_p(0)$ is at the $[l-1]$-st place. \end{itemize} \end{lemma} We will use the following notation. If all arrows in a representation \begin{equation}\label{x2.1aac} \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(-20,12) \put(13.00,24.00){\makebox(0,0)[lc] {$\scriptstyle{ A_1}$}} \put(3.00,18.00){\makebox(0,0)[lc] {$\scriptstyle{ A_2}$}} \put(13.00,-1.00){\makebox(0,0)[lc] {$\scriptstyle{ A_n}$}} \put(0.00,30.00){\makebox(0,0)[rc] {$u_1$}} \put(0.00,15.00){\makebox(0,0)[rc] {$u_2$}} \put(0.00,5.00){\makebox(0,0)[rc]{$\cdots$}} \put(0.00,-5.00){\makebox(0,0)[rc] {$u_n$}} \put(25.00,10.00){\makebox(0,0)[lc]{$v$}} \put(3.00,14.00){\line(5,-1){18.00}} \put(3.00,29.00){\line(5,-4){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \end{picture} \end{equation}\\[0.5em] have the same orientation, then instead of \eqref{x2.1aac} we will write \begin{equation}\label{x2.1acc} \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(-20,12) \put(13.00,29.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{A}$}}} \put(0.00,30.00){\makebox(0,0)[rc] {$u_1$}} \put(0.00,15.00){\makebox(0,0)[rc] {$u_2$}} \put(0.00,5.00){\makebox(0,0)[rc]{$\cdots$}} \put(0.00,-5.00){\makebox(0,0)[rc] {$u_n$}} \put(25.00,10.00){\makebox(0,0)[lc]{$v$}} \put(3.00,15.00){\line(5,-1){18.00}} \put(3.00,29.00){\line(5,-4){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \end{picture} \end{equation}\\[0.5em] where \begin{equation}\label{2.1.5aq} A=\begin{cases} [\,A_1\,|\dots|\,A_n\,]& \text{if $u_1\longrightarrow v$, $u_2\longrightarrow v,\dots, u_n\longrightarrow v$}, \\[0.1em] \left[\begin{tabular}{c} $A_1$\\ \hline $\cdots$\\ \hline $A_n$ \end{tabular}\right] & \text{if $u_1\longleftarrow v$, $u_2\longleftarrow v,\dots, u_n\longleftarrow v$}. 
\end{cases} \end{equation} The partition of $A$ into strips is fully determined by the dimensions of \eqref{x2.1aac} at the vertices $u_1,\dots,u_n$. \section{Chains of linear mappings} \label{s4} In this section we give an algorithm that calculates the canonical form of the matrices of a chain of linear mappings \eqref{aa1.1} using only unitary transformations. Choosing bases in the spaces $V_1,\dots,V_t$, we may represent a system of linear mappings \eqref{aa1.1} by the sequence of matrices ${\mathbb{A}}=(A_1,\dots,A_{t-1})$. We will consider this sequence as a matrix representation \begin{equation}\label{x1.2} {\mathbb{A}} :\qquad 1\ \frac{{{A}}_1}{\qquad}\ 2\ \frac{{{A}}_2}{\qquad}\ \cdots\ \frac{{{A}}_{t-1}}{\qquad}\ t\, \end{equation} of the quiver \eqref{x1.5}. For every vertex $i$, a change of basis in $V_i$ changes $\mathbb{A}$. This transformation of $\mathbb{A}$ will be called a {\it transformation at the vertex} $i$. It will be called a {\it unitary transformation} if the transition matrix to a new basis of $V_i$ is unitary. \subsection*{The algorithm for chains:} Let $\mathbb{A}$ be a matrix representation \eqref{x1.2} of dimension \begin{equation*}\label{x2.a} \dim \mathbb{A}= (d_1,\dots,d_t) \end{equation*} of the quiver \eqref{x1.5}. We will sequentially split $\mathbb{A}$ into representations of the form \eqref{x1.4}. \medskip \noindent {\bf Step 1:} By unitary transformations at vertices 1 and 2, we reduce $A_1$ to the form \begin{equation}\label{x2.1'} B_1= \begin{bmatrix} 0&H\\0&0 \end{bmatrix}, \end{equation} where $H$ is a nonsingular $k\times k$ matrix. These transformations change $A_2$; denote the new matrix by $A_2'$. Denote also by ${\cal P}_1$ the set consisting of $d_1-k$ representations of the form ${\mathbb L}_{11}$.
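For the reader who wants to experiment, the reduction of Step 1 can be realized over the complex numbers with a singular value decomposition. The following numpy sketch is our own illustration (the function name and the SVD-based realization are assumptions, not part of the algorithm's statement); it produces unitary $U$, $V$ with $U^*A_1V$ of the form \eqref{x2.1'}, where $H$ comes out diagonal, although the algorithm only requires it to be nonsingular.

```python
import numpy as np

def reduce_step1(A, tol=1e-9):
    """Unitary reduction of A to the block form [[0, H], [0, 0]]
    with H a nonsingular k x k block, k = rank A (Step 1 sketch)."""
    U, s, Vh = np.linalg.svd(A)          # A = U diag(s) Vh
    k = int(np.sum(s > tol))             # numerical rank
    n = A.shape[1]
    # U^* A V0 = diag(s) has its nonzero columns first; the unitary
    # column permutation P moves them to the last k positions.
    P = np.eye(n)[:, list(range(k, n)) + list(range(k))]
    V = Vh.conj().T @ P
    B = U.conj().T @ A @ V
    return B, U, V, k
```

The numerical rank test with a fixed tolerance replaces the exact-arithmetic notion of nonsingularity used in the text.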
Next we will transform the representation $\mathbb{A}$ into a representation ${\mathbb M}_1$ of a ``split'' quiver depending on the direction of $\alpha_1$ in \eqref{x1.5} as follows: \medskip {\it Case $\alpha_1:1\longrightarrow 2$}.\ \ Then \\[0.8em] \begin{equation*}\label{x2.1aa} \unitlength 0.6mm \linethickness{0.4pt} \begin{picture}(15.00,17.00)(5,-1) \put(8.00,13.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_2'}$}}} \put(-90.00,0.00){\makebox(0,0)[lc]{${\mathbb M}_1$:}} \put(15.00,0.00){\makebox(0,0)[lc]{$3 \stackrel{A_3}{\frac{}{\qquad}} {4} \stackrel{A_4}{\frac{}{\qquad}} \cdots \stackrel{A_{t-1}}{\frac{}{\qquad}} t$}} \put(-2.00,-11.00) {\makebox(0,0)[rc]{$ (d_2-k \text{ copies}) \quad\cdots\!\!$}} \put(13.00,-2.00){\line(-1,-1){13.00}} \put(-2.00,-17.00){\makebox(0,0)[rc]{$2$}} \put(2.00,11.50){\makebox(0,0)[rc] {$ (k \text{ copies}) \quad\cdots\cdots\cdots $}} \put(13.00,2.00){\line(-1,1){13.00}} \put(-2.00,19.00){\makebox(0,0)[rc] {$1\stackrel{I_{1}}{\longrightarrow}2$}} \put(0.00,-4.00){\line(4,1){13.00}} \put(-2.00,-5.00){\makebox(0,0)[rc]{$2$}} \put(0.00,4.00){\line(4,-1){13.00}} \put(-2.00,7.00){\makebox(0,0)[rc] {$1\stackrel{I_{1}}{\longrightarrow}2$}} \end{picture}\\*[3em] \end{equation*} (there are $k$ fragments of the form $1{\longrightarrow} 2\,\frac{}{\quad\ }\, 3$ and $d_2-k$ fragments of the form $2\,\frac{}{\quad\ }\, 3$). The direction of the arrows is the same as in the quiver \eqref{x1.5}. \medskip {\it Case $\alpha_1:1\longleftarrow 2$}.
Then\\[0.8em] \begin{equation*}\label{x2.1ab} \unitlength 0.6mm \linethickness{0.4pt} \begin{picture}(15.00,17.00)(5,-2) \put(8.00,13.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_2'}$}}} \put(-90.00,0.00){\makebox(0,0)[lc]{${\mathbb M}_1$:}} \put(15.00,0.00){\makebox(0,0)[lc]{$3 \stackrel{A_3}{\frac{}{\qquad}} {4} \stackrel{A_4}{\frac{}{\qquad}} \cdots \stackrel{A_{t-1}}{\frac{}{\qquad}} t$}} \put(2.00,-12.00){\makebox(0,0)[rc] {$ (k \text{ copies}) \quad\cdots\cdots\cdots$}} \put(13.00,-2.00){\line(-1,-1){13.00}} \put(-2.00,-18.00){\makebox(0,0)[rc] {$1\stackrel{I_{1}}{\longleftarrow}2$}} \put(-2.00,10.00){\makebox(0,0)[rc] {$ (d_2-k \text{ copies}) \quad\cdots\!\!$}} \put(13.00,2.00){\line(-1,1){13.00}} \put(-2.00,17.00){\makebox(0,0)[rc]{$2$}} \put(0.00,-4.00){\line(4,1){13.00}} \put(-2.00,-5.00){\makebox(0,0)[rc] {$1\stackrel{I_{1}}{\longleftarrow}2$}} \put(0.00,4.00){\line(4,-1){13.00}} \put(-2.00,4.00){\makebox(0,0)[rc]{$2$}} \end{picture}\\*[3.5em] \end{equation*} \noindent {\bf Step $\pmb{r\ (1< r< t)}$:}\ \ Assume we have constructed in step $r-1$ the set ${\cal P}_{r-1}$ consisting of representations of the form ${\mathbb L}_{ij}$, $1\le i\le j<r$, and a quiver representation ${\mathbb M}_{r-1}$:\\[0.5em] \begin{equation}\label{x2.1c} \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(-20,7) \put(13.00,29.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_r'}$}}} \put(0.00,30.00){\makebox(0,0)[rc] {$ (k_{1} \text{ copies}) \quad p_1 \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} (p_1+1) \stackrel{I_{1}}{\!\frac{}{\quad\ }\!}\cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} r$}} \put(0.00,15.00){\makebox(0,0)[rc] {$ (k_{2}\text{ copies}) \quad p_2 \stackrel{I_{1}}{\!\frac{}{\quad\ }\!}(p_2+1) \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} r$}} \multiput(-172.00,3.00)(5,0){35}{$\cdot$} \put(0.00,-5.00){\makebox(0,0)[rc] {$ (k_{r}\text{ copies}) \quad p_r \stackrel{I_{1}}{\!\frac{}{\quad\ }\!}
(p_r+1)\stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} r$}} \put(25.00,10.00){\makebox(0,0)[lc]{$(r+1)\, \stackrel{A_{r+1}}{\frac{}{\qquad}} \cdots \stackrel{A_{t-1}}{\frac{}{\qquad}} t$}} \put(3.00,15.00){\line(5,-1){18.00}} \put(3.00,29.00){\line(5,-4){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \end{picture}\\*[2.2em] \end{equation} in which every \[ p_i\stackrel{I_{1}}{\,\frac{}{\quad\ }\,} (p_i+1)\stackrel{I_{1}}{\,\frac{}{\quad\ }\,} \cdots\stackrel{I_{1}}{\,\frac{}{\quad\ }\,} r \] repeats $k_i$ times, $k_1+\dots+k_r=d_r$, all $k_i\ge 0$, and \[\{p_1,p_2,\dots,p_r\}=\{1,2,\dots,r\}.\] The direction of the arrows is the same as in the quiver \eqref{x1.5}. \medskip {\it Case $\alpha_r:r\longrightarrow r+1$} (see \eqref{x1.5}).\ \ We divide $A'_r$ into $r$ vertical strips of sizes $k_1,k_2,\dots,k_r$ and reduce $A_r'$ to the form \begin{equation}\label{x2.6'} B_r= \left[\begin{tabular}{cc|cc|cc|c|cc} 0&$H_{1}$ &$*$&$*$ &$*$&$*$ &$\cdots$ &$*$&$*$\\ 0&0 &0&$H_{2}$ &$*$&$*$ &$\cdots$ &$*$&$*$\\ 0&0 &0&0 &0&$H_{3}$ &$\cdots$ &$*$&$*$\\ $\cdots$&$\cdots$ &$\cdots$&$\cdots$ &$\cdots$&$\cdots$ &$\cdots$ &$\cdots$&$\cdots$\\ 0&0 &0&0 &0&0 &$\cdots$ &0&$H_{r}$\\ 0&0 &0&0 &0&0 &$\cdots$ &0&0\\ \end{tabular}\right] \end{equation} (where each $H_{i}$ is a nonsingular $l_i\times l_i$ matrix and each $*$ is an unspecified matrix) starting from the first vertical strip by unitary column-transformations within vertical strips and by unitary row-transformations. These row-transformations are transformations at the vertex $r+1$ and they change $A_{r+1}$; denote the obtained matrix by $A_{r+1}'$. Denote also by ${\cal P}_{r}$ the set obtained from ${\cal P}_{r-1}$ by including $k_i-l_i$ representations of the form ${\mathbb L}_{p_ir}$ for all $i=1,\dots,r$. 
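The sizes $l_1,\dots,l_r$ of the blocks $H_i$ in \eqref{x2.6'} are unitary invariants of $A_r'$: unitary transformations preserve ranks, so $l_i$ equals the rank added by the $i$-th vertical strip. A minimal numpy sketch of this computation (our own illustration; the rank tolerance replaces exact arithmetic):

```python
import numpy as np

def staircase_block_sizes(strips, tol=1e-9):
    """Sizes l_1,...,l_r of the nonsingular blocks H_i in the
    staircase form: l_i = rank[S_1|...|S_i] - rank[S_1|...|S_{i-1}],
    where S_1,...,S_r are the vertical strips of the matrix."""
    acc = np.zeros((strips[0].shape[0], 0))
    sizes = []
    prev_rank = 0
    for S in strips:
        acc = np.hstack([acc, S])        # columns seen so far
        r = np.linalg.matrix_rank(acc, tol)
        sizes.append(r - prev_rank)      # rank contributed by S
        prev_rank = r
    return sizes
```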
Construct the quiver representation \begin{equation*}\label{x2.1d} \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,20.00)(-50,20) \put(-190.00,10.00){\makebox(0,0)[rc] {${\mathbb M}_{r}$:}} \put(12.00,28.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_{r+1}'}$}}} \put(0.00,25.00){\makebox(0,0)[rc] {$ (l_{1} \text{ copies}) \quad p_1 \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} (r+1)$}} \multiput(-143.00,8.00)(5,0){29}{$\cdot$} \put(0.00,-5.00){\makebox(0,0)[rc] {$ (l_{r} \text{ copies}) \quad p_r \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} (r+1)$}} \put(25.00,10.00){\makebox(0,0)[lc]{$(r+2)\, \stackrel{A_{r+2}}{\frac{}{\qquad}} \cdots \stackrel{A_{t-1}}{\frac{}{\qquad}} t$}} \put(3.00,24.00){\line(5,-3){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \put(3.00,-22.00){\line(3,4){18.00}} \put(0.00,-27.00){\makebox(0,0)[rc] {$ (d_{r+1}-l_1-\dots-l_{r} \text{ copies}) \quad \, (r+1)$}} \end{picture} \end{equation*}\\[4.3em] (Hence, $k_i-l_i$ representations \[{\mathbb L}_{p_ir}:\ p_i \stackrel{I_{1}}{\frac{}{\qquad}} (p_i+1)\stackrel{I_{1}}{\frac{}{\qquad}} \cdots \stackrel{I_{1}}{\frac{}{\qquad}} r \] for each $i=1,\dots,r$ ``break away'' from the representation \eqref{x2.1c} and join the set ${\cal P}_{r-1}$.)
In particular, if $r=t-1$, then ${\mathbb M}_{r}$ takes the form\\[2mm] \begin{equation}\label{x2.1da} \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(-100,0) \put(0.00,25.00){\makebox(0,0)[rc] {$ (l_{1}\text{ copies}) \quad p_1 \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} t$}} \put(-180.00,0.00){\makebox(0,0)[rc]{${\mathbb M}_{t-1}:$}} \multiput(-120.00,8.00)(5,0){24}{$\cdot$} \put(0.00,-5.00){\makebox(0,0)[rc] {$ (l_{t-1}\text{ copies}) \quad p_{t-1} \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} t$}} \put(0.00,-27.00){\makebox(0,0)[rc] {$ (d_t-l_1-\dots-l_{t-1} \text{ copies})\quad \, t$}} \end{picture}\\*[15mm] \end{equation} {\it Case $\alpha_r:r\longleftarrow r+1$}.\ \ We partition $A'_r$ into $r$ horizontal strips of sizes $k_1,k_2,\dots,k_{r}$ and reduce $A_r'$ to the form \begin{equation}\label{x2.7'} B_r= {\left[\begin{tabular}{ccccccc} 0&$H_{1}$&$*$ &$\cdots$ &$*$&$*$&$*$\\ 0&0&$*$ &$\cdots$ &$*$&$*$&$*$\\ \hline\multicolumn{7}{c} {$\!\!\!\cdots\cdots\cdots\cdots\cdots\cdots \cdots\cdots\cdots\cdots\cdots\cdots\!\!$} \\ \multicolumn{7}{c} {$\!\!\!\cdots\cdots\cdots\cdots\cdots\cdots \cdots\cdots\cdots\cdots\cdots\cdots\!\!$}\\ \hline 0&0&0 &$\cdots$ &$H_{{r-2}}$&$*$&$*$\\ 0&0&0 &$\cdots$ &0&$*$&$*$\\ \hline 0&0&0 &$\cdots$ &0&$H_{{r-1}}$&$*$\\ 0&0&0 &$\cdots$ &0&0&$*$\\ \hline 0&0&0 &$\cdots$ &0&0&$H_{r}$\\ 0&0&0 &$\cdots$ &0&0&0\\ \end{tabular}\right]}, \end{equation} (where each $H_{i}$ is a nonsingular $l_i\times l_i$ matrix) starting from the lower strip, by unitary row-transformations within horizontal strips and by unitary column-transformations. These column-transformations are transformations at the vertex $r+1$ and they change $A_{r+1}$; denote the obtained matrix by $A_{r+1}'$.
Denote also by ${\cal P}_{r}$ the set consisting of the elements of ${\cal P}_{r-1}$ and $k_i-l_i$ representations of the form ${\mathbb L}_{p_ir}$ for all $i=1,\dots,r$. Construct the quiver representation\\[2em] \[ \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(-45,7) \put(13.00,38.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_{r+1}'}$}}} \put(0.00,25.00){\makebox(0,0)[rc] {$ (l_{1} \text{ copies}) \quad p_1 \stackrel{I_{1}}{\!\frac{}{\quad\ }\!}\cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} (r+1)$}} \multiput(-143.00,8.00)(5,0){29}{$\cdot$} \put(0.00,-5.00){\makebox(0,0)[rc] {$ (l_{r}\text{ copies}) \quad p_r \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} \cdots \stackrel{I_{1}}{\!\frac{}{\quad\ }\!} (r+1)$}} \put(-190.00,10.00){\makebox(0,0)[cc] {${\mathbb M}_{r}$:}} \put(25.00,10.00){\makebox(0,0)[lc]{$(r+2) \stackrel{A_{r+2}}{\frac{}{\qquad}} \cdots \stackrel{A_{t-1}}{\frac{}{\qquad}} t$}} \put(3.00,24.00){\line(5,-3){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \put(3.00,42.00){\line(3,-4){18.00}} \put(0.00,47.00){\makebox(0,0)[rc] {$ (d_{r+1}-l_1-\dots-l_{r}\text{ copies}) \quad \, (r+1)$}} \end{picture} \]\\ \subsection*{The result:} After step $t-1$, we have obtained the set ${\cal P}_{t-1}$ consisting of representations of the form ${\mathbb L}_{ij}$, $j<t$, and the quiver representation ${\mathbb M}_{t-1}$ (see \eqref{x2.1da}), which may be considered as a set of representations of the form ${\mathbb L}_{it}$. Define the representation \begin{equation}\label{x2.1g} {\mathbb L}({\mathbb A}) =\bigoplus_{{\mathbb L}_{ij}\in{\cal P}_{t-1}\cup {\mathbb M}_{t-1}}{\mathbb L}_{ij}. \end{equation} The following proposition will be proved in Section \ref{xs2a}. \begin{proposition} \label{t2.a} The representation ${\mathbb L}({\mathbb A})$ is the canonical form $($see Theorem \ref{xt1.1}$)$ of a matrix representation $\mathbb{A}$ of the quiver \eqref{x1.5}.
\end{proposition} \section{Cycles of linear mappings} \label{s1.3} In this section, we give an algorithm for constructing a regularizing decomposition \eqref{00} that involves only unitary transformations. In the same way, one may construct a regularizing decomposition over an arbitrary field using elementary transformations. By analogy with Section \ref{s1}, we say that a matrix representation $\mathbb A=(A_1,\dots, A_t)$ of a cycle $\cal C$ (see \eqref{1.2}) is {\it regular} if \[ \dim_1(\mathbb A)=\dots =\dim_t(\mathbb A)\] and all the matrices $A_1,\dots, A_t$ are nonsingular; otherwise the representation is {\it singular}. A decomposition \begin{equation}\label{1.7b} \mathbb A\simeq \mathbb D \oplus \dots\oplus \mathbb G \oplus\mathbb{P} \end{equation} is a {\it regularizing decomposition} of $\mathbb A$ if $\mathbb D, \dots, \mathbb G$ are matrix representations of the form ${\mathbb G}_{ij}$ (see Lemma \ref{l00}) and $\mathbb P$ is a regular representation. By Theorem \ref{t1.1}, the regularizing decomposition \eqref{1.7b} is determined uniquely up to isomorphism of summands. The algorithm works like a jack-plane in a woodworker's hands. 
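Regularity itself is easy to test numerically. The following sketch is our own illustration (the function name and the rank-based nonsingularity test with a fixed tolerance are assumptions, not part of the theory):

```python
import numpy as np

def is_regular(matrices, tol=1e-9):
    """A matrix representation (A_1,...,A_t) of a cycle is regular
    iff all dimensions agree (so every A_i is n x n) and every A_i
    is nonsingular; otherwise it is singular."""
    n = matrices[0].shape[0]
    return all(
        M.shape == (n, n) and np.linalg.matrix_rank(M, tol) == n
        for M in matrices
    )
```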
Starting from the vertex $1$, we cut a {\it shave}: $$ \unitlength 0.40mm \linethickness{0.4pt} \begin{picture}(353.47,164.76)(30,-18) \put(80.00,30.00){\line(1,0){225.00}} \put(305.00,45.00){\line(-1,0){225.00}} \bezier{260}(80.00,30.00)(47.50,32.50)(45.00,65.00) \bezier{260}(80.00,45.00)(47.50,47.50)(45.00,80.00) \bezier{260}(80.00,115.00)(47.50,112.50)(45.00,80.00) \put(45.00,65.00){\line(0,1){15.00}} \bezier{220}(80.00,100.00)(51.00,98.00)(46.33,72.00) \bezier{260}(304.33,30.00)(336.83,32.50)(339.33,65.00) \bezier{260}(304.33,45.00)(336.83,47.50)(339.33,80.00) \put(339.33,65.00){\line(0,1){15.00}} \bezier{220}(304.33,100.00)(333.33,98.00)(338.00,72.00) \put(80.00,100.00){\line(1,0){225.00}} \put(80.00,115.00){\line(1,0){40.00}} \put(120.00,115.00){\line(0,-1){5.00}} \multiput(120.00,110.00)(0,-3.9){3}{\line(0,-1){2}} \put(134,118){\makebox(0,0)[cc]{$\alpha_{1}$}} \put(120.99,90){\makebox(0,0)[cc]{$1$}} \put(120.00,110.00){\line(1,0){15.00}} \put(135.00,110.00){\line(1,0){159.00}} \multiput(146.99,113.23)(3.95,0){36}{\line(1,0){2}} \multiput(146.99,113.23)(0,-3.7){4}{\line(0,-1){2}} \put(146.99,90){\makebox(0,0)[cc]{$2$}} \put(336.67,88.71){\line(3,2){16.50}} \put(336.67,98.71){\line(3,2){16.50}} \put(353.45,99.82){\line(0,1){10.00}} \put(336.67,98.71){\line(0,-1){10.00}} \put(336.67,88.71){\line(-2,1){50.00}} \put(286.67,113.71){\line(0,1){10.00}} \put(286.67,123.71){\line(2,-1){50.00}} \put(316.67,108.71){\line(3,2){17.00}} \put(315.45,138.15){\line(-3,-2){12.00}} \put(315.45,138.15){\line(2,-3){13.78}} \put(286.67,123.71){\line(3,2){9.00}} \put(353.47,109.86){\line(-2,1){28.54}} \put(301.03,126.50){\line(2,-3){9.42}} \bezier{36}(339.26,80.00)(339.07,84.63)(337.59,89.26) \bezier{776}(320.00,111.00)(290.48,152.85)(148.57,150.95) \bezier{776}(320.00,113.81)(290.48,156.19)(148.57,154.28) \put(149.00,150.95){\line(0,1){3.33}} \end{picture}\\*[-17mm] $$ We make a full circle by the jack-plane and continue the process until the shave breaks away. 
Then we transpose all matrices of the remaining representation and repeat this process. The obtained representation $\mathbb{P}$ of $\cal{C}$ is regular, and the shaves split into a direct sum of matrix representations of the form ${\mathbb{G}}_{ij}$. Note that this proves Theorem \ref{t1.1} since $\mathbb P$ is isomorphic to a matrix representation $(I_n,\dots,I_n,J)$, where $J$ is a nonsingular Jordan (or Frobenius) canonical matrix with respect to similarity; see the end of Section \ref{s1}. Hence $\mathbb A$ is isomorphic to a direct sum of representations of the form (i)--(iii) from Theorem \ref{t1.1}. The uniqueness of this decomposition follows from Theorem \ref{t.0}. \subsection*{The algorithm for cycles:} This algorithm for every matrix representation \begin{equation}\label{2.1.1} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(85.33,5.00) (10,0) \put(25.33,0.00){\makebox(0,0)[cc]{$1$}} \put(65.33,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(85.33,0.00){\makebox(0,0)[cc]{$t$}} \put(45.33,0.00){\makebox(0,0)[cc]{$2$}} \put(30.33,0.00){\line(1,0){10.00}} \put(50.33,0.00){\line(1,0){10.00}} \put(70.33,0.00){\line(1,0){10.00}} \put(35.33,5.00){\makebox(0,0)[ct]{${A}_1$}} \put(55.33,5.00){\makebox(0,0)[ct]{${A}_2$}} \put(75.33,5.00){\makebox(0,0)[ct]{${A}_{t-1}$}} \put(0.33,0.00){\makebox(0,0)[cc]{$\mathbb A:$}} \put(55.33,-7.33){\makebox(0,0)[cc]{$A_t$}} \put(29.83,-1.67){\line(-4,1){0.2}} \bezier{208}(81.00,-1.67)(55.33,-7.33)(29.83,-1.67) \end{picture}\\*[20pt] \end{equation} of a cycle $\cal C$ (see \eqref{1.2}) constructs a decomposition \begin{equation}\label{1.xe} \mathbb{A}\simeq \mathbb{P}(\mathbb{A}')\oplus \widetilde{\mathbb{A}}, \end{equation} where $\mathbb{A}'$ is formed by the matrices of a chain of linear mappings, $\mathbb{P}$ sends $\mathbb{A}'$ to a representation of $\cal C$ that is isomorphic to a direct sum of representations of the form ${\mathbb{G}}_{ij}$ (see \eqref{ppp} and compare with Example \ref{e1.a}), and 
$\widetilde{\mathbb{A}}$ is a representation of $\cal C$ that satisfies the following condition for each arrow: \begin{equation}\label{awa} \parbox{22em} {If the arrow is oriented clockwise, then the matrix\\ assigned to it has linearly independent rows.} \end{equation} In steps $1,\, 2,\,\dots$ of the algorithm we will construct quiver representations ${\mathbb A}^{(1)}$, ${\mathbb A}^{(2)}$,\,\dots\,. \medskip \noindent {\bf Steps $\pmb {1,2,\dots,l-1}$:} In step $1$ of the algorithm, we check the condition \eqref{awa} for the representation $\mathbb A$ and the arrow $\alpha_1$. If this condition holds, we put ${\mathbb A}^{(1)}={\mathbb A}$. If this condition holds for $\alpha_2$ too, we put ${\mathbb A}^{(2)}={\mathbb A}$, and so on. If after $t$ steps we find that this condition holds for all arrows of $\cal C$, then we put \begin{equation}\label{dd} l=t+1,\qquad \mathbb{A}'=0, \qquad \widetilde{\mathbb{A}}=\mathbb{A} \end{equation} and stop the algorithm. Otherwise, we set \begin{equation}\label{mmm} l=\min\Bigl\{i\in\{1,\dots,t\}\,\Bigl|\, \alpha_i \text{ in $\mathbb A$ does not satisfy \eqref{awa}}\Bigr\} \end{equation} and continue the algorithm as follows: \medskip \noindent {\bf Step $\pmb{l}$:} By unitary transformations at the vertex $[l+1]$, we reduce the matrix $A_l$ of $\mathbb A$ to a matrix \begin{equation*}\label{2.1.5} {\left[\begin{tabular}{c} $0$ \\ \hline \raisebox{-5pt}{$A_{l}^{(l)}$} \end{tabular}\right]}\!\!\!\!\! \begin{tabular}{l} $\,\} {d_{(l+1)'}^{(l)}\: \text{rows}}$\\[1mm] $\,\ {d_{[l+1]}^{(l)}\: \text{rows}}$\\[2mm] \end{tabular},\quad d_{(l+1)'}^{(l)}>0, \end{equation*} where the rows of $A_{l}^{(l)}$ are linearly independent.
This changes $A_{l+1}$; we denote the obtained matrix by $A^{(l)}$ and construct the representation\\[-10mm] $$ \unitlength 1.25mm \linethickness{0.4pt} \begin{picture}(90.33,28.00)(0,-8) \put(0.00,10.00){\makebox(0,0)[cc]{${\mathbb A}^{(l)}$:}} \bezier{348}(88.67,-0.67)(45.00,-10.00)(1.00,-0.67) \put(41.50,10.00){\makebox(0,0)[cc]{$(l+1)'$}} \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(11.60,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(27.26,0.00){\vector(1,0){9.00}} \put(32.66,2.50){\makebox(0,0)[cc] {$\scriptstyle{A_l^{(l)}}$}} \put(41.50,0.00){\makebox(0,0)[cc]{$[l+1]$}} \put(46.33,0.00){\line(1,0){11.00}} \put(62.33,0.00){\makebox(0,0)[cc]{$[l+2]$}} \put(26.,0.00){\makebox(0,0)[cc]{$l$}} \put(67.33,0.00){\line(1,0){10.00}} \put(72.33,2.00){\makebox(0,0)[cc] {$\scriptstyle{A_{[l+2]}}$}} \put(80.1,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(82.33,0.00){\line(1,0){6.00}} \put(90.33,0.00){\makebox(0,0)[lc]{$t$}} \put(46.66,9.33){\line(4,-3){10.50}} \put(55.33,8.33){\makebox(0,0)[cc] {\fbox{$\scriptstyle{A^{(l)}}$}}} \put(14.33,0.00){\line(1,0){10.00}} \put(19.66,2.00){\makebox(0,0)[cc] {$\scriptstyle{A_{l-1}}$}} \end{picture} $$ (the other matrices are the same as in \eqref{2.1.1}). Its dimensions at the vertices $(l+1)'$ and $[l+1]$ are $d_{(l+1)'}^{(l)}$ and $d_{[l+1]}^{(l)}$, and the arrow $(l+1)'\,\frac{}{\quad\ }\, [l+2]$ has the orientation of $[l+1]\,\frac{}{\quad\ }\, [l+2]$. The matrix $A^{(l)}$ is partitioned into the strips $ A^{(l)}_{(l+1)'}$ and $A^{(l)}_{[l+1]}$, which are assigned to the arrows $(l+1)'\,\frac{}{\quad\ }\, [l+2]$ and $ [l+1]\,\frac{}{\quad\ }\, [l+2]$, see \eqref{x2.1acc}. 
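Step $l$ only needs a unitary change of basis at the vertex $[l+1]$ that pushes the zero rows of $A_l$ to the top. One way to obtain it is sketched below (an SVD-based realization of our own; it is not the only possibility, and the function name is an assumption):

```python
import numpy as np

def zero_rows_on_top(A, tol=1e-9):
    """Return (U, B) with U unitary and B = U @ A having zero rows
    on top and linearly independent rows below, as in step l."""
    Uf, s, Vh = np.linalg.svd(A, full_matrices=True)
    k = int(np.sum(s > tol))             # row rank of A
    m = A.shape[0]
    # Uf^* A = diag(s) Vh has its k independent rows first; a row
    # permutation (also unitary) moves the m-k zero rows to the top.
    P = np.eye(m)[list(range(k, m)) + list(range(k)), :]
    U = P @ Uf.conj().T
    return U, U @ A
```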
\medskip \noindent{\textbf{Step $\pmb{r\ (r> l)}$:}} Assume we have constructed in step $r-1$ a representation \\[1em] \begin{equation*}\label{m} \unitlength 1mm \linethickness{0.4pt} \begin{picture}(90.33,22.33)(0,22) \put(-10.00,45.00){\makebox(0,0)[rc] {${\mathbb A}^{(r-1)}$:}} \put(0.00,25.00){\makebox(0,0)[rc] {$(t+1)'$}} \put(1.00,25.00){\line(1,0){5.00}} \put(13.50,25.00){\makebox(0,0)[cc] {$(t+2)'$}} \put(20.33,25.00){\line(1,0){6.0}} \multiput(28.,24.0)(2,0){26}{$\cdot$} \put(80.33,25.00){\line(1,0){8.00}} \put(90.33,25.00){\makebox(0,0)[lc] {$(2t)'$}} \bezier{348}(88.67,-0.67)(45.00,-10.00)(1.00,-0.67)\put(0.00,10.00){\makebox(0,0)[rc]{$(kt+1)'$}} \put(1.00,10.00){\line(1,0){7.00}} \put(12.00,9.70){\makebox(0,0)[cc] {$\cdots$}} \put(15.33,10.00){\line(1,0){8.00}} \put(19.33,13.3){\makebox(0,0)[cc] {$\scriptstyle{A_{(r-1)'}^{(r-1)}}$}} \put(25.67,10.00){\makebox(0,0)[cc]{$r'$}} \multiput(1.33,11.67)(2,0.25){6}{$\cdot$} \multiput(25,14.3)(2,0.25){32}{$\cdot$} \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(12.00,-0.30){\makebox(0,0)[cc] {$\cdots$}} \put(15.33,0.00){\line(1,0){7.0}} \put(18.7,3.30){\makebox(0,0)[cc] {$\scriptstyle{A_{[r-1]}^{(r-1)}}$}} \put(25.67,0.00){\makebox(0,0)[cc]{$[r]$}} \put(28.00,0.00){\line(1,0){8.50}} \put(43.00,0.00){\makebox(0,0)[cc]{$[r+1]$}} \put(49.00,0.00){\line(1,0){7.20}} \put(53.00,3.30){\makebox(0,0)[cc] {$\scriptstyle{A^{(r-1)}_{[r+1]}}$}} \put(63.00,0.00){\makebox(0,0)[cc]{$[r+2]$}} \put(69.00,0.00){\line(1,0){7.00}} \put(73.00,3.30){\makebox(0,0)[cc] {$\scriptstyle{A_{[r+2]}^{(r-1)}}$}} \put(80.7,-0.20){\makebox(0,0)[cc]{$\cdots$}} \put(84,0.00){\line(1,0){5.00}} \put(90.33,0.00){\makebox(0,0)[lc]{$t$}} \put(27.33,9.33){\line(4,-3){9.00}} \put(92.2,43.7){\makebox(0,0)[rc] {$(l+1)' \stackrel{A^{(r-1)}_{(l+1)'}}{\frac{}{\qquad}} (l+2)' \stackrel{A^{(r-1)}_{(l+2)'}}{\frac{}{\qquad}} \cdots\ \frac{}{\qquad}\ t'$}} \put(1.00,26.33){\line(6,1){87.00}} \put(38.00,8.83){\makebox(0,0)[cc] 
{\fbox{$\scriptstyle{A^{(r-1)}}$}}} \put(45.16,-8.5){\makebox(0,0)[cc] {$\scriptstyle{A_{t}^{(r-1)}}$}} \end{picture}\\*[8.6em] \end{equation*} where each arrow $\alpha_{i'}:i'\,\frac{}{\quad\ }\, (i+1)'$ has the orientation of $\alpha_{[i]}:[i]\,\frac{}{\quad\ }\, [i+1]$ in $\cal C$, and $\alpha_{r'}:r'\,\frac{}{\quad\ }\, [r+1]$ has the orientation of $\alpha_{[r]}:[r]\,\frac{}{\quad\ }\, [r+1]$. We will reduce ${\mathbb A}^{(r-1)}$ by unitary transformations at the vertex $[r+1]$: \begin{itemize} \item[(i)] If $\alpha_{[r]}$ is oriented clockwise, then $A^{(r-1)}$ consists of two vertical strips with $\dim_{r'}{\mathbb A}^{(r-1)}$ and $\dim_{[r]}{\mathbb A}^{(r-1)}$ columns (see \eqref{x2.1aac}--\eqref{2.1.5aq}); we reduce it by unitary row-transformations as follows: \begin{equation} \label{2.3''a} A^{(r-1)}=[A_{r'}^{(r-1)}|A_{[r]}^{(r-1)}] \mapsto \left[\begin{tabular}{c|c} $A_{r'}^{(r)}$ & 0 \\ $*$ & $A_{[r]}^{(r)}$ \end{tabular} \right], \end{equation} where $A_{[r]}^{(r)}$ has linearly independent rows. \item[(ii)] If $\alpha_{[r]}$ is oriented counterclockwise, then $A^{(r-1)}$ consists of two horizontal strips with $\dim_{r'}{\mathbb A}^{(r-1)}$ and $\dim_{[r]}{\mathbb A}^{(r-1)}$ rows; we reduce it by unitary column-transformations as follows: \begin{equation} \label{2.3'''} A^{(r-1)}=\left[\begin{tabular}{c} $A_{r'}^{(r-1)}$ \\ \hline \raisebox{-1.5mm}{$A_{[r]}^{(r-1)}$} \end{tabular} \right]\mapsto \left[\begin{tabular}{cc} $A_{r'}^{(r)}$ & 0 \\ \hline $*$ & \raisebox{-1.1mm}{$A_{[r]}^{(r)}$} \end{tabular}\right], \end{equation} where $A_{r'}^{(r)}$ has linearly independent columns. 
\end{itemize} These unitary transformations at the vertex $[r+1]$ change the matrix $A_{[r+1]}^{(r-1)}$ too; we denote the obtained matrix by $A^{(r)}$ and construct the representation \\[1.5em] \begin{equation}\label{1ww} \unitlength 1.1mm \linethickness{0.4pt} \begin{picture}(90.33,24.33)(1.5,20) \put(-3.50,45.00){\makebox(0,0)[rc] {${\mathbb A}^{(r)}$:}} \put(0.00,25.00){\makebox(0,0)[rc]{$(t+1)'$}} \put(1.00,25.00){\line(1,0){6.00}} \put(13.50,25.00){\makebox(0,0)[cc]{$(t+2)'$}} \put(19.33,25.00){\line(1,0){7.00}} \multiput(28.5,23.9)(1.96,0){26}{$\cdot$} \put(80.33,25.00){\line(1,0){8.00}} \put(90.33,25.00){\makebox(0,0)[lc]{$(2t)'$}} \bezier{348}(88.67,-0.67)(45.00,-10.00)(1.00,-0.67) \put(0.00,10.00){\makebox(0,0)[rc]{$(kt+1)'$}} \put(1.00,10.00){\line(1,0){7.00}} \put(12.00,9.7){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,10.00){\line(1,0){8.00}} \put(19.33,12.70){\makebox(0,0)[cc] {$\scriptstyle{A_{(r-1)'}^{(r-1)}}$}} \put(25.67,10.00){\makebox(0,0)[cc]{$r'$}} \multiput(1.33,13)(2.01,0.23){44}{$\cdot$} \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(12.04,-0.250){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,0.00){\line(1,0){8.00}} \put(19.33,3){\makebox(0,0)[cc] {$\scriptstyle{A_{[r-1]}^{(r-1)}}$}} \put(33.33,3.0){\makebox(0,0)[cc] {$\scriptstyle{A_{[r]}^{(r)}}$}} \put(33.33,13){\makebox(0,0)[cc] {$\scriptstyle{A_{r'}^{(r)}}$}} \put(25.67,0.00){\makebox(0,0)[cc]{$[r]$}} \put(28.00,0.00){\line(1,0){9.00}} \put(43.00,0.00){\makebox(0,0)[cc]{$[r+1]$}} \put(48.00,0.00){\line(1,0){9.00}} \put(63.00,0.00){\makebox(0,0)[cc]{$[r+2]$}} \put(68.00,0.00){\line(1,0){9.00}} \put(73.00,3.0){\makebox(0,0)[cc] {$\scriptstyle{A_{[r+2]}^{(r-1)}}$}} \put(80.7,-0.25){\makebox(0,0)[cc]{$\cdots$}} \put(84,0.00){\line(1,0){4.30}} \put(90.33,0.00){\makebox(0,0)[lc]{$t$}} \put(1.00,26.33){\line(6,1){87.00}} \put(58.5,8.33){\makebox(0,0)[cc] {\fbox{$\scriptstyle{A^{(r)}}$}}} \put(28.00,10.00){\line(1,0){9.00}} \put(43.00,10.00){\makebox(0,0)[cc]{$(r+1)'$}} 
\put(48.00,8.33){\line(3,-2){9}} \put(92,43.8){\makebox(0,0)[rc] {$(l+1)' \stackrel{A^{(r-1)}_{(l+1)'}}{\frac{}{\qquad}}(l+2)' \stackrel{A^{(r-1)}_{(l+2)'}}{\frac{}{\qquad}} \cdots\ \frac{}{\qquad}\ t'$}} \end{picture}\\*[33mm] \end{equation} where ${A}^{(r)}$ is partitioned into two strips: \begin{equation}\label{vvv} A^{(r)}=\begin{cases} \left[\begin{tabular}{c|c} $ A^{(r)}_{(r+1)'}$& $A^{(r)}_{[r+1]}$ \end{tabular}\right]& \text{if $\alpha_{[r+1]}$ is oriented clockwise}, \\[1em] \left[\begin{tabular}{c} $ A^{(r)}_{(r+1)'}$\\ \hline $A^{(r)}_{[r+1]}$ \end{tabular}\right] & \text{if $\alpha_{[r+1]}$ is oriented counterclockwise}, \end{cases} \end{equation} and these strips are assigned to the arrows \[ (r+1)'\,\frac{}{\quad\ }\, [r+2],\qquad [r+1]\,\frac{}{\quad\ }\, [r+2].\] \subsection*{The result:} We make at least $t$ steps and stop at the first representation $\mathbb A^{(n)}$ with \begin{equation}\label{wwv} n\ge t\qquad\text{and}\qquad {A}^{(n)}_{(n+1)'}=0. \end{equation} The matrix ${A}^{(n)}_{(n+1)'}$ is assigned to the arrow $(n+1)'\,\frac{}{\quad\ }\, [n+2]$. 
Deleting this arrow, we break $\mathbb A^{(n)}$ into two representations:\\[-3mm] \begin{equation}\label{1k} \unitlength 0.9mm \linethickness{0.4pt} \begin{picture}(91.33,28.00)(-7,25) \put(-30.00,40.00){\makebox(0,0)[cc] {${\mathbb A}'$:}} \put(0.00,25.00){\makebox(0,0)[rc]{$(t+1)'$}} \put(1.00,25.00){\line(1,0){7.00}} \put(16.50,25.00){\makebox(0,0)[cc]{$(t+2)'$}} \put(23.33,25.00){\line(1,0){8.00}} \multiput(34.5,23.6)(2,0){22}{$\cdot$} \put(80.33,25.00){\line(1,0){8.00}} \put(90.33,25.00){\makebox(0,0)[lc]{$(2t)'$}} \put(-17.50,12.00){\makebox(0,0)[lc] {$(kt+1)'\ \frac{}{\qquad\quad} \ \cdots\ \stackrel{ A^{(n)}_{n'}}{\frac{}{\qquad}\!\!\!\,\frac{}{\quad\ }\,} (n+1)'$}} \multiput(1.33,13)(2,0.22){44}{$\cdot$} \put(91.33,44.50){\makebox(0,0)[rc] {$(l+1)'\stackrel{A^{(n)}_{(l+1)'}}{\frac{}{\qquad}} (l+2)'\stackrel{A^{(n)}_{(l+2)'}}{\frac{}{\qquad}} \cdots\, \frac{}{\qquad}\ t'$}} \put(1.00,26.33){\line(6,1){85.70}} \end{picture} \\*[5em] \end{equation} and\\ \begin{equation}\label{1h} \unitlength 1.15mm \linethickness{0.4pt} \begin{picture}(90.33,6.00)(0,-1) \put(-9.00,1.00){\makebox(0,0)[cc] {$\widetilde{\mathbb{A}}$:}} \bezier{348}(89.67,-0.67)(45.00,-10.00)(1.00,-0.67) \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(12.00,-0.15){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,0.00){\line(1,0){8.00}} \put(33.33,3){\makebox(0,0)[cc] {$\scriptstyle{A_{[n]}^{(n)}}$}} \put(25.67,0.00){\makebox(0,0)[cc]{$[n]$}} \put(28.00,0.00){\line(1,0){9.50}} \put(43.00,0.00){\makebox(0,0)[cc]{$[n+1]$}} \put(48.00,0.00){\line(1,0){9.50}} \put(53.00,3){\makebox(0,0)[cc] {$\scriptstyle{A^{(n)}_{[n+1]}}$}} \put(63.00,0.00){\makebox(0,0)[cc]{$[n+2]$}} \put(71.00,0.00){\line(-1,0){3.00}} \put(68.00,0.00){\line(1,0){10.00}} \put(73.00,3){\makebox(0,0)[cc] {$\scriptstyle{A_{[n+2]}^{(n)}}$}} \put(81.9,-0.15){\makebox(0,0)[cc]{$\cdots$}} \put(84.83,0.00){\line(1,0){5.00}} \put(91.33,0.00){\makebox(0,0)[lc]{$t$}} \end{picture} \\*[2em] \end{equation} The 
representation ${\mathbb A}'$ is a representation of the quiver \begin{equation}\label{2.3v} (l+1)'\:\frac {\alpha_{(l+1)'}} {\qquad\qquad}\:(l+2)'\: \frac{\alpha_{(l+2)'}}{\qquad\qquad}\: \cdots\: \frac{\alpha_{n'}}{\quad\qquad}\: (n+1)', \end{equation} whose arrows $i'\,\frac{}{\quad\ }\, (i+1)'$ have the orientation of the arrows $\alpha_{[i]}:[i]\,\frac{}{\quad\ }\, [i+1]$ in $\cal C$. By analogy with Example \ref{e1.a}, we construct the mapping $\mathbb P$ that sends a representation $\mathbb B$ of the quiver \eqref{2.3v} to a representation $\mathbb D$ of the cycle $\cal C$:\\[4mm] \begin{equation}\label{2.3w} \unitlength 0.8mm \linethickness{0.4pt} \begin{picture}(91.33,30.00)(-8,13) \put(0.00,25.00){\makebox(0,0)[rc]{ $\scriptstyle{(t+1)'}$}} \put(1.00,25.00){\line(1,0){11.00}} \put(20,25.00){\makebox(0,0)[cc] {$\scriptstyle{(t+2)'}$}} \put(27,25.00){\line(1,0){10.00}} \multiput(39,23.6)(2.4,0){16}{$\cdot$} \put(78.33,25.00){\line(1,0){10.00}} \put(90.33,25.00){\makebox(0,0)[lc] {$\scriptstyle{(2t)'}$}} \bezier{348}(88.67,-12.00)(45.00,-21.33)(1.00,-12.00) \put(1.00,26.33){\line(6,1){88.00}} % \put(0.00,-11.33){\makebox(0,0)[rc]{ $\scriptstyle{1}$}} \put(1.00,-11.33){\line(1,0){15.00}} \put(20,-11.33){\makebox(0,0)[cc] {$\scriptstyle{2}$}} \put(23,-11.33){\line(1,0){14.00}} \multiput(39,-12.73)(2.4,0){16}{$\cdot$} \put(78.33,-11.33){\line(1,0){10.00}} \put(90.33,-11.33){\makebox(0,0)[lc] {$\scriptstyle{t}$}} \bezier{348}(88.67,-12.00)(45.00,-21.33)(1.00,-12.00) \put(1.00,26.33){\line(6,1){88.00}} % \put(0.00,10.00){\makebox(0,0)[rc] {$\scriptstyle{(kt+1)'}$}} \put(1.00,10.00){\line(1,0){11.00}} \put(20.00,10.00){\makebox(0,0)[cc] {$\scriptstyle{(kt+2)'}$}} \put(27,10.00){\line(1,0){10.00}} \multiput(1.33,11)(2.46,0.32){36}{$\cdot$} \put(47.50,10.00){\line(1,0){10.00}} \put(43.00,9.50){\makebox(0,0)[cc] {$\cdots$}} \put(64.00,10.00){\makebox(0,0)[cc] {$\scriptstyle{(n+1)'}$}} % \put(93.53,43.5){\makebox(0,0)[rc] {${\scriptstyle (l+1)'}\!\! 
\stackrel{B_{(l+1)'}}{\frac{}{\qquad}} {\scriptstyle (l+2)'} \stackrel{B_{(l+2)'}}{\frac{}{\qquad}}\cdots \stackrel{B_{(t-1)'}}{\frac{}{\qquad}} {\scriptstyle t'}$}} \put(-30,24.67){\makebox(0,0)[cc] {${\mathbb B}:$}} \put(-30,-11.33){\makebox(0,0)[cc] {${\mathbb D}:$}} \put(-35,8){\makebox(0,0)[cc] {${\mathbb P}$}} \put(-31.5,21.67){\vector(0,-1){29.00}} \put(-33,21.67){\line(1,0){3.00}} \put(45.00,36.00){\makebox(0,0)[cc] {$\scriptstyle{B_{t'}}$}} \put(45.00,-20.00){\makebox(0,0)[cc] {$\scriptstyle{D_{t}}$}} \put(33,28){\makebox(0,0)[cc] {$\scriptstyle{B_{(t+2)'}}$}} \put(33,12){\makebox(0,0)[cc] {$\scriptstyle{B_{(kt+2)'}}$}} \put(53,12){\makebox(0,0)[cc] {$\scriptstyle{B_{n'}}$}} \put(33,-9){\makebox(0,0)[cc] {$\scriptstyle{D_{2}}$}} \put(83,-9){\makebox(0,0)[cc] {$\scriptstyle{D_{t-1}}$}} \put(83,28){\makebox(0,0)[cc] {$\scriptstyle{B_{(2t-1)'}}$}} \put(14.00,22.00){\makebox(0,0)[rc] {$\scriptstyle{B_{(t+1)'}}$}} \put(14.00,7.00){\makebox(0,0)[rc] {$\scriptstyle{B_{(kt+1)'}}$}} \put(12.00,-9.00){\makebox(0,0)[rc] {$\scriptstyle{D_{1}}$}} \end{picture}\\*[7em] \end{equation} This mapping is known in representation theory as a {\it push-down functor} (see \cite[Sect. 14.3]{gab+roi}) and is determined as follows: \begin{equation}\label{eee} D_i=\bigoplus_{\substack{[j]=i\\ l\le j\le n+1}} B_{j'},\qquad i=1,2,\dots, t, \end{equation} (i.e., $D_i$ is the direct sum of all $B_{j'}$ disposed over it), where \begin{equation}\label{jjj} B_{l'}=0_{p0} \quad\text{with}\quad p=\dim_{(l+1)'}{\mathbb B} \end{equation} (recall that the arrow $\alpha_{l}$ is oriented clockwise, see step $l$ of the algorithm), and \[ B_{(n+1)'}= \begin{cases} 0_{0q} & \text{if $\alpha_{[n+1]}:[n+1] \longrightarrow [n+2]$}, \\ 0_{q0} & \text{if $\alpha_{[n+1]}:[n+1] \longleftarrow [n+2]$}, \end{cases} \quad\text{with $q=\dim_{(n+1)'}{\mathbb B}$}. 
\] (The definition of ${\mathbb P}: {\mathbb B}\mapsto {\mathbb D}$ becomes clearer if the representations ${\mathbb B}$ and ${\mathbb D}$ are given by vector spaces and linear mappings: each vector space of ${\mathbb D}$ is the direct sum of the vector spaces of ${\mathbb B}$ disposed over it, and each linear mapping of ${\mathbb D}$ is determined by the linear mappings of ${\mathbb B}$ disposed over it.) The following proposition will be proved in Section \ref{subs}. \begin{proposition}\label{l2} Let the algorithm for cycles transform a matrix representation $\mathbb A$ of a cycle $\cal C$ to $\mathbb{A}'$ and $\widetilde{\mathbb{A}}$. Then \begin{itemize} \item[{\rm (a)}] The condition \eqref{awa} holds for $\widetilde{\mathbb{A}}$ and all arrows. \item[{\rm (b)}] If an arrow $\alpha_i$ is oriented counterclockwise and the columns of ${A_i}$ are linearly independent, then the columns of $\widetilde{A}_i$ are linearly independent too. \item[{\rm (c)}] $\mathbb{A}\simeq \mathbb{P}(\mathbb{A}')\oplus \widetilde{\mathbb{A}}$. \end{itemize} \end{proposition} \section{Main theorem}\label{s_main} \begin{theorem}\label{th} A regularizing decomposition \eqref{1.7b} of a matrix representation $\mathbb A$ of a cycle $\cal C$ can be constructed in three steps using only unitary transformations: \begin{itemize} \item[$1.$] Applying the algorithm for cycles to ${\mathbb A}$, we get $\mathbb{A}\simeq \mathbb{P}(\mathbb{A}')\oplus \widetilde{\mathbb{A}}$. \item[$2.$] Applying the algorithm for cycles to the matrix representation ${\mathbb B}:=\widetilde{\mathbb{A}}^T$ of the cycle ${\cal C}^{\,T}$ $($see \eqref{1.xy}$)$, we get $\widetilde{\mathbb{A}}^T\simeq \mathbb{P}(\mathbb{B}')\oplus \widetilde{\mathbb{B}}$.
\item[$3.$] Applying the algorithm for chains to $\mathbb{A}'$ and $\mathbb{B}^{\prime T}$, we get \begin{equation}\label{pp} \mathbb{A}'\simeq {\mathbb L}_{i_1j_1}\oplus \dots\oplus {\mathbb L}_{i_pj_p},\quad\mathbb{B}^{\prime T}\simeq {\mathbb L}_{i_{p+1}j_{p+1}}\oplus \dots\oplus {\mathbb L}_{i_qj_q}. \end{equation} \end{itemize} Then \begin{equation}\label{qq} \mathbb{A}\simeq {\mathbb G}_{i_1j_1}\oplus \dots\oplus {\mathbb G}_{i_qj_q}\oplus\widetilde{\mathbb{B}}^T \end{equation} $($see \eqref{ppp}$)$ and the representation $\widetilde{\mathbb{B}}^T$ is regular. \end{theorem} \begin{proof} By \eqref{1.yx} and Proposition \ref{l2}(c), \[ \mathbb{A}\simeq \mathbb{P}(\mathbb{A}') \oplus (\mathbb{P}(\mathbb{B}'))^T\oplus \widetilde{\mathbb{B}}^T= \mathbb{P}(\mathbb{A}') \oplus \mathbb{P}(\mathbb{B}'^T)\oplus \widetilde{\mathbb{B}}^T. \] Substituting \eqref{pp}, we obtain \[ \mathbb{A}\simeq \mathbb{P}({\mathbb L}_{i_1j_1})\oplus \dots\oplus \mathbb{P}({\mathbb L}_{i_qj_q}) \oplus\widetilde{\mathbb{B}}^T. \] This proves \eqref{qq} since $\mathbb{P}({\mathbb L}_{ij})= {\mathbb G}_{ij}$. Let us prove that $\widetilde{\mathbb{B}}^T$ is regular. By Proposition \ref{l2}(a), every matrix of $\widetilde{\mathbb{A}}$ assigned to an arrow oriented clockwise has linearly independent rows. The matrix representation ${\mathbb{B}}= \widetilde{\mathbb{A}}^T$ is constructed by transposing all matrices, and it is a representation of the cycle ${\cal C}^T$ obtained from ${\cal C}$ by changing the direction of each arrow. Hence every matrix of ${\mathbb{B}}$ assigned to an arrow oriented counterclockwise has linearly independent columns; by Proposition \ref{l2}(b) the same holds for the matrices of $\widetilde{\mathbb{B}}$. Moreover, by Proposition \ref{l2}(a) every matrix of $\widetilde{\mathbb{B}}$ assigned to an arrow oriented clockwise has linearly independent rows. 
Hence, \[ \dim_{[i+1]}\widetilde{\mathbb{B}} =\mathop{\rm rank}\nolimits \widetilde{{B}}_i \le \dim_i\widetilde{\mathbb{B}} \] for all vertices $i=1,\dots,t$. We have \[ \dim_1\widetilde{\mathbb{B}}\ge \dim_2\widetilde{\mathbb{B}}\ge\cdots\ge \dim_t\widetilde{\mathbb{B}}\ge \dim_1\widetilde{\mathbb{B}}. \] Therefore, each matrix $\widetilde{{B}}_i$ is square and its rows or columns are linearly independent. So $\widetilde{{B}}_i$ is nonsingular and the representation $\widetilde{\mathbb{B}}$ is regular. Then $\widetilde{\mathbb{B}}^T$ is regular too. \end{proof} \section{Proof of Proposition \ref{t2.a}} \label{xs2a} In each step $r\in\{1,\dots,t-1\}$ of the algorithm for chains (Section \ref{s4}) we constructed the matrix $B_r$ of the form \eqref{x2.6'} or \eqref{x2.7'}. Denote by $D_r$ the matrix obtained from $B_r$ by replacement of all blocks $H_{i}$ by $I_{l_i}$ and all blocks $*$ by $0$. Let us prove that the representation \begin{equation*}\label{x3.z} {\mathbb D}_{r} :\qquad 1\ \frac {D_1}{\qquad}\ \cdots\ \frac{D_{r}}{\qquad}\ (r+1)\ \frac{A_{r+1}'}{\qquad}\ (r+2)\ \frac{A_{r+2}}{\qquad}\ \cdots\ \frac{{{A}}_{t-1}}{\qquad}\ t \end{equation*} of the quiver \eqref{x1.5} is isomorphic to the initial representation $\mathbb A$: \begin{equation}\label{x3.y} {\mathbb A}\simeq {\mathbb D}_{r},\qquad r=1,\dots,t-1. \end{equation} In step 1 we reduced $\mathbb A$ to \[ {\mathbb B}_{1} :\qquad 1\ \frac {B_1}{\qquad}\ 2\ \frac{A_{2}'}{\qquad}\ 3\ \frac{A_{3}}{\qquad}\ \cdots\ \frac{{{A}}_{t-1}}{\qquad}\ t \] by unitary transformations at vertices $1$ and $2$ (see \eqref{x2.1'}). Using transformations at vertex 1, we reduce $B_1$ to \begin{equation}\label{x2.ee} D_1= \begin{bmatrix} 0&I_k\\0&0 \end{bmatrix}, \end{equation} and so ${\mathbb A}$ is isomorphic to \[ {\mathbb D}_{1} :\qquad 1\ \frac {D_1}{\qquad}\ 2\ \frac{A_{2}'}{\qquad}\ 3\ \frac{A_{3}}{\qquad}\ \cdots\ \frac{{{A}}_{t-1}}{\qquad}\ t. 
\] We may produce at vertex $2$ of ${\mathbb D}_1$ every transformation given by a nonsingular block-triangular matrix \[ S_2= \begin{bmatrix} S_{11}&S_{12}\\0&S_{22} \end{bmatrix}, \] where $S_{11}$ is $k$-by-$k$ if $\alpha_1:1\longrightarrow 2$, and $S_{22}$ is $k$-by-$k$ if $\alpha_1:1\longleftarrow 2$. This transformation spoils the block $I_k$ of $D_1$ but we recover it by transformations at vertex $1$. Reasoning by induction on $r$, we assume that ${\mathbb A}$ is isomorphic to \[ {\mathbb D}_{r-1} :\qquad 1\ \frac {D_1}{\qquad}\ \cdots\ \frac{D_{r-1}}{\qquad}\ r\ \frac{A_{r}'}{\qquad}\ (r+1)\ \frac{A_{r+1}}{\qquad}\ \cdots\ \frac{{{A}}_{t-1}}{\qquad}\ t \] (where $D_1,\dots,D_{r-1}$ are obtained from $B_1,\dots,B_{r-1}$ and $A'_r$ is taken from \eqref{x2.1c}) and that transformations at vertices $1,\dots,r-1$ may recover the matrices $D_1,\dots,D_{r-1}$ of ${\mathbb D}_{r-1}$ after each transformation at vertex $r$ given by a nonsingular block-triangular matrix \begin{equation}\label{x3.x} S_{r}= \begin{bmatrix} S_{11}&S_{12}&\cdots&S_{1r}\\ &S_{22}&\cdots&S_{2r}\\&&\ddots&\vdots\\ 0&&&S_{rr} \end{bmatrix}, \end{equation} in which the sizes of diagonal blocks coincide with the sizes of horizontal strips of $B_{r-1}$ if $\alpha_{r-1}:(r-1)\longrightarrow r$, or with the sizes of vertical strips of $B_{r-1}$ if $\alpha_{r-1}:(r-1)\longleftarrow r$, see \eqref{x2.6'} and \eqref{x2.7'}. In step $r$ of the algorithm, we reduced $A_r'$ to $B_r$ of the form \eqref{x2.6'} or \eqref{x2.7'} by unitary transformations at the vertices $r$ and $r+1$; moreover, we used only those transformations at vertex $r$ that were given by unitary block-diagonal matrices partitioned as \eqref{x3.x}. By the same transformations at the vertices $r$ and $r+1$ of ${\mathbb D}_{r-1}$, we reduce its matrix $A_r'$ to $B_r$.
Then we reduce $B_r$ to $D_r$ by a transformation at vertex $r$ given by a matrix of the form \eqref{x3.x}, and restore $D_1,\dots,D_{r-1}$ by transformations at vertices $1,\dots,r-1$. The obtained representation is ${\mathbb D}_{r}$, and so \[ {\mathbb A}\simeq{\mathbb D}_{r-1}\simeq{\mathbb D}_{r}. \] Moreover, we may produce at the vertex $r+1$ of ${\mathbb D}_{r}$ all transformations given by block-triangular matrices, restoring the matrix $D_r$ by transformations at vertex $r$ given by matrices of the form \eqref{x3.x}, and then restoring $D_1,\dots,D_{r-1}$ by transformations at the vertices $1,\dots,r-1$. This proves the isomorphism \eqref{x3.y}. \medskip We now transform the representation ${\mathbb D}_r$ ($1\le r<t$) of the quiver \eqref{x1.5} to a representation ${\mathbb Q}_r$ of a new quiver as follows. We first replace each vertex $i\in\{1,2,\dots,r+1\}$ of ${\mathbb D}_r$ by the vertices $i_1,\dots,i_{d_i}$, where \[ d_i=\dim_i{\mathbb D}_r= \dim_i{\mathbb A}. \] Then we replace the arrow $(r+1)\ \frac{A_{r+1}'}{\qquad}\ (r+2)$ by the arrows\\[10mm] \[ \unitlength 0.4mm \linethickness{0.4pt} \begin{picture}(25.00,25.00)(0,1) \put(13.00,38.00){\makebox(0,0)[lc] {\fbox{$\scriptstyle{ A_{r+1}'}$}}} \put(0.00,25.00){\makebox(0,0)[rc] {$(r+1)_2$}} \put(-21.00,8.00){\makebox(0,0) {$\cdots\cdots\cdots$}} \put(0.00,-5.00){\makebox(0,0)[rc] {$(r+1)_{d_{r+1}}$}} \put(25.00,10.00){\makebox(0,0)[lc]{$(r+2) $}} \put(3.00,24.00){\line(5,-3){18.00}} \put(3.00,-4.00){\line(5,3){18.00}} \put(3.00,42.00){\line(3,-4){18.00}} \put(0.00,47.00){\makebox(0,0)[rc] {$(r+1)_1$}} \end{picture}\\*[15pt] \] of the same direction. 
Finally, we replace each arrow $i \stackrel{D_i}{\frac{}{\qquad}}(i+1)$ with $i\le r$ by arrows that are in one-to-one correspondence with the units of the matrix ${D}_i$: every unit in position $(p,q)$ of ${D}_i$ determines the arrow \[ i_q\stackrel{I_{1}}{\longrightarrow} (i+1)_p\qquad\text{if}\qquad \alpha_i:i\longrightarrow (i+1), \] or the arrow \[ i_p\stackrel{I_{1}}{\longleftarrow} (i+1)_q\qquad\text{if}\qquad \alpha_i:i\longleftarrow (i+1).\] (These arrows represent the action on the basis vectors of the linear operator \[ \ {\mathbb C} i_1\oplus\dots\oplus {\mathbb C} i_{d_i}\frac{}{\qquad}\: {\mathbb C} (i+1)_1\oplus\dots\oplus {\mathbb C} (i+1)_{d_{i+1}} \] directed as $\alpha_i:i\,\frac{}{\quad\ }\, (i+1)$ and given by the matrix $D_i$.) Since in each row and in each column of $D_i$ at most one entry is $1$ and the others are $0$, two distinct arrows $i_p\,\frac{}{\quad\ }\, (i+1)_q$ and $i_{p'}\,\frac{}{\quad\ }\, (i+1)_{q'}$ have no common vertices (${\cal D}_i$ sends each basis vector to a basis vector or to $0$ and cannot send two basis vectors to the same basis vector). Denote the obtained representation by ${\mathbb Q}_r$. The quiver representation ${\mathbb Q}_{t-1}$ is a union of nonintersecting chains; each of them determines a representation of the form ${\mathbb L}_{ij}$. Hence, ${\mathbb D}_{t-1}$ is a direct sum of these representations. By \eqref{x3.y}, ${\mathbb A}\simeq {\mathbb D}_{t-1}$, so ${\mathbb D}_{t-1}$ is the canonical form of ${\mathbb A}$, and we need to prove ${\mathbb L}({\mathbb A})={\mathbb D}_{t-1}$ (see \eqref{x2.1g}). It suffices to show that \[ {\mathbb Q}_{t-1}={\cal P}_{t-1}\cup {\mathbb M}_{t-1} \] (see the set of indices in \eqref{x2.1g}). The equality ${\mathbb Q}_{1}={\cal P}_{1}\cup {\mathbb M}_{1}$ holds since the matrix $D_1$ is obtained from $B_1$ by replacement of $H$ with $I_k$ (see \eqref{x2.1'} and \eqref{x2.ee}). Reasoning by induction, we assume that ${\mathbb Q}_{r-1}={\cal P}_{r-1}\cup {\mathbb M}_{r-1}$.
Then ${\mathbb Q}_{r}={\cal P}_{r}\cup {\mathbb M}_{r}$ by the construction of ${\cal P}_{r}$ and ${\mathbb M}_{r}$ in step $r$ of the algorithm for chains and since $D_r$ is obtained from $B_r$ by replacement of all blocks $H_{i}$ by $I_{l_i}$ and all blocks $*$ by $0$. This proves Proposition \ref{t2.a}. \begin{example} Suppose we apply the algorithm to a matrix representation \begin{equation*}\label{x1.4a} {\mathbb A} :\qquad 1\ \stackrel{A_1}{\longrightarrow}\ 2\ \stackrel{A_2}{\longrightarrow}\ 3\ \stackrel{A_3}{\longleftarrow}\ 4 \end{equation*} of dimension $(4,5,4,5)$ and obtain \begin{equation*}\label{xzz} B_1=\begin{bmatrix} 0_{31} & H_{1} \\ 0_{21} & 0_{23} \end{bmatrix},\ B_2=\left[\begin{tabular}{cc|cc} $0_{21}$&$H_{2}$ & $*$& $*$ \\ $0_{11}$&$0_{12}$ & $0_{11}$&$H_{3}$\\ $0_{11}$&$0_{12}$ & $0_{11}$&$0_{11}$ \end{tabular} \right],\ B_3=\left[\begin{tabular}{ccc} $0_{22}$&$H_{4}$&$*$ \\ \hline $0_{12}$&$0_{12}$&$*$\\ \hline $0_{12}$&$0_{12}$&$H_{5}$ \end{tabular} \right], \end{equation*} where $H_1,\dots,H_5$ are nonsingular $3\times 3$, $2\times 2$, $1\times 1$, $2\times 2$, and $1\times 1$ matrices. 
Then \[ \begin{CD} {\mathbb D}_3:\qquad 1@>{\begin{bmatrix} 0&1&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0\\0&0&0&0 \end{bmatrix}}>>2 @>{\begin{bmatrix} 0&1&0&0&0\\0&0&1&0&0\\0&0&0&0&1\\0&0&0&0&0 \end{bmatrix}}>>3 @<{\begin{bmatrix} 0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&0\\0&0&0&0&1 \end{bmatrix}}<<4 \end{CD} \] and\\[-16mm] \begin{equation}\label{yyy} \unitlength 1mm \linethickness{0.4pt} \begin{picture}(110.00,40.00)(0,19) \put(0.00,20.00){\makebox(0,0)[cc]{${\mathbb Q}_3:$}} \put(20.00,40.00){\makebox(0,0)[cc]{$1_1$}} \put(20.00,30.00){\makebox(0,0)[cc]{$1_2$}} \put(20.00,20.00){\makebox(0,0)[cc]{$1_3$}} \put(20.00,10.00){\makebox(0,0)[cc]{$1_4$}} \put(50.00,40.00){\makebox(0,0)[cc]{$2_1$}} \put(50.00,30.00){\makebox(0,0)[cc]{$2_2$}} \put(50.00,20.00){\makebox(0,0)[cc]{$2_3$}} \put(50.00,10.00){\makebox(0,0)[cc]{$2_4$}} \put(50.00,0.00){\makebox(0,0)[cc]{$2_5$}} \put(80.00,40.00){\makebox(0,0)[cc]{$3_1$}} \put(80.00,30.00){\makebox(0,0)[cc]{$3_2$}} \put(80.00,20.00){\makebox(0,0)[cc]{$3_3$}} \put(80.00,10.00){\makebox(0,0)[cc]{$3_4$}} \put(110.00,40.00){\makebox(0,0)[cc]{$4_1$}} \put(110.00,30.00){\makebox(0,0)[cc]{$4_2$}} \put(110.00,20.00){\makebox(0,0)[cc]{$4_3$}} \put(110.00,10.00){\makebox(0,0)[cc]{$4_4$}} \put(110.00,0.00){\makebox(0,0)[cc]{$4_5$}} \put(107.00,1.00){\vector(-3,1){24.00}} \put(107.00,12.00){\vector(-3,2){24.00}} \put(107.00,22.00){\vector(-3,2){24.00}} \put(53.00,2.00){\vector(3,2){24.00}} \put(53.00,21.00){\vector(3,1){24.00}} \put(53.00,31.00){\vector(3,1){24.00}} \put(23.00,11.00){\vector(3,1){24.00}} \put(23.00,21.00){\vector(3,1){24.00}} \put(23.00,31.00){\vector(3,1){24.00}} \end{picture}\\*[25mm] \end{equation} \bigskip We have the canonical form of ${\mathbb A}$: \[ {\mathbb A}\simeq {\mathbb L}_{11}\oplus {\mathbb L}_{12}\oplus {\mathbb L}_{14}\oplus {\mathbb L}_{14}\oplus {\mathbb L}_{22}\oplus {\mathbb L}_{23}\oplus {\mathbb L}_{34}\oplus {\mathbb L}_{44}\oplus {\mathbb L}_{44} \] \end{example} Note that the block-triangular form of $S_{r}$ (see 
\eqref{x3.x}) follows from the disposition of the chains \[ p_i \stackrel{I_{1}}{\frac{}{\qquad}} (p_i+1)\stackrel{I_{1}}{\frac{}{\qquad}} \cdots \stackrel{I_{1}}{\frac{}{\qquad}} r, \qquad i=1,\dots,r+1, \] in the quiver representation ${\mathbb M}_{r-1}$ (see \eqref{x2.1c}): they represent the linear mappings and we may add these chains from the top down by changing bases in vector spaces; this is clear for the quiver representation \eqref{yyy}. \section{Proof of Proposition \ref{l2}} \label{subs} The representation ${\mathbb A}^{(r)}$ (see \eqref{1ww}) is a representation of the quiver, which we will denote by ${\cal Q}^{(r)}$. For every representation \\[1em] \\[1.5em] \begin{equation}\label{m1} \unitlength 1.1mm \linethickness{0.4pt} \begin{picture}(90.33,24.33)(1.5,20) \put(-3.50,45.00){\makebox(0,0)[rc] {${\mathbb B}$:}} \put(0.00,25.00){\makebox(0,0)[rc]{$(t+1)'$}} \put(1.00,25.00){\line(1,0){6.00}} \put(13.50,25.00){\makebox(0,0)[cc]{$(t+2)'$}} \put(19.33,25.00){\line(1,0){7.00}} \multiput(28.5,23.9)(1.96,0){26}{$\cdot$} \put(80.33,25.00){\line(1,0){8.00}} \put(90.33,25.00){\makebox(0,0)[lc]{$(2t)'$}} \bezier{348}(88.67,-0.67)(45.00,-10.00)(1.00,-0.67) \put(0.00,10.00){\makebox(0,0)[rc]{$(kt+1)'$}} \put(1.00,10.00){\line(1,0){7.00}} \put(12.00,9.7){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,10.00){\line(1,0){8.00}} \put(19.33,12.70){\makebox(0,0)[cc] {$\scriptstyle{B_{(r-1)'}}$}} \put(25.67,10.00){\makebox(0,0)[cc]{$r'$}} \multiput(1.33,13)(2.01,0.23){44}{$\cdot$} \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(12.04,-0.250){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,0.00){\line(1,0){8.00}} \put(19.33,3){\makebox(0,0)[cc] {$\scriptstyle{B_{[r-1]}}$}} \put(33.33,3.0){\makebox(0,0)[cc] {$\scriptstyle{B_{[r]}}$}} \put(33.33,13){\makebox(0,0)[cc] {$\scriptstyle{B_{r'}}$}} \put(25.67,0.00){\makebox(0,0)[cc]{$[r]$}} \put(28.00,0.00){\line(1,0){9.00}} \put(43.00,0.00){\makebox(0,0)[cc]{$[r+1]$}} \put(48.00,0.00){\line(1,0){9.00}} 
\put(63.00,0.00){\makebox(0,0)[cc]{$[r+2]$}} \put(68.00,0.00){\line(1,0){9.00}} \put(73.00,3.0){\makebox(0,0)[cc] {$\scriptstyle{B_{[r+2]}}$}} \put(80.7,-0.25){\makebox(0,0)[cc]{$\cdots$}} \put(84,0.00){\line(1,0){4.30}} \put(90.33,0.00){\makebox(0,0)[lc]{$t$}} \put(1.00,26.33){\line(6,1){87.00}} \put(58.5,8.33){\makebox(0,0)[cc] {\fbox{$\scriptstyle{B}$}}} \put(28.00,10.00){\line(1,0){9.00}} \put(43.00,10.00){\makebox(0,0)[cc]{$(r+1)'$}} \put(48.00,8.33){\line(3,-2){9}} \put(92,43.8){\makebox(0,0)[rc] {$(l+1)' \stackrel{B_{(l+1)'}}{\frac{}{\qquad}}(l+2)' \stackrel{B_{(l+2)'}}{\frac{}{\qquad}} \cdots\ \frac{}{\qquad}\ t'$}} \end{picture}\\*[33mm] \end{equation} of this quiver, we define the representation \begin{equation*} \unitlength 1.00mm \linethickness{0.4pt} \begin{picture}(85.33,5.00) (10,0) \put(25.33,0.00){\makebox(0,0)[cc]{$1$}} \put(65.33,0.00){\makebox(0,0)[cc]{$\cdots$}} \put(85.33,0.00){\makebox(0,0)[cc]{$t$}} \put(45.33,0.00){\makebox(0,0)[cc]{$2$}} \put(30.33,0.00){\line(1,0){10.00}} \put(50.33,0.00){\line(1,0){10.00}} \put(70.33,0.00){\line(1,0){10.00}} \put(35.33,5.00){\makebox(0,0)[ct]{${D}_1$}} \put(55.33,5.00){\makebox(0,0)[ct]{${D}_2$}} \put(75.33,5.00){\makebox(0,0)[ct]{${D}_{t-1}$}} \put(0.33,0.00){\makebox(0,0)[cc] {${\mathbb{F}(\mathbb{B}})$:}} \put(55.33,-7.33){\makebox(0,0)[cc]{$D_t$}} \put(29.83,-1.67){\line(-4,1){0.2}} \bezier{208}(81.00,-1.67)(55.33,-7.33)(29.83,-1.67) \end{picture}\\*[20pt] \end{equation*} of the cycle $\cal C$ by ``gluing down of the shave'' (see the beginning of Section \ref{s1.3}): \begin{equation*}\label{1p} D_i=\Bigl(\bigoplus_{\substack{[j]=i\\ l\le j\le r}}{ B_{j'}}\Bigr)\oplus \begin{cases} B_i & \text{if $i\ne [r+1]$}, \\ B & \text{if $i=[r+1]$}, \end{cases} \end{equation*} where $B_{l'}$ is defined by \eqref{jjj} (compare with \eqref{eee}). The mapping $\mathbb F$ is analogous to the ``push-down functor'' \eqref{2.3w}. 
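In concrete terms, the direct sums entering the definitions of the matrices $D_i$ are block-diagonal stackings, and the degenerate summands $0_{p0}$ and $0_{0q}$ contribute zero rows or columns. The following minimal numpy sketch is our own illustration (not part of the paper); the function name and the test matrices are invented for the example:

```python
import numpy as np

def direct_sum(*mats):
    """Direct sum of matrices: stack them as diagonal blocks with
    zeros elsewhere (the 'gluing' used to form the matrices D_i)."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

# A degenerate summand 0_{p0} (shape (p, 0)) contributes p zero rows only:
D = direct_sum(np.zeros((2, 0)), np.ones((1, 3)))  # shape (3, 3)
```

Here `np.zeros((2, 0))` plays the role of a matrix $0_{p0}$ with $p=2$: it adds two zero rows and no columns to the direct sum, exactly as the summands $B_{l'}$ and $B_{(n+1)'}$ do in the gluing.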
Moreover, for the representation $\mathbb A^{(n)}$, obtained in the last step of the algorithm for cycles, we have \begin{equation}\label{1o} \mathbb{F}(\mathbb{A}^{(n)})= \mathbb{P}({\mathbb A}')\oplus \widetilde{\mathbb{A}}, \end{equation} where ${\mathbb A}'$ and $\widetilde{\mathbb{A}}$ are the representations \eqref{1k} and \eqref{1h}. By \eqref{2.1.5aq}, the matrix $B$ in \eqref{m1} has the form \begin{equation*}\label{1w} B=\begin{cases} [\,B_{(r+1)'}\,|\,B_{[r+1]}\,]& \text{if $\alpha_{[r+1]}$ is oriented clockwise,} \\[0.1em] \left[\begin{tabular}{c} $B_{(r+1)'}$\\ \hline $B_{[r+1]}$ \end{tabular}\right] & \text{if $\alpha_{[r+1]}$ is oriented counterclockwise}. \end{cases} \end{equation*} By {\it triangular transformations} with a representation $\mathbb B$ of the form \eqref{m1}, we mean the following transformations: \begin{itemize} \item[(i)] additions of linear combinations of columns of $B_{[r+1]}$ to columns of $B_{(r+1)'}$ if $\alpha_{[r+1]}$ is oriented clockwise, \item[(ii)] additions of linear combinations of rows of $B_{(r+1)'}$ to rows of $B_{[r+1]}$ if $\alpha_{[r+1]}$ is oriented counterclockwise. \end{itemize} We say that $\mathbb B$ is a {\it triangular representation} if \begin{equation*}\label{1t} {\mathbb F}({\mathbb B})\simeq {\mathbb F}({\mathbb B}^{\vartriangle}) \end{equation*} for every representation ${\mathbb B}^{\vartriangle}$ obtained from ${\mathbb B}$ by triangular transformations. \begin{lemma}\label{l.2e} Suppose $\mathbb{D}$ is obtained from a triangular representation $\mathbb B$ of ${\cal Q}^{(r)}$ by transformations at the vertex $[r+2]$. Then $\mathbb{D}$ is triangular too. \end{lemma} \begin{proof} Let \begin{equation*}\label{2.0'} {\cal S}=(I,\dots,I,S_{[r+2]},I,\dots,I): {\mathbb B}\is {\mathbb D} \end{equation*} (see \eqref{1.3}). We must prove that ${\mathbb F}(\mathbb D)\simeq{\mathbb F}({\mathbb D}^{\vartriangle})$ for every ${\mathbb D}^{\vartriangle}$ obtained from ${\mathbb D}$ by triangular transformations. 
Denote by ${\mathbb B}^{\vartriangle}$ the matrix representation obtained from ${\mathbb B}$ by the same triangular transformations. By \eqref{1.2aa} and the definition of triangular transformations, there is a block matrix \[ R=\begin{bmatrix} I&0\\ *&I \end{bmatrix} \] such that \begin{align*} [D^{\vartriangle}_{(r+1)'}\,|\, D^{\vartriangle}_{[r+1]}] &=S_{[r+2]}[B_{(r+1)'}\,|\, B_{[r+1]}] R\quad \text{if $\alpha_{[r+1]}$ is oriented clockwise,}\\[3mm] \left[\begin{tabular}{c} $D^{\vartriangle}_{(r+1)'}$\\ \hline $D^{\vartriangle}_{[r+1]}$ \end{tabular}\right] &=R \left[\begin{tabular}{c} $B_{(r+1)'}$\\ \hline $B_{[r+1]}$ \end{tabular}\right] S_{[r+2]}^{-1}\quad \text{if $\alpha_{[r+1]}$ is oriented counterclockwise,} \\[3mm] D^{\vartriangle}_{[r+2]}&= \begin{cases} B_{[r+2]} S_{[r+2]}^{-1} &\text{if $\alpha_{[r+2]}$ is oriented clockwise,}\\ S_{[r+2]} B_{[r+2]} &\text{if $\alpha_{[r+2]}$ is oriented counterclockwise}. \end{cases} \end{align*} These equalities imply \[{\cal S}=(I,\dots,I,S_{[r+2]},I,\dots,I): {\mathbb B}^{\vartriangle} \is {\mathbb D}^{\vartriangle}\] and \[ {\mathbb F}({\mathbb D})\simeq {\mathbb F}({\mathbb B})\simeq {\mathbb F}({\mathbb B}^{\vartriangle}) \simeq {\mathbb F}({\mathbb D}^{\vartriangle}). \] \end{proof} \begin{lemma}\label{l.2f} Each representation ${\mathbb A}^{(r)}$ $($obtained in step $r$ of the algorithm for cycles$)$ is triangular and ${\mathbb F}({\mathbb A}^{(r)})\simeq {\mathbb A}$. \end{lemma} \begin{proof} The lemma is obvious if $l=t+1$ (see \eqref{dd}). Suppose $l\le t$. The statements hold for ${\mathbb A}^{(1)},\dots,{\mathbb A}^{(l)}$. Reasoning by induction, we assume that they hold for ${\mathbb A}^{(r-1)}$ with $r-1\ge l$ and prove them for ${\mathbb A}^{(r)}$. 
First we apply the unitary transformations at the vertex $[r+1]$ from step $r$ of the algorithm for cycles to the representation ${\mathbb A}^{(r-1)}$ of the quiver ${\cal Q}^{(r-1)}$: we reduce the matrix $A^{(r-1)}$ to a block-triangular form by transformations \eqref{2.3''a} or \eqref{2.3'''} (depending on the orientation of $\alpha_{[r]}$), and the matrix $A_{[r+1]}^{(r-1)}$ to $A^{(r)}$. Denote the obtained representation by ${\mathbb A}^{(r-2/3)}$. Then we make zero the block $*$ of \eqref{2.3''a} or \eqref{2.3'''} by triangular transformations and obtain the following representation ${\mathbb A}^{(r-1/3)}$ of the quiver ${\cal Q}^{(r-1)}$: \\[1.5em] \begin{equation*} \unitlength 1.1mm \linethickness{0.4pt} \begin{picture}(90.33,22.33)(-10,22) \put(-10.00,45.00){\makebox(0,0)[rc] {${\mathbb A}^{(r-1/3)}$:}} \put(0.00,25.00){\makebox(0,0)[rc]{$(t+1)'$}} \put(1.00,25.00){\line(1,0){5.00}} \put(13.50,25.00){\makebox(0,0)[cc]{$(t+2)'$}} \put(20.33,25.00){\line(1,0){6.0}} \multiput(28.,24.0)(2,0){26}{$\cdot$} \put(80.33,25.00){\line(1,0){8.00}} \put(90.33,25.00){\makebox(0,0)[lc]{$(2t)'$}} \bezier{348}(88.67,-0.67)(45.00,-10.00)(1.00,-0.67)\put(0.00,10.00){\makebox(0,0)[rc]{$(kt+1)'$}} \put(1.00,10.00){\line(1,0){7.00}} \put(12.00,9.70){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,10.00){\line(1,0){8.00}} \put(19.33,13.3){\makebox(0,0)[cc] {$\scriptstyle{A_{(r-1)'}^{(r-1)}}$}} \put(25.67,10.00){\makebox(0,0)[cc]{$r'$}} \multiput(1.33,11.67)(2,0.25){6}{$\cdot$} \multiput(25,14.3)(2,0.25){32}{$\cdot$} \put(0.00,0.00){\makebox(0,0)[rc]{$1$}} \put(1.00,0.00){\line(1,0){7.00}} \put(12.00,-0.30){\makebox(0,0)[cc]{$\cdots$}} \put(15.33,0.00){\line(1,0){7.0}} \put(18.7,3.30){\makebox(0,0)[cc] {$\scriptstyle{A_{[r-1]}^{(r-1)}}$}} \put(25.67,0.00){\makebox(0,0)[cc]{$[r]$}} \put(28.00,0.00){\line(1,0){8.50}} \put(43.00,0.00){\makebox(0,0)[cc]{$[r+1]$}} \put(49.00,0.00){\line(1,0){7.20}} \put(53.00,2.50){\makebox(0,0)[cc] {$\scriptstyle{A^{(r)}}$}} 
\put(63.00,0.00){\makebox(0,0)[cc]{$[r+2]$}} \put(69.00,0.00){\line(1,0){7.00}} \put(73.00,3.30){\makebox(0,0)[cc] {$\scriptstyle{A_{[r+2]}^{(r-1)}}$}} \put(80.7,-0.20){\makebox(0,0)[cc]{$\cdots$}} \put(84,0.00){\line(1,0){5.00}} \put(90.33,0.00){\makebox(0,0)[lc]{$t$}} \put(27.33,9.33){\line(4,-3){9.00}} \put(92,45){\makebox(0,0)[rc] {$(l+1)' \stackrel{A^{(r-1)}_{(l+1)'}}{\frac{}{\qquad}} (l+2)' \stackrel{A^{(r-1)}_{(l+2)'}}{\frac{}{\qquad}} \cdots \frac{}{\qquad}\ t'$}} \put(1.00,27){\line(6,1){87.3}} \put(43.00,10){\makebox(0,0)[cc] {\fbox{$\scriptstyle{A_{r'}^{(r)}\oplus A_{[r]}^{(r)}}$}}} \put(45.16,-8.5){\makebox(0,0)[cc] {$\scriptstyle{A_{t}^{(r-1)}}$}} \end{picture}\\*[9.5em] \end{equation*} By the induction hypothesis, ${\mathbb A}\simeq{\mathbb F}({\mathbb A}^{(r-1)})$ and ${\mathbb A}^{(r-1)}$ is triangular. By Lemma \ref{l.2e}, ${\mathbb A}^{(r-2/3)}$ is triangular too, and so \[ {\mathbb F}({\mathbb A}^{(r-2/3)})\simeq{\mathbb F}({\mathbb A}^{(r-1/3)}). \] We have \[ {\mathbb A}\simeq{\mathbb F}({\mathbb A}^{(r-1)})\simeq{\mathbb F}({\mathbb A}^{(r-2/3)})\simeq{\mathbb F}({\mathbb A}^{(r-1/3)})={\mathbb F}({\mathbb A}^{(r)}). \] Let ${\mathbb A}^{(r)\vartriangle}$ be obtained from ${\mathbb A}^{(r)}$ by triangular transformations. These transformations reduce $A^{(r)}$ (see \eqref{1ww}) to a new matrix $A^{(r)\vartriangle}$ and do not change the other matrices of ${\mathbb A}^{(r)}$. Since \[ A^{(r)}=A_{[r+1]}^{(r-1/3)}, \] these transformations with $A_{[r+1]}^{(r-1/3)}$ can be realized by transformations at the vertex $[r+1]$ of ${\mathbb A}^{(r-1/3)}$; denote the obtained representation by ${\mathbb A}^{(r-1/3)\vartriangle}$, which is triangular by Lemma \ref{l.2e}. These transformations may spoil the subdiagonal block $0$ of \[ A^{(r-1/3)}={A_{r'}^{(r)}\oplus A_{[r]}^{(r)}}, \] but it is recovered by triangular transformations and so \[ {\mathbb F}({\mathbb A}^{(r-1/3)\vartriangle}) \simeq {\mathbb F}({\mathbb A}^{(r)\vartriangle}).
\] Since \[ {\mathbb F}({\mathbb A}^{(r)})= {\mathbb F}({\mathbb A}^{(r-1/3)})\simeq {\mathbb F}({\mathbb A}^{(r-1/3)\vartriangle}) \simeq {\mathbb F}({\mathbb A}^{(r)\vartriangle}), \] the representation ${\mathbb A}^{(r)}$ is triangular. \end{proof} \begin{lemma}\label{l.2g} Let ${\mathbb A}^{(k)}$ be the representation obtained from a representation ${\mathbb A}$ in step $k$ of the algorithm for cycles, and let $k\ge l$ $($hence $l\le t$ by \eqref{dd} and \eqref{mmm}$)$. Denote \begin{equation*}\label{lll} \widehat{A}^{(k)}_i= \begin{cases} {A}^{(k)}_i& \text{if}\ \ i\ne [k+1], \\ {A}^{(k)} & \text{if}\ \ i= [k+1], \end{cases} \end{equation*} where $i=1,\dots,t$. Then \begin{itemize} \item[\rm(i)] The rows of $\widehat{A}^{(k)}_i$ are linearly independent if $\alpha_i$ is oriented clockwise and $i\le k$. \item[\rm(ii)] The columns of $\widehat{A}^{(k)}_i$ are linearly independent if $\alpha_i$ is oriented counterclockwise and the columns of ${A}_i$ are linearly independent. \end{itemize} \end{lemma} \begin{proof} We will prove the lemma by induction on $k$. Clearly, the statements (i) and (ii) hold for $k=l$. Assume they hold for $k=r-1\ge l$ and prove them for $k=r$. We need to check (i) and (ii) only for $i=[r]$ and $i=[r+1]$ since in step $r$ of the algorithm we change $\widehat{A}^{(r-1)}_{[r]}$ and $\widehat{A}^{(r-1)}_{[r+1]}$. By \eqref{2.3''a}, the matrix $\widehat A_{[r]}^{(r)}=A_{[r]}^{(r)}$ has linearly independent rows if $\alpha_{[r]}$ is oriented clockwise. By \eqref{2.3'''}, this matrix has linearly independent columns if both $\alpha_{[r]}$ is oriented counterclockwise and $\widehat A_{[r]}^{(r-1)}=A^{(r-1)}$ has linearly independent columns. Hence, (i) and (ii) hold for $i=[r]$. The statements (i) and (ii) hold for $i=[r+1]$ by the induction hypothesis and since $\widehat A_{[r+1]}^{(r)}=A^{(r)}$ is obtained from $A_{[r+1]}^{(r-1)}$ by elementary transformations with its columns or rows.
\end{proof} \begin{proof}[Proof of Proposition \ref{l2}] The statement (c) of Proposition \ref{l2} follows from \eqref{1o} and Lemma \ref{l.2f}, so we will prove (a) and (b). If $l=t+1$ (see \eqref{dd}), then $\widetilde{\mathbb{A}}=\mathbb{A}$ satisfies (a) and (b). Suppose $l\le t$. Then $\widetilde{\mathbb{A}}$ is the restriction of the representation ${\mathbb{A}}^{(n)}$ (obtained in the last step of the algorithm) to the cycle $\cal{C}$ and so $ \widetilde{A}_i=A^{(n)}_i$ ($i=1,2,\dots,t$). Since \[ \widehat A^{(n)}_i=A^{(n)}_i=\widetilde{A}_i \] if $i\ne [n+1]$, \[ \widehat A^{(n)}_{[n+1]} =A^{(n)}= \left[\begin{tabular}{c|c} $0$& $A^{(n)}_{[n+1]}$ \end{tabular}\right]= \left[\begin{tabular}{c|c} $0$& $\widetilde{A}_{[n+1]}$ \end{tabular}\right] \] if $\alpha_{[n+1]}$ is oriented clockwise (see \eqref{vvv} and \eqref{wwv}), and \[ \widehat A^{(n)}_{[n+1]} =A^{(n)}= \left[\begin{tabular}{c} $0$\\ \hline \raisebox{-3pt}{$A^{(n)}_{[n+1]}$} \end{tabular}\right]= \left[\begin{tabular}{c} $0$\\ \hline \raisebox{-3pt}{$\widetilde{A}_{[n+1]}$} \end{tabular}\right] \] if $\alpha_{[n+1]}$ is oriented counterclockwise, the statements (a) and (b) follow from statements (i) and (ii) of Lemma \ref{l.2g}, in which $k=n\ge t$. \end{proof} \medskip {\it The author wishes to express his gratitude to Professor Roger Horn for the hospitality and stimulating discussions.}
\section{Introduction} A topological insulator (TI) is a material in which the bulk is insulating but the surface hosts metallic states due to nonzero topological invariants of the bulk band structure\cite{Hasan,Qi,Ando}. SnTe with the NaCl-type crystal structure is a topological crystalline insulator (TCI)\cite{Hsieh,Tanaka}. A TCI requires certain symmetries of the crystal structure, such as mirror symmetry, while a TI requires time-reversal symmetry. It has been proposed that the superconductivity in In-doped SnTe may be topological.\cite{Sasaki} Topological superconductivity is analogous to a TI in that the superconducting gap function has a nontrivial topological invariant. The best-studied candidate for a topological superconductor is a doped TI, Cu$_x$Bi$_2$Se$_3$,\cite{Hor} in which a spin-triplet, odd-parity superconducting state was recently established\cite{Matano}. The superconducting properties of doped SnTe depend strongly on the dopant. When the Sn site is doped with Ag or vacancies, the Fermi level ($E_{\rm F}$) is simply lowered, and superconductivity appears at low temperatures (transition temperature $T_{\rm c}\sim 0.1$ K with a hole concentration of $p = 10^{21}$cm$^{-3}$)\cite{Mathur}. On the other hand, when the Sn site is doped with In, superconductivity appears at higher temperatures ($T_{\rm c} \sim 1$ K with $p = 10^{21}$cm$^{-3}$).\cite{Erickson1} The existence of impurity states (in-gap states) bound to In was suggested by the reduction of the carrier mobility and the pinning of $E_{\rm F}$ upon In doping.\cite{Kaidanov,Bushmarina,Nemov} Based on this, Shelankov proposed that a strong interaction between the impurity state and phonons is responsible for the enhancement of the superconducting transition temperature.\cite{Shelankov} More recently, a first-principles calculation suggested that the impurity state is composed primarily of In $5s$ and Te $5p$ orbitals.\cite{Haldo} Such a bound state often contributes to magnetism.
For example, in P-doped Si with P concentrations $N_{\rm P} < 6.5\times 10^{18}\ \rm{cm}^{-3}$, the magnetic susceptibility ($\chi$) shows a Curie-Weiss like temperature ($T$) dependence.\cite{Kobayashi,Ikehata} Several nuclear magnetic resonance (NMR) studies have also been reported. The field-swept $^{29}$Si-NMR (nuclear spin $I=1/2$) spectrum shows a characteristic broadening with a tail on the low-field side, because the wave function of the localized impurity state creates an inhomogeneous magnetic field. The spin-lattice relaxation rate ($1/T_1$) divided by $T$ shows a Curie-Weiss like $T$-dependence at low fields due to the magnetization of the impurity states, but is $T$-independent under high magnetic fields\cite{Kobayashi}. Sn$_{1-x}$In$_x$Te has four Fermi surfaces centered at the L points of the fcc lattice while Cu$_x$Bi$_2$Se$_3$ has one Fermi surface centered at the $\Gamma$ point of the rhombohedral lattice, but each Fermi surface in Sn$_{1-x}$In$_x$Te is essentially equivalent to that of Cu$_x$Bi$_2$Se$_3$\cite{Sasaki}. Point-contact spectroscopy found a zero-bias conductance peak, which was taken as a signature of unconventional superconductivity\cite{Sasaki}. On the other hand, specific heat measurements have revealed fully gapped superconductivity\cite{Mario}. By combining these results, a spin-triplet state with an isotropic gap was suggested\cite{Hashimoto}. Therefore, NMR measurements, which can reveal the spin symmetry as well as the parity of the gap function in the superconducting state, are desired. As a first step toward a full understanding of this material, we aim to observe the impurity state by NMR. In this paper, we report the synthesis of Sn$_{1-x}$In$_x$Te ($x$ = 0 and 0.1) and $^{125}$Te-NMR ($I=1/2$) measurements. \section{Experimental} Polycrystalline samples of SnTe and Sn$_{0.9}$In$_{0.1}$Te were synthesized by a sintering method. The required amounts of Sn, In, and Te were pre-reacted in evacuated quartz tubes at 1000$^\circ$C for 8 hours.
The resultant materials were powdered, pressed into pellets, and sintered in evacuated quartz tubes at 400$^\circ$C for 2 days. In order to avoid the influence of dilute magnetic impurities such as Fe, high-purity starting materials (Sn: 99.9999\%, In: 99.9999\%, and Te: 99.9999\%) were used. For powder x-ray diffraction (XRD), ac susceptibility, and NMR measurements, a part of the pellet was powdered. The samples were characterized by powder XRD using a Rigaku RINT-TTR III with CuK$\alpha$ radiation. No impurity peaks were observed in the powder XRD pattern. The $T_{\rm c}$ was determined by measuring the inductance of a coil filled with a sample, which is a typical setup for NMR measurements. NMR measurements were carried out by using a phase-coherent spectrometer. The NMR spectrum was obtained by integrating the spin-echo intensity while changing the resonance frequency ($f$) at a fixed magnetic field of 5 T. The spin-lattice relaxation time ($T_1$) was measured by using a single saturating pulse, and determined by fitting the recovery curve of the nuclear magnetization to the exponential function $(M_0-M(t))/M_0 = \exp(-t/T_1)$, where $M_0$ and $M(t)$ are the nuclear magnetization in thermal equilibrium and at a time $t$ after the saturating pulse, respectively. The recovery curves in the whole temperature and magnetic field ranges were well fitted by this single exponential function. \section{Results and Discussion} Figure \ref{chi} shows the $T$-dependence of the ac susceptibility for Sn$_{0.9}$In$_{0.1}$Te, which shows superconducting transitions at 1.68 K under $H_0 = 0$ T and at 1.45 K under $H_0 = 0.1$ T. To obtain information on the superconductivity of Sn$_{0.9}$In$_{0.1}$Te by NMR, the applied magnetic field must be 0.1 T or less, owing to the small upper critical field $H_{\rm c2}$.
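As a practical note for readers reproducing the analysis, the single-exponential $T_1$ determination described in the Experimental section can be sketched in a few lines. The delay times, the noise level, and the value $T_1 = 0.8$ s below are illustrative assumptions, not measured quantities from this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, T1):
    # Normalized recovery after a single saturating pulse for a
    # spin-1/2 nucleus: (M0 - M(t))/M0 = exp(-t/T1)
    return np.exp(-t / T1)

# Synthetic delay times (s) and noisy recovery data for an assumed T1 = 0.8 s
rng = np.random.default_rng(0)
t = np.logspace(-2, 1, 20)
data = recovery(t, 0.8) * (1 + 0.02 * rng.standard_normal(t.size))

T1_fit, _ = curve_fit(recovery, t, data, p0=[1.0])
print(f"fitted T1 = {T1_fit[0]:.3f} s")
```

A systematic deviation of the data from this single-exponential form would signal a distribution of relaxation rates, which is why the quality of this fit is checked over the whole temperature and field range.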
\begin{figure}[H] \begin{center}\includegraphics[clip,width=75mm]{img/67909Fig1.eps}\end{center} \caption{(Color online) Temperature dependence of the ac susceptibility for Sn$_{0.9}$In$_{0.1}$Te under static magnetic fields of $H_0$ = 0 T and 0.1 T, respectively. The solid arrows indicate $T_{\rm c}$.} \label{chi} \end{figure} Figure \ref{Spectra1} shows the $^{125}$Te-NMR spectra under $H_0 = 5$ T. The $^{125}$Te Knight shift ($K$) in the NaCl-type crystal structure is isotropic because the Te site is at the center of a regular octahedron of Sn/In. Although non-doped SnTe shows a small distortion at about 100 K\cite{Erickson1}, the sharp spectrum of SnTe indicates that the anisotropy is negligibly small. Such a distortion is completely suppressed in Sn$_{0.9}$In$_{0.1}$Te\cite{Erickson1}. The spectrum of Sn$_{0.9}$In$_{0.1}$Te has a large tail on the high-frequency side, resembling that observed in P-doped Si\cite{Kobayashi}. The $K$ is expressed as $K = K_{\rm s} + K_{\rm orb}$, where $K_{\rm s}$ is the spin part and $K_{\rm orb}$ is the orbital part. The difference in peak position between SnTe and Sn$_{0.9}$In$_{0.1}$Te is due to $K_{\rm s}$, which is sensitive to the carrier concentration. Figure \ref{Spectra2} shows the spectra of Sn$_{0.9}$In$_{0.1}$Te at three different temperatures. As the temperature is lowered, the tail on the high-frequency side grows while the peak position is unchanged, which means that the spin polarization of the impurity states develops.
This situation is clearly different from the case of dilute magnetic impurities such as Fe or dense magnetic ions in the lattice, in which cases the NMR spectrum is symmetrically broadened.\cite{CuFe,Nd} \begin{figure}[H] \begin{center}\includegraphics[clip,width=75mm]{img/67909Fig2.eps}\end{center} \caption{(Color online) $^{125}$Te-NMR spectra for SnTe at 3 K and Sn$_{0.9}$In$_{0.1}$Te at 1.55 K under $H_0$ = 5 T.} \label{Spectra1} \end{figure} \begin{figure}[H] \begin{center}\includegraphics[clip,width=75mm]{img/67909Fig3.eps}\end{center} \caption{(Color online) $^{125}$Te-NMR spectra for Sn$_{0.9}$In$_{0.1}$Te at 1.55 K, 7 K, and 50 K under $H_0$ = 5 T.} \label{Spectra2} \end{figure} Figure \ref{T1f} shows the $f$-dependence of $\left( T_1T \right)^{-1/2}$ at 1.55 K under $H_0 = 5$ T. $\left( T_1T \right)^{-1/2}$ increases with $f$, which means that $1/T_1$ in the tail is enhanced, reflecting the large density of states due to the in-gap state. The linear relation between $\left( T_1T \right)^{-1/2}$ and $f$ suggests that a Korringa relation $T_1\left( \bm r \right)TK_{\rm s}^2\left( \bm r \right) = a$ is satisfied locally, where $T_1\left( \bm r \right)$ is the $T_1$ at a position $\bm r$, $K_{\rm s}\left( \bm r \right)$ is the $K_{\rm s}$ at $\bm r$, and $a$ is a constant. By extrapolating the linear relation between $K$ and $(T_1T)^{-1/2}$ to $(T_1T)^{-1/2} = 0$, the position where $K_{\rm s} = 0$ was obtained. If the electron correlation is weak and $s$-orbital electrons make the dominant contribution, $a$ is given by $a_{\rm s} = \frac{\hbar}{4\pi k_{\rm B}}\left( \frac{\gamma_{\rm e}}{\gamma_{\rm n}}\right)^2$, where $k_{\rm B}$ is the Boltzmann constant, $\gamma_{\rm e}$ is the gyromagnetic ratio of electrons, and $\gamma_{\rm n}$ is the nuclear gyromagnetic ratio. We obtained $a = 2.2a_{\rm s}$ for Sn$_{0.9}$In$_{0.1}$Te.
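For readers who wish to check the magnitude of $a_{\rm s}$, the expression above can be evaluated numerically. The $^{125}$Te gyromagnetic ratio used below ($|\gamma_{\rm n}|/2\pi \approx 13.45$ MHz/T) is a literature value quoted here as an assumption, not a number taken from this work:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J / K
gamma_e = 2 * math.pi * 28.02495e9   # electron gyromagnetic ratio, rad s^-1 T^-1
gamma_n = 2 * math.pi * 13.454e6     # |gamma| of 125Te (literature value), rad s^-1 T^-1

# Korringa constant for a weakly correlated s-electron metal:
# a_s = hbar / (4 pi k_B) * (gamma_e / gamma_n)^2
a_s = hbar / (4 * math.pi * k_B) * (gamma_e / gamma_n) ** 2
print(f"a_s(125Te) = {a_s:.2e} s K")   # of order 1e-6 s K
```

The measured $a = 2.2a_{\rm s}$ then quantifies how far the local relaxation deviates from the pure $s$-electron Korringa value.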
The deviation from $a_{\rm s}$ is most likely due to the contribution of the Te $5p$ orbital, as suggested by the first-principles calculation\cite{Haldo}. A similar inhomogeneity of $T_1$ was also reported for P-doped Si\cite{Kobayashi}. \begin{figure}[H] \begin{center}\includegraphics[clip,width=80mm]{img/67909Fig4.eps}\end{center} \caption{(Color online) Frequency dependence of $(T_1T)^{-1/2}$ for Sn$_{0.9}$In$_{0.1}$Te at 1.55 K under $H_0$ = 5 T. The vertical arrow indicates the position of $K_{\rm s}$ = 0.} \label{T1f} \end{figure} Figure \ref{T1T} shows the $T$-dependence of $1/T_1T$ at different $H_0$ for SnTe and Sn$_{0.9}$In$_{0.1}$Te, measured at the peak position. The $1/T_1T$ of SnTe is $H_0$- and $T$-independent, indicating that SnTe is a conventional metal and that the amount of magnetic impurities such as Fe is negligibly small. It is well known that SnTe is actually metallic because of Sn vacancies, although ideal SnTe is a semiconductor\cite{Erickson1}. On the other hand, the $1/T_1T$ of Sn$_{0.9}$In$_{0.1}$Te shows a distinct Curie-Weiss like $T$-dependence under $H_0$ = 0.1 T but is $T$-independent under $H_0$ = 5 T. \begin{figure}[H] \begin{center}\includegraphics[clip,width=80mm]{img/67909Fig5.eps}\end{center} \caption{(Color online) Temperature dependence of $1/T_1T$ for SnTe (blue squares) and Sn$_{0.9}$In$_{0.1}$Te (red circles) under $H_0$ = 0.1 T (open markers) and 5 T (filled markers).
The red dashed line represents the relation $1/T_1T = A + B/T$, with $A = 0.43$ s$^{-1}$K$^{-1}$ and $B = 10.1$ s$^{-1}$.} \label{T1T} \end{figure} According to Erickson $et\ al$., the $T$-dependence of $\chi$ in Sn$_{1-x}$In$_x$Te is Pauli like rather than Curie-Weiss like.\cite{Erickson2} Below, we explain why both the NMR spectra and $1/T_1T$ in Sn$_{0.9}$In$_{0.1}$Te show these peculiar characteristics even though $\chi$ is Pauli like, by referring to and partly modifying the discussion of P-doped Si given by Kobayashi $et\ al$.\cite{Kobayashi} At high impurity concentrations, the impurity states hybridize and create a narrow band with a finite density of states at $E_{\rm F}$. The wave functions of the impurity states can extend over many impurity sites. The states near $E_{\rm F}$ are spin polarized under an external field and give the Pauli like $\chi$. However, because of the localized nature of the impurity states, an internal inhomogeneous magnetic field is created. The $^{125}$Te nuclei within the wave functions of the polarized impurity states feel large magnetic fields and compose the tail on the high-frequency side. As the temperature is lowered, the tail extends further to the high-frequency side because the magnetization of the impurity states develops. On the other hand, the $^{125}$Te nuclei outside the wave functions of the polarized impurity states compose the large spectral area on the low-frequency side and show only a small $T$-dependence. The $1/T_1T$ under high magnetic fields satisfies the Korringa relation locally. On the other hand, under low fields, where the NMR spectrum broadening due to the impurity states is smaller than that due to the nuclear dipole interaction, a spin-diffusion process homogenizes the $1/T_1T$. Kobayashi $et\ al$.
qualitatively explained the $T$-dependence of $1/T_1T$ in P-doped Si with $N_{\rm P} < 6.5\times 10^{18}$ cm$^{-3}$, but did not explain that in P-doped Si with $N_{\rm P} \geq 6.5\times 10^{18}$ cm$^{-3}$, which shows a behavior similar to the present results for Sn$_{0.9}$In$_{0.1}$Te.\cite{Kobayashi} We thus propose that the $1/T_1T$ under low magnetic fields is obtained as the spatial average of $1/T_1(\bm{r})T$, \begin{equation} \frac{1}{T_1T} = \frac{1}{V} \int \frac{\mathrm{d}^3 \bm{r}}{T_1(\bm{r})T} = \frac{1}{Va} \int K_{\rm s}^2 (\bm{r}) \, \mathrm{d}^3 \bm{r}, \end{equation} where $V$ is the volume. By assuming $\mu_{\rm B}H \ll k_{\rm B}T$, the integral around the polarized impurity states is proportional to $T^{-1}$. This is because the effective number of the polarized impurity states is determined by $k_{\rm B}T$, while $K_{\rm s}(\bm{r})$, which is proportional to the spin polarization of the impurity states, goes as $T^{-1}$.\cite{Kobayashi} On the other hand, the integral away from the polarized impurity states is $T$-independent. Therefore, $1/T_1T$ is linear in $T^{-1}$, which explains the Curie-Weiss like $T$-dependence of $1/T_1T$ in Sn$_{0.9}$In$_{0.1}$Te, as well as in highly P-doped Si. Generally, $1/T_1T$, which can probe the parity of the superconducting gap function, is a very important quantity. However, a Curie-Weiss like $T$-dependence of $1/T_1T$ obscures this important information and is a serious obstacle. This was encountered in the study of the electron-doped high-$T_{\rm c}$ cuprate Nd$_{2-x}$Ce$_x$CuO$_4$\cite{Nd}, and is also true for Cu$_x$Bi$_2$Se$_3$. Nisson $et\ al$. reported a Curie-Weiss like $T$-dependence of $1/T_1T$ by $^{209}$Bi-NMR ($I = 9/2$) and suggested that it is due to the magnetism of Se vacancies\cite{Nisson}. Therefore, an efficient method for measuring $1/T_1$ in these systems should be worked out. \section{Summary} We synthesized high-purity Sn$_{1-x}$In$_x$Te polycrystals and performed $^{125}$Te-NMR measurements.
The NMR spectra under $H_0 = 5$ T showed a characteristic broadening due to a localized impurity state. The $1/T_1T$ showed a Curie-Weiss like $T$-dependence under $H_0 = 0.1$ T but was $T$-independent under $H_0 = 5$ T. These results indicate the existence of a quasi-localized impurity state due to In-doping. Since such a state was proposed to be responsible for the superconductivity, our results serve to lay a foundation toward understanding the possible exotic superconducting state of this material. \begin{acknowledgment} We acknowledge partial support by MEXT Grant No. 15H05852 and JSPS Grant No. 16H0401618. \end{acknowledgment}
\section{Introduction}\label{intro} The knowledge of the stellar initial mass function (IMF) is a fundamental piece of information in many research areas of astrophysics. From a theoretical point of view, providing tight constraints on the IMF properties in different stellar environments - both in the field and in star clusters - is mandatory to develop a complete and reliable theory of star formation \citep[and references therein]{Mckee07}. At the same time, from a phenomenological point of view, the IMF is a fundamental property of stellar populations, and hence a crucial input in any study of galaxy formation and evolution. For instance, it represents an important ingredient in the computation of Population Synthesis models (see \citealt{vazdekis15} and references therein), and hence it affects our capability to extract the properties of stellar populations such as their luminosity evolution over time, the mass--to--light ratio, the total star formation rate at low and high redshifts, and so on. Therefore, it appears evident that improving our knowledge of the IMF, or at least obtaining stronger observational constraints on this crucial ingredient, is of pivotal importance in many astrophysical research fields. It is particularly important to analyze the properties of the IMF in various stellar environments, such as the disk and the bulge of spiral galaxies, in order to verify whether the well-known differences (in age and chemical composition) in the stellar populations hosted by the distinct galactic components have an impact on the IMF \citep{zoccali03}. An additional reason that makes the study of the IMF in the bulges of spiral galaxies and in elliptical galaxies important is the possibility that these spheroids contain the majority of the stellar mass of the universe (see, for instance, \citealt{fukugita98}).
As we discuss, the IMF of the Galactic bulge is unlikely to be very different from the present-day mass function (PDMF) below the main-sequence turn-off (MSTO), since most of the star formation in the Galactic bulge happened within about 2 Gyr \citep{clarkson08}, with no evidence of star formation after that. We will therefore refer to the observed PDMF of the Galactic bulge as the IMF in the mass range below the MSTO, which occurs at $\approx$1.0 $M_{\odot}$ for a stellar population with solar metallicity and an age of $t$ = 11 Gyr \citep[hereafter Paper I]{calamida14b}. In spite of the huge improvements achieved in observational facilities, there is not yet any possibility to directly measure the IMF of spheroids outside our Galaxy. As a consequence, the measurement of the IMF in the Galactic bulge is a fundamental benchmark (or reference point) for any analysis devoted to investigating this property in extra-galactic spheroids \citep{calchi08}. Concerning the Galactic bulge, the two most recent determinations of the IMF in our spheroid have been performed by \citet[hereinafter HO98]{holtz98} and by \citet[hereafter ZO00]{zoccali00}, taking advantage of the exquisite observational capabilities of the {\sl Hubble Space Telescope} ({\it HST}). In particular, the analysis performed by ZO00 pushed forward the knowledge of the bulge IMF thanks to the use of the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) available at that time: the derived mass function still represents the deepest measured to date and extends to $\sim$ 0.15 $M_{\odot}$. They found a power-law slope for the IMF equal to $\alpha = -1.33$ (where $dN/dM = C \times M^{\alpha}$; a Salpeter IMF would have $\alpha = -2.35$), with some hint of a change of the power-law slope to $\alpha \approx -2$ at $\sim$ 0.5 $M_{\odot}$. ZO00 also found that the derived bulge IMF is steeper than that measured for the Galactic disk \citep{reid97, gould97}.
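To make concrete why the slope matters for the mass budget, the sketch below integrates a single power law $dN/dM = C\,M^{\alpha}$ over an assumed range of 0.15--1.0 $M_{\odot}$ (limits chosen for illustration to match the range probed by ZO00 and the MSTO mass; a single, unbroken power law is assumed throughout):

```python
def mass_fraction_below(m_cut, alpha, m_lo=0.15, m_hi=1.0):
    """Fraction of total stellar mass below m_cut for a power-law
    IMF dN/dM = C * M**alpha on [m_lo, m_hi] (masses in solar units)."""
    p = alpha + 2  # exponent after multiplying dN/dM by M and integrating

    def integral(a, b):
        return (b**p - a**p) / p

    return integral(m_lo, m_cut) / integral(m_lo, m_hi)

# Slope measured by ZO00 vs. the Salpeter value
for alpha in (-1.33, -2.35):
    f = mass_fraction_below(0.5, alpha)
    print(f"alpha = {alpha}: {100*f:.0f}% of the mass lies below 0.5 Msun")
```

With the shallower ZO00 slope, roughly half of the stellar mass sits below 0.5 $M_{\odot}$, while a Salpeter slope would place about seven tenths of it there, which is why the two slopes imply very different total mass budgets.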
In this context, it is also worth noting that \citet{dutton13} have recently used strong lensing and gas kinematics to investigate the existence of possible differences in the properties of the IMF between the disk and the bulge in a sample of spiral galaxies within the Sloan WFC Edge-on Late-type Lens Survey \citep[SWELLS]{treu11}. As a result, they found a significant difference between the bulge IMF and that of the disk, the former being more consistent with a Salpeter IMF, and the latter being more consistent with a Chabrier-like IMF. On the basis of this evidence, it appears quite appropriate to analyze the properties of the Galactic bulge IMF in different fields of view and using more updated observational datasets. In a previous paper \citep[hereinafter Paper I]{calamida14b}, we took advantage of the availability of a huge photometric dataset for the low-reddening Sagittarius window in the Galactic bulge, collected with the {\sl Advanced Camera for Surveys} (ACS) on board {\it HST}, to obtain the first unambiguous detection of the white dwarf cooling sequence of the Galactic bulge. In this manuscript, we use the same data to perform a thorough analysis of the IMF in this field of the bulge in order to provide additional, independent insights on the bulge IMF, thus supplementing the results of previous analyses. In this investigation we explore a larger and denser field compared to what was previously observed by HO98 in Baade's Window and by ZO00 in a field at $(l = 0.277^\circ, b = -6.167^\circ)$. Most importantly, for the first time we estimate the Galactic bulge IMF based on a clean sample of bulge stars, thanks to the very accurate proper motions (down to $F814W \approx$ 26 mag) that we were able to measure. Furthermore, we use a statistical approach to apply a correction for the presence of unresolved binaries.
We note that the slope of the very low-mass range of the IMF is fundamental to estimate the mass budget of a stellar population, since a major fraction of the stellar mass is included in this range, and low-mass stars have been hypothesized to contain a significant fraction of the total mass in the universe \citep{fukugita98}. It is even more important in the case of the Galactic bulge, since this component might include $\approx$20\% ($1.8\times10^{10} M_{\odot}$) of the mass of the Galaxy \citep{sofue09, portail15}. The structure of the paper is as follows: in \S 2 we discuss the observations and data reduction in detail, while in \S 3 we describe how we selected a clean sample of bulge stars. In \S 4 we present the theoretical mass--luminosity relations we adopted to convert the luminosity functions into mass functions, while \S 5 deals with the different systematics that affect the estimate of the initial mass function. In \S 6 we compare the derived IMF for the bulge with the disk mass function, and in \S 7 we derive a minimum value for the stellar mass--to--light ratio of the Galactic bulge. \S 8 deals with the gravitational microlensing events predicted by the bulge IMF derived in this work, and the conclusions are presented in \S 9. \section{Observations and data reduction}\label{obs} We observed the Sagittarius Window Eclipsing Extrasolar Planet Search (SWEEPS) field ($l = 1.25^\circ, b = -2\fdg65$) in the Galactic bulge in 2004 and again in 2011, 2012, and 2013 with {\it HST}, using the Wide-Field Channel of ACS (proposals GO-9750, GO-12586, GO-13057, PI: Sahu). The SWEEPS field covers $\approx 3\farcm3 \times 3\farcm3$ in a region of relatively low extinction in the bulge ($E(B-V) \lesssim 0.6$~mag; \citealt{oosterhoff}). The 2004 observations were taken in the $F606W$ (wide $V$) and $F814W$ (wide $I$) filters over the course of one week (for more details see \citealt{sahu06}).
The new data were collected between October 2011 and October 2013, with a $\sim$ 2-week cadence, for a total of 60 $F606W$- and 61 $F814W$-band images. The two datasets, the 2004 and the 2011--2012--2013 (hereafter 2011--2013) ones, were reduced separately by using software that performs simultaneous point-spread function (PSF) photometry on all the images. The choice to reduce the two datasets separately is due to the high relative proper motions of the disk and bulge stars in this field, caused by the Galactic rotation: the disk-star relative proper motions (PMs) peak at $\mu_l \approx$ 4 mas/yr, with a dispersion of $\approx$ 3 mas/yr, whereas the bulge motions are centered at $\mu_l \approx$ 0 mas/yr, with a dispersion of $\approx$ 3 mas/yr (see Paper I). This means that a substantial fraction ($\sim$ 30\%) of stars would move by more than half a pixel (25 mas) in 9 years. We calibrated the instrumental photometry to the Vegamag system by adopting the 2004 photometric zero-points, and we obtained catalogs of $\approx$ 340,000 stars each for the 2004 and the 2011--2013 datasets. The left panel of Fig.~1 shows the $F814W,\, (F606W - F814W)$ Color-Magnitude Diagram (CMD) for all the observed MS stars in the 2011--2013 dataset. The right panel shows the sample completeness as a function of the $F814W$ magnitude. Details on how the completeness was derived are given in \S \ref{art}. This figure shows that the completeness is $\sim$ 50\% at $F814W \sim$ 25.5 mag. The completeness in the $F606W$ band is $\sim$ 50\% at $F606W \sim$ 28 mag. The 2004 dataset has a very similar completeness, reaching $\sim$ 50\% at $F814W \sim$ 25.3 mag and $F606W \sim$ 28 mag, respectively. In order to obtain a clean bulge MS sample to derive the IMF, we estimated the PMs of the stars in this field by combining the astrometry and the photometry of the 2004 and the 2011--2013 datasets.
By comparing the positions of stars in the two epochs, we estimated PMs for $\approx$ 200,000 stars down to $F814W \approx$ 25.5 mag. \begin{figure*} \begin{center} \label{fig1} \includegraphics[height=0.7\textheight,width=0.57\textwidth, angle=90]{cmd_newcut.ps} \caption{Left: $F814W,\ (F606W - F814W)$ CMD of MS stars in the SWEEPS field based on the 2011--2013 dataset. Error bars are also shown. Right: completeness of MS stars as a function of the $F814W$ magnitude. The horizontal lines in both panels mark the $F814W$ magnitude at which the completeness is 50\%, and the vertical line in the right panel marks the 50\% completeness level.} \end{center} \end{figure*} \subsection{Artificial star tests}\label{art} To properly characterize the completeness of the measured PMs, the photometric errors, and the errors due to the reduction and selection techniques adopted, we performed several artificial star (AS) tests. We created a catalog of $\approx$ 200,000 artificial MS stars, with magnitudes and colors estimated by adopting a ridge line following the MS. We then produced a second artificial-star catalog, using the same input magnitudes and colors, but applying a PM to each star. We assumed the bulge PM distribution as measured in Paper I, centered at $\mu_l \approx$ 0 mas/yr with a dispersion of $\approx$ 3 mas/yr. Artificial stars were added and recovered one by one on every image of the two datasets by using the same reduction procedures adopted earlier. In this way the level of crowding is not affected. In order to estimate the magnitude and color dispersion of the MS due to photometric errors and data-reduction systematics, we selected recovered artificial stars with $\Delta Mag = (Mag_i - Mag_o) \le$ 0.5 mag and $d = \sqrt{(X_{o}-X_{i})^{2}+(Y_{o}-Y_{i})^{2}} \le$ 0.75 pixel, where quantities with subscript $i$ refer to input values and those with subscript $o$ to output values, in both datasets, ending up with a sample of 146,225 stars.
We applied this selection because stars that were not recovered within a circle of radius 0.75 pixel can be safely considered not found. The left panel of Fig.~2 shows the selected recovered artificial stars for the 2011--2013 dataset (red dots) plotted in the $F814W, (F606W - F814W)$ CMD; the observed stars are also plotted, as grey dots. The right panel shows the recovered color spread of the MS as a function of the $F814W$ magnitude. The comparison of the artificial (red dots) and observed (grey dots) CMDs indicates that we are not able to reproduce the entire color spread of the MS by assuming the presence of a single stellar population of solar metallicity and age $t$ = 11 Gyr. A metallicity spread of more than 1 dex is present in the SWEEPS bulge field, based on medium-resolution spectra of MS turn-off, sub-giant, and red-giant branch stars collected with FLAMES at the Very Large Telescope (see Paper I for more details). The metallicity spread can further broaden the MS, and differential reddening, depth effects, and binaries might also play a role. It is worth mentioning that most stars in the color and magnitude ranges 2.0 $< F606W - F814W <$ 2.5 and 18.5 $< F814W <$ 21.5 mag belong to the (closer) disk population. \begin{figure*} \begin{center} \label{fig2} \includegraphics[height=0.7\textheight,width=0.57\textwidth, angle=90]{sigma_F814W_2012cut.ps} \caption{Left: $F814W,\ (F606W - F814W)$ CMD of recovered artificial stars for the 2011--2013 dataset (red points). Observed stars are marked with grey points. Stars are selected in magnitude, $\Delta Mag \le$ 0.5 mag, and in position, $\Delta d \le$ 0.75 pixel.
Right: ($F606W - F814W$) photometric scatter as a function of the $F814W$-band magnitude, as estimated from the artificial star test.} \end{center} \end{figure*} In order to estimate the completeness of the measured PMs, we matched the two recovered sets of artificial stars and compared the output with the input PMs along both the $X$ and $Y$ axes as a function of the two magnitudes. Fig.~3 shows this comparison for the X (top panel) and Y (bottom panel) axes versus the $F814W$ magnitude. Only stars with $\Delta Mag \le$ 0.5 mag and $d \le$ 0.75 pixel are shown. This plot shows that the dispersion of the recovered PMs increases at fainter magnitudes, as expected, and that the accuracy of the measured PMs is better than 0.1 mas/yr ($\approx$ 4 km/s at the distance of the Galactic bulge) at magnitudes brighter than $F814W$ = 23. At $F814W \sim$ 25 mag, where the completeness is $\gtrsim$ 50\% for both datasets, the recovered PM scatter is $\approx$ 0.25 mas/yr ($\approx$ 10 km/s) within 3 $\sigma$ uncertainties. This precision allows us to separate bulge from disk stars down to very faint magnitudes and to characterize the Galactic bulge mass function down to the very low-mass (VLM) range, $M <$ 0.5 $M_{\odot}$. \begin{figure} \begin{center} \label{fig3} \includegraphics[height=0.65\textheight,width=0.5\textwidth]{recover_PM_F814Wcutnew.ps} \caption{Comparison of input and output proper motions along the X (top panel) and Y (bottom panel) axes for stars recovered in the AS test, as a function of the $F814W$ magnitude. The 3$\sigma$ limit is indicated by the overplotted red dots. Dashed and black dotted lines mark a dispersion of 0.1 and 0.5 mas/yr, respectively.} \end{center} \end{figure} \section{A clean bulge main-sequence sample}\label{bulge} We adopted the measured PMs to select a sample of main-sequence (MS) stars devoid of disk-star contamination from the 2011--2013 dataset.
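The velocity equivalents quoted above follow from the standard transverse-velocity conversion; the bulge distance of 8 kpc used below is an illustrative assumption, consistent with the distance modulus adopted later in the paper:

```python
def transverse_velocity(mu_mas_yr, d_kpc):
    # Standard conversion: v_t [km/s] = 4.740 * mu [mas/yr] * d [kpc]
    return 4.740 * mu_mas_yr * d_kpc

D_BULGE = 8.0  # assumed Galactic-bulge distance in kpc (illustrative)
print(transverse_velocity(0.10, D_BULGE))  # ~4 km/s: PM accuracy at F814W <= 23
print(transverse_velocity(0.25, D_BULGE))  # ~10 km/s: PM scatter at F814W ~ 25
```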
PMs are projected along the Galactic coordinates, and we considered stars with $\mu_l \le -2 \,\rm mas\,yr^{-1}$ to belong to the bulge, following the criteria adopted in Paper I. This selection allowed us to keep $\approx$ 30\% of bulge members, while the residual contamination of the sample by disk stars is $\lesssim$ 1\%. We ended up with a sample of 67,765 bulge MS stars. Note that the total contamination by disk stars in the SWEEPS field is $\approx$ 10\%, as shown in the previous work of \citet{clarkson08}. Fig.~4 shows the $F606W$ (left panel) and the $F814W$ (right panel) versus $(F606W - F814W)$ CMDs of the selected bulge MS stars. The magnitude range covered by MS stars is different when observing with the $F606W$ or the $F814W$ filter, decreasing from $\sim$ 8.5 to 7 magnitudes. This happens because very low-mass MS stars are cooler and thus relatively more luminous at longer wavelengths. The CMDs of Fig.~4 show that the color spread of the MS did not substantially decrease compared to the CMD of Fig.~1, confirming that most of the color dispersion is due to the spread in metallicity, the presence of some amount of differential reddening, depth effects, and binaries. Fig.~5 shows the observed PM-cleaned bulge MS luminosity function (dashed line) based on the $F814W$ magnitude for the stars plotted in the CMDs of Fig.~4. The completeness measured from the AS test is used to correct the number of observed stars per magnitude bin, and the corrected luminosity function is over-plotted in the same figure as a solid line. We applied the completeness correction by binning on the observed magnitudes; in this way we take into account the uncertainties due to the photometric errors moving stars among the magnitude bins. The $F814W$-band corrected luminosity function of Fig.~5 extends from just below the bulge MSTO, at $F814W \sim$ 19 mag, down to $F814W \sim$ 26 mag, where the completeness level is $\sim$ 30\%.
A similar luminosity function is obtained by adopting the $F606W$ magnitude. \section{The mass--luminosity relation}\label{mass} In Paper I we used the BaSTI \footnote{http://albione.oa-teramo.inaf.it/} \citep{pietrinferni04, pietrinferni06} stellar-evolution database to fit isochrones to the bulge CMD. In order to extend the BaSTI isochrones\footnote{In their standard format, the minimum initial mass in the BaSTI isochrones is equal to ${\rm 0.5 M_\odot}$. The BaSTI isochrones extended into the VLM star regime are available at the BaSTI URL repository.} to the range of very low-mass stars (${\rm M < 0.5M_\odot}$), we computed very low-mass (VLM) stellar models for exactly the same chemical compositions as the BaSTI ones, by adopting the same physical inputs used in \citet[hereafter CA00]{cassisi00}. We note that the accuracy and reliability of the BaSTI models and their extension to the VLM stellar regime have been extensively tested by comparing them with observed CMDs and mass-luminosity (M-L) datasets for both field and cluster stars. As a result, a very good level of agreement has been obtained with the various observational constraints \citep{cassisi00, bedin09, cassisi09, cassisi11, cassisi14}. Since the VLM stellar models have been computed by using a different physical framework compared to the models of more massive stars in the BaSTI library (see \citet{pietrinferni04} and CA00 for more details on this issue), one can expect that, in the stellar mass regime corresponding to the transition between the BaSTI and the VLM stellar models, occurring at about ${\rm \sim 0.6M_\odot}$, some small mismatch in surface luminosity and effective temperature at a given mass is possible. Since in retrieving the IMF one has to rely on the first derivative of the theoretical M-L relation, it is important to eliminate any such discontinuity in the M-L relation \citep{kroupa97}.
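The role of the first derivative can be made explicit with a minimal sketch of the luminosity-function-to-mass-function conversion. Both the M-L relation and the luminosity function below are flat, monotonic placeholders, not the BaSTI models or the measured luminosity function:

```python
import numpy as np

# Toy mass-luminosity relation: F814W magnitude as a function of mass
# (linear placeholder, NOT the BaSTI relation used in the paper)
mass = np.linspace(0.15, 1.0, 200)            # Msun
mag = 26.0 - 8.0 * (mass - 0.15) / 0.85       # brighter at higher mass

# Toy completeness-corrected luminosity function dN/dmag (stars per mag)
lf = 1.0e3 * np.ones_like(mag)

# dN/dM = dN/dmag * |dmag/dM|: the derivative of the M-L relation enters
# directly, which is why any discontinuity in it must be removed
dmag_dM = np.gradient(mag, mass)
mf = lf * np.abs(dmag_dM)
print(f"dN/dM at 0.5 Msun ~ {np.interp(0.5, mass, mf):.0f} stars/Msun")
```

A jump in $dmag/dM$ at the transition mass would appear as a spurious feature in the derived mass function, even for a perfectly smooth luminosity function.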
To this aim, we devoted a significant effort - which included computing additional stellar models using both the physical inputs adopted for the BaSTI library and those used by CA00 - in order to match the two model datasets at a stellar mass with (almost) the same luminosity and effective temperature. The evolutionary predictions were transformed from the theoretical to the observational plane by adopting the color--$T_{\rm eff}$ relations and bolometric-correction scale for the ACS filters provided by \citet{hauschildt99} for $T_{\rm eff} \le 10,000$~K, while at larger $T_{\rm eff}$ we adopted the relations published by \citet{bedin05a}. Fig.~6 shows selected scaled-solar isochrones\footnote{Our referee correctly pointed out that bulge stars appear to be $\alpha$-enhanced up to about solar metallicity \citep{zoccali08, gonzalez11, johnson11, johnson13}. However, we decided in the present work to adopt scaled-solar models due to the lack of suitable $\alpha$-enhanced VLM star sequences in a wide metallicity range. This notwithstanding, we note that all the comparisons in the present paper are performed at constant global metallicity (and not at constant $[Fe/H]$), and it is well known that $\alpha$-enhanced stellar models are nicely mimicked by scaled-solar ones with the same global metallicity (see, e.g., Pietrinferni et al. 2006 and references therein).} for the same age, $t$ = 11 Gyr, and different metallicities, $Z$ = 0.008, 0.0198, 0.03, plotted in the $F814W$ versus $\log (M/M_{\odot})$ plane. We selected models with this age and these abundances based on the fit of the bulge CMD performed in Paper I (see Fig.~2) and on the spectroscopic metallicity distribution for this field. In the same plot, a solar-metallicity isochrone for an age of 8 Gyr is also shown (blue solid line). As expected, in the explored age and stellar mass range, the M-L relation is completely unaffected by an age change.
In order to check the impact on the adopted M-L relation of the use of a different bolometric correction scale, we also plotted the 11 Gyr, solar-metallicity isochrone transferred to the observational plane by using the standard Johnson bolometric corrections provided by \citet{pietrinferni04} and the transformations from the Johnson to the HST photometric system by \citet{sirianni05} (red solid line). The five mass--luminosity relations all show a slight change of slope around $\log (M/M_{\odot}) \approx -0.3$ ($M \approx$ 0.5 $M_{\odot}$). This inflection point is due to molecular hydrogen recombination occurring at a mass of $\approx 0.5M_\odot$; the formation of the $H_2$ molecule changes the value of the adiabatic gradient and, hence, the thermal stratification of the stellar structure (see Cassisi et al. 2000 and references therein). Fig.~6 also shows the impact of using various metallicities or ages on the selected M-L relation. As discussed, for old ages, $t \ge 8$ Gyr, suitable for the Galactic bulge population under scrutiny, the exact value of the selected age is quite irrelevant. On the other hand, the change in the mass derived (at a fixed magnitude) using two different mass-luminosity relations corresponding to $Z=0.008$ (the most metal-poor chemical composition we selected) and $Z=0.03$ (our most metal-rich composition) is only about 0.04--0.08 $M_{\odot}$ in the high-mass range ($M >$ 0.5 $M_{\odot}$), and 0.02--0.04 $M_{\odot}$ in the lower-mass range. The spectroscopic metallicity distribution we derived for the SWEEPS field, as discussed in \S 3.1 and Paper I, spans a range of metallicity from $[M/H] \sim -0.8$ to $\sim 0.6$, i.e. more than 1 dex. However, the distribution shows three peaks, at $[M/H] \sim -0.4$, 0.0 and $0.3$, and most of the stars, $\sim$ 85\%, are included in the range $-0.5 < [M/H] < 0.5$.
We can then safely assume the aforementioned metallicity values, $Z = 0.008$ and $Z = 0.03$, corresponding to the more metal-poor and the more metal-rich peaks of the distribution, to test the effect of metallicity on the mass estimate. However, we also tested the effect of further decreasing the metallicity of the adopted models, by using an isochrone for $Z = 0.002$ and the same age, $t =$ 11 Gyr, to convert luminosities into masses. In this case, the mass estimate changes by $\sim$ 17\% in the entire mass range, when going from the more metal-rich model, $Z = 0.03$, to the more metal-poor, $Z = 0.002$. For a small fraction of stars in our field, less than $\sim$ 10\%, the mass estimate will thus have a $\sim$ 5\% larger uncertainty. The impact on the derived masses of using a different bolometric correction scale to transfer the models from the theoretical to the observational plane is smaller, of the order of $\approx$ 0.005 $M_{\odot}$, in the entire mass regime. \begin{figure*} \begin{center} \includegraphics[height=0.7\textheight,width=0.57\textwidth, angle=90]{cmd_MSbulgecut.ps} \caption{Left: PM-cleaned bulge MS $F606W,\ F606W - F814W$ CMD; note that 70\% of the bulge stars were rejected because of the PM selection. Error bars are also shown. Right: Same stars plotted on the $F814W,\ F606W - F814W$ CMD.} \label{fig4} \end{center} \end{figure*} \section{The Galactic bulge initial mass function}\label{initial} The mass-luminosity relation we obtained for MS stars by using BaSTI isochrones is only the first step towards determining the IMF of the Galactic bulge. Uncertainties due to the assumed distance and reddening, the presence of differential reddening, metallicity dispersion, depth effects, and the presence of binaries need to be taken into account. Following \citet{sahu06} and Paper I, we fitted the bulge CMD using a distance modulus of $\mu_0 = 14.45$ mag \citep{sahu06}, a mean reddening of $E(B-V)$ = 0.5 mag, and a standard reddening law.
Extinction coefficients for the WFC filters are estimated by applying the \citet{cardelli89} reddening relations and by adopting a standard reddening law, $R_V = A_V/E(B-V) = 3.1$, finding $A_{F606W} = 0.922 \times A_V$, $A_{F814W} = 0.55 \times A_V$, and $E(F606W - F814W)= 1.14 \times E(B-V)$. It is worth mentioning that if we use the reddening value estimated by \citet{nataf13} for the SWEEPS field, $E(V-I) = 0.79$, and their extinction curve, $R_V = A_V/E(B-V) = 2.5$, we obtain $E(B-V) = 0.47$, in good agreement with the value we assumed. We used the $F814W$-band luminosity function to probe the bulge IMF since low-mass MS stars are brighter in the redder band, so the photometry in this filter is more complete and accurate than in the $F606W$ filter for the same mass (see the CMDs in Fig.~4). We converted observed magnitudes into masses using the mass-luminosity relation for solar metallicity, $Z$ = 0.0198, and for an age of $t$ = 11 Gyr, transformed by using the color--$T_{\rm eff}$ relations by \citet{hauschildt99}. As we showed in the previous section, age does not significantly affect the mass-luminosity relation for $t \ge$ 8 Gyr, and observational evidence shows that most bulge stars in our field are older than 8 Gyr \citep[Paper I]{clarkson08}. \begin{figure} \begin{center} \includegraphics[height=0.4\textheight,width=0.5\textwidth]{funF814W_dm1450_red05_Z22.ps} \caption{$F814W$-band observed luminosity function for the Galactic PM-cleaned bulge MS stars (dashed line) and the luminosity function corrected for completeness (solid line). } \label{fig5} \end{center} \end{figure} In order to estimate the effect of dispersion in metallicity, we computed the difference in the masses derived by adopting three different metallicities: solar ($Z = 0.0198$), metal-rich ($Z = 0.03$), and metal-poor ($Z = 0.008$).
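As a quick numerical cross-check of the extinction coefficients above, the following sketch (the function name and structure are ours, for illustration only; the coefficients are those quoted in the text) converts a given $E(B-V)$ into the ACS/WFC extinctions:

```python
# Extinction in the ACS/WFC filters from E(B-V), using the coefficients
# quoted in the text (Cardelli et al. 1989 relations with R_V = 3.1).
R_V = 3.1

def acs_extinction(ebv):
    """Return (A_F606W, A_F814W, E(F606W - F814W)) for a given E(B-V)."""
    a_v = R_V * ebv
    return 0.922 * a_v, 0.55 * a_v, 1.14 * ebv

# For the mean reddening adopted here, E(B-V) = 0.5 mag:
a606, a814, ecolor = acs_extinction(0.5)
```

For $E(B-V) = 0.5$ this gives $A_{F606W} \simeq 1.43$, $A_{F814W} \simeq 0.85$ and $E(F606W - F814W) \simeq 0.57$; note that $(0.922 - 0.55) \times 3.1 \simeq 1.15$, consistent with the quoted color-excess coefficient of 1.14.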
For magnitudes in the range $18.0 < F814W < 26$, this changes the inferred masses by 0.02 to 0.08 $M_{\odot}$, resulting in an uncertainty of $\approx$ 8\% in mass. We also varied the assumed distance modulus by 0.2 mag, from 14.35 to 14.55 mag, corresponding to a depth of $\sim$ 1 kpc, and the extinction, $E(B-V)$, from 0.45 up to 0.55 mag. Both the distance and the reddening uncertainties affect the derived masses by $\approx$ 0.01 $M_{\odot}$ over the entire mass range, i.e. 1--5\%. Similarly, adopting different color--$T_{\rm eff}$ relations changes the derived masses by less than 2\%. By summing in quadrature the uncertainties related to the parameters of the bulge CMD as fitted to our data, including metallicity, distance, reddening, and color--$T_{\rm eff}$ relations, we obtain a final systematic uncertainty on the mass estimate for each star. This uncertainty varies depending on the inferred mass and is carried through the remainder of the analysis. \begin{figure} \begin{center} \includegraphics[height=0.4\textheight,width=0.5\textwidth]{mass_lumF814W.ps} \caption{Theoretical mass-luminosity relations for different metallicities, ages, and color-temperature relations.} \label{fig6} \end{center} \end{figure} \subsection{The effect of unresolved binaries} Unresolved binaries, i.e. the expected presence of equal- or lower-mass binary companions for many of the main-sequence stars we observe, are likely to affect the inferred IMF of the bulge, especially at lower masses ($M < 0.5 M_\odot$; see, e.g., \citet{kroupa91} and \citet{kroupa01}). The availability of photometry in two different filters for a large fraction of our stars potentially allows us to correct for the effect of unresolved binaries, as binary systems will be somewhat redder than single stars of the same apparent brightness.
However, the photometry, in particular at the faint end, is not sufficiently accurate for a direct identification of individual binary systems; our correction must therefore be probabilistic. Neither the fraction of binaries nor the distribution of mass ratios for the Galactic bulge is well constrained. However, in Paper I we showed that there is evidence for a substantial fraction of He-core white dwarfs in the bulge, based on the color dispersion of the cooling sequence and the comparison between star counts and predicted evolutionary lifetimes. According to standard stellar evolution models, He-core white dwarfs can only be produced in a Hubble time by stars experiencing extreme mass-loss events, such as in compact binaries. Indeed, in Paper I we reported our finding of two dwarf novae in outburst and five candidate cataclysmic variables in the same field, both of which are characteristic of a population of binaries. Our evidence at the time suggested that the Galactic bulge has a binary fraction larger than 30\%. For the present analysis, we assume that the distribution of mass ratios of binary stars in the bulge follows the distribution derived by \citet[hereafter DM]{duquennoy91} for a sample of 164 F- and G-dwarf stars in the solar neighborhood. The distribution is a Gaussian in the mass ratio $q$ and is described by the functional form: \begin{equation} \xi (q) = C \exp \left \{ \frac{-(q-\mu)^{2}}{2 \sigma_q ^{2}} \right \} \end{equation} in the interval [0,1], where $q = M_2/M_1$, $\mu$ = 0.23, $\sigma_q$ = 0.42, and $C$ = 10,900 for our sample of bulge stars. We also repeated the experiment by assuming a flat mass-ratio distribution, similar to the distribution found by \citet{raghavan10} based on data for 454 F- to K-dwarf stars within 25 pc of the Sun. \begin{figure} \begin{center} \label{fig7} \includegraphics[height=0.4\textheight,width=0.5\textwidth]{IMFnew.ps} \caption{IMF of the Galactic bulge.
The two power laws that fit the IMF are overplotted, for a slope $\alpha$ = -2.41$\pm$0.50 (dotted line) and $\alpha$ = -1.25$\pm$0.19 (dashed), together with a log-normal function with $M_c =$ 0.25 and $\sigma$ = 0.50 (green solid). The Salpeter mass function (blue dashed-dotted line) and the Chabrier log-normal function (red dashed double dotted) are also shown. Error bars are displayed. } \end{center} \end{figure} In a simplified Bayesian approach, we use the fraction of binaries and the distribution of mass ratios from Equation (1) as a prior for the presence and mass of binary companions, and then use the observed photometry to determine the posterior probability distribution of companion mass for each star in our sample. For simplicity, for each observed bulge star we consider 11 discrete options $J, J = 0, \dots, 10$. The option $J = 0$ implies a single star; $J > 0$ implies a binary system with mass ratio $q_J = J/10$. The prior probability $Pr_J$ of each option is consistent with the DM distribution with an overall binary fraction of 50\%; thus $Pr_0 =$ 0.5, $Pr_{1-10} =$ 0.07, 0.072, 0.07, 0.063, 0.06, 0.05, 0.04, 0.035, 0.025, 0.015. For each value of $J$, the total system mass $(M_T)_J = (M_1)_J + (M_2)_J$ is chosen so as to match the total flux in the $F814W$ filter, using the appropriate mass-luminosity relation for age, metallicity and distance for both components (or only one component if $J = 0$). We then compute the likelihood $P_{D|M} = P (\hbox{Data} | \hbox{Model})$ of the measured total flux in the $F814W$ filter, given the model, using a Gaussian distribution for the flux with the realistic photometric errors derived above. To the photometric errors derived from the AS (artificial-star) tests, we added errors due to the presence of a metallicity spread, differential reddening, and depth effects.
These have been derived by using mass-luminosity relations for different metallicities, and by varying the distance modulus by 0.2 mag and the reddening by 0.1 mag, as described in \S 5. To each observed star we thus assign a probability distribution function (PDF) of the component masses over the allowed values of the mass ratio between components, according to the classic Bayes formula: $P_J = Pr_J \times P_{D|M} / P (\hbox{Data})$, where $P(\hbox{Data})$ is a normalization factor chosen to take into account the estimated completeness correction. Finally, we generate multiple realizations of the full stellar mass function by randomly drawing stellar distributions with the probabilities thus determined. This procedure allows us to better understand and quantify the uncertainties arising from the correlated nature of the probabilities for each object (e.g., only one value of the mass ratio can be selected for each system). \begin{figure} \begin{center} \includegraphics[height=0.37\textheight,width=0.5\textwidth]{IMFconfro.ps} \caption{IMFs of the Galactic bulge derived by assuming different fractions of binaries and a DM mass-ratio distribution for the binaries. } \label{fig8} \end{center} \end{figure} In practice, larger mass ratios $q_J$ generally correspond to redder $F606W - F814W$ colors at a given $F814W$-band flux; thus stars that lie red-ward of the main sequence of Fig.~4 will generally favor larger mass ratios, while stars located near the main sequence will be consistent with a single star or with a low-mass binary companion which contributes little to the total flux. However, note that for many stars the photometric error is large enough that photometry (through the term $P_{D|M}$) does not provide a strong discriminant; for such cases, the final probability $P_J$ for each option is the same as the prior probability $Pr_J$.
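The discrete posterior computation described above can be summarized in a minimal sketch (an illustration, not our production code; the priors are the $Pr_J$ values quoted above, and the function name is ours):

```python
import math

# Priors Pr_J for the 11 options J = 0..10 (mass ratio q_J = J/10):
# 50% single stars (J = 0), the rest following the DM distribution.
PRIORS = [0.5, 0.07, 0.072, 0.07, 0.063, 0.06,
          0.05, 0.04, 0.035, 0.025, 0.015]

def posterior(model_fluxes, observed_flux, sigma):
    """P_J = Pr_J * P(D|M_J) / P(Data), with a Gaussian flux likelihood.
    model_fluxes[J] is the total F814W flux predicted for option J."""
    likelihood = [math.exp(-0.5 * ((f - observed_flux) / sigma) ** 2)
                  for f in model_fluxes]
    unnorm = [p * l for p, l in zip(PRIORS, likelihood)]
    norm = sum(unnorm)
    return [u / norm for u in unnorm]
```

When the photometric error `sigma` is much larger than the flux differences between the options, the likelihood term is flat and the posterior reverts to the prior, as noted above; with a small `sigma`, the option whose predicted flux matches the observation dominates.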
By not taking into account the photometric color information, for instance, the distribution changes by $\approx$ 2--7\% in the VLM range, and by less than 1\% at higher masses, i.e. $M >$ 0.5 $M_{\odot}$. As discussed in the following subsection, undetected binaries have a substantial impact on the inferred mass function, especially below $\approx$ 0.5 $M_\odot$. However, we must remark that the distribution of binary properties is uncertain and poorly constrained by the data at hand; changing the assumed binary fraction and the a priori distribution of mass ratios would also alter the derived mass function, as we show in Section 5.2. The treatment above is somewhat simplified in comparison with a fully Bayesian approach, in which we would consider fully the uncertainties in the parameters of the model (metallicity, distance, reddening variation), using for each an appropriate distribution rather than the ``best'' values. We defer this more complex and computationally expensive approach to the analysis of the full data set, including one more season of photometry and eleven additional fields. \subsection{Discussion}\label{discuss} One of the realizations of the Galactic bulge IMF is shown in Fig.~7. Error bars also include the uncertainties that come from statistical noise in the star counts. We generated 10,000 realizations of the same mass function and fitted them by adopting two power laws. The fit was performed by varying the mass break-point in the range $-0.3 \le \log (M/M_{\odot}) \le -0.2$, and the lowest chi-square was obtained for a value of $\log (M/M_{\odot}) = -0.25$ ($M$ = 0.56 $M_{\odot}$). The best estimates of the power-law slopes are $\alpha = -2.41 \pm$0.50 (dotted line) for higher masses, and $\alpha = -1.25 \pm$0.19 (dashed) for lower masses, where $dN/dM$ = $C M^{\alpha}$.
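For reference, the two-slope IMF just described can be written as a continuous broken power law; a minimal sketch (the normalization is arbitrary and the function name is ours):

```python
# Broken power law dN/dM = C * M**alpha with the break point and slopes
# quoted in the text; C is an arbitrary normalization here.
M_BREAK = 0.56       # M_sun, i.e. log(M/M_sun) = -0.25
ALPHA_HIGH = -2.41   # slope for M >= M_BREAK
ALPHA_LOW = -1.25    # slope for M <  M_BREAK

def imf(m, c=1.0):
    """dN/dM at mass m (in M_sun), continuous across the break."""
    if m >= M_BREAK:
        return c * m ** ALPHA_HIGH
    # rescale the low-mass segment so the two pieces join at M_BREAK
    return c * M_BREAK ** (ALPHA_HIGH - ALPHA_LOW) * m ** ALPHA_LOW
```

The rescaling factor enforces continuity at the break, mirroring how the two fitted segments are required to join at $\log(M/M_\odot) = -0.25$.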
We also fitted the Galactic bulge IMF by using a log-normal function described by the functional form: \begin{equation} \xi (\log M) = C \exp \left \{ -\frac{[\log (M) - \log(M_c)]^{2}}{2 \sigma ^{2}} \right \} \end{equation} with $M_c$ = 0.25$\pm$0.07 and $\sigma$ = 0.50$\pm$0.01 (solid green line in Fig.~7). The power-law slope for the high-mass range ($M >$ 0.56 $M_{\odot}$) agrees very well with the Salpeter IMF ($\alpha = -2.35$) derived for solar neighborhood stars in the mass range 0.3 -- 10 $M_{\odot}$ (blue dashed-dotted line in Fig.~7). The log-normal mass function derived for disk stars closer than 8 pc in the mass range 0.08 to 1.0 $M_{\odot}$ by \citet{chabrier03, chabrier05} has $M_c$ = 0.20 and $\sigma$ = 0.55 (dashed-double-dotted red line) and agrees well with our Galactic bulge IMF. We derived the IMFs using the same method described in the previous section for different values of the binary fraction, assuming a flat distribution of mass ratios, and a distribution given by DM. Fig.~8 shows one realization for each of the IMFs derived for different binary fractions and the DM mass-ratio distribution. In general, the IMF has two distinct slopes in the low- and high-mass ranges, and the slopes have only a weak dependence on the assumed mass-ratio distribution for the binaries. If we assume the DM mass-ratio distribution for the binary components, the slopes of the IMF at higher masses are $-2.25, -2.36, -2.41$ and $-2.53$, for a bulge binary fraction of 0, 30, 50 and 100\%, respectively. If we assume a flat mass-ratio distribution, the slopes change only by 1 to 4\% for binary fractions of 0 to 100\%. The corresponding slopes in the low-mass range are $-0.89, -1.12, -1.25$ and $-1.51$ for the DM mass-ratio distribution, and they change by $3-4\%$ for a flat mass-ratio distribution. Full details, including the error bars, are given in Table 1.
These results indicate that the effect of the presence of unresolved binaries is more pronounced in the low-mass range ($\sim$50\%) than in the high-mass range ($\sim$12\%). In the rest of the discussion, we use the IMF derived by assuming a binary fraction of 50\% and a DM mass-ratio distribution. As discussed in \S 5.1, the presence of a substantial fraction of He-core WDs in the Galactic bulge suggests that the fraction of binaries in the bulge is larger than 30\%. \section{Comparison with other initial mass functions} \subsection{Galactic bulge} The Galactic bulge mass function was first measured by HO98, based on a set of observations of Baade's window $(l = 1^\circ, b = -4^\circ)$ collected with the Wide Field Planetary Camera 2 (WFPC2) on board HST. These data allowed them to derive a luminosity function down to $F814W \sim$ 24.3, corresponding to $M \sim$ 0.3 $M_{\odot}$. No information on proper motions was available, so disk stars are included in their study. However, they applied a correction for the presence of unresolved binaries and found that the IMF of the bulge has a power-law slope of $\alpha = -2.2$ in the high-mass range. The slope of the IMF flattens at $\sim 0.7 M_{\odot}$, with $\alpha = -0.9$ for a binary fraction of 0\% and $-1.3$ for 50\% (see Table~2). The HO98 result for an assumed binary fraction of 50\% agrees quite well, within the uncertainties, with what we obtained in our analysis under the same assumption on binaries, but the change of power-law slope occurs at a lower mass, $\sim$ 0.56 $M_{\odot}$, in our bulge IMF. A second study of the Galactic bulge mass function was published by ZO00, based on a set of observations collected in the $F110W$ and $F160W$ filters with NICMOS on board HST, covering a $22.5'' \times 22.5''$ field of view in a region of the bulge South of Baade's window $(l = 0.277^\circ, b = -6.167^\circ)$.
To convert magnitudes to masses they used a mass-luminosity relation based on the same stellar models adopted in this investigation. They also did not have proper-motion information to separate bulge from disk stars, nor did they apply a correction for the presence of unresolved binaries. However, ZO00 applied an overall reduction of the luminosity function by 11\% for magnitudes brighter than $J <$ 17, to take into account the contamination by disk stars. By fitting the IMF with a single power law they obtained a slope of $\alpha = -1.33\pm$0.07 over the mass range 0.15 $< M/M_{\odot} <$ 1.0, while by using two different power laws they obtained $\alpha = -2.00\pm$0.23 for masses $M >$ 0.5 $M_{\odot}$, and $\alpha = -1.43\pm$0.13 for lower masses (see Table~2). If we fit our IMF by using a single power law for the entire mass range (0.15 $< M/M_{\odot} <$ 1.0), we obtain a range of slopes from $\alpha = -1.14\pm$0.10 for no binaries to $\alpha = -1.56\pm$0.10 for 100\% binaries. The slope of our IMF not corrected for the presence of unresolved binaries is then shallower than the slope of the IMF obtained by ZO00 ($-1.14$ vs $-1.33$). Moreover, the same IMF shows a much shallower slope in the low-mass regime compared to the ZO00 mass function ($-0.89$ vs $-1.43$). This discrepancy might be due to the residual contamination by disk stars of the ZO00 sample. Part of the difference could also be due to an intrinsic difference between the stars observed by ZO00 and the stars in the SWEEPS field. From spectra collected by our group, the stars in this region of the bulge have a metallicity distribution similar to that of the stars in Baade's Window, with main peaks at $[M/H] \approx -0.4, 0$ and 0.3 \citep{hill11, bensby13, ness13}.
The metallicity distribution of the region of the Galactic bulge observed by ZO00 shows a decrease in the fraction of metal-rich stars, with the average metallicity decreasing from $[Fe/H] \sim +0.03$ in Baade's window down to $[Fe/H] \sim -0.12$ \citep{zoccali08}. However, such a small difference in the metallicity distribution cannot account for a $\sim$ 20\% difference in the IMF slope. \subsection{Galactic disk} The Galactic disk mass function has been constrained in the low-mass regime, down to the hydrogen-burning limit and into the brown dwarf regime, by various studies. \citet{salpeter55} derived the ``original mass function'' for solar neighborhood stars in the range 0.3 $\lesssim M/M_{\odot} \lesssim$ 10 and fitted it by using a single power law with a slope of $\alpha = -2.35$. Later studies found that the disk mass function can be reproduced either by a segmented power law or by a log-normal function. Table~2 lists the power-law slopes, characteristic masses, and sigmas used by different studies to fit the Galactic disk mass function. \citet{kroupa93} and later \citet{kroupa01} derived the IMF for disk stars within 5.2 pc of the Sun by taking into account a correction for the presence of unresolved binaries, fitting it with a power law with a slope of $\alpha = -2.2\pm$0.3 in the mass range 0.5 $< M/M_{\odot} <$ 1.0 and of $\alpha = -1.3\pm$0.5 in the range 0.08 $< M/M_{\odot} <$ 0.5. \citet{gould97} based their study of the Galactic disk mass function on photometry collected with the WFPC2 and WFPC1 on board HST. They observed a sample of 337 stars distributed in different regions of the disk and found a mass function with a slope close to Salpeter, $\alpha \sim -2.2$, for masses in the range 0.6 $< M/M_{\odot} <$ 1.0, and $\alpha \sim -0.9$ for lower masses.
\citet{reid02} observed a sample of 558 main-sequence stars in the solar neighborhood in the mass range 0.1 $< M/M_{\odot} <$ 3.0 and found that a power law with a slope of $\alpha = -1.3$ fits the mass function in the low-mass range, i.e. for stars with $M <$ 0.7 $M_{\odot}$. \citet{chabrier05} adopted a log-normal function to fit the Galactic disk IMF for single stars in the mass range 0.08 $< M/M_{\odot} <$ 1.0, and found a characteristic mass $M_c =$ 0.20$\pm$0.02 and $\sigma =$ 0.55$\pm$0.05. More recent analyses based on the Sloan Digital Sky Survey (SDSS) and the Two Micron All Sky Survey (2MASS) data confirmed previous results, showing that the Galactic disk mass function can be reproduced either by assuming a segmented power law with slopes of $\alpha = -2.04/-2.66$ and $\alpha = -0.8/-0.98$ for the high- and low-mass ranges, respectively, or by a log-normal function with $M_c =$ 0.20/0.50, $\sigma =$ 0.22/0.37 (\citealt{covey08, bochanski10}). The IMF we derived for the Galactic bulge is in very good agreement, within uncertainties, with the mass functions obtained by \citet{kroupa01} and \citet{chabrier05} for the disk. On the other hand, the mass functions derived for the disk by \citet{covey08} and \citet{bochanski10} have a slightly shallower slope compared to our bulge IMF in the low-mass regime (see Table~2), although the two mass functions would agree at lower masses by assuming the presence of no binaries in the bulge. \section{The stellar mass--to--light ratio of the Galactic bulge} The stellar mass--to--light ratio ($M/L$) is an important parameter of a stellar population and depends on its IMF. We used the mass function derived in this work and the total luminosity of the stars observed in the SWEEPS field to estimate the stellar $M/L$ of the Galactic bulge in the $F814W$ and the $F606W$ filters.
We obtain a total mass for bulge stars in the SWEEPS field included in the mass range adopted to estimate the IMF, 0.16 $\le M/M_{\odot} \le$ 1.0, of 137,527$\pm$23,400 $M_{\odot}$. By extrapolating the IMF with a power-law slope of $\alpha = -1.25$ down to the hydrogen-burning limit, we get an extra mass of 14,310$\pm$2,400 $M_{\odot}$, for a total mass of $\approx$ 152,000$\pm$23,500 $M_{\odot}$ included in the 0.10 $\le M/M_{\odot} \le$ 1.0 mass range. Uncertainties take into account the error budget of the derived IMF. A constant mass of 1.0 $M_{\odot}$ is assumed for bulge sub-giant, red-giant, and red-clump stars, for a total mass of 4,116 $M_{\odot}$. We do not take into account the mass loss along the RGB, but since the total mass of the giants is already very small compared to the mass of the MS stars, this has no effect on the final derivation of the mass-to-light ratio. We then assume that the IMF of the Galactic bulge has a constant Salpeter power-law slope for masses larger than 1.0 $M_{\odot}$ and up to 120 $M_{\odot}$, and we integrate the IMF to obtain the number of stars that formed in this mass range. To estimate the mass currently in stellar remnants in the bulge we follow the prescriptions of \citet{percival09}: for stars with mass (i) 1 $< M/M_{\odot} \le$ 10, the remnant is a white dwarf; (ii) 10 $< M/M_{\odot} \le$ 25, the remnant is a neutron star; and (iii) $M >$ 25 $M_{\odot}$, the remnant is a black hole. In order to estimate the mass of white dwarf remnants, we used the initial--to--final mass relation by \citet{salaris09}, $M_f =$ 0.084 $M_i$ + 0.466, for initial masses less than 7 $M_{\odot}$, and a constant final mass of 1.3 $M_{\odot}$ for initial masses in the range 7 $< M/M_{\odot} \le$ 10, obtaining a total mass in white dwarfs of 53,912$\pm$9,200 $M_{\odot}$.
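The white-dwarf bookkeeping above reduces to a simple piecewise relation; as an illustration (the function name is ours, the coefficients are from the Salaris et al. 2009 relation quoted above):

```python
def wd_final_mass(m_init):
    """White-dwarf final mass (M_sun) for a progenitor of initial mass
    m_init (M_sun), valid for 1 < m_init <= 10 as in the text."""
    if m_init <= 7.0:
        return 0.084 * m_init + 0.466   # linear initial-to-final relation
    return 1.3                          # constant for 7 < M/M_sun <= 10
```

For example, a 1 $M_\odot$ progenitor leaves a $\simeq 0.55\, M_\odot$ white dwarf under this relation.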
For neutron stars we assumed a constant mass of 1.4 $M_{\odot}$, and for black holes a mass equal to 1/3 of the initial mass, obtaining total remnant masses of 3,905$\pm$600 and 11,151$\pm$1,900 $M_{\odot}$ for neutron stars and black holes, respectively. By using the aforementioned values we find that the total stellar mass in the bulge SWEEPS field is $M =$ 228,814$\pm$25,300 $M_{\odot}$. \begin{table} \begin{center} \caption{Power-law slopes of the IMFs derived by assuming different binary fractions and mass-ratio distributions for the Galactic bulge.} \label{table:1} \begin{tabular}{l c c c} \hline\hline Binary fraction & Mass-ratio & $\alpha_{High}$ & $\alpha_{Low}$ \\ \hline\hline 0 & \ldots & $-2.25\pm$0.50 & $-0.89\pm$0.20 \\ 30 & DM & $-2.36\pm$0.51 & $-1.12\pm$0.19 \\ 50 & DM & $-2.41\pm$0.50 & $-1.25\pm$0.19 \\ 100 & DM & $-2.53\pm$0.51 & $-1.51\pm$0.20 \\ 30 & Flat & $-2.39\pm$0.51 & $-1.16\pm$0.19 \\ 50 & Flat & $-2.45\pm$0.51 & $-1.29\pm$0.19 \\ 100 & Flat & $-2.62\pm$0.52 & $-1.55\pm$0.20 \\ \hline\hline \end{tabular} \end{center} \end{table} We estimated the flux emitted by bulge stars in the SWEEPS field by using the proper-motion-cleaned photometric catalog, corrected for the fraction of stars rejected by the PM selection and for completeness. We thus obtained total luminosities of $L_{F814W} \approx$ 104,000$\pm$2,000 $L_{\odot}$ and $L_{F606W} \approx$ 71,000$\pm$1,400 $L_{\odot}$. The stellar mass--to--light values based on our IMF and the photometric catalog for the SWEEPS field are then $M/L_{F814W} =$ 2.2$\pm$0.3 and $M/L_{F606W} =$ 3.2$\pm$0.5. We also estimated the stellar mass included in our field by using two other assumptions for the mass distribution at masses larger than 1 $M_{\odot}$: constant slopes of $\alpha = -2.0$ and $\alpha = -2.7$.
In the first case, we obtain a larger total stellar mass, $M =$ 254,505$\pm$28,100 $M_{\odot}$, and larger mass--to--light values, $M/L_{F814W} =$ 2.4$\pm$0.4 and $M/L_{F606W}$ = 3.6$\pm$0.6, while in the second case we obtain smaller values, $M =$ 219,079$\pm$24,200 $M_{\odot}$, $M/L_{F814W} =$ 2.1$\pm$0.3 and $M/L_{F606W}$ = 3.1$\pm$0.5. The total mass of the observed field and the stellar mass--to--light values estimated for the different cases are listed in Table~3. Finally, we also explored a more theoretical route and computed the average bulge luminosity in the SWEEPS field by using two different synthetic population codes, the one by \citet{cignoni13} and the BaSTI one. For both simulations we generated a synthetic stellar population with properties resembling those of the Galactic bulge: solar metallicity, a constant star formation rate between 12 and 10 Gyr, our IMF, a binary fraction of 50\%, a distance modulus of 14.45 mag, and a reddening $E(B-V) =$ 0.5 mag. In the first case we used the latest PARSEC stellar models \citep{bressan12}. Simulations were run until the number of stars in the magnitude range 20 $\le F606W \le$ 22 matched the observed number ($\sim$ 25,000 stars). This experiment was repeated 1,000 times. We found average values of $L_{F606W} \sim 58,900$ and $L_{F814W} \sim 92,800$. In order to evaluate the effect of metallicity dispersion we also tested different $Z$ values, namely 0.008 and 0.03, corresponding to the metal-poor and metal-rich peaks of the metallicity distribution of the considered field. In these cases we found $L_{F606W} \sim 70,800$ and $L_{F814W} \sim 101,600$ for the former metallicity, and $L_{F606W} \sim 53,800$ and $L_{F814W} \sim 87,500$ for the latter. As expected, lowering the metallicity causes an increase in the luminosity of the system. Interestingly, the luminosity values estimated for the lower metallicity, $Z = 0.008$, agree quite well with the observed values, while the values for the higher metallicities are systematically lower than our flux estimates.
Part of this discrepancy may be due to the possibility that a few very bright thin-disk stars are still contaminating our data, raising the inferred luminosities. In addition, the current PARSEC models lack the asymptotic giant branch stellar phase, hence the predicted luminosities are likely to be underestimated. We repeated the same experiment using the BaSTI models for the three different metallicities, and obtained $L_{F606W}$ and $L_{F814W}$ values of $\sim 74,500$ and $\sim 103,000$ for $Z$ = 0.008, $\sim 62,500$ and $\sim 89,700$ for $Z$ = 0.02, and $\sim 57,200$ and $\sim 86,200$ for $Z$ = 0.03. In this case the luminosity estimates for the lower metallicity are also in very good agreement with the observed values, while the luminosities obtained for the solar and higher metallicities are systematically lower. On the other hand, the luminosity estimates for the three metallicities based on the two different sets of models agree very well with each other. Summarizing, we found stellar mass--to--light ratios in the ranges 2.1 $< M/L_{F814W} <$ 2.4 and 3.1 $< M/L_{F606W} <$ 3.6, according to the different assumptions on the slope of the IMF for masses larger than 1 $M_{\odot}$. These are likely to be slight underestimates of the real stellar mass--to--light budget of the bulge, since a few bright disk stars might still be contaminating our luminosity estimate. These values agree quite well, within the uncertainties, with the estimate provided by ZO00 in the Johnson $V$ filter, $M/L_V \sim$ 3.4, obtained by using their IMF with a single slope of $\alpha = -1.33$ below 1 $M_{\odot}$ and by assuming a constant Salpeter IMF for stars more massive than 1 $M_{\odot}$.
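As a simple consistency check on the ratios quoted above (all numbers are the totals from the text, with the Salpeter slope assumed above 1 $M_\odot$; the variable names are ours):

```python
# Mass-to-light ratios for the SWEEPS field from the quoted totals.
TOTAL_MASS = 228_814   # M_sun (MS stars + giants + remnants)
L_F814W = 104_000      # L_sun
L_F606W = 71_000       # L_sun

ml_f814w = TOTAL_MASS / L_F814W
ml_f606w = TOTAL_MASS / L_F606W
```

which reproduces $M/L_{F814W} \simeq 2.2$ and $M/L_{F606W} \simeq 3.2$.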
\begin{table*} \begin{center} \caption{List of the different mass functions derived for the Galactic bulge and disk.} \label{table:2} \begin{tabular}{l c c c c c c c} \hline\hline Reference & Mass range & $\alpha_{High}$ & $\alpha_{Low}$ & $M_{break}$ & $\alpha$ & $M_c$ & $\sigma$ \\ \hline\hline \multicolumn{8}{c}{Galactic bulge} \\ \hline This work & $0.15 - 1.0$ & $-2.41\pm$0.50 & $-1.25\pm$0.19 & 0.56 & \ldots & 0.25$\pm$0.07 & 0.50$\pm$0.01 \\ Holtzman et al. (1998) & $0.30 - 1.0$ & $-2.2$ & $-1.3$ & 0.7 & \ldots & \ldots & \ldots \\ Zoccali et al. (2000) & 0.15 - 1.0 & \ldots & \ldots & \ldots & $-1.33\pm$0.07 & \ldots & \ldots \\ \hline \multicolumn{8}{c}{Galactic disk} \\ \hline Salpeter (1955) & 0.30 - 10 & \ldots & \ldots & \ldots & $-2.35$ & \ldots & \ldots \\ Kroupa et al. (1993, 2001) & 0.08 - 1.0 & $-2.3\pm$0.3 & $-1.3\pm$0.5 & 0.5 & \ldots & \ldots & \ldots \\ Gould et al. (1997) & 0.08 - 1.0 & $-2.2$ & $-0.9$ & 0.6 & \ldots & \ldots & \ldots \\ Reid et al. (2002) & 0.10 - 3.0 & \ldots & \ldots & \ldots & $-1.3$ & \ldots & \ldots \\ Chabrier (2005) & 0.10 - 1.0 & \ldots & \ldots & \ldots & \ldots & 0.20$\pm$0.02 & 0.55$\pm$0.05 \\ Covey et al. (2008) & 0.10 - 0.7 & $-2.04$ & $-0.8$ & 0.32 & $-1.1$ & 0.20 - 0.50 & 0.22 - 0.37 \\ Bochanski et al. (2010) & 0.10 - 0.8 & $-2.66\pm$0.10 & $-0.98\pm$0.10 & 0.32 & \ldots & 0.18$\pm$0.02 & 0.34$\pm$0.05 \\ \hline\hline \end{tabular} \end{center} \end{table*} \section{Microlensing optical depth} Several thousand microlensing events have been detected to date towards the Galactic bulge, mainly by the OGLE \citep{udalski15} and MOA \citep{bond01,sako08,sumi13} collaborations. These microlensing events have been used by several investigators to derive the total mass budget as well as the mass function of the lenses.
\citet{paczynski94}, based on a small number of microlensing events, noticed that the observed microlensing optical depth exceeds the theoretical estimates, indicating a much higher efficiency for microlensing by either bulge or disk lenses. This issue has been further investigated in recent years by several groups \citep{Wyrzykowski15, sumi13}. A helpful hint comes from the latest study by Wyrzykowski et al., which shows a dependence of the mean microlensing timescale on the Galactic latitude. This signals an increasing contribution from disk lenses at lower heights above the plane, which needs to be taken into account in the estimation of timescales and optical depths. Since the timescale of a microlensing event is proportional to the square root of the mass of the lens, the timescales can be used for a statistical estimate of the mass function of the lenses. \citet{zhao95} and \citet{hangould96} used this approach and reported a mass function with a slope of $-2.0$ and a cutoff at $\sim 0.1 M_\odot$. \citet{calchi08} also attempted to fit the observed timescales of the microlensing events with a power-law distribution of lens masses and obtained a slope of $-1.7$ for the distribution. As pointed out by ZO00, there may be an extra bias in the observed timescales due to blending in the ground-based observations, which causes the timescales to appear shorter than they actually are. This leads to an underestimation of the lens masses. Even so, the derived slope from microlensing observations lies between the two slopes of $\alpha = -2.41$ and $\alpha = -1.25$ derived here, and thus seems consistent. It would be interesting to extend this microlensing analysis to the currently available list of all the observed microlensing events. Finally, we note that the microlensing optical depth comes not only from luminous main-sequence stars, but also from white dwarfs, neutron stars and black holes.
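The statistical use of timescales described above rests on $t_{\rm E}\propto\sqrt{M}$, and the mapping from a lens mass function to a timescale distribution can be sketched in a few lines. The power-law slope of $-1.7$ is the \citet{calchi08} value quoted here, while the mass limits and the timescale normalization are purely illustrative assumptions.

```python
import numpy as np

# Hedged sketch: draw lens masses from a power law dN/dM ∝ M^alpha
# (alpha = -1.7 as quoted in the text); the mass limits and the timescale
# normalization below are illustrative assumptions.
rng = np.random.default_rng(0)
alpha, m_lo, m_hi = -1.7, 0.1, 1.0

a1 = alpha + 1.0
u = rng.random(100_000)
# Inverse-CDF sampling for a truncated power law on [m_lo, m_hi]
masses = (m_lo**a1 + u * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)

# Einstein timescale scales as sqrt(M); the prefactor is arbitrary here.
t_E = 20.0 * np.sqrt(masses)   # days, arbitrary normalization
```

A steeper (more negative) slope shifts the synthetic timescale distribution towards shorter events, which is the sense in which the observed timescales constrain the lens mass function.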
The mass--to--light ratio derived in this paper should help in deriving a more correct estimate of the microlensing optical depth. \begin{table} \begin{center} \caption{Stellar mass estimates and mass--to--light values for the Galactic bulge for different assumed IMF slopes for $M >$ 1 $M_{\odot}$.} \label{table:3} \begin{tabular}{l c c c} \hline\hline $\alpha$ & Stellar mass ($M_{\odot}$) & $M/L_{F814W}$ & $M/L_{F606W}$ \\ \hline\hline Salpeter & 228,814 $\pm$25,300 & 2.2$\pm$0.3 & 3.2$\pm$0.5 \\ -2.0 & 254,505 $\pm$28,100 & 2.4$\pm$0.4 & 3.6$\pm$0.6 \\ -2.7 & 219,079 $\pm$24,200 & 2.1$\pm$0.3 & 3.1$\pm$0.5 \\ \hline\hline \end{tabular} \end{center} \end{table} \section{Discussion and conclusions}\label{concl} We have derived the IMF of the pure bulge component down to 0.15 $M_\odot$. The Galactic bulge IMF can be fitted by two power laws, one with a steeper slope $\alpha = -2.41\pm$0.50 for $M \ge$ 0.56 $M_\odot$, and another with a shallower slope $\alpha = -1.25\pm$0.19 at lower masses. A log-normal function also fits the IMF, with a characteristic mass $M_c =$ 0.25$\pm$0.07 $M_{\odot}$ and $\sigma =$ 0.50$\pm$0.01. The slope of the IMF at high masses is mildly affected by the assumed fraction of unresolved binaries in the bulge and the distribution of their mass ratios: it ranges from $\alpha = -2.25\pm0.50$ for no binaries to $\alpha = -2.62\pm0.52$ for a 100\% binary fraction. On the other hand, the slope at lower masses changes significantly, ranging from $\alpha =-0.89\pm0.20$ for no binaries to $\alpha = -1.55\pm0.20$ for a 100\% binary fraction. As we noted earlier, the slope of the IMF in the very low-mass range is crucial in estimating the mass budget of the Galactic bulge, which contains $\approx$20\% of the mass of the Galaxy. Our deep HST observations, obtained over a time baseline of $\sim$9 years, allowed us to derive the mass function of the pure bulge component even in this low-mass range, which was previously not possible.
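For concreteness, the two fitted parameterizations of the bulge IMF quoted above can be written down directly; the sketch below uses arbitrary normalizations and makes the broken power law continuous at the break mass (a convention we adopt for illustration, not the paper's fitting code).

```python
import numpy as np

# Sketch of the two IMF forms fitted in the text (arbitrary normalization;
# the broken power law is made continuous at the break mass by construction).
def broken_power_law(m, a_high=-2.41, a_low=-1.25, m_break=0.56):
    m = np.asarray(m, dtype=float)
    return np.where(m >= m_break,
                    (m / m_break) ** a_high,   # steep slope above the break
                    (m / m_break) ** a_low)    # shallow slope below it

def log_normal(m, m_c=0.25, sigma=0.50):
    # dN/dm ∝ (1/m) exp(-(log10 m - log10 m_c)^2 / (2 sigma^2)),
    # with m_c and sigma as fitted in the text (sigma in dex)
    m = np.asarray(m, dtype=float)
    return (1.0 / m) * np.exp(-(np.log10(m) - np.log10(m_c)) ** 2
                              / (2.0 * sigma**2))
```

Evaluating either form over the observed range 0.15--1.0 $M_{\odot}$ reproduces the qualitative behavior discussed above: a steep decline above the break and a flattening towards the characteristic mass.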
The shape of the Galactic bulge IMF derived in this work is in good agreement, within the uncertainties, with the IMF derived previously by HO98 for Baade's window, but our mass function extends to lower masses and is purely based on bulge members, with negligible contamination from disk stars. On the other hand, our IMF not corrected for the presence of unresolved binaries shows a slightly shallower slope compared to the ZO00 IMF ($-1.14$ vs $-1.33$). This difference could be due to a small residual contamination of the ZO00 sample by disk stars, or to some intrinsic differences between the stars in the field observed by ZO00 and those in the SWEEPS field. Our bulge IMF is in very good agreement with the mass functions derived for the Galactic disk by \citet{kroupa01} and \citet{chabrier03} over the entire mass range, while it is steeper in the very low-mass regime compared to the mass functions derived for the disk by \citet{gould97} and \citet{reid02}. The PDMFs derived in the more recent studies of \citet{covey08} and \citet{bochanski10} agree quite well with our IMF for the Galactic bulge in the high-mass range, but they still show a shallower slope in the low-mass range. The characterization of the IMF in different stellar environments is fundamental for investigating whether the IMF depends on the stellar metallicity and/or age. The recent work of \citet{kalirai13} showed that the IMF of the Small Magellanic Cloud (SMC, $-1.5\lesssim [Fe/H] \lesssim -1.0$) is shallower than the Salpeter mass function, $\alpha = -1.9$, down to $\approx 0.4$ $M_{\odot}$, and does not show evidence for a turn-over in the very low-mass regime. Furthermore, \citet{geha13} showed that the IMFs of two metal-poor ($[Fe/H] < -2.0$) ultra-faint galaxies, Hercules and Leo IV, are even shallower, with slopes in the range $\alpha = -1.2$ to $-1.3$ for masses larger than $\approx 0.5$ $M_{\odot}$.
In the higher-mass range ($M > 0.5 M_{\odot}$), where the mass function of these galaxies is well measured, our bulge IMF is steeper than both the IMF of the intermediate-metallicity environment of the SMC ($-2.41$ vs $-1.9$) and that of the metal-poor environment of the ultra-faint galaxies ($-2.41$ vs $-1.3$ to $-1.2$), pointing towards a variation of the IMF with the global average metallicity of the stellar population. However, more data are needed to sample the IMF down to lower masses, i.e. 0.1 $M_{\odot}$, in the different environments to confirm this preliminary result. We then used the derived IMF to estimate the stellar mass--to--light ratios of the Galactic bulge. For the two filters we obtained values in the ranges 2.1 $\le M/L_{F814W} \le$ 2.4 and 3.1 $\le M/L_{F606W} \le$ 3.6, depending on the assumed slope of the IMF for masses larger than 1 $M_{\odot}$. The shape of the mass function derived from microlensing observations has large uncertainties but is consistent with the observed bulge IMF presented here. \acknowledgments This study was supported by NASA through grants GO-9750 and GO-12586 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS~5-26555. SC and RB acknowledge financial support from PRIN-INAF2014 (PI: S. Cassisi). We thank the anonymous referee for helpful suggestions which led to an improved version of the paper. \clearpage \bibliographystyle{aa}
\section{Gradient Q$(\sigma, \lambda)$} We have discussed the divergence of Q$(\sigma,\lambda)$ with the semi-gradient method. In this section, we propose a convergent and stable TD algorithm: gradient Q$(\sigma,\lambda)$. \subsection{Objective Function} We derive the algorithm via the MSPBE \cite{sutton2009fast_a}: \begin{flalign} \nonumber \text{MSPBE}(\theta,\lambda)&=\frac{1}{2}\|\Phi\theta-\Pi\mathcal{B}^{\pi,\mu}_{\sigma,\lambda}(\Phi\theta)\|^{2}_{\Xi},\\ \label{Eq:solve_mspbe} \theta^{*}&=\arg\min_{\theta}\text{MSPBE}(\theta,\lambda), \end{flalign} where $\Pi = \Phi(\Phi^{\top}\Xi\Phi)^{-1}\Phi^{\top}\Xi$ is the \emph{projection matrix} which projects any value function onto the space spanned by $\Phi$. After some simple algebra, we can further rewrite MSPBE$(\theta,\lambda)$ as a standard weighted least-squares objective: \begin{flalign} \label{Eq:mspbe} \text{MSPBE}(\theta,\lambda)=\frac{1}{2}\|A_{\sigma}\theta+b_{\sigma}\|^{2}_{M^{-1}}, \end{flalign} where $M=\mathbb{E}[\phi_{k}\phi_{k}^{\top}]=\Phi^{\top}\Xi\Phi$. Now we define the update rule as follows: for a given trajectory $\{(S_{k},A_{k},R_{k},S_{k+1})\}_{k\ge0}$ and $\forall\lambda,\sigma\in[0,1]$, \begin{flalign} \nonumber &e_{k}=\phi_{k}+\gamma\lambda e_{k-1},\\ \nonumber &\delta_{k}=R_{k}+\gamma\theta_{k}^{\top}(\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot))-\theta_{k}^{\top}\phi_{k},\\ \label{theata_update} &\theta_{k+1}=\theta_{k}-\alpha_{k}\frac{1}{2}\nabla_{\theta} \text{MSPBE}(\theta,\lambda)|_{\theta=\theta_{k}}, \end{flalign} where $\alpha_{k}>0$ is the step-size, $\phi_{k}=\phi(S_{k},A_{k})$, $e_{k}$ is the eligibility-trace vector, and $e_{-1}=0$. From Eq.(\ref{theata_update}), it is worth noting that the challenges of solving (\ref{Eq:solve_mspbe}) are two-fold: \begin{itemize} \item Computing the inverse matrix $M^{-1}$ requires at least $\mathcal{O}(p^3)$ operations~\cite{golub2012matrix}, where $p$ is the dimension of the feature space.
Thus, it is too expensive to solve problem (\ref{Eq:solve_mspbe}) by gradient methods directly. \item Besides, as pointed out by Szepesv{\'a}ri~\shortcite{szepesvari2010algorithms} and Liu et al.~\shortcite{liu2015finite}, we cannot obtain an unbiased estimate of $\nabla_{\theta}\text{MSPBE}(\theta,\lambda)=A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta+b_{\sigma})$. In fact, since the gradient involves a product of expectations, an unbiased estimate cannot be obtained from a single sample; it would require two independent samples, which is the well-known double-sampling problem. Furthermore, $M^{-1}=\mathbb{E}[\phi_{t} \phi_{t}^\top]^{-1}$ cannot be estimated from a single sample either, which is the second bottleneck in applying stochastic gradient methods to problem (\ref{Eq:solve_mspbe}). \end{itemize} We provide a practical way to solve the above problem in the next subsection. \subsection{Algorithm Derivation} The gradient $\nabla_{\theta} \text{MSPBE}(\theta,\lambda)$ in Eq.(\ref{theata_update}) can be replaced by the following expression: \begin{flalign} \nonumber &\frac{1}{2}\nabla_{\theta} \text{MSPBE}(\theta,\lambda)\\ \label{gradient_equal} =&\nabla_{\theta}\mathbb{E}[\delta_{k}e_{k}]^{\top}\underbrace{\mathbb{E}[\phi_{k}\phi^{\top}_{k}]^{-1}\mathbb{E}[\delta_{k}e_{k}]}_{\overset{\text{def}}=\omega(\theta_{k})}. \end{flalign} The proof of Eq.(\ref{gradient_equal}) is similar to the derivation in Chapter 7 of \cite{maei2011gradient}, thus we omit it. Furthermore, the following Proposition \ref{prop1} provides a new way to estimate $\nabla_{\theta} \text{MSPBE}(\theta,\lambda)$.
\begin{proposition} \label{prop1} Let $e_{k}$ be the eligibility-trace vector generated as $e_{k}=\lambda\gamma e_{k-1}+\phi_{k}$, and let \begin{flalign} \label{Delta} \Delta_{k,\sigma}&=\gamma\{\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)\}-\phi_{k},\\ \nonumber v_{\sigma}(\theta_{k})&=(1-\sigma)\{\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)-\lambda\phi_{k+1}\}e^{\top}_{k}\\ \label{vk} &\hspace{1cm}+\sigma(1-\lambda)\phi_{k+1}e^{\top}_{k}, \end{flalign} then the following holds, \begin{flalign} \nonumber \theta_{k+1}&=\theta_{k}-\alpha_{k}\frac{1}{2}\nabla_{\theta} \emph{MSPBE}(\theta,\lambda)|_{\theta=\theta_{k}}\\ \nonumber &=\theta_{k}-\alpha_{k}\mathbb{E}[\Delta_{k,\sigma}e_{k}^{\top}]\omega(\theta_{k})\\ \label{theta_uptate_2} &=\theta_{k}+\alpha_{k}\{\mathbb{E}[\delta_{k}e_{k}]-\gamma\mathbb{E}[v_{\sigma}(\theta_{k})]\omega(\theta_{k})\}. \end{flalign} \end{proposition} \begin{proof} See Appendix B. \end{proof} It is too expensive to compute the inverse matrix $\mathbb{E}[\phi_{k}\phi^{\top}_{k}]^{-1}$ in Eq.(\ref{gradient_equal}). In order to develop an efficient $\mathcal{O}(p)$ algorithm, Sutton et al.~\shortcite{sutton2009fast_a} use a weight-duplication trick: they propose estimating $\omega(\theta_{k})$ on a fast timescale, \begin{flalign} \label{gq_update2} \omega_{k+1}=\omega_{k}+\beta_{k}(\delta_{k}e_{k}-\phi_k\omega^{\top}_{k}\phi_k). \end{flalign} Now, sampling from Eq.(\ref{theta_uptate_2}) directly, we define the update rule of $\theta$ as follows, \begin{flalign} \label{gq_update1} \theta_{k+1}=\theta_{k}+\alpha_{k}(\delta_{k}e_{k}-\gamma v_{\sigma}(\theta_{k})\omega_{k}), \end{flalign} where $\delta_{k},e_{k}$ are defined in Eq.(\ref{theata_update}), $v_{\sigma}(\theta_{k})$ is defined in Eq.(\ref{vk}), and $\alpha_{k},\beta_{k}$ are step-sizes. More details of gradient Q$(\sigma,\lambda)$ are summarized in Algorithm \ref{alg:algorithm1}.
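For concreteness, the per-iteration computation of the updates (\ref{gq_update2}) and (\ref{gq_update1}) can be sketched as follows. This is a sketch assuming linear state--action features stored as numpy vectors; \texttt{phi\_pi\_next} stands for the expected target-policy feature $\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)$, and all names are ours, not from an existing codebase.

```python
import numpy as np

# Hedged sketch of one gradient-Q(sigma, lambda) step: theta on the slow
# timescale, omega on the fast one. `phi`, `phi_next` are the feature
# vectors of the sampled transition; `phi_pi_next` is E_pi[phi(S', .)].
def gradient_q_step(theta, omega, e, phi, phi_next, phi_pi_next,
                    reward, gamma, lam, sigma, alpha, beta):
    e = lam * gamma * e + phi                                   # trace update
    target_phi = sigma * phi_next + (1 - sigma) * phi_pi_next
    delta = reward + gamma * theta @ target_phi - theta @ phi   # TD error
    # v_sigma(theta_k), Eq. (vk): a rank-one p x p matrix
    v = (sigma * (1 - lam) * np.outer(phi_next, e)
         + (1 - sigma) * np.outer(phi_pi_next - lam * phi_next, e))
    theta = theta + alpha * (delta * e - gamma * v @ omega)     # slow update
    omega = omega + beta * (delta * e - phi * (omega @ phi))    # fast update
    return theta, omega, e
```

Each step costs $\mathcal{O}(p^2)$ here only because $v$ is materialized as an outer product; exploiting its rank-one structure recovers the $\mathcal{O}(p)$ cost of Algorithm \ref{alg:algorithm1}.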
\begin{algorithm}[tb] \caption{Gradient Q$(\sigma,\lambda)$} \label{alg:algorithm1} \begin{algorithmic} \STATE { \textbf{Require}: Initialize $\omega_{0}=0$, ${\theta}_{0}$ arbitrarily, $\alpha_{k}>0,\beta_{k}>0$}. \\ \STATE { \textbf{Given}: target policy $\pi$, behavior policy $\mu$.}\\ \FOR{$i=0$ {\bfseries to} $n$} \STATE ${e}_{-1}={0}$. \FOR{$k=0$ {\bfseries to} $T_{i}$} \STATE Observe $\{S_{k},A_{k},R_{k+1},S_{k+1}\}$ under $\mu$. \STATE {\color{blue}{\# Update traces }} \STATE ${e}_{k}=\lambda\gamma {e}_{k-1}+{\phi}_{k}$. \STATE $\delta_{k}=R_{k}+\gamma\{\sigma{\theta}^{\top}_{k}{\phi}(S_{k+1},A_{k+1})$ \STATE$\hspace{0.7cm}+(1-\sigma){\theta}^{\top}_{k}\mathbb{E}_{\pi}{\phi}(S_{k+1},\cdot)\}-{\theta}^{\top}_{k}\phi(S_{k},A_{k})$. \STATE {\color{blue}{\# Update parameters}} \STATE $v_{k+1}=\sigma(1-\lambda)\phi(S_{k+1},A_{k+1})e^{\top}_{k}$ \STATE \hspace{0.7cm}$+(1-\sigma)\{\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)-\lambda\phi(S_{k+1},A_{k+1})\}e^{\top}_{k}.$ \STATE${\theta}_{k+1}=\theta_{k}+\alpha_{k}(\delta_{k}e_{k}-\gamma v_{k+1}\omega_{k})$. \STATE$\omega_{k+1}=\omega_{k}+\beta_{k}\{\delta_{k}e_{k}-\phi(S_{k},A_{k})\omega^{\top}_{k}\phi(S_{k},A_{k})\}.$ \ENDFOR \ENDFOR \STATE { \textbf{Output}: ${\theta}$} \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis} We need some additional assumptions to establish the convergence of Algorithm \ref{alg:algorithm1}. \begin{assumption} \label{ass:positive_lr} The positive sequences $\{\alpha_{k}\}_{k\ge0}$, $\{\beta_{k}\}_{k\ge0}$ satisfy $\sum_{k=0}^{\infty}\alpha_{k}=\sum_{k=0}^{\infty}\beta_{k}=\infty,\sum_{k=0}^{\infty}\alpha^{2}_{k}<\infty,\sum_{k=0}^{\infty}\beta^{2}_{k}<\infty$ with probability one. \end{assumption} \begin{assumption}[Boundedness of Features, Rewards and Parameters \cite{liu2015finite}] \label{ass:boundedness} (1) The features $\{\phi_{t}, \phi_{t+1}\}_{t\ge0}$ have uniformly bounded second moments, where $\phi_{t}=\phi(S_{t},A_{t})$ and $\phi_{t+1}=\phi(S_{t+1},A_{t+1})$.
(2) The reward function has uniformly bounded second moments. (3) There exists a bounded region $D_{\theta}\times D_{\omega}$ such that the iterates satisfy $(\theta_{k},\omega_{k})\in D_{\theta}\times D_{\omega}$ for all $k$. \end{assumption} Assumption \ref{ass:boundedness} guarantees that the matrices $A_{\sigma}$ and $M$, and the vector $b_{\sigma}$, are uniformly bounded. After some simple algebra, we have $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}^{\top}]$; see Appendix C. The following Assumption \ref{invA} implies that $A_{\sigma}^{-1}$ is well-defined. \begin{assumption} \label{invA} $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}^{\top}]$ is non-singular, where $\Delta_{k,\sigma}$ is defined in Eq.(\ref{Delta}). \end{assumption} \begin{theorem}[Convergence of Algorithm \ref{alg:algorithm1}] \label{theorem2} Consider the iterates $(\theta_{k},\omega_k)$ generated by (\ref{gq_update1}) and (\ref{gq_update2}). Suppose $\beta_{k}=\eta_{k}\alpha_{k}$ with $\eta_{k}\rightarrow0$ as $k\rightarrow\infty$, and that $\alpha_{k},\beta_k$ satisfy Assumption \ref{ass:positive_lr}. Suppose the sequence $\{(\phi_{k},R_{k},\phi_{k+1})\}_{k\ge0}$ satisfies Assumption \ref{ass:boundedness}, and $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}^{\top}]$ satisfies Assumption \ref{invA}. Let \begin{flalign} \label{def:G} G(\omega,\theta)&=A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta+b_{\sigma}),\\ \label{def:H} H(\omega,\theta)&=A_{\sigma}\theta+b_{\sigma}-M \omega. \end{flalign} Then $(\theta_{k},\omega_k)$ converges to $(\theta^{*},\omega^{*})$ with probability one, where $(\theta^{*},\omega^{*})$ is the unique globally asymptotically stable equilibrium of the ordinary differential equations (ODEs) $\dot\theta(t)=-G(\Omega(\theta),\theta)$ and $\dot\omega(t)=H(\omega,\theta)$, respectively, with $\Omega(\theta)\colon \theta\mapsto M^{-1}(A_{\sigma}\theta+b_\sigma)$. \end{theorem} \begin{proof} The ODE method (see Lemma 1; Appendix D) is our main tool to prove Theorem \ref{theorem2}.
Let \begin{flalign} \label{M_k,N_k} M_{k}&=\delta_{k}e_{k}-\gamma v_{\sigma}(\theta_{k})\omega_{k}+A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta_{k}+b_{\sigma}), \\ N_{k}&=(\hat{A}_{k}-A_{\sigma})\theta_k+(\hat{b}_{k}-b_{\sigma})-(\hat{M}_{k}-M)\omega_{k}. \end{flalign} Then, we rewrite the iterations (\ref{gq_update1}) and (\ref{gq_update2}) as follows, \begin{flalign} \label{gq_update1_1} \theta_{k+1}&=\theta_{k}+\alpha_{k}(-A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta_k+b_{\sigma})+M_k),\\ \label{gq_update2_1} \omega_{k+1}&=\omega_{k}+\beta_{k}(H(\theta_{k},\omega_{k})+N_{k}). \end{flalign} Lemma 1 requires us to verify the following four steps. \underline{\textbf{Step 1: (Verifying condition A2)}} \emph{Both functions $G$ and $H$ are Lipschitz.} By Assumptions \ref{ass:boundedness}-\ref{invA}, $A_{\sigma}$, $b_{\sigma}$ and $M$ are uniformly bounded, thus it is easy to check that $G$ and $H$ are Lipschitz functions. \underline{\textbf{Step 2: (Verifying condition A3)} } \emph{Let the $\sigma$-field $\mathcal{F}_{k}=\sigma\{\theta_{t},\omega_{t};t\leq k\}$; then $\mathbb{E}[M_{k}|\mathcal{F}_k]=\mathbb{E}[N_{k}|\mathcal{F}_k]=0.$ Furthermore, there exists a constant $K > 0$ such that $\{M_{k}\}_{k\in\mathbb{ N}}$ and $\{N_{k}\}_{k\in\mathbb{ N}}$ are square-integrable with \begin{flalign} \label{MN} \mathbb{E}[\|M_{k}\|^{2}|\mathcal{F}_{k}],\mathbb{E}[\|N_{k}\|^{2}|\mathcal{F}_{k}]\leq K(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2}). \end{flalign} } \begin{figure*}[t] \centering \subfigure {\includegraphics[width=5.8cm,height=4.2cm]{aaai-baird.pdf}} \subfigure {\includegraphics[width=5.8cm,height=4.2cm]{aaai-boyan-chain.pdf}} \subfigure {\includegraphics[width=5.8cm,height=4.2cm]{aaai-cart-pole.pdf}} \caption { MSPBE comparison with different $\sigma$: $\sigma= 0$ (pure-expectation), $\sigma= 1$ (full-sampling), and dynamic $\sigma\in (0,1)$. } \end{figure*} By Eq.(\ref{theta_uptate_2}), with $\omega_{k}$ tracking $\omega(\theta_{k})$ on the fast timescale, $\mathbb{E}[M_{k}|\mathcal{F}_k]=0$.
With $\mathbb{E}[\hat{A}_{k}]=A_{\sigma}, \mathbb{E}[\hat{b}_{k}]=b_{\sigma}, \mathbb{E}[\hat{M}_{k}]=M,$ we have $\mathbb{E}[N_{k}|\mathcal{F}_k]=0$. By Assumption \ref{ass:boundedness}, Eq.(\ref{A_k}) and Eq.(\ref{b_k}), there exist non-negative constants $K_1,K_2,K_3$ such that $\{\|\hat{A}_{k}\|^{2},\|A_{\sigma}\|^{2}\}\leq K_1$, $\{\|\hat{b}_{k}\|^{2},\|b_{\sigma}\|^{2}\}\leq K_2$, $\{\|\hat{M}_{k}\|^{2},\|M\|^{2}\}\leq K_3$, which implies that all the above terms are bounded. Thus, there exists a non-negative constant $\widetilde{K}_1$ such that the following Eq.(\ref{b}) holds, \begin{flalign} \nonumber &\mathbb{E}[\|N_{k}\|^{2}|\mathcal{F}_{k}]\\ \nonumber \leq&\mathbb{E}\Big[(\|(\hat{A}_{k}-A_{\sigma})\theta_{k}\|+\|\hat{b}_{k}-b_{\sigma} \| + \| (\hat{M}_{k}-M)\omega_{k}\|)^{2}\Big|\mathcal{F}_{k}\Big]\\ \label{b} \leq& {\widetilde{K}_1}^{2}(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2}). \end{flalign} Similarly, by Assumption \ref{ass:boundedness}, $R_{k},\phi_k$ have uniformly bounded second moments, so $\mathbb{E}[\|M_{k}\|^{2}|\mathcal{F}_{k}]\leq{\widetilde{K}_2}^{2}(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2})$ holds for some constant $\widetilde{K}_2>0$. Thus, Eq.(\ref{MN}) holds. \underline{\textbf{Step 3: (Verifying condition A4)} } \emph{For each $\theta\in\mathbb{R}^{p}$, the ODE $ \dot \omega(t) = H(\omega(t), \theta) $ has a unique globally asymptotically stable equilibrium $\Omega(\theta)$ such that $\Omega:\mathbb{R}^{p}\rightarrow\mathbb{R}^{p}$ is Lipschitz.} For a fixed $\theta$, let $H_{\infty}(\omega,\theta)=\lim_{r\rightarrow\infty}\dfrac{H(r\omega(t),\theta)}{r}=-M\omega({t}).$ We consider the ODE \begin{flalign} \label{ode:h-infty} \dot{\omega}(t)=H_{\infty}(\omega,\theta)=-M\omega(t). \end{flalign} The matrix $M=\Phi^{\top}\Xi\Phi$ is positive definite (provided the features are linearly independent), thus the origin is a globally asymptotically stable equilibrium of the ODE (\ref{ode:h-infty}).
Thus, for a fixed $\theta$, by Assumption \ref{invA}, \begin{flalign} \label{def:oemga-star} \omega^{*}=M^{-1}(A_{\sigma}\theta+b_{\sigma}) \end{flalign} is the unique globally asymptotically stable equilibrium of the ODE $\dot \omega(t) = A_{\sigma}\theta+b_{\sigma}-M\omega(t)\overset{(\ref{def:H})}=H(\omega(t), \theta).$ Let $\Omega(\theta): \theta\mapsto M^{-1}(A_{\sigma}\theta+b_{\sigma})$; it is obvious that $\Omega$ is Lipschitz. \underline{\textbf{Step 4: (Verifying condition A5)} } \emph{The ODE $\dot \theta(t) =-G\big(\Omega(\theta(t)),\theta(t)\big)$ has a unique globally asymptotically stable equilibrium $\theta^{*}$.} Let $ G_{\infty}(\theta)=\lim_{r\rightarrow\infty}\frac{-G(r\theta,\omega)}{r} =-A^{\top}_{\sigma} M^{-1} A_{\sigma}\theta$. We consider the following ODE \begin{flalign} \label{ode-G} \dot{\theta}(t)=G_{\infty}(\theta(t)). \end{flalign} By Assumptions \ref{ass:boundedness}-\ref{invA}, $A_\sigma$ is invertible and $M^{-1}$ is positive definite, thus $A^{\top}_{\sigma} M^{-1} A_{\sigma}$ is a positive definite matrix and the ODE (\ref{ode-G}) has a unique globally asymptotically stable equilibrium: the origin. Now, consider the iteration (\ref{gq_update1})/(\ref{gq_update1_1}); its mean dynamics are associated with the ODE \begin{flalign} \label{ode-theta-1} \dot{\theta}(t)&=-\mathbb{E}[\Delta_{k,\sigma}e_{k}^{\top}]M^{-1}(A_{\sigma}\theta(t)+b_\sigma) \\ \label{ode-theta-2} &=-A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta(t)+b_\sigma) \overset{( \ref{def:G})}=-G(\theta(t)), \end{flalign} where Eq.(\ref{ode-theta-2}) holds since $\mathbb{E}[\delta_{k}e_{k}|\theta(t)]=A_{\sigma}\theta(t)+b_\sigma$ and $\mathbb{E}[\Delta_{k,\sigma}e_{k}^{\top}]=A^{\top}_{\sigma}$.
By Assumption \ref{invA}, $A_\sigma$ is invertible; then \begin{flalign} \label{def:theta-star} \theta^{*}=-A_{\sigma}^{-1}b_{\sigma} \end{flalign} is the unique globally asymptotically stable equilibrium of the ODE (\ref{ode-theta-2}). We have now verified all the conditions of Lemma 1; thus, almost surely, \[ (\theta_{k},\omega_k)~\rightarrow~~(\theta^{*},\omega^{*}), ~~\text{as}~~ k\rightarrow\infty, \] where $\omega^{*}$ is defined in (\ref{def:oemga-star}) and $\theta^{*}$ is defined in (\ref{def:theta-star}). \end{proof} \section{Appendix A} \subsection{Proof of Eq.(\ref{Eq:linear_eq})} Let $ {P}^{\pi}_{\lambda}=(1-\lambda)\sum_{\ell=0}^{\infty}\gamma^{\ell}\lambda^{\ell}({P}^{\pi})^{\ell+1},\hspace{0.3cm} \mathcal{R}_{\lambda}^{\pi}=\sum_{\ell=0}^{\infty}\gamma^{\ell}\lambda^{\ell}({P}^{\pi})^{\ell}\mathcal{R}^{\pi}. $ \textbf{Step 1}: If ${A}_{k}={\phi_{k}}\{\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}(\phi_{t}-\gamma\phi_{t+1})^{\top}\}$, then \[\mathbb{E}[{A}_{k}]={\Phi}^{\top}{\Xi}(I-\gamma\lambda{P}^{\mu})^{-1}({I}-\gamma{P}^{\mu}){\Phi}.\] \begin{eqnarray*} &&\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}(\phi_{t}-\gamma\phi_{t+1})^{\top}\\ &=&\lim_{n\rightarrow\infty}[\sum_{t=k}^{n}(\gamma\lambda)^{t-k}(\phi_{t}-\gamma\phi_{t+1})^{\top}]\\ &=&\lim_{n\rightarrow\infty}[{\phi}^{\top}_{k}-\gamma{\phi}^{\top}_{k+1}+\sum_{t=k+1}^{n}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t}-\gamma\sum_{t=k+1}^{n}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}]\\ &=&\lim_{n\rightarrow\infty}[{\phi}^{\top}_{k}+\sum_{t=k}^{n-1}(\gamma\lambda)^{t+1-k}{\phi}^{\top}_{t+1}-\gamma\sum_{t=k}^{n}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}]\\ &=&\lim_{n\rightarrow\infty}[{\phi}^{\top}_{k}+\gamma\lambda\sum_{t=k}^{n-1}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}-\gamma\sum_{t=k}^{n}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}]\\ &=&\lim_{n\rightarrow\infty}[{\phi}^{\top}_{k}-\gamma(1-\lambda)\sum_{t=k}^{n-1}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}-\gamma^{n-k+1}\lambda^{n-k}{\phi}^{\top}_{n+1}]\\ 
&=&{\phi}^{\top}_{k}-\gamma(1-\lambda)\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}{\phi}^{\top}_{t+1}. \end{eqnarray*} For a stationary behavior policy $\mu$, we have $\mathbb{E}[A_{k}]=\mathbb{E}[A_{0}]$. Thus, \begin{flalign} \nonumber \mathbb{E}[{A}_{k}] &=\mathbb{E}[{\phi}_{k}{\phi}^{\top}_{k}]-\gamma\mathbb{E}\Big[{\phi}_{k}(1-\lambda)\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}\phi^{\top}_{t+1}\Big]\\ \label{A1} &={\Phi}^{\top}{\Xi}{\Phi}-\gamma{\Phi}^{\top}{\Xi}{{P}}^{\mu}_{\lambda}{\Phi}\\ \nonumber &={\Phi}^{\top}{\Xi}\{{I}-\gamma(1-\lambda)\sum_{\ell=0}^{\infty}\gamma^{\ell}\lambda^{\ell}({P}^{\mu})^{\ell+1}\}{\Phi}\\ \nonumber &={\Phi}^{\top}{\Xi}\{I-\gamma(1-\lambda)(I-\gamma\lambda{P}^{\mu})^{-1}{P}^{\mu}\}{\Phi}\\ \nonumber &={\Phi}^{\top}{\Xi}(I-\gamma\lambda{P}^{\mu})^{-1}({I}-\gamma{P}^{\mu}){\Phi}, \end{flalign} where (\ref{A1}) holds by the identity ${P}^{\mu}_{\lambda}=(1-\lambda)\sum_{\ell=0}^{\infty}\gamma^{\ell}\lambda^{\ell}({P}^{\mu})^{\ell+1}$. \textbf{Step 2}: If ${A}_{k}={\phi}_{k}\{\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}({\phi}_{t}-\gamma\mathbb{E}_{\pi}[\phi(S_{t+1},\cdot)])^{\top}\}$, then \[\mathbb{E}[{A}_{k}]={\Phi}^{\top}{\Xi}(I-\gamma\lambda{P}^{\mu})^{-1}({I}-\gamma{P}^{\pi}){\Phi}.\] Under Assumption 1, and similarly to the proof in Step 1, we have \begin{eqnarray*} \sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}({\phi}_{t}-\gamma\mathbb{E}_{\pi}[\phi(S_{t+1},\cdot)])^{\top}&=&{\phi}^{\top}_{k}-\gamma(1-\lambda)\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}\mathbb{E}_{\pi}[\phi^{\top}(S_{t+1},\cdot)]\\ &=&{\phi}^{\top}_{k}-\gamma(1-\lambda)\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\pi}[\phi^{\top}(S_{t+1},\cdot)] \end{eqnarray*} Let $\mathcal{E}_{t}=\cup_{i=0}^{t}\{(A_{i},S_{i},R_{i})\},\mathcal{F}_{t}=\mathcal{E}_{t}\cup\{S_{t+1}\}$, then \begin{eqnarray*} \mathbb{E}\Big[\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\pi}[\phi^{\top}(S_{t+1},\cdot)]\Big] 
&=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{F}_{t}}\Big[\mathbb{E}_{\pi}[\phi^{T}(S_{t+1},\cdot)]\Big]\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t}}\Big\{\mathbb{E}_{S_{t+1}\sim \mathcal{P}(\cdot |{A_{t}},{S_{t}})}\Big[\mathbb{E}_{\pi}[\phi^{\top}(S_{t+1},\cdot)]\Big]\Big\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t}}\Big\{\sum_{a\in\mathcal{A}}\sum_{s\in\mathcal{S}}\mathcal{P}(s|S_{t},A_{t})\pi(s,a)\phi(s,a)\Big\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t}}\{{{P}^{\pi}}\phi_{t}\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t-1}}\Big\{\sum_{a\in\mathcal{A}}\sum_{s\in\mathcal{S}}\mathcal{P}(s|S_{t-1},A_{t-1})\mu(s,a){P}^{\pi}\phi_{t}\Big\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t-1}}\Big\{{P}^{\mu}{P}^{\pi}\phi_{t}\Big\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\mathcal{E}_{t-2}}\Big\{({P}^{\mu})^{2}{P}^{\pi}\phi_{t-2}\Big\}\\ &=&\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}\Big\{({P}^{\mu})^{t}{P}^{\pi}\phi_{0}\Big\}. \end{eqnarray*} \begin{eqnarray*} \mathbb{E}[A_{k}]&=&\mathbb{E}\Big[{\phi}_{k}\{\sum_{t=k}^{\infty}(\gamma\lambda)^{t-k}({\phi}_{t}-\gamma\mathbb{E}_{\pi}[\phi(S_{t+1},\cdot)])^{\top}\}\Big]\\ &=&\mathbb{E}\Big[{\phi}_{t}\Big\{{\phi}^{\top}_{t}-\gamma(1-\lambda)\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{\pi}[\phi^{\top}(S_{t+1},\cdot)]\Big\}\Big]\\ &=&\mathbb{E}[\phi_{t}\phi^{\top}_{t}]-\gamma(1-\lambda)\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}\Big\{\phi_{t}^{\top}({P}^{\mu})^{t}{P}^{\pi}\phi_{t}\Big\}\\ &=&\mathbb{E}\Big[\phi_{0}\Big(\sum_{t=0}^{\infty}(\gamma\lambda)^{t}({P}^{\mu})^{t}\Big)(I-\gamma{P}^{\pi})\phi^{\top}_{0}\Big]\\ &=&{\Phi}^{\top}{\Xi}(I-\gamma\lambda{P}^{\mu})^{-1}({I}-\gamma{P}^{\pi}){\Phi}. 
\end{eqnarray*} \textbf{Step 3}: Combining Step 1 and Step 2, we have \[ A_{\sigma}={\Phi}^{\top}\Xi(I-\gamma\lambda{P}^{\mu})^{-1}((1-\sigma)\gamma{P}^{\pi}+\sigma\gamma{P}^{\mu}-{I}){\Phi}. \] The derivation of $b_{\sigma}=\mathbb{E}[\hat{b}_{k}]$ is similar to the steps above and we omit it: \[ b_{\sigma}=\Phi^{\top}\Xi (I-\gamma\lambda{P}^{\mu})^{-1}(\sigma\mathcal{R}^{\mu}+(1-\sigma)\mathcal{R}^{\pi}). \] \textbf{Step 4}: Taking the expectation of Eq.(\ref{Eq:semi-gradient}), by Steps 1, 2 and 3, we obtain Eq.(\ref{Eq:linear_eq}). \clearpage \section{Appendix B: Proof of Proposition \ref{prop1}} \textbf{Proposition \ref{prop1}}\emph{ Let $e_{k}$ be the eligibility-trace vector generated as $e_{k}=\lambda\gamma e_{k-1}+\phi_{k}$, and let \begin{flalign} \nonumber \Delta_{\sigma,k}&=\gamma\{\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)\}-\phi_{k},\\ \nonumber v_{\sigma}(\theta_{k})&=(1-\sigma)\{\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)-\lambda\phi_{k+1}\}e^{\top}_{k}+\sigma(1-\lambda)\phi_{k+1}e^{\top}_{k}, \end{flalign} then we have \begin{flalign} \nonumber \theta_{k+1}&=\theta_{k}-\alpha_{k}\frac{1}{2}\nabla_{\theta} \emph{MSPBE}(\theta,\lambda)|_{\theta=\theta_{k}}\\ \nonumber &=\theta_{k}-\alpha_{k}\mathbb{E}[\Delta_{\sigma,k}e_{k}^{\top}]\omega(\theta_{k})\\ \nonumber &=\theta_{k}+\alpha_{k}\{\mathbb{E}[\delta_{k}e_{k}]-\gamma\mathbb{E}[v_{\sigma}(\theta_{k})]\omega(\theta_{k})\}. 
\end{flalign} } \begin{proof} We compute the gradient of MSPBE($\theta,\lambda$) directly, \begin{flalign} \nonumber &-\frac{1}{2}\nabla_{\theta} \text{MSPBE}(\theta,\lambda)|_{\theta=\theta_{k}}\\ \nonumber =&-\frac{1}{2}\nabla_{\theta}\Big(\mathbb{E}[\delta_{k}e_{k}]^{\top}\mathbb{E}[\phi_{k}\phi^{\top}_{k}]^{-1}\mathbb{E}[\delta_{k}e_{k}]\Big)\\ \nonumber =&-(\nabla_{\theta}\mathbb{E}[\delta_{k}e_{k}]^{\top})\mathbb{E}[\phi_{k}\phi^{\top}_{k}]^{-1}\mathbb{E}[\delta_{k}e_{k}]\\ \nonumber =&-\mathbb{E}\Big[\underbrace{\Big(\gamma(\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot))-\phi_{k}\Big)}_{\Delta_{\sigma,k}}e^{\top}_{k}\Big]\omega(\theta_k)\\ \nonumber =&-\mathbb{E}\Big[\gamma(\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot))e^{\top}_{k}-\phi_{k}e^{\top}_{k}\Big]\omega(\theta_k)\\ \nonumber =&\mathbb{E}\Big[\phi_{k}\phi^{\top}_{k}+\phi_{k+1}\gamma\lambda e_{k}^{\top}-\gamma(\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot))e^{\top}_{k}\Big]\omega(\theta_k) \\ \label{prop1-1} =&\mathbb{E}[\delta_{k}e_{k}]-\gamma\mathbb{E}\Big[\sigma(1-\lambda)\phi_{k+1}e^{\top}_{k}+(1-\sigma)\Big(\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)-\lambda\phi_{k+1}\Big)e^{\top}_{k}\Big]\omega(\theta_k). \end{flalign} Substituting Eq.(\ref{prop1-1}) into Eq.(\ref{theata_update}) yields Eq.(\ref{theta_uptate_2}). 
\end{proof} \section{Appendix C} In fact, $\Delta_{\sigma,k}=(1-\sigma)[\gamma\mathbb{E}_{\pi}\phi(S_{k+1},\cdot)-\phi(S_{k},A_{k})]+\sigma[\gamma\phi(S_{k+1},A_{k+1})-\phi(S_{k},A_{k})]$ \begin{eqnarray*} \mathbb{E}[\hat{A}_{k}]&=&\mathbb{E}[\Delta_{\sigma,k}e_{k}]\\ &=&\mathbb{E}[\Delta_{\sigma,k}(\sum^{k}_{i=0}(\lambda\gamma)^{k-i}\phi(S_{i},A_{i}))]\\ &{=}&\mathbb{E}[\phi(S_{k},A_{k})\Delta_{\sigma,k}+\lambda\gamma\phi(S_{k-1},A_{k-1})\Delta_{\sigma,k}+\sum^{\infty}_{t=k+1}(\lambda\gamma)^{t-k+1}\phi(S_{k},A_{k})\Delta_{\sigma,t+1}]\\ &=&\mathbb{E}[\phi(S_{k},A_{k})\Delta_{\sigma,k}+\lambda\gamma\phi(S_{k},A_{k})\Delta_{\sigma,k+1}+\sum^{\infty}_{t=k+1}(\lambda\gamma)^{t-k+1}\phi(S_{k},A_{k})\Delta_{\sigma,t+1}]\\ &=&\mathbb{E}[\phi(S_{k},A_{k})\Delta_{\sigma,k}+\sum^{\infty}_{t=k}(\lambda\gamma)^{t-k+1}\phi(S_{k},A_{k})\Delta_{\sigma,t+1}]\\ &=&\mathbb{E}[\phi(S_{k},A_{k})\Delta_{\sigma,k}+\sum^{\infty}_{t=k+1}(\lambda\gamma)^{t-k}\phi(S_{k},A_{k})\Delta_{\sigma,t}]\\ &=&\mathbb{E}[\sum^{\infty}_{t=k}(\lambda\gamma)^{t-k}\phi(S_{k},A_{k})\Delta_{\sigma,t}]\\ &=&A_{\sigma} \end{eqnarray*} \section{Appendix D: Lemma \ref{Borkar-two--timescale}} \begin{lemma}[\cite{borkar1997stochastic}] \label{Borkar-two--timescale} Consider the stochastic recursions of $x_{n},y_{n}$ given by \begin{flalign} \label{Borkar97-lemma-x} x_{n+1}&=x_{n}+a_{n}[g(x_{n},y_{n})+M^{(1)}_{n+1}],\\ \label{Borkar97-lemma-y} y_{n+1}&=y_{n}+b_{n}[h(x_{n},y_{n})+M^{(2)}_{n+1}],n\in\mathbb{N}. \end{flalign} If the following assumptions are satisfied: \begin{itemize} \item (A1) The step-sizes $\{a_{n}\},\{b_{n}\}$ are positive, satisfying \[ \sum_{n} a_{n}=\sum_{n} b_{n}=\infty, \sum_{n} a^{2}_{n}+b^{2}_{n}<\infty,\dfrac{b_{n}}{a_{n}}\rightarrow 0~~ \emph{as} ~~n\rightarrow \infty. \] \item (A2) The maps $g:\mathbb{R}^{d+k} \rightarrow \mathbb{R}^{d},h:\mathbb{R}^{d+k} \rightarrow \mathbb{R}^{k}$ are Lipschitz. 
\item (A3) The sequences $\{M^{(1)}_{n+1}\}_{n\in\mathbb{N}},\{M^{(2)}_{n+1}\}_{n\in\mathbb{N}}$ are martingale difference sequences w.r.t. the increasing $\sigma$-fields $ \mathcal{F}_{n} \overset{\text{def}}= \sigma(x_{m},y_{m},M^{(1)}_{m},M^{(2)}_{m}, m \leq n), n\in\mathbb{N}, $ satisfying \[\mathbb{E}[M^{(i)}_{n+1}|\mathcal{F}_{n}]=0,i=1,2, n\in\mathbb{N}.\] Furthermore, $\{M^{(i)}_{n+1}\}_{n\in\mathbb{ N}}, i=1,2$, are square-integrable with \[ \mathbb{E}[\|M^{(i)}_{n+1}\|^{2}|\mathcal{F}_{n}]\leq K(1+\|x_{n}\|^{2}+\|y_{n}\|^{2}), \] for some constant $K > 0$. \item (A4) For each $x\in\mathbb{R}^{d}$, the o.d.e. \[ \dot y(t) = h(x, y(t)) \] has a globally asymptotically stable equilibrium $\Omega(x)$ such that $\Omega:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}$ is Lipschitz. \item (A5) The o.d.e. \[\dot x(t) =g(x(t),\Omega(x(t)))\] has a globally asymptotically stable equilibrium $x^{*}$. \end{itemize} Then, the iterates (\ref{Borkar97-lemma-x}), (\ref{Borkar97-lemma-y}) converge to $(x^{*},\Omega(x^{*}))$ a.s. on the set $Q \overset{\emph{def}}=\{\sup_{n} \|x_{n}\|<\infty, \sup_{n} \|y_{n}\|<\infty\}.$ \end{lemma} \section{Appendix E} \subsection{Baird Star} \label{app-ex-Baird Star} Baird's Star is a well-known example of divergence in off-policy TD learning for all step-sizes. It considers an episodic MDP with seven states $\mathcal{S}=\{\mathtt{s}_1,\cdots,\mathtt{s}_7\}$ and two actions $\mathcal{A}$= \{$\mathtt{dashed}$ action, $\mathtt{solid}$ action\}. The $\mathtt{dashed}$ action takes the system to one of the first six states with equal probability, whereas the $\mathtt{solid}$ action takes the system to the state $\mathtt{s}_7$. The behavior policy $\mu$ and target policy $\pi$ select the $\mathtt{dashed}$ and $\mathtt{solid}$ actions with probabilities \[\mu(\mathtt{dashed}|\cdot)=\frac{6}{7}, \mu(\mathtt{solid}|\cdot)=\frac{1}{7}, \pi(\mathtt{solid}|\cdot)=1,\] which means that the target policy always takes the $\mathtt{solid}$ action. The reward is zero on all transitions.
The discount rate is $\gamma= 0.99.$ The features are chosen as \[\phi(\mathtt{s}_i) = 2\epsilon_i + (0, 0,0, 0, 0 ,0 ,0, 1)^{\top},\] where $\epsilon_i$ denotes the unit vector whose $i$-th component is $1$ and whose other components are all $0$. We used $\theta_{0} = (1, 1, 1 ,1, 1 ,1, 10, 1)^{\top}$ as the initial parameter vector for the methods that allow specifying a start estimate; TD learning is known to diverge for this initialization of the parameter vector \cite{dann2014policy,sutton2018reinforcement}. \subsection{Boyan Chain} \begin{figure}[htbp] \centering {\includegraphics[scale=0.6]{boyanchain_.pdf}} \caption { The dynamics of the Boyan Chain } \end{figure} The second benchmark MDP is the classic chain example from \cite{boyan2002technical}, which considers a chain of $14$ states $\mathcal{S} = \{\mathtt{s}_1, \cdots, \mathtt{s}_{14}\}$ and one action. Each transition from state $\mathtt{s}_i$ results in state $\mathtt{s}_{i+1}$ or $\mathtt{s}_{i+2}$ with equal probability and a reward of $-3$. If the agent is in the second-to-last state $\mathtt{s}_{13}$, it always proceeds to the last state with reward $-2$ and subsequently stays in this state forever with zero reward. The true value function, which is linearly decreasing from $\mathtt{s}_1$ to $\mathtt{s}_{14}$, can be represented perfectly. \begin{flalign} \nonumber P_{\mathtt{14}\times\mathtt{14}} = \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} & 0&0 & \cdots&0 & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2}&0 & \cdots&0 & 0 \\ 0 & 0 & 0 & \frac{1}{2}&\frac{1}{2} & \cdots&0 & 0 \\ \vdots&&&&&&& \vdots\\ 0 & 0 &0 & 0 & 0& \cdots&\frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & 0&\cdots&0 & 1 \\ 0 & 0 & 0 & 0 & 0&\cdots&0 & 1 \end{pmatrix},~~~ R_{\mathtt{14}\times\mathtt{1}} = \begin{pmatrix} -3 \\ -3 \\ -3 \\ \vdots\\ -3 \\ -2 \\ 0 \end{pmatrix} ~ \end{flalign} By the Bellman equation, we have \[ v=(I-\gamma P)^{-1}R~~\rightarrow (-26, -24, -22,\cdots, -4, -2 ,0 ) ,~~\text{as} ~\gamma\rightarrow~1.
\] In this paper, we chose a discount factor of $\gamma= 0.99$ and a four-dimensional feature representation with triangle-shaped basis functions covering the state space (Figure 7). \begin{figure}[h] \label{fig:app-boyan-chain-feature} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=7cm]{boyan_phi.pdf}} \caption{Feature activation for the Boyan chain benchmark. The state space is densely covered with triangle-shaped basis functions. The figure is adapted from \cite{dann2014policy}.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure} \section{Background and Notations} The standard reinforcement learning framework~\cite{sutton1998reinforcement} is often formalized as a \emph{Markov decision process} (MDP)~\cite{puterman2014markov}. It considers a 5-tuple $\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)$, where $\mathcal{S}$ denotes the set of all states and $\mathcal{A}$ denotes the set of all actions. At each time $t$, the agent is in a state $S_{t}\in\mathcal{S}$ and takes an action $A_{t}\in\mathcal{A}$; the environment then produces a reward $R_{t+1}$. $\mathcal{P} : \mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]$, where $P_{s s^{'}}^a=\mathcal{P}(S_{t}=s^{'}|S_{t-1}=s,A_{t-1}=a)$ is the probability of transitioning from state $s$ to $s^{'}$ under action $a$. $\mathcal{R} : \mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{1}$, with $\mathcal{R}_{s}^{a}=\mathbb{E}[R_{t+1}|S_{t}=s,A_{t}=a]$. The discount factor is $\gamma\in(0,1)$. A \emph{policy} is a probability distribution defined on $\mathcal{S}\times\mathcal{A}$; the \emph{target policy} $\pi$ is the policy that will be learned, and the \emph{behavior policy} $\mu$ is the policy used to generate behavior. If $\mu=\pi$, the algorithm is called \emph{on-policy} learning; otherwise it is \emph{off-policy} learning.
We assume that the Markov chain induced by the behavior policy $\mu$ is ergodic; then there exists a stationary distribution $\xi$ such that $\forall S_{0}\in\mathcal{S}$, \[\frac{1}{n}\sum_{k=1}^{n} P(S_{k}= s |S_{0})\rightarrow \xi(s),~\text{as}~ n\rightarrow\infty.\] We denote by $\Xi = \text{diag}\{\xi(s_{1}),\xi(s_{2}),\cdots,\xi(s_{|\mathcal{S}|})\}$ the diagonal matrix whose diagonal elements are the stationary distribution over states. For a given policy $\pi$, a key step in RL is to estimate the \emph{state-action value function} \[ q^{\pi}(s,a) = \mathbb{E}_{\pi}[G_{t}|S_{t} = s,A_{t}=a], \] where $G_{t}=\sum_{k=0}^{\infty}\gamma^{k}R_{k+t+1}$ and $\mathbb{E}_{\pi}[\cdot|\cdot]$ stands for the expectation of a random variable with respect to the probability distribution induced by $\pi$. It is known that $q^{\pi}(s,a)$ is the unique fixed point of the \emph{Bellman operator} $\mathcal{B}^{\pi}$, \begin{flalign} \label{bellman-equation} \mathcal{B}^{\pi} q^{\pi}=q^{\pi}~~\text{(Bellman equation)}, \end{flalign} where the Bellman operator $\mathcal{B}^{\pi}$ is defined as \begin{flalign} \label{Eq:bellman_operator} \mathcal{B}^{\pi} q&=\mathcal{R}^{\pi}+\gamma {P}^{\pi}q, \end{flalign} with $P^{\pi}\in\mathbb{R}^{|\mathcal{S}| \times |\mathcal{S}|}$ and $R\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}$, whose corresponding elements are $ P^{\pi}_{ss^{'}}= \sum_{a \in \mathcal{A}}\pi(a|s)P^{a}_{ss^{'}},R(s,a)=\mathcal{R}_{s}^{a}. $ \subsection{Temporal Difference Learning and $\lambda$-Return} We cannot compute $q^{\pi}$ from the Bellman equation (\ref{bellman-equation}) directly in the model-free RL setting (where the agent has access to neither $\mathcal{P}$ nor $\mathcal{R}$ of the given MDP). In RL, temporal difference (TD) learning \cite{sutton1988learning} is one of the most important methods for the model-free setting. The $\lambda$-return is a multi-step TD method that uses a longer sequence of experienced rewards to learn the value function.
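When the model $(\mathcal{P},\mathcal{R})$ is known, the fixed point in Eq.(\ref{bellman-equation}) can be reached simply by iterating the Bellman operator. The following small sketch (not part of the original algorithms) iterates the state-value Bellman operator on the Boyan-chain MDP described in Appendix E, with $\gamma=1$ for simplicity, and recovers the linearly decreasing values stated there:

```python
# Value iteration with the Bellman operator on the Boyan chain (Appendix E):
# 14 states; s_1..s_12 move to the next or next-next state (prob 1/2, reward -3),
# s_13 moves to s_14 (reward -2), and s_14 is absorbing with zero reward.
n = 14
P = [[0.0] * n for _ in range(n)]
for i in range(12):
    P[i][i + 1] = P[i][i + 2] = 0.5
P[12][13] = 1.0
P[13][13] = 1.0
R = [-3.0] * 12 + [-2.0, 0.0]
gamma = 1.0  # undiscounted for simplicity; the chain terminates anyway

def bellman(v):
    """Apply B v = R + gamma * P v componentwise."""
    return [R[i] + gamma * sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]

v = [0.0] * n
for _ in range(n):   # the chain is acyclic, so n sweeps reach the fixed point
    v = bellman(v)

assert all(abs(a - b) < 1e-9 for a, b in zip(v, bellman(v)))  # B v = v
assert abs(v[0] + 26.0) < 1e-9 and abs(v[13]) < 1e-9          # v(s_1) = -26, v(s_14) = 0
```

Once the fixed point is reached, applying the operator again leaves the value vector unchanged, which is exactly the Bellman equation.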
\textbf{TD learning} One-step TD learning estimates the value function by taking actions according to the behavior policy, sampling the reward, and bootstrapping via the current estimate of the value function. Sarsa~\cite{rummery1994line}, Q-learning~\cite{watkins1989learning} and Expected-Sarsa~\cite{van2009theoretical} are typical one-step TD learning algorithms. From the viewpoint of sampling degree, TD learning methods fall into two categories: \emph{full-sampling} and \emph{pure-expectation}, which are discussed in depth in Sections 7.5 and 7.6 of \cite{sutton2018reinforcement} and in \cite{de2018multi}. Sarsa and Q-learning are typical full-sampling algorithms, which sample all transitions to learn the value function. Instead, pure-expectation algorithms take into account how likely each action is under the current policy; e.g., \emph{Tree-Backup}~\cite{precup2000eligibility} and Expected-Sarsa use the expectation of the state-action values to estimate the value function. The $\text{Q}^{\pi}(\lambda)$ algorithm~\cite{H2016} is also a pure-expectation algorithm, combining TD learning with eligibility traces. Harutyunyan et al.\shortcite{H2016} proved that when the behavior and target policies are sufficiently close, the off-policy $\text{Q}^{\pi}(\lambda)$ algorithm converges in both policy evaluation and control tasks. \textbf{$\lambda$-Return} For a trajectory, the $\lambda$-return is an average of all the $n$-step returns, each weighted proportionally to $\lambda^{n}$, $\lambda\in[0,1]$. Since the mechanisms of all the $\lambda$-return algorithms are similar, we only present the definition of the $\lambda$-return of Sarsa$(\lambda)$~\cite{sutton2018reinforcement} as follows, \begin{flalign} \label{l_return_sarsa_onpolicy} G_{t}^{\lambda,\text{S}}=(1-\lambda)\sum_{n=0}^{\infty}\lambda^{n}G_{t}^{t+n}, \end{flalign} where $G_{t}^{t+n}=\sum_{i=0}^{n}\gamma^{i}R_{t+i+1}+\gamma^{n+1}Q(S_{t+n+1},A_{t+n+1})$ is the $n$-step return of Sarsa from time $t$.
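As a quick numerical sketch of Eq.(\ref{l_return_sarsa_onpolicy}) on a short invented episode (the rewards and $Q$-values below are made up for illustration), the $\lambda$-return of a finite episode can be computed either as the truncated weighted average of $n$-step returns or by the standard backward recursion $G_{t}=R_{t+1}+\gamma[(1-\lambda)Q_{t+1}+\lambda G_{t+1}]$; the two agree:

```python
# lambda-return of Sarsa on a short invented episode (terminal value 0),
# computed (a) as a weighted average of n-step returns and (b) by the
# standard backward recursion; both give the same number.
lam, gamma = 0.8, 0.9
R = [None, 1.0, -2.0, 0.5, 3.0]    # R[t+1] is received after step t
Q = [0.4, -0.1, 0.7, 0.2]          # Q(S_t, A_t) for t = 0..3
T = 4                               # episode length

def n_step_return(t, n):
    """n-step return: n+1 rewards, then a bootstrap unless the episode ended."""
    g = sum(gamma**i * R[t + i + 1] for i in range(min(n + 1, T - t)))
    if t + n + 1 < T:
        g += gamma**(n + 1) * Q[t + n + 1]
    return g

# (a) truncated weighted average: the tail weight lam^(T-1) goes to the full return
G_direct = ((1 - lam) * sum(lam**n * n_step_return(0, n) for n in range(T - 1))
            + lam**(T - 1) * n_step_return(0, T - 1))

# (b) backward recursion G_t = R_{t+1} + gamma * ((1 - lam) * Q_{t+1} + lam * G_{t+1})
G = 0.0
for t in reversed(range(T)):
    q_next = Q[t + 1] if t + 1 < T else 0.0
    G = R[t + 1] + gamma * ((1 - lam) * q_next + lam * G)

assert abs(G - G_direct) < 1e-12
```

The backward recursion is what makes the $\lambda$-return implementable in a single sweep over an episode.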
After some simple algebra, the $\lambda$-return from time $0$ can be rewritten as a sum of TD errors, \begin{flalign} \label{lam_oprator} G_{0}^{\lambda,\text{S}} =Q(S_0,A_0)+\sum_{t=0}^{\infty}(\lambda\gamma)^{t}\delta_{t}^{\text{S}}, \end{flalign} where $\delta_{t}^{\text{S}}=R_{t+1}+\gamma Q_{t+1} - Q_{t}$, and $Q_{t}\overset{\text{def}}=Q(S_t,A_t)$. \subsection{A Unified View} In this section, we introduce the existing methods that unify full-sampling and pure-expectation algorithms. \textbf{$\text{Q}(\sigma)$ Algorithm} Recently, Sutton and Barto\shortcite{sutton2018reinforcement} and De Asis et al.\shortcite{de2018multi} proposed a new TD algorithm, $\text{Q}(\sigma)$, that unifies Sarsa and Expected-Sarsa \footnote{For the multi-step case, $\text{Q}(\sigma)$ unifies \emph{$n$-step Sarsa} and \emph{$n$-step Tree-Backup}~\cite{precup2000eligibility}. For more details, please refer to \cite{de2018multi}.}. $\text{Q}(\sigma)$ estimates the value function as a weighted average of the Sarsa and Expected-Sarsa targets through a \emph{sampling parameter} $\sigma$. For a transition ($S_{t},A_{t},R_{t+1},S_{t+1},A_{t+1}$), $\text{Q}(\sigma)$ updates the value function as follows, \begin{flalign} \nonumber Q(S_{t},A_{t})&=Q(S_{t},A_{t}) + \alpha_{t}\delta_{t}^{\pi,\sigma},\\ \label{eq:delta_sigma} \delta_{t}^{\pi,\sigma}&=\sigma\delta_{t}^{\text{S}}+(1-\sigma)\delta_{t}^{\text{ES}}, \end{flalign} where $\delta_{t}^{\text{ES}}=R_{t+1}+\gamma\mathbb{E}_{\pi}[Q(S_{t+1},\cdot)]- Q_{t}$ and $\mathbb{E}_{\pi}[Q(S_{t+1},\cdot)]=\sum_{a\in\mathcal{A}}\pi(a|S_{t+1})Q(S_{t+1},a)$. Q$(\sigma)|_{\sigma=0}$ reduces to Expected-Sarsa, while Q$(\sigma)|_{\sigma=1}$ is exactly Sarsa. Experiments by De Asis et al.\shortcite{de2018multi} show that an intermediate value of $\sigma\in(0,1)$, which results in a mixture of the existing algorithms, performs better than either extreme $\sigma=0$ or $\sigma=1$.
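A minimal sketch of the mixed TD error in Eq.(\ref{eq:delta_sigma}); the $Q$-values, policy, and reward below are invented purely for illustration:

```python
# Mixed TD error delta^sigma = sigma * delta_Sarsa + (1 - sigma) * delta_ExpSarsa
# for one transition; all numbers below are invented for illustration.
gamma = 0.9
Q_next = {'a1': 1.0, 'a2': 3.0}      # Q(S_{t+1}, .)
pi_next = {'a1': 0.25, 'a2': 0.75}   # pi(. | S_{t+1})
R, Q_t, A_next = 0.5, 2.0, 'a1'      # reward, Q(S_t, A_t), sampled next action

def delta_sigma(sigma):
    delta_s = R + gamma * Q_next[A_next] - Q_t                                # Sarsa
    delta_es = R + gamma * sum(pi_next[a] * Q_next[a] for a in Q_next) - Q_t  # Expected-Sarsa
    return sigma * delta_s + (1 - sigma) * delta_es

# sigma = 1 recovers the Sarsa TD error, sigma = 0 the Expected-Sarsa one,
# and intermediate sigma interpolates between them.
assert abs(delta_sigma(1.0) - (-0.6)) < 1e-9
assert abs(delta_sigma(0.0) - 0.75) < 1e-9
assert abs(delta_sigma(0.5) - 0.075) < 1e-9
```

The two extreme cases make the unification concrete: the sampled bootstrap (Sarsa) and the expected bootstrap (Expected-Sarsa) are the endpoints of a single one-parameter family.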
\textbf{Q($\sigma,\lambda$) Algorithm} Later, Yang et al.\shortcite{yang2018} extended Q($\sigma$) with eligibility traces and proposed Q($\sigma,\lambda$), which unifies Sarsa($\lambda$) and $\text{Q}^{\pi}(\lambda)$. $\text{Q}(\sigma,\lambda)$ updates the value function as: \begin{flalign} e(s,a)&=\gamma\lambda e(s,a)+\mathbb{I}\{(S_{t},A_{t})=(s,a)\},\\ Q_{t+1}(s,a)&= Q_{t}(s,a) + \alpha_{t}\delta_{t}^{\pi,\sigma}e(s,a), \end{flalign} where $\mathbb{I}$ is the indicator function and $\alpha_{t}$ is the step-size. We note that $\text{Q}(\sigma,\lambda)|_{\sigma=0}$ reduces to $\text{Q}^{\pi}(\lambda)$~\cite{H2016}, while Q$(\sigma,\lambda)|_{\sigma=1}$ is exactly Sarsa($\lambda$). The experiments in \cite{yang2018} show a conclusion similar to that of De Asis et al.\shortcite{de2018multi}: an intermediate value of $\sigma\in(0,1)$ achieves better performance than either extreme $\sigma=0$ or $\sigma=1$. Besides, Yang et al.\shortcite{yang2018} showed that for a trajectory $\{(S_{t},A_{t},R_{t})\}_{t\ge0}$, the total update of Q($\sigma,\lambda$) over a given episode is \begin{flalign} \label{mixedoperator} Q(S_{0},A_{0})+\sum_{t=0}^{\infty}(\lambda\gamma)^{t}\delta^{\pi,\sigma}_{t}, \end{flalign} which is an off-line version of Q($\sigma$) with eligibility traces. If $\sigma=1$, Eq.(\ref{mixedoperator}) is exactly Eq.(\ref{lam_oprator}). Finally, we introduce the \emph{mixed-sampling operator} $\mathcal{B}^{\pi,\mu}_{\sigma,\lambda}$~\cite{yang2018}, which is a high-level view of (\ref{mixedoperator}), \begin{flalign} \nonumber \mathcal{B}^{\pi,\mu}_{\sigma,\lambda}: \mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}&\rightarrow\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\\ \label{def:mixedoperator} q&\mapsto q+\mathbb{E}_{\mu}[\sum_{t=0}^{\infty}(\lambda\gamma)^{t}\delta^{\pi,\sigma}_{t}]. \end{flalign} \section{Convergence Analysis} In this section, we prove the convergence of the proposed Algorithm \ref{alg:algorithm1}.
To present the convergence result, we need some additional assumptions. \begin{assumption} \label{ass:positive_lr} The positive sequences $\{\alpha_{k}\}_{k\ge0}$, $\{\beta_{k}\}_{k\ge0}$ satisfy $\sum_{k=0}^{\infty}\alpha_{k}=\sum_{k=0}^{\infty}\beta_{k}=\infty,\sum_{k=0}^{\infty}\alpha^{2}_{k}<\infty,\sum_{k=0}^{\infty}\beta^{2}_{k}<\infty$ with probability one. \end{assumption} \begin{assumption}[Boundedness of Features, Rewards and Parameters \cite{liu2015finite}] \label{ass:boundedness} (1) The features $\{\phi_{t}, \phi_{t+1}\}_{t\ge0}$ have uniformly bounded second moments, where $\phi_{t}=\phi(S_{t}),\phi_{t+1}=\phi(S_{t+1})$. (2) The reward function has uniformly bounded second moments. (3) There exists a bounded set $D_{\theta}\times D_{\omega}$ such that $(\theta_{k},\omega_{k})\in D_{\theta}\times D_{\omega}$ for all $k$. \end{assumption} Assumption \ref{ass:boundedness} guarantees that the matrices $A_{\sigma}$ and $M$ and the vector $b_{\sigma}$ are uniformly bounded. After some simple algebra, we have $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}]$; we provide the proof in Appendix C. The following Assumption \ref{invA} implies that $A_{\sigma}^{-1}$ is well-defined. \begin{assumption} \label{invA} $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}]$ is non-singular, where $\Delta_{k,\sigma}$ is defined in Eq.(\ref{Delta}). \end{assumption} \begin{theorem}[Convergence of Algorithm \ref{alg:algorithm1}] \label{theorem2} Consider the iterates $(\theta_{k},\omega_k)$ generated by (\ref{gq_update1}) and (\ref{gq_update2}), where $\beta_{k}=\eta_{k}\alpha_{k}$ with $\eta_{k}\rightarrow0$ as $k\rightarrow\infty$, and $\alpha_{k},\beta_k$ satisfy Assumption \ref{ass:positive_lr}. Suppose the sequence $\{(\phi_{k},R_{k},\phi_{k+1})\}_{k\ge0}$ satisfies Assumption \ref{ass:boundedness}, and $A_{\sigma}=\mathbb{E}[e_{k}\Delta_{k,\sigma}]$ satisfies Assumption \ref{invA}.
Let $G(\omega,\theta)=-A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta+b_{\sigma})$ and $H(\omega,\theta)=A_{\sigma}\theta+b_{\sigma}-M \omega$. Then $(\theta_{k},\omega_k)$ converges to $(\theta^{*},\omega^{*})$ with probability one, where $(\theta^{*},\omega^{*})$ is the unique globally asymptotically stable equilibrium of the ordinary differential equations (ODEs) $\dot\theta(t)=G(\Omega(\theta),\theta)$ and $\dot\omega(t)=H(\omega,\theta)$, respectively, and $\Omega(\theta)$ is the map $ \theta\mapsto M^{-1}(A_{\sigma}\theta+b_\sigma)$. \end{theorem} \begin{proof} We use the ordinary differential equation (ODE) method \cite{borkar1997stochastic}, i.e., Theorem 2 of Chapter 6 in \cite{borkar2009stochastic}, to prove the above result. Let $M_{k}=\delta_{k}e_{k}-\gamma v_{\sigma}(\theta_{k})\omega_{k}-M^{-1}(A_{\sigma}\theta_{k}+b_{\sigma})$ and $N_{k}=(\hat{A}_{k}-A_{\sigma})\theta_k+(\hat{b}_{k}-b_{\sigma})-(\hat{M}_{k}-M)\omega_{k}$. First, we rewrite the iterations (\ref{gq_update1}) and (\ref{gq_update2}) as follows \begin{flalign} \label{gq_update1_1} \theta_{k+1}&=\theta_{k}+\alpha_{k}(M^{-1}(A_{\sigma}\theta_k+b_{\sigma})+M_k),\\ \label{gq_update2_1} \omega_{k+1}&=\omega_{k}+\beta_{k}(H(\theta_{k},\omega_{k})+N_{k}). \end{flalign} Now we apply Theorem 2 in \cite{borkar2009stochastic} (or in \cite{borkar1997stochastic}), which requires the verification of the following conditions: (I) Both of the functions $G$ and $H$ are Lipschitz. This is immediate from Assumptions \ref{ass:boundedness}-\ref{invA}. (II) Let the $\sigma$-field $\mathcal{F}_{k}=\sigma\{\theta_{t},\omega_{t};t\leq k\}$; then $\mathbb{E}[M_{k}|\mathcal{F}_k]=\mathbb{E}[N_{k}|\mathcal{F}_k]=0$. Furthermore, $\{M_{k}\}_{k\in\mathbb{ N}}$ and $\{N_{k}\}_{k\in\mathbb{ N}}$ are square-integrable with \begin{flalign} \label{MN} \mathbb{E}[\|M_{k}\|^{2}|\mathcal{F}_{k}],\mathbb{E}[\|N_{k}\|^{2}|\mathcal{F}_{k}]\leq K(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2}), \end{flalign} for some constant $K > 0$.
In fact, it is obvious that $\mathbb{E}[M_{k}|\mathcal{F}_k]=\mathbb{E}[N_{k}|\mathcal{F}_k]=0$. By Assumption \ref{ass:boundedness}, Eq.(\ref{A_k}) and Eq.(\ref{b_k}), there exist non-negative constants $K_1,K_2,K_3$ such that $\{\|\hat{A}_{k}\|^{2},\|A_{\sigma}\|^{2}\}\leq K_1$, $\{\|\hat{b}_{k}\|^{2},\|b_{\sigma}\|^{2}\}\leq K_2$, $\{\|\hat{M}_{k}\|^{2},\|M\|^{2}\}\leq K_3$, which implies that all the above terms are bounded. Then, \begin{flalign} \nonumber &\mathbb{E}[\|N_{k}\|^{2}|\mathcal{F}_{k}]\\ \nonumber =&\mathbb{E}\Big[\|(\hat{A}_{k}-A_{\sigma})\theta_{k}+(\hat{b}_{k}-b_{\sigma})-(\hat{M}_{k}-M)\omega_{k}\|^{2}\Big|\mathcal{F}_{k}\Big]\\ \label{a} \leq&\mathbb{E}\Big[\Big(\|(\hat{A}_{k}-A_{\sigma})\theta_{k}\|+\|\hat{b}_{k}-b_{\sigma} \| + \| (\hat{M}_{k}-M)\omega_{k}\| \Big)^{2}\Big|\mathcal{F}_{k}\Big]\\ \label{b} \leq& {\widetilde{K}_1}^{2}(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2}). \end{flalign} Eq.(\ref{b}) holds because, by Assumption \ref{ass:boundedness}, all the terms in Eq.(\ref{a}) are bounded; thus, for a sufficiently large positive $\widetilde{K}_1$, we have Eq.(\ref{b}). Also, by Assumption \ref{ass:boundedness}, $R_{k},\phi_k$ have uniformly bounded second moments, so $\mathbb{E}[\|M_{k}\|^{2}|\mathcal{F}_{k}]\leq{\widetilde{K}_2}^{2}(1+\|\theta_{k}\|^{2}+\|\omega_{k}\|^{2})$ for some $\widetilde{K}_2>0$. Thus, Eq.(\ref{MN}) holds. (III) For each $\theta\in\mathbb{R}^{p}$, the ODE $ \dot \omega(t) = H(\omega, \theta) $ has a unique globally asymptotically stable equilibrium $\Omega(\theta)$ such that $\Omega:\mathbb{R}^{p}\rightarrow\mathbb{R}^{k}$ is Lipschitz. In fact, for a fixed $\theta$, by Assumption \ref{invA}, $\omega^{*}=M^{-1}(A_{\sigma}\theta+b_{\sigma})$ is the unique globally asymptotically stable equilibrium of the ODE $\dot \omega(t) = A_{\sigma}\theta+b_{\sigma}-M\omega(t).$ Let $\Omega(\theta): \theta\mapsto M^{-1}(A_{\sigma}\theta+b_{\sigma})$; it is easy to verify that $\Omega$ is Lipschitz.
For a fixed $\theta$, let $H_{\infty}(\omega,\theta)$ be the function $H_{\infty}(\omega,\theta)=\lim_{r\rightarrow\infty}\dfrac{H(r\omega,\theta)}{r}=-M\omega.$ Now we consider the ODE \begin{flalign} \label{ode:h-infty} \dot{\omega}(t)=H_{\infty}(\omega,\theta)=-M\omega(t). \end{flalign} Since $M$ is a positive definite matrix, the origin is a globally asymptotically stable equilibrium of the ODE (\ref{ode:h-infty}). We have thus verified the conditions of Theorem 2.2 in~\cite{borkar2000ode}; then the following holds almost surely as $t\rightarrow\infty$: $ \|\omega_{t}-\omega^{*}\|\rightarrow0, $ which establishes the convergence of the iteration (\ref{gq_update2}). (IV) The ODE $\dot \theta(t) =G\big(\Omega(\theta(t)),\theta(t)\big)$ has a unique globally asymptotically stable equilibrium $\theta^{*}$. Now, we consider the iteration (\ref{gq_update1_1}) associated with the following ODE $ \dot{\theta}(t)=-\big(\gamma\mathbb{E}[(\sigma\phi_{k+1}+(1-\sigma)\mathbb{E}_{\pi}\phi(S_{k+1},\cdot))e_{k}^{\top}]M^{-1}-I\big)\mathbb{E}[\delta_{k}e_{k}|\theta(t)] $, which can be rewritten as \begin{flalign} \label{ode-theta-1} \dot{\theta}(t)&=-\mathbb{E}[\Delta_{k,\sigma}e_{k}^{\top}]M^{-1}(A_{\sigma}\theta(t)+b_\sigma) \\ \label{ode-theta-2} &=-A^{\top}_{\sigma}M^{-1}(A_{\sigma}\theta(t)+b_\sigma) \overset{\text{def}}=G(\theta(t)), \end{flalign} where Eq.(\ref{ode-theta-2}) holds due to $\mathbb{E}[\delta_{k}e_{k}|\theta(t)]=A_{\sigma}\theta(t)+b_\sigma$. By Assumption \ref{invA}, $A_\sigma$ is invertible; then it is clear that $\theta^{*}=-A_{\sigma}^{-1}b_{\sigma}$ is the unique globally asymptotically stable equilibrium of the ODE (\ref{ode-theta-2}). Let $G_{\infty}(\theta)$ be the function defined by $ G_{\infty}(\theta)=\lim_{r\rightarrow\infty}\frac{G(r\theta)}{r} $; thus $G_{\infty}(\theta)=-A^{\top}_{\sigma} M^{-1} A_{\sigma}\theta$. Now we consider the following ODE \begin{flalign} \label{ode-G} \dot{\theta}(t)=G_{\infty}(\theta(t)).
\end{flalign} By Assumption \ref{invA}, $A_\sigma$ is invertible, and $M^{-1}$ is positive definite; thus $A^{\top}_{\sigma} M^{-1} A_{\sigma}$ is also positive definite. Hence the ODE (\ref{ode-G}) has a unique globally asymptotically stable equilibrium: the origin. \end{proof} \section{Experiment} In this section, we test both the policy evaluation and control capabilities of the proposed GQ$(\sigma,\lambda)$ algorithm and validate the trade-off between full-sampling and pure-expectation on some standard domains. For all experiments, we set the hyperparameter $\sigma$ as follows: $\sigma\sim\mathcal{N}(\mu, 0.01^2)$, where $\mu$ ranges dynamically from 0.02 to 0.98 in steps of 0.02, and $\mathcal{N}(\cdot,0.01^2)$ is a Gaussian distribution with standard deviation $0.01$. In the following paragraphs, we use the term \emph{dynamic} $\sigma$ to refer to this way of setting $\sigma$. \subsection{Policy Evaluation Task} We employ three typical domains in RL for policy evaluation: Baird Star \cite{baird1995residual}, Boyan Chain \cite{boyan2002technical} and linearized Cart-Pole balancing. \textbf{Domains}~ Baird Star is a well-known example of divergence in off-policy TD learning, which considers $7$ states $\mathcal{S}=\{\mathtt{s}_1,\cdots,\mathtt{s}_7\}$ and two actions $\mathcal{A}$= \{$\mathtt{dashed}$, $\mathtt{solid}$\}. The behavior policy $\mu$ selects the $\mathtt{dashed}$ and $\mathtt{solid}$ actions with $\mu(\mathtt{dashed}|\cdot)=\frac{6}{7}$ and $\mu(\mathtt{solid}|\cdot)=\frac{1}{7}$. The target policy always takes the $\mathtt{solid}$ action: $\pi(\mathtt{solid}|\cdot)=1$. The second benchmark MDP is the classic chain example from \cite{boyan2002technical}, which considers a chain of $14$ states $\mathcal{S} = \{\mathtt{s}_1, \cdots, \mathtt{s}_{14}\}$. Each transition from state $\mathtt{s}_i$ results in state $\mathtt{s}_{i+1}$ or $\mathtt{s}_{i+2}$ with equal probability and a reward of $-3$. The behavior policy we chose is random.
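The dynamic-$\sigma$ schedule above can be sketched as follows; clipping to $[0,1]$ is our addition, to keep $\sigma$ a valid sampling degree:

```python
# Dynamic sigma: for each mu in {0.02, 0.04, ..., 0.98}, draw
# sigma ~ N(mu, 0.01^2); the clip to [0, 1] is our addition so that
# sigma stays a valid sampling degree.
import random

def dynamic_sigma(mu, std=0.01):
    return min(1.0, max(0.0, random.gauss(mu, std)))

random.seed(0)
mus = [round(0.02 * i, 2) for i in range(1, 50)]
samples = [dynamic_sigma(mu) for mu in mus]
assert all(0.0 <= s <= 1.0 for s in samples)
```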
Due to space limitations, we provide more details of the dynamics of the Boyan Chain and Baird Star, the chosen policies, and the features in Appendix E. Cart-Pole balancing is widely used for many RL tasks. Following~\cite{dann2014policy}, the target policy we use in this section is the optimal policy $\pi^{*}(a|s)= \mathcal{N}(a|{\beta_{1}^{*}}^{\top}s,(\sigma_{1}^{*})^{2})$, where the parameters $\beta_{1}^{*}$ and $\sigma_{1}^{*}$ are computed using dynamic programming. The features, chosen according to ~\cite{dann2014policy}, form an imperfect feature set: $\phi(s)=(1,s_{1}^{2},s_{2}^{2},s_{3}^{2},s_{4}^{2})^{\top}, ~\text{where}~s=(s_1,s_2,s_3,s_4)^{\top}.$ \textbf{Performance Measurement}~In this section, we use the empirical $\text{MSPBE}=\frac{1}{2}\|\hat{A}_{\sigma}\theta+\hat{b}_{\sigma}\|^{2}_{\hat{M}^{-1}}$ to evaluate the performance of all the algorithms, where $\hat{A}_\sigma$, $\hat{b}_\sigma$ and $\hat{M}$ are computed from their unbiased estimates (\ref{A_k}), $(\ref{b_k})$ and $\phi_{k}\phi^{\top}_{k}$; the features are presented in Appendix E. \textbf{Results Report} Figure 3 shows that dynamic $\sigma$ achieves the best performance on all three domains. Our results show that an intermediate value of $\sigma$, which results in a mixture of the full-sampling and pure-expectation algorithms, performs better than either extreme ($\sigma=0$ or $1$). This validates the trade-off between full-sampling and pure-expectation for policy evaluation on standard domains, and it also suggests that unifying disparate existing methods can create a better-performing algorithm. \subsection{Control Task} In this section, we test the control capability of the GQ$(\sigma,\lambda)$ algorithm on the mountain car domain, where the agent faces the task of driving an underpowered car up a steep mountain road. The agent receives a reward of $-1$ at every step until it reaches the goal region at the top of the hill.
Since the state space of this domain is continuous, we use the open tile coding software\footnote{\url{http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/RLtoolkit/tilecoding.html}} to extract features of states. \textbf{Empirical Performance Comparison} The performance shown in Figure 4 is an average over 100 runs, and each run contains 400 episodes. We set $\lambda=0.99$, $\gamma=0.99$, step-sizes $\alpha_{k}=\{10^{-2},2\times 10^{-2},10^{-3},2\times 10^{-3}\}$, and $ \eta_{k}=\{2^{0},2^{-1},\cdots,2^{-10}\}$. \begin{figure}[t] \centering \subfigure[] {\includegraphics[width=4.1cm,height=3.2cm]{final_alpha=0_001.pdf}} \subfigure[] {\includegraphics[width=4.1cm,height=3.2cm]{final_alpha=0_002.pdf}} \caption { Comparison of empirical performance with different step-sizes: (a) $\alpha_{k}=0.001$, (b) $\alpha_{k}=0.002$. } \end{figure} The result in Figure 4 shows that GQ($\sigma,\lambda$) with an intermediate $\sigma$ (between $0$ and $1$) performs better than the extreme cases ($\sigma=0~\text{and}~1$). This experiment further validates that unifying existing algorithms can create a better algorithm for RL. \textbf{Variance Comparison} Now, we investigate the variance of the performance of GQ($\sigma,\lambda$) during training. All parameters are set as before. We vary the size of the feature map $p$ from $2^{7}$ to $2^{11}$. The outcomes for different $p$ are very similar, so we only show the results for $p=1024$. Figure 5 shows that the least variance is achieved neither by $\sigma=0$ nor by $\sigma=1$; rather, the dynamic $\sigma$ attains the least variance. \textbf{Overall Presentation} Now, we give more comprehensive results on the trade-off between full-sampling and pure-expectation.
\begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline Case&I & II & III \\ \hline Percentage&\textbf{42.2\%} & \textbf{24.1\%} & 33.7\% \\ \hline \end{tabular} \label{tab:mc-all} \caption{Overall data statistics.} \end{table} We count how often each of the following three cases occurs over all tested values of $\sigma$: (I) GQ($\sigma,\lambda$) performs better than both $\sigma=0$ and $\sigma=1$; (II) GQ($\sigma,\lambda$) performs better than exactly one of $\sigma=0$ and $\sigma=1$; (III) GQ($\sigma,\lambda$) performs worse than both $\sigma=0$ and $\sigma=1$. The setting of $\sigma$ is the same as in the previous section, and the total number of tested $\sigma$ values reaches $51$. \begin{figure}[t] \centering {\includegraphics[width=6.5cm,height=5.0cm]{final_maxSize=1024.pdf}} \caption { Comparison of variance. The figure shows the results for $p=1024$. } \end{figure} \begin{table}[H] \centering \begin{tabular}{|l|l|c|c|c|} \hline \multicolumn{2}{|c|}{Case} & $p=512$ & $p=1024$ & $p=2048$ \\ \hline \multirow{3}{*}{$\alpha=0.001$} & I & \textbf{69.8\%} & 20.4\%& \textbf{55.1\%} \\ \cline{2-5} & II & 14.1\% & 36.7\% & 20.4\% \\ \cline{2-5} & III & 16.1\% & \textbf{42.9\%} & 24.5\% \\ \hline \multirow{3}{*}{$\alpha=0.002$} & I & \textbf{53.1\%} & 6.1\% & \textbf{49.0\%} \\ \cline{2-5} & II & 38.8\% & 16.3\% & 18.4\% \\ \cline{2-5} & III & 8.1\% & \textbf{77.6\%} & 32.6\% \\ \hline \end{tabular} \label{tab:mc} \caption{Percentage under various parameters.} \end{table} Table 1 shows that case I, in which GQ$(\sigma,\lambda)$ performs better than both $\sigma=0$ and $\sigma=1$, occurs more often than either case II or case III. Table 2 implies that GQ$(\sigma,\lambda)$ reaches the best performance, and that its advantage is more significant for $p=512$ and $2048$. This experiment further illustrates the trade-off between full-sampling and pure-expectation in RL for control tasks. \section{Conclusions} In this paper, we study the extension of tabular Q$(\sigma,\lambda)$ to the function approximation setting.
We analyze the divergence of Q$(\sigma,\lambda)$ with the semi-gradient method. To address the instability of the semi-gradient Q$(\sigma,\lambda)$ algorithm, we propose the GQ$(\sigma,\lambda)$ algorithm. Our theoretical results give a guarantee of convergence for the combination of a full-sampling algorithm and a pure-expectation algorithm, which is of great significance for reinforcement learning algorithms with function approximation. Finally, we conduct extensive experiments on some standard domains to show that GQ$(\sigma,\lambda)$ with a value $\sigma\in(0,1)$, which results in a mixture of the full-sampling and pure-expectation methods, performs better than either extreme $\sigma=0$ or $\sigma=1$. \section{Q$(\sigma,\lambda)$ with Semi-Gradient Method} In this section, we analyze the instability of extending tabular Q$(\sigma,\lambda)$ with linear function approximation by the semi-gradient method. We first need some notation for linear function approximation. When the dimension of $\mathcal{S}$ is huge, we cannot expect to obtain the value function accurately by tabular learning methods. We often use a linear function with a parameter ${\theta}$ to estimate $q^{\pi}(s,a)$ as follows, \[ q^{\pi}(s,a)\approx\phi^{\top}(s,a)\theta\overset{\text{def}}=\hat{Q}_{\theta}(s,a), \] where $\phi: \mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{p}$ is a \emph{feature map}; specifically, $\phi(s,a)=(\varphi_{1}(s,a),\varphi_{2}(s,a),\cdots,\varphi_{p}(s,a))^{\top},$ where each component is a function $\varphi_{i}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$. Then $\hat{Q}_{\theta}$ can be written in matrix form, \[\hat{Q}_{\theta}=\Phi\theta\approx q^{\pi},\] where $\Phi$ is a $|\mathcal{S}||\mathcal{A}|\times p$ matrix whose rows are the state-action features $\phi(s,a)$, $(s,a)\in\mathcal{S}\times\mathcal{A}$.
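A small sketch of the linear approximation $\hat{Q}_{\theta}=\Phi\theta$; the feature map and parameter vector below are made up purely for illustration:

```python
import numpy as np

# hat{Q}_theta(s, a) = phi(s, a)^T theta, or in matrix form Phi @ theta,
# where the rows of Phi enumerate the state-action pairs.
# The feature map below is invented purely for illustration (p = 3).
n_s, n_a, p = 2, 2, 3

def phi(s, a):
    f = np.zeros(p)
    f[(s + a) % (p - 1)] = 1.0   # arbitrary indicator feature
    f[-1] = 1.0                  # constant bias feature
    return f

theta = np.array([0.5, -1.0, 2.0])
Phi = np.stack([phi(s, a) for s in range(n_s) for a in range(n_a)])  # shape (|S||A|, p)
q_vec = Phi @ theta

# Each row of Phi @ theta equals the per-pair inner product phi(s, a)^T theta.
assert Phi.shape == (n_s * n_a, p)
assert np.isclose(q_vec[0], phi(0, 0) @ theta)
```

The point of the matrix form is that learning reduces to adjusting the $p$-dimensional vector $\theta$ rather than one entry per state-action pair.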
\subsection{Semi-Gradient Method} For a trajectory $\{(S_{k},A_{k},R_{k})\}_{k\ge0}$, we define the update rule of Q($\sigma,\lambda$) with the semi-gradient method (which treats the target $G_{k}^{\lambda}$ as a constant when differentiating) as follows, \begin{flalign} \nonumber \theta_{k+1} &=\theta_{k}-\frac{\alpha_{k}}{2}\nabla_{\theta}\big(G_{k}^{\lambda}-\hat{Q}_{\theta}(S_{k},A_{k})\big)^2|_{\theta=\theta_k}\\ \nonumber &=\theta_{k}-\alpha_{k}\{G_{k}^{\lambda}-\hat{Q}_{\theta_{k}}(S_{k},A_{k})\}\nabla_{\theta}(-\hat{Q}_{\theta}(S_{k},A_{k}))|_{\theta=\theta_k}\\ \label{Eq:semi-gradient} &=\theta_{k}+\alpha_{k}\{\sum_{t=k}^{\infty}(\lambda\gamma)^{t-k}\delta^{\pi,\sigma}_{k,t}(\theta_{k})\}\phi(S_{k},A_{k}), \end{flalign} where $\alpha_{k}$ is the step-size and $G_{k}^{\lambda}$ is an off-line estimate of the value function according to Eq.(\ref{mixedoperator}); specifically, for each $k\ge1$, \begin{flalign} \label{G} G_{k}^{\lambda}&=\theta_{k}^{\top}\phi_{k}+\sum_{t=k}^{\infty}(\lambda\gamma)^{t-k}\delta^{\pi,\sigma}_{k,t}(\theta_{k}),\\ \nonumber \delta^{\pi,\sigma}_{k,t}(\theta_{k})&=\sigma\delta_{t}^{\text{S}}(\theta_{k})+(1-\sigma)\delta_{t}^{\text{ES}}(\theta_{k}), \end{flalign} where $ \delta_{t}^{\text{S}}(\theta_{k})=R_{t}+\gamma \theta_{k}^{\top}\phi_{t+1}-\theta_{k}^{\top}\phi_{t},~~ \delta_{t}^{\text{ES}}(\theta_{k})= R_{t}+\gamma\mathbb{E}_{\pi}[\theta_{k}^{\top}\phi(S_{t+1},\cdot)]-\theta_{k}^{\top}\phi_{t}, $ and $\phi_{t}\overset{\text{def}}=\phi(S_{t},A_{t})$. \subsection{Instability Analysis} Now, we show that the iteration (\ref{Eq:semi-gradient}) is unstable.
Consider the sequence $\{\theta_k\}_{k\ge0}$ generated by iteration (\ref{Eq:semi-gradient}); then the following holds, \begin{flalign} \label{Eq:linear_eq} \mathbb{E}[\theta_{k+1}|\theta_{0}]=\mathbb{E}[\theta_{k}|\theta_{0}]+\alpha_{k}(A_{\sigma}\hspace{0.05cm}\mathbb{E}[\theta_{k}|\theta_{0}]+b_{\sigma}), \end{flalign} where $A_{\sigma}=\mathbb{E}[\hat{A}_k]$ and $b_{\sigma}=\mathbb{E}[\hat{b}_k]$, with \begin{flalign} \nonumber \hat{A}_{k}&=\phi_{k}\sum_{t=k}^{\infty}(\lambda\gamma)^{t-k}\big(\sigma(\gamma\phi_{t+1}-\phi_{t})+\\ \label{A_k} &~~~~~~~~~~~~~~~~~~~~(1-\sigma)[\gamma\mathbb{E}_{\pi}\phi(S_{t+1},\cdot)-\phi_{t}]\big)^{\top},\\ \label{b_k} \hat{b}_{k}&=\sum_{t=k}^{\infty}(\lambda\gamma)^{t-k}R_{t}\phi_{t}. \end{flalign} Furthermore, \begin{flalign} A_{\sigma} \nonumber &={\Phi}^{\top}\Xi(I-\gamma\lambda{{P}}^{\mu})^{-1}((1-\sigma)\gamma{P}^{\pi}+\sigma\gamma{P}^{\mu}-{I}){\Phi},\\ b_{\sigma} &={\Phi}^{\top}\Xi (I-\gamma\lambda{{P}}^{\mu})^{-1}(\sigma\mathcal{R}^{\mu}+(1-\sigma)\mathcal{R}^{\pi}). \end{flalign} Eq.(\ref{Eq:linear_eq}) plays a critical role in our analysis; we provide its proof in Appendix A. Following the same discussion as Tsitsiklis and Van Roy~\shortcite{tsitsiklis1997analysis} and Sutton, Mahmood, and White~\shortcite{sutton2016emphatic}, under the conditions of Proposition 4.8 proved by Bertsekas and Tsitsiklis \shortcite{bertsekas1995neuro}, \emph{if $A_{\sigma}$ is a negative definite matrix, then the sequence $\theta_{k}$ generated by iteration (\ref{Eq:semi-gradient}) converges. By (\ref{Eq:linear_eq}), $\theta_{k}$ converges to the unique TD fixed point $\theta^{*}$}: \begin{flalign} \label{TD-fixed-point} A_{\sigma}\theta^{*}+b_{\sigma}=0. \end{flalign} In the on-policy case, for all $\sigma\in[0,1]$, \begin{flalign} \label{Eq:on_policy_key_matrix} A_{\sigma}={\Phi}^{\top}\Xi(I-\gamma\lambda{{P}}^{\pi})^{-1}(\gamma{P}^{\pi}-{I}){\Phi}.
\end{flalign} It has been shown that $A_{\sigma}$ in Eq.(\ref{Eq:on_policy_key_matrix}) is negative definite (e.g. Section 9.4 in \cite{sutton2018reinforcement}), thus iteration~(\ref{Eq:semi-gradient}) is convergent: it converges to the $\theta^{*}$ satisfying (\ref{TD-fixed-point}). Unfortunately, since the steady state-action distribution does not match the transition probability during off-policy learning, for $\sigma\in(0,1)$, $A_{\sigma}$ may not have an analog of~(\ref{Eq:on_policy_key_matrix}). Thus, unlike in on-policy learning, there is no guarantee that $A_{\sigma}$ keeps the negative definite property, and $\theta_{k}$ may diverge. We use a typical example to illustrate this. \subsection{An Unstable Counterexample} \begin{figure}[h] \centering \includegraphics[scale=0.7, trim={1mm 1mm 1mm 1mm}, clip]{fig_1.pdf} \caption{A counterexample \cite{touati2018convergent}. We assign the features $ \{(1, 0)^{\top}, (2, 0)^{\top}, (0, 1)^{\top}, (0, 2)^{\top}\}$ to the state-action pairs $\{(1,\mathtt{right}),(2,\mathtt{right}),(1,\mathtt{left}),(2,\mathtt{left})\}$; the target policy is $\pi(\mathtt{right} |\cdot)=1$ and the behavior policy is $\mu(\mathtt{right} | \cdot)=0.5$. } \end{figure} \begin{figure}[h] \label{theta_div} \centering \includegraphics[width=0.35\textwidth]{off_policy_theta.pdf} \caption{Demonstration of instability on the counterexample. The components of the parameter $\theta$ are shown in the figure. The initial weights are $\theta=(2,0)^{\top}$, $\gamma=0.99$, $\lambda=0.99$. We run $\sigma$ from 0 to 1 with step-size 0.01, and all the solutions are similar. We only show one result here.} \end{figure} Figure 2 shows the numerical solution of the parameter $\theta$ learned by (\ref{Eq:semi-gradient}) for the counterexample (Figure 1).
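The divergence mechanism behind Figure 2 can be reproduced with a stripped-down sketch. Beyond what the figure specifies, the sketch assumes that the $\mathtt{right}$ action deterministically moves the agent from state 1 to state 2 with reward 0, and it applies only the one-step ($\lambda=0$, $\sigma=1$) semi-gradient update on that transition, in the spirit of the classic $\theta\rightarrow2\theta$ example of Tsitsiklis and Van Roy. Each update then scales the first component of $\theta$ by $1+\alpha(2\gamma-1)>1$, so $\theta$ blows up.

```python
import numpy as np

# Stripped-down divergence sketch (assumed dynamics, see the text above):
# phi(1, right) = (1, 0), phi(2, right) = (2, 0), reward 0, gamma = 0.99.
gamma, alpha = 0.99, 0.1
phi_1r = np.array([1.0, 0.0])
phi_2r = np.array([2.0, 0.0])
theta = np.array([2.0, 0.0])  # same initial weights as in Figure 2

history = []
for _ in range(200):
    # Semi-gradient TD(0) update on the (1, right) -> (2, right) transition.
    delta = 0.0 + gamma * theta @ phi_2r - theta @ phi_1r
    theta = theta + alpha * delta * phi_1r
    history.append(theta[0])

# theta[0] is multiplied by 1 + alpha * (2 * gamma - 1) = 1.098 at each step.
```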
This simple example is striking because full-sampling and pure-expectation are arguably the simplest and best-understood methods, and the linear, semi-gradient method is arguably the simplest and best-understood kind of function approximation. This result shows that even the simplest combination of full-sampling and pure-expectation with function approximation can be unstable if the updates are not done according to the on-policy distribution. \section{Introduction} Reinforcement learning (RL) is a powerful tool for sequential decision-making problems. In RL, the agent's goal is to learn from experience and seek an optimal policy in a delayed-reward decision system. Tabular learning methods are at the core of RL algorithms and take a simple form: \emph{value functions} are represented as arrays, or tables \cite{sutton1998reinforcement}. One merit of tabular learning methods is that they converge to the optimal solution with solid theoretical guarantees \cite{singh2000convergence}. However, when the state space is enormous, we suffer from what Bellman called the ``curse of dimensionality" \cite{ernest1957dynamic}, and cannot expect to obtain the value function accurately by tabular learning methods. An efficient approach to the above problem is to use a parameterized function to approximate the value function \cite{sutton1998reinforcement}. Recently, Yang et al.~\shortcite{yang2018} proposed a new algorithm, Q$(\sigma,\lambda)$, which extends Q$(\sigma)$ \cite{sutton2018reinforcement,de2018multi} with eligibility traces. Q$(\sigma,\lambda)$ unifies Sarsa$(\lambda)$ \cite{rummery1994line} and Q$^{\pi}(\lambda)$ \cite{H2016}. However, the original theoretical results of Yang et al. \shortcite{yang2018} are limited to the tabular setting. In this paper, we extend the tabular Q$(\sigma,\lambda)$ algorithm to linear function approximation and propose the gradient Q$(\sigma,\lambda)$ (GQ$(\sigma,\lambda)$) algorithm.
The proposed GQ$(\sigma,\lambda)$ algorithm unifies the \emph{full-sampling} ($\sigma=1$) and \emph{pure-expectation} ($\sigma=0$) algorithms through the \emph{sampling degree} $\sigma$. Results show that GQ$(\sigma,\lambda)$ with a mixture of full-sampling and pure-expectation achieves better performance than either the full-sampling or the pure-expectation method alone. Unfortunately, it is not sound to extend Q$(\sigma,\lambda)$ by the semi-gradient method via the \emph{mean square value error} (MSVE) objective function directly, although the linear, semi-gradient method is arguably the simplest and best-understood kind of function approximation. In this paper, we provide a thorough analysis of the instability of Q$(\sigma,\lambda)$ with function approximation by the semi-gradient method. Furthermore, to address the above instability, we propose the GQ$(\sigma,\lambda)$ algorithm under the framework of the \emph{mean squared projected Bellman error} (MSPBE) \cite{sutton2009fast_a}. However, as pointed out by Liu et al.~\shortcite{liu2015finite}, we cannot get an unbiased estimate of the gradient of the MSPBE objective function. In fact, since the gradient update involves a product of expectations, an unbiased estimate cannot be obtained from a single sample; this is the double-sampling problem. Secondly, the gradient of the MSPBE objective function contains a term like $\mathbb{E}[\phi_{t} \phi_{t}^\top]^{-1}$, which also cannot be estimated from a single sample; this is the second bottleneck in applying the stochastic gradient method to optimize the MSPBE objective function. Inspired by the key step of the derivation of the TDC algorithm \cite{sutton2009fast_a}, we apply two-timescale stochastic approximation \cite{borkar2000ode} to address the double-sampling problem, and propose a convergent GQ$(\sigma,\lambda)$ algorithm which unifies the full-sampling and pure-expectation algorithms with function approximation.
Finally, we conduct extensive experiments on some standard domains to show that GQ$(\sigma,\lambda)$ with a value $\sigma\in(0,1)$, which mixes the full-sampling and pure-expectation methods, performs better than either extreme $\sigma=0$ or $\sigma=1$. \section{Preliminary} We consider the stochastic multi-armed bandit (MAB) problem with $K\ge2$ arms. At each time step $t\in\mathbb{N}$, the agent selects an arm $k\in[K]\overset{\text{def}}=\{1,2,\cdots,K\}$. Then the agent receives a random real-valued reward $X_{t,k}$ according to an unknown (fixed) distribution $\nu_{k}$ with expectation $\mu_{k}=\mathbb{E}[X_{t,k}]$. The random rewards obtained from playing an arm repeatedly are independent and identically distributed, and independent of the plays of the other arms. We assume that there is a unique optimal arm $k^{*}$ satisfying $\mu^{*}=\mu_{k^{*}}=\max_{k\in[K]}\mu_{k}$. \subsection{Unimodal Structure} In this paper, we mainly focus on the \emph{unimodal} bandit problem, where the expected reward function is unimodal over a partially ordered set of arms; this is an important class of problems studied in \cite{yu2011unimodal,combes2014unimodal}. Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be an undirected graph, where $\mathcal{V}=[K]$ is the set of vertices corresponding to arms, and an edge $(i,j)\in\mathcal{E}$ characterizes a partial order between the expected rewards of arms $i$ and $j$. Let $\nu=(\nu_{1},\cdots,\nu_{K})$; the partial order of $\mathcal{G}$ is unknown since the reward distribution vector $\nu$ is a priori unknown. The unimodal structure is made precise in the following Assumption \ref{ass-unimodal}. \begin{assumption}[Unimodality] \label{ass-unimodal} Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be an undirected graph.
The expected reward function is unimodal along every path $P=(k_1,\cdots,k_m)$ containing $k^{*}$, that is, \[\mu_{k_1}<\mu_{k_2}<\cdots<\mu_{k^{*}},\quad\mu_{k^{*}}>\cdots>\mu_{k_{m-1}}>\mu_{k_m},\] where $(k_{i},k_{i+1})\in\mathcal{E},1\leq i\leq m-1$. \end{assumption} This unimodality generalizes the classic unimodality, which corresponds to the special case where $\mathcal{G}$ is a line. \subsection{Regret} The performance of a policy $\pi$ for the MAB problem is measured by its regret. Let $\mathcal{A}=\{ k_{1},k_{2},\cdots,k_{T}\}$ be the sequence of arms generated by $\pi$, where $k_{i}\in[K],i=1,2,\cdots,T$. The regret of policy $\pi$ on a bandit is defined as \[R_{\pi}(T)=T\mu^{*}-\mathbb{E}\Big[\sum_{i=1}^{T}X_{i,k_i}\Big],\] where the expectation is taken with respect to the measure on outcomes induced by the interaction of $\pi$ and $\nu$. We now present the lower bound of $R_{\pi}(T)$ for the unimodal bandit problem. Without loss of generality, we consider uniformly good policies \cite{lai1985asymptotically}, i.e. $R_{\pi}(T)=o(T^{a})$ for all $a>0$. We use $N(i)$ to denote the neighborhood of arm $i$, specifically, $N(i)=\{j|(i,j)\in\mathcal{E}\}$. \begin{theorem}\emph{\cite{combes2014unimodal}} Let $\pi$ be a uniformly good policy. For a given unimodal bandit $\mathcal{G}=(\mathcal{V},\mathcal{E})$, the following holds, \begin{flalign} \label{lower-bound} \liminf_{T\rightarrow\infty} \frac{R_{\pi}(T)}{\log(T)}\ge\sum_{k\in N(k^*)}\frac{\mu^*-\mu_k}{KL(\mu_{k},\mu^*)}, \end{flalign} where $KL(\mu_{k},\mu^*)=\mu_{k}\log\frac{\mu_k}{\mu^*}+(1-\mu_{k})\log\frac{1-\mu_k}{1-\mu^*}$ is the KL divergence between Bernoulli distributions with respective means $\mu_k$ and $\mu^*$. \end{theorem} \section{Related Work}
\section{Introduction} In this paper, we study the third weight of generalized Reed-Muller codes.\\ We first introduce some notation:\\ Let $p$ be a prime number, $e$ a positive integer, $q=p^e$ and $\mathbb{F}_q$ a finite field with $q$ elements.\\ If $m$ is a positive integer, we denote by $B_m^q$ the $\mathbb{F}_q$-algebra of the functions from $\mathbb{F}_q^m$ to $\mathbb{F}_q$ and by $\mathbb{F}_q[X_1,\ldots,X_m]$ the $\mathbb{F}_q$-algebra of polynomials in $m$ variables with coefficients in $\mathbb{F}_q$. We consider the morphism of $\mathbb{F}_q$-algebras $\varphi: \mathbb{F}_q[X_1,\ldots,X_m]\rightarrow B_m^q$ which associates to $P\in\mathbb{F}_q[X_1,\ldots,X_m]$ the function $f\in B_m^q$ such that $$\textrm{$\forall x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $f(x)=P(x_1,\ldots,x_m)$.}$$ The morphism $\varphi$ is onto and its kernel is the ideal generated by the polynomials $X_1^q-X_1,\ldots,X_m^q-X_m$. So, for each $f\in B_m^q$, there exists a unique polynomial $P\in\mathbb{F}_q[X_1,\ldots,X_m]$ such that the degree of $P$ in each variable is at most $q-1$ and $\varphi(P)=f$. We say that $P$ is the reduced form of $f$ and we define the degree $\deg(f)$ of $f$ as the degree of its reduced form. The support of $f$ is the set $\{x\in\mathbb{F}_q^m:f(x)\neq0\}$ and we denote by $|f|$ the cardinality of its support (by identifying $B_m^q$ canonically with $\mathbb{F}_q^{q^m}$, $|f|$ is the Hamming weight of $f$). \\ For $0\leq r\leq m(q-1)$, the $r$th order generalized Reed-Muller code of length $q^m$ is $$R_q(r,m):=\{f\in B_m^q :\deg(f)\leq r\}.$$ For $1\leq r\leq m(q-1)-2$, the automorphism group of the generalized Reed-Muller code $R_q(r,m)$ is the affine group of $\mathbb{F}_q^m$ (see \cite{charpin_auto}). \\ For more results on generalized Reed-Muller codes, we refer to \cite{delsarte_poids_min}. \\ In the remainder of the article, we write $r=a(q-1)+b$, $0\leq a\leq m-1$, $1\leq b\leq q-1$.
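To illustrate these definitions, the weight $|f|$ can be computed by brute force in small cases. The sketch below (a Python illustration of ours, with $q=p$ prime so that arithmetic is simply modulo $p$) checks that a product of $b=2$ distinct linear factors in $x_1$ over $\mathbb{F}_3$ has weight $(q-b)q^{m-1}=3$.

```python
import itertools

# Brute-force Hamming weight |f| = #{x in F_q^m : f(x) != 0} for prime q.
q, m = 3, 2

def weight(f):
    return sum(1 for x in itertools.product(range(q), repeat=m) if f(x) % q != 0)

# f(x) = x1 * (x1 - 1): degree b = 2, vanishes on the hyperplanes x1 = 0 and x1 = 1.
f = lambda x: x[0] * (x[0] - 1)
assert weight(f) == (q - 2) * q ** (m - 1)  # = 3
```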
\\ In \cite{MR0275989}, by interpreting generalized Reed-Muller codes in terms of BCH codes, it is proved that the minimal weight of the generalized Reed-Muller code $R_q(r,m)$ is $(q-b)q^{m-a-1}$. The minimum weight codewords of generalized Reed-Muller codes are described in \cite{delsarte_poids_min} (see also \cite{Leducq2012581}). In his Ph.D. thesis \cite{erickson1974counting}, Erickson proves that if we know the second weight of $R_q(b,2)$, then we know the second weight for all generalized Reed-Muller codes. From a conjecture on blocking sets, Erickson conjectures that the second weight of $R_q(b,2)$ is $(q-b)q+b-1$. Bruen proves the conjecture on blocking sets in \cite{MR2766082}. Geil also proves this result in \cite{MR2411119} using Groebner bases. An alternative approach can be found in \cite{MR2592428}, where the second weight of most $R_q(r,m)$ is established without using Erickson's results. Second weight codewords have been studied in \cite{MR1384161}, \cite{MR2332391} and finally completely described in \cite{raey}. For $q=2$, small weights and small weight codewords are described in \cite{MR0401324}; the third weight for $r>(m-1)(q-1)+1$ is given in \cite{MR2411119}; further results on small weight codewords can be found in \cite{DBLP:journals/corr/abs-1203-4592}. In the following, we consider only $q\geq3$ and $r\leq m(q-1)+1$.\\ We first give some tools that we will use throughout the paper. Then we give an upper bound on the third weight of generalized Reed-Muller codes. Section 4 contains the main result of this article: we describe the third weight of generalized Reed-Muller codes under some restrictive conditions. In Section 5, we study more particularly the case of two variables, which is quite essential in the determination of the third weight. In Section 6, we describe the codewords reaching the third weight. In Section 7, we summarize the results obtained in this article.
This article ends with an Appendix which gives more details on the results of Section 3. \section{Preliminaries} \subsection{Notation and preliminary remark} Let $f\in B_m^q$, $\lambda\in\mathbb{F}_q$. We define $f_{\lambda}\in B_{m-1}^q$ by $$\forall x=(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}, \quad f_{\lambda}(x)=f(\lambda,x_2,\ldots,x_m).$$ Let $0\leq r\leq (m-1)(q-1)$ and $f\in R_q(r,m)$. We denote by $S$ the support of $f$. Consider $H$ an affine hyperplane in $\mathbb{F}_q^m$; by an affine transformation, we can assume that $x_1=0$ is an equation of $H$. Then $S\cap H$ is the support of $f_0\in R_q(r,m-1)$ or the support of $(1-x_1^{q-1})f\in R_q(r+(q-1),m)$. \subsection{Useful lemmas} \begin{lemme}\label{inter}Let $q\geq 3$, $m\geq 3$ and $S$ a set of points of $\mathbb{F}_q^m$ such that $\#S=uq^n<q^m$, $u\not\equiv0\mod q$. If for every hyperplane $H$, $\#(S\cap H)=0$, $\#(S\cap H)=wq^{n-1}$, $\#(S\cap H)=vq^{n-1}$ or $\#(S\cap H)\geq uq^{n-1}$, with $w<v<u$, then there exists an affine hyperplane $H$ such that $\#(S\cap H)=0$, $\#(S\cap H)=wq^{n-1}$ or $\#(S\cap H)=vq^{n-1}$.\end{lemme} \begin{preuve}Assume that for every hyperplane $H$, $\#(S\cap H)\geq uq^{n-1}$. Consider an affine hyperplane $H$; then for every hyperplane $H'$ parallel to $H$, $\#(S\cap H')\geq uq^{n-1}.$ Since $uq^{n}=\#S=\displaystyle\sum_{H'//H}\#(S\cap H')$, we get that for every hyperplane $H$, $\#(S\cap H)=uq^{n-1}$. \\Now consider $A$ an affine subspace of codimension 2 and the $(q+1)$ hyperplanes through $A$. These hyperplanes intersect only in $A$ and their union is equal to $\mathbb{F}_q^m$. So $$uq^{n}=\#S=(q+1)uq^{n-1}-q\#(S\cap A).$$ Finally, we get a contradiction if $n=1$ since $u\not\equiv 0\mod q$. Otherwise, $\#(S\cap A)=uq^{n-2}$. Iterating this argument, we get that for every affine subspace $A$ of codimension $k\leq n$, $\#(S\cap A)=uq^{n-k}$. \\Let $A$ be an affine subspace of codimension $n+1$ and $A'$ an affine subspace of codimension $n-1$ containing $A$.
We consider the $(q+1)$ affine subspaces of codimension $n$ containing $A$ and included in $A'$; then $$uq=\#(S\cap A')=(q+1)u-q\#(S\cap A),$$ which is absurd since $\#(S\cap A)$ is an integer and $u\not\equiv 0\mod q$. So there exists a hyperplane $H_0$ such that $\#(S\cap H_0)=vq^{n-1}$, $\#(S\cap H_0)=wq^{n-1}$ or $S$ does not meet $H_0$. \end{preuve} The following lemma is proved in \cite{delsarte_poids_min}. \begin{lemme}\label{DGMW1}Let $m\geq1$, $q\geq2$, $f\in B_m^q$ and $w\in\mathbb{F}_q$. If for all $(x_2,\ldots,x_m)$ in $\mathbb{F}_q^{m-1}$, $f(w,x_2,\ldots,x_m)=0$ then for all $(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x_1,\ldots,x_m)=(x_1-w)g(x_1,\ldots,x_m)$$ with $\deg_{x_1}(g)\leq\deg_{x_1}(f)-1$ and $\deg(g)\leq\deg(f)-1$.\end{lemme} The following lemmas are proved in \cite{erickson1974counting}. \begin{lemme}\label{2.7}Let $m\geq 2$, $q\geq3$, $0\leq r\leq m(q-1)$. If $f\in R_q(r,m)$, $f\neq0$ and there exist $y\in R_q(1,m)$ and $n$ elements $(\lambda_i)_{1\leq i\leq n}$ of $\mathbb{F}_q$ such that the hyperplanes of equation $y=\lambda_i$ do not meet the support of $f$, then $$|f|\geq(q-b)q^{m-a-1}+\left\{\begin{array}{ll}n(b-n)q^{m-a-2}&\textrm{if $n<b$}\\(n-b)(q-1-n)q^{m-a-1}&\textrm{if $n\geq b$}\end{array}\right.$$ where $r=a(q-1)+b$, $1\leq b\leq q-1$.\end{lemme} \begin{lemme}\label{3.9}Let $m\geq2$, $q\geq3$, $1\leq b\leq q-1$. Assume $f\in R_q(b,m)$ is such that $f$ depends only on $x_1$ and $g\in R_q(b-k,m)$, $1\leq k\leq b$. Then either $f+g$ depends only on $x_1$ or $|f+g|\geq (q-b+k)q^{m-1}$.\end{lemme} \begin{lemme}\label{3.5}Let $m\geq2$, $q\geq3$, $1\leq a\leq m-1$, $1\leq b\leq q-2$. Assume $f\in R_q(a(q-1)+b,m)$ is such that $\forall x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_1^{q-1})\widetilde{f}(x_2,\ldots,x_m)$$ and $g\in R_q(a(q-1)+b-k,m)$, $1\leq k\leq q-1$, is such that $(1-x_1^{q-1})$ does not divide $g$.
Then, either $|f+g|\geq (q-b+k)q^{m-a-1}$ or $k=1$.\end{lemme} \begin{lemme}\label{3.7}Let $m\geq2$, $q\geq3$, $1\leq a\leq m-2$, $1\leq b\leq q-2$ and $f\in R_q(a(q-1)+b,m)$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. If $f$ has no linear factor and there exists $k\geq1$ such that $(1-x_2^{q-1})$ divides $f_{\lambda_i}$ for $i\leq k$ but $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$ then, $$|f|\geq (q-b)q^{m-a-1}+k(q-k)q^{m-a-2}.$$\end{lemme} \begin{lemme}\label{2.14}Let $m\geq2$, $q\geq3$, $1\leq a\leq m$ and $f\in R_q(a(q-1),m)$ such that $|f|=q^{m-a}$ and $g\in R_q(a(q-1)-k,m)$, $1\leq k\leq q-1$, such that $g\neq0$. Then, either $|f+g|=kq^{m-a}$ or $|f+g|\geq(k+1)q^{m-a}$.\end{lemme} \begin{lemme}\label{2.15.1}Let $m\geq2$, $q\geq3$, $1\leq a\leq m-1$ and $f\in R_q(a(q-1),m)$. If for some $u$, $v\in\mathbb{F}_q$, $|f_u|=|f_v|=q^{m-a-1}$, then there exists $T$ an affine transformation fixing $x_1$ such that $$(f\circ T)_u=(f\circ T)_v.$$\end{lemme} \section{An upper bound on the third weight}\label{upper} \begin{theoreme}\label{3hyp}Let $q\geq 3$, $m\geq2$, $0\leq a\leq m-1$, $1\leq b\leq q-1$, then if $W_3$ is the third weight of $R_q(a(q-1)+b,m)$, we have \begin{itemize} \item If $b=1$ then, \begin{itemize}\item For $q=3$, $m\geq3$, $1\leq a\leq m-2$, $$W_3\leq3^{m-a}.$$ \item For $q=4$, $m\geq3$ and $1\leq a\leq m-2$, $$W_3\leq18.4^{m-a-2}.$$ \item For $q=3$ and $a=m-1$ or $q=4$ and $a=m-1$, $$W_3\leq2(q-1).$$ \item For $q\geq5$ and $1\leq a\leq m-1$, $$W_3\leq2(q-2)q^{m-a-1}.$$ \end{itemize} \item If $2\leq b\leq q-1$ \begin{itemize} \item For $q\geq5$, $0\leq a\leq m-2$ and $4\leq b\leq \lfloor\frac{q}{2}+2\rfloor$, $$W_3\leq(q-2)(q-b+2)q^{m-a-2}.$$ \item For $q\geq7$, $0\leq a\leq m-2$ and $\lceil\frac{q}{2}+2\rceil\leq b\leq q-1$ or $q\geq 4$, $0\leq a\leq m-2$ and $b=2$ or $q\geq 4$, $a=m-2$ and $b=3$ or $q=3$, $a\in\{0,m-2\}$ and $b=2$ $$W_3\leq(q-b+1)q^{m-a-1}.$$ \item For 
$q\geq4$, $m\geq3$, $0\leq a\leq m-3$ and $b= 3$, $$W_3\leq(q-1)^3q^{m-a-3}.$$ \item For $q=3$, $m\geq4$, $1\leq a\leq m-3$ and $b=2$, $$W_3\leq16.3^{m-a-3}.$$ \end{itemize} \end{itemize} \end{theoreme} \begin{preuve}\begin{itemize} \item If $b=1$ then, \begin{itemize}\item For $q=3$, $m\geq3$, $1\leq a\leq m-2$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^a(1-x_i^{2}).$$ Then, $f\in R_3(2a+1,m)$ and $|f|=3^{m-a}>8.3^{m-a-2}$. \item For $q=4$, $m\geq3$ and $1\leq a\leq m-2$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x_1,\ldots,x_m)=\prod_{i=1}^{a-1}(1-x_i^3)(x_a-u)(x_a-v)(x_{a+1}-w)(x_{a+2}-z)$$ with $u$, $v$, $w$, $z\in\mathbb{F}_q$ and $u\neq v$. Then, $f\in R_4(3a+1,m)$ and $|f|=18.4^{m-a-2}>4^{m-a}$ \item For $q=3$ and $a=m-1$ or $q=4$ and $a=m-1$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^{m-2}(1-x_i^{q-1})\prod_{j=1}^{q-2}(x_{m-1}-b_j)(x_{m}-c)$$ with $b_j\in\mathbb{F}_q$, $b_j\neq b_k$ for $j\neq k$ and $c\in\mathbb{F}_q$. Then, $f\in R_q((m-1)(q-1)+1,m)$ and $|f|=2(q-1)>q$ \item For $q\geq5$ and $1\leq a\leq m-1$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^{a-1}(1-x_i^{q-1})\prod_{j=1}^{q-2}(x_a-b_j)(x_{a+1}-u)(x_{a+1}-v)$$ with $b_j\in\mathbb{F}_q$, $b_j\neq b_k$ for $j\neq k$ and $u$, $v\in\mathbb{F}_q$, $u\neq v$. Then, $f\in R_q(a(q-1)+1,m)$ and $|f|=2(q-2)q^{m-a-1}>q^{m-a}$ \end{itemize} \item If $2\leq b\leq q-1$ \begin{itemize} \item For $q\geq5$, $0\leq a\leq m-2$ and $4\leq b\leq \lfloor\frac{q}{2}+2\rfloor$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^a(1-x_i^{q-1})\prod_{j=1}^{b-2}(x_{a+1}-b_j)(x_{a+2}-u)(x_{a+2}-v)$$ with $b_j\in\mathbb{F}_q$, $b_j\neq b_k$ for $j\neq k$ and $u$, $v\in\mathbb{F}_q$, $u\neq v$. 
Then, $f\in R_q(a(q-1)+b,m)$ and $|f|=(q-2)(q-b+2)q^{m-a-2}>(q-b+1)(q-1)q^{m-a-2}$. \item For $q\geq7$, $0\leq a\leq m-2$ and $\lceil\frac{q}{2}+2\rceil\leq b\leq q-1$ or $q\geq 4$, $0\leq a\leq m-2$ and $b=2$ or $q\geq 4$, $a=m-2$ and $b=3$ or $q=3$, $a\in\{0,m-2\}$ and $b=2$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^a(1-x_i^{q-1})\prod_{j=1}^{b-1}(x_{a+1}-b_j)$$ with $b_j\in\mathbb{F}_q$, $b_j\neq b_k$ for $j\neq k$. Then, $f\in R_q(a(q-1)+b,m)$ and $|f|=(q-b+1)q^{m-a-1}>(q-b+1)(q-1)q^{m-a-2}.$ \item For $q\geq4$, $m\geq3$, $0\leq a\leq m-3$ and $b=3$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^a(1-x_i^{q-1})(x_{a+1}-u)(x_{a+2}-v)(x_{a+3}-w)$$ with $u$, $v$, $w\in\mathbb{F}_q$. Then, $f\in R_q(a(q-1)+3,m)$ and $|f|=(q-1)^3q^{m-a-3}>(q-2)(q-1)q^{m-a-2}$. \item For $q=3$, $m\geq4$, $1\leq a\leq m-3$ and $b=2$, define for $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^{a-1}(1-x_i^{2})\prod_{j=1}^4(x_{a-1+j}-u_j)$$ with $u_j\in\mathbb{F}_q$. Then, $f\in R_3(2(a+1),m)$ and $|f|=16.3^{m-a-3}>4.3^{m-a-2}$. \end{itemize} \end{itemize}\end{preuve} \begin{remarque} We say that $\mathcal{B}$ is a hyperplane arrangement in $\mathcal{L}_d$ if there exist $k\in\mathbb{N}^*$ and $(d_1,\ldots,d_k)\in(\mathbb{N^*})^k$ such that $\sum_{i=1}^kd_i\leq d$ and $f_1,\ldots,f_k$ are $k$ independent linear forms over $\mathbb{F}_q^m$ such that $\mathcal{B}$ is composed of $k$ blocks of $d_i$ parallel hyperplanes of equation $f_i(x)=u_{i,j}$, where $1\leq i\leq k$, $1\leq j\leq d_i$, $u_{i,j}\in\mathbb{F}_q$ and if $j\neq k$, $u_{i,j}\neq u_{i,k}$. The upper bounds given in the theorem above are the third weights among hyperplane arrangements in $\mathcal{L}_d$. The proof of this result is given in the Appendix.\end{remarque} \section{Third weight}\label{poids3} \subsection{The case where $a=0$} We denote by $c_b$ the third weight of $R_q(b,2)$, for $2\leq b\leq q-1$.
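The hyperplane-arrangement constructions in the proof of Theorem \ref{3hyp} are easy to check by brute force in small cases; the sketch below (our own parameter choice: $q=5$, $m=2$, $a=0$, $b=4$) verifies the weight $(q-2)(q-b+2)q^{m-a-2}=9$ of the first construction.

```python
import itertools

# Brute-force weight of f(x, y) = x(x-1) * y(y-1) over F_5^2: a product of
# b - 2 = 2 distinct linear factors in x and two distinct linear factors in y.
q = 5

def weight(f):
    return sum(1 for x, y in itertools.product(range(q), repeat=2) if f(x, y) % q != 0)

f = lambda x, y: x * (x - 1) * y * (y - 1)
assert weight(f) == (q - 2) * (q - 4 + 2)  # = (q-2)(q-b+2)q^{m-a-2} = 9
```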
From Theorem \ref{3hyp}, we get that $$c_b\leq \left\{\begin{array}{ll}(q-2)(q-b+2)&\textrm{ for $q\geq 5$, $4\leq b \leq \frac{q+3}{2}$}\\&\\ (q-b+1)q &\textrm{$\begin{array}{l}\textrm{for $q\geq 7$ and $\frac{q}{2}+2 \leq b\leq q-1$ or $q\geq 3$ and $b=2$}\\ \textrm{or $q\geq 4$ and $b=3$}\end{array}$}\end{array}\right.$$ \begin{lemme}\label{t=0}Let $m\geq 2$, $q\geq 3$, $4\leq b\leq q-1$ and $f\in R_q(b,m)$. Assume $c_b<(q-b+1)q$. If $|f|>(q-b+1)(q-1)q^{m-2}$ then $|f|\geq c_bq^{m-2}$.\end{lemme} \begin{preuve}We prove this result by induction on $m$. For $m=2$, it is the definition of $c_b$.\\ Let $m\geq 3$. Assume that if $f\in R_q(b,m-1)$ satisfies $|f|>(q-b+1)(q-1)q^{m-3}$, then $|f|\geq c_bq^{m-3}$.\\ Let $f\in R_q(b,m)$ be such that $|f|>(q-b+1)(q-1)q^{m-2}$. Assume $|f|<c_bq^{m-2}$. We denote by $S$ the support of $f$. Assume $S$ meets all affine hyperplanes. Then, for every hyperplane $H$, $\#(S\cap H)\geq (q-b)q^{m-2}$. Assume there exists $H_1$ such that $\#(S\cap H_1)=(q-b)q^{m-2}$. By applying an affine transformation, we can assume that $x_1=\lambda$, $\lambda\in\mathbb{F}_q$, is an equation of $H_1$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq|f_{\lambda_2}|\leq\ldots\leq |f_{\lambda_q}|$. Then, $f_{\lambda_1}$ is a minimal weight codeword of $R_q(b,m-1)$. So, by applying an affine transformation, we can assume $f_{\lambda_1}$ depends only on $x_2$. Let $k\geq 1$ be such that for all $i\leq k$, $f_{\lambda_i}$ depends only on $x_2$ and $f_{\lambda_{k+1}}$ does not depend only on $x_2$. If $k>b$, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\sum_{i=0}^bf_{\lambda_{i+1}}^{(i)}(x_2,\ldots,x_m)\prod_{1\leq j\leq i}(x_1-\lambda_j)\qquad \textrm{(see \cite{erickson1974counting})}.$$ Since for $i \leq b+1$, $f_{\lambda_i}$ depends only on $x_2$, $f$ depends only on $x_1$ and $x_2$, which is a contradiction by the case $m=2$.
So $k\leq b$ and we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=g(x_1,x_2)+\prod_{i=1}^k(x_1-\lambda_i)h(x)$$ where $\deg(h)\leq b-k$. Then, for all $x_2\in\mathbb{F}_q$ and all $\underline{x}\in\mathbb{F}_q^{m-2}$, $$f_{\lambda_{k+1}}(x_2,\underline{x})=g_{\lambda_{k+1}}(x_2)+\alpha.h(x_2,\underline{x})$$ where $\alpha\in\mathbb{F}_q^*$. So, by Lemma \ref{3.9}, since $f_{\lambda_{k+1}}$ does not depend only on $x_2$, $|f_{\lambda_{k+1}}|\geq(q-b+k)q^{m-2}$. We get \begin{align*}|f|&\geq k(q-b)q^{m-2}+(q-k)(q-b+k)q^{m-2}\\&=(q-b)q^{m-1}+(q-k)kq^{m-2}\\|f|&\geq(q-b)q^{m-1}+(q-1)q^{m-2}\end{align*} Since $c_b<(q-b+1)q$ and $|f|<c_bq^{m-2}$, we get a contradiction.\\ Then, for all $H$ hyperplane, $\#(S\cap H)\geq (q-1)(q-b+1)q^{m-3}$. By induction hypothesis, since $|f|<c_bq ^{m-2}$ there exists an affine hyperplane $H_2$ such that $\#(S\cap H_2)=(q-1)(q-b+1)q^{m-3}$. So, there exists $A$ an affine subspace of codimension 2 included in $H_2$ which does not meet $S$ (see \cite{raey}). Then, considering all affine hyperplanes through $A$, we must have $$(q+1)(q-1)(q-b+1)q^{m-3}<c_bq^{m-2}$$ which gives, since $c_b\leq(q-b+1)q-1$, $q<q-b+1$. We get a contradiction since $b\geq 4$. \\ So there exists $H_0$ an hyperplane which does not meet $S$. We denote by $n$ the number of hyperplanes parallel to $H_0$ which do not meet $S$. By Lemma \ref{2.7}, since $c_b\leq (q-b+2)(q-2)$, we get that $n=b$, $n=b-1$ or $n=1$. By applying an affine transformation, we can assume $x_1=\lambda_1$, $\lambda_1\in\mathbb{F}_q$ is an equation of $H_0$.\\ If $n=b$, then for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, we have $$f(x)=\prod_{i=1}^b(x_1-\lambda_i)$$ with $\lambda_i\in\mathbb{F}_q$ and for $i\neq j$, $\lambda_i\neq \lambda_j$. 
In this case, $f$ is a minimum weight codeword of $R_q(b,m)$ which is absurd.\\ If $n=b-1$, then for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, we have $$f(x)=\prod_{i=1}^{b-1}(x_1-\lambda_i)g(x)$$ with $\lambda_i\in\mathbb{F}_q$, for $i\neq j$, $\lambda_i\neq \lambda_j$ and $g\in R_q(1,m)$. If $\deg(g)=0$ then $f$ is a minimum weight codeword of $R_q(b-1,m)$. If $\deg(g)=1$ then $f$ is a second weight codeword of $R_q(b,m)$. Both cases give a contradiction.\\ If $n=1$ then for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, we have $$f(x)=(x_1-\lambda_1)g(x)$$ with $g\in R_q(b-1,m)$. Then, for $i\geq 2$, $\deg(f_{\lambda_i})\leq (b-1)$, so, $|f_{\lambda_i}|\geq (q-b+1)q^{m-2}$. We denote by $N=\#\{i:|f_{\lambda_i}|= (q-b+1)q^{m-2}\}$. Then, $$N(q-b+1)q^{m-2}+(q-1-N)(q-b+2)(q-1)q^{m-3}<c_bq^{m-2}$$ which gives $N>\frac{(q-1)^2(q-b+2)-c_bq}{b-2}\geq0$ so $N\geq1$. Denote by $H_1$ an hyperplane such that $\#(S\cap H_1)=(q-b+1)q^{m-2}$. Then, $S\cap H_1$ is the support of a minimal weight codeword of $R_q(b-1,m-1)$ so it is the union of $(q-b+1)$ parallel affine subspaces of codimension 2 included in $H_1$. Now, consider $P$ an affine subspace of codimension 2 included in $H_1$ such that $\#(S\cap P)=(q-b+1)q^{m-3}$. Then, for all $H$ hyperplane through $P$, $\#(S \cap H)\geq (q-b+1)(q-1)q^{m-3}$. Indeed, by definition of $P$, $S$ meets all hyperplanes through $P$, so, for all $H$ hyperplane through $P$, $\#(S\cap H)\geq (q-b)q^{m-2}$. If $\#(S\cap H)= (q-b)q^{m-2}$, then $S\cap H$ is the union of $(q-b)$ parallel affine subspaces of codimension 2 which is absurd since it intersects $P$ in $(q-b+1)$ affine subspaces of codimension 3. We can apply the same argument to all affine subspaces of codimension 2 included in $H_1$ parallel to $P$. Now consider an hyperplane through $P$ and the $q$ hyperplanes parallel to this hyperplane, since $|f|<c_bq^{m-2}$, one of these hyperplanes, say $H_2$, meets $S$ in $(q-b+1)(q-1)q^{m-3}$ points. 
We denote by $(A_i)_{1\leq i\leq b}$ the $b$ affine subspaces of codimension 2 included in $H_2$ which do not meet $S$. Let $1\leq i\leq b$, suppose that $S$ meets all hyperplanes through $A_i$ and let $H$ be one hyperplane through $A_i$. If all hyperplanes parallel to $H$ meet $S$ then as in the beginning of the proof of this lemma, we get $\#(S\cap H)\geq(q-1)(q-b+1)q^{m-3}$. If there exists an hyperplane parallel to $H$ which does not meet $S$ then $\#(S\cap H)\geq(q-b+1)q^{m-2}$. In both cases we get a contradiction since $(q+1)(q-b+1)(q-1)q^{m-3}\geq c_bq^{m-2}$. So, for all $1\leq i\leq b$ there exists an hyperplane through $A_i$ which does not meet $S$. Then at least $b-1$ of the hyperplanes through the $(A_i)$ which do not meet $S$ must intersect $H_1$, we get that $|f|=(q-1)(q-b+1)q^{m-2}$ (see \cite{raey}) which is absurd. \begin{figure}[!h] \caption{} \begin{center}\subfloat[]{\label{fig1a} \begin{tikzpicture}[scale=0.17] \draw (0,0)--(5,2)--(5,11)--(0,9)--cycle; \draw (1,2/5)--(1,9+2/5); \draw (2,4/5)--(2,9+4/5); \draw[dotted] (3,6/5)--(3,9+6/5); \draw[dotted] (4,8/5)--(4,9+8/5); \draw (0,9) node[above left]{$H_1$}; \draw[dashed](0,4)--++(5,2); \draw (0,4)--(12,4)--++(5,2)--++(-12,0); \draw (17,6) node[above right]{$H_2$}; \draw (1,4+2/5)--++(12,0); \draw (2, 4+4/5)--++(12,0); \draw[dotted] (3,4+6/5)--++(12,0); \draw[dotted] (4,4+8/5)--++(12,0); \draw (4,8/5)--++(12,0)--++(0,9)--++(-12,0); \draw[dashed] (4,4+8/5)--(11,4); \draw (0,4) node[left]{$P$}; \end{tikzpicture}} \hspace{3cm} \subfloat[]{\label{fig1b} \begin{tikzpicture}[scale=0.17] \draw (0,0)--(5,2)--(5,11)--(0,9)--cycle; \draw (1,2/5)--(1,9+2/5); \draw (2,4/5)--(2,9+4/5); \draw[dotted] (3,6/5)--(3,9+6/5); \draw[dotted] (4,8/5)--(4,9+8/5); \draw (0,9) node[above left]{$H_1$}; \draw[dashed](0,4)--++(5,2); \draw (0,4)--(12,4)--++(5,2)--++(-12,0); \draw (17,6) node[above right]{$H_{2}$}; \draw (2,4+4/5)--++(12,0); \draw (1,4+2/5)--(15,4+6/5); \draw[dotted] (3,4+6/5)--(13,4+2/5); \draw[dotted] 
(4,4+8/5)--(12,4); \draw[dotted] (6,4)--(11,6); \draw (6,0)--++(5,2)--++(0,9)--++(-5,-2)--cycle; \draw (0,4) node[left]{$P$}; \end{tikzpicture}}\end{center} \label{fig1} \end{figure} \end{preuve} \begin{lemme}Let $m\geq2$, $q\geq 3$ and $f\in R_q(2,m)$. If $|f|>(q-1)^2q^{m-2}$ then $|f|\geq (q^2-q-1)q^{m-2}$.\end{lemme} \begin{preuve}Let $f\in R_q(2,m)$ such that $|f|>(q-1)^2q^{m-2}$. If $\deg(f)\leq 1$ then $|f|\geq(q-1)q^{m-1}$. From now, assume that $\deg(f)=2$.\\ First we recall some results on quadratic forms. These results can be found in \cite{hirschfeld_proj_geom} for example. If $Q$ is a quadratic form of rank $R$ on $\mathbb{F}_q^m$ then, there exists a linear transformation such that for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, \\if $R=2r+1$ \begin{equation}\label{eq1}Q(x)=\sum_{i=1}^rx_{2i-1}x_{2i}+ax_{2r+1}^2\end{equation} or if $R=2r$ \begin{equation}\label{eq2}Q(x)=\sum_{i=1}^rx_{2i-1}x_{2i}\end{equation} or \begin{equation}\label{eq3}Q(x)=\sum_{i=1}^{r-1}x_{2i-1}x_{2i}+ax_{2r-1}^2+bx_{2r-1}x_{2r}+cx_{2r}^2\end{equation} where $ax^2+bx+c$ is irreducible over $\mathbb{F}_q$. Then the number $N(Q)$ of zeros of $Q$ is $$N(Q)=q^{m-1}+(w-1)(q-1)q^{m-\frac{R}{2}-1}$$ where $$w=\left\{\begin{array}{ll}1&\textrm{if $R$ is odd}\\2&\textrm{if $R$ is even and $Q$ is of type \eqref{eq2}}\\0&\textrm{if $R$ is even and $Q$ is of type \eqref{eq3}}\end{array}\right..$$ We write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $f(x)=q_0(x)+\alpha_1x_1+\ldots+ \alpha_mx_m+\beta$ where $q_0$ is a quadratic form of rank $r$ and $w_0$ is defined as above. Then the number of zeros of $f$ is the number of affine zeros of the homogenized form $Q(x)=q_0(x)+\alpha_1x_1z+\ldots+ \alpha_mx_mz+\beta z^2$. We denote by $R$ the rank of $Q$ and $w$ is defined as above. Then, using the formula above, $$|f|=(q-1)q^{m-1}+(w_0-1)q^{m-\frac{r}{2}-1}-(w-1)q^{m-\frac{R}{2}}.$$
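As a quick numerical illustration of the zero-count formula just recalled (not part of the argument), one can count zeros by brute force over a small field. The parameters $q=5$, $m=3$ and the three sample forms below are arbitrary choices; note that $2$ is a non-square modulo $5$, so $x_1^2-2x_2^2$ is of the irreducible type \eqref{eq3}.

```python
# Numerical illustration of N(Q) = q^(m-1) + (w-1)(q-1) q^(m-R/2-1)
# for quadratic forms on F_q^m.  Arbitrary small parameters: q = 5, m = 3.
from itertools import product

q, m = 5, 3

def count_zeros(Q):
    # Brute-force count of the zeros of Q over F_q^m.
    return sum(1 for x in product(range(q), repeat=m) if Q(x) % q == 0)

N_hyperbolic = count_zeros(lambda x: x[0] * x[1])            # R = 2, w = 2
N_elliptic   = count_zeros(lambda x: x[0]**2 - 2 * x[1]**2)  # R = 2, w = 0
N_odd_rank   = count_zeros(lambda x: x[0]**2)                # R = 1, w = 1
```

The three counts are $q^{m-1}+(q-1)q^{m-2}=45$, $q^{m-1}-(q-1)q^{m-2}=5$ and $q^{m-1}=25$, as the formula predicts.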
By applying an affine transformation (see \cite{6362214}), we get that: \begin{itemize} \item If $r$ is odd then, either $R=r$, $w=1$ and $|f|=(q-1)q^{m-1}$ or $R=r+1$, $w=2$ and $|f|=(q-1)q^{m-1}-q^{m-\frac{r+1}{2}}$. \item If $r$ is even and $w_0=2$ then, $R=r+2$, $w=2$ and $|f|=(q-1)q^{m-1}$, $R=r$, $w=2$ and $|f|=(q-1)(q^{m-1}-q^{m-1-\frac{r}{2}})$ or $R=r+1$, $w=1$ and $|f|=(q-1)q^{m-1}+q^{m-1-\frac{r}{2}}$. \item If $r$ is even and $w_0=0$ then, $R=r+2$, $w=0$ and $|f|=(q-1)q^{m-1}$, $R=r+1$, $w=1$ and $|f|=(q-1)q^{m-1}-q^{m-1-\frac{r}{2}}$ or $R=r$, $w=0$ and $|f|=(q-1)(q^{m-1}+q^{m-1-\frac{r}{2}})$. \end{itemize} Finally, the third weight of $R_q(2,m)$ is $(q^2-q-1)q^{m-2}$. \end{preuve} \begin{lemme}For $q\geq4$, $c_3=q^2-3q+3$. Furthermore, for $q\geq 7$, if $f\in R_q(3,2)$ is such that $|f|=q^2-3q+3$ then up to affine transformation for all $(x,y)\in\mathbb{F}_q^2$, $$f(x,y)=(a_1x+b_1y)(a_2x+b_2y)(a_3x+b_3y+c)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$ such that for $i\neq j$, $a_ib_j-a_jb_i\neq0$ and $c\in\mathbb{F}_q^*$.\end{lemme} \begin{preuve}The second weight in this case is $(q-2)(q-1)=q^2-3q+2$. So we only have to find a codeword of $R_q(3,2)$ such that its weight is $q^2-3q+3$ to prove the first part of this lemma. Consider $3$ lines which meet pairwise but do not intersect in one point. Then the union of these 3 lines has $3q-3$ points. Let $a_1x+b_1y+c_1=0$, $a_2x+b_2y+c_2=0$ and $a_3x+b_3y+c_3=0$ be the equations of these 3 lines; then $f(x,y)=\displaystyle\prod_{i=1}^3(a_ix+b_iy+c_i)\in R_q(3,2)$ and $|f|=q^2-3q+3$. Let $f\in R_q(3,2)$ such that $|f|=q^2-3q+3$. Denote by $S$ the support of $f$. For $q\geq4$, $q^2-3q+3<(q-2)q$. Since $(q-2)q$ is the minimum weight of $R_q(2,2)$, $\deg(f)=3$. We prove first that for $q\geq7$, $f$ is the product of $3$ affine factors. Let $P$ be a point of $\mathbb{F}_q^2$ which is not in $S$ and $L$ be a line in $\mathbb{F}_q^2$ such that $P\in L$.
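The lemma on $R_q(2,m)$ above can also be checked exhaustively for $m=2$ and a small field; the sketch below (an illustration only, not part of the proof, with the arbitrary choice $q=5$) enumerates every codeword of $R_5(2,2)$ and confirms that the three smallest nonzero weights are $(q-2)q=15$, $(q-1)^2=16$ and $q^2-q-1=19$.

```python
# Exhaustive check of the bottom of the weight distribution of R_q(2,2)
# for q = 5: a codeword is c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2.
from itertools import product

q = 5
points = list(product(range(q), repeat=2))
weights = set()
for c in product(range(q), repeat=6):
    w = sum(1 for (x, y) in points
            if (c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y) % q)
    if w:  # skip the zero codeword
        weights.add(w)
smallest = sorted(weights)[:3]
```

The enumeration visits $5^6$ codewords; `smallest` comes out as `[15, 16, 19]`, in agreement with the lemma.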
Then, either $L$ does not meet $S$ or $L$ meets $S$ in at least $q-3$ points. If every line through $P$ meets $S$ then $$(q+1)(q-3)\leq|f|\leq q^2-3q+3$$ which is absurd since $q\geq7$. So there exists a line through $P$ which does not meet $S$. By applying the same argument to all $P$ not in $S$, we get that $f$ is the product of affine factors. Denote by $Z$ the set of zeros of $f$. We have just proved that this set is the union of 3 lines. If these 3 lines are parallel then we get a minimum weight codeword. If 2 of these lines are parallel or the 3 lines meet in one point, we have a second weight codeword. So the only possibility is the case where the 3 lines meet pairwise but do not intersect in one point, which gives the result.\end{preuve} \begin{lemme}\label{m=3}Let $q\geq 4$. If $f\in R_q(3,3)$ and $|f|>(q-1)(q-2)q$ then $|f|\geq (q-1)^3$.\end{lemme} \begin{preuve} Let $f\in R_q(3,3)$ such that $|f|>(q-2)(q-1)q$. Assume $|f|<(q-1)^3$. We denote by $S$ the support of $f$. Assume $S$ meets all hyperplanes. Then for all $H$ hyperplane, $\#(S\cap H)\geq(q-3)q$. Assume there exists $H_1$ an hyperplane such that $\#(S\cap H_1)=(q-3)q$. By applying an affine transformation, we can assume $x_1=0$ is an equation of $H_1$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. Since $f_{\lambda_1}$ is a minimum weight codeword of $R_q(3,2)$, by applying an affine transformation, we can assume it depends only on $x_2$. Let $k\geq1$ be such that for all $i\leq k$, $f_{\lambda_i}$ depends only on $x_2$ but $f_{\lambda_{k+1}}$ does not depend only on $x_2$. If $k\geq 3$ then we can write for all $(x_1,x_2,x_3)\in\mathbb{F}_q^3$, $$f(x_1,x_2,x_3)=g(x_1,x_2)+(x_1-\lambda_1)(x_1-\lambda_2)(x_1-\lambda_3)h(x_1,x_2,x_3)$$ where $\deg(h)\leq 3-3=0$ and $f$ depends only on $x_1$ and $x_2$. So, $|f|\equiv0\mod q$. But since $|f|>(q-1)(q-2)q$, $|f|\geq (q-1)(q-2)q+q\geq (q-1)^3$ which gives a contradiction. So, $k\leq 2$.
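The case analysis just completed (three parallel lines, two parallel or three concurrent lines, three pairwise-intersecting non-concurrent lines) can be illustrated by brute force over a small field. The sketch below is not part of the proof; the choice $q=7$ and the particular lines are arbitrary.

```python
# Weights of products of three distinct affine lines in F_q^2 for q = 7.
# Expected: parallel -> (q-3)q; two parallel, or concurrent -> (q-2)(q-1);
# pairwise intersecting, non-concurrent -> q^2 - 3q + 3 = c_3.
from itertools import product

q = 7
points = list(product(range(q), repeat=2))

def weight(lines):
    # each line is a triple (a, b, c) standing for the factor a*x + b*y + c
    w = 0
    for (x, y) in points:
        v = 1
        for (a, b, c) in lines:
            v = v * (a * x + b * y + c) % q
        if v:
            w += 1
    return w

w_parallel   = weight([(1, 0, 0), (1, 0, -1), (1, 0, -2)])  # x, x-1, x-2
w_two_par    = weight([(1, 0, 0), (1, 0, -1), (0, 1, 0)])   # x, x-1, y
w_concurrent = weight([(1, 0, 0), (0, 1, 0), (1, -1, 0)])   # x, y, x-y
w_generic    = weight([(1, 0, 0), (0, 1, 0), (1, 1, 1)])    # x, y, x+y+1
```

Only the last configuration reaches the third weight $q^2-3q+3=31$; the others realise the minimum weight $28$ and the second weight $30$.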
Since $f_{\lambda_1},\ldots,f_{\lambda_k}$ depend only on $x_2$, we can write for all $(x_1,x_2,x_3)\in\mathbb{F}_q^3$, $$f(x_1,x_2,x_3)=g(x_1,x_2)+(x_1-\lambda_1)\ldots(x_1-\lambda_k)h(x_1,x_2,x_3)$$ where $\deg(h)\leq 3-k$. Then, $$f_{\lambda_{k+1}}(x_2,x_3)=g_{\lambda_{k+1}}(x_2)+\alpha h_{\lambda_{k+1}}(x_2,x_3)$$ where $\alpha\in \mathbb{F}_q^*$. So by Lemma \ref{3.9}, since $f_{\lambda_{k+1}}$ does not depend only on $x_2$, $|f_{\lambda_{k+1}}|\geq(q-3+k)q$. We get $$|f|\geq k(q-3)q+(q-k)(q-3+k)q=(q-3)q^2+(q-k)kq\geq(q-3)q^2+(q-1)q\geq (q-1)^3$$ which is absurd since $q\geq4$. So for all $H$ hyperplane, $\#(S\cap H)\geq (q-1)(q-2)$. Considering $q$ parallel hyperplanes, since $((q-1)(q-2)+1)q\geq(q-1)^3$, there exists an hyperplane $H_0$ such that $\#(S\cap H_0)= (q-1)(q-2)$. So there exists $A$ an affine subspace of codimension 2 included in $H_0$ which does not meet $S$. Considering all hyperplanes through $A$, since $S$ meets all hyperplanes, we get $$(q+1)(q-1)(q-2)\leq |f|<(q-1)^3$$ which is absurd since $q\geq4$. So there exists $H_1$ an affine hyperplane which does not meet $S$. We denote by $n$ the number of hyperplanes parallel to $H_1$ which do not meet $S$. By applying an affine transformation, we can assume $x_1=\lambda_1$ is an equation of $H_1$. By Lemma \ref{DGMW1}, $n\leq 3$. If $n=3$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,x_2,x_3)\in\mathbb{F}_q^3$ $$f(x)=(x_1-\lambda_1)(x_1-\lambda_2)(x_1-\lambda_3)g(x)$$ where $\lambda_i\in\mathbb{F}_q$, $\lambda_i\neq\lambda_j$ for $i\neq j$, $\deg(g)\leq 0$. So, $|f|=(q-3)q^2$ which is absurd. If $n=2$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,x_2,x_3)\in\mathbb{F}_q^3$ $$f(x)=(x_1-\lambda_1)(x_1-\lambda_{2})g(x)$$ where $\lambda_2\in\mathbb{F}_q$, $\lambda_2\neq\lambda_1$, $\deg(g)\leq 1$. So, if $\deg(g)=0$, $|f|=(q-2)q^2$. If $\deg(g)=1$, $|f|=(q-2)(q-1)q$. Both cases give a contradiction.
If $n=1$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,x_2,x_3)\in\mathbb{F}_q^3$ $$f(x)=(x_1-\lambda_1)g(x)$$ where $\deg(g)\leq 2$. Then, for $i\geq 2$, $\deg(f_{\lambda_i})\leq 2$, so, $|f_{\lambda_i}|\geq (q-2)q$. We denote by $N=\#\{i:|f_{\lambda_i}|= (q-2)q\}$. Then, $$N(q-2)q+(q-1-N)(q-1)^2\leq|f|<(q-1)^3$$ so $N\geq1$. Denote by $H_2$ an hyperplane such that $\#(S\cap H_2)=(q-2)q$. Then, $S\cap H_2$ is the support of a minimal weight codeword of $R_q(2,2)$ so it is the union of $(q-2)$ parallel affine subspaces of codimension 2 included in $H_2$. Now, consider $P$ an affine subspace of codimension 2 included in $H_2$ such that $\#(S\cap P)=(q-2)$. Then, for all $H$ hyperplane through $P$, $\#(S \cap H)\geq (q-2)(q-1)$. Indeed, by definition of $P$, $S$ meets all hyperplanes through $P$, so, for all $H$ hyperplane through $P$, $\#(S\cap H)\geq (q-3)q$. If $\#(S\cap H)= (q-3)q$, then $S\cap H$ is the union of $(q-3)$ parallel affine subspaces of codimension 2 which is absurd since it intersects $P$ in $(q-2)$ affine subspaces of codimension 3. We can apply the same argument to all affine subspaces of codimension 2 included in $H_2$ parallel to $P$. Now consider an hyperplane through $P$ and the $q$ hyperplanes parallel to this hyperplane, since $|f|<(q-1)^3$, one of these hyperplanes, say $H_3$, meets $S$ in $(q-2)(q-1)$ points. We denote by $(A_i)_{1\leq i\leq 3}$ the $3$ affine subspaces of codimension 2 included in $H_3$ which do not meet $S$. Suppose that $S$ meets all hyperplanes through $A_i$ and consider $H$ one of them. If all hyperplanes parallel to $H$ meet $S$ then as in the beginning of the proof of this lemma, we get that $\#(S\cap H)\geq (q-1)(q-2)$. If there exists an hyperplane parallel to $H$ which does not meet $S$ then $\#(S\cap H)\geq (q-2)q$. In all cases we get a contradiction since $(q+1)(q-1)(q-2)\geq(q-1)^3$. 
So, for all $1\leq i\leq 3$ there exists an hyperplane through $A_i$ which does not meet $S$. Then at least $2$ of the hyperplanes through the $(A_i)$ which do not meet $S$ must intersect $H_2$, and we get that $|f|=(q-1)(q-2)q$ (see \cite{raey}) which is absurd.\end{preuve} \begin{lemme}\label{t=03}Let $q\geq 4$ and $m \geq3$. If $f\in R_q(3,m)$ and $|f|>(q-1)(q-2)q^{m-2}$ then, $|f|\geq(q-1)^3q^{m-3}$.\end{lemme} \begin{preuve}We prove this lemma by induction on $m$. The case where $m=3$ comes from Lemma \ref{m=3}. Assume for some $m\geq4$, if $f\in R_q(3,m-1)$ is such that $|f|>(q-1)(q-2)q^{m-3}$, then $|f|\geq (q-1)^3q^{m-4}$. Let $f\in R_q(3,m)$ such that $|f|>(q-2)(q-1)q^{m-2}$. Assume $|f|<(q-1)^3q^{m-3}$. We denote by $S$ the support of $f$. Assume $S$ meets all hyperplanes. Then for all $H$ hyperplane, $\#(S\cap H)\geq(q-3)q^{m-2}$. Assume there exists an hyperplane $H_1$ such that $\#(S\cap H_1)=(q-3)q^{m-2}$. By applying an affine transformation, we can assume $x_1=0$ is an equation of $H_1$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. Since $f_{\lambda_1}$ is a minimum weight codeword of $R_q(3,m-1)$, by applying an affine transformation, we can assume it depends only on $x_2$. Let $k\geq1$ be such that for all $i\leq k$, $f_{\lambda_i}$ depends only on $x_2$ but $f_{\lambda_{k+1}}$ does not depend only on $x_2$. If $k\geq 3$ then we can write for all $x_1$, $x_2\in\mathbb{F}_q$ and $\underline{x}\in\mathbb{F}_q^{m-2}$, $$f(x_1,x_2,\underline{x})=g(x_1,x_2)+(x_1-\lambda_1)\ldots(x_1-\lambda_3)h(x_1,x_2,\underline{x})$$ where $\deg(h)\leq 0$. So, $f$ depends only on $x_1$ and $x_2$ and $|f|\equiv 0\mod q^{m-2}$. Since $|f|>(q-1)(q-2)q^{m-2}$, $|f|\geq(q-1)(q-2)q^{m-2}+q^{m-2}\geq(q-1)^3q^{m-3}$ which gives a contradiction. So, $k\leq 2$.
Since $f_{\lambda_1},\ldots,f_{\lambda_k}$ depend only on $x_2$ we can write for all $x_1$, $x_2\in\mathbb{F}_q$ and $\underline{x}\in\mathbb{F}_q^{m-2}$, $$f(x_1,x_2,\underline{x})=g(x_1,x_2)+(x_1-\lambda_1)\ldots(x_1-\lambda_k)h(x_1,x_2,\underline{x})$$ where $\deg(h)\leq 3-k$. Then $$f_{\lambda_{k+1}}(x_2,\underline{x})=g_{\lambda_{k+1}}(x_2)+\alpha h_{\lambda_{k+1}}(x_2,\underline{x})$$ where $\alpha\in \mathbb{F}_q^*$. So by Lemma \ref{3.9}, since $f_{\lambda_{k+1}}$ does not depend only on $x_2$, $|f_{\lambda_{k+1}}|\geq(q-3+k)q^{m-2}$. We get $$|f|\geq k(q-3)q^{m-2}+(q-k)(q-3+k)q^{m-2}=(q-3)q^{m-1}+(q-k)kq^{m-2}.$$ Since $|f|<(q-1)^3q^{m-3}$, this is absurd. So for all $H$ hyperplane, $\#(S\cap H)\geq (q-1)(q-2)q^{m-3}$. By induction hypothesis, considering $q$ parallel hyperplanes, there exists an hyperplane $H_0$ such that $\#(S\cap H_0)= (q-1)(q-2)q^{m-3}$. So there exists $A$ an affine subspace of codimension 2 included in $H_0$ which does not meet $S$. Considering all hyperplanes through $A$, since $S$ meets all hyperplanes, we get $$(q+1)(q-1)(q-2)q^{m-3}\leq|f|<(q-1)^3q^{m-3}$$ which is absurd. So there exists an affine hyperplane $H_1$ which does not meet $S$. We denote by $n$ the number of hyperplanes parallel to $H_1$ which do not meet $S$. By applying an affine transformation, we can assume $x_1=\lambda_1$ is an equation of $H_1$. By Lemma \ref{DGMW1}, $n\leq3$. If $n=3$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=(x_1-\lambda_1)(x_1-\lambda_2)(x_1-\lambda_3)g(x)$$ where $\lambda_i\in\mathbb{F}_q$, $\lambda_i\neq\lambda_j$ for $i\neq j$, $\deg(g)\leq 0$. So, $|f|=(q-3)q^{m-1}$ which is absurd. If $n=2$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=(x_1-\lambda_1)(x_1-\lambda_{2})g(x)$$ where $\lambda_2\in\mathbb{F}_q$, $\lambda_2\neq\lambda_1$, $\deg(g)\leq 1$. So, if $\deg(g)=0$, $|f|=(q-2)q^{m-1}$. If $\deg(g)=1$, $|f|=(q-2)(q-1)q^{m-2}$.
Both cases give a contradiction. If $n=1$, then by Lemma \ref{DGMW1}, we have for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=(x_1-\lambda_1)g(x)$$ where $\deg(g)\leq2$. Then, for $i\geq 2$, $\deg(f_{\lambda_i})\leq 2$, so, $|f_{\lambda_i}|\geq (q-2)q^{m-2}$. We denote by $N=\#\{i:|f_{\lambda_i}|= (q-2)q^{m-2}\}$. Then, $$N(q-2)q^{m-2}+(q-1-N)(q-1)^2q^{m-3}\leq|f|<(q-1)^3q^{m-3}$$ which gives $N\geq1$. Denote by $H_2$ an hyperplane such that $\#(S\cap H_2)=(q-2)q^{m-2}$. Then, $S\cap H_2$ is the support of a minimal weight codeword of $R_q(2,m-1)$ so it is the union of $(q-2)$ parallel affine subspaces of codimension 2 included in $H_2$. Now, consider $P$ an affine subspace of codimension 2 included in $H_2$ such that $\#(S\cap P)=(q-2)q^{m-3}$. Then, for all $H$ hyperplane through $P$, $\#(S \cap H)\geq (q-2)(q-1)q^{m-3}$. Indeed, by definition of $P$, $S$ meets all hyperplanes through $P$, so, for all $H$ hyperplane through $P$, $\#(S\cap H)\geq (q-3)q^{m-2}$. If $\#(S\cap H)= (q-3)q^{m-2}$, then $S\cap H$ is the union of $(q-3)$ parallel affine subspaces of codimension 2 which is absurd since it intersects $P$ in $(q-2)$ affine subspaces of codimension 3. We can apply the same argument to all affine subspaces of codimension 2 included in $H_2$ parallel to $P$. Now consider an hyperplane through $P$ and the $q$ hyperplanes parallel to this hyperplane; since $|f|<(q-1)^3q^{m-3}$, one of these hyperplanes, say $H_3$, meets $S$ in $(q-2)(q-1)q^{m-3}$ points. We denote by $(A_i)_{1\leq i\leq 3}$ the $3$ affine subspaces of codimension 2 included in $H_3$ which do not meet $S$. Suppose that $S$ meets all hyperplanes through $A_i$ and consider $H$ one of them. If all hyperplanes parallel to $H$ meet $S$ then as in the beginning of the proof of this lemma, we get that $\#(S\cap H)\geq (q-1)(q-2)q^{m-3}$. If there exists an hyperplane parallel to $H$ which does not meet $S$ then $\#(S\cap H)\geq (q-2)q^{m-2}$.
In all cases we get a contradiction since $(q+1)(q-1)(q-2)q^{m-3}\geq(q-1)^3q^{m-3}$. So, for all $1\leq i\leq 3$ there exists an hyperplane through $A_i$ which does not meet $S$. Then at least $2$ of the hyperplanes through the $(A_i)$ which do not meet $S$ must intersect $H_2$, and we get that $|f|=(q-1)(q-2)q^{m-2}$ (see \cite{raey}) which is absurd.\end{preuve} \subsection{The case where $a$ is maximal} \begin{lemme}\label{m-2}Let $m\geq 4$, $q\geq5$ and $2\leq b\leq q-2$. Assume $c_b<(q-b+1)q$. If $f\in R_q((m-2)(q-1)+b,m)$ and $|f|>(q-1)(q-b+1)$, then $|f|\geq c_b$.\end{lemme} \begin{preuve}Let $f\in R_q((m-2)(q-1)+b,m)$ such that $|f|>(q-b+1)(q-1)$. Assume $|f|<c_b$ and denote by $S$ the support of $f$. Assume $S$ meets all affine hyperplanes. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. Then for all $H$ hyperplane, $\#(S\cap H)\geq q-b$ and since $(q-b+1)q>c_b$, $|f_{\lambda_1}|=(q-b)$. By applying an affine transformation, we can assume $(1-x_2^{q-1})$ divides $f_{\lambda_1}$. Let $1\leq k$ be such that for all $i\leq k$, $(1-x_2^{q-1})$ divides $f_{\lambda_i}$ and $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. Then, by Lemma \ref{3.7}, $|f|\geq (q-b)q+(q-1)$. We get a contradiction since $ (q-b)q+(q-1)\geq c_b$. So there exists an hyperplane $H_0$ which does not meet $S$. By applying an affine transformation, we can assume $x_1=\alpha$, $\alpha\in\mathbb{F}_q$, is an equation of $H_0$. We denote by $n$ the number of hyperplanes parallel to $H_0$ which do not meet $S$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. If $n=q-1$ then we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_1^{q-1})g(x_2,\ldots,x_m)$$ where $g\in R_q((m-3)(q-1)+b,m-1)$ and $|f|=|g|$. So, $g$ fulfils the same conditions as $f$ with one variable less. Iterating this process, we end either in the case where $t=0$ (which is absurd by definition of $c_b$) or in the case where $n<q-1$. From now, we assume $n<(q-1)$.
By Lemma \ref{2.7}, since for $b\geq 3$, $|f|<c_b\leq (q-b+2)(q-2)$ and for $b=2$, $q^2-q-1\leq 2(q-3)q$, the only possibilities for $b\geq 2$ are $n=1$, $n=b-1$ or $n=b$. We can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=\prod_{1\leq i\leq n}(x_1-\lambda_i)g(x)$$ where $g\in R_q((m-2)(q-1)+b-n,m)$. Then for all $i\geq n+1$, $f_{\lambda_i}\in R_q((m-2)(q-1)+b-n,m-1)$ and $|g_{\lambda_i}|=|f_{\lambda_i}|\geq (q-b+n)$. Assume $n=b$. For $\lambda\in\mathbb{F}_q$, if $|g_{\lambda}|>q$, then $|g_{\lambda}|\geq 2(q-1)$. Denote by $N:=\#\{i\geq b+1 :|g_{\lambda_i}|=q\}$. Since for $i\geq b+1$, $|f_{\lambda_i}|=|g_{\lambda_i}|$ and $(q-b)2(q-1)\geq(q-b+1)q$ for $b\leq q-2$, we get $N\geq1$. Furthermore, since $(q-b)q<(q-b+1)(q-1)<|f|$, $N\leq q-b-1$. Assume $|f_{\lambda_{b+N+1}}|\geq(N+1)q$. Then $$Nq+(q-b-N)(N+1)q\leq |f|<c_b$$ which gives $$Nq(q-N-b)< c_b-(q-b)q<q.$$ This is absurd since $1\leq N\leq q-b-1$. Furthermore the only possibility such that $|f_{\lambda_{b+N+1}}|=Nq$ is $N=1$ which is absurd since $f_{\lambda_{b+N+1}}$ is not a minimal weight codeword. By Lemma \ref{2.15.1}, for all $b+1\leq i\leq N+b$, $g_{\lambda_{b+1}}=g_{\lambda_i}$. So, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ \begin{align*}f(x)&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(g_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\\&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(\alpha f_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\end{align*} where $h\in R_q((m-2)(q-1)-N,m)$ and $\alpha\in\mathbb{F}_q^*$. Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{b+N+1}}(x_2,\ldots,x_m)=\beta f_{\lambda_{b+1}}(x_2,\ldots,x_m)+\gamma h_{\lambda_{b+N+1} }(x_2,\ldots,x_m).$$ We get a contradiction by Lemma \ref{2.14}. Now, assume $n=1$ or $n=b-1$. 
If $(1-x_2^{q-1})$ divides $f$, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_2^{q-1})h(x_1,x_3,\ldots,x_m)$$ where $h\in R_q((m-3)(q-1)+b,m-1)$ and $|f|=|h|$. So, $h$ fulfils the same conditions as $f$. Iterating this process, we end either in the case where $t=0$ which is absurd by definition of $c_b$ or in the case where $(1-x_2^{q-1})$ does not divide $h$. So we can assume $(1-x_2^{q-1})$ does not divide $f$. Since $n\geq 1$, $f_{\lambda_1}=0$. So, $1-x_2^{q-1}$ divides $f_{\lambda_1}$. Then, since $1-x_2^{q-1}$ does not divide $f$, there exists $k\in\{1,\ldots,q-1\}$ such that for all $i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$ and $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. For $\lambda\in\mathbb{F}_q$, if $|f_{\lambda}|>(q-b+n)$ then $$|f_{\lambda}|\geq w_2=\left\{\begin{array}{ll}q&\textrm{if $n=b-1$}\\(q-b+2)&\textrm{if $n=1$}\end{array}\right..$$ Denote by $N:=\#\{i\geq n+1 :|f_{\lambda_i}|=(q-b+n)\}$. In all cases, $(q-n)w_2\geq c_b.$ So, $N\geq1$. Furthermore $(q-b+n)(q-n)=(q-b+1)(q-1)<|f|$ so $N\leq q-n-1$. Then, $|f_{\lambda_{n+1}}|=(q-b+n)$ and $f_{\lambda_{n+1}}$ is a minimal weight codeword of $R_q((m-2)(q-1)+b-n,m-1)$ so, by applying an affine transformation, we can assume $1-x_2^{q-1}$ divides $f_{\lambda_{n+1}}$. Thus, $k\geq n+1\geq 2$. If $1\leq k\leq n+N-1$, then $|f_{\lambda_{k+1}}|=(q-b+n)<(q-b+k)$. If $n+N\leq k\leq q-1$, assume $|f_{\lambda_{k+1}}|\geq (q-b+k)$. Then, $$|f|\geq N(q-b+n)+(k-n-N)w_2+(q-k)(q-b+k)\geq (q-b+1)q-1$$ for $b\geq 4$, $n+N\leq k\leq q-1$ and $1\leq N\leq q-n-1$. So, we get a contradiction since $|f|<c_b<(q-b+1)q$. Since for all $n\leq i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$, it divides $g_{\lambda_i}$ too. 
Then we can write for all $x=(x_1,x_2,\ldots,x_m)\in\mathbb{F}_q^{m}$ \begin{align*}f(x)&=\prod_{1\leq i\leq n}(x_1-\lambda_i)\left(\prod_{n+1\leq i\leq k}(x_1-\lambda_i)h(x_1,x_2,x_3,\ldots,x_m)\right.\\&\hspace{2cm}+(1-x_2^{q-1})l(x_1,x_3,\ldots,x_m)\Bigg)\end{align*} with $\deg(h)\leq (m-2)(q-1)+b-k$. Then for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2, \ldots,x_m)=\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)+\beta(1-x_2^{q-1})l_{\lambda_{k+1}}(x_3,\ldots,x_m).$$ Thus, we get a contradiction by Lemma \ref{3.5} since $k\geq2$ and $|f_{\lambda_{k+1}}|<(q-b+k)$. \end{preuve} \begin{lemme}\label{m-3}Let $m\geq 4$, $q\geq 7$. If $f\in R_q((m-3)(q-1)+3,m)$ and $|f|>(q-1)(q-2)q$ then $|f|\geq (q-1)^3$.\end{lemme} \begin{preuve}Let $f\in R_q((m-3)(q-1)+3,m)$ such that $|f|>(q-2)(q-1)q$. Assume $|f|<(q-1)^3$ and denote by $S$ the support of $f$. Assume $S$ meets all affine hyperplanes. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. Then for all $H$ hyperplane, $\#(S\cap H)= (q-3)q$ or $\#(S\cap H)\geq (q-2)(q-1)$ and since $((q-2)(q-1)+1)q\geq (q-1)^3$, $|f_{\lambda_1}|\leq(q-2)(q-1)$. By applying an affine transformation, we can assume $(1-x_2^{q-1})$ divides $f_{\lambda_1}$. Let $1\leq k$ be such that for all $i\leq k$, $(1-x_2^{q-1})$ divides $f_{\lambda_i}$ but $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. Then, by Lemma \ref{3.7}, $|f|\geq (q-3)q^2+(q-1)q$. We get a contradiction since $ (q-3)q^2+(q-1)q\geq (q-1)^3$. So there exists an hyperplane $H_0$ which does not meet $S$. By applying an affine transformation, we can assume $x_1=\alpha$, $\alpha\in\mathbb{F}_q$, is an equation of $H_0$. We denote by $n$ the number of hyperplanes parallel to $H_0$ which do not meet $S$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$.
If $n=q-1$ then, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_1^{q-1})g(x_2,\ldots,x_m)$$ where $g\in R_q((m-4)(q-1)+3,m-1)$ and $|f|=|g|$. So, $g$ fulfils the same conditions as $f$ with one variable less. Iterating this process, we end either in the case where $t=0$ (which gives a contradiction) or in the case where $n<q-1$. From now, we assume $n<(q-1)$. By Lemma \ref{2.7}, since $(q-1)^3\leq 2(q-4)q^2$, the only possibilities are $n=1$, $n=2$ or $n=3$. We can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=\prod_{1\leq i\leq n}(x_1-\lambda_i)g(x)$$ where $g\in R_q((m-3)(q-1)+3-n,m)$. Then for all $i\geq n+1$, $f_{\lambda_i}\in R_q((m-3)(q-1)+3-n,m-1)$ and $|f_{\lambda_i}|=|g_{\lambda_i}|\geq (q-3+n)q$. Assume $n=3$. For $\lambda\in\mathbb{F}_q$, if $|g_{\lambda}|>q^2$, then $|g_{\lambda}|\geq 2(q-1)q$. Denote by $N:=\#\{i\geq 4 :|g_{\lambda_i}|=q^2\}$. Since for $i\geq 4$, $|f_{\lambda_i}|=|g_{\lambda_i}|$ and $(q-3)2(q-1)q\geq(q-1)^3$, $N\geq1$. Furthermore, since $(q-3)q^2<(q-2)(q-1)q$, $N\leq q-4$. Assume that $|f_{\lambda_{N+4}}|\geq(N+1)q^2$. Then $$Nq^2+(q-3-N)(N+1)q^2\leq |f|<(q-1)^3$$ which gives $$Nq(q-N-3)< 3q-1.$$ This gives a contradiction since $1\leq N\leq q-4$. Furthermore the only possibility such that $|f_{\lambda_{N+4}}|=Nq^2$ is $N=1$ which is absurd since $f_{\lambda_{N+4}}$ is not a minimal weight codeword. By Lemma \ref{2.15.1}, for all $4\leq i\leq N+3$, $g_{\lambda_{4}}=g_{\lambda_i}$. So, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ \begin{align*}f(x)&=\prod_{1\leq i\leq 3}(x_1-\lambda_i)\left(g_{\lambda_{4}}(x_2,\ldots,x_m)+ \prod_{4\leq i\leq N+3}(x_1-\lambda_i)h(x)\right)\\&=\prod_{1\leq i\leq 3}(x_1-\lambda_i)\left(\alpha f_{\lambda_{4}}(x_2,\ldots,x_m)+ \prod_{4\leq i\leq N+3}(x_1-\lambda_i)h(x)\right)\end{align*} where $h\in R_q((m-3)(q-1)-N,m)$ and $\alpha\in\mathbb{F}_q^*$.
Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{N+4}}(x_2,\ldots,x_m)=\beta f_{\lambda_{4}}(x_2,\ldots,x_m)+\gamma h_{\lambda_{N+4}}(x_2,\ldots,x_m).$$ We get a contradiction by Lemma \ref{2.14}. Now, assume $n=1$ or $n=2$. If $(1-x_2^{q-1})$ divides $f$, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_2^{q-1})h(x_1,x_3,\ldots,x_m)$$ where $h\in R_q((m-4)(q-1)+3,m-1)$ and $|f|=|h|$. So, $h$ fulfils the same conditions as $f$. Iterating this process, we end either in the case where $t=0$ which is absurd or in the case where $(1-x_2^{q-1})$ does not divide $h$. So we can assume $(1-x_2^{q-1})$ does not divide $f$. Since $n\geq 1$, $f_{\lambda_1}=0$. So, $1-x_2^{q-1}$ divides $f_{\lambda_1}$. Then, since $1-x_2^{q-1}$ does not divide $f$, there exists $k\in\{1,\ldots,q-1\}$ such that for all $i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$ and $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. For $\lambda\in\mathbb{F}_q$, if $|f_{\lambda}|>(q-3+n)q$ then $$|f_{\lambda}|\geq w_2=\left\{\begin{array}{ll}q^2&\textrm{if $n=2$}\\(q-1)^2&\textrm{if $n=1$}\end{array}\right..$$ Denote by $N:=\#\{i\geq n+1 :|f_{\lambda_i}|=(q-3+n)q\}$. In all cases, $(q-n)w_2\geq(q-1)^3.$ So, $N\geq1$. Then, $|f_{\lambda_{n+1}}|=(q-3+n)q$ and $f_{\lambda_{n+1}}$ is a minimal weight codeword of $R_q((m-3)(q-1)+3-n,m-1)$ so, by applying an affine transformation, we can assume $1-x_2^{q-1}$ divides $f_{\lambda_{n+1}}$. Thus, $k\geq n+1\geq 2$. If $1\leq k\leq n+N-1$, then $|f_{\lambda_{k+1}}|=(q-3+n)q<(q-3+k)q$. Otherwise, assume $|f_{\lambda_{k+1}}|\geq (q-3+k)q$. Then, $$|f|\geq N(q-3+n)q+(k-n-N)w_2+(q-k)(q-3+k)q.$$ In both cases, we get a contradiction since $|f|<(q-1)^3$ and $2\leq n+N\leq k\leq q-1$. Since for all $n\leq i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$, it divides $g_{\lambda_i}$ too. 
Then we can write for all $x=(x_1,x_2,\ldots,x_m)\in\mathbb{F}_q^{m}$ \begin{align*}f(x)&=\prod_{1\leq i\leq n}(x_1-\lambda_i)(\prod_{n+1\leq i\leq k}(x_1-\lambda_i)h(x_1,x_2,x_3,\ldots,x_m)\\&\hspace{2cm}+(1-x_2^{q-1})l(x_1,x_3,\ldots,x_m))\end{align*} with $\deg(h)\leq (m-3)(q-1)+3-k$. Then for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2, \ldots,x_m)=\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)+\beta(1-x_2^{q-1})l_{\lambda_{k+1}}(x_3,\ldots,x_m).$$ Thus, we get a contradiction by Lemma \ref{3.5} since $k\geq2$ and $|f_{\lambda_{k+1}}|<(q-3+k)q$.\end{preuve} \subsection{General case} \begin{proposition}\label{gene}Let $m\geq 2$, $q\geq 5$, $0\leq a\leq m-2$, $2\leq b\leq q-2$ and $f\in R_q(a(q-1)+b,m)$. Assume $c_b<(q-b+1)q$ and $b\neq 3$. If $|f|>(q-b+1)(q-1)q^{m-a-2}$ then $|f|\geq c_bq^{m-a-2}$.\end{proposition} \begin{proposition}\label{gene3}Let $m\geq 3$, $q\geq7$, $0\leq a\leq m-3$. If $f\in R_q(a(q-1)+3,m)$ and $|f|>(q-1)(q-2)q^{m-a-2}$ then $|f|\geq (q-1)^3q^{m-a-3}$.\end{proposition} We prove the two previous propositions at the same time. In order to simplify the notations, we set $$\widetilde{c}_b=\left\{\begin{array}{ll}c_b&\textrm{if $b\neq3$}\\(q-1)^3&\textrm{if $b=3$}\end{array}\right.$$ and $$m_0=\left\{\begin{array}{ll}2&\textrm{if $b\neq3$}\\3&\textrm{if $b=3$}\end{array}\right..$$ \begin{preuve} Lemmas \ref{t=0} and \ref{t=03} give the case where $a=0$. If $m=m_0$ we have considered all cases. Assume $m\geq m_0+1$ and $a\geq1$. We proceed by induction on $a$. The case where $a=m-m_0$ comes from Lemmas \ref{m-2} and \ref{m-3}. If $m=m_0+1$, we have considered all cases. So, from now we assume $m\geq m_0+2$. \\ Let $m-m_0-1\geq a\geq1$. Assume that if $f\in R_q((a+1)(q-1)+b,m)$ is such that $|f|>(q-1)(q-b+1)q^{m-a-3}$ then $|f|\geq \widetilde{c}_b q^{m-m_0-a-1}$. Let $f\in R_q(a(q-1)+b,m)$ such that $|f|>(q-b+1)(q-1)q^{m-a-2}$. Assume $|f|<\widetilde{c}_b q^{m-a-m_0}$ and denote by $S$ the support of $f$.
Assume $S$ meets all affine hyperplanes. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$. Since $|f|<\widetilde{c}_b q^{m-a-m_0}$, by the induction hypothesis, $f_{\lambda_1}$ is either a minimal weight codeword or a second weight codeword of $R_q(a(q-1)+b,m-1)$. In all cases, by applying an affine transformation, we can assume $(1-x_2^{q-1})$ divides $f_{\lambda_1}$. Let $1\leq k$ be such that for all $i\leq k$, $(1-x_2^{q-1})$ divides $f_{\lambda_i}$ but $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. Then, by Lemma \ref{3.7}, $$|f|\geq (q-b)q^{m-a-1}+k(q-k)q^{m-a-2}\geq(q-b)q^{m-a-1}+(q-1)q^{m-a-2}.$$ We get a contradiction since $(q-b)q^{m-a-1}+(q-1)q^{m-a-2}\geq\widetilde{c}_b q^{m-a-m_0}$. So there exists an hyperplane $H_0$ which does not meet $S$. By applying an affine transformation we can assume $x_1=\alpha$, $\alpha\in\mathbb{F}_q$, is an equation of $H_0$. We denote by $n$ the number of hyperplanes parallel to $H_0$ which do not meet $S$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq |f_{\lambda_q}|$. If $n=q-1$ then we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_1^{q-1})g(x_2,\ldots,x_m)$$ where $g\in R_q((a-1)(q-1)+b,m-1)$ and $|f|=|g|$. So, $g$ fulfils the same conditions as $f$. Iterating this process, we end either in the case where $a=0$ (which gives a contradiction by Lemmas \ref{t=0} and \ref{t=03}) or in the case where $n<q-1$. From now we assume $n<(q-1)$. By Lemma \ref{2.7}, since for $b\geq4$, $c_b\leq(q-2)(q-b+2)q^{m_0-2}$, for $b=2$, $(q^2-q-1)\leq 2(q-3)q$ and for $b=3$, $(q-1)^3\leq 2(q-4)q^2$, the only possibilities for $b\geq 2$ are $n=1$, $n=b-1$ or $n=b$. We can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ $$f(x)=\prod_{1\leq i\leq n}(x_1-\lambda_i)g(x)$$ where $g\in R_q(a(q-1)+b-n,m)$.
Then for all $i\geq n+1$, $f_{\lambda_i}\in R_q(a(q-1)+b-n,m-1)$ and $|f_{\lambda_i}|=|g_{\lambda_i}|\geq (q-b+n)q^{m-a-2}$. Assume $n=b$. For $\lambda\in\mathbb{F}_q$, if $|g_{\lambda}|>q^{m-a-1}$, then $|g_{\lambda}|\geq 2(q-1)q^{m-a-2}$. Denote by $N:=\#\{i\geq b+1 :|g_{\lambda_i}|=q^{m-a-1}\}$. Since for $i\geq b+1$, $|f_{\lambda_i}|=|g_{\lambda_i}|$ and $(q-b)2(q-1)q^{m-a-2}\geq \widetilde{c}_b q^{m-a-m_0}$, $N\geq1$. Furthermore, since $(q-b)q^{m-a-1}<(q-b+1)(q-1)q^{m-a-2}<|f|$, $N\leq q-b-1$. Assume $|f_{\lambda_{b+N+1}}|\geq(N+1)q^{m-a-1}$. Then $$Nq^{m-a-1}+(q-b-N)(N+1)q^{m-a-1}\leq |f|<\widetilde{c}_b q^{m-a-m_0}$$ which gives $$Nq^{m_0-1}(q-N-b)< \widetilde{c}_b -(q-b)q^{m_0-1}<q^{m_0-1}.$$ This gives a contradiction since $1\leq N\leq q-b-1$. Furthermore, the only possibility such that $|f_{\lambda_{b+N+1}}|=Nq^{m-a-1}$ is $N=1$ which is absurd since $f_{\lambda_{b+N+1}}$ is not a minimal weight codeword. By Lemma \ref{2.15.1}, for all $b+1\leq i\leq N+b$, $g_{\lambda_{b+1}}=g_{\lambda_i}$. So, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ \begin{align*}f(x)&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(g_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\\&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(\alpha f_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\end{align*} where $h\in R_q(a(q-1)-N,m)$ and $\alpha\in\mathbb{F}_q^*$. Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{b+N+1}}(x_2,\ldots,x_m)=\beta f_{\lambda_{b+1}}(x_2,\ldots,x_m)+\gamma h_{\lambda_{b+N+1} }(x_2,\ldots,x_m).$$ We get a contradiction by Lemma \ref{2.14}. Now, assume $n=1$ or $n=b-1$. If $(1-x_2^{q-1})$ divides $f$, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=(1-x_2^{q-1})g(x_1,x_3,\ldots,x_m)$$ where $g\in R_q((a-1)(q-1)+b,m-1)$ and $|f|=|g|$. So, $g$ fulfils the same conditions as $f$.
Iterating this process, we end up either in the case where $a=0$, which is impossible by Lemmas \ref{t=0} and \ref{t=03}, or in the case where $(1-x_2^{q-1})$ does not divide $g$. So we can assume that $(1-x_2^{q-1})$ does not divide $f$. Since $n\geq 1$, $f_{\lambda_1}=0$. So, $1-x_2^{q-1}$ divides $f_{\lambda_1}$. Then, since $1-x_2^{q-1}$ does not divide $f$, there exists $k\in\{1,\ldots,q-1\}$ such that for all $i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$ and $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. For $i\geq n+1$, if $|f_{\lambda_i}|>(q-b+n)q^{m-a-2}$ then $$|f_{\lambda_i}|\geq w_2=\left\{\begin{array}{ll}q^{m-a-1}&\textrm{if $n=b-1$}\\(q-b+2)(q-1)q^{m-a-3}&\textrm{if $n=1$}\end{array}\right..$$ Denote by $N:=\#\{i\geq n+1 :|f_{\lambda_i}|=(q-b+n)q^{m-a-2}\}$. If $4\leq b\leq \frac{q+3}{2}$, then $(q-n)w_2\geq(q-b+2)(q-2)q^{m-a-2}\geq c_bq^{m-a-2}$. If $b=2$ or $\frac{q}{2}+2\leq b\leq q-2$, $(q-n)w_2\geq((q-b+1)q-1)q^{m-a-2}$. If $b=3$, $(q-n)w_2\geq (q-1)^3q^{m-a-3}$. So, $N\geq1$. Since $(q-n)(q-b+n)q^{m-a-2}=(q-1)(q-b+1)q^{m-a-2}<|f|$, $N\leq q-n-1$. Then, $|f_{\lambda_{n+1}}|=(q-b+n)q^{m-a-2}$ and $f_{\lambda_{n+1}}$ is a minimal weight codeword of $R_q(a(q-1)+b-n,m-1)$ so, by applying an affine transformation, we can assume $1-x_2^{q-1}$ divides $f_{\lambda_{n+1}}$. Thus, $k\geq n+1\geq 2$. If $1\leq k\leq n+N-1$, then $|f_{\lambda_{k+1}}|=(q-b+n)q^{m-a-2}<(q-b+k)q^{m-a-2}$. If $n+N\leq k\leq q-1$, assume $|f_{\lambda_{k+1}}|\geq (q-b+k)q^{m-a-2}$. Then $$|f|\geq N(q-b+n)q^{m-a-2}+(k-n-N)w_2+(q-k)(q-b+k)q^{m-a-2}$$ which gives a contradiction since $|f|<\widetilde{c}_b q^{m-a-m_0}$, $c_b<(q-b+1)q$ and $1\leq N\leq q-n-1$. Since for all $n+1\leq i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$, it divides $g_{\lambda_i}$ too.
Then we can write, for all $x=(x_1,x_2,\ldots,x_m)\in\mathbb{F}_q^{m}$, \begin{align*}f(x)&=\prod_{1\leq i\leq n}(x_1-\lambda_i)\Big(\prod_{n+1\leq i\leq k}(x_1-\lambda_i)h(x_1,x_2,x_3,\ldots,x_m)\\&\hspace{2cm}+(1-x_2^{q-1})l(x_1,x_3,\ldots,x_m)\Big)\end{align*} with $\deg(h)\leq a(q-1)+b-k$ and $l\in R_q((a-1)(q-1)+b-n,m-1)$. Then for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2, \ldots,x_m)=\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)+\beta(1-x_2^{q-1})l_{\lambda_{k+1}}(x_3,\ldots,x_m).$$ Thus, we get a contradiction by Lemma \ref{3.5} since $k\geq2$ and $|f_{\lambda_{k+1}}|<(q-b+k)q^{m-a-2}$. \end{preuve} \begin{theoreme}\label{w3}Let $m\geq 2$, $q\geq 5$, $0\leq a\leq m-2$ and $2\leq b\leq q-2$. If $c_b<(q-b+1)q$, and either $b\neq3$, or $b=3$ and $a\in\{0,m-2\}$, then the third weight of $R_q(a(q-1)+b,m)$ is $W_3=c_bq^{m-a-2}$. \end{theoreme} \begin{theoreme}\label{w33}Let $m\geq 3$, $q\geq 5$, $0\leq a\leq m-3$. The third weight of $R_q(a(q-1)+3,m)$ is $W_3=(q-1)^3q^{m-a-3}$.\end{theoreme} \begin{preuve}For $m=m_0$, this is the definition of $c_b$ and Lemma \ref{m=3}. Let $g\in R_q(b,m_0)$ be such that $|g|=\widetilde{c}_b$. If $m\geq m_0+1$, by Propositions \ref{gene} and \ref{gene3} and Lemma \ref{m-2}, we have $W_3\geq \widetilde{c}_bq^{m-a-m_0}$. For $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, we define $$f(x)=\prod_{i=1}^a(1-x_i^{q-1})g(x_{a+1},\ldots,x_{a+m_0}).$$ Then $f\in R_q(a(q-1)+b,m)$ and $|f|=|g|q^{m-a-m_0}=\widetilde{c}_bq^{m-a-m_0}$, which proves both theorems. \end{preuve} \section{The case where $m=2$}\label{cb} \begin{theoreme}For $q\geq16$ and $6\leq b<\frac{q+4}{3}$, $c_b=(q-b+2)(q-2)$.
Furthermore, if $f\in R_q(b,2)$ is such that $|f|=(q-b+2)(q-2)$ then, up to affine transformation, for all $(x,y)\in\mathbb{F}_q^2$, $$f(x,y)=\prod_{i=1}^{b-2}(x-b_i)(y-c)(y-d)$$ where the $b_i\in \mathbb{F}_q$ are such that $b_i\neq b_j$ for $i\neq j$, $c\in\mathbb{F}_q$, $d\in\mathbb{F}_q$ and $c\neq d$, or $$ f(x,y)=\prod_{i=1}^{b-1}(a_i x+b_i y)(a_1x+b_1y+e)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$, $a_ib_j-a_jb_i\neq0$ for $i\neq j$, and $e\in\mathbb{F}_q^*$, or $$f(x,y)=\prod_{i=1}^{3}(a_i x+b_i y)\prod_{j=1}^{b-3}(a_1x+b_1y+e_j)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$, $a_ib_j-a_jb_i\neq0$ for $i\neq j$, $e_j\in\mathbb{F}_q^*$ and $e_i\neq e_j$ for $i\neq j$. \end{theoreme} \begin{preuve}Let $6\leq b<\frac{q+4}{3}$ and $f\in R_q(b,2)$ such that $|f|=c_b$, and denote by $S$ its support. From Section \ref{upper}, we know that in this case $c_b\leq (q-b+2)(q-2)<(q-b+1)q$. Since $(q-b+1)q$ is the minimum weight of $R_q(b-1,2)$, $\deg(f)=b$. We prove first that $f$ is the product of $b$ affine factors. Let $P$ be a point of $\mathbb{F}_q^2$ which is not in $S$ and let $L$ be a line in $\mathbb{F}_q^2$ such that $P\in L$. Then, either $L$ does not meet $S$ or $L$ meets $S$ in at least $q-b$ points. If every line through $P$ meets $S$ then $$(q+1)(q-b)\leq|f|\leq(q-b+2)(q-2)$$ which is absurd since $b< \frac{q+4}{3}$. So there exists a line through $P$ which does not meet $S$. By applying the same argument to all $P$ not in $S$, we get that $f$ is the product of affine factors. We have just proved that the set $Z$ of zeros of $f$ is the union of $b$ lines in $\mathbb{F}_q^2$.
We say that those lines are in configuration $A_b$ if the $b$ lines are parallel, in configuration $B_b$ if exactly $b-1$ lines are parallel, in configuration $C_b$ if the $b$ lines meet in a point, in configuration $D_b$ if $b-2$ lines are parallel and the 2 other lines are also parallel, in configuration $E_b$ if $b-2$ lines are parallel and the 2 other lines intersect in a point included in one of the parallel lines, and in configuration $F_b$ if $b-1$ lines intersect in a point and the $b$th line is parallel to one of the previous ones. We say that we are in configuration $G_b$ if we are in none of the previous configurations. \begin{figure}[h!] \begin{center}\subfloat[$D_6$]{\begin{tikzpicture}[scale=0.2] \draw (0,3)--(8,3); \draw (0,1)--(8,1); \draw (1,4)--(1,0); \draw (3,4)--(3,0); \draw (5,0)--(5,4); \draw (7,0)--(7,4); \end{tikzpicture}} \hspace{1cm} \subfloat[$E_6$]{\begin{tikzpicture}[scale=0.2] \draw (0,3)--(8,3); \draw (0,20/6)--(8,4/6); \draw (1,4)--(1,0); \draw (3,4)--(3,0); \draw (5,0)--(5,4); \draw (7,0)--(7,4); \end{tikzpicture}} \hspace{1cm} \subfloat[$F_6$]{\begin{tikzpicture}[scale=0.2] \draw (0,3)--(8,3); \draw (0,1)--(8,1); \draw (0,0)--(16/3,4); \draw (2,0)--(14/3,4); \draw (6,0)--(10/3,4); \draw (8,0)--(8/3,4); \end{tikzpicture}} \end{center}\end{figure} We prove by induction on $b$ that the set $Z$ of zeros of $f$ is of type $D_b$, $E_b$ or $F_b$. Since a codeword whose zero set is of one of these types has weight $(q-b+2)(q-2)$, by Lemma \ref{DGMW1} we get the result. For $b=6$, denote by $Z$ the set of zeros of $f$. We have just proved that $Z$ is the union of 6 lines in $\mathbb{F}_q^2$. If the 6 lines are parallel then $f$ is a minimum weight codeword of $R_q(6,2)$, which is absurd. If 5 of these lines are parallel, or the 6 lines intersect in a point, then $f$ is a second weight codeword of $R_q(6,2)$, which is absurd.
Assume 4 of these lines are parallel. If the 2 other lines are parallel, we are in configuration $D_6$; if 3 of these lines intersect in a point, we are in configuration $E_6$; otherwise $\#Z=6q-9<q^2-(q-4)(q-2)=6q-8$, which is absurd since $|f|\leq(q-4)(q-2)$. If 3 of these lines are parallel, then they intersect the 3 other lines, so $\#Z\leq 6q-9<6q-8$, which is absurd. If 2 of these lines are parallel and at least two of the other lines intersect in a point which is not included in one of the parallel lines, then $\#Z\leq6q-9$, which is absurd; so the only possibility in this case is configuration $F_6$. If all lines intersect pairwise then they cannot intersect in one point, so $\#Z\leq 6q-9$, which is absurd. This proves the result for $b=6$. Let $6\leq b<\frac{q+1}{2}$. Assume that if $f\in R_q(b,2)$ satisfies $|f|=c_b$, then its set of zeros is of type $D_b$, $E_b$ or $F_b$. Let $f\in R_q(b+1,2)$ be such that $|f|=c_{b+1}\leq(q-b+1)(q-2)=q^2-(bq+q-2b+2)$. Denote by $Z$ the set of zeros of $f$; it is the union of $b+1$ lines in $\mathbb{F}_q^2$. Suppose that $Z$ is of type $G_{b+1}$. We decompose $Z$ into a configuration of $b$ lines and a line. We have 6 possible cases: \begin{itemize} \item $Z$ is the union of a type $G_b$ configuration and a line. Since a configuration $G_b$ contains neither $b-1$ parallel lines nor $b$ lines which meet in the same point, the line meets the configuration $G_b$ in at least 2 points. Then, by the induction hypothesis, $\#Z<bq-2b+4+q-2=bq+q-2b+2$. \item $Z$ is the union of a type $F_b$ configuration and a line. Since $Z$ is a configuration $G_{b+1}$, the line cannot intersect the configuration $F_b$ in the point where $b-1$ lines of the configuration intersect. So, the line intersects the configuration $F_b$ in at least 3 points and $\#Z\leq bq-2b+4+q-3=bq+q-2b+1$. \item $Z$ is the union of a type $E_b$ configuration and a line.
Since $Z$ is a configuration $G_{b+1}$, the line cannot be parallel to the $b-2$ parallel lines of the configuration $E_b$. So, the line intersects the configuration $E_b$ in at least 3 points and $\#Z\leq bq-2b+4+q-3=bq+q-2b+1$. \item $Z$ is the union of a type $D_b$ configuration and a line. Since $Z$ is a configuration $G_{b+1}$, the line cannot be parallel to the $b-2$ parallel lines of the configuration $D_b$. So, the line intersects the configuration $D_b$ in at least 3 points and $\#Z\leq bq-2b+4+q-3=bq+q-2b+1$. \item $Z$ is the union of a type $C_b$ configuration and a line. Since $Z$ is a configuration $G_{b+1}$, the line can neither be parallel to one of the lines in the configuration $C_b$ nor intersect $C_b$ in the point where all the lines of $C_b$ intersect. So, the line intersects the configuration $C_b$ in at least $b$ points and $\#Z\leq bq-b+1+q-b=bq+q-2b+1$. \item $Z$ is the union of a type $B_b$ configuration and a line. Since $Z$ is a configuration $G_{b+1}$, the line can neither be parallel to one of the lines in the configuration $B_b$ nor intersect the configuration $B_b$ in a point included in 2 different lines. So, the line intersects the configuration $B_b$ in at least $b$ points and $\#Z\leq bq-b+1+q-b=bq+q-2b+1$. \end{itemize} Since $A_{b+1}$, $B_{b+1}$ and $C_{b+1}$ are minimal or second weight configurations, the zeros of $f$ are of type $D_{b+1}$, $E_{b+1}$ or $F_{b+1}$.\end{preuve} \begin{proposition}For $q\geq9$, $c_4=(q-2)^2$.
Furthermore, if $f\in R_q(4,2)$ is such that $|f|=(q-2)^2$ then, up to affine transformation, for all $(x,y)\in\mathbb{F}_q^2$, either $$f(x,y)=(x-a)(x-b)(y-c)(y-d)$$ where $a\in\mathbb{F}_q$, $b\in\mathbb{F}_q$, $c\in\mathbb{F}_q$, $d\in\mathbb{F}_q$ are such that $a\neq b$ and $c\neq d$, or $$ f(x,y)=\prod_{i=1}^{3}(a_i x+b_i y)(a_1x+b_1y+e)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$, $a_ib_j-a_jb_i\neq0$ for $i\neq j$, and $e\in\mathbb{F}_q^*$.\end{proposition} \begin{preuve}Let $f\in R_q(4,2)$ such that $|f|=c_4$ and denote by $S$ its support. From Section \ref{upper}, we know that in this case $c_4\leq (q-2)^2<(q-3)q$. Since $(q-3)q$ is the minimum weight of $R_q(3,2)$, $\deg(f)=4$. We prove first that $f$ is the product of $4$ affine factors. Let $P$ be a point of $\mathbb{F}_q^2$ which is not in $S$ and let $L$ be a line in $\mathbb{F}_q^2$ such that $P\in L$. Then, either $L$ does not meet $S$ or $L$ meets $S$ in at least $q-4$ points. If every line through $P$ meets $S$ then $$(q+1)(q-4)\leq|f|\leq(q-2)^2$$ which is absurd since $q\geq 9$. So there exists a line through $P$ which does not meet $S$. By applying the same argument to all $P$ not in $S$, we get that $f$ is the product of affine factors. Denote by $Z$ the set of zeros of $f$. We have just proved that $Z$ is the union of 4 lines in $\mathbb{F}_q^2$. If the 4 lines are parallel then $f$ is a minimum weight codeword of $R_q(4,2)$, which is absurd. If 3 of these lines are parallel, or the 4 lines intersect in a point, then $f$ is a second weight codeword of $R_q(4,2)$, which is absurd. Assume 2 of these lines are parallel. If the 2 other lines are parallel, we are in the first case of the proposition. If 3 of the 4 lines intersect in a point, we are in the second case of the proposition; otherwise $\#Z=4q-5<q^2-(q-2)^2=4q-4$, which is absurd since $|f|\leq(q-2)^2$. Assume all lines intersect pairwise. They cannot intersect in one point, so $\#Z\leq 4q-5$, which is absurd.
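Both cases of the proposition can be checked numerically. The following sketch (outside the proof; the prime $q=11$, the roots $0,1,2,5$, the linear forms $x$, $y$, $x+y$ and the translate $x+3$ are arbitrary admissible choices, not taken from the paper) confirms that both configurations give weight $(q-2)^2$:

```python
# Sanity check, assuming an arbitrary prime q = 11 (the proposition
# requires q >= 9) and arbitrary admissible parameters.
q = 11

# First case: f(x, y) = (x - a)(x - b)(y - c)(y - d) with a != b, c != d.
a, b, c, d = 0, 1, 2, 5
w1 = sum(1 for x in range(q) for y in range(q)
         if ((x - a) * (x - b) * (y - c) * (y - d)) % q != 0)

# Second case: three pairwise non-proportional linear forms x, y, x + y
# through the origin, together with the translate x + e of the first one.
e = 3
w2 = sum(1 for x in range(q) for y in range(q)
         if (x * y * (x + y) * (x + e)) % q != 0)

assert w1 == w2 == (q - 2) ** 2  # 81
```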
\end{preuve} \begin{proposition}For $q\geq13$, $c_5=(q-3)(q-2)$. Furthermore, if $f\in R_q(5,2)$ is such that $|f|=(q-3)(q-2)$ then, up to affine transformation, for all $(x,y)\in\mathbb{F}_q^2$, $$f(x,y)=\prod_{i=1}^{3}(x-b_i)(y-c)(y-d)$$ where the $b_i\in \mathbb{F}_q$ are such that $b_i\neq b_j$ for $i\neq j$, $c\in\mathbb{F}_q$, $d\in\mathbb{F}_q$ and $c\neq d$, or $$ f(x,y)=\prod_{i=1}^{4}(a_i x+b_i y)(a_1x+b_1y+e)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$, $a_ib_j-a_jb_i\neq0$ for $i\neq j$, and $e\in\mathbb{F}_q^*$, or $$f(x,y)=\prod_{i=1}^{3}(a_i x+b_i y)\prod_{j=1}^{2}(a_1x+b_1y+e_j)$$ where $(a_i,b_i)\in\mathbb{F}_q^2\setminus\{(0,0)\}$, $a_ib_j-a_jb_i\neq0$ for $i\neq j$, $e_j\in\mathbb{F}_q^*$ and $e_1\neq e_2$, or $$f(x,y)=(x-a)(x-b)(y-c)(y-d)\left((d-c)x+(a-b)y+bc-ad\right)$$ where $a\in\mathbb{F}_q$, $b\in\mathbb{F}_q$, $c\in\mathbb{F}_q$, $d\in\mathbb{F}_q$ are such that $a\neq b$ and $c\neq d$.\end{proposition} \begin{preuve}Let $f\in R_q(5,2)$ such that $|f|=c_5$ and denote by $S$ its support. From Section \ref{upper}, we know that in this case $c_5\leq (q-3)(q-2)<(q-4)q$. Since $(q-4)q$ is the minimum weight of $R_q(4,2)$, $\deg(f)=5$. We prove first that $f$ is the product of $5$ affine factors. Let $P$ be a point of $\mathbb{F}_q^2$ which is not in $S$ and let $L$ be a line in $\mathbb{F}_q^2$ such that $P\in L$. Then, either $L$ does not meet $S$ or $L$ meets $S$ in at least $q-5$ points. If every line through $P$ meets $S$ then $$(q+1)(q-5)\leq|f|\leq(q-3)(q-2)$$ which is absurd since $q\geq13$. So there exists a line through $P$ which does not meet $S$. By applying the same argument to all $P$ not in $S$, we get that $f$ is the product of affine factors. Denote by $Z$ the set of zeros of $f$. We have just proved that $Z$ is the union of 5 lines in $\mathbb{F}_q^2$. If the 5 lines are parallel then $f$ is a minimum weight codeword of $R_q(5,2)$, which is absurd.
If 4 of these lines are parallel, or the 5 lines intersect in one point, then $f$ is a second weight codeword of $R_q(5,2)$, which is absurd. Assume 3 of these lines are parallel. If the 2 other lines are parallel, we are in the first case of the proposition. If 3 of the 5 lines intersect in a point, we are in the second case of the proposition; otherwise $\#Z=5q-7<q^2-(q-3)(q-2)=5q-6$, which is absurd since $|f|\leq(q-3)(q-2)$. Assume 2 of the lines are parallel. If another pair of lines is parallel, then either we are in the last case of the proposition or the fifth line meets the four other lines in at least 3 points and $\#Z\leq 4q-4+q-3=5q-7$, which is absurd. If 4 of the 5 lines intersect in a point, we are in the third case of the proposition; otherwise $\#Z=5q-7<5q-6$, which is absurd. Assume all lines intersect pairwise. They cannot intersect in one point, so $\#Z\leq 5q-7$, which is absurd.\end{preuve} \section{Codewords reaching the third weight} \begin{proposition}\label{case1}Let $q\geq 5$, $m\geq 2$, $4\leq b\leq q-2$, $f\in R_q(b,m)$ and $g\in R_q(b,2)$ such that $|g|=c_b$. If $c_b<(q-b+1)q-1$ and $|f|=c_bq^{m-2}$ then, up to affine transformation, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $f(x)=g(x_1,x_2)$. \end{proposition} \begin{preuve}The case where $m=2$ comes from the definition of $c_b$. Assume $m\geq 3$. Let $f\in R_q(b,m)$ such that $4\leq b<q-1$ and $|f|=c_bq^{m-2}$. Assume $c_b<(q-b+1)q-1$. We denote by $S$ the support of $f$. Suppose first that $f$ has an affine factor. By applying an affine transformation, we can assume that $x_1$ divides $f$. Let $n=\#\{\lambda\in\mathbb{F}_q:(x_1-\lambda)\textrm{ divides }f\}$. Since $\deg(f)\leq b$ and $f$ is neither a minimal weight codeword nor a second weight codeword, $n\leq b-2$ (see the proof of Lemma \ref{t=0}). Furthermore, if $b\geq \frac{q+3}{2}$, then $(q-b+1)q-1\leq (q-b+2)(q-2)$ and, by Lemma \ref{2.7}, the only possibility for $n$ is 1.
If $b<\frac{q+3}{2}$, $c_b\leq (q-b+2)(q-2)$. So, by Lemma \ref{2.7}, the only possibilities for $n$ are 1, 2 and $b-2$. We can write, for all $x=(x_1,\ldots,x_m)\in \mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^n(x_1-\lambda_i)g(x)$$ with $\lambda_i\in\mathbb{F}_q$, $\lambda_i\neq\lambda_j$ for $i\neq j$ and $\deg(g)\leq b-n$. Then for $i\geq n+1$, $\deg(f_{\lambda_i}) \leq b-n$ and $|f_{\lambda_i}|\geq(q-b+n)q^{m-2}$. If $n=b-2$ or $n=2$ then, for cardinality reasons, for all $i\geq n+1$, $|f_{\lambda_i}|=(q-b+n)q^{m-2}$ and $f_{\lambda_i}$ is a minimum weight codeword of $R_q(b-n,m-1)$. If $n=1$, we denote by $N_1:=\#\{i\geq n+1:|f_{\lambda_i}|=(q-b+1)q^{m-2}\}$. For $i\geq n+1$, if $|f_{\lambda_i}|>(q-b+1)q^{m-2}$ then $|f_{\lambda_i}|\geq (q-b+2)(q-1)q^{m-3}$. If $b<\frac{q+3}{2}$, $(q-1)^2(q-b+2)q^{m-3}>(q-b+2)(q-2)q^{m-2}$. If $b\geq\frac{q+3}{2}$, $(q-1)^2(q-b+2)q^{m-3}\geq ((q-b+1)q-1)q^{m-2}$. So, in both cases, $N_1\geq1$. Suppose now that $S$ meets all hyperplanes ($n=0$). Since $(q-b+1)(q-1)<c_b<(q-b+1)q$, $c_b\not\equiv 0\mod q$. Then by Lemma \ref{inter}, there exists a hyperplane $H$ such that either $\#(S\cap H)=(q-b)q^{m-2}$ or $\#(S\cap H)=(q-b+1)(q-1)q^{m-3}$. If $\#(S\cap H)=(q-b+1)(q-1)q^{m-3}$ then there exists an affine subspace $A$ of codimension 2 included in $H$ which does not meet $S$. If all hyperplanes through $A$ meet $S$ in $(q-b+1)(q-1)q^{m-3}$ points then $|f|\geq(q+1)(q-1)(q-b+1)q^{m-3}\geq((q-b+1)q-1)q^{m-2}$, which gives a contradiction. So there exists a hyperplane which meets $S$ in $(q-b)q^{m-2}$ points. In all cases, there is a hyperplane which meets $S$ in $(q-b+n)q^{m-2}$ points. By applying an affine transformation, we can assume $x_1=\lambda$, $\lambda\in\mathbb{F}_q$, is an equation of this hyperplane. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$; then $f_{\lambda_{n+1}}$ is a minimum weight codeword of $R_q(b-n,m-1)$.
By applying an affine transformation, we can assume $f_{\lambda_{n+1}}$ depends only on $x_2$. Let $k\in\{n+1,\ldots,q\}$ be such that for all $i\leq k$, $f_{\lambda_i}$ depends only on $x_2$ and $f_{\lambda_{k+1}}$ does not depend only on $x_2$. If $k>b$ then, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\sum_{i=0}^bf_{\lambda_{i+1}}^{(i)}(x_2,\ldots,x_m)\prod_{1\leq j\leq i}(x_1-\lambda_j).$$ Since for $i \leq b+1$, $f_{\lambda_i}$ depends only on $x_2$, $f$ depends only on $x_1$ and $x_2$, which proves the proposition in this case. If $k\leq b$ then we can write, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=g(x_1,x_2)+\prod_{i=1}^k(x_1-\lambda_i)h(x)$$ with $\deg(h)\leq b-k$. Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2,\ldots,x_m)=g_{\lambda_{k+1}}(x_2)+\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)$$ with $\alpha\in\mathbb{F}_q^*$. Since $f_{\lambda_{k+1}}$ does not depend only on $x_2$, by Lemma \ref{3.9}, we have $|f_{\lambda_{k+1}}|\geq(q-b+k)q^{m-2}$. If $n=2$ or $n=b-2$, we get a contradiction since $k\geq n+1$ and $|f_{\lambda_{k+1}}|=(q-b+n)q^{m-2}$. If $n=0$ or $n=1$, we get $$|f|\geq (k-n)(q-b+n)q^{m-2}+(q-k)(q-b+k)q^{m-2}$$ which is absurd since for $n+1\leq k\leq b$, $$(k-n)(q-b+n)q^{m-2}+(q-k)(q-b+k)q^{m-2}\geq((q-b+1)q-1)q^{m-2}.$$ \end{preuve} \begin{lemme}\label{33}Let $q\geq7$ and $f\in R_q(3,3)$ be such that $|f|=(q-1)^3$. Then, up to affine transformation, for all $x=(x_1,x_2,x_3)\in\mathbb{F}_q^3$, $$f(x)=x_1x_2x_3.$$\end{lemme} \begin{preuve}Since $(q-1)^3<(q-2)q^2$, $\deg(f)=3$. Denote by $S$ the support of $f$. We prove first that $f$ is the product of $3$ affine factors. Let $P$ be a point of $\mathbb{F}_q^3$ which is not in $S$ and let $L$ be a line in $\mathbb{F}_q^3$ such that $P\in L$. Then, either $L$ does not meet $S$ or $L$ meets $S$ in at least $q-3$ points. If every line through $P$ meets $S$ then $$(q^2+q+1)(q-3)\leq|f|=(q-1)^3$$ which is absurd since $q\geq7$. So there exists a line $L_P$ through $P$ which does not meet $S$.
Then, if $A$ is a plane through $L_P$, either $A$ does not meet $S$ or $A$ meets $S$ in at least $(q-3)q$ points. If every plane through $L_P$ met $S$, we would have $(q+1)(q-3)q\leq|f|=(q-1)^3$, which is absurd since $q\geq 7$. So there exists a plane $A_P$ containing $P$ which does not meet $S$. By applying the same argument to all $P$ not in $S$, we get that $f$ is the product of affine factors. We have just proved that the set $Z$ of zeros of $f$ is the union of $3$ planes. If these 3 planes are parallel, $f$ is a minimum weight codeword. If 2 of these planes are parallel or the 3 planes intersect in a line, we get a second weight codeword. So the 3 planes intersect pairwise in a line. If the 3 intersections are parallel lines (see Figure \ref{mot33}) then $\#Z=3q^2-3q$, which is absurd. The only possibility left gives the result. \begin{figure}[h!]\caption{Three planes whose pairwise intersections are parallel lines}\label{mot33} \begin{center}\begin{tikzpicture}[scale=0.2] \draw[dashed] (1,1)--(1,7); \draw[dashed] (7,4)--(7,10); \draw[dashed] (3,9)--(3,15); \draw (0,39/6)--(8,63/6); \draw (26/8,10)--(26/8,16); \draw (26/8,16)--(6/8,6); \draw (6/8,0)--(6/8,6); \draw (6/8,0)--(26/8,10); \draw (0,39/6-6)--(8,63/6-6); \draw (0,39/6)--(0,39/6-6); \draw (8,63/6-6)--(8,63/6); \draw (11/5,16)--(11/5,10); \draw (11/5,16)--(39/5,9); \draw (39/5,9)--(39/5,3); \draw (39/5,3)--(11/5,10); \end{tikzpicture} \end{center} \end{figure}\end{preuve} \begin{proposition}\label{case2}Let $q\geq 7$, $m\geq 3$, $f\in R_q(3,m)$. If $|f|=(q-1)^3q^{m-3}$ then, up to affine transformation, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $f(x)=x_1x_2x_3$. \end{proposition} \begin{preuve}The case where $m=3$ comes from Lemma \ref{33}. Assume $m\geq 4$. Let $f\in R_q(3,m)$ such that $|f|=(q-1)^3q^{m-3}$. We denote by $S$ the support of $f$. Suppose first that $f$ has an affine factor. By applying an affine transformation, we can assume $x_1$ divides $f$. Let $n=\#\{\lambda\in\mathbb{F}_q:(x_1-\lambda)\textrm{ divides }f\}$.
Since $\deg(f)\leq 3$ and $f$ is neither a minimal weight codeword nor a second weight codeword, $n=1$. We can write, for all $x=(x_1,\ldots,x_m)\in \mathbb{F}_q^m$, $$f(x)=x_1g(x)$$ with $\deg(g)\leq 2$. Then for $\lambda\in\mathbb{F}_q^*$, $\deg(f_{\lambda}) \leq 2$ and $|f_{\lambda}|\geq(q-2)q^{m-2}$. We denote by $N_1:=\#\{\lambda\in\mathbb{F}_q^*:|f_{\lambda}|=(q-2)q^{m-2}\}$. For $\lambda\in\mathbb{F}_q^*$, if $|f_{\lambda}|>(q-2)q^{m-2}$ then $|f_{\lambda}|\geq (q-1)^2q^{m-3}$. So either $N_1\geq1$ or for all $\lambda\in\mathbb{F}_q^*$, $|f_{\lambda}|=(q-1)^2q^{m-3}$. If for all $\lambda\in\mathbb{F}_q^*$, $|f_{\lambda}|=(q-1)^2q^{m-3}$, then for any $\lambda\in\mathbb{F}_q^*$ there are two non-parallel affine subspaces of codimension 2 included in the hyperplane of equation $x_1=\lambda$ which do not meet $S$. Let $A$ be one of these affine subspaces, included in the hyperplane of equation $x_1=\lambda'$. Since $2(q-2)>q+1$, there exists $\lambda''\in\mathbb{F}_q^*$ such that one of the affine subspaces of codimension 2 included in $x_1=\lambda''$, say $A_1$, is parallel to $A$. Then $H_0$, the hyperplane through $A$ and $A_1$, contains at least one more point not in $S$, lying in a hyperplane of equation $x_1=\lambda$ with $\lambda\neq\lambda'$ and $\lambda\neq\lambda''$. So $H_0$ meets $S$ in at most $q^{m-1}-3q^{m-2}-1$ points. Since the minimum weight of $R_q(3,m-1)$ is $(q-3)q^{m-2}$, $H_0$ does not meet $S$. Applying the same argument to the other affine subspace of codimension 2 included in the hyperplane of equation $x_1=\lambda'$, we get that $f$ is the product of 3 linear factors. So the set $Z$ of zeros of $f$ is the union of $3$ hyperplanes. If these 3 hyperplanes are parallel, $f$ is a minimum weight codeword. If 2 of these hyperplanes are parallel or the 3 hyperplanes intersect in an affine subspace of codimension 2, we get a second weight codeword. So the 3 hyperplanes intersect pairwise in an affine subspace of codimension 2.
If the 3 intersections are parallel affine subspaces then $\#Z=3q^{m-1}-3q^{m-2}$, which is absurd. The only possibility left gives the result. From now on, we suppose that if $n=1$ then $N_1\geq1$. Suppose now that $S$ meets all hyperplanes ($n=0$). Since $(q-1)^3\not\equiv 0\mod q$, by Lemma \ref{inter}, there exists a hyperplane $H$ such that either $\#(S\cap H)=(q-3)q^{m-2}$ or $\#(S\cap H)=(q-2)(q-1)q^{m-3}$. If $\#(S\cap H)=(q-2)(q-1)q^{m-3}$ then there exists an affine subspace $A$ of codimension 2 included in $H$ which does not meet $S$. If all hyperplanes through $A$ meet $S$ in $(q-2)(q-1)q^{m-3}$ points then $|f|\geq(q+1)(q-1)(q-2)q^{m-3}>(q-1)^3q^{m-3}$, which gives a contradiction. So there exists a hyperplane which meets $S$ in $(q-3)q^{m-2}$ points. In all cases, there exists a hyperplane which meets $S$ in $(q-3+n)q^{m-2}$ points. By applying an affine transformation, we can assume $x_1=\lambda$, $\lambda\in\mathbb{F}_q$, is an equation of this hyperplane. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq|f_{\lambda_q}|$; then $f_{\lambda_{n+1}}$ is a minimum weight codeword of $R_q(3-n,m-1)$. By applying an affine transformation, we can assume $f_{\lambda_{n+1}}$ depends only on $x_2$. Let $k\in\{n+1,\ldots,q\}$ be such that for all $i\leq k$, $f_{\lambda_i}$ depends only on $x_2$ and $f_{\lambda_{k+1}}$ does not depend only on $x_2$. If $k>3$ then, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\sum_{i=0}^3f_{\lambda_{i+1}}^{(i)}(x_2,\ldots,x_m)\prod_{1\leq j\leq i}(x_1-\lambda_j).$$ Since for $i \leq 4$, $f_{\lambda_i}$ depends only on $x_2$, $f$ depends only on $x_1$ and $x_2$, which gives a contradiction since $(q-1)^3q^{m-3}\not \equiv0\mod q^{m-2}$. If $k\leq 3$ then we can write, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=g(x_1,x_2)+\prod_{i=1}^k(x_1-\lambda_i)h(x)$$ with $\deg(h)\leq 3-k$.
Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2,\ldots,x_m)=g_{\lambda_{k+1}}(x_2)+\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)$$ with $\alpha\in\mathbb{F}_q^*$. Since $f_{\lambda_{k+1}}$ does not depend only on $x_2$, by Lemma \ref{3.9}, we have $|f_{\lambda_{k+1}}|\geq(q-3+k)q^{m-2}$. Then, we get $$|f|\geq (k-n)(q-3+n)q^{m-2}+(q-k)(q-3+k)q^{m-2}$$ which is absurd since for $n+1\leq k\leq 3$, $$(k-n)(q-3+n)q^{m-2}+(q-k)(q-3+k)q^{m-2}>(q-1)^3q^{m-3}.$$\end{preuve} \begin{lemme}\label{suphyp}Let $m\geq3$, $q\geq7$, $1\leq a\leq m-2$ and $4\leq b \leq q-2$. If $f\in R_q(a(q-1)+b,m)$ is such that $|f|=c_bq^{m-a-2}$ and $c_b<(q-b+1)q-1$, then the support of $f$ is included in an affine hyperplane of $\mathbb{F}_q^m$.\end{lemme} \begin{lemme}\label{suphyp2}Let $m\geq4$, $q\geq7$, $1\leq a\leq m-3$. If $f\in R_q(a(q-1)+3,m)$ is such that $|f|=(q-1)^3q^{m-a-3}$, then the support of $f$ is included in an affine hyperplane of $\mathbb{F}_q^m$.\end{lemme} We set $m_0$ and $\widetilde{c}_b$ as in Section \ref{poids3}. We prove both lemmas at the same time.\\ \begin{preuve} Let $f\in R_q(a(q-1)+b,m)$ such that $|f|=\widetilde{c}_bq^{m-a-m_0}$ and denote by $S$ the support of $f$. Assume $S$ is not included in a hyperplane. Assume $S$ meets all affine hyperplanes. If $m=3$, since $(q-b+1)q>\widetilde{c}_b$, necessarily there exists a hyperplane which meets $S$ in $(q-b)$ points. Assume $m\geq 4$. Since $(q-b+1)(q-1)<{c}_b<(q-b+1)q$, $c_b\not\equiv0\mod q$. Since $(q-1)^3\not\equiv 0\mod q$, by Lemma \ref{inter}, in both cases there exists a hyperplane which meets $S$ in $(q-b)q^{m-a-2}$ points or $(q-b+1)(q-1)q^{m-a-3}$ points. By applying an affine transformation, we can assume $x_1=\lambda$, $\lambda\in\mathbb{F}_q$, is an equation of this hyperplane. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq |f_{\lambda_q}|$.
Then, by applying an affine transformation, we can assume that $(1-x_2^{q-1})$ divides $f_{\lambda_1}$. Let $k\geq1$ be such that for all $i\leq k$, $(1-x_2^{q-1})$ divides $f_{\lambda_i}$ but $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. Then, by Lemma \ref{3.7}, $|f|\geq (q-b)q^{m-a-1}+k(q-k)q^{m-a-2}\geq(q-b)q^{m-a-1}+(q-1)q^{m-a-2}$. We get a contradiction since $c_b<(q-b+1)q-1$. So there exists a hyperplane $H_0$ which does not meet $S$. By applying an affine transformation we can assume $x_1=\alpha$, $\alpha\in\mathbb{F}_q$, is an equation of $H_0$. We denote by $n$ the number of hyperplanes parallel to $H_0$ which do not meet $S$. We set an order on the elements of $\mathbb{F}_q$ such that $|f_{\lambda_1}|\leq\ldots\leq |f_{\lambda_q}|$. Since $S$ is not included in a hyperplane, $n<q-1$. If $b\geq\frac{q+3}{2}$ then $c_b<(q-b+2)(q-2)q^{m_0-2}$, and if $b=3$, $(q-1)^3\leq2(q-4)q^2$. So, by Lemma \ref{2.7}, for $b=3$ or $b\geq \frac{q+3}{2}$, the only possibilities are $n=1$, $n=b-1$ or $n=b$. If $4\leq b<\frac{q+3}{2}$ then $\widetilde{c}_b\leq(q-b+2)(q-2)$. So, by Lemma \ref{2.7}, the only possibilities are $n=1$, $n=2$, $n=b-2$, $n=b-1$ or $n=b$. We can write, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{1\leq i\leq n}(x_1-\lambda_i)g(x)$$ where $g\in R_q(a(q-1)+b-n,m)$. Then for all $i\geq n+1$, $f_{\lambda_i}\in R_q(a(q-1)+b-n,m-1)$ and $|f_{\lambda_i}|=|g_{\lambda_i}|\geq (q-b+n)q^{m-a-2}$. Assume $n=b$. For $\lambda\in\mathbb{F}_q$, if $|g_{\lambda}|>q^{m-a-1}$, then $|g_{\lambda}|\geq 2(q-1)q^{m-a-2}$. Denote by $N:=\#\{i\geq b+1 :|g_{\lambda_i}|=q^{m-a-1}\}$. Since for $i\geq b+1$, $|f_{\lambda_i}|=|g_{\lambda_i}|$ and $(q-b)2(q-1)q^{m-a-2}> \widetilde{c}_bq^{m-a-m_0}$, $N\geq1$. Furthermore, since $(q-b)q^{m-a-1}<(q-b+1)(q-1)q^{m-a-2}<|f|$, $N\leq q-b-1$. Assume that $|f_{\lambda_{b+N+1}}|\geq(N+1)q^{m-a-1}$.
Then $$Nq^{m-a-1}+(q-b-N)(N+1)q^{m-a-1}\leq |f|= \widetilde{c}_bq^{m-a-m_0}$$ which gives $$Nq^{m_0-1}(q-N-b)\leq \widetilde{c}_b-q^{m_0-1}(q-b)<q^{m_0-1}.$$ This gives a contradiction since $1\leq N\leq q-b-1$. Furthermore, the only possibility such that $|f_{\lambda_{b+N+1}}|=Nq^{m-a-1}$ is $N=1$, which is absurd since $f_{\lambda_{b+N+1}}$ is not a minimal weight codeword. By Lemma \ref{2.15.1}, for all $b+1\leq i\leq N+b$, $g_{\lambda_{b+1}}=g_{\lambda_i}$. So, we can write for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$ \begin{align*}f(x)&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(g_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\\&=\prod_{1\leq i\leq b}(x_1-\lambda_i)\left(\alpha f_{\lambda_{b+1}}(x_2,\ldots,x_m)+ \prod_{b+1\leq i\leq N+b}(x_1-\lambda_i)h(x)\right)\end{align*} where $h\in R_q(a(q-1)-N,m)$ and $\alpha\in\mathbb{F}_q^*$. Then, for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{b+N+1}}(x_2,\ldots,x_m)=\beta f_{\lambda_{b+1}}(x_2,\ldots,x_m)+\gamma h_{\lambda_{b+N+1} }(x_2,\ldots,x_m).$$ We get a contradiction by Lemma \ref{2.14}. Now, assume $n=1$, $n=2$, $n=b-2$ or $n=b-1$. Since $n\geq 1$, $f_{\lambda_1}=0$. So, $1-x_2^{q-1}$ divides $f_{\lambda_1}$. Then, there exists $k\in\{1,\ldots,q\}$ such that for all $i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$ and $(1-x_2^{q-1})$ does not divide $f_{\lambda_{k+1}}$. Since $S$ is not included in a hyperplane, $k\leq q-1$. For $i\geq n+1$, if $|f_{\lambda_i}|>(q-b+n)q^{m-a-2}$ then $$|f_{\lambda_i}|\geq w_2=\left\{\begin{array}{ll}q^{m-a-1}&\textrm{if $n=b-1$}\\q-b+n+1&\textrm{if $b\geq4$, $m=3$ and $n\neq b-1$}\\(q-b+n+1)(q-1)q^{m-a-3}&\textrm{otherwise}\end{array}\right..$$ Denote by $N:=\#\{i\geq n+1 :|f_{\lambda_i}|=(q-b+n)q^{m-a-2}\}$. Since $(q-n)w_2>\widetilde{c}_bq^{m-a-m_0}$, $N\geq1$. Since for $n=1$ or $n=b-1$, $(q-n)(q-b+n)q^{m-a-2}=(q-1)(q-b+1)q^{m-a-2}<|f|$, in these cases $N\leq q-n-1$.
For $n=2$ or $n=b-2$, since $(q-n)(q-b+n)q^{m-a-2}=(q-2)(q-b+2)q^{m-a-2}$, $N=q-n$. Then, $|f_{\lambda_{n+1}}|=(q-b+n)q^{m-a-2}$ and $f_{\lambda_{n+1}}$ is a minimal weight codeword of $R_q(a(q-1)+b-n,m-1)$ so, by applying an affine transformation, we can assume $1-x_2^{q-1}$ divides $f_{\lambda_{n+1}}$. Thus, $k\geq n+1\geq 2$. If $1\leq k\leq n+N-1$, then $|f_{\lambda_{k+1}}|=(q-b+n)q^{m-a-2}<(q-b+k)q^{m-a-2}$. If $n+N\leq k\leq q-1$ (since for $n=2$ or $n=b-2$, $n+N=q$, this case is possible only for $n=1$ or $n=b-1$), assume $|f_{\lambda_{k+1}}|\geq (q-b+k)q^{m-a-2}$. Then, $$|f|\geq N(q-b+n)q^{m-a-2}+(k-n-N)w_2+(q-k)(q-b+k)q^{m-a-2}. $$ Since $|f|=\widetilde{c}_bq^{m-a-m_0}$, $c_b<(q-b+1)q-1$, $1\leq N\leq q-n-1$ and $n+N\leq k\leq q-1$, we get a contradiction. Since for all $n+1\leq i\leq k$, $1-x_2^{q-1}$ divides $f_{\lambda_i}$, it divides $g_{\lambda_i}$ too. Then we can write, for all $x=(x_1,x_2,\ldots,x_m)\in\mathbb{F}_q^{m}$, \begin{align*}f(x)&=\prod_{1\leq i\leq n}(x_1-\lambda_i)\Bigg(\prod_{n+1\leq i\leq k}(x_1-\lambda_i)h(x_1,x_2,x_3,\ldots,x_m)\\&\hspace{2cm}+(1-x_2^{q-1})l(x_1,x_3,\ldots,x_m)\Bigg)\end{align*} with $\deg(h)\leq a(q-1)+b-k$ and $l\in R_q((a-1)(q-1)+b-n,m-1)$. Then for all $(x_2,\ldots,x_m)\in\mathbb{F}_q^{m-1}$, $$f_{\lambda_{k+1}}(x_2, \ldots,x_m)=\alpha h_{\lambda_{k+1}}(x_2,\ldots,x_m)+\beta(1-x_2^{q-1})l_{\lambda_{k+1}}(x_3,\ldots,x_m).$$ Thus, we get a contradiction by Lemma \ref{3.5} since $k\geq2$ and $|f_{\lambda_{k+1}}|<(q-b+k)q^{m-a-2}$.\end{preuve} \begin{theoreme}\label{m3}For $q\geq 7$, $m\geq2$, $0\leq a\leq m-2$ and $4\leq b\leq q-2$, if $c_b<(q-b+1)q-1$ then, up to affine transformation, the third weight codewords of $R_q(a(q-1)+b,m)$ are of the form: $$\forall x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m,\quad f(x)=\prod_{i=1}^a(1-x_i^{q-1})g(x_{a+1},x_{a+2})$$ where $g\in R_q(b,2)$ is such that $|g|=c_b$.
\end{theoreme} \begin{theoreme}\label{m33}For $q\geq 7$, $m\geq3$, $0\leq a\leq m-3$, up to affine transformation, the third weight codewords of $R_q(a(q-1)+3,m)$ are of the form: $$\forall x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m,\quad f(x)=\prod_{i=1}^a(1-x_i^{q-1})x_{a+1}x_{a+2}x_{a+3}.$$ \end{theoreme} \begin{preuve}Let $f\in R_q(a(q-1)+b,m)$ such that $|f|=\widetilde{c}_bq^{m-a-m_0}$. We denote by $S$ the support of $f$. If $a\geq1$ then by Lemma \ref{suphyp} or Lemma \ref{suphyp2}, $S$ is included in a hyperplane. By applying an affine transformation, we can assume $S$ is included in the hyperplane of equation $x_1=0$. Then by Lemma \ref{DGMW1}, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, we can write $$f(x)=(1-x_1^{q-1})g_1(x_2,\ldots,x_m)$$ with $g_1\in R_q((a-1)(q-1)+b,m-1)$. Since $|g_1|=|f|=\widetilde{c}_bq^{m-a-m_0}$, we can iterate this process. So, for all $x=(x_1,\ldots,x_m)\in\mathbb{F}_q^m$, $$f(x)=\prod_{i=1}^a(1-x_i^{q-1})g_a(x_{a+1},\ldots,x_m)$$ with $g_a\in R_q(b,m-a)$ and $|g_a|=|f|=\widetilde{c}_bq^{m-a-m_0}$. Then $g_a$ fulfils the conditions of Proposition \ref{case1} or Proposition \ref{case2} and we get the result. \end{preuve} \section{Conclusions} In this paper, we describe the third weight of generalized Reed-Muller codes for small $b\geq2$. More precisely, for $4\leq b< \frac{q+3}{2}$, $c_b\leq(q-b+2)(q-2)<(q-b+1)q-1$. Then from the results of Section \ref{cb}, Theorem \ref{w3}, Theorem \ref{w33}, Theorem \ref{m3} and Theorem \ref{m33}, we know the third weight and the third weight codewords of generalized Reed-Muller codes for $3\leq b<\frac{q+3}{4}$. If $\frac{q+3}{4}\leq b<\frac{q+3}{2}$, we know the third weight and the third weight codewords of generalized Reed-Muller codes up to the third weight and the third weight codewords of $R_q(b,2)$. 
If $b=2$ or $b=\frac{q+3}{2}$, $c_b\leq (q-b+2)(q-2)\leq(q-b+1)q-1$, so for $b=2$, we know the third weight and for $b=\frac{q+3}{2}$, we know the third weight up to the third weight of $R_q(b,2)$, but we do not know the form of the third weight codewords. Finally, if $b\geq \frac{q}{2}+2$, then $c_b\leq (q-b+1)q$ and we are not within the hypotheses of any of the previous theorems. This upper bound on $c_b$ is the minimum weight of $R_q(b-1,2)$. Either we can find a polynomial $f$ of degree $b$ such that $|f|\leq(q-b+1)q-1$, in which case we will have a better upper bound on the third weight of $R_q(a(q-1)+b,m)$ for $0\leq a\leq m-2$ and $b\geq \frac{q}{2}+2$, or $c_b=(q-b+1)q$. In this last case several questions arise: is the third weight of $R_q(b,2)$ reached only by minimal weight codewords of $R_q(b-1,2)$? Is the third weight of $R_q(a(q-1)+b,m)$ also the minimal weight of $R_q(a(q-1)+b-1,m)$?
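As a small sanity check on the counting behind Theorem \ref{m33}, the weight of a codeword of the stated form can be verified by a brute-force support count for small parameters. The sketch below is illustrative only; it takes $q$ prime so that integer arithmetic modulo $q$ realizes $\mathbb{F}_q$, and uses only the fact that $1-x_i^{q-1}$ vanishes exactly off $x_i=0$:

```python
import itertools

def weight_of_m33_codeword(q, m, a):
    """Count the support of f(x) = prod_{i<=a}(1 - x_i^(q-1)) * x_{a+1} x_{a+2} x_{a+3}
    over F_q^m, identifying F_q with {0, ..., q-1} for q prime."""
    count = 0
    for x in itertools.product(range(q), repeat=m):
        # 1 - x_i^(q-1) is nonzero over F_q iff x_i = 0 (Fermat's little theorem)
        if any(x[i] != 0 for i in range(a)):
            continue
        # x_{a+1} x_{a+2} x_{a+3} is nonzero iff all three coordinates are nonzero
        if all(x[a + j] != 0 for j in range(3)):
            count += 1
    return count

# The support is cut out by a zero conditions and 3 nonzero conditions,
# leaving m - a - 3 free coordinates, so the weight is (q-1)^3 * q^(m-a-3).
assert weight_of_m33_codeword(7, 3, 0) == 6**3
assert weight_of_m33_codeword(7, 4, 1) == 6**3
```

This only confirms the elementary count $(q-1)^3q^{m-a-3}$ for the displayed codeword form, not the classification itself.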
\section{Introduction} Gaussian processes \citep[GPs;][]{rasmussen06} are popular tools among statisticians and engineers for modeling complex problems because of their flexibility, simplicity, and their ability to quantify uncertainty. As Gaussian processes have become more popular in practice, there is an increased demand to modify Gaussian processes to possess certain characteristics. \citet{swiler} give several such possibilities to implement bound constraints, monotonicity constraints, differential equation constraints, and boundary condition constraints. In differential equations, boundary constraints on the actual values of the solution are called Dirichlet boundary conditions (as opposed to, e.g., Neumann boundary conditions which specify values of the derivatives). This is a common setting for modeling GPs. In a more general scenario, however, one may simply have knowledge of a process on a subset of the domain. This does not necessarily fit under the umbrella of ``boundary conditions'', as the knowledge of the process may not be on the boundary and/or the process may not be known to satisfy a differential equation. In this paper we propose a novel adaptation of a large class of Gaussian processes which have known, fixed values on an arbitrary subset of the domain. For simplicity, we will refer to this notion throughout the paper as ``boundary constraints'' while recognizing that the methodology is not limited to the boundary. As motivation, consider the following materials science application. Finite element models can be used to predict the strength of composite materials consisting of a polymer matrix and a filler material consisting of embedded spherical particles \citep{arp21}. There are seven parameters contributing to variations in strength, six of which determine properties of the filler and interactions between the filler and the matrix. 
The code to run the finite element model is too expensive to run directly, so Gaussian process models can serve as an approximation of the model given model runs throughout the domain. However, when there is no filler in the material, the strength of the composite is simply the strength of the polymer, which is a control parameter. Therefore, the strength of the composite is already known on a six-dimensional subset of the seven-dimensional domain. In an ideal setting one would be able to use that information in totality to improve the Gaussian process model. This information, though, cannot be captured via conditioning on a finite-dimensional multivariate Gaussian distribution. Given that infinitely many points are available in this scenario, one may suggest selecting a sufficient number of discrete points so that prediction error on this subset is below a certain threshold. For instance, the standard rule of thumb for choosing the sample size in a computer experiment is $10d$, where $d$ is the dimension of the domain \citep{loeppky09}. However, this rule is given in the context of computer experiments, where computing computer model runs can be very time consuming. Given the application stated above, where there is no computational cost associated with the information, this may not be the best approach. A more tailored approach to choosing the necessary sample size given an error threshold is given by \cite{Harari17}, who consider sample size as a random variable whose distribution is determined by the prior distribution on the parameterization of the covariance kernel family used. Though useful from a theoretical perspective, practically this would require strong prior knowledge of the parameter values, which is not likely to be known. Ultimately, one may simply check the prediction accuracy based upon various sample sizes and choose an appropriate sample size based upon trial and error. 
However, this still raises the question of how these points are distributed throughout the domain. Our interest here is thus a method for using Gaussian processes to capture information on an arbitrary subset of the domain in a more principled way. There exist in the literature several proposed approaches for solving simplified versions of this problem. \cite{Solin_2019} suggested modifying an analytic stationary covariance function by approximation with a collection of functions which vanish on the boundary of the domain. The basis functions used were solutions to the eigenvalue problem for the homogeneous Laplace equation. \cite{hegermann} used pushforward mappings to modify Gaussian processes to satisfy homogeneous linear operator constraints, including boundary constraints. One particular pushforward of a Gaussian process $\mathbb{X}$ is of the form $\rho \mathbb{X}$, where $\rho:\mathbb{R}^d \to [0,1]$. The author suggested choosing $\rho$ so that $\rho \equiv 0$ on the boundary as a means of satisfying the constraint. \cite{tan} several years earlier developed an explicit construction representative of the reasoning from \cite{hegermann}, and developed a mean function which permits nonzero constant boundary conditions. Though these methods have proven reasonable and effective under certain circumstances, none are able to handle truly general boundary conditions. The reasoning behind our construction follows from a more probabilistic perspective, in which fixing the value of a Gaussian process at certain points can be considered as computing the conditional distribution. For Gaussian distributions, computing conditional distributions is very straightforward in finite dimensions. But, for cases in which the value is assumed to be fixed on an uncountable subset containing infinitely many points, it is not straightforward to compute the conditional distribution. 
Our approach is to consider conditional expectation as an orthogonal projection, and so computing the conditional distribution reduces to explicitly identifying the form of the projection, which we are able to do. As an illustration, consider the following example. Let $T \subset \mathbb{R}^d,$ and define $\mathbb{X}^0=\{X^0_s; \, s \in T\}$ to be a Gaussian field with mean function $\mu$ and covariance kernel $k$. Define $T_0 \subset T$ to be a finite collection of points, $T_0=\{t_1,...,t_n\}$. It is well known that the stochastic process $\mathbb{X}^n=\{X^n_s; \, s \in T\}$ where $X^n_s=X^0_s|(X^0_{t_1}=x_{t_1},...,X^0_{t_n}=x_{t_n})$ is a Gaussian process with mean function \begin{equation}\label{eq:finiteMean} \mu_0(s)=\mu(s)+k(s,\mathbf{t}) k(\mathbf{t},\mathbf{t})^{-1}(\mathbf{x}-\mu(\mathbf{t})), \end{equation} and covariance kernel \begin{equation}\label{eq:finiteKernel} k_0(s,s')=k(s,s')-k(s,\mathbf{t}) k(\mathbf{t},\mathbf{t})^{-1} k(\mathbf{t},s'), \end{equation} where $\mathbf{t} = (t_1, \ldots, t_n)^\top$ and $s,s' \in T$. This can be shown using orthogonal projections and properties of Hilbert spaces. Define $$ X= \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim \mathcal{N} \bigg( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} , \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^\top & \Sigma_{22} \end{pmatrix} \bigg ). $$ Recalling that conditional expectation is an orthogonal projection, we can write $X_1=(X_1-PX_2)+PX_2$, for some linear operator $P$ so that $\mbox{Cov}(X_1-PX_2,X_2)=0$. In the finite dimensional case, $P$ is simply a matrix. Expanding this covariance out, we see $$ 0=\Sigma_{12}-P\Sigma_{22}, $$ so $P$ is the solution to $\Sigma_{12}=P \Sigma_{22}$. In the finite dimensional case, assuming $X_2$ is nondegenerate, we see $P=\Sigma_{12}\Sigma_{22}^{-1}$. 
Then, it follows that \begin{align*} E[X_1|X_2=x_2]&=E[X_1-PX_2|X_2=x_2]+PE[X_2|X_2=x_2]=\mu_1-P\mu_2+Px_2\\ &=\mu_1+P(x_2-\mu_2), \\ V(X_1|X_2=x_2)&=V(X_1-PX_2)=\mbox{Cov}(X_1,X_1-PX_2)-\mbox{Cov}(PX_2,X_1-PX_2) \\ &=\mbox{Cov}(X_1,X_1-PX_2)=\Sigma_{11}-\Sigma_{12}P^\top. \end{align*} In the finite dimensional case, projection matrices typically can be computed explicitly. However, for infinite dimensional function spaces, projections are not as tractable. Therefore, our goal is to identify the distribution of a Gaussian process $\mathbb{X}^0$ conditional on $\mathbb{X}^0|_{T_0}=g_0$ with an orthogonal projection from one function space to another, describe the projection operator in a more meaningful way, and use it to compute the conditional distribution. Then, we discuss how one might derive this result from conditioning on a representative set of points, providing an avenue for showing that our results do indeed represent the conditional distribution. This paper is organized as follows: Section 2 introduces some of the relevant information and notation that will be used throughout the paper, while Section 3 describes the construction of the mean and covariance of the process and illustrates how it can be derived by limits. Section 4 provides some probabilistic credence to the derivation in Section 3 including the connection to conditional expectation, and Section 5 is dedicated to illustrating how one might actually employ this approach in the context of more complex statistical models, as well as the notion of inexact or noisy information on $T_0$. Lastly, Section 6 discusses computational implementation, including several examples. We draw on several fundamental results from probability, functional analysis, and Reproducing Kernel Hilbert space theory that can be found in \cite{kall}, \cite{lax}, and \cite{paulsen} respectively. 
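The finite-dimensional conditioning formulas \eqref{eq:finiteMean} and \eqref{eq:finiteKernel} reduce to a few lines of linear algebra. The following sketch (the kernel, conditioning points, and fixed values are illustrative choices, not taken from the application above) checks the two defining properties of the conditioned process: the mean interpolates the fixed values, and the variance vanishes at the conditioning points:

```python
import numpy as np

def squared_exp(s, t, ell=0.5):
    """Stationary squared exponential kernel k(s, t) = exp(-|s - t|^2 / ell^2)."""
    return np.exp(-np.subtract.outer(s, t) ** 2 / ell**2)

def condition(mu, k, t, x, s):
    """Conditional mean/covariance of a GP at points s, given X_{t_i} = x_i:
    mu_0(s)    = mu(s) + k(s, t) k(t, t)^{-1} (x - mu(t))
    k_0(s, s') = k(s, s') - k(s, t) k(t, t)^{-1} k(t, s')."""
    Ktt = k(t, t) + 1e-10 * np.eye(len(t))   # jitter for numerical stability
    Kst = k(s, t)
    A = np.linalg.solve(Ktt, Kst.T).T        # k(s, t) k(t, t)^{-1}
    mu0 = mu(s) + A @ (x - mu(t))
    k0 = k(s, s) - A @ Kst.T
    return mu0, k0

t = np.array([0.0, 0.5, 1.0])                # conditioning points t_1, ..., t_n
x = np.array([1.0, -1.0, 2.0])               # fixed values x_{t_1}, ..., x_{t_n}
mu = lambda s: np.zeros_like(s)              # zero prior mean
s = np.concatenate([t, [0.25, 0.75]])        # evaluate at t and two new points
mu0, k0 = condition(mu, squared_exp, t, x, s)

assert np.allclose(mu0[:3], x, atol=1e-6)            # mean interpolates the data
assert np.allclose(np.diag(k0)[:3], 0.0, atol=1e-6)  # zero variance on t
```

The same two properties are exactly what the infinite-dimensional construction must preserve when $T_0$ is no longer finite.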
\section{Preliminaries} Construction of a conditional distribution revolves around the covariance function, which for the case of Gaussian processes will be studied as an element of a function space. As conditional expectation is an orthogonal projection in a Hilbert space, we need the covariance function to satisfy more properties than simply continuity or continuous differentiability. In this section we briefly review reproducing kernel Hilbert spaces (RKHS) and universal kernels, which play fundamental roles in our proposed construction. We use $K$ to denote the integral operator in $L^2(T)$ associated with $k$, defined by $$ Kx(t)= \int_T k(s,t) x(s) ds, $$ where $T \subset \mathbb{R}^d$, denote the range of $K$ as $R(K)$, and define $\langle \cdot \, , \cdot \rangle$ to be the standard inner product on $L^2$. \subsection{Reproducing Kernel Hilbert Spaces} For $t \in T$, define $\delta_t$ to be the Dirac functional which maps a function $f$ to $f(t)$. The collection $\{\delta_t\}_{t \in T}$ are known as the evaluation functionals. These are commonly seen defined on the continuous functions $(C(T),||\cdot||_\infty)$, where $||\cdot||_\infty$ denotes the supremum norm. As elements of the dual space, the evaluation functionals correspond to Dirac measures. The motivation behind reproducing kernel Hilbert spaces (RKHS) is to construct a Hilbert space so that the evaluation functionals are bounded, and thus identify uniquely with an element of the space itself. This is different from an $L^2$ space, which contains equivalence classes of functions in which two classes are equal if their representatives are equal almost everywhere. Under this construction, the evaluation functionals are not even well-defined. Thus, to guarantee these functionals exist and are bounded, clearly the Hilbert space must contain only continuous functions. 
Therefore, an RKHS on $T$ is defined to be a collection of functions $(\mathcal{H}(T),\langle \cdot,\cdot \rangle_{\mathcal{H}(T)})$ such that the evaluation functionals are bounded. A kernel $k$ defined on $T \times T$ has the reproducing property on $\mathcal{H}(T)$ if the representation of $\delta_t$ in $\mathcal{H}(T)$ is $k_t:=k(\cdot,t)$ for each $t \in T$. Thus, it follows that the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}(T)}$ satisfies $f(t)=\langle f, k_t\rangle_{\mathcal{H}(T)}$, for any $f \in \mathcal{H}(T), \, t \in T$. By the Moore-Aronszajn Theorem, each RKHS is identified uniquely with a kernel \citep[][Theorem 2.14]{paulsen}. The space $\mathcal{H}(T)$ is constructed via closing the span of the functions $\{k_t\}_{t \in T}$ under $||\cdot||_{\mathcal{H}(T)}$, which implies $\{k_t\}_{t \in T} \subset \mathcal{H}(T)$. In addition, the norm of $k_t$ can be calculated explicitly by $$ ||k_t||_{\mathcal{H}(T)}=\langle k_t,k_t\rangle^{1/2}_{\mathcal{H}(T)}=k(t,t)^{1/2}. $$ Furthermore, for $s,t \in T$, \begin{align*} ||k_s-k_t||^2_{\mathcal{H}(T)} = \langle k_s - k_t,k_s-k_t \rangle_{\mathcal{H}(T)} = k(s,s)-k(s,t)-k(t,s)+k(t,t). \end{align*} Using this, we may note that if $k$ is $\gamma$-H\"older continuous, then $||k_s-k_t||^2_{\mathcal{H}(T)} \le B |s-t|^{\gamma}$, for some constant $B>0$. This fact plays an important role in showing weak convergence of Gaussian processes to a limit in Section 4. Mercer's theorem \citep[][pp. 343-344]{lax}, which plays a fundamental role in the theory of RKHS, states that if $k$ is a continuous kernel, then for any $s,t \in T$, there exists a non-negative sequence $\{\lambda_n\}$ and an orthonormal basis $\{e_n\}$ such that $$ k(s,t)=\sum_{n=1}^\infty \lambda_n e_n(s)e_n(t), $$ which as a series converges absolutely and uniformly. 
In addition, it can be shown that for $f,g \in \mathcal{H}(T)$, $$ \langle f,g \rangle_{\mathcal{H}(T)}=\sum_{n=1}^\infty \frac{\langle f,e_n \rangle \langle g,e_n \rangle}{\lambda_n}, $$ and thus any $f \in \mathcal{H}(T)$ must satisfy $\sum_{n=1}^\infty \frac{\langle f,e_n \rangle^2}{\lambda_n} < \infty$. Therefore, we can generalize this to write $\mathcal{H}(T)=\{\sum_{n=1}^\infty a_n e_n: \{\frac{a_n}{\sqrt{\lambda_n}} \} \in \ell^2\}$. Consider the square root operator $K^{1/2}$ of the integral operator $K$. Observing that $\sum_{n=1}^\infty \lambda_n=\int_{T} k(s,s)ds<\infty$, it follows that $K^{1/2}$ is a bounded, compact, self-adjoint operator \citep{lax}, and can be represented by $$ K^{1/2} x = \sum_{n=1}^\infty \lambda_n^{1/2} \langle x, e_n \rangle e_n. $$ Since for $x \in L^2(T),$ \begin{align*} ||K^{1/2}x||_{\mathcal{H}(T)}^2 &= \langle K^{1/2}x,K^{1/2}x \rangle_{\mathcal{H}(T)} = \sum_{n=1}^\infty \frac{\langle K^{1/2}x,e_n \rangle^2}{\lambda_n}=\sum_{n=1}^\infty \frac{(\sqrt{\lambda_n}\langle x,e_n\rangle)^2}{\lambda_n} \\ &= \sum_{n=1}^\infty \langle x,e_n \rangle^2 \le ||x||_{L^2}^2, \end{align*} where the last inequality is an application of Bessel's inequality, we see that $\mbox{im}(K^{1/2}) \subset \mathcal{H}(T)$. Thus, $K^{1/2}$ is bounded with respect to $||\cdot||_{\mathcal{H}(T)}$. In particular, if $K$ has a trivial nullspace, the eigenvectors $\{e_n\}$ span $L^2(T)$, which allows us to substitute the inequality with an equality. If this is the case, $K^{1/2}$ is an isometric isomorphism between $L^2(T)$ and $\mathcal{H}(T)$. Hence, $K^{-1/2}$ exists and is bounded, and for $f,g \in \mathcal{H}(T),$ $$ \langle f,g \rangle_{\mathcal{H}(T)}=\langle K^{-1/2}f,K^{-1/2}g \rangle. $$ As motivated in the previous section, the projection occurs in both the mean and the covariance, meaning that the mean function should be an element of the RKHS. If the mean function is zero, this is trivially the case. 
Otherwise, it is difficult to check if a function is an element of $\mathcal{H}(T)$. As stated before, $\mathcal{H}(T) \subset C(T)$, but the converse is not true in general. For example, it has been shown that the RKHS associated with the square exponential kernel given by $k(s,t)=\exp \{-|s-t|^2\}$ does not contain any constant functions or polynomials in general \citep{ha}. Ideally, the mean function is an element of the RKHS, but in the case in which it is not, it is important that it can be well approximated by an element of the RKHS. The notion of universality is an important concept which describes the ``coverage'' of a kernel with respect to the continuous functions. \subsection{Universal Kernels} Since the space of uniformly continuous functions does not form a Hilbert space, there cannot exist a kernel such that $\mathcal{H}(T) = C(T)$. Thus, the universality of a kernel refers to the ability of the associated RKHS to approximate continuous functions. In particular, a kernel is said to be \textit{universal} if $\mathcal{H}(T)$ is dense in $C(T)$ under the supremum norm $||\cdot||_\infty$, i.e.\ if any continuous function can be approximated to arbitrary precision by an element of $\mathcal{H}(T)$. Universal kernels were covered extensively by \cite{micchelli}, and our insight stems from this paper. In statistics and machine learning, it is typical for one to use translation-invariant or stationary kernels when defining Gaussian processes, i.e.\ kernels $\Tilde{k}$ such that $\Tilde{k}(s,t)=k(s-t)$ for some function $k$. Bochner's theorem \citep[][pp. 141-147]{lax} provides that $\Tilde{k}$ is a kernel if and only if there exists a unique Borel measure $\nu$ on $\mathbb{R}^d$ satisfying for any $s \in \mathbb{R}^d,$ $$ k(s)=\int_{\mathbb{R}^d} e^{i( s,t)}\nu(dt), $$ where $( \cdot \,, \cdot )$ denotes the dot product on $\mathbb{R}^d$. 
Defining $\phi$ so that $\phi(s)(t)=e^{i( s,t) }$, we see that $$ k(s_1-s_2)=\int_{\mathbb{R}^d} e^{i( s_1,t)}e^{-i( s_2,t)}\nu(dt)=\int_{\mathbb{R}^d} \phi(s_1)(t) \overline{\phi(s_2)}(t)\nu(dt)=\langle \phi(s_1),\phi(s_2)\rangle. $$ Since $\phi$ does not depend on $k$, the properties of universality are completely determined by the measure $\nu$. \cite{micchelli} show that if $\nu$ is absolutely continuous with respect to Lebesgue measure, then $\Tilde{k}$ is universal. In this sense, any characteristic function of a continuous, symmetric probability distribution is universal. In particular, since the square exponential kernel is the characteristic function of a zero mean Gaussian distribution and the Mat\'ern kernel is the characteristic function of a $t$-distribution, any square exponential or Mat\'ern kernel is universal. Furthermore, any kernel of the form $$ k(s-t)=C \exp \bigg( - \sum_{i=1}^d \ell_i |s_i-t_i|^{p_i} \bigg), \quad C, \ell_i>0, \; 0<p_i\le 2, $$ is universal, as these are products of characteristic functions of symmetric stable distributions. Furthermore, non-stationary universal kernels may be constructed using the idea presented below. \begin{proposition} Suppose $k$ is a universal kernel, and $q$ is a kernel of the form $$ q(s,t)=\sigma(s)\sigma(t) k(s,t), $$ where $\sigma$ is a continuous function on $T$ satisfying $0 < m \le \sigma(s)\le M < \infty$ for some $m$ and $M$ and all $s \in T$. Then, $q$ is universal. \begin{proof} Since $k$ is universal, $R(K)$ is dense in $C(T)$. Now, define $Q:L^2(T) \to L^2(T)$ by $$ Qx(t)=\int_T q(s,t)x(s)ds=\sigma(t)\int_T k(s,t)\sigma(s)x(s)ds. $$ Thus, we observe that $I_\sigma=\{\sigma f: f \in R(K)\} \subset R(Q)$. Therefore, it suffices to show that $I_\sigma$ is dense in $C(T)$ under $||\cdot||_\infty$. So, let $g \in C(T)$. Then, $\frac{g}{\sigma} \in C(T)$. So, for $\epsilon>0,$ choose $f \in R(K)$ so that $||f-\frac{g}{\sigma}||_\infty < \epsilon/M$. 
Then, for any $s \in T$, $$ |\sigma(s)f(s)-g(s)|=|\sigma(s)|\, \bigg|f(s)-\frac{g(s)}{\sigma(s)} \bigg| < \epsilon. $$ \end{proof} \end{proposition} Thus, one may combine translation invariant kernels such as those given above with non-homogeneous variance conditions to generate a general class of non-stationary universal covariance kernels. In practice, working with a universal kernel is important since it is often not realistic to assume the function one is interested in estimating is in $\mathcal{H}(T)$. In the next section, the importance of universal kernels will become clear, as the solution relies on the computation of an RKHS inner product. \section{Deriving the Mean and Covariance} In this section, we define a mean and covariance for a Gaussian process $\mathbb{X}$ that result from the limit of mean and covariance functions obtained via conditioning on finitely many points in a subset of the domain. Section 4 will discuss the implications of these results from a probabilistic perspective. \subsection{Derivation} Let $T\subset \mathbb{R}^d$ be compact, and $T_0 \subset T$ be an arbitrary set on which we assume information about a particular function $g$ is known. Any Gaussian process which is fixed on $T_0$ must have a covariance function $k_0$ satisfying $k_0(s,t)=0$ whenever $s \in T_0$ or $t \in T_0$. Denote by $\mathcal{H}(T)$ the RKHS associated with a continuous and universal kernel $k$, and define $$ \mathcal{H}_0 = \{f \in \mathcal{H}(T):\, f|_{T_0} \equiv 0\}. $$ It can be verified that $\mathcal{H}_0$ is a closed subspace of $\mathcal{H}(T)$, which implies that there exists an orthogonal projection $P: \mathcal{H}(T) \to \mathcal{H}_0$. $\mathcal{H}_0$ is also an RKHS with reproducing kernel $k_0(s,t)=(Pk)(s,t) = \langle Pk_s, k_t \rangle_{\mathcal{H}(T)}$ \cite[][Theorem 2.5]{paulsen}. 
Furthermore, by properties of orthogonal projections, any function $f \in \mathcal{H}(T)$ which satisfies $f=g$ on $T_0$ must be of the form $$ f = h_0 + g_{\perp}, $$ where $g_\perp \in \mathcal{H}_0^\perp$ is so that $g$ has the unique representation $g= g_0 + g_\perp$, where $g_0, h_0 \in \mathcal{H}_0$. The Kolmogorov existence theorem permits the existence of a Gaussian process given a mean $\mu$ and kernel function $k$ provided that $k$ is symmetric and positive semi-definite \citep[][pp. 92]{kall}. As a corollary, we have the following result. \begin{theorem} Given a continuous covariance function $k$ and $\mu \in \mathcal{H}(T)$, there exists a Gaussian process $\mathbb{X}=\{X_t; \, t \in T\}$ with mean $\mu_0 = P \mu + g_\perp$ and covariance $Pk$. In addition, $X_t = g_\perp(t)$ a.s. for each $t \in T_0$. \end{theorem} Though such processes are guaranteed to exist, this result by itself is not very useful from a practical standpoint since it is unclear how one might compute $Pf$ for arbitrary $f \in \mathcal{H}(T)$. Note that $$ \mathcal{H}_0^\perp = \overline{\mbox{Span}}(\{k_s; \, s \in T_0\}). $$ Hence, in the remainder of this section, we use $\mathcal{H}_0^\perp$ for computations, as the elements of this RKHS are more naturally described. Let $k_\perp$ be the reproducing kernel for $\mathcal{H}_0^\perp$. Since $\mathcal{H}(T) = \mathcal{H}_0 \oplus \mathcal{H}_0^\perp$, it follows that $k = k_0 + k_\perp$ \citep[][Corollary 5.5]{paulsen}, and therefore $k_0 = k - k_\perp$. Naturally, one may compute $k_\perp(s,t) = \langle (I-P)k_s, k_t \rangle_{\mathcal{H}(T)}$. However, in this section, we will find a more tractable expression for $k_\perp$ which does not require the use of a projection operator. First, suppose $T_0=\{t_1,\hdots,t_n\}$, and define $Q$ to be the orthogonal projection onto $\mathcal{H}_0^\perp =\mbox{Span}(\{k_{t_1},\hdots,k_{t_n}\})$. 
Although computing the conditional distribution in this case is trivial, we provide an alternative derivation which extends directly to a more general setting. Without loss of generality, assume that $\{k_{t_1},\hdots,k_{t_n}\}$ is a linearly independent set so that the matrix $k(\mathbf{t},\mathbf{t}) = (k(t_i,t_j))_{i,j=1}^n$ has full rank. Then, any $f \in \mathcal{H}(T)$ can be decomposed uniquely as $f=f_0 + Qf$, where $Qf_0 \equiv 0$, and $$ Qf = \sum_{i=1}^n a_i(f) k_{t_i}, $$ where $Qf(t_i)=f(t_i)$ for each $i=1,\hdots,n$ \cite[][Corollary 3.5]{paulsen}. In turn, this implies the vector $\mathbf{a}(f) = (a_1(f),\hdots,a_n(f))^\top$ satisfies $$ \mathbf{a}(f) = k(\mathbf{t},\mathbf{t})^{-1} f(\mathbf{t}). $$ Therefore, for $f_1,f_2 \in \mathcal{H}(T)$, the inner product on $\mathcal{H}_0^\perp$ for $Qf_1,Qf_2$ is computed as \begin{align*} \langle Q f_1, Q f_2 \rangle_{\mathcal{H}(T)} & = \Big \langle \sum_{i=1}^n a_i(f_1) k_{t_i}, \sum_{j=1}^n a_j(f_2) k_{t_j}\Big \rangle_{\mathcal{H}(T)} = \sum_{i=1}^n \sum_{j=1}^n a_i(f_1) a_j(f_2) \langle k_{t_i}, k_{t_j} \rangle_{\mathcal{H}(T)} \\ & = \sum_{i=1}^n \sum_{j=1}^n a_i(f_1) a_j(f_2) k(t_i, t_j) = \mathbf{a}(f_1)^\top k(\mathbf{t},\mathbf{t}) \mathbf{a}(f_2) = f_1(\mathbf{t})^\top k(\mathbf{t},\mathbf{t})^{-1} f_2(\mathbf{t}). \end{align*} Using this formula, we see for $s_1,s_2 \in T$ that $$ k_\perp(s_1, s_2) = \langle Q k_{s_1}, Q k_{s_2} \rangle_{\mathcal{H}(T)} = k(s_1,\mathbf{t}) k(\mathbf{t},\mathbf{t})^{-1} k(\mathbf{t},s_2), $$ which implies that $$ k_0(s_1,s_2) = k(s_1,s_2) - k(s_1,\mathbf{t}) k(\mathbf{t},\mathbf{t})^{-1} k(\mathbf{t},s_2). $$ By setting $P = I-Q$, noting that $g_\perp = Q g$, and using the reproducing property, we may write \begin{align*} \mu_0(s) & = [(I-Q)\mu](s) + g_\perp(s) = \mu(s) + [Q(g - \mu)](s) = \mu(s) + \langle Q k_s, Q(g-\mu)\rangle_{\mathcal{H}(T)}\\ &= \mu(s) + k(s,\mathbf{t}) k(\mathbf{t},\mathbf{t})^{-1} (g(\mathbf{t}) - \mu(\mathbf{t})). 
\end{align*} Note the formulae for $\mu_0$ and $k_0$ correspond with those for the conditional distribution of $X_s | (X_{t_1}=g(t_1), \hdots, X_{t_n}=g(t_n))$, as expected. Define the mapping $\psi:\mathcal{H}_0^\perp \to \mathbb{R}^n$ by $\psi(f) = f(\mathbf{t})$. Equipping $\mathbb{R}^n$ with the inner product $$ \langle \mathbf{f}_1, \mathbf{f}_2 \rangle_{0} = \mathbf{f}_1^\top k(\mathbf{t}, \mathbf{t})^{-1} \mathbf{f}_2, $$ it is clear that $\psi$ is an isometry. This observation is emphasized because even though elements of $\mathcal{H}_0^\perp$ are functions on $T$, they are completely determined by their values on $T_0$. In fact, $(\mathbb{R}^n,\langle \cdot, \cdot \rangle_{0})$ is itself an RKHS with kernel $k(\mathbf{t},\mathbf{t})$, which is congruent to $k|_{T_0 \times T_0}$. Therefore, in some sense one can think of $\psi$ as a restriction to the set $T_0$. This is a key feature of our construction, one that holds true in the general case. Now suppose $T_0$ is an arbitrary subset of $T$, and define $\mathcal{H}(T_0)$ to be the RKHS generated by $k$ of functions defined on $T_0$. Although this space is different from $\mathcal{H}_0^\perp$, one can also write $$ \mathcal{H}(T_0) = \overline{\mbox{Span}} (\{k_s|_{T_0}: s \in T_0\}), $$ so in some sense $\mathcal{H}(T_0)$ and $\mathcal{H}_0^\perp$ are generated by the same functions, which leads to an important result. \begin{theorem} There exists an isometric isomorphism between $\mathcal{H}_0^\perp$ and $\mathcal{H}(T_0)$. \begin{proof} Define $\psi:\mbox{Span}(\{k_s;\, s \in T_0\}) \to \mathcal{H}(T_0)$ by $f \mapsto f|_{T_0}$. Clearly $\psi$ is well-defined and linear. 
Additionally, for arbitrary $n \ge 1$, $\{t_1,\hdots,t_n\} \subset T_0$, and $f = \sum_{i=1}^n a_i k_{t_i}$, we have \begin{align*} \langle f,f \rangle_{\mathcal{H}(T)} & = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \langle k_{t_i}, k_{t_j} \rangle_{\mathcal{H}(T)} = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \langle \psi(k_{t_i}), \psi(k_{t_j}) \rangle_{\mathcal{H}(T_0)} \\ & = \bigg \langle \psi \Big(\sum_{i=1}^n a_i k_{t_i}\Big), \psi\Big(\sum_{j=1}^n a_j k_{t_j}\Big) \bigg \rangle_{\mathcal{H}(T_0)} = \langle \psi (f), \psi (f) \rangle_{\mathcal{H}(T_0)}. \end{align*} Therefore $\psi$ is an isometry. Since $\mbox{Span}(\{k_s;\, s \in T_0\})$ is dense in $\mathcal{H}_0^\perp$, there exists an isometry $\Tilde{\psi}:\mathcal{H}_0^\perp \to \mathcal{H}(T_0)$ \citep[][pp. 205]{rudin73} which is defined by limits, and therefore must also map $f \mapsto f|_{T_0}$. Clearly $\Tilde{\psi}$ is one-to-one since $\Tilde{\psi}f \equiv 0$ implies that $f|_{T_0} \equiv 0$, meaning that $f \in \mathcal{H}_0$. Since $f \in \mathcal{H}_0^\perp$, $f \equiv 0$. Now, suppose $h \in \mathcal{H}(T_0)$. Then, there exists a Cauchy sequence $\{h_n\} \subset \mbox{Span}(\{k_s|_{T_0}; \, s \in T_0\})$ which converges to $h$. One may define $\{f_n\} \in \mathcal{H}_0^\perp$ so that $\Tilde{\psi}f_n=h_n$. Since $\Tilde{\psi}$ is an isometry, $\{f_n\}$ is Cauchy and therefore has a limit $f \in \mathcal{H}_0^\perp$. Then, $$ \Tilde{\psi}f = \Tilde{\psi}\Big(\lim_n f_n\Big) = \lim_n \, \Tilde{\psi} f_n = \lim_n h_n = h, $$ which completes the proof. \end{proof} \end{theorem} Thus, defining $Q$ to be the projection from $\mathcal{H}(T)$ to $\mathcal{H}_0^\perp$, we have $$ Qf(s) = \langle Qf, k_{s}\rangle_{\mathcal{H}(T)} = \langle f|_{T_0}, k_s|_{T_0} \rangle_{\mathcal{H}(T_0)}. 
$$ Therefore, in the more general case, for $s_1,s_2 \in T$, one may write \begin{align} \mu_0(s_1) & = \mu(s_1) + \langle k_{s_1}|_{T_0}, (g-\mu)|_{T_0}\rangle_{\mathcal{H}(T_0)}, \\ k_0(s_1,s_2) & = k(s_1,s_2) - \langle k_{s_1}|_{T_0}, k_{s_2}|_{T_0} \rangle_{\mathcal{H}(T_0)}. \end{align} Referring back to the series representation of the RKHS inner product, the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}(T_0)}$ is much more tractable than the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}_0^\perp}$ due to the fact that the kernel on $\mathcal{H}(T_0)$ is known explicitly, whereas the kernel $k_\perp$ for $\mathcal{H}_0^\perp$ is computed via a projection which is less tractable from a numerical perspective. In Section \ref{sec:app}, we show that this formulation can be used in a numerical setting. \subsection{Limits} As mentioned in Section 1, one potential method of approximating the distribution of a Gaussian process conditional on all of $T_0$ is by conditioning on a representative finite subset of $T_0$. We will now show that the conditional mean and covariance computed from this method converge to $\mu_0$ and $k_0$ given by (3-4) as the number of points conditioned on increases. By Theorem 3.2, it is acceptable to consider functions on $T_0$. We assume any function defined in this section is defined on $T_0$ unless otherwise specified. Let $D= \{t_n\}$ be a countable dense subset of $T_0$, and consider $\mathcal{K}_D:=\overline{\mbox{Span}}(\{k_t; \, t \in D\})$. Note that since $D$ is dense, for arbitrary $s \in T_0$, there exists a subsequence $\{t_{n_j}\} \subset D$ so that $k_s = \lim_{j \to \infty} k_{t_{n_j}}$. Therefore, $$ \{k_s; \, s \in T_0\} \subset \mathcal{K}_D \subset \mathcal{H}(T_0), $$ which implies that $\mathcal{K}_D=\mathcal{H}(T_0)$. 
As a consequence, for a given $f \in \mathcal{H}(T_0)$ and for $\epsilon > 0 $, there exists an $N_0$ so that any interpolating approximation $f_N$ by $\{k_{t_n}\}_{n=1}^N$ of $f$ satisfies $$ ||f_N-f||_{\mathcal{H}(T_0)} < \epsilon, \, \mbox{if} \, N \ge N_0. $$ By defining $Q_N$ as the orthogonal projection on $\mbox{Span}(\{k_{t_n}\}_{n=1}^N)$, this statement is equivalent to saying that $Q_Nf \to f$ for any $f \in \mathcal{H}(T_0)$. Now, define $\mu^N_0$ and $k_0^N$ as the mean and covariance resulting from conditioning on $\{t_1,\hdots,t_N\}$. Recalling the derivation of $\langle Q_Nf_1, Q_Nf_2 \rangle_{\mathcal{H}(T_0)}$, and noting that $$ \langle Q_Nf_1, Q_Nf_2 \rangle_{\mathcal{H}(T_0)} \to \langle f_1, f_2 \rangle_{\mathcal{H}(T_0)}, $$ it follows that for $s_1,s_2 \in T$, \begin{align} \mu_0^N(s_1) &= \mu(s_1) + \langle Q_Nk_{s_1}, Q_N(g-\mu)\rangle_{\mathcal{H}(T_0)} \to \mu(s_1) + \langle k_{s_1}, g-\mu\rangle_{\mathcal{H}(T_0)}=\mu_0(s_1) \\ k_0^N(s_1,s_2) &= k(s_1, s_2) - \langle Q_N k_{s_1}, Q_N k_{s_2} \rangle_{\mathcal{H}(T_0)} \to k(s_1,s_2) - \langle k_{s_1}, k_{s_2} \rangle_{\mathcal{H}(T_0)} = k_0(s_1,s_2). \end{align} Observe also that \begin{align*} k_0^N(s_1,s_2)-k_0(s_1,s_2) & = \langle k_{s_1},k_{s_2} \rangle_{\mathcal{H}(T_0)}- \langle Q_Nk_{s_1},Q_Nk_{s_2} \rangle_{\mathcal{H}(T_0)} \\ & = \langle (I-Q_N)k_{s_1}, (I-Q_N)k_{s_2} \rangle_{\mathcal{H}(T_0)}, \end{align*} which implies that $k_0^N-k_0$ is a positive kernel. In the sense of stochastic processes, this property implies that $k_0$ is a further reduction of variance from $k_0^N$. In fact, equations (5-6) correspond directly to equations (1-2) respectively. In the next section, we address the question of stochastic convergence.
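The finite-dimensional quantities $\mu_0^N$ and $k_0^N$ above are exactly the usual Gaussian process regression formulas, so the monotone variance reduction implied by the positivity of $k_0^N - k_0$ can be checked directly. The following is a minimal sketch; the squared-exponential kernel, the nested grids, and the choice $g = \sin$ are illustrative assumptions, not choices made in the text.

```python
import numpy as np

def k(s, t, ell=0.15):
    """Squared-exponential kernel; an assumed choice, the text fixes no kernel."""
    return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2.0 * ell ** 2))

def conditioned_moments(s, t_cond, g):
    """Finite-dimensional analogues of mu_0^N and k_0^N (equations (5)-(6)),
    with prior mean mu = 0: condition on X(t_i) = g(t_i) for t_i in t_cond."""
    Kc = k(t_cond, t_cond)
    ks = k(s, t_cond)                       # cross-covariances k_s(t_i)
    mu_N = ks @ np.linalg.solve(Kc, g(t_cond))
    k_N = k(s, s) - ks @ np.linalg.solve(Kc, ks.T)
    return mu_N, k_N

# Nested grids {t_1,...,t_N} make the projections Q_N nested, so the
# pointwise variances diag(k_0^N) decrease monotonically in N.
s = np.linspace(0.0, 1.0, 5)
variances = [np.diag(conditioned_moments(s, np.linspace(0.0, 1.0, N), np.sin)[1])
             for N in (3, 5, 9)]
```

Using nested grids is what guarantees the monotone decrease here; for non-nested point sets, only the limiting comparison with $k_0$ is available.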
\section{Weak Convergence and a Probabilistic Perspective} One of the highlights of the previous section was showing that the finite dimensional distributions of a Gaussian process conditioned on $N$ points converge to those of a limiting process provided that the mean function $\mu$ is in the RKHS associated with the covariance kernel and the dense set of points defines a function $g$ which is also contained in the RKHS. Define the sequence of Gaussian processes $\{\mathbb{X}^N\}$ so that $\mathbb{X}^N$ has mean and covariance $\mu^N_0$ and $k^N_0$, and define $\mathbb{X}$ to be a Gaussian process with mean and covariance $\mu_0$ and $k_0$. To show that the limit of the finite dimensional distributions defines a Gaussian process $\mathbb{X}$ such that $\mathbb{X}^N \Rightarrow \mathbb{X}$, it remains to show that $\{\mathbb{X}^N\}$ is tight. As the setting for many applications desires continuous processes, it is important to ensure that sample paths of $\{\mathbb{X}^N\}$ are almost surely continuous for each $N \ge 0$. \begin{lemma} Suppose that $\mathbb{X}$ is a Gaussian process with mean $\mu$ and covariance kernel $k$. If $\mu$ is continuous and $k$ is $\gamma-$H\"older continuous on $\mathbb{R}^d \times \mathbb{R}^d$, then there is a version of $\mathbb{X}$ which is almost surely continuous. \begin{proof} We will use the Kolmogorov-Chentsov theorem \citep[][pp. 35-36]{kall} which states that $\mathbb{X}$ has a continuous version on $\mathbb{R}^d$ taking on values in a complete metric space $(S,\rho)$ if there exist $a,b>0$ such that $$ E[\rho(X_s,X_t)^a] \le |s-t|^{d+b}, \, s,t \in \mathbb{R}^d. $$ Assume that $\mathbb{X}$ has zero mean and covariance as specified above. Define $\rho$ to be the Euclidean norm on $\mathbb{R}$, and recall that for any zero mean Gaussian random variable $Z$ and any even integer $a$, $$ E[Z^{a}]= C_a E[Z^2]^{a/2}, $$ where $C_a=\prod_{i=1}^{a/2} (2i-1)$.
Defining $a$ to be the smallest even integer strictly larger than $\frac{2d}{\gamma}$, we see for any $s,t \in \mathbb{R}^d$, \begin{align*} E[\rho(X_t,X_s)^a]&=E[(X_t-X_s)^a]=C_{a} E[(X_t-X_s)^2]^{a/2}=C_a \big [k(t,t)-2k(t,s)+k(s,s) \big]^{a/2} \\ & \le C|s-t|^{\gamma a/2}=C|s-t|^{d+(\gamma a/2-d)}. \end{align*} Thus, selecting $b=\gamma a/2-d$, and scaling $\rho$ appropriately, we get the result for a zero mean process. Lastly, the non-zero mean process can be achieved by translating the process by the mean, repeating the procedure above, and noting that the sum of continuous functions is continuous. \end{proof} \end{lemma} It is indeed the case that $\{\mathbb{X}^N\}$ is tight if the conditions for the Kolmogorov-Chentsov theorem stated above are met uniformly in $N$ \citep[][pp. 35-36]{kall}. The theorem below provides conditions for the tightness of $\{\mathbb{X}^N\}$, and hence its weak convergence to a Gaussian process $\mathbb{X}$ with mean function $\mu_0$ and covariance kernel $k_0$. \begin{theorem} If the covariance kernel $k$ is $\gamma-$H\"older continuous, $k$ is universal on $T_0$ and $g|_{T_0},\mu|_{T_0} \in \mathcal{H}(T_0)$, then $\{\mathbb{X}^N\}$ is tight in $(C(T),||\cdot||_\infty)$. \begin{proof} Recall the remark in Section 3 in which the mean and covariance of $\mathbb{X}^N$, written $\mu^N_0$ and $k^N_0$, can be defined as \begin{align*} \mu^N_0(s) &=\mu(s)+\langle Q_Nk_s,Q_N(g-\mu) \rangle_{\mathcal{H}(T_0)}, \\ k^N_0(s,t) &=k(s,t)-\langle Q_N k_s, Q_N k_t\rangle_{\mathcal{H}(T_0)}.
\end{align*} Now, observe that for $s_0 \in T$, \begin{align*} |k^N_0(s_0,s) - k^N_0(s_0,t)| &\le |k(s_0,s)-k(s_0,t)| + |\langle Q_N k_{s_0}, Q_N (k_s-k_t)\rangle_{\mathcal{H}(T_0)}| \\ &\le C|s-t|^{\gamma} + ||Q_N k_{s_0} ||_{\mathcal{H}(T_0)} || Q_N(k_s-k_t)||_{\mathcal{H}(T_0)} \\ & \le C|s-t|^{\gamma} +||k_{s_0}||_{\mathcal{H}(T_0)}||k_s-k_t||_{\mathcal{H}(T_0)} \\ &\le C|s-t|^{\gamma} + ||k_{s_0}||_{\mathcal{H}(T_0)} ||k_s-k_t||_{\mathcal{H}(T)} \\ & \le C|s-t|^{\gamma} + C'|s-t|^{\gamma/2}\le \Tilde{C} |s-t|^{\gamma/2}, \end{align*} where the first inequality follows from the triangle inequality, the final inequality follows from the boundedness of $T$, and $\Tilde{C}$ does not depend on $s_0$ or $N$. Since $k$ itself is $\gamma-$H\"older continuous, it follows that $k^N_0$ is $\gamma/2-$H\"older continuous on $T \times T$ uniformly in $N$. Furthermore, $\mu^N_0 \to \mu_0$ uniformly, where we again use the uniform $\gamma/2-$H\"older bound, now applied along $\{Q_N(g-\mu)\}$. Therefore, $\{\mathbb{X}^N\}$ is tight. \end{proof} \end{theorem} Therefore, it follows that $\mathbb{X}^N \Rightarrow \mathbb{X}$ if the original mean function is continuous and the covariance kernel is H\"older continuous. In particular, $\mathbb{X}$ is the Gaussian process $\mathbb{X}^0$ conditioned on $\mathbb{X}^0|_{D}=g$. One would like to extend this result to say that $\mathbb{X}$ is the Gaussian process conditioned on $\mathbb{X}^0|_{T_0}=g$. Since conditional expectation is determined by the $\sigma-$fields generated by the random elements, it suffices to show that $$ \sigma \big ( \{X^0_t \, ; \, t \in D\} \big)=\sigma \big ( \{X^0_t \, ; \, t \in T_0\} \big). $$ This follows directly from the fact that for any sequence $\{t_{n_j}\}$ such that $t_{n_j} \to t$, $$ X_{t_{n_j}} \to X_t, \, \mbox{a.s.} $$ Furthermore, since measurability is preserved under limits of functions, for any $t \in T_0$, $X_t$ is $\sigma \big ( \{X^0_t \, ; \, t \in D\} \big)$-measurable.
Thus, $\mathbb{X}$ is a version of the original stochastic process conditioned on $\mathbb{X}^0|_{T_0}=g$. To more aptly discuss the significance of this result, denote $\mathcal{F}_0 = \sigma \big ( \{X^0_t \, ; \, t \in T_0\} \big)$. Then, defining $F_g = \{X^0_t=g(t);\,t \in T_0\} \in \mathcal{F}_0$, we may simply define $\mathbb{X}$ by $X_t = X^0_t |F_g$. Now, speaking in broader terms, suppose we define $X_t=E[X^0_t|\mathcal{F}_0]$. Since $\mathbb{X}^0$ is continuous, there exists a unique process up to a nullset $\mathcal{N}$ whose elements are defined above \citep[][pp. 34]{kall}. Furthermore, $\mathbb{X}$ is an $\mathcal{F}_0$-measurable process which can be thought of as a predictor of $\mathbb{X}^0$ rather than $g$, which allows us to discuss the notion of optimality in prediction. \begin{theorem} For any $\mathcal{F}_0$-measurable process $\hat{\mathbb{X}}$, it follows that for any $t \in T$, $$ E\big[(X_t^0-\hat{X}_t)^2\big] \ge E\big[(X_t^0 - X_t)^2\big]. $$ \end{theorem} The proof of this follows directly from the definition of conditional expectation. To illustrate the value of this observation, consider a simple Gaussian process model given by $X^0(x) = \mu(x) + W(x)$, where $W$ is a centered Gaussian process with covariance kernel $k$, and it is of interest to predict $X^0$. Then, given the information of $\mathbb{X}^0$ on any subset of its domain, the predictive process containing the prior information of $\mathbb{X}^0$ which minimizes the mean square prediction error is $\mathbb{X}$. \section{Inexact Solutions and Noise} Throughout the past two sections, it has been assumed that $g$ and $\mu$ restricted to $T_0$ are contained in $\mathcal{H}(T_0)$. Though necessary for our computations, this is actually a very limiting assumption, as in general $\mathcal{H}(T_0)$ is very small relative to $C(T_0)$ \citep{vanvart11}.
We will see in the next section that this does not play much of a role in a more practical setting provided that $\mathcal{H}(T_0)$ is dense in $C(T_0)$. Nevertheless, the model presented in the previous two sections is confined to a very basic Gaussian process model, and it is unclear based upon the previous sections how one may apply our method to more involved statistical models. This section is dedicated to showing how one might modify our approach when complexity is added into a Gaussian process model, illustrated through several different examples. It is commonplace for Gaussian process computer models to have more than one source of uncertainty. For example, one may model correlated data $y^0$ as $$ y^0(x) = \mu(x) + \delta^0(x) + \varepsilon^0(x), $$ where $\mu$ is a deterministic computer model, $\delta^0$ refers to zero mean model bias \citep{koh01}, and $\varepsilon^0$ refers to zero mean error associated with collecting data, with $\delta^0$ and $\varepsilon^0$ independent Gaussian processes. Suppose that the output $y^0$ is known explicitly on $T_0$ and is described by the function $g$. This would correspond to $\varepsilon^0=0$ and $\delta^0=g-\mu$ on $T_0$ with zero variance. If $\varepsilon^0$ represents uncorrelated error, then one may use this information to update $y^0$ so that $$ y(x) = \mu_0(x) + \delta(x) + \varepsilon(x), $$ where $\mu_0$ is defined as in Section 3, $\delta(x)$ has mean zero and covariance $k_0$, and $\varepsilon$ is a zero mean white noise Gaussian process whose variance on $T_0$ is zero. If $\varepsilon^0$ is correlated error, then one may perform the same modification on $\varepsilon^0$ as done for $\delta^0$ provided that the covariance function for $\varepsilon^0$ is continuous. As one can see, considering slight alterations in the overall structure of the model does not significantly alter our methodology if one assumes that the information on $T_0$ is known exactly.
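The updated model just described can be sketched in finite dimensions: conditioning pins $y$ to $g$ on $T_0$ (zero variance there), while off $T_0$ the variance is the reduced bias variance $k_0(s,s)$ plus the white-noise variance. The kernel, the grids, and the variance $\sigma^2$ below are illustrative assumptions, not quantities fixed by the text.

```python
import numpy as np

k = lambda s, t: np.exp(-(s[:, None] - t[None, :]) ** 2 / 0.1)
t0 = np.linspace(0.0, 1.0, 8)            # finite stand-in for T_0 (assumed)
s = np.concatenate([t0, [0.31, 0.77]])   # test points on and off T_0
g = np.sin(4.0 * t0)                     # exactly known output on T_0 (assumed)

# Updated model y = mu_0 + delta + epsilon, with prior mean mu = 0:
ks = k(s, t0)
A = np.linalg.solve(k(t0, t0), np.column_stack([g, ks.T]))
mu0 = ks @ A[:, 0]                            # mu_0(s)
k0_diag = np.diag(k(s, s) - ks @ A[:, 1:])    # bias variance k_0(s, s)

sigma2 = 0.01                            # white-noise variance off T_0 (assumed)
var_y = k0_diag + sigma2 * (~np.isin(s, t0))  # epsilon has zero variance on T_0
```

On the points of $T_0$ both $\delta$ and $\varepsilon$ have zero variance, so $y$ interpolates $g$ there; elsewhere both terms contribute.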
Now, we will consider the more complicated case where information on $T_0$ is known less explicitly. \subsection{Handling Information up to White Noise} Suppose the information on $T_0$ is known up to white noise $\varepsilon$ at each point, which is independent of $\mathbb{X}^0$. In other words, we want to compute the distribution of $\mathbb{X}$ where $\mathbb{X}|_{T_0}=\Tilde{g}=g+\varepsilon$. There are several reasons for adding the white noise term, with the first being that the information may not be known completely on $T_0$. Another common reason is that covariance matrices constructed from very smooth kernels (e.g., squared exponential) can be very ill-conditioned, and so one adds a ``nugget'' term to ensure stable computations \citep{ranjan11}. Using this formulation, one may derive results very similar to those of Section 3. In either case, the covariance function becomes $\Tilde{k}(s,t)=k(s,t)+\sigma^2 \mathbb{I}(s=t)$. Since this kernel is not continuous, the theory of RKHS cannot apply here in the sense that has been described in the previous sections. For $\mathbf{t}=(t_1,\hdots,t_n)$, the covariance matrix generated by $\Tilde{k}$ is of the form $k(\mathbf{t},\mathbf{t}) + \sigma^2 I_n$, where $I_n$ is the $n \times n$ identity matrix. One may naturally extend this to $L^2$ by defining the operator $\Tilde{K} = K + \sigma^2I$, where $K$ is the standard integral operator and $I$ is the identity operator, which are both defined on $L^2(T_0)$. However, here it is important to note that $\Tilde{K}$ maps to $L^2(T_0)$ rather than a RKHS. Now, recall the representation of the RKHS inner product as $$ \langle f,g \rangle_{\mathcal{H}(T_0)} = \langle K^{-1/2}f, K^{-1/2}g \rangle_{T_0}, $$ where $\langle \cdot, \cdot \rangle_{T_0}$ denotes the $L^2$ inner product on $T_0$.
Using the previous notation, the eigenvalues and eigenfunctions of $\Tilde{K}$ are $\{\lambda_n+ \sigma^2\}$ and $\{e_n\}$, and so one may represent $\Tilde{K}$ as $$ \Tilde{K}(\cdot)= \sum_{n=1}^\infty (\lambda_n+\sigma^2) \langle \,\cdot \,,e_n\rangle_{T_0} e_n. $$ Therefore, $\Tilde{K}^{-1/2}$ can be represented by $$ \Tilde{K}^{-1/2}(\cdot)= \sum_{n=1}^\infty \frac{1}{\sqrt{\lambda_n+\sigma^2}}\langle \,\cdot \,,e_n\rangle_{T_0} e_n. $$ Replacing $K^{-1/2}$ with $\Tilde{K}^{-1/2}$, we may define a new inner product for $f_1, f_2 \in L^2(T_0)$ by $$ \langle f_1,f_2 \rangle_{\Tilde{K}} = \langle \Tilde{K}^{-1/2}f_1,\Tilde{K}^{-1/2} f_2 \rangle_{T_0} = \sum_{n=1}^\infty \frac{\langle f_1,e_n\rangle_{T_0} \langle f_2,e_n \rangle_{T_0}}{\lambda_n + \sigma^2}. $$ Since any continuous function defined on $T_0$ is also an element of $L^2$, this definition is valid. Using this, it follows that $\mathbb{X}$ is Gaussian with posterior mean $\Tilde{\mu}_0$ and posterior covariance $\Tilde{k}_0$, which are defined in the same way as $\mu_0$ and $k_0$, but replacing $\langle \cdot, \cdot \rangle_{\mathcal{H}(T_0)}$ with $\langle \cdot, \cdot \rangle_{\Tilde{K}}$. Therefore, we define $\Tilde{\mu}_0$ and $\Tilde{k}_0$ by \begin{align*} \Tilde{\mu}_0(s_1) & = \mu(s_1) + \langle k_{s_1}, \Tilde{g}-\mu\rangle_{\Tilde{K}}, \\ \Tilde{k}_0(s_1,s_2) & = k(s_1,s_2) - \langle k_{s_1}, k_{s_2} \rangle_{\Tilde{K}}. \end{align*} Note here that $\Tilde{g}$ is a stochastic process, so in fact this definition is not only conditional on $\mathbb{X}^0|_{T_0}$, but on $\varepsilon$ as well. \subsection{Handling Stochastic Information} Lastly, we consider the more general case where the information on $T_0$ is known up to a zero mean Gaussian process $\delta$ with covariance kernel $q$, which is again independent of $\mathbb{X}^0$. One may write this as finding the distribution of $\mathbb{X}$ where $\mathbb{X}|_{T_0}=g_\delta=g+\delta$.
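Before treating this general case, note that the modified inner product $\langle \cdot, \cdot \rangle_{\Tilde{K}}$ of the previous subsection has a transparent finite-dimensional analogue: replacing $\lambda_n$ by $\lambda_n+\sigma^2$ in the series turns the RKHS inner product into the familiar nugget-regularized linear solve. A minimal sketch, with an assumed kernel matrix and nugget value:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.1)   # assumed kernel matrix on T_0
sigma2 = 0.05                                        # assumed nugget variance

lam, E = np.linalg.eigh(K)                           # spectral pairs of K
f, g = rng.standard_normal(30), rng.standard_normal(30)

# Series form of <f, g>_{K tilde}: divide by lambda_n + sigma^2 ...
series = np.sum((E.T @ f) * (E.T @ g) / (lam + sigma2))
# ... which, in finite dimensions, is exactly the regularized solve.
direct = f @ np.linalg.solve(K + sigma2 * np.eye(30), g)
```

Beyond interpretability, the added $\sigma^2$ bounds the denominators away from zero, which is precisely the conditioning benefit of the nugget.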
Then, again the covariance matrix in the finite case is given by $k(\mathbf{t},\mathbf{t}) + q(\mathbf{t},\mathbf{t})$, and the RKHS associated with $k+q$ is the sum $\mathcal{H}_{k+q}(T_0)=\mathcal{H}(T_0)+\mathcal{H}_q(T_0)$, where $\mathcal{H}_q(T_0)$ is the RKHS associated with $q$. In general $\mathcal{H}(T_0) \cap \mathcal{H}_q(T_0) \not = \{0\}$, so the sum is not direct, which makes determining the inner product on $\mathcal{H}_{k+q}(T_0)$ as the sum of its constituents nontrivial. However, it is the case that any element of $\mathcal{H}(T_0)$ or $\mathcal{H}_q(T_0)$ is also an element of $\mathcal{H}_{k+q}(T_0)$, and therefore the mean and covariance are again defined as in (5-6), but replacing $\langle \cdot, \cdot \rangle_{\mathcal{H}(T_0)}$ with $\langle \cdot, \cdot \rangle_{\mathcal{H}_{k+q}(T_0)}$. Therefore, we define $\mu_0$ and $k_0$ by \begin{align*} \mu_0(s_1) & = \mu(s_1) + \langle k_{s_1}, g_\delta-\mu\rangle_{\mathcal{H}_{k+q}(T_0)}, \\ k_0(s_1,s_2) & = k(s_1,s_2) - \langle k_{s_1}, k_{s_2} \rangle_{\mathcal{H}_{k+q}(T_0)}. \end{align*} As mentioned in Section 5.1, this definition is also conditional on $\delta$ as well as $\mathbb{X}^0|_{T_0}$. \section{Numerical Implementation}\label{sec:app} The previous sections have shown that one may construct a Gaussian process which has zero variation on an arbitrarily selected subset $T_0$ of the domain, and define its mean and covariance functions in terms of an RKHS inner product. However, in practice, the RKHS inner product in the general case cannot be computed exactly. Here we discuss techniques for computing the inner products, followed by examples. \subsection{Computation of RKHS Inner Product} Recall that the RKHS norm is given in terms of the spectral decomposition $\{(\lambda_n, e_n) \}$ of the integral operator $K_{T_0}$, which in general must be computed numerically.
Then, the inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}(T_0)}$ is approximated via the bilinear form $a_N ( \cdot, \cdot )$, given by $$ a_N( f, g ) = \sum_{n=1}^N \frac{\langle f, e_n \rangle_{T_0} \langle g, e_n \rangle_{T_0} }{\lambda_n}. $$ Naturally, the form of $a_N (\cdot, \cdot )$ does not permit a convergence independent of the selection of arbitrary $f, g \in \mathcal{H}(T_0)$. However, a uniform-type convergence can be established for the family $\mathcal{K}:=\{k_t: t \in T\}$. \begin{proposition} The collection of bilinear forms $\{a_N\}$ converge uniformly to $\langle \cdot, \cdot \rangle_{\mathcal{H}(T_0)}$ on $\mathcal{K} \times \mathcal{K}$. \begin{proof} Define $F_N, F: T \times T \to \mathbb{R}$ by $F_N(s,t) = a_N(k_s, k_t)$ and $F(s,t) = \langle k_s, k_t \rangle_{\mathcal{H}(T_0)}$. It is clear that $F_N \to F$ pointwise, so it suffices to show that $\{F_N\}$ is equicontinuous. Defining $Q_N$ to be the projection from $\mathcal{H}(T_0)$ to $\mbox{Span}( \{e_n\}_{n=1}^N)$, it is clear that $$ F_N(s,t) = \langle Q_N k_s, Q_N k_t \rangle_{\mathcal{H}(T_0)}, $$ and so equicontinuity follows directly from the fact that $F$ is H\"older continuous and $\{Q_N\}$ are uniformly bounded by the identity operator. \end{proof} \end{proposition} Thus, given a function $\mu \in \mathcal{H}(T_0)$ and a tolerance $\epsilon$, one may select $N$ so that $$ |a_N(f,g) - \langle f, g \rangle_{\mathcal{H}(T_0)}| < \epsilon, $$ for $f, g \in \mathcal{K} \cup \{\mu\}$, which suggests that using this methodology in an application setting is indeed stable. Naturally, $\{(\lambda_n, e_n)\}_{n=1}^N$ need to be computed; this is done by solving the eigenvalue problem $$ Kf = \lambda f.
$$ \cite{oya2009} discuss various methods of computing RKHS inner products using this formulation, and suggest using a Rayleigh-Ritz (RR) approach to compute the approximate spectral decomposition of $K$ and inserting the approximate values $\{(\Tilde{\lambda}_n, \Tilde{e}_n)\}$ to compute the inner product. To summarize this approach, suppose that $A \in \mathbb{R}^{n \times n}$ is positive semidefinite, and $V \in \mathbb{R}^{p \times n}$ has orthonormal row vectors $\{v_1, \hdots, v_p \}$, where $p < n$. Then, the matrix $$ A_V = V A V^* \in \mathbb{R}^{p \times p} $$ is a positive semidefinite matrix, which can be written as $A_V = U D_\alpha U^*$ for an orthogonal matrix $U$ and a diagonal matrix of eigenvalues $D_\alpha$. This matrix has the property that if an eigenvector $e_i$ of $A$ is in the row span of $V$, then there is a corresponding eigenvector $u$ of $A_V$ such that $$ e_i = V^*u $$ with $u^*A_Vu = e_i^* A e_i$. This algorithm also applies in an arbitrary Hilbert space, and is the basis for many numerical methods in applied mathematics. Naturally, the effectiveness depends upon the function basis used. In the case of stationary kernels, one may actually define the RKHS inner product in terms of Fourier transforms. Let $\Tilde{k}(s-t) = k(s,t)$, and define $\mathcal{F}$ to be the Fourier operator. Then, for $f, g \in \mathcal{H}(T_0)$, \cite{berlinet04} define the RKHS inner product by $$ \langle f, g \rangle_{\mathcal{H}(T_0)} = \frac{1}{(2 \pi)^{d_0/2}}\int_{\mathbb{R}^{d_0}} \frac{\mathcal{F}[f] (\omega) \overline{\mathcal{F}[g] (\omega)}}{\mathcal{F}[\Tilde{k}](\omega)} d \omega. $$ Direct computations of $\langle f, g \rangle_{\mathcal{H}(T_0)}$ using this approach can potentially be expensive, but discrete Fourier approximations may prove useful in this scenario.
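A minimal Nystr\"om-style sketch of the truncated form $a_N$, assuming an equispaced quadrature grid and a squared-exponential kernel (both illustrative choices): the spectral pairs are approximated by the eigenpairs of the scaled kernel matrix, and the reproducing property $\langle k_s, k_t \rangle_{\mathcal{H}(T_0)} = k(s,t)$ provides a built-in accuracy check.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 40)            # quadrature grid on T_0 (assumed)
dx = x[1] - x[0]
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.005)  # assumed SE kernel

# Nystrom approximation of the spectral pairs (lambda_n, e_n) of the
# integral operator: eigenpairs of dx*K, eigenfunctions renormalized in L^2.
lam, E = np.linalg.eigh(dx * K)
lam, E = lam[::-1], E[:, ::-1] / np.sqrt(dx)

def a_N(f, g, N):
    """Truncated bilinear form a_N(f, g) = sum_{n<=N} <f,e_n><g,e_n>/lambda_n."""
    cf = dx * (E[:, :N].T @ f)           # quadrature for the L^2 pairings
    cg = dx * (E[:, :N].T @ g)
    return float(np.sum(cf * cg / lam[:N]))

# Reproducing-property check: for a grid point t = x[25], the values
# a_N(k_t, k_t) increase toward <k_t, k_t>_{H(T_0)} = k(t, t).
vals = [a_N(K[:, 25], K[:, 25], N) for N in (5, 15, 30)]
```

For elements of the family $\mathcal{K}$ the coefficients $\langle k_t, e_n \rangle_{T_0} = \lambda_n e_n(t)$ decay with $\lambda_n$, which is why the truncated form is numerically benign here even when the tail eigenvalues are tiny.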
\subsection{Numerically Verifying the Reproducing Property} Although the RKHS inner product cannot be explicitly calculated for arbitrary functions, the accuracy of any approximation method can be verified by utilizing the reproducing property. For example, it is always the case that for $f \in \mathcal{H}(T)$, $$ \langle f, k_t \rangle_{\mathcal{H}(T)} = f(t). $$ As shown in Section 3, one may approximate the inner product by computing the mean function of a Gaussian process conditioned on its value at several points in the domain. So, it is also of interest to know how more spectral approaches such as those given in Section 6.1 compare with the interpolation method of reproducing $f$. As previously mentioned, it is unlikely that an arbitrary continuous function $f$ is an element of $\mathcal{H}(T)$. Thus, it is worth considering the effects of reproducing functions which are not elements of $\mathcal{H}(T)$ as well as those which are. Assume that $T= [-1,1]$, and $k(x, x') = \exp \{-|x-x'|^2\}$. Define $f_1, f_2 \in C[-1, 1]$ to interpolate the points $\{(x_j, y_j)\}_{j=1}^J$ (which are assumed to be unknown), where $f_1$ does so using the kernel as a basis, and $f_2$ does so using a polynomial basis. Thus, $f_1,f_2$ should have a similar appearance, but $f_1 \in \mathcal{H}(T)$, whereas $f_2 \not\in \mathcal{H}(T)$. $J$ is selected to be $6$, $\{x_j\}$ are selected to be equidistant on $[-1,1]$, and $\{y_j\}$ are randomly selected in $[-1,1]$. Figure \ref{fig: f1f2} indicates, as one may expect, that the difference between $f_1$ and $f_2$ in this type of setup is negligible. However, Figure \ref{fig:repr} indicates that the RR method described in Section 6.1 significantly outperforms the standard interpolation method for $f_2$, suggesting that this method is perhaps better for reproducing functions which are not necessarily in the RKHS. \begin{figure}[h!] \centering \includegraphics[scale=.5]{f1f2plot.png} \caption{Plot showing $f_1$, $f_2$ constructed as described above.
As one can see, the difference between the two functions is negligible.} \label{fig: f1f2} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=.4]{reproducerror.png} \caption{Plot showing the convergence rates of reproducing $f_1$ and $f_2$ via the RR method and the typical interpolation method seen in Gaussian process regression.} \label{fig:repr} \end{figure} \subsection{Numerical Examples} \subsubsection{Boundary Conditions}\label{lab:boundary} As a basic application, let $T=[-1,1]^2$ and define $f$ by $$ f(t_1,t_2)=\frac{1}{2} e^{.2 (t_1 - .5)^2}\sin \Big(\frac{\pi t_1}{2} \Big) + e^{-t_2^2}\cos \Big(\frac{\pi t_2}{2} \Big). $$ Assume that the value of $f$ is known at $M$ points of the domain, as well as on $$ T_0 = \partial T. $$ Since $T_0$ has dimension one, one may define a parameterization $\ell:[-1, 1)\to T_0$ so that computations may be performed in one dimension. One practical issue, however, is that the function $f$ is continuously differentiable on $T$, whereas the function $f \circ \ell$ is not differentiable on $\{-1, -1/2, 0, 1/2\}$. To assess the accuracy of the method for different numbers of basis functions, we use test data consisting of points on the set $\mathcal{T} = \{(.9t, .9s): (t,s) \in \partial T\}$ and measure discrepancy via the loss function $$ L(f,g) = \max_{t \in \mathcal{T}} |f(t)-g(t)|. $$ We select $M=10$, where the points on the interior are chosen via a Latin Hypercube sampling scheme. Figure \ref{fig:boundcomp} shows the error as the number of basis functions increases. Observe that the log error flattens out, unlike what is observed in Figure \ref{fig:repr} from reproducing the function. This can be thought of as a phenomenon where essentially all of the useful information from the boundary has been extracted, leading to diminishing returns on predictive power with additional basis functions. \begin{figure}[h!]
\centering \includegraphics[scale=.4]{boundarycomp.png} \caption{Plot of approximation error versus number of basis functions given boundary information, as described in Section \ref{lab:boundary}.}\label{fig:boundcomp} \end{figure} \subsubsection{Diagonal Conditions}\label{sec:diagonal} As mentioned previously, $T_0$ is not limited to the boundary, and can be any subset of $T$. In this example, assume that $T=[-1, 1]^2$, and let $T_0$ be the diagonal of $T$, i.e. $T_0= \{(t, t): t \in [-1,1]\}$. Define $f$ by $$ f(t_1,t_2) = t_2\sqrt{1+t_1}\cos(\pi t_2)\sin\Big(\frac{\pi(t_1-t_2)}{2}+1\Big) e^{.5 (t_1 + t_2) ^ 2}. $$ Selecting $M=10$ as before, and choosing test points on the set $\mathcal{T}=T \cap \big(\{(t, t\pm .1):t \in [-1, 1]\} \big)$, we again compute the maximum predictive error as a function of the number of basis functions for each method. Figure \ref{fig:diagcomp} suggests that all of the information from the diagonal is extracted very quickly using the RR method, whereas the convergence is much slower using the standard interpolation method to approximate the RKHS norm. This is likely due to the fact that a parameterization of the diagonal is differentiable whereas a parameterization of the boundary is not. \begin{figure}[h] \centering \includegraphics[scale=.5]{diagcomp.png} \caption{Plot of approximation error versus number of basis functions given information on the diagonal, as described in Section \ref{sec:diagonal}.}\label{fig:diagcomp} \end{figure} The results from these two examples indicate that the approach we have adopted for computing RKHS inner products is effective for incorporating information from more general subsets of the domain into a predictive Gaussian process model.
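The pair of interpolants $f_1$ and $f_2$ used in the verification of Section 6.2 can be reconstructed in a few lines; the random seed and the sampled values $\{y_j\}$ below are assumptions, chosen only to reproduce the qualitative setup.

```python
import numpy as np

rng = np.random.default_rng(3)
xj = np.linspace(-1.0, 1.0, 6)                 # J = 6 equidistant nodes
yj = rng.uniform(-1.0, 1.0, 6)                 # random values in [-1, 1]

K = np.exp(-np.abs(xj[:, None] - xj[None, :]) ** 2)  # k(x,x') = exp{-|x-x'|^2}
w = np.linalg.solve(K, yj)

def f1(x):
    """Kernel-basis interpolant; lies in H(T) by construction."""
    return np.exp(-np.abs(x[:, None] - xj[None, :]) ** 2) @ w

c = np.polyfit(xj, yj, 5)

def f2(x):
    """Degree-5 polynomial interpolant; not an element of H(T)."""
    return np.polyval(c, x)

x = np.linspace(-1.0, 1.0, 201)
gap = np.max(np.abs(f1(x) - f2(x)))            # visually small, per Figure 1
```

Both functions pass through the same six data points, which is what makes their plots nearly indistinguishable even though only one of them lies in the RKHS.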
Additionally, the method appears to be even more valuable in the case where the available information does not exist in the RKHS generated by the covariance kernel, which is certainly the case for the parameterized boundary in the first example, and likely so for the more complicated function in the second. \section{Conclusions and Future Directions} The goal of this paper was to construct Gaussian processes which are capable of using information from arbitrary connected subsets of the domain in a way that requires minimal assumptions. Using the theory of Reproducing Kernel Hilbert Spaces, we were able to explicitly define the conditional mean and covariance of Gaussian processes via orthogonal projections in an RKHS, prove that such processes exist, and show that the processes are optimal in the sense of minimizing pointwise mean square error given the initial assumptions made. In addition, we provided several numerical examples to exhibit the practical nature of our construction, which included evidence that one need not assume the functional information available is an element of a RKHS. Future work in this area includes extending the theory to more naturally handle the case where functional information is available on disjoint subsets of the domain. Another interesting avenue to extend this research is to provide a similar framework for including more general linear operator constraints, e.g. differential operator constraints. \newpage
\section{Introduction} \label{sec:intro} Long-lived massive particles (LLP) are predicted in many extensions of the Standard Model (SM) that address the hierarchy problem~\cite{Giudice:1998bp, Arvanitaki:2012ps}, naturalness~\cite{Chacko:2005pe, Burdman:2006tz, Craig:2015pha, Curtin:2015fna, Chacko:2015fbc}, the baryon-antibaryon asymmetry in the universe~\cite{Cui:2014twa}, and dark matter (DM)~\cite{Co:2015pka, Godbole:2015gma, Khoze:2017ixx, Garny:2017rxs} including feebly interacting particles~\cite{Co:2015pka, Evans:2016zau, Banerjee:2016uyt} and asymmetric DM~\cite{Bai:2013xga}. They can also impact the phenomenology of the Higgs boson~\cite{Maiezza:2015lza,Dev:2017dui}. The LLP's long lifetime can be due to: {\it i)} a much reduced phase space resulting from a small mass splitting between the LLP and one of its decay products (as found in AMSB models~\cite{Randall:1998uk}) or {\it ii)} a suppressed coupling that controls the dominant decay; examples include SUSY models with a gravitino DM~\cite{Dimopoulos:1996vz, Asai:2011wy, Jung:2015boa, Allanach:2016pam}, $R$-parity violating (RPV) scenarios~\cite{Graham:2012th,Evans:2016zau}, and models containing a hidden sector that is weakly coupled to the SM via some mediator~\cite{Strassler:2006ri, Han:2007ae}. The suppressed (effective) coupling can also result from the fact that the main decay proceeds through a mediator whose mass is very high compared to the LLP ({\it e.g.} $R$-hadrons as bound states of gluino ($\tilde{g}$) in split SUSY with very heavy squarks~\cite{Kraan:2004tz, Mackeprang:2006gx}). The characteristic long lifetime of these massive particles, ranging from 100 picoseconds to a few nanoseconds, translates, at the experimental level, into a distinctive feature that has been exploited in many analyses by the ATLAS, CMS and LHCb Collaborations at the Large Hadron Collider (LHC): they decay at some distance (tens to hundreds of centimeters) from the interaction point.
When an LLP is produced at the primary vertex and decays at a certain distance inside the detector, \textit{i.e.}, at the secondary vertex, a typical search method is to identify this displaced vertex~\cite{Sirunyan:2017ezt}, as has been pursued for neutral LLPs~\cite{CMS:2014wda, Aaij:2014nma, CMS:2014hka, Khachatryan:2014mea, Aad:2015rba, Aaij:2016xmb}. More specific or tailor-made analyses exploit the location of the secondary vertex within a particular layer of the detector (tracker, electromagnetic calorimeter or ECAL, hadronic calorimeter or HCAL, or muon chamber), the nature of the LLP (charged or neutral) and the signatures of the decay products. For instance, some charged LLPs are identified by leaving only some visible tracks in the inner layer of the tracker before seeming to disappear in the outer layers, as the decay products go undetected because they are weakly interacting neutral particles and/or too soft. Disappearing tracks~\cite{CMS:2014gxa} and tracks with kinks belong to this category~\cite{Barate:1999gm, Asai:2011wy, Jung:2015boa, Curtin:2018mvb}. Strategies to look for charged particles that are long-lived enough to escape the entire detector~\cite{CMS-PAS-EXO-16-036, Allanach:2002nj} have also been designed. As in the case of some neutral LLPs, the inner tracker may not be of much use and one may rely on the muon spectrometer~\cite{Aad:2014yea}. Especially in the case of fast LLPs giving rise to collimated final states, leptonic decay products that materialise in the HCAL or the outer edges of the ECAL may be reconstructed as jets (lepton jets) with a peculiar energy deposition~\cite{Aad:2012kw, Aad:2014yea, Aad:2015asa, ATLAS:2016jza}. For (neutral) LLPs whose decay products consist of photons, exploiting the capabilities of the ECAL within a displaced vertex reveals the LLP through photons that are non-pointing (to the primary vertex) or delayed (compared to prompt photons)~\cite{Aad:2014gfa}.
Other scenarios with many final state decay products can rely on a few overlapping displaced vertices (emerging jets~\cite{Schwaller:2015gea}). A common underlying feature of most of these LLP searches is that they are based on {\em inside-out} analyses, looking at the ordered sequence of events going successively from the inner layers (and sublayers) of the detectors to the outer layers, that is from the interaction point (or the beam) to layers in the tracker, to the ECAL, the HCAL and the muon chamber. This is the normal sequence even in the case of `standard' beyond the standard model (BSM) particle searches. This seems to be the logical sequence, as when a certain heavy particle is produced at the interaction point, it moves forward in time and outward from the beam pipe through the successive inner layers of the detector. What we would like to underline in this paper is that there are instances where an {\em outside-in} approach (at least between two regions of the above ordered sequence), starting from the location of the secondary vertex~\cite{Sirunyan:2017ezt}, is possible and that it should be fully exploited since the signatures are striking with little standard model background. What we will take advantage of is the fact that while the LLP is travelling {\em inside-out}, away from the beam, a proportion of its decay products, those being emitted in the opposite (backward) direction with respect to the direction of the LLP, seem to move inward and hit {\em outside-in} some of the layers or/and sublayers of the detectors. In the latter case, and in the particular case of jets as decay products, it can also happen that these jets, emanating from a displaced vertex located in the HCAL, are deviated compared to prompt jets emerging from the production vertex. As a result, they hit multiple towers of the HCAL, contrary to the prompt jets that hit only one tower.
Such a manifestation is akin to the case of the non-pointing photons listed in the previous paragraph. These scenarios are in sharp contrast to the production of particles that experience a large boost and therefore carry all their decay products in their original direction. Since the proportion of daughter particles from the decay of massive LLPs that may follow an {\em outside-in} trajectory is of key importance, section~\ref{sec:sec2} is dedicated to a detailed study of generic scenarios according to production modes, decay signatures and masses, as well as the possible influence of spin. As expected, the heavier and hence slower the LLP, the larger the proportion of backward daughters. At the LHC, the range of masses that can be exploited is also quite wide. Objects such as stopped hadrons, which represent particles that lose all their energy and decay after coming to rest within the detector~\cite{Khachatryan:2015jha, Aad:2013gva, Abazov:2007ht}, are an extreme case of slow moving objects and therefore benefit from the general observations we make in this paper. Although solidly quantifying the benefits of our approach would require implementing the details of the detector geometry and the triggers, we nonetheless conduct a simple simulation in section~\ref{sec:sec3}. In section~\ref{sec:sec4}, we discuss the present and future experimental possibilities to deal with such new signatures, in particular the present limitations of the detectors and what future implementations can be added. A summary of the salient points of our paper, together with some recommendations, is left for the conclusion. \section{Angular characteristics of the decay products for pair produced heavy particles} \label{sec:sec2} Our analysis starts by looking at the angular features of the final products of the decaying heavy particles, $X$, that are pair produced at the LHC.
In particular, we have in mind the alignment of the daughter particles with respect to the original direction of the parent particle, $X$. Our choice of signatures is based on typical examples of LLP scenarios. However, we will first perform a model-independent investigation in order to find out whether the specific underlying model-dependent dynamics play an important role in the salient features that we want to emphasise. The daughters can be \textit{massless} quarks, $q$, or heavy invisible particles, DM, which may satisfy the properties of dark matter. We consider the following four distinct possibilities. \begin{itemize} \item[$\bullet$] $X \to q \; q$ \\ The decay into a pair of massless quarks is motivated by, \textit{e.g.}, $R$-Parity Violating (RPV) decays of a squark in supersymmetry, $\tilde{q} \to q \; q $, and is connected with $R$-hadrons. Another example is a slepton $\tilde{l}$ decaying into a pair of quarks through RPV, $\tilde{l} \to qq$. We will use the latter for our simulation of this class of scenarios, which we henceforth refer to as {\it 2BM0}, corresponding to a two-body \textit{massless} final state. One should keep in mind that the production is of the Drell-Yan kind, being initiated by quarks. Having considered scalar mother particles in this example, there is no spin correlation to worry about. \item[$\bullet$] $X \to q \; q \; q$ \\ This case is also within the purview of RPV. A prototype which we will use in our simulation is the three-body decay of a neutralino into quarks, $\tilde{\chi}^0_{1} \to q \; q \; q$. This is thus defined as our {\it 3BM0} class. Once again, the production is quark initiated but the effects of spin correlation may not be negligible. \item[$\bullet$] $X \to q \; \textrm{DM}$\\ Here we consider decays of the LLP into a heavy neutral invisible particle, $\textrm{DM}$, which may be a dark matter candidate, alongside a light quark.
This scenario arises in $R$-parity conserving SUSY processes, \textit{viz.} $\tilde{q} \to q \tilde{\chi}_1^0$, or the radiatively induced $\tilde{g} \to g \tilde{\chi}_1^0$, where $\tilde{\chi}_1^0$ is the lightest neutralino, which can potentially be a DM candidate. Our prototype here is based on the lightest sbottom decay into a bottom quark and a neutralino, $\tilde{b}_1 \to b \tilde{\chi}_1^0$. This class will be termed {\it 2BM}. In our prototype of this class of processes, the production is dominantly gluon induced. \item[$\bullet$] $X \to q \; q \; \textrm{DM}$ \\ The final class of processes that we will consider is the {\it 3BM}. This may be represented in $R$-parity conserving SUSY scenarios by the three-body decay $\tilde{g} \to q \bar{q} \tilde{\chi}_1^0$. Our simulation here will be based on this decay. As such, the production process here is also dominantly gluon induced. We will use this example to study the effects of spin correlation, later in this section. \end{itemize} \begin{figure}[tbhp] \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_1.pdf} } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_2.pdf}\\ } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_3.pdf} } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_5.pdf}\\ } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_4.pdf} } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_6.pdf}\\ } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_7.pdf} } \subfigure[] { \includegraphics[height=4cm,width=8.0cm]{theta_distribution_8.pdf}\\ } \caption{Angle $\theta$ between the direction of $X$ and the massless daughter (one of the quarks, $q$) or the massive daughter (DM) for the four scenarios (2BM0, 3BM0, 2BM and 3BM) for different values of $M_X$ and $M_{\text DM}$, as shown in the figure.} \label{fig:theta_2_TeV} \end{figure} 
The above examples are illustrative and have been used to simulate our Monte Carlo samples. However, it is important to remember that the results we discuss in the following sections are mostly model-independent. All these possibilities give rise to final states with multiple jets and, in the case of {\it 2BM} and {\it 3BM}, these jets are accompanied by missing transverse energy ($\slashed{E}_T$). To assess the robustness of our findings, the examples we have taken cover both $qq$ initiated ({\it 2BM0} and {\it 3BM0}) and $gg$ initiated processes. To see the effects of the full spin correlation, we consider the case where $X$ is a fermion in the {\it 3BM0} and {\it 3BM} scenarios. Moreover, in order to see whether any bias is introduced by a particular choice of prototype for the dynamics of the model, we compare the results of a full simulation (in the case of {\it 3BM}, with and without including spin correlations) with those assuming no dynamics in the production, that is, by considering a unit value for the matrix element (${\cal M}$). This helps us find out whether or not the results are mostly kinematics driven. This is also the reason we consider several distinct values for the mass of $X$, $M_X$, and, for the same $M_X$, two values for the mass of $\textrm{DM}$, $M_{\textrm{DM}}$. In this first investigation, all simulations are performed at the parton level for the 14 TeV LHC using \texttt{PYTHIA 6}~\cite{Sjostrand:2006za}. The important feature that we want to portray is the angular distribution of the decay products, in particular the {\em observable} massless quark, with respect to the direction of the long-lived mother particle, $X$. We commence by studying the specific processes that we have introduced earlier without considering the spin information of $X$ in the decay. We then investigate the model dependence and the effects of the spin.
The latter will be shown to be negligible. Figure~\ref{fig:theta_2_TeV} shows the angle the \textit{massless} (and massive DM-like) decay particles make with the direction of motion of $X$~\footnote{Because the samples have been generated for SUSY processes using \texttt{PYTHIA 6}, there is no spin information and hence, for the fully \textit{massless} scenarios, the angular distributions of all the daughter particles are identical.}. \\ As expected, for light mother particles ($M_X =200$ GeV), the decay products are preferentially emitted forward, along the direction of motion of $X$, slightly less so if there is a heavy DM particle among the decay products, see Figures~\ref{fig:theta_2_TeV}(a-f). As the mass of the parent particle increases, the fraction of massless daughters that are emitted opposite to the direction of motion of the parent particle, {\it i.e.} backwards, grows. However, one can clearly observe that, even for lighter masses of the parent particle, the fraction of massless daughter particles with $\theta (q,X) > 90^{\circ}$ is not negligible. For the largest mass of the decaying particle considered in this work, $M_X=2$ TeV, the distribution in the angle of the massless quarks is practically independent of the presence of a massive (DM) particle among the decay products. To summarise at this point, the message is that, independently of the channels and the specific dynamics, there is a non-negligible fraction of {\em backward} massless particles. This fraction increases with the mass of the mother particle, since a larger mass is associated with a smaller $\beta$. Although for lighter masses of the mother particle the backward fraction is small, in terms of total events this is compensated by the larger $pp \to XX$ cross-section.
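These backward fractions follow from elementary kinematics: a massless daughter emitted at angle $\theta^*$ in the rest frame of $X$ appears in the laboratory at $\cos\theta_{\textrm{lab}} = (\cos\theta^* + \beta)/(1+\beta\cos\theta^*)$, so that, for an isotropic two-body decay, at least one of the two back-to-back daughters satisfies $\theta > 90^{\circ}$ with probability $1-\beta$; this being linear in $\beta$, it averages to $1-\langle\beta\rangle$ over the $\beta$ spectrum. The following Monte Carlo sketch (ours, purely illustrative and not the analysis code; an isotropic rest-frame decay is assumed) checks the relation:

```python
import random

def backward_fraction(beta, n=200_000, seed=7):
    """Fraction of isotropic two-body decays of a mother moving with
    velocity beta in which at least one massless daughter comes out
    backward (theta_lab > 90 deg) with respect to the mother's direction.
    Analytically this probability is 1 - beta."""
    rng = random.Random(seed)
    backward = 0
    for _ in range(n):
        c = rng.uniform(-1.0, 1.0)           # cos(theta*) of daughter 1, isotropic
        c1 = (c + beta) / (1.0 + beta * c)   # lab-frame cosine of daughter 1
        c2 = (-c + beta) / (1.0 - beta * c)  # daughter 2 is back-to-back in the rest frame
        if c1 < 0.0 or c2 < 0.0:
            backward += 1
    return backward / n
```

For $\beta = 0.46$, the mean velocity found for $M_X=2$ TeV in the {\it 2BM0} case, this yields a backward fraction close to $0.54$, in agreement with the $\theta > 90^{\circ}$ column of Table~\ref{tab:theta_unit}.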
\begin{table}[!t]
\scriptsize
\centering
\begin{tabular}{+l^l^l^l^l^l^l^l}
\hline
Case & $M_{X}$ & $M_\textrm{DM}$ & $\beta$ (mean, RMS) & $\theta > 22.5^{\circ}$ & $\theta > 45^{\circ}$ & $\theta > 90^{\circ}$ & $\theta > 135^{\circ}$ \\
 & [TeV] & [TeV] & & & & & \\
\hline \hline
2BM0 & 0.2 & - & 0.75, 0.23 & 0.85 & 0.62 & 0.25 & 0.05 \\
\rowstyle{\itshape}%
 & & & 0.87, 0.13 & 0.78 & 0.46 & 0.13 & 0.03 \\
 & 0.5 & - & 0.66, 0.24 & 0.96 & 0.78 & 0.33 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.81, 0.14 & 0.94 & 0.65 & 0.19 & 0.04 \\
 & 1 & - & 0.58, 0.23 & 0.99 & 0.90 & 0.42 & 0.09 \\
\rowstyle{\itshape}%
 & & & 0.72, 0.15 & 0.99 & 0.83 & 0.28 & 0.06 \\
 & 2 & - & 0.46, 0.20 & 1.00 & 0.98 & 0.54 & 0.13 \\
\rowstyle{\itshape}%
 & & & 0.60, 0.14 & 1.00 & 0.97 & 0.40 & 0.08 \\
\hline
2BM & 0.2 & 0.05 & 0.67, 0.24 & 0.73 & 0.47 & 0.16 & 0.04 \\
\rowstyle{\itshape}%
 & & & 0.74, 0.21 & 0.67 & 0.40 & 0.13 & 0.03 \\
 & 0.2 & 0.15 & 0.67, 0.24 & 0.73 & 0.46 & 0.16 & 0.04 \\
\rowstyle{\itshape}%
 & & & 0.74, 0.21 & 0.67 & 0.40 & 0.13 & 0.03 \\
 & 0.5 & 0.125 & 0.60, 0.23 & 0.80 & 0.54 & 0.20 & 0.05 \\
\rowstyle{\itshape}%
 & & & 0.66, 0.21 & 0.78 & 0.50 & 0.17 & 0.04 \\
 & 0.5 & 0.375 & 0.60, 0.23 & 0.80 & 0.54 & 0.20 & 0.04 \\
\rowstyle{\itshape}%
 & & & 0.66, 0.21 & 0.77 & 0.50 & 0.17 & 0.04 \\
 & 1 & 0.25 & 0.52, 0.22 & 0.85 & 0.61 & 0.24 & 0.05 \\
\rowstyle{\itshape}%
 & & & 0.57, 0.19 & 0.84 & 0.58 & 0.21 & 0.05 \\
 & 1 & 0.75 & 0.53, 0.22 & 0.85 & 0.61 & 0.24 & 0.05 \\
\rowstyle{\itshape}%
 & & & 0.57, 0.19 & 0.84 & 0.58 & 0.21 & 0.05 \\
 & 2 & 0.50 & 0.42, 0.19 & 0.90 & 0.68 & 0.29 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.46, 0.17 & 0.89 & 0.66 & 0.27 & 0.06 \\
 & 2 & 1.50 & 0.42, 0.19 & 0.90 & 0.68 & 0.29 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.46, 0.17 & 0.89 & 0.66 & 0.27 & 0.06 \\
\hline
3BM0 & 0.2 & - & 0.76, 0.23 & 0.89 & 0.69 & 0.32 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.94, 0.09 & 0.65 & 0.34 & 0.09 & 0.02 \\
 & 0.5 & - & 0.67, 0.23 & 0.98 & 0.84 & 0.43 & 0.10 \\
\rowstyle{\itshape}%
 & & & 0.86, 0.13 & 0.92 & 0.61 & 0.20 & 0.04 \\
 & 1 & - & 0.58, 0.23 & 0.99 & 0.94 & 0.54 & 0.14 \\
\rowstyle{\itshape}%
 & & & 0.76, 0.15 & 0.99 & 0.84 & 0.33 & 0.07 \\
 & 2 & - & 0.46, 0.20 & 1.00 & 0.99 & 0.68 & 0.18 \\
\rowstyle{\itshape}%
 & & & 0.62, 0.15 & 1.00 & 0.98 & 0.52 & 0.12 \\
\hline
3BM & 0.2 & 0.05 & 0.67, 0.24 & 0.91 & 0.70 & 0.31 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.76, 0.19 & 0.86 & 0.60 & 0.22 & 0.05 \\
 & 0.2 & 0.15 & 0.67, 0.24 & 0.89 & 0.67 & 0.30 & 0.07 \\
\rowstyle{\itshape}%
 & & & 0.77, 0.19 & 0.84 & 0.58 & 0.21 & 0.05 \\
 & 0.5 & 0.125 & 0.60, 0.23 & 0.96 & 0.79 & 0.37 & 0.09 \\
\rowstyle{\itshape}%
 & & & 0.69, 0.19 & 0.94 & 0.73 & 0.29 & 0.06 \\
 & 0.5 & 0.375 & 0.60, 0.23 & 0.94 & 0.76 & 0.36 & 0.09 \\
\rowstyle{\itshape}%
 & & & 0.69, 0.19 & 0.92 & 0.70 & 0.28 & 0.06 \\
 & 1 & 0.25 & 0.53, 0.22 & 0.98 & 0.86 & 0.43 & 0.11 \\
\rowstyle{\itshape}%
 & & & 0.61, 0.18 & 0.97 & 0.82 & 0.36 & 0.08 \\
 & 1 & 0.75 & 0.52, 0.22 & 0.97 & 0.83 & 0.42 & 0.10 \\
\rowstyle{\itshape}%
 & & & 0.61, 0.18 & 0.96 & 0.79 & 0.35 & 0.08 \\
 & 2 & 0.50 & 0.42, 0.19 & 0.99 & 0.93 & 0.52 & 0.13 \\
\rowstyle{\itshape}%
 & & & 0.50, 0.16 & 0.99 & 0.90 & 0.46 & 0.11 \\
 & 2 & 1.50 & 0.42, 0.19 & 0.99 & 0.90 & 0.49 & 0.13 \\
\rowstyle{\itshape}%
 & & & 0.50, 0.16 & 0.98 & 0.87 & 0.44 & 0.11 \\
\hline
\end{tabular}
\caption{Mean value and dispersion (rms) of the velocity of the mother particle ($X$) and fraction of events with angle $\theta$ made by at least one of the lightest daughter particles with the direction of $X$, for the four scenarios. For each $M_X$ (and $M_{\text DM}$), we also give the ${\cal M}=1$ (kinematics only) case (first row).
The row just below (in \textit{italics}) is for the model-dependent scenarios.} \label{tab:theta_unit} \end{table} Table~\ref{tab:theta_unit} makes the correlation with the boost of the mother particle, $X$, more apparent by showing the mean value of its velocity, $\beta$, and the associated fraction of massless decay particles (the quarks here, which will be tracking the LLP) that are emitted at different angles relative to $X$. Four angular sectors are defined. The most interesting are the ones where the daughter particle is emitted {\em backward}, {\it i.e.}, with $\theta > 90^{\circ}$. The table also shows the corresponding values for a matrix element ${\cal M}=1$ scenario, that is, a model driven solely by kinematics. First of all, $\beta$ is independent of the decay channel and depends only on the production process. For ${\cal M}=1$, we expect the mean $\beta$ to be the same between {\it 2BM0} and {\it 3BM0}, as it is between {\it 2BM} and {\it 3BM}, for the same mass (independently of $M_{\text DM}$). The small difference (even smaller for larger $M_X$) between {\it 2BM0} and {\it 3BM0} on the one hand, and {\it 2BM} and {\it 3BM} on the other, reflects the $qq$ {\it versus} $gg$ production. As expected, the $s$-channel $qq$ production leads to slightly larger values of the mean $\beta$. The reason we centre the discussion on the mean $\beta$ is that, as $\beta$ decreases, the fraction of {\it backward} events increases~\footnote{Since we are considering the particle motion in the transverse direction, the velocity in the transverse direction, $\beta_T$, is a more pertinent quantity. We find that the mean and rms values of $\beta_T$ do not vary much with the LLP mass. However, because these distributions are asymmetric, we find that the fourth moment (kurtosis) plays a significant role in discriminating the $\beta_T$ distributions and increases with the mass.}.
Of course, $\beta$ decreases as the mass of the mother particle increases and, independently of the model, the fraction of {\it backward} massless quarks increases. We observe that, for the same mass, there is some model dependence in the value of the mean $\beta$, and that, for all models, $\beta$ increases compared to the pure kinematics case. The largest difference is seen in the case of {\it 3BM0}. In all cases, however, the difference gets smaller as the mass ($M_X$) increases and, with it, the fraction of {\em backward} moving quarks increases. In all four cases, it suffices to look at the mean $\beta$ to estimate the angular fraction at, for example, $\theta > 90^{\circ}$. For instance, $\beta \sim 0.75$ occurs for ${\cal M}=1$ with $M_X=200$ GeV as well as for $M_X=1$ TeV in the dynamical {\it 2BM0} case; both display similar backward fractions. Similarly, for the {\it 2BM} case ($\beta \sim 0.6$), the {\it 3BM0} case ($\beta \sim 0.76$) and the {\it 3BM} case ($\beta \sim 0.6$), the same values of $\beta$ correspond to different mass scenarios, yet they lead to similar angular fractions. In summary, independently of the channel and the model, we find fractions of backward particles ranging from at least $10\%$ (for small $M_X$) to values as high as $68\%$ (for larger masses). If {\em backwardness} is mostly driven by the velocity of the mother particle, which in turn is essentially driven by the kinematics of the initial state, we expect the spin of the mother particle to play a negligible part. To check this quantitatively, we consider three-body decays ({\it 3BM}) and compare the spin-averaged approximation with the full simulation taking into account the complete spin correlations between production and decay (we simulate full spin correlations with \texttt{MG5\_aMC@NLO}~\cite{Alwall:2014hca}). To make the point, we only consider a single benchmark scenario with $M_{LLP}=2$ TeV and two values of $M_{DM}$, \textit{viz.}, 500 GeV and 1.5 TeV.
Figure~\ref{fig:theta_2_TeV_spin} shows that the angular distributions of the massless daughters are extremely well reproduced by the spin-averaged approximation. For the distribution of the massive (invisible) daughter, the approximation shows a slight difference. Therefore, for the rest of this study we work with the spin-averaged scenarios. \begin{figure}[tbhp] \subfigure[] { \includegraphics[height=5cm,width=8.0cm]{theta_distribution_spin_1.pdf} } \subfigure[] { \includegraphics[height=5cm,width=8.0cm]{theta_distribution_spin_2.pdf}\\ } \caption{Angle $\theta$ made by a daughter particle with the direction of $X$ for (a) three-body decays with one massive daughter with $M_X=2$ TeV and $M_{DM}=1.5$ TeV, (b) same with $M_{DM}=0.5$ TeV. Here we compare the simulation with full spin correlations and with the spin-averaged approximation.} \label{fig:theta_2_TeV_spin} \end{figure} To sum up this discussion, we wish to underline that the more sluggish the mother LLP, the more important it is to study particles moving in the backward direction with respect to the direction of $X$. The extreme scenario is one where the $\beta$ of the mother particle becomes zero. Such scenarios can come about for stopped $R$-hadrons, which move inside the detector up to a certain distance and then come to a standstill. In our analysis, the visible decay products have been assumed to be quarks. We could just as well have considered LLP decays into leptons. The general feature of {\em outside-in} objects would remain unchanged, although the hadronisation step in the analysis would be different. \section{Implications for LLP searches} \label{sec:sec3} Although such angular distributions are well known, their implications for LLP searches have not been thoroughly investigated until now. An LLP, upon production, moves a certain distance inside the detector and decays at a secondary vertex.
We have learned that, especially for quite massive LLPs, a non-negligible proportion of the decay products will not carry on in the original direction of the mother particle. To begin with, the decay products will not point back to the interaction point. Depending on the angular separation of the \textit{visible} daughter with respect to the direction of the LLP, the decay products may reveal an {\em outside-in} activity in different parts of the detector. For example, non-prompt jets emanating from a secondary vertex could pass through multiple calorimeter towers, yielding an elliptical energy deposition in the $\eta-\phi$ plane of the HCAL. This is in contrast to {\em normal} jets born at the primary vertex, which are usually contained within a single tower of the HCAL and yield a circular energy deposition. Similar energy distributions in the ECAL are expected from prompt and non-prompt photons~\cite{Chatrchyan:2012jwg}. We will not dwell further on these distorted objects (\textit{DOs}) because we would like to study the interesting case of the backward moving objects, \textit{BMOs}. If the separation angle between the direction of the daughter and that of the mother is sufficiently large, the daughter particle is moving in the backward direction. It can therefore even cross inner layers of the detector (which a stable mother would not have done!). For example, if an LLP decays in the ECAL, the \textit{BMOs} will tend to move towards the tracker. As discussed in the introduction, this statement can be generalised to decay products of an LLP moving from any outer detector segment to an inner one. Such unusual signatures are striking and suffer from very low backgrounds. Indeed, SM particles produced in $pp$ collisions do not contribute to such signatures.
We will address the issue of potential backgrounds to the \textit{BMOs} from LLP decays, which consist essentially of cosmic rays, in the next section. To the best of our knowledge, dedicated searches for such \textit{BMOs} are yet to be performed at the LHC. We attempt two exploratory analyses with \textit{BMOs} based on a {\em simplified geometrical analysis}. A detailed simulation leading to more realistic significances would require knowing the geometry and response of the different components of the detector, which is outside the scope of this work. In the present paper, we look at two regions: in the first example, we consider the HCAL-tracker region, and in the second example we are interested in the muon chamber as a collector of otherwise lost signals from LLPs decaying outside the detector. The results we will show pertain to a single LLP. Since daughter particles moving in the backward direction can come from either of the pair-produced LLPs, the actual statistics (and the significance) could be larger. We approximately follow the dimensions of the CMS detector~\cite{Ball:2007zza} to quantify our analyses. The results can be generalised to the ATLAS detector. We exploit the 2BM/2BM0 and 3BM/3BM0 signatures defined in the previous section, with hadronisation performed within the \texttt{PYTHIA 6} framework. We compute the ratio of the energy carried by the visible (hadronised) \textit{BMOs} that inwardly traverse the volume of interest, $E_{\textrm{in}}$, to the initial energy carried by the LLP, $E_{\textrm{LLP}}$. $E_{\textrm{in}}/E_{\textrm{LLP}}$ will be the characterising variable in our analysis. We expect this variable, once the size of the particular layer of the detector (tracker, muon chamber) is taken into account, to still reflect the proportion of backward moving, {\em outside-in}, objects as given in Table~\ref{tab:theta_unit}.
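In code, the characterising variable can be sketched as follows (a minimal illustration; the \texttt{(energy, origin, direction)} daughter tuples and the \texttt{enters\_volume} predicate are a hypothetical interface, not the actual analysis code):

```python
def energy_fraction_in(e_llp, daughters, enters_volume):
    """Characterising variable E_in / E_LLP: summed energy of the visible
    daughters whose trajectories enter the detector volume of interest,
    normalised to the LLP energy.  `daughters` is an iterable of
    (energy, origin, direction) tuples; `enters_volume(origin, direction)`
    is any geometry predicate for the layer under study (tracker, muon
    chamber, ...)."""
    e_in = sum(energy for (energy, origin, direction) in daughters
               if enters_volume(origin, direction))
    return e_in / e_llp
```

For decays with invisible daughters, the normalisation \texttt{e\_llp} could be replaced by the total transverse energy, in line with the substitution of $E_{\textrm{LLP}}$ by $E_T$ mentioned in the text.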
In both the tracker and the muon chamber applications, we will consider the case of an LLP decaying in flight as well as the case of a stopped $R$-hadron. In both cases we take the same mass and the same decay products. It is, of course, also assumed that both decay in the same region of interest. Naturally, the lifetime of the LLP is assumed to be appropriate so as to yield a significant number of events within the region of interest. We keep this discussion model independent and do not make the exact lifetime explicit, nor the total cross section, as the following results are fairly independent of these assumptions. For instance, the couplings of the underlying model can easily be tuned to get the desired lifetime. In this analysis, we are not attempting a precise modelling of the hadronisation of the $R$-hadrons as they move through the detector, yet we reproduce their main features. In a sense, we are considering a toy skeleton of stopped $R$-hadrons, which we look at after they have come to rest. We boost all the daughter particles of the LLP back to the stopped $R$-hadron's rest frame and compute the fraction of energy carried by them in the backward direction and inside the chosen layer of the detector (the tracker in the first case and the muon chamber in the second example). At this stage, before we present the results of what we called our {\em simplified geometrical analysis}, we would like to issue an important warning. The analysis does not address crucial points about the reconstruction and the measurement of some key quantities. For instance, the specifics of the particular portion of the detector are not addressed. In this section, we will not discuss the response of a particular element of the detector to the \textit{BMO}, nor the identification of this \textit{BMO}.
For example, identification and/or discrimination based on timing or on the shape of the showers involves different issues depending on the location of the specific layer of the detector. In this section, we do not address how the energies we have introduced can be measured experimentally. For instance, reconstructing the secondary vertex in events with large impact parameters is less trivial. As for $E_{\textrm{in}}$, the question of the trigger may prove important. In this analysis, we have used $E_{\textrm{LLP}}$ mainly to express our results in terms of a normalised quantity, $E_{\textrm{in}}/E_{\textrm{LLP}}$, rather than $E_{\textrm{in}}$. In the case where the decay involves invisible particles, we could substitute $E_{\textrm{LLP}}$ with the total transverse energy, $E_T$. The fact remains that the important discriminating observable that must be measured experimentally is $E_{\textrm{in}}$. In some cases, such as for a neutral LLP, $E_{\textrm{LLP}}$ may not be measurable, but $E_{\textrm{in}}$ can still bring invaluable information. We will come back to these very important points in section~\ref{sec:sec4}. Let us now return to our simplified geometrical analysis. \subsection{Reversing into the tracker} \label{subsec:tracker} We consider the tracker as an open cylinder with a length $L_{\textrm{tracker}}=600$ cm along the $z$-direction and a radius $R_{\textrm{tracker}}=100$ cm. The last layer of the HCAL is considered to be at a transverse distance of 300 cm from the $z$-axis. For simplicity, our considerations pertain to the barrel only. The results can be extended by including the end-caps. We compute the fraction of energy carried by particles originating between the outer edge of the tracker and the outer edge of the HCAL as they make their way back into the tracker volume. To do so, we employ a trivial geometry concerning a ray crossing a finite open cylinder.
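This ray/finite-cylinder test can be sketched as follows (our own minimal illustration; only crossings of the barrel surface are checked, entry through the open ends being ignored, consistent with the barrel-only treatment above):

```python
import math

def enters_cylinder(origin, direction, radius=100.0, half_length=300.0):
    """True if a ray starting at `origin` (cm) along `direction` crosses
    the barrel surface r = radius within |z| <= half_length.  Defaults
    follow the simplified tracker dimensions used in the text
    (R = 100 cm, L = 600 cm)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dy * dy
    if a == 0.0:
        return False                        # parallel to the axis: never crosses the barrel
    b = 2.0 * (ox * dx + oy * dy)
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                        # the line misses the cylinder entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest crossing of the barrel surface
    if t <= 0.0:
        return False                        # crossing point lies behind the decay vertex
    return abs(oz + t * dz) <= half_length  # within the cylinder length at entry
```

A daughter emitted at a transverse distance of 200 cm and pointing back towards the beam axis passes this test, while the same daughter emitted outward, or at too steep an angle in $z$, does not.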
If the LLP decays at a transverse distance between 100 cm and 300 cm, that is, between the tracker and the outer edge of the HCAL, we compute the fraction $E_{\textrm{in}}/E_{\textrm{LLP}}$. \begin{figure}[!h] \centering \includegraphics[height=4cm,width=8cm]{Ein_by_Emoth_2bd.pdf}~\includegraphics[height=4cm,width=8cm]{Ein_by_Emoth_3bd.pdf} \caption{Normalised distribution of $E_{\textrm{in}}/E_{\textrm{LLP}}$, the ratio of the energy of the visible daughter particles entering the tracker to that of the mother LLP, shown for $M_{LLP}=2$ TeV and $M_{\textrm{DM}}=0.75 \times M_{\textrm{LLP}}=1.5$ TeV. For the definition of the 2BM/3BM decays, see the text. The first bin ($E_{\textrm{in}}/E_{\textrm{LLP}}< 0.1$) contains the $E_{\textrm{in}}=0$ events and should be interpreted as the case where no \textit{BMO} has registered.} \label{fig:Efac} \end{figure} Figure~\ref{fig:Efac} shows the (normalised) distribution of $E_{\textrm{in}}/E_{\textrm{LLP}}$ for a $2$ TeV LLP. While a large proportion of the LLP decay products do not make it into the tracker, independently of the decay channel (these are represented by the first, $E_{\textrm{in}}=0$, bin), a substantial proportion does register inside the tracker as a signal for \textit{BMOs}. This proportion is larger for the stopped $R$-hadron case. These observations are in line with those we made in section~\ref{sec:sec2} based on the velocity of the LLP. This distinction is striking for the case where all the daughters are \textit{massless} (two-body 2BM0 and three-body 3BM0). For such scenarios, the fraction of energy coming back inside the tracker in the case of the massless two-body decay is 25.9\% for the stopped $R$-hadrons and slightly less than half of that, 12.2\%, for the moving LLP. In the case of three-body decays, these figures are slightly higher, 34.2\% and 14.2\% respectively. When one of the daughters is a massive invisible particle, the situation changes drastically, especially in the case of the $R$-hadron.
The heavy daughter mostly moves forward in the direction of the mother LLP (as shown in figure~\ref{fig:theta_2_TeV}). The energy fractions traversing back into the tracker become, for the $R$-hadron, 8.2\% in the 2BM case and 4.6\% in the 3BM case. For the corresponding moving LLP, we obtain 5.1\% (2.5\%) for the two-body (three-body) decay of the LLP. Upon varying the mass of the heavy invisible daughter particle, we find that the fraction $E_{\textrm{in}}/E_{\textrm{LLP}}$ changes appreciably. As an example, for $M_{\textrm{LLP}}=2$ TeV in the 2BM decay mode, this fraction decreases approximately linearly from 8.5\% to 5.1\% upon changing $M_{\textrm{DM}}/M_{\textrm{LLP}}$ from 0\% to 75\%. We should keep in mind that these unconventional signatures have almost no SM background. Therefore, even though the $E_{\textrm{in}}$ fractions are smaller in scenarios where the decay products of the LLPs include massive invisible particles than in scenarios where the decay products consist exclusively of massless visible particles, the results we obtain are encouraging. \subsection{Back into the muon chamber} To quantify a more concrete advantage of this framework, we consider particles that decay just outside the muon chamber. The only way of detecting such particles (inside the same detector) is if the daughter particles move inward, towards the muon chamber. Here we again refer to the CMS geometry~\cite{cms-geometry}. We consider the muon chamber as a finite open cylinder of radius $R_{\textrm{muon-chamber}}=750$ cm and length $L_{\textrm{muon-chamber}}=1300$ cm along the $z$-direction. The CMS experimental cavern is around 26.5 m in diameter and the diameter of CMS is around 15 m. Hence, there is a volume between the CMS detector and the cavern which may not all be empty. We consider the LLP to decay outside the muon chamber, somewhere between 750 cm and 1500 cm.
Finally, we compute the same fraction, \textit{viz.}, $E_{\textrm{in}}/E_{\textrm{LLP}}$, for the two-body and three-body decay scenarios. In figure~\ref{fig:Efac-mu}, we show these ratios for the cases with $M_{LLP}=2$ TeV and $M_{\textrm{DM}}=1.5$ TeV. \\ \begin{figure}[!h] \centering \includegraphics[height=4cm,width=8cm]{Ein_by_Emoth_2bd_mu.pdf}~\includegraphics[height=4cm,width=8cm]{Ein_by_Emoth_3bd_mu.pdf} \caption{As in Fig.~\ref{fig:Efac} but for the case of the muon chamber, with dimensions as specified in the text.} \label{fig:Efac-mu} \end{figure} The fractions of energy coming back inside the muon chamber are similar to the energy fractions we calculated for the tracker, especially in the case where all decay particles are visible (2BM0/3BM0). In the case of the two-body decays, the fraction is as much as 24\% for the stopped $R$-hadrons but less than half that number, 9\%, for the moving LLP. In the case of three-body decays, these figures are slightly higher, 26\% and 10\% respectively. When one of the daughters is a massive invisible particle, there is a significant deterioration, worse than what we observed in the case of the tracker, especially for three-body decays. For the $R$-hadron, the percentages drop to 7\% for the two-body and only 3\% for the three-body decay scenario. For the moving LLP, one has 4\% for the two-body and only 1\% for the three-body. Even in this case, and depending on the statistics, let us not forget that this is one of the unique handles to recover LLPs that decay outside the muon chamber. \section{Experimental considerations and future upgrades} \label{sec:sec4} In this section we discuss some of the important points that we left out in the previous section. After discussing the backgrounds, we turn to how the \textit{BMOs} can be tracked down (shower shapes, timing, etc.). This very much depends on the specific slice of the detector (tracker, ECAL, HCAL, muon chamber).
We will also briefly review how one can improve reconstruction and how future upgrades can help (whether it affects the secondary vertex reconstruction, new timers, etc.). \subsection{Backgrounds and background mitigation} A \textit{BMO} signal is striking because it will not be recorded in the same pattern as that of the SM particles that originate from $pp$ collisions, making their way, transversally, from the beam-pipe to the outer layers of the detector. There may be challenges coming from beam-induced noise, overlapping events (timing and/or shower shapes, see later, should help here) and instrumental noise. But by far the most important background is one not produced by the $pp$ machine: the one due to cosmic ray events~\cite{Gibson:2017igw, Rodenburg:2014urc, Aad:2016tzx}. This is particularly problematic when the signal is looked for in the muon chamber. One way to suppress such backgrounds is by tagging the backward-moving objects only in the lower half of the detector, which will be almost free of any cosmic rays that move towards the beam-pipe. Exploiting events from the upper hemisphere can be a bit more challenging. Giving any estimate of the cosmic muon background in the upper hemisphere requires knowledge of the event selection cuts. However, it should be noted that the signature of hadrons in the muon detector is not a well-studied subject. It needs to be checked, preferably by the experimental collaborations, using a proper full simulation. It is possible that a hadron in the muon chamber would be easily distinguishable from a cosmic muon because of the difference in signature. In that case the cosmic muon background will not be a big issue. If the LLP decays in the tracker or the calorimeters, cosmic muons should not be a problem, and both the upper and lower hemispheres could be used.
\subsection{Shower shapes for the ECAL} The shower shape of an {\em inside-out} jet is expected to be different from that of a backward-moving {\em outside-in} jet. There are widely used shower-shape variables for the ECAL, \textit{viz.}, $S_{major}$, $S_{minor}$, $\sigma_{i\eta i\eta}$ and $R_9$~\cite{CMS-PAS-EXO-12-035, CMS:ril, Khachatryan:2015qba}. The shape of the energy deposit in the ECAL, as projected on the internal ECAL surface, is characterised by its major and minor axes ($S_{major}$, $S_{minor}$). The variables $S_{major}$ and $S_{minor}$ are computed using the geometrical properties of the distribution of the energy deposit. The variable $\sigma_{i\eta i\eta}$ is the energy-weighted standard deviation of single-crystal $\eta$ within the $5 \times 5$ crystals centred at the crystal with maximum energy. The variable $R_9$ is the ratio of the energy deposited in the $3 \times 3$ crystal matrix surrounding the highest-energy crystal to the total energy. For \textit{BMOs} decaying inside the ECAL, the aforementioned shower-shape variables, along with the ECAL timing information~\cite{CMS-PAS-EXO-12-035}, can be utilised to distinguish such striking signatures and also to potentially reduce backgrounds. \subsection{Shower shapes for the HCAL and calorimeter upgrades} For signal signatures pertaining mostly to jets, let us discuss some possible shower-shape variables specific to the HCAL. This is particularly important when the LLP decays in one of the outer layers of the HCAL, or even after crossing it, and at least one of the decay products comes back inside the HCAL. If the HCAL has depth segmentation, then the energy of each depth can be read out separately. If $E(D_i)$ denotes the energy deposited in the $i^{th}$ depth of an HCAL tower, then one can use $E(D_i)$ as inputs to train a boosted decision tree (BDT)~\cite{Roe:2004na}. The BDT output should be a powerful discriminator between backward-moving signal jets and forward-moving background jets.
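As a toy stand-in for such a BDT, assuming a four-depth segmentation (the depth count and the energy profiles below are invented for illustration), even the energy-weighted mean depth of the $E(D_i)$ profile already separates the two hypotheses, since a jet entering the HCAL from outside develops its shower in reverse depth order:

```python
def mean_depth(depth_energies):
    """Energy-weighted mean depth index of an E(D_i) profile, with D_1 the
    innermost depth of the HCAL tower."""
    total = sum(depth_energies)
    return sum(i * e for i, e in enumerate(depth_energies, start=1)) / total

# Toy longitudinal profiles (GeV): an ordinary inside-out jet showers early...
outward = [60.0, 25.0, 10.0, 5.0]
# ...while a backward-moving jet entering from outside showers in the last depths.
inward = list(reversed(outward))
print(mean_depth(outward))   # 1.6: energy concentrated in the inner depths
print(mean_depth(inward))    # 3.4: energy concentrated in the outer depths
```

A BDT trained on the full $E(D_i)$ vector would of course exploit more shape information than this single number.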
After the phase II upgrade in 2024-2025, the CMS detector is expected to have a high-granularity calorimeter (HGCAL)~\cite{Martelli:2017qbe} in the forward direction, {\it i.e.}, towards the endcaps, which will have high-precision timing capabilities. The calorimeter design, with fine granularity in both the lateral and longitudinal directions, is ideally suited to enhance such pattern recognition. Fine longitudinal granularity allows fine sampling of the longitudinal development of showers, providing good energy resolution, pattern recognition, and discrimination against pile-up. On the other hand, fine lateral granularity will help us to separate two close-by showers. After these improvements in the detector, the \textit{BMOs} in the forward part of the detector can be tagged more efficiently using the improved granularity and timing information. \subsection{Timing in the muon chamber and upgrades for the tracker} If an LLP decays just outside the muon chamber, then the \textit{BMOs} are the only detectable objects in the signal. These \textit{BMOs} will reach the muon chambers two or more bunch-crossings after their production, and will give rise to signatures resembling those of late muons. The CMS experiment has reported its trigger capabilities for such kinds of exotic signatures in Refs.~\cite{latemuon1} and \cite{latemuon2}. Moreover, in such cases, the timing information of the muon detectors (for example, resistive plate chambers in CMS) can be useful. Resistive plate chambers (RPCs) are gaseous parallel-plate detectors that have good spatial resolution and excellent time resolution. The spatial resolution of the RPCs is of the order of 1 cm, and the time resolution is around 2-3 ns. So, they are capable of tagging the time of an ionising particle in a much shorter time than the 25 ns between two consecutive LHC bunch crossings.
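This time resolution can be exploited through a simple ordering test on the hit times recorded in successive muon layers. A minimal sketch (the hit times, the 3 ns resolution guard and the majority vote are all illustrative choices):

```python
def is_backward_moving(hit_times_ns, resolution_ns=3.0):
    """Majority vote over consecutive muon-layer pairs; hit_times_ns[0] is
    the innermost layer.  An outward track has increasing hit times, while
    a BMO fires the layers outside-in, so its times decrease.  Pairs
    separated by less than the timing resolution are treated as
    inconclusive and cast no vote."""
    votes = 0
    for inner, outer in zip(hit_times_ns, hit_times_ns[1:]):
        if abs(outer - inner) < resolution_ns:
            continue                       # below resolution: no vote
        votes += 1 if outer < inner else -1
    return votes > 0

# An ordinary outward-moving muon fires the layers inside-out...
print(is_backward_moving([0.0, 5.0, 10.0, 15.0]))    # False
# ...while a BMO entering from outside fires them outside-in.
print(is_backward_moving([15.0, 10.0, 5.0, 0.0]))    # True
```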
If $t_n$ is the timing of the hit in the $n^{th}$ layer of the muon detector~\footnote{Here, the innermost layer is assumed to be the first layer and $n$ increases as we move radially outwards.} for a reconstructed muon-track, then $t_n<t_{n+1}$ will be the signature of outward-moving background tracks and $t_n>t_{n+1}$ will be the sign of a \textit{BMO}. Something similar cannot be done in the silicon tracker, because of its slow response time. However, the CMS collaboration is seriously considering the option of installing an additional timing layer~\cite{Josh} during the phase II upgrade of the detector in 2024-2026. This precise timing detector might sit just outside the tracker barrel support tube, in between the tracker and the ECAL barrel. This thin layer is expected to have a time resolution of 10-20 picoseconds, and it will provide timing for the individual tracks crossing it, while photon and neutral hadron timing will be provided by the upgraded calorimeters. The timing detector will be used to assign the timing for each reconstructed vertex and to measure the time of flight of the LLPs between the primary and secondary vertices. Thus, it would provide new, powerful information in searches for LLPs. \subsection{Secondary vertex reconstruction, trackers and triggers} \textit{BMOs} that are heavily displaced with respect to the primary vertex, and thus have large impact parameters, are likely to be missed by the currently used jet reconstruction algorithms, because such algorithms are based on the assumption that the jets originate from the collision point. However, the jet reconstruction algorithms can be tuned to catch displaced jets. This option can be heavily resource-consuming, and the experiments can utilise the ideas of data-scouting and parking~\cite{CMS-DP-2012-022}.
Reconstructing \textit{BMOs} with large impact parameters inside the tracker can be extremely challenging, but can be achieved by modifying the track reconstruction algorithms, for example by relaxing the requirement on the impact parameters of the track. One can use the concept of regional tracking~\cite{Tosi}, \textit{i.e.}, the non-pointing tracks are reconstructed only in those regions of the tracker where there is a corresponding calorimeter energy deposit. This concept is already used in track reconstruction in the high-level trigger (HLT) in CMS. Track reconstruction is a sophisticated, complex and time-consuming step. In order to make it faster during data-taking at the HLT, some modifications have been made to the track reconstruction technique that is used offline, which is not constrained by time. One of the time-saving modifications is to use regional track reconstruction, where the tracking algorithm is run only in regions of interest defined by the direction of an already available physics object or calorimeter energy deposit. Even with modified track reconstruction techniques, it might be very difficult to distinguish between signal and background tracks. However, a recent study~\cite{Gershtein:2017tsv} has shown the capabilities of the high-luminosity runs of the LHC (HL-LHC) in extracting more information from non-pointing tracks. A set of dedicated triggers might be needed to select such signal events within the LHC experiments. One possibility is to require multiple displaced jets with appropriate $p_T$ cuts. Otherwise, one can trigger on the sum of HCAL energy deposits. \section{Conclusions} \label{sec:conclusions} In the last couple of years there has been rather intense activity in the search for long-lived particles. The lack of any signal from the conventional searches for many BSM particles is one of the reasons behind this renewed interest.
Because of their long lifetime, the LLPs decay some distance away from the interaction point, at a secondary vertex, or even decay outside the detector. They may therefore easily be missed by standard searches. One should therefore leave no stone unturned and critically revisit any possible trace that they may leave on any sector of the detector, even if one cannot trace back their production point. The main observation we make in this paper is that a far from negligible proportion of some of the visible decay products of the LLP will be moving {\em outside-in}, meaning that they will be moving from the location of the secondary vertex somewhere inside the detector towards the inner layers of the detector, in the direction of the beam pipe. These backward-moving objects, \textit{BMOs}, will therefore have a most striking manifestation. It can even happen that the LLP may decay outside the detector but that some of its \textit{BMO} daughters will ``move back'' to deposit energy in the muon chamber. This crucial property of the \textit{BMOs} results from the fact that if the mother LLP is not too fast moving, these decay products will not be much boosted in the direction of flight. An extreme case is the one where the LLP decays at rest, and, barring some spin effect, the decay products are distributed in all directions. If the LLP is sluggish at production, at the decay location some of its daughters will not carry on in the direction of the parent. In section~\ref{sec:sec2} we make this observation quantitative when we study the angular separation that the visible decay product makes with respect to the direction of the parent LLP. We considered different scenarios and masses for the LLP as it is produced in pairs at the LHC, either through a $q\bar q$ initiated or a gluon-gluon initiated mechanism. We analysed, through the general models of LLP we introduced, the possible effect of the spin of the LLP, just to find out that spin effects are not important.
We even make quantitative the expectation that the effect is mostly the result of kinematics (how slow the LLP is), and that the exact dynamics (the physics model dependence) is not crucial, by implementing a unit matrix element for the production. Although the models for the decay are inspired by some classes of LLP found in the literature, they cover essentially two cases: either all decay products are visible, or one of them is invisible (a possible Dark Matter candidate), in which case we investigate how heavy the latter is with respect to the parent LLP. As expected, the proportion of \textit{BMOs}, for example the fraction of visible daughters in a direction of more than $135^\circ$ from the original direction of the LLP, is more substantial for larger masses of the LLP. This enhanced effect at higher masses should compensate for the correspondingly smaller cross sections. It is therefore important to exploit this signature. For this simple analysis, we have only considered light jets as the visible objects. However, one can study other signatures involving leptons, photons or even boosted objects like top-jets, $W/Z/h$-jets. Performing a more realistic, let alone a full, simulation of this unusual signature would require detailed information on the different components of the various layers of the detector. Nonetheless, we have attempted to model the {\it geometry} of the tracker and the muon chamber (based on the dimensions of the CMS sections of these layers) to quantify how the effect of a large angle separation translates into a measurable fraction of energy (with respect to the original energy of the LLP) that gets deposited respectively in the tracker from \textit{BMOs} emerging from as far as the HCAL and in the muon chamber for \textit{BMOs} entering from outside the detector.
As expected, the largest energy deposits are for stopped $R$-hadrons, and the smallest in cases where the phase space left for the visible objects is reduced by the presence of a large mass taken by the invisible particle present in the decay. The results we obtained could most probably be optimised by combining them with the use of other variables, like for instance the transverse energies, or even better the knowledge of the specificities of the particular layer. We discuss some of these issues, either based on what is already implemented in the current detectors or on what could be implemented in the future, to help better track the \textit{BMOs}. In particular, we review how the shower shapes of the ECAL and the HCAL could be exploited and optimised, together with the timing techniques in the muon chamber and the improvements we could have in the tracker. Another aspect which needs more attention, since it is in some sense a defining characteristic of the LLP, is the reconstruction of the secondary vertex in the case of large impact parameters. Many of the improvements may be in place in the high-luminosity option of the LHC, which could help increase the signal statistics of the LLP. One should however pay special attention to techniques for mitigating the underlying events and to their influence on the improvement of the timing information used to decipher the LLP in some layers of the detector. We have also argued that the main background, from cosmic rays, can be eliminated. In the worst case, we can restrict the analysis to the lower half of the detector. All in all, the proposal we make in this paper looks very promising for the search for LLPs at the LHC, especially for the quite massive ones (above $500$ GeV). As we have discussed, this preliminary study calls for the investigation of a wide range of theoretical, phenomenological and experimental issues and optimisations so that we can take full advantage of all the runs of the LHC.
\\ {\it \bf Acknowledgments ---} We thank Sunanda Banerjee, Nicolas Berger, Gustaaf H. Brooijmans, Shilpi Jain, Remi Lafaye, Maurizio Pierini, and Giacomo Polesello for useful discussions. This work was supported in part by the French ANR project DMAstro-LHC (ANR-12-BS05-0006), by the {\it Investissements d'avenir} Labex ENIGMASS, by the Research Executive Agency of the European Union under the Grant Agreement PITN-GA2012-316704 (HiggsTools), by the CNRS LIA-THEP (Theoretical High Energy Physics) and the INFRE-HEPNET (IndoFrench Network on High Energy Physics) of CEFIPRA/IFCPAR (Indo-French Centre for the Promotion of Advanced Research). The work of SM is supported by the German Federal Ministry of Education and Research (BMBF). The work of BB is supported by the Department of Science and Technology, Government of India, under the Grant Agreement number IFA13-PH-75 (INSPIRE Faculty Award). The work of RMG is supported by the Department of Science and Technology, India, under Grant No. SR/S2/JCB-64/2007. BB acknowledges the hospitality of LAPTh, where the major parts of this work were carried out. BB, SB, GB and FB acknowledge the Les Houches workshop series ``Physics at TeV colliders'' 2017, where the work was finalised. The work of SB is also supported by a Durham Junior Research Fellowship COFUNDed between Durham University and the European Union under grant agreement number 609412. \bibliographystyle{JHEP}
\section{Introduction} In this paper we are interested in the size of sums such as $$ \sum_{n \leq x} n^{it} \;\;\;\;\; \text{and} \;\;\;\;\; \sum_{n \leq x} \chi(n) , $$ where $t \in \mathbb{R}$ and $\chi(n)$ is a non-principal Dirichlet character modulo a large prime $r$. These zeta sums and character sums are among the most studied objects in analytic number theory. We would like to show, on the widest possible range of $x$, that we have substantial cancellation amongst the terms in the sums. Furthermore, we would like to understand the extent of the cancellation. By periodicity, we can confine our study of character sums to the range $x \leq r$. And if $t$ is large and $x \geq t$, then standard Fourier analysis (see Lemma 1.2 of Ivi\'{c}~\cite{ivic}, for example) shows that $\sum_{x < n \leq 2x} n^{it} = \int_{x}^{2x} w^{it} dw + O(1) = \frac{(2x)^{1+it} - x^{1+it}}{1+it} + O(1)$. So again, we can confine our study of zeta sums to the range $x \leq |t|$. \vspace{12pt} For character sums, the classical P\'{o}lya--Vinogradov inequality asserts that we always have $|\sum_{n \leq x} \chi(n)| \ll \sqrt{r} \log r$. This is only non-trivial for $x$ larger than about $\sqrt{r} \log r$, whereas the Burgess bound supplies a non-trivial bound $o(x)$ provided $x \geq r^{1/4 + o(1)}$, for characters modulo prime $r$. See e.g. chapter 9.4 of Montgomery and Vaughan~\cite{mv}. Assuming the Generalised Riemann Hypothesis, Montgomery and Vaughan~\cite{mvexp} improved the P\'{o}lya--Vinogradov inequality to $|\sum_{n \leq x} \chi(n)| \ll \sqrt{r} \log\log r$, and then Granville and Soundararajan~\cite{gransoundlcs} showed that $\sum_{n \leq x} \chi(n) = o(x)$ provided $(\log x)/\log\log r \rightarrow \infty$ as $r \rightarrow \infty$. They also showed this range of $x$ would be best possible for the character sum to always be $o(x)$ (for all such $x$ and all non-principal characters). 
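The kind of cancellation at stake is easy to visualise numerically with the simplest non-principal character, the Legendre symbol modulo a prime (the modulus below is an arbitrary small example):

```python
import math

def legendre(n, r):
    """Legendre symbol (n|r) for an odd prime r, via Euler's criterion."""
    n %= r
    if n == 0:
        return 0
    return 1 if pow(n, (r - 1) // 2, r) == 1 else -1

r = 10007                                  # a prime modulus
x = r // 2
S = sum(legendre(n, r) for n in range(1, x + 1))
# The trivial bound is x; the sum in fact sits near square-root size,
# comfortably inside the Polya--Vinogradov bound sqrt(r)*log(r).
print(abs(S), math.isqrt(x), math.sqrt(r) * math.log(r))
```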
Turning to average results, for any $x < r$ we have the easy low moment estimates $$ \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^2 = \lfloor x \rfloor , \;\;\;\;\; \text{and} \;\;\;\;\; \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \leq x^q \;\;\; \forall \; 0 \leq q \leq 1 . $$ The first statement here is a trivial consequence of orthogonality of Dirichlet characters, and the second follows from it using H\"older's inequality. Perhaps surprisingly, these straightforward estimates seem to remain essentially the best known upper bounds for low moments of character sums (i.e. for $0 \leq q \leq 1$). Less straightforwardly, we have $$ \frac{1}{r-2} \sum_{\chi \neq \chi_0 \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \ll_{q} r^q \;\; \forall q > 0 , $$ which follows from Theorem 1 of Montgomery and Vaughan~\cite{mvchar} (who actually proved the bound with the fixed sum $\sum_{n \leq x} \chi(n)$ replaced by $M(\chi) := \max_{x} |\sum_{n \leq x} \chi(n)|$). See also Cochrane and Zheng~\cite{cochzheng}, Granville and Soundararajan~\cite{gransoundlcs}, and Kerr~\cite{kerr}, for a selection of stronger upper bounds on high moments when $x$ is small. Notice that it is important to exclude the principal character $\chi_0$ in Montgomery and Vaughan's result~\cite{mvchar} when $q$ and $x$ are large (it would give a large contribution $\asymp x^{2q}/r$), but in the first statement its contribution $\lfloor x \rfloor^{2}/(r-1)$ is not overwhelming (compared with $\lfloor x \rfloor$) provided $x \leq 0.99r$, say. It is natural to ask whether one typically (e.g. for a positive proportion of characters $\chi$ mod $r$) has squareroot behaviour $\sum_{n \leq x} \chi(n) \asymp \sqrt{x}$, and thus whether the low moments are really $\asymp x^q$ or not\footnote{See the MathOverflow post http://mathoverflow.net/questions/129264/short-character-sums-averaged-on-the-character for explicit discussion of this question.}. 
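The first of the easy estimates above is a direct consequence of orthogonality and can be verified numerically for a small prime by constructing all the characters from a primitive root, via $\chi_j(g^k) = e^{2\pi i jk/(r-1)}$ (the modulus $r=101$ and root $g=2$ below are illustrative):

```python
import cmath, math

def characters_mod_prime(r, g):
    """All r-1 Dirichlet characters modulo a prime r, as dicts n -> chi(n),
    built from a primitive root g via chi_j(g^k) = exp(2*pi*i*j*k/(r-1))."""
    order = r - 1
    dlog, v = {}, 1
    for k in range(order):            # discrete logarithm table to base g
        dlog[v] = k
        v = v * g % r
    root = cmath.exp(2j * math.pi / order)
    return [{n: root ** (j * dlog[n] % order) for n in range(1, r)}
            for j in range(order)]

r, g, x = 101, 2, 37                  # 2 is a primitive root modulo 101
chars = characters_mod_prime(r, g)
moment = sum(abs(sum(chi[n] for n in range(1, x + 1))) ** 2
             for chi in chars) / (r - 1)
print(moment)                         # = floor(x) = 37, up to rounding error
```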
Combining the Cauchy--Schwarz inequality with the preceding estimates, for any $x \leq 0.99r$ we get $$ \frac{1}{r-2} \sum_{\chi \neq \chi_0 \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \geq \frac{(\frac{1}{r-2} \sum_{\chi \neq \chi_0 \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2})^2}{\frac{1}{r-2} \sum_{\chi \neq \chi_0 \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2(2-q)}} \gg_{q} \frac{x^2}{r^{2-q}} \;\;\;\;\; \forall \; 0 \leq q \leq 1 . $$ In particular, for any fixed small $\alpha > 0$ and all $\alpha r \leq x \leq 0.99r$, we now find that indeed $\frac{1}{r-2} \sum_{\chi \neq \chi_0 \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \asymp_{\alpha, q} x^q$ for all $0 \leq q \leq 1$. Since the order of the second moment is the square of the first moment, another standard Cauchy--Schwarz argument (often called the Paley--Zygmund inequality) implies that $|\sum_{n \leq x} \chi(n)| \gg_{\alpha} \sqrt{x}$ for a positive proportion of characters mod $r$, when $\alpha r \leq x \leq 0.99r$. But when $x=o(r)$, the lower bound $\frac{x^2}{r^{2-q}}$ does not match $x^q$, and the typical size of $\sum_{n \leq x} \chi(n)$ remains unclear. (Note that depending on the size of $x$, one could substantially improve the ``simple'' lower bound $\frac{x^2}{r^{2-q}}$ by suitably applying H\"older's inequality rather than the Cauchy--Schwarz inequality, and using the high moment bounds of e.g. \cite{cochzheng, gransoundlcs, kerr} rather than Montgomery and Vaughan's result~\cite{mvchar}. See also section 1.5 of La Bret\`eche, Munsch and Tenenbaum~\cite{bretechemunschten}. But this would still not deliver a matching lower bound $x^q$ in general.) \vspace{12pt} For zeta sums $\sum_{n \leq x} n^{it}$, it is not too difficult to show that $|\sum_{n \leq x} n^{it}| \ll \sqrt{t} \log t$ for all large $x \leq t$, although this estimate seems much less celebrated than the analogous P\'{o}lya--Vinogradov inequality for character sums. 
See chapter 7.6 of Montgomery~\cite{mont}, and the paper of Fujii, Gallagher and Montgomery~\cite{fgmon}. The Vinogradov--Korobov method yields non-trivial estimates $o(x)$ provided $(\log x)/\log^{2/3}|t| \rightarrow \infty$ as $|t| \rightarrow \infty$, see e.g. chapter 6 of Ivi\'{c}~\cite{ivic}. Obtaining bounds on a wide range of $x$ is particularly important when bounding the Riemann zeta function $\zeta(s)$ for $\Re(s)$ close to 1, with consequences for the distribution of primes. When $\Re(s) = 1/2$, obtaining a larger saving on a more limited range of $x$ is important. The state of the art is recent work of Bourgain~\cite{bourgainzeta}. Somewhat surprisingly, the analogues of the precise conditional character sum results of Montgomery and Vaughan~\cite{mvexp} and of Granville and Soundararajan~\cite{gransoundlcs} do not seem to have been worked out explicitly for zeta sums. However, one could deduce various such results by adapting their methods, using the approximate functional equation for $\zeta(s)$ (which implies in particular that $|\sum_{n \leq x} n^{it}| \asymp \sqrt{t}|\sum_{t/(2\pi x) < n \leq t/\pi} \frac{1}{n^{1+it}} + O(\frac{x}{t} + \frac{1}{\sqrt{t}})|$ for all large $x \leq t$) in place of the P\'olya Fourier expansion for character sums. Regarding average results, a well known mean value theorem of Montgomery and Vaughan (see e.g. Theorem 5.2 of Ivi\'{c}~\cite{ivic}) asserts that for any complex $a_n$, we have $$ \frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} a_n n^{it}|^2 dt = \sum_{n \leq x} |a_{n}|^2 (1 + O(\frac{n}{T})) . $$ Thus if $T$ is large and $1 \leq x \leq T$ then $\frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} n^{it}|^2 dt \ll x$, and by H\"older's inequality we get $\frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} n^{it}|^{2q} dt \ll x^q$ for all $0 \leq q \leq 1$. 
Furthermore, if $T/2 \leq t \leq T$, say, then we can use the approximate functional equation to deduce that $|\sum_{n \leq x} n^{it}| \ll \sqrt{T}|\sum_{T/(2\pi x) < n \leq T/\pi} \frac{1}{n^{1+it}}| + \sqrt{T}$. For any $q \in \mathbb{N}$ we have $(\sum_{T/(2\pi x) < n \leq T/\pi} \frac{1}{n^{1+it}})^q = \sum_{n \leq (T/\pi)^q} \frac{c_{x,T,q}(n)}{n^{1+it}}$, for certain coefficients $c_{x,T,q}(n)$ that are bounded by the $q$-fold divisor function $d_{q}(n)$, and so the mean value theorem yields $$ \frac{2}{T} \int_{T/2}^{T} |\sum_{n \leq x} n^{it}|^{2q} dt \ll_{q} T^q + T^q \sum_{n \leq (T/\pi)^q} d_{q}(n)^2 (\frac{1}{n^2} + \frac{1}{nT}) \ll_{q} T^q \;\;\; \forall \; q \in \mathbb{N} . $$ By H\"older's inequality we then get this bound for all $q > 0$, which is a partial analogue (with $x$ fixed rather than taking an inner maximum over $x$) of Montgomery and Vaughan's moment bound~\cite{mvchar} for character sums. We remark that although the restriction to $T/2 \leq t \leq T$ could be significantly relaxed here, we would {\em not} have such a bound when integrating over all $0 \leq t \leq T$ with $q$ and $x$ large, due to large contributions from small $t$ (analogously to the contribution from the principal character $\chi_0$ that should be excluded from high moments of character sums). Now if $x \leq cT$ for a suitable small constant $c > 0$, then Montgomery and Vaughan's mean value theorem also immediately implies that $\frac{2}{T} \int_{T/2}^{T} |\sum_{n \leq x} n^{it}|^2 dt = \lfloor x \rfloor + O(x^{2}/T) \gg x$ (and with more work one could show this for all $1 \leq x \leq T$). So, similarly as for character sums, the Cauchy--Schwarz inequality implies that $\frac{2}{T} \int_{T/2}^{T} |\sum_{n \leq x} n^{it}|^{2q} dt \gg_q x^{2}/T^{2-q}$ for all $1 \leq x \leq T$ and $0 \leq q \leq 1$. In particular, if $x \asymp T$ then these moments are $\asymp x^q$, but when $x=o(T)$ the true order of the low moments remains unclear. 
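Montgomery and Vaughan's mean value theorem lends itself to a quick numerical sanity check, approximating the integral by a midpoint Riemann sum (the values of $x$, $T$ and the step count are illustrative, chosen small enough to run in seconds; the $O(n/T)$ error terms and the discretisation show up as a percent-level deviation):

```python
import cmath, math

def mean_square_zeta_sum(x, T, steps):
    """Midpoint Riemann-sum approximation to (1/T) int_0^T |sum_{n<=x} n^{it}|^2 dt."""
    logs = [math.log(n) for n in range(1, x + 1)]
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * T / steps
        total += abs(sum(cmath.exp(1j * t * L) for L in logs)) ** 2
    return total / steps

x, T = 30, 3000.0
m2 = mean_square_zeta_sum(x, T, steps=12000)
print(m2)    # close to x = 30, as the mean value theorem predicts
```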
See also section 1.5 of La Bret\`eche, Munsch and Tenenbaum~\cite{bretechemunschten} for discussion of lower bounds for the moments. \vspace{12pt} One way of exploring the behaviour of $\sum_{n \leq x} \chi(n)$ or $\sum_{n \leq x} n^{it}$ is to consider an appropriate random model. Let $(f(p))_{p \; \text{prime}}$ be a sequence of independent random variables, each distributed uniformly on the complex unit circle. Then we define a {\em Steinhaus random multiplicative function} $f(n)$, by setting $f(n) := \prod_{p^{a} || n} f(p)^{a}$ for all $n \in \mathbb{N}$. Steinhaus random multiplicative functions have been used quite extensively to model a randomly chosen Dirichlet character $\chi(n)$ or ``continuous character'' $n^{it}$: see the papers of Granville and Soundararajan~\cite{gransoundlcs} and Lamzouri~\cite{lamzouri2dzeta}, for example. Helson~\cite{helson} conjectured, by a rough analogy with the first moment of the Dirichlet kernel in classical Fourier analysis, that one should have $\mathbb{E}|\sum_{n \leq x} f(n)| = o(\sqrt{x})$ as $x \rightarrow \infty$. This conjecture was somewhat surprising, given the general philosophy of squareroot (and not more than squareroot) cancellation for oscillating number theoretic sums, and various counter-conjectures were made by other authors. See the introduction to \cite{harperrmflowmoments} for discussion and references. However, the author~\cite{harperrmflowmoments} recently proved Helson's conjecture, in fact showing that \begin{equation}\label{precisehelson} \mathbb{E}|\sum_{n \leq x} f(n)|^{2q} \asymp \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log x}} \Biggr)^q \end{equation} uniformly for all large $x$ and all $0 \leq q \leq 1$. This raises the question whether one should now expect better than squareroot cancellation for character and zeta sums, on some range of $x$ rather smaller than the conductor, or whether the random multiplicative model simply fails to capture the arithmetic truth in these problems. 
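A Steinhaus random multiplicative function is also straightforward to simulate: draw $f(p)$ on the unit circle for each prime and extend completely multiplicatively via a smallest-prime-factor table. In the toy experiment below (sample sizes are illustrative) the second-moment identity $\mathbb{E}|\sum_{n \leq x} f(n)|^2 = \lfloor x \rfloor$ is reproduced, and the first moment comes out below $\sqrt{x}$; of course the slow $(\log\log x)^{1/4}$ decay in \eqref{precisehelson} cannot be seen at such small $x$:

```python
import cmath, math, random

def smallest_prime_factors(x):
    """spf[n] = smallest prime factor of n, by a sieve."""
    spf = list(range(x + 1))
    for p in range(2, math.isqrt(x) + 1):
        if spf[p] == p:                          # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

def steinhaus_abs_sum(x, spf, rng):
    """|sum_{n<=x} f(n)| for one realisation of a Steinhaus random
    multiplicative function: f(p) uniform on the unit circle, and
    f(n) = f(p) f(n/p) with p = spf[n], giving complete multiplicativity."""
    f = [0j] * (x + 1)
    f[1] = 1 + 0j
    for n in range(2, x + 1):
        p = spf[n]
        f[n] = (cmath.exp(2j * math.pi * rng.random()) if p == n
                else f[p] * f[n // p])
    return abs(sum(f[1:]))

x, trials = 300, 2000
rng = random.Random(7)
spf = smallest_prime_factors(x)
samples = [steinhaus_abs_sum(x, spf, rng) for _ in range(trials)]
print(sum(s * s for s in samples) / trials / x)   # second moment / x: near 1
print(sum(samples) / trials / math.sqrt(x))       # first moment / sqrt(x): below 1
```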
\subsection{Statement of results} Our main results establish, for character and zeta sums, the natural analogues of the upper bound part of \eqref{precisehelson}. \begin{thm1}\label{mainthmchar} Let $r$ be a large prime. Then uniformly for any $1 \leq x \leq r$ and any $0 \leq q \leq 1$, we have $$ \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \ll \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log(10L)}} \Biggr)^q , $$ where $L = L_r := \min\{x,r/x\}$. \end{thm1} \begin{thm2}\label{mainthmcont} Let $T$ be a large real number. Then uniformly for any $1 \leq x \leq T$ and any $0 \leq q \leq 1$, we have $$ \frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} n^{it}|^{2q} dt \ll \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log(10L_T)}} \Biggr)^q , $$ where $L_T := \min\{x,T/x\}$. \end{thm2} In particular, for any {\em fixed} $0 < q < 1$ and any $x=x(r)$ such that $x$ and $r/x$ both tend to infinity with $r$, we have $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \chi(n)|^{2q} \ll \frac{x^q}{(\log\log(10L))^{q/2}} = o(x^q)$. We shall discuss the proofs in detail in section \ref{subsecproofideas}, below. We note here that there is a well known ``symmetry'' in the behaviour of character sums and zeta sums, whereby e.g. $\frac{1}{\sqrt{x}} |\sum_{n \leq x} \chi(n)| \approx \sqrt{\frac{x}{r}} |\sum_{n \leq r/x} \chi(n)|$ (very roughly speaking). See section 10 of Granville and Soundararajan~\cite{gransoundlcs}. This symmetry is sometimes called the ``Fourier flip'', and manifests itself in the (approximate) functional equations of the corresponding $L$-functions, and in the very structured nature of long sums $\sum_{n \leq x} \chi(n) , \sum_{n \leq x} n^{it}$ where $x$ is of the same order as the conductor (i.e. as $r$ or $|t|$, respectively). 
Given the symmetry between character sums of lengths $x$ and $r/x$, and between zeta sums of lengths $x$ and $|t|/x$, the quantities $\log\log(10L_r)$ and $\log\log(10L_T)$ appearing in Theorems \ref{mainthmchar} and \ref{mainthmcont} are natural substitutes for the $\log\log x$ saving factor in \eqref{precisehelson}. The shape of the bounds in Theorems \ref{mainthmchar} and \ref{mainthmcont} might initially seem peculiar, and perhaps open to improvement. In most number theoretic settings, if one obtains a saving one expects to save at least a power of a logarithm. But in fact it seems reasonable to conjecture that Theorems \ref{mainthmchar} and \ref{mainthmcont} are sharp (provided in Theorem \ref{mainthmchar} that $x \leq 0.99r$, say, so we are away from the point where periodicity trivially induces substantial extra cancellation\footnote{For non-principal $\chi$, if $0.99r < x \leq r$ we can observe that $\sum_{n \leq x} \chi(n) = - \sum_{x < n \leq r} \chi(n) = - \sum_{1 \leq n < r-x} \chi(r-n) = - \chi(-1) \sum_{1 \leq n < r-x} \chi(n)$, and then apply Theorem \ref{mainthmchar} to these sums instead.}). Note that in the probabilistic setting of \eqref{precisehelson} we already have an order of magnitude result, rather than just an upper bound. It is possible that the methods leading to Theorems \ref{mainthmchar} and \ref{mainthmcont} would produce matching lower bounds when $x \leq e^{\log^{c}r}$ and $x \leq e^{\log^{c}T}$, say, for a certain small $c > 0$, although the author has not checked all details of this. (See the discussion of the proofs of Theorems \ref{mainthmchar} and \ref{mainthmcont} in section \ref{subsecproofideas}. 
To produce lower bounds in an analogous way, one would need to compare {\em conditional second and fourth moments}, rather than simply using H\"older's inequality to pass to conditional second moments, with the parameter $\log P$ in the proof being kept comparable to $\log x$ to prevent blow-up from primes larger than $P$ in the conditional fourth moments. This seems to correspond to conditions like $x \leq e^{\log^{c}r}$ and $x \leq e^{\log^{c}T}$.) By ``symmetry'', this should also lead to lower bounds when $x$ is close to $r$ and $T$, respectively. For general $x$ and $q < 1$, the best existing lower bounds for these moments seem to differ from our upper bounds by powers of $\log L$ (see e.g. section 1.5 of La Bret\`eche, Munsch and Tenenbaum~\cite{bretechemunschten}), and it is a challenging and interesting open problem to show that Theorems \ref{mainthmchar} and \ref{mainthmcont} are sharp. As we shall discuss below in the context of Corollary \ref{cortheta}, matching lower bounds for Theorem \ref{mainthmchar} when $x \approx \sqrt{r}$ (say) would have some important consequences. \vspace{12pt} Theorem \ref{mainthmchar} implies that for most characters $\chi$ mod $r$, we have $|\sum_{n \leq x} \chi(n)| \ll \frac{\sqrt{x}}{(\log\log(10L))^{1/4}}$. This is ``better than squareroot cancellation'', provided $L$ is large. The following Corollary, which is the character sum version of Corollary 2 of Harper~\cite{harperrmflowmoments} from the random multiplicative case, makes this observation a little more precise. \begin{cor1}\label{cordev} Let $r$ be a large prime. Uniformly for any $1 \leq x \leq r$ and any $\lambda \geq 2$, we have $$ \frac{1}{r-1} \#\{\chi \; \text{mod} \; r : |\sum_{n \leq x} \chi(n)| \geq \lambda\frac{\sqrt{x}}{(\log\log(10L))^{1/4}}\} \ll \frac{\min\{\log\lambda, \sqrt{\log\log(10L)}\}}{\lambda^2} , $$ where $L = \min\{x,r/x\}$. 
\end{cor1} Corollary \ref{cordev} follows from Theorem \ref{mainthmchar} with $q$ chosen suitably in terms of $\lambda$. See section \ref{secotherproofs}, below, for the full (short) argument. If desired, one could use Theorem \ref{mainthmcont} to deduce an analogous corollary for zeta sums. \vspace{12pt} One of the many motivations for considering the character sums $\sum_{n \leq x} \chi(n)$ is their connection with Dirichlet theta functions $\theta(s,\chi)$. Recall that whenever $\Re(s) > 0$, we define \begin{equation} \theta(s,\chi) := \left\{ \begin{array}{ll} \sum_{n=1}^{\infty} \chi(n) e^{-\pi n^{2} s/r} & \text{if} \; \chi \; \text{is even} , \\ \sum_{n=1}^{\infty} n \chi(n) e^{-\pi n^{2} s/r} & \text{if} \; \chi \; \text{is odd} . \end{array} \right. \nonumber \end{equation} (These differ by a factor of 2 from the definitions sometimes given, but that will be unimportant for our purposes here.) Theta functions arise in the theory of automorphic forms, and are an important tool in classical proofs of the functional equation for Dirichlet $L$-functions. See e.g. chapter 10.1 of Montgomery and Vaughan~\cite{mv}. In the last few years, a sequence of papers have investigated the moments of $\theta(1,\chi)$, as $\chi$ varies over even (non-principal) characters or over odd characters. For example, for each fixed $q \in \mathbb{N}$ Munsch and Shparlinski~\cite{munshpar} proved conjecturally sharp lower bounds for the $2q$-th moments, namely $$ \frac{1}{r-1} \sum_{\substack{\chi \; \text{mod} \; r, \\ \chi \; \text{even}, \\ \chi \neq \chi_0}} |\theta(1, \chi)|^{2q} \gg_{q} r^{q/2} \log^{(q-1)^2}r , \;\;\;\;\;\;\;\; \frac{1}{r-1} \sum_{\substack{\chi \; \text{mod} \; r, \\ \chi \; \text{odd}}} |\theta(1, \chi)|^{2q} \gg_{q} r^{3q/2} \log^{(q-1)^2}r . 
$$ Munsch~\cite{munsch} proved almost sharp upper bounds assuming the Generalised Riemann Hypothesis for Dirichlet $L$-functions (losing a factor $\log^{\epsilon}r$ compared with the presumed truth), again when $q \in \mathbb{N}$. One application of such estimates is deducing non-vanishing results for $\theta(1, \chi)$, the subject of a conjecture of Louboutin~\cite{louboutin}. Thus Louboutin~\cite{louboutinodd}, and Louboutin and Munsch~\cite{loubmun}, proved that $\theta(1,\chi) \neq 0$ for $\gg r/\log r$ odd characters and $\gg r/\log r$ even characters modulo prime $r$, by computing and comparing second and fourth moments. Most recently, La Bret\`eche, Munsch and Tenenbaum~\cite{bretechemunschten} introduced weights coming from G\'al sums into such arguments, and proved that $\theta(1,\chi) \neq 0$ for $\gg r/\log^{\delta + o(1)}r$ even characters modulo prime $r$, for a certain explicit constant $\delta \approx 0.086$. See also the work of Bengoechea~\cite{bengo} and of Guo and Peng~\cite{guopeng}, who use Galois theoretic techniques to deduce that $\theta(1,\chi) \neq 0$ for almost all $\chi$, but only for moduli $r$ from certain sparse subsets of primes. Using our methods, we can prove: \begin{cor2}\label{cortheta} Let $r$ be a large prime. Uniformly for any $0 \leq q \leq 1$, we have $$ \frac{1}{r-1} \sum_{\substack{\chi \; \text{mod} \; r, \\ \chi \; \text{even}}} |\theta(1, \chi)|^{2q} \ll \Biggl(\frac{\sqrt{r}}{1+(1-q)\sqrt{\log\log r}} \Biggr)^q , $$ and $$ \frac{1}{r-1} \sum_{\substack{\chi \; \text{mod} \; r, \\ \chi \; \text{odd}}} |\theta(1, \chi)|^{2q} \ll \Biggl(\frac{r^{3/2}}{1+(1-q)\sqrt{\log\log r}} \Biggr)^q . 
$$ \end{cor2} Since we have $\theta(1, \chi) = \sum_{n=1}^{\infty} \chi(n) e^{-\pi n^{2}/r} \approx \sum_{n \leq \sqrt{r}} \chi(n)$ for even characters, and $\theta(1, \chi) \approx \sum_{n \leq \sqrt{r}} n \chi(n) \approx \sqrt{r} \sum_{n \leq \sqrt{r}} \chi(n)$ for odd characters, one sees that Corollary \ref{cortheta} should follow fairly directly from Theorem \ref{mainthmchar}. Again, see section \ref{secotherproofs} below for the full proof, which is just a partial summation argument. Similarly as for Theorems \ref{mainthmchar} and \ref{mainthmcont}, it is reasonable to conjecture that one should have {\em matching lower bounds} for the moments in Corollary \ref{cortheta}. If one could prove this for any fixed $0 < q < 1$, then (since the low moments scale proportionally with the exponent $q$, unlike the higher moments where there is quadratic dependence in the exponent of $\log r$) another standard Paley--Zygmund type argument with H\"{o}lder's inequality, comparing upper and lower bounds for two low moments, would immediately yield that $\theta(1, \chi) \neq 0$ for a {\em positive proportion} of $\chi$. \vspace{12pt} We are also highly interested in variants of Theorems \ref{mainthmchar} and \ref{mainthmcont}, where the sums are more exotic. In the case of the second moment of such sums, one has exactly the same mean value estimates $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} a_n \chi(n)|^2 = \lfloor x \rfloor$ (where $x < r$) and $\frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} a_n n^{it}|^2 dt = \lfloor x \rfloor + O(\frac{x^2}{T})$ for {\em any} complex coefficients $a_n$ with absolute value 1. We cannot seek analogues of Theorems \ref{mainthmchar} and \ref{mainthmcont} at this level of generality, since for a generic sequence of unimodular coefficients $a_n$ the moments will have order $x^q$ (e.g. for independent, uniformly random $a_n$, this is implied by standard moment results like Khintchine's inequalities). 
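The exact second moment identity $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} a_n \chi(n)|^2 = \lfloor x \rfloor$ (for $x < r$) is a direct consequence of orthogonality, and is easy to confirm numerically. The following Python sketch is purely illustrative and not part of any proof; the modulus $r = 101$, the primitive root $2$, and the random unimodular coefficients $a_n$ are arbitrary choices.

```python
# Illustrative check (not part of any proof) of the exact identity
#   (1/(r-1)) * sum over chi mod r of |sum_{n <= x} a_n chi(n)|^2 = x,
# valid for x < r and ANY coefficients a_n with |a_n| = 1.
# The modulus r = 101, the primitive root g = 2, and the random
# unimodular coefficients are arbitrary illustrative choices.
import cmath
import random

r, g, x = 101, 2, 50

# Discrete logarithm table: ind[n] = k where g^k = n mod r.
ind = {}
val = 1
for k in range(r - 1):
    ind[val] = k
    val = (val * g) % r

random.seed(0)
a = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(x)]  # |a_n| = 1

# The characters mod r are chi_j(n) = e(j * ind[n] / (r-1)), for 0 <= j < r-1.
total = 0.0
for j in range(r - 1):
    s = sum(a[n - 1] * cmath.exp(2j * cmath.pi * j * ind[n] / (r - 1))
            for n in range(1, x + 1))
    total += abs(s) ** 2
total /= r - 1

assert abs(total - x) < 1e-6
```

As the identity predicts, the average equals $x$ exactly whatever unimodular coefficients are chosen; it is only for the other moments that the multiplicative structure of the coefficients begins to matter.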
But if we tweak the sums in ways that don't disrupt the {\em multiplicative structure} too much, then it turns out that better than squareroot bounds like Theorems \ref{mainthmchar} and \ref{mainthmcont} do endure. Thus we have: \begin{thm3}\label{mainthmmulttwist} Let $r$ be a large prime. Then uniformly for any $1 \leq x \leq r$, any $0 \leq q \leq 1$, and any multiplicative function $h(n)$ that has absolute value 1 on primes and absolute value at most 1 on prime powers, we have $$ \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} h(n) \chi(n)|^{2q} \ll \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log(10L)}} \Biggr)^q , $$ where $L = L_r := \min\{x,r/x\}$. \end{thm3} The size conditions on $h(n)$ could be adjusted a little here, but the current formulation already permits important examples such as the M\"{o}bius function $\mu(n)$. The key point is that the multiplicative structure of $h(n)$ allows one to adapt the proof of Theorem \ref{mainthmchar}, to show that the moments of $\sum_{n \leq x} h(n) \chi(n)$ behave like the moments of twisted random multiplicative sums $\sum_{n \leq x} h(n) f(n)$. And since, on the primes, $h(p)f(p)$ are again independent and uniform on the complex unit circle (here we use the unimodularity of $h(p)$), these are essentially the same as the untwisted moments. See section \ref{secotherproofs} for further details. One could also prove the analogous result for $\sum_{n \leq x} h(n) n^{it}$. Unlike the sums $\sum_{n \leq x} \chi(n)$ and $\sum_{n \leq x} n^{it}$, there is no symmetry or ``Fourier flip'' in the presence of a general multiplicative twist $h(n)$. So whilst the appearance of $L_r, L_T$ in Theorems \ref{mainthmchar} and \ref{mainthmcont} was very natural, and one would conjecture those bounds to be sharp, for many $h(n)$ it seems likely that Theorem \ref{mainthmmulttwist} should hold in a stronger form with $\log\log(10L)$ replaced by $\log\log(10x)$. 
Similarly, the restriction to $x \leq r$ or to $x \leq T$ no longer seems natural for $\sum_{n \leq x} h(n) \chi(n)$ and $\sum_{n \leq x} h(n) n^{it}$. Indeed, if we take $h(n)$ to be a Steinhaus random multiplicative function, then \eqref{precisehelson} implies that for {\em any} large $x$ and large prime $r$ we have $$ \mathbb{E} \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} h(n) \chi(n)|^{2q} = \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} \mathbb{E} |\sum_{n \leq x} h(n) \chi(n)|^{2q} \asymp \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log x}} \Biggr)^q , $$ since $h(n) \chi(n)$ will also be a Steinhaus random multiplicative function\footnote{Strictly speaking, $h(n) \chi(n)$ will be a Steinhaus random multiplicative function restricted to numbers not divisible by $r$, so we will have $\mathbb{E} |\sum_{n \leq x} h(n) \chi(n)|^{2q} = \mathbb{E} |\sum_{n \leq x} h(n) - h(r) \sum_{n \leq x/r} h(n)|^{2q}$. But it is easy to check this also obeys the bounds \eqref{precisehelson}, for example by very slightly modifying the proofs of Harper~\cite{harperrmflowmoments} (only a single Euler product factor corresponding to the prime $r$ must change), or simply using Minkowski's inequality (when $1/2 \leq q \leq 1$) and H\"{o}lder's inequality.} for any given Dirichlet character $\chi$. Consequently, for most realisations of $h(n)$ we must have the stronger bound $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} h(n) \chi(n)|^{2q} \ll (\frac{x}{1 + (1-q)\sqrt{\log\log x}} )^q$. In particular, we can return to the case of the M\"{o}bius function. One tends to think of $\mu(n)$ as being ``random looking'' in various senses, and in a recent paper Gorodetsky~\cite{gorodetsky} explored this in the context of character sums. 
Based on function field considerations, he ultimately conjectured that for all natural number exponents $q < \log r$, the moments $\frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \mu(n) \chi(n)|^{2q}$ should be asymptotic to the corresponding random multiplicative moments as $x$ and $r$ become large. Likewise, it now seems reasonable to conjecture that for all $0 \leq q \leq 1$ and any fixed $A > 0$, we should have $$ \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} |\sum_{n \leq x} \mu(n) \chi(n)|^{2q} \ll (\frac{x}{1 + (1-q)\sqrt{\log\log x}} )^q \;\;\; \forall \; x \leq r^A , $$ and \begin{equation}\label{mobiusconj} \frac{1}{2T} \int_{-T}^{T} |\sum_{n \leq x} \mu(n) n^{it}|^{2q} dt \ll (\frac{x}{1 + (1-q)\sqrt{\log\log x}} )^q \;\;\; \forall \; x \leq T^A . \end{equation} This conjecture seems worthy of further investigation, since if true it would have significant arithmetic consequences. Standard arguments with Perron's formula imply that $$ \left| \sum_{x < n \leq x+y} \mu(n) \right| \approx \left| \frac{1}{2\pi i} \int_{-i(x/y)}^{i(x/y)} (\sum_{n \leq 2x} \frac{\mu(n)}{n^s}) \frac{x^{s} ((1+y/x)^s - 1)}{s} ds \right| \lesssim \frac{y}{x} \int_{-x/y}^{x/y} |\sum_{n \leq 2x} \frac{\mu(n)}{n^{it}}| dt , $$ and so \eqref{mobiusconj} (if true) would deliver a bound $|\sum_{x < n \leq x+y} \mu(n) | \lesssim \frac{\sqrt{x}}{(\log\log x)^{1/4}}$ provided $y \leq x^{1-\epsilon}$, say. Thus we could deduce there is cancellation in sums of the M\"{o}bius function in {\em all} short intervals of length $y \gg \frac{\sqrt{x}}{(\log\log x)^{1/4}}$. It is a major open problem to go below, or even to reach, the squareroot interval barrier in problems of this kind. See e.g. 
the recent beautiful work of Matom\"{a}ki and Radziwi\l\l~\cite{matradz}, which (among many other results) establishes the existence of positive and negative M\"{o}bius values (or, strictly speaking, values of the closely related Liouville function) in intervals of length $C\sqrt{x}$, for a large constant $C$. \subsection{Ideas from the proofs}\label{subsecproofideas} Sections \ref{secprep} and \ref{secproof1} will be taken up with the proof of Theorem \ref{mainthmchar}. Here we try to explain and motivate the main steps in the argument. Throughout the paper, $r$ will be a large prime modulus as in Theorem \ref{mainthmchar}, and we shall write $\mathbb{E}^{\text{char}}$ to denote averaging over all Dirichlet characters mod $r$. Thus if $W(\chi)$ is any function, then $\mathbb{E}^{\text{char}} W := \frac{1}{r-1} \sum_{\chi \; \text{mod} \; r} W(\chi)$. This notation is intended both to simplify the writing, and to emphasise the connection between the arguments here and those in the random multiplicative function case. \vspace{12pt} The introduction to the author's paper~\cite{harperrmflowmoments} contains a detailed outline of the proof of the estimate \eqref{precisehelson} for $\mathbb{E}|\sum_{n \leq x} f(n)|^{2q}$. The first phase of that proof involves showing that $$ \mathbb{E}|\sum_{n \leq x} f(n)|^{2q} \approx x^{q} \mathbb{E}\Biggl(\frac{1}{\log x} \int_{-1/2}^{1/2} |F(1/2+it)|^2 dt \Biggr)^{q} , $$ where $F(s) := \prod_{p \leq x} (1 - \frac{f(p)}{p^{s}})^{-1}$ is a suitable random Euler product. The second phase is to show that $\mathbb{E} (\frac{1}{\log x} \int_{-1/2}^{1/2} |F(1/2+it)|^2 dt )^{q} \approx (\frac{1}{1 + (1-q)\sqrt{\log\log x}} )^q$, using ideas from the probabilistic theory of {\em critical multiplicative chaos}. 
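The normalisation $x^q$ in the first phase reflects the exact second moment $\mathbb{E}|\sum_{n \leq x} f(n)|^2 = \lfloor x \rfloor$, which follows from the orthogonality $\mathbb{E} f(n) \overline{f(m)} = \textbf{1}_{n=m}$. One can check this exactly in a toy case with the following purely illustrative Python sketch, in which fourth roots of unity stand in for the uniform distribution on the unit circle (harmless for the second moment at this value of $x$), and the choice $x = 12$ is arbitrary.

```python
# Illustrative exact computation of E|sum_{n <= x} f(n)|^2 for a Steinhaus
# random multiplicative function, with x = 12 (an arbitrary small choice).
# Fourth roots of unity stand in for the uniform unit circle distribution:
# this leaves the second moment unchanged, because the exponent of each
# prime in the factorisation of any n <= 12 is at most 3, so the
# orthogonality E f(n) conj(f(m)) = 1_{n=m} still holds exactly.
import cmath
import itertools

x = 12
primes = [2, 3, 5, 7, 11]  # all primes up to x
K = 4                      # K-th roots of unity in place of the full circle

def exponents(n):
    """Exponent vector of n over the prime list."""
    e = []
    for p in primes:
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        e.append(c)
    return e

exps = [exponents(n) for n in range(1, x + 1)]
omega = cmath.exp(2j * cmath.pi / K)

# Enumerate all K^5 assignments of K-th roots of unity to f(2), ..., f(11).
moment = 0.0
for a in itertools.product(range(K), repeat=len(primes)):
    total = sum(omega ** sum(ai * ei for ai, ei in zip(a, e)) for e in exps)
    moment += abs(total) ** 2
moment /= K ** len(primes)

assert abs(moment - x) < 1e-8   # E|sum_{n <= x} f(n)|^2 = floor(x)
```

Fourth roots suffice here only because no exponent in the factorisations of $n \leq 12$ exceeds $3$; for larger $x$ one would enlarge the root of unity accordingly.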
To prove Theorem \ref{mainthmchar}, we shall rework the first phase of the random multiplicative function proof to show, roughly speaking, that \begin{equation}\label{charbyranddisplay} \mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q} \ll x^{q} \mathbb{E}\Biggl(\frac{1}{\log P} \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2+it)|^2 dt \Biggr)^{q} , \end{equation} where $F_{P}^{\text{rand}}(s) := \prod_{p \leq P} (1 - \frac{f(p)}{p^s})^{-1}$ for a suitable parameter $P = P(r,x)$. Notice that on the left hand side we have our character average, but we will show this can be bounded by the expectation of the genuinely random quantity on the right. Thus we will {\em not} need to rework the second phase of the random multiplicative function proof, since we will be able to apply this directly once we have established a bound like \eqref{charbyranddisplay}. \vspace{12pt} In \cite{harperrmflowmoments}, to make the connection between $\mathbb{E}|\sum_{n \leq x} f(n)|^{2q}$ and a random Euler product one conditions on (i.e. freezes) the values $f(p)$ for all ``small'' primes $p$, before applying H\"{o}lder's inequality to compare with the conditional second moment. We shall review that argument now, but for simplicity only for $\sum_{n \leq x, P(n) > \sqrt{x}} f(n)$, the subsum over numbers whose largest prime factor $P(n)$ is greater than $\sqrt{x}$. By multiplicativity we can write $\sum_{n \leq x, P(n) > \sqrt{x}} f(n) = \sum_{\sqrt{x} < p \leq x} f(p) \sum_{m \leq x/p} f(m)$, and note that in the inner sums we have $m \leq x/p < \sqrt{x}$, so those values are entirely determined by the $(f(p))_{p \leq \sqrt{x}}$. Then if we let $\tilde{\mathbb{E}}$ denote expectation conditional on the values $(f(p))_{p \leq \sqrt{x}}$ (i.e. 
expectation with those values treated as fixed and the $(f(p))_{p > \sqrt{x}}$ remaining random, so the conditional expectation of any quantity is a function of the values $(f(p))_{p \leq \sqrt{x}}$), we get $$ \tilde{\mathbb{E}} \Biggl| \sum_{n \leq x, P(n) > \sqrt{x}} f(n) \Biggr|^{2} = \sum_{\sqrt{x} < p,q \leq x} \tilde{\mathbb{E}}\Biggl( f(p) (\sum_{m \leq x/p} f(m)) \overline{f(q)} (\sum_{m \leq x/q} \overline{f(m)}) \Biggr) = \sum_{\sqrt{x} < p \leq x} \Biggl| \sum_{m \leq x/p} f(m) \Biggr|^2 , $$ because the $(f(p))_{p > \sqrt{x}}$ {\em remain independent with mean zero}. But by the Tower Property of conditional expectation (which in this case is simply Fubini's theorem, breaking up the multiple ``integration'' $\mathbb{E}$ into separate integrations corresponding to the $(f(p))_{p \leq \sqrt{x}}$ and the $(f(p))_{p > \sqrt{x}}$), we can write $\mathbb{E}|\sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q} = \mathbb{E} \tilde{\mathbb{E}} | \sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q}$. So applying H\"{o}lder's inequality to the {\em conditional expectation} $\tilde{\mathbb{E}}$, we get $$ \mathbb{E}\Biggl|\sum_{n \leq x, P(n) > \sqrt{x}} f(n) \Biggr|^{2q} \leq \mathbb{E} \Biggl( \tilde{\mathbb{E}} \Biggl| \sum_{n \leq x, P(n) > \sqrt{x}} f(n) \Biggr|^{2} \Biggr)^{q} = \mathbb{E} \Biggl( \sum_{\sqrt{x} < p \leq x} \Biggl| \sum_{m \leq x/p} f(m) \Biggr|^2 \Biggr)^{q} . $$ Having reached this point, one can perform some smoothing and use a suitable form of Parseval's identity to relate the right hand side to an Euler product average like $\mathbb{E}(\frac{1}{\log x} \int_{-1/2}^{1/2} |F(1/2+it)|^2 dt )^{q}$. The factor $\frac{1}{\log x}$ reflects the density of primes on the interval $(\sqrt{x},x]$. To prove Theorem \ref{mainthmchar}, we must find a version of the above procedure that can succeed for character averages, and ultimately deliver \eqref{charbyranddisplay}. 
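The conditional identity $\tilde{\mathbb{E}} |\sum_{n \leq x, P(n) > \sqrt{x}} f(n)|^2 = \sum_{\sqrt{x} < p \leq x} |\sum_{m \leq x/p} f(m)|^2$ can itself be verified exactly in a toy case. In the purely illustrative Python sketch below we take $x = 30$, freeze arbitrary unimodular values of $f$ at the primes $2$ and $3$ (the value at $5 \leq \sqrt{30}$ would also be frozen, but never appears since every inner sum runs over $m \leq 4$), and average exactly over the primes in $(\sqrt{30}, 30]$, with cube roots of unity substituted for the uniform circle distribution, which is harmless for second moments.

```python
# Illustrative exact check of the conditional second moment identity:
# with f(p) frozen for p <= sqrt(x), averaging over the remaining f(p) gives
#   E-tilde |sum_{n <= x, P(n) > sqrt(x)} f(n)|^2
#     = sum_{sqrt(x) < p <= x} |sum_{m <= x/p} f(m)|^2.
# Here x = 30; the frozen values f(2), f(3) are arbitrary unimodular numbers.
# Cube roots of unity replace the uniform circle distribution, which is
# harmless for second moments since E omega^a = 0 unless a = 0 mod 3.
import cmath
import itertools

x = 30
f2, f3 = cmath.exp(0.7j), cmath.exp(2.1j)   # frozen small-prime values
f_small = {1: 1, 2: f2, 3: f3, 4: f2 ** 2}  # f(m) for m <= 4, by multiplicativity
large = [7, 11, 13, 17, 19, 23, 29]         # the primes in (sqrt(30), 30]

# Inner sums S_p = sum_{m <= x/p} f(m), determined by the frozen values.
S = [sum(f_small[m] for m in range(1, x // p + 1)) for p in large]

omega = cmath.exp(2j * cmath.pi / 3)
lhs = 0.0
for a in itertools.product(range(3), repeat=len(large)):
    total = sum(omega ** a[i] * S[i] for i in range(len(large)))
    lhs += abs(total) ** 2
lhs /= 3 ** len(large)

rhs = sum(abs(s) ** 2 for s in S)
assert abs(lhs - rhs) < 1e-9
```

Finding a conditioning device of this kind that works for averages over Dirichlet characters, rather than over a genuinely random $f$, is the next task.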
This is challenging for two related reasons: when averaging over characters $\chi$ mod $r$, the values $\chi(p)$ for different primes $p$ are not really independent of one another; and since there are only $r-1$ characters, we cannot hope for the behaviour of the $\chi(n)$ to really resemble the random model $f(n)$ on events that occur with probability much smaller than $1/(r-1)$ on the random side. In particular, if we try to condition (freeze) the values $\chi(p)$ for too many primes $p$, there will generally be zero or precisely one character with the values $\chi(p)$ as specified, and we cannot hope for conditioning on such configurations to match the calculations in the random multiplicative case. \vspace{12pt} To match character averages with expectations involving random multiplicative functions, we must confine ourselves to working with {\em polynomial expressions of suitable degree} in the various $\chi(p)$. Indeed, if $p_1, ..., p_k, p_{k+1}, ..., p_l$ are any (not necessarily distinct) primes such that $\prod_{j=1}^{k} p_j, \prod_{j=k+1}^{l} p_j < r$, we have a genuine equality \begin{eqnarray}\label{charrmfexp} \mathbb{E}^{\text{char}} \prod_{j=1}^{k} \chi(p_j) \prod_{j=k+1}^{l} \overline{\chi(p_j)} & = & \mathbb{E}^{\text{char}} \chi(\prod_{j=1}^{k} p_j) \overline{\chi(\prod_{j=k+1}^{l} p_j)} = \textbf{1}_{\prod_{j=1}^{k} p_j \equiv \prod_{j=k+1}^{l} p_j \; \text{mod} \; r} \nonumber \\ & = & \textbf{1}_{\prod_{j=1}^{k} p_j = \prod_{j=k+1}^{l} p_j} = \mathbb{E} \prod_{j=1}^{k} f(p_j) \prod_{j=k+1}^{l} \overline{f(p_j)} . \end{eqnarray} Here we used orthogonality of Dirichlet characters; the size restrictions on $\prod_{j=1}^{k} p_j$ and $\prod_{j=k+1}^{l} p_j$ (to crucially replace a congruence mod $r$ by an equality); and finally the orthogonality property $\mathbb{E} f(n) \overline{f(m)} = \textbf{1}_{n=m}$ of Steinhaus random multiplicative functions. 
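The computation \eqref{charrmfexp} is easy to see in action numerically. In the illustrative Python sketch below (with the arbitrary choices $r = 11$ and primitive root $2$), the character average detects congruence mod $r$, and therefore matches the Steinhaus orthogonality $\mathbb{E} f(U) \overline{f(V)} = \textbf{1}_{U=V}$ precisely when the products involved are smaller than $r$.

```python
# Numerical illustration of the orthogonality computation \eqref{charrmfexp}:
# averaging chi(U) * conj(chi(V)) over all characters mod a prime r detects
# the congruence U = V mod r, which matches the Steinhaus orthogonality
# E f(U) conj(f(V)) = 1_{U=V} only while the products stay below r.
# The choices r = 11 and primitive root g = 2 are arbitrary.
import cmath

r, g = 11, 2

# Discrete logarithm table: ind[n] = k where g^k = n mod r.
ind = {}
val = 1
for k in range(r - 1):
    ind[val] = k
    val = (val * g) % r

def char_avg(U, V):
    """(1/(r-1)) * sum over all chi mod r of chi(U) * conj(chi(V))."""
    s = sum(cmath.exp(2j * cmath.pi * j * (ind[U % r] - ind[V % r]) / (r - 1))
            for j in range(r - 1))
    return s / (r - 1)

assert abs(char_avg(6, 6) - 1) < 1e-9   # U = V < r: average is 1
assert abs(char_avg(6, 5)) < 1e-9       # U, V incongruent mod r: average is 0
assert abs(char_avg(15, 4) - 1) < 1e-9  # 15 = 4 mod 11: average 1 although 15 != 4
```

The final check ($U = 15 \equiv 4$ mod $11$) illustrates why the size restrictions on the products of primes are essential: once a product exceeds $r$, the character average only sees it modulo $r$, and the match with the random model breaks down.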
To exploit \eqref{charrmfexp}, the key observation is that we did not really need to condition on the exact values of all $(f(p))_{p \leq \sqrt{x}}$ to carry our overall argument through on the random multiplicative side. Since we ultimately bound $\sum_{\sqrt{x} < p \leq x} | \sum_{m \leq x/p} f(m) |^2$ by something like $\frac{1}{\log x} \int_{-1/2}^{1/2} |F(1/2+it)|^2 dt$, the proof could proceed very similarly if we conditioned on any information about the $f(p)$ that is sufficient to roughly fix the value of $\int_{-1/2}^{1/2} |F(1/2+it)|^2 dt$. For each $t$, the value of $|F(1/2+it)|$ is the exponential of a prime number sum like $\Re \sum_{p \leq \sqrt{x}} \frac{f(p)}{p^{1/2+it}}$. Furthermore, such prime number sums do not usually vary much when $t$ varies by less than $1/\log x$ (say), because $p^{it} = e^{it\log p}$ does not vary much when $t$ varies by much less than $1/\log p$. So if we replace $\tilde{\mathbb{E}}$ in the above description by a {\em coarser conditioning}, only on the values of $\Re \sum_{p \leq \sqrt{x}} \frac{f(p)}{p^{1/2+it}}$ at some net of $t$ values with spacing roughly $1/\log x$, (and in fact we only need to know $\Re \sum_{p \leq \sqrt{x}} \frac{f(p)}{p^{1/2+it}}$ to a precision of $O(1)$), this should fix enough information on the inside of the $q$-th power that we still end up with an overall bound of roughly the shape $\mathbb{E}(\frac{1}{\log x} \int_{-1/2}^{1/2} |F(1/2+it)|^2 dt )^{q}$. 
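The slow variation of the prime sums can be quantified in mean square: at spacing $\delta = 1/\log P$, we have $\mathbb{E}|\sum_{p \leq P} \frac{f(p)(p^{-i\delta} - 1)}{\sqrt{p}}|^2 = \sum_{p \leq P} \frac{|p^{-i\delta} - 1|^2}{p}$, which remains bounded, whereas the variance $\sum_{p \leq P} 1/p \sim \log\log P$ of the sum itself grows. Both quantities are deterministic, so this is easy to check directly; the following illustrative Python computation (with the arbitrary choice $P = 10^6$) does so.

```python
# Deterministic computation quantifying slow variation in mean square:
# at spacing delta = 1/log P, the increment of the prime sum satisfies
#   E |sum_{p <= P} f(p)(p^{-i delta} - 1)/sqrt(p)|^2
#     = sum_{p <= P} |p^{-i delta} - 1|^2 / p  =: D,
# which stays bounded, while the variance of the sum itself,
#   V = sum_{p <= P} 1/p ~ loglog P, grows.  P = 10^6 is an arbitrary choice.
import math

P = 10 ** 6

# Sieve of Eratosthenes up to P.
sieve = bytearray([1]) * (P + 1)
sieve[0:2] = b"\x00\x00"
for i in range(2, int(P ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = bytes((P - i * i) // i + 1)
primes = [n for n in range(2, P + 1) if sieve[n]]

delta = 1 / math.log(P)
V = sum(1 / p for p in primes)
# |p^{-i delta} - 1|^2 = 4 sin^2(delta * log(p) / 2)
D = sum(4 * math.sin(delta * math.log(p) / 2) ** 2 / p for p in primes)

assert D < V / 3   # mean-square increment much smaller than the variance
```

Taking $P$ larger only widens the gap, since the increment term remains $O(1)$ while the variance grows like $\log\log P$; this is what makes a net of $t$ values with spacing roughly $1/\log P$ adequate for the conditioning.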
Now translating to character sums, the analogue of breaking up $\mathbb{E}|\sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q}$ by writing $\mathbb{E}|\sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q} = \mathbb{E} \tilde{\mathbb{E}} | \sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q}$ will be (roughly) to write $$ \mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q} = \sum_{\textbf{j}} \mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2q} , $$ where $G_{\textbf{j}}(\chi)$ are functions satisfying $\sum_{\textbf{j}} G_{\textbf{j}}(\chi) \equiv 1$ that approximately pick out all characters $\chi$ for which sums like $\Re \sum_{p \leq \sqrt{x}} \frac{\chi(p)}{p^{1/2+it}}$ have a given collection of values. We can apply H\"{o}lder's inequality to each inner average $\mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2q}$ here, as we did with $\tilde{\mathbb{E}} | \sum_{n \leq x, P(n) > \sqrt{x}} f(n) |^{2q}$ previously, and end up needing to work with quantities like $\mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2}$. Furthermore, if the functions $G_{\textbf{j}}$ are sufficiently nice we could hope to approximate $G_{\textbf{j}}(\chi)$ by a polynomial in the $\chi(p)$, at which point we might be able to invoke \eqref{charrmfexp} and replace $\mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2}$ by $\mathbb{E} G_{\textbf{j}}(f) |\sum_{n \leq x} f(n) |^{2}$. This is a fairly good high level description of how the proof of Theorem \ref{mainthmchar} proceeds, and so a key tool will be a collection of nice functions forming a smooth partition of unity, from which we can ultimately form our multipliers $G_{\textbf{j}}$ by taking suitable products (corresponding to all the different $t$ values in our net). See Approximation Result \ref{apres}, in section \ref{subsecpartition}, for the technical statement we use. \vspace{12pt} But there is an important issue that must be addressed. 
Notice that the conductor $r$ doesn't appear in the preceding paragraph, and one doesn't yet see how the quantity $\log\log(10L_r)$ will arise in Theorem \ref{mainthmchar} in place of $\log\log x$. The point is that in \eqref{charrmfexp} we must ensure, when everything is expanded out, that our products of primes are smaller than $r$. See Propositions \ref{condexpprop} and \ref{boxprobprop}, in section \ref{subsecmeans}, for the precise statements that we will use to compare $\mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2}$ with $\mathbb{E} G_{\textbf{j}}(f) |\sum_{n \leq x} f(n) |^{2}$, where it is crucial that the prime sums $\Re \sum_{p \leq P} \frac{\chi(p)}{p^{1/2+it}}$ involved run up to $P$ for some $P$ that is suitably small compared with $r/x$. Indeed, for prime number sums of length $P$ we should work with a net of $t$ values with spacing roughly $1/\log P$, so with roughly $\log P$ different sums. And for each of those, when we apply Taylor's theorem to expand its contribution to $G_{\textbf{j}}(\chi)$ we must expand as far as degree $\log^{O(1)}P$ to achieve a suitable level of precision. Since the square $|\sum_{n \leq x} \chi(n) |^{2}$ also contributes terms $\chi(n), \overline{\chi(m)}$ with $n,m$ up to $x$ in size, we must have $x P^{\log^{O(1)}P} < r$ to match up character sum and random multiplicative function expressions. In particular, we cannot work with $P=\sqrt{x}$ here unless $x$ is rather small compared with $r$. The largest permissible choice of $P$ will rather be something like $e^{\log^{c}(r/x)}$, for a suitable constant $c$. Since we also want $P < x$ (it would not make sense to include sums over primes $> x$, which are not involved in $\sum_{n \leq x} \chi(n)$, in the conditioning), one reasonable choice turns out to be $P \approx \exp\{\log^{1/6}L_r\}$. Fortunately, for an upper bound (but not for a lower bound) we are free to select the range of primes involved in our ``conditioning'' as we wish. 
(This observation would actually allow some simplification of the upper bound arguments in the purely random setting~\cite{harperrmflowmoments} as well. Rather than breaking up $\sum_{n \leq x} f(n)$ into various subsums according to the size of the largest prime factor $P(n)$, as there, and then conditioning on all somewhat smaller primes, we can simply condition the full sum on all primes up to one suitably chosen point.) As $P$ becomes smaller, the possible saving $(\frac{1}{1 + (1-q)\sqrt{\log\log P}} )^q$ that we can ultimately achieve using multiplicative chaos results will diminish, but since $\log\log P$ varies so slowly we can vary $P$ quite a lot without visibly changing the final bounds. In particular, note that although our choice $P \approx \exp\{\log^{1/6}L_r\}$ is (necessarily) significantly smaller than $L_r$, we still have $\log\log P \asymp \log\log L_r$ and so we get the desired factor $\frac{1}{1 + (1-q)\sqrt{\log\log(10L)}}$ in Theorem \ref{mainthmchar}. In sections \ref{thm1small}--\ref{subsecmaingorandom}, we implement an argument along the above lines and succeed in replacing all terms of the form $\mathbb{E}^{\text{char}} G_{\textbf{j}}(\chi) |\sum_{n \leq x} \chi(n) |^{2}$ by $\mathbb{E} G_{\textbf{j}}(f) |\sum_{n \leq x} f(n) |^{2}$, thus passing to the purely random setting. However, because this is done in the setup of coarser conditioning, more work remains to confirm that the resulting random multiplicative function expression can really still be bounded by something like $\mathbb{E}(\frac{1}{\log P} \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2+it)|^2 dt )^{q}$. Initially one can apply a similar kind of smoothing and Parseval procedure as was originally done in \cite{harperrmflowmoments}, see section \ref{subsecparseval} (where we recall a version of Parseval's identity for Dirichlet series) and section \ref{subsecpasseuler} below. 
This brings us to random Euler products, but with a (weighted) sum over $\textbf{j}$ on the outside of the $q$-th power in place of a ``genuine'' expectation $\mathbb{E}$, and with some additional averaging (weighted by $G_{\textbf{j}}(f)$) on the inside of the $q$-th power. Since each term $G_{\textbf{j}}(f)$ only contains information about $\Re \sum_{p \leq P} \frac{f(p)}{p^{1/2+it}}$ at a net of points $t$, as opposed to all $-1/2 \leq t \leq 1/2$ or all $t \in \mathbb{R}$, we want to restrict to working with Euler products at those $t$. However, provided the net of $t$ are sufficiently close together (slightly closer than $1/\log P$), this can be achieved with simple mean square arguments. Finally, we must check that the weighting by $G_{\textbf{j}}(f)$ fixes enough information about all the sums $\Re \sum_{p \leq P} \frac{f(p)}{p^{1/2+it}}$ that, even with this extra averaging on the inside of the $q$-th power, we still end up with something roughly like $\mathbb{E}(\frac{1}{\log P} \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2+it)|^2 dt )^{q}$ (or actually a version of this with the integral replaced by a sum over our discrete points $t$). Provided the parameters were selected properly in terms of $P$ when constructing the smooth functions underlying $G_{\textbf{j}}(\cdot)$, (which is ultimately why one must Taylor expand as far as degree $\log^{O(1)}P$ in the above discussion), it turns out that averaging against $G_{\textbf{j}}(f)$ does essentially restrict all the sums $\Re \sum_{p \leq P} \frac{f(p)}{p^{1/2+it}}$ to their intended boxes depending on $\textbf{j}$. Thus the weighted outer sum over $\textbf{j}$ does perform essentially the same role as a genuine expectation, and we do arrive at (a discretised version of) $\mathbb{E}(\frac{1}{\log P} \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2+it)|^2 dt )^{q}$. 
This argument is completed in sections \ref{subsecrefinecond}--\ref{subsecconclusion}, using properties of the functions from Approximation Result \ref{apres} and using multiplicative chaos results presented in section \ref{subsecrandeuler}. \section{Preparations}\label{secprep} \subsection{A smooth partition of unity}\label{subsecpartition} As explained in the introduction, one important tool for us will be a collection of fairly well behaved functions that we can use to approximately detect the values of various sums involving $\chi(p)$, and therefore simulate a conditioning process in our main proofs. These sorts of constructions are fairly standard in modern analysis and number theory, and it will not be too difficult to prove the following. \begin{approxres1}\label{apres} Let $N \in \mathbb{N}$ be large, and $\delta > 0$ be small. There exist functions $g : \mathbb{R} \rightarrow \mathbb{R}$ (depending on $\delta$) and $g_{N+1} : \mathbb{R} \rightarrow \mathbb{R}$ (depending on $\delta$ and $N$) such that, if we define $g_{j}(x) = g(x - j)$ for all integers $|j| \leq N$, we have the following properties: \begin{enumerate} \item $\sum_{|j| \leq N} g_{j}(x) + g_{N+1}(x) = 1$ for all $x \in \mathbb{R}$; \item $g(x) \geq 0$ for all $x \in \mathbb{R}$, and $g(x) \leq \delta$ whenever $|x| > 1$; \item $g_{N+1}(x) \geq 0$ for all $x \in \mathbb{R}$, and $g_{N+1}(x) \leq \delta$ whenever $|x| \leq N$; \item for all $l \in \mathbb{N}$ and all $x \in \mathbb{R}$, we have the derivative estimate $|\frac{d^{l}}{dx^{l}} g(x)| \leq \frac{1}{\pi (l+1)} (\frac{2\pi}{\delta})^{l+1}$. \end{enumerate} \end{approxres1} \begin{proof}[Proof of Approximation Result \ref{apres}] Our construction will be a minor variant of the proof of Approximation Result 1 of Harper~\cite{harperpartition} (with $R=0$ there). As such, we content ourselves with outlining the main steps. 
Let $b(x)$ be a Beurling--Selberg function majorising the indicator function $\textbf{1}_{|x| \leq 1/2}$, with Fourier transform supported on $[-1/\delta,1/\delta]$. See e.g. Vaaler's paper~\cite{vaaler} for background on such majorants. Thus we have $b(x) \geq \textbf{1}_{|x| \leq 1/2}$ for all $x \in \mathbb{R}$; and $\int_{-\infty}^{\infty} b(x) dx = 1 + \delta$; and $b(x) = \int_{-1/\delta}^{1/\delta} \hat{b}(t) e^{2\pi i xt} dt$ for all $x \in \mathbb{R}$, where $|\hat{b}(t)| = |\int b(x) e^{-2\pi i x t} dx| \leq 1 + \delta$. We define $g(x)$ as a convolution of $b$, namely $$ g(x) = \int_{-\infty}^{\infty} \textbf{1}_{|u| \leq 1/2} \frac{b(x-u)}{1+\delta} du = \int_{-\infty}^{\infty} \textbf{1}_{|x-u| \leq 1/2} \frac{b(u)}{1+\delta} du . $$ Then it is clear that $g(x)$ is non-negative, since $b(x)$ is non-negative. The other claims about $g(x)$ in (ii) and (iv) follow identically as in Harper~\cite{harperpartition}. Now by definition of $g$ and $g_j$, the sum $\sum_{|j| \leq N} g_{j}(x)$ is $$ = \int_{-\infty}^{\infty} \sum_{|j| \leq N} \textbf{1}_{|x - j -u| \leq 1/2} \frac{b(u)}{1 + \delta} du = \int_{-\infty}^{\infty} \textbf{1}_{|x-u| \leq N+1/2} \frac{b(u)}{1 + \delta} du $$ for all real $x$. Thus we always have $\sum_{|j| \leq N} g_{j}(x) \leq \int_{-\infty}^{\infty} \frac{b(u)}{1 + \delta} du = 1$, and furthermore $1 - \sum_{|j| \leq N} g_{j}(x)$ is equal to $$ \int_{-\infty}^{\infty} \textbf{1}_{|x-u| > N+1/2} \frac{b(u)}{1 + \delta} du = \int_{-\infty}^{\infty} \textbf{1}_{|x-u| > N+1/2} \frac{b(u) - \textbf{1}_{|u| \leq 1/2}}{1 + \delta} du + \int_{-\infty}^{\infty} \textbf{1}_{|x-u| > N+1/2} \frac{\textbf{1}_{|u| \leq 1/2}}{1 + \delta} du . $$ When $|x| \leq N$ the second integral here vanishes, and so $$ 1 - \sum_{|j| \leq N} g_{j}(x) \leq \int_{-\infty}^{\infty} \frac{|b(u) - \textbf{1}_{|u| \leq 1/2}|}{1+\delta} du = \int_{-\infty}^{\infty} \frac{b(u) - \textbf{1}_{|u| \leq 1/2}}{1+\delta} du = \frac{\delta}{1+\delta} \leq \delta . 
$$ So the first and third statements in Approximation Result \ref{apres} follow if we simply set $g_{N+1}(x):= 1 - \sum_{|j| \leq N} g_{j}(x)$ for all $x \in \mathbb{R}$. \end{proof} \subsection{Mean value estimates}\label{subsecmeans} Having introduced approximating functions as in Approximation Result \ref{apres}, our arguments will require us to evaluate various character averages involving these functions. A basic tool for this will be the following bound, which is a fairly standard even moment estimate for character sums of appropriate lengths. \begin{lem1}[Even moment estimate]\label{evenmomentlem} Let $x \geq 1$, and let $(c(n))_{n \leq x}$ be any complex numbers. Let $\mathcal{P}$ be any finite set of primes, let $\mathcal{Q}$ be any (non-empty) set consisting of some elements of $\mathcal{P}$ and squares of elements of $\mathcal{P}$, and write $U := \max\{q \in \mathcal{Q}\}$ . Finally, let $Q(\chi) := \sum_{q \in \mathcal{Q}} \frac{a(q) \chi(q)}{\sqrt{q}}$, where the $a(q)$ are any complex numbers. Then for any natural number $k$ such that $x U^k < r$, we have $$ \mathbb{E}^{\text{char}} \Biggl|\sum_{n \leq x} c(n) \chi(n) \Biggr|^{2} |Q(\chi)|^{2k} \ll \Biggl(\sum_{n \leq x} \tilde{d}(n) |c(n)|^2 \Biggr) \cdot (k !) \Biggl( 2 \sum_{q \in \mathcal{Q}} \frac{v_q |a(q)|^2}{q} \Biggr)^{k} , $$ where $\tilde{d}(n) := \sum_{d|n} \textbf{1}_{p|d \Rightarrow p \in \mathcal{P}}$, and $v_q$ is 1 if $q$ is a prime and 6 if $q$ is the square of a prime. \end{lem1} \begin{proof}[Proof of Lemma \ref{evenmomentlem}] This is a character sum version of Lemma 2 of Harper~\cite{harperpartition}, which dealt with $t$-averages of $\sum_{n \leq x} c(n) n^{-it}$ and $\sum_{q \in \mathcal{Q}} \frac{a(q)}{q^{1/2 + it}}$. It may be proved in the same way as that result (in fact slightly more easily, since for Dirichlet characters one has perfect rather than approximate orthogonality). 
\end{proof} Using Taylor expansion and Lemma \ref{evenmomentlem} we can deduce the following crucial Proposition, which guarantees that (provided we keep sufficient control on the sizes of the various parameters) character averages involving the functions $g_j$ behave in the same way as the corresponding averages involving random multiplicative functions. \begin{prop1}[Characters behave like random model]\label{condexpprop} Let the functions $g_j$, with associated parameters $N, \delta$, be as in Approximation Result \ref{apres}. Suppose that $x \geq 1$, and let $(c(n))_{n \leq x}$ be any complex numbers having absolute values $\leq 1$. Furthermore, let $P$ be large, and let $Y \in \mathbb{N}$ be such that $x P^{400(Y/\delta)^2 \log(N\log P)} < r$. Let $f(n)$ denote a Steinhaus random multiplicative function. Then for any indices $-N \leq j(1), j(2), ..., j(Y) \leq N+1$, and for any sequences $(a_{1}(p))_{p \leq P}, (a_{1}(p^2))_{p \leq P}, ..., (a_{Y}(p))_{p \leq P}, (a_{Y}(p^2))_{p \leq P}$ of complex numbers having absolute values $\leq 1$, we have \begin{eqnarray} && \mathbb{E}^{\text{char}} \prod_{i=1}^{Y} g_{j(i)}(\Re(\sum_{p \leq P} \frac{a_{i}(p) \chi(p)}{\sqrt{p}} + \frac{a_{i}(p^2) \chi(p^2)}{p})) \Biggl|\sum_{n \leq x} c(n) \chi(n) \Biggr|^2 \nonumber \\ & = & \mathbb{E} \prod_{i=1}^{Y} g_{j(i)}(\Re(\sum_{p \leq P} \frac{a_{i}(p) f(p)}{\sqrt{p}} + \frac{a_{i}(p^2) f(p^2)}{p})) \Biggl|\sum_{n \leq x} c(n) f(n) \Biggr|^2 + O\left(\frac{x}{(N \log P)^{Y(1/\delta)^2}} \right) . 
\nonumber \end{eqnarray} \end{prop1} \begin{proof}[Proof of Proposition \ref{condexpprop}] Using property (iv) from Approximation Result \ref{apres}, for any threshold $2S \in 2\mathbb{N}$ we can write $g_{j}(x) = \tilde{g}_{j}(x) + r_{j}(x)$, where $\tilde{g}_{j}(\cdot)$ is a polynomial of degree $2S-1$ (namely the degree $2S-1$ Taylor polynomial of $g_{j}(x)$ about zero), and where $|r_{j}(x)| \leq \frac{|x|^{2S}}{(2S)!} \sup_{|y| \leq |x|} |\frac{d^{2S}}{dy^{2S}} g_{j}(y)| \ll \frac{N |2\pi x/\delta|^{2S}}{\delta S (2S)!}$. (Note that the factor $N$ here is to account for the case where $g_{j}(y) = g_{N+1}(y) = 1 - \sum_{|i| \leq N} g_{i}(y)$.) First we examine the contribution from the ``main terms'' $\tilde{g}_{j}(\cdot)$ to the left hand side in the Proposition. Provided that $x P^{4SY} < r$, we can expand all the polynomials and the square out and find that \begin{eqnarray} && \mathbb{E}^{\text{char}} \prod_{i=1}^{Y} \tilde{g}_{j(i)}(\Re(\sum_{p \leq P} \frac{a_{i}(p) \chi(p)}{\sqrt{p}} + \frac{a_{i}(p^2) \chi(p^2)}{p})) \Biggl|\sum_{n \leq x} c(n) \chi(n) \Biggr|^2 \nonumber \\ & = & \mathbb{E} \prod_{i=1}^{Y} \tilde{g}_{j(i)}(\Re( \sum_{p \leq P} \frac{a_{i}(p) f(p)}{\sqrt{p}} + \frac{a_{i}(p^2) f(p^2)}{p})) \Biggl|\sum_{n \leq x} c(n) f(n) \Biggr|^2 , \nonumber \end{eqnarray} since if $1 \leq U,V < r$ then we have the equality $\mathbb{E}^{\text{char}} \chi(U) \overline{\chi(V)} = \textbf{1}_{U=V} = \mathbb{E} f(U) \overline{f(V)}$. (Note that $\mathbb{E} f(U) \overline{f(V)} = \textbf{1}_{U=V}$ for {\em all} natural numbers $U,V$ on the random multiplicative function side, but on the character side one needs the restriction that $1 \leq U,V < r$ to boost a congruence mod $r$ to an equality.) 
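(As a quick check of this orthogonality: assuming, for this illustrative remark, that $\mathbb{E}^{\text{char}}$ denotes the uniform average over all Dirichlet characters mod $r$, and that $\gcd(UV, r) = 1$, the standard orthogonality relations for characters give $$ \mathbb{E}^{\text{char}} \chi(U) \overline{\chi(V)} = \frac{1}{\phi(r)} \sum_{\chi \; \text{mod} \; r} \chi(U) \overline{\chi(V)} = \textbf{1}_{U \equiv V \; \text{mod} \; r} , $$ and since $1 \leq U,V < r$ we have $|U - V| < r$, so the congruence can only hold when $U = V$.)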
Next, dividing up according to the smallest index $i$ at which we get a remainder, we see the contribution from all of the remainders $r_{j(i)}(\cdot)$ to the left hand side in Proposition \ref{condexpprop} is \begin{eqnarray} & \ll & \mathbb{E}^{\text{char}} \sum_{i=1}^{Y} \frac{N |2\pi/\delta|^{2S}}{\delta S (2S)!} |\sum_{p \leq P} \frac{a_{i}(p) \chi(p)}{\sqrt{p}} + \frac{a_{i}(p^2) \chi(p^2)}{p}|^{2S} \cdot \nonumber \\ && \cdot \prod_{l=1}^{i-1} \Biggl(1 + O(\frac{N |2\pi/\delta|^{2S}}{\delta S (2S)!} |\sum_{p \leq P} \frac{a_{l}(p) \chi(p)}{\sqrt{p}} + \frac{a_{l}(p^2) \chi(p^2)}{p}|^{2S})\Biggr) \Biggl|\sum_{n \leq x} c(n) \chi(n) \Biggr|^2 . \nonumber \end{eqnarray} Here we used the fact that $|\tilde{g}_{j(l)}(x)| \leq |g_{j(l)}(x)| + |r_{j(l)}(x)| \leq 1 + O(\frac{N |2\pi x/\delta|^{2S}}{\delta S (2S)!})$. Using Lemma \ref{evenmomentlem} and the condition $x P^{4SY} < r$ again, along with the bounds $(jS)! \ll \sqrt{jS} (jS/e)^{jS}$ and $(2S)! \geq (2S/e)^{2S}$ to handle the factorials produced by that lemma, we find this is all \begin{eqnarray} & \ll & \Biggl(\sum_{n \leq x} \tilde{d}(n) |c(n)|^2 \Biggr) \cdot \sum_{i=1}^{Y} \frac{N|2\pi/\delta|^{2S}}{\delta S (2S)!} \Biggl( \sqrt{iS} (\frac{iS}{e})^S (2\log\log P + O(1))^{S} \Biggr) \cdot \nonumber \\ && \cdot \prod_{l=1}^{i-1} \Biggl(1 + O\Biggl(\frac{N|2\pi/\delta|^{2S}}{\delta S (2S)!} (\frac{iS}{e})^S (2\log\log P + O(1))^{S} \Biggr) \Biggr) \nonumber \\ & \ll & x\log P \cdot \sum_{i=1}^{Y} \frac{N}{\delta} \sqrt{\frac{i}{S}} (\frac{e (\pi/\delta)^2 i (2\log\log P + O(1))}{S})^S \cdot \nonumber \\ && \cdot \Biggl( 1 + O\Biggl(\frac{N}{\delta S} (\frac{e (\pi/\delta)^2 i (2\log\log P + O(1))}{S})^S \Biggr) \Biggr)^{i-1} . 
\nonumber \end{eqnarray} Here we also used our assumptions that $|c(n)| \leq 1$ and $|a_{i}(p)|, |a_{i}(p^2)| \leq 1$, which in particular imply that $2\sum_{p \leq P} (\frac{|a_{i}(p)|^2 }{p} + \frac{6|a_{i}(p^2)|^2}{p^2}) \leq 2\sum_{p \leq P} \frac{1}{p} + O(1) = 2\log\log P + O(1)$. One has the same overall bound for the contribution from the remainders $r_{j(i)}(\cdot)$ to the right hand side in Proposition \ref{condexpprop}, since one has the same bound for $\mathbb{E} |\sum_{n \leq x} c(n) f(n) |^{2} |Q(f)|^{2k}$ as for the character average in Lemma \ref{evenmomentlem} (indeed this quantity is again exactly equal to $\mathbb{E}^{\text{char}} |\sum_{n \leq x} c(n) \chi(n) |^{2} |Q(\chi)|^{2k}$, under the size conditions in the lemma). Now if we set $S = 100Y \lfloor (1/\delta)^2 \log(N\log P) \rfloor$, then the condition $x P^{4SY} < r$ is satisfied in view of our assumption that $x P^{400(Y/\delta)^2 \log(N\log P)} < r$. And with this choice, the error term produced by all of the remainders is $$ \ll x\log P \cdot \sum_{i=1}^{Y} \frac{N}{\delta} \sqrt{\frac{i}{S}} 0.6^S \left( 1 + O(\frac{N}{\delta S} 0.6^S ) \right)^{i-1} \ll x\log P \cdot NY \cdot 0.6^S \ll \frac{x}{(N \log P)^{Y(1/\delta)^2}} , $$ as desired. \end{proof} Taking $x=1$ and $c(1)=1$ in Proposition \ref{condexpprop}, we obtain the following important special case. \begin{prop2}\label{boxprobprop} Let the functions $g_j$, with associated parameters $N, \delta$, be as in Approximation Result \ref{apres}. Suppose $P$ is large, and let $Y \in \mathbb{N}$ be such that $P^{400(Y/\delta)^2 \log(N\log P)} < r$. Let $f(n)$ denote a Steinhaus random multiplicative function. 
Then for any indices $-N \leq j(1), j(2), ..., j(Y) \leq N+1$, and for any sequences $(a_{1}(p))_{p \leq P}, (a_{1}(p^2))_{p \leq P}, ..., (a_{Y}(p))_{p \leq P}, (a_{Y}(p^2))_{p \leq P}$ of complex numbers having absolute values $\leq 1$, we have \begin{eqnarray} \mathbb{E}^{\text{char}} \prod_{i=1}^{Y} g_{j(i)}(\Re(\sum_{p \leq P} \frac{a_{i}(p) \chi(p)}{\sqrt{p}} + \frac{a_{i}(p^2) \chi(p^2)}{p})) & = & \mathbb{E} \prod_{i=1}^{Y} g_{j(i)}(\Re(\sum_{p \leq P} \frac{a_{i}(p) f(p)}{\sqrt{p}} + \frac{a_{i}(p^2) f(p^2)}{p})) + \nonumber \\ && + O\left(\frac{1}{(N \log P)^{Y(1/\delta)^2}} \right) . \nonumber \end{eqnarray} \end{prop2} \subsection{Parseval's identity for Dirichlet series}\label{subsecparseval} As in the proof of the random analogue of Theorem \ref{mainthmchar}, we shall need the following version of Parseval's identity for Dirichlet series. \begin{harman1}[See (5.26) in sec. 5.1 of Montgomery and Vaughan~\cite{mv}]\label{harmdirichlet} Let $(a_n)_{n=1}^{\infty}$ be any sequence of complex numbers, and let $A(s) := \sum_{n=1}^{\infty} \frac{a_n}{n^s}$ denote the corresponding Dirichlet series, and $\sigma_c$ denote its abscissa of convergence. Then for any $\sigma > \max\{0,\sigma_c \}$, we have $$ \int_{0}^{\infty} \frac{|\sum_{n \leq x} a_n |^2}{x^{1 + 2\sigma}} dx = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left|\frac{A(\sigma + it)}{\sigma + it}\right|^2 dt . $$ \end{harman1} We shall deploy Harmonic Analysis Result \ref{harmdirichlet} at a point in our argument where we have already used Propositions \ref{condexpprop} and \ref{boxprobprop} to move from studying character averages to studying random multiplicative functions $f(n)$. As such, we will be able to take $a_n$ as the values $f(n)$ restricted to $P$-smooth numbers $n$ (for a certain parameter $P$), and with $A(s)$ being the partial Euler product corresponding to $f(n)$ on all $P$-smooth numbers, similarly as in the random case in \cite{harperrmflowmoments}. 
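Although we shall simply quote this identity, it may be worth sketching where it comes from (a standard argument, under the assumption $\sigma > \max\{0, \sigma_c\}$ of the statement): by partial summation one has $\frac{A(s)}{s} = \int_{1}^{\infty} \frac{\sum_{n \leq x} a_n}{x^{s+1}} dx$, and the substitution $x = e^u$ (with $s = \sigma + it$) turns this into the Fourier transform $$ \frac{A(\sigma + it)}{\sigma + it} = \int_{0}^{\infty} \Biggl(\sum_{n \leq e^u} a_n \Biggr) e^{-\sigma u} e^{-iut} du , $$ so the stated identity is simply Plancherel's theorem applied to the function $u \mapsto (\sum_{n \leq e^u} a_n) e^{-\sigma u}$ (the range $0 < x < 1$ contributing nothing, the sum there being empty).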
Note that we could not proceed in this way working with Dirichlet characters mod $r$ directly, because we would not retain sufficient control on terms in the Euler product corresponding to those $n > r$. \subsection{Random Euler products}\label{subsecrandeuler} The final key tool we require, and the ultimate source of the better than squareroot cancellation that we look to establish in our theorems, is an upper bound for the small moments of a short integral of a random Euler product. This builds on ideas from the probabilistic theory of critical multiplicative chaos. Recall that $(f(p))_{p \; \text{prime}}$ is a sequence of independent random variables distributed uniformly on the complex unit circle, and for any large quantity $P$ and any complex $s$ with $\Re(s) > 0$, let $F_{P}^{\text{rand}}(s) := \prod_{p \leq P} (1 - \frac{f(p)}{p^s})^{-1}$. \begin{multchaos1}[See section 4 of Harper~\cite{harperrmflowmoments}]\label{mcres1} Uniformly for all large $P$ and $2/3 \leq q \leq 1$, we have $$ \mathbb{E}( \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt )^q \ll \Biggl(\frac{\log P}{1 + (1-q)\sqrt{\log\log P}}\Biggr)^{q} . $$ \end{multchaos1} This is proved in section 4 of \cite{harperrmflowmoments} (see the proof of the Theorem 1 upper bound there), in slightly different notation: the product $F_{0}(1/2+it)$ from \cite{harperrmflowmoments} corresponds to $F_{P}^{\text{rand}}(1/2 + it)$ with $P$ replaced by $x^{1/e}$. Although the reader is welcome to treat Multiplicative Chaos Result \ref{mcres1} entirely as a black box for our purposes here, it seems worthwhile to note that the bound is rather subtle. An upper bound $\ll (\log P)^q$ would follow immediately by using H\"{o}lder's inequality to compare with the $q=1$ case, and applying standard Euler product calculations. Obtaining the crucial saving $1 + (1-q)\sqrt{\log\log P}$ in the denominator requires a careful analysis of the behaviour of various subproducts of $F_{P}^{\text{rand}}(1/2 + it)$. 
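To illustrate the easy bound mentioned above: by concavity of $y \mapsto y^q$ (or H\"{o}lder's inequality) and orthogonality, one has $$ \mathbb{E}( \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt )^q \leq \Biggl( \int_{-1/2}^{1/2} \mathbb{E} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt \Biggr)^{q} = \Biggl( \sum_{\substack{n = 1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n} \Biggr)^{q} \ll (\log P)^{q} , $$ on expanding $F_{P}^{\text{rand}}(1/2 + it) = \sum_{n \; P \; \text{smooth}} \frac{f(n)}{n^{1/2+it}}$, using $\mathbb{E} f(m) \overline{f(n)} = \textbf{1}_{m=n}$, and finally Mertens' theorem in the form $\sum_{n \; P \; \text{smooth}} \frac{1}{n} = \prod_{p \leq P} (1 - \frac{1}{p})^{-1} \ll \log P$.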
It is also shown in section 5.2 of Harper~\cite{harperrmflowmoments} that the upper bound in Multiplicative Chaos Result \ref{mcres1} is sharp, but we won't need (or be able) to exploit that here. \vspace{12pt} When dealing with character sums we cannot perform such a precise and complete ``conditioning'' procedure as in the genuine random multiplicative case, so it will be more straightforward (although not absolutely essential) to work with discrete sums rather than integral averages $\int_{-1/2}^{1/2}$ in the argument. We now deduce a discrete version of Multiplicative Chaos Result \ref{mcres1} to fit with this proof structure. \begin{lem2}\label{disccontlem} For any large $P$, we have $$ \mathbb{E} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P} + it) - F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 dt \ll \log^{0.99}P . $$ \end{lem2} \begin{proof}[Proof of Lemma \ref{disccontlem}] Since we have $F_{P}^{\text{rand}}(s) = \sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{f(n)}{n^s}$, with $f(n)$ a Steinhaus random multiplicative function, a mean square calculation shows that the left hand side in Lemma \ref{disccontlem} is \begin{eqnarray} & = & \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} \mathbb{E} |\sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{f(n)}{n^{1/2 + i\frac{k}{\log^{1.01}P}}} (n^{-it} - 1)|^2 dt \nonumber \\ & = & \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} \sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{|n^{-it}-1|^2}{n} dt . \nonumber \end{eqnarray} Here we have $|n^{-it} - 1| \ll \min\{|t|\log n, 1\} \leq \min\{\frac{\log n}{\log^{1.01}P}, 1\}$. 
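(For completeness, the first bound here is elementary: we have $$ |n^{-it} - 1| = |e^{-it \log n} - 1| = 2 |\sin(\frac{t \log n}{2})| \leq \min\{|t| \log n , 2\} , $$ on using $|\sin \theta| \leq \min\{|\theta|, 1\}$.)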
Thus the contribution to the series from those $n \leq P^{\log\log P}$ is $\ll \frac{(\log\log P)^2}{\log^{0.02}P} \sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n}$, and since $\sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n} = \prod_{p \leq P} (1 - \frac{1}{p})^{-1} \ll \log P$ this is all $\ll (\log\log P)^2 \log^{0.98}P$. The contribution to the series from those $n > P^{\log\log P}$ (where we use the trivial bound $|n^{-it} - 1| \ll 1$) is also acceptably small, namely $$ \ll e^{-\log\log P} \sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n^{1-1/\log P}} = e^{-\log\log P} \prod_{p \leq P} (1 - \frac{1}{p^{1-1/\log P}})^{-1} \ll e^{-\log\log P} \log P = 1 . $$ \end{proof} \begin{multchaos2}\label{mcres2} Uniformly for all large $P$ and $2/3 \leq q \leq 1$, we have $$ \mathbb{E}( \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q \ll \Biggl(\frac{\log P}{1 + (1-q)\sqrt{\log\log P}}\Biggr)^{q} . $$ \end{multchaos2} \begin{proof}[Proof of Multiplicative Chaos Result \ref{mcres2}] To deduce this from Multiplicative Chaos Result \ref{mcres1}, it will suffice to prove a suitable upper bound for $$ \mathbb{E}( \sum_{|k| \leq (\log^{1.01}P)/2} \int_{-1/(2\log^{1.01}P)}^{1/(2\log^{1.01}P)} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P} + it) - F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 dt )^q . $$ But using H\"{o}lder's inequality to compare the $q$-th moment with the first moment, we see this quantity is at most $$ \Biggl( \mathbb{E} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P} + it) - F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 dt \Biggr)^q , $$ which we can bound acceptably using Lemma \ref{disccontlem}. 
\end{proof} \section{Proof of Theorem \ref{mainthmchar}}\label{secproof1} \subsection{Notation and set-up}\label{thm1small} We may restrict attention to the range $2/3 \leq q \leq 1$, since if $q$ is smaller we can use H\"{o}lder's inequality to upper bound $\mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q}$ by $(\mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{4/3})^{3q/2}$, and invoke the $q=2/3$ case. Recall that $L := \min\{x,r/x\}$, which we may assume to be large since otherwise the Theorem is trivial. We have a parameter $P$ at our disposal, which we must choose to be comparable to $L$ on a {\em doubly logarithmic} scale, but (it turns out) somewhat smaller than $L$ on a logarithmic scale. In fact it will suffice if $P$ is around $\exp\{\log^{1/6}L\}$, and (for very minor technical reasons) we shall actually choose $P$ to be the largest number below $\exp\{\log^{1/6}L\}$ such that $\log^{0.01}P$ is an integer. Thus we will have $\log P \asymp \log^{1/6}L$ and $\log\log P \asymp \log\log L$, and to prove Theorem \ref{mainthmchar} it will suffice to show that \begin{equation}\label{stpmaindisplay} \mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q} \ll \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log P}} \Biggr)^q . \end{equation} We set $M := 2\log^{1.02}P$, say (note this is an integer with our choice of $P$), and for each integer $k$ satisfying $|k| \leq M$ and each character $\chi$ mod $r$ we set $S_{k}(\chi) := \Re \sum_{p \leq P} (\frac{\chi(p)}{p^{1/2 + ik/\log^{1.01}P}} + \frac{\chi(p)^2}{2p^{1 + 2ik/\log^{1.01}P}})$. These are the prime number sums on which we shall ``condition'' in the next subsection. Finally, recall that in Approximation Result \ref{apres} we have two parameters $N, \delta$ to set, for our construction of functions $g_{j}(\cdot)$ forming a smooth partition of unity. 
We shall make the final choices of these at the end of the proof, but it will turn out that the ``precision'' parameter $\delta$ may be chosen as a suitable negative power of $\log P$, and the ``range'' parameter $N$ (for which we have much flexibility) as a suitable multiple of $\log\log P$, say. \subsection{The conditioning argument}\label{subsecmaincond} Firstly, let $P(n)$ denote the largest prime factor of $n$, and as usual set $\Psi(x,y) := \#\{n \leq x : n \; \text{is} \; y \; \text{smooth}\} = \#\{n \leq x : P(n) \leq y \}$. Then we have $$ \mathbb{E}^{\text{char}} |\sum_{n \leq x, P(n) \leq x^{1/\log\log x}} \chi(n)|^{2q} \leq \Biggl(\mathbb{E}^{\text{char}} |\sum_{n \leq x, P(n) \leq x^{1/\log\log x}} \chi(n)|^{2} \Biggr)^q = \Psi(x,x^{1/\log\log x})^q , $$ by H\"{o}lder's inequality and orthogonality of Dirichlet characters. Using standard smooth number estimates (see Theorem 7.6 of Montgomery and Vaughan~\cite{mv}, for example) this is $\ll (x (\log x)^{-c\log\log\log x})^q$, which is a negligible contribution in Theorem \ref{mainthmchar}. So it will suffice to bound $\mathbb{E}^{\text{char}} |\sum_{n \leq x, P(n) > x^{1/\log\log x}} \chi(n)|^{2q}$. In the random version of the argument, one proceeds by conditioning on the values of $f(p)$ on all ``small'' primes $p$. We now look to emulate this procedure for character sums, breaking up the average $\mathbb{E}^{\text{char}}$ according to the values of all of the sums $S_{k}(\chi)$. 
Thus, recalling that the $g_j$ from Approximation Result \ref{apres} form a partition of unity, we may rewrite $\mathbb{E}^{\text{char}} |\sum_{n \leq x, P(n) > x^{1/\log\log x}} \chi(n)|^{2q}$ as \begin{eqnarray} && \mathbb{E}^{\text{char}} \prod_{i=-M}^{M} (\sum_{j=-N}^{N+1} g_{j}(S_{i}(\chi))) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2q} \nonumber \\ & = & \sum_{-N \leq j(-M), ..., j(0), ..., j(M) \leq N+1} \mathbb{E}^{\text{char}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(\chi)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2q} \nonumber \\ & = & \sum_{-N \leq j(-M) , ... , j(0), ..., j(M) \leq N+1} \sigma(\textbf{j}) \mathbb{E}^{\textbf{j}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2q} , \nonumber \end{eqnarray} where for any $(2M+1)$-vector $\textbf{j}$ from the outer sum we set $\sigma(\textbf{j}) := \mathbb{E}^{\text{char}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(\chi))$, and $\mathbb{E}^{\textbf{j}} W := \sigma(\textbf{j})^{-1} \mathbb{E}^{\text{char}} W \prod_{i=-M}^{M} g_{j(i)}(S_{i}(\chi))$ for all functions $W(\chi)$. This $\mathbb{E}^{\textbf{j}}$ is our character sum version of a conditional expectation. In particular, for the constant function 1, by the definitions we have $\mathbb{E}^{\textbf{j}} 1 = 1$ for all choices of the vector $\textbf{j}$. So applying H\"older's inequality to $\mathbb{E}^{\textbf{j}}$, we conclude overall that $$ \mathbb{E}^{\text{char}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2q} \leq \sum_{-N \leq j(-M) , ..., j(0), ..., j(M) \leq N+1} \sigma(\textbf{j})\Biggl( \mathbb{E}^{\textbf{j}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2} \Biggr)^{q} . 
$$ \subsection{Passing to the random case}\label{subsecmaingorandom} At this point, we can use Propositions \ref{condexpprop} and \ref{boxprobprop} to move from working with $\sigma(\textbf{j})$ and $\mathbb{E}^{\textbf{j}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2}$ to working with their analogues involving random multiplicative functions. Actually it isn't essential to do this at such an early stage in the argument, but doing it early will simplify various steps of the analysis, and will ultimately allow us to establish \eqref{stpmaindisplay} once we reach a point where we can invoke our Multiplicative Chaos Results. Using Proposition \ref{condexpprop} with $Y=2M+1$, {\em provided that our choices of $N, \delta$ ultimately satisfy $x P^{400((2M+1)/\delta)^2 \log(N\log P)} < r$} we will have $$ \mathbb{E}^{\textbf{j}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} \chi(n) \Biggr|^{2} = \frac{1}{\sigma(\textbf{j})} \Biggl( \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} + O\left(\frac{x}{(N \log P)^{(2M+1)(1/\delta)^2}} \right) \Biggr) . $$ Now observe that $\sum_{\textbf{j}} \sigma(\textbf{j}) = \mathbb{E}^{\text{char}} \prod_{i=-M}^{M} (\sum_{j=-N}^{N+1} g_{j}(S_{i}(\chi))) = \mathbb{E}^{\text{char}} 1 = 1$. Thus, applying H\"{o}lder's inequality to $\sum_{\textbf{j}}$, we see the total contribution to $\mathbb{E}^{\text{char}} |\sum_{n \leq x, P(n) > x^{1/\log\log x}} \chi(n)|^{2q}$ from all of the ``big Oh'' terms from Proposition \ref{condexpprop} is $$ \ll \Biggl( \sum_{-N \leq j(-M) , ..., j(0), ..., j(M) \leq N+1} \sigma(\textbf{j}) \cdot \frac{1}{\sigma(\textbf{j})} \frac{x}{(N \log P)^{(2M+1)(1/\delta)^2}} \Biggr)^{q} = \Biggl( x (\frac{(2N+2)}{(N \log P)^{(1/\delta)^2}})^{2M+1} \Biggr)^{q} . $$ This is negligible for \eqref{stpmaindisplay}. 
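To check the negligibility: with our eventual choices of parameters we certainly have $\delta \leq 1$ and $N \geq 1$ (see the discussion in section \ref{thm1small}), so that $\frac{2N+2}{(N \log P)^{(1/\delta)^2}} \leq \frac{4N}{N \log P} = \frac{4}{\log P}$, and therefore $$ \Biggl( x (\frac{(2N+2)}{(N \log P)^{(1/\delta)^2}})^{2M+1} \Biggr)^{q} \leq x^q (\frac{4}{\log P})^{q(2M+1)} \ll \frac{x^q}{\log^{q/2} P} , $$ which is indeed much smaller than the right hand side of \eqref{stpmaindisplay}.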
Next, if we define $\sigma^{\text{rand}}(\textbf{j}) := \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f))$ for all $(2M+1)$-vectors $\textbf{j}$, where $f$ is a Steinhaus random multiplicative function, then using Proposition \ref{boxprobprop} we get $\sigma(\textbf{j})^{1-q} \ll \sigma^{\text{rand}}(\textbf{j})^{1-q} + (\frac{1}{(N \log P)^{(2M+1)(1/\delta)^2}})^{1-q}$. Therefore we have \begin{eqnarray} && \sum_{-N \leq j(-M) , ..., j(0), ..., j(M) \leq N+1} \sigma(\textbf{j})\Biggl( \frac{1}{\sigma(\textbf{j})} \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^q \nonumber \\ & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j})\Biggl( \frac{1}{\sigma^{\text{rand}}(\textbf{j})} \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^q + \nonumber \\ && + (\frac{1}{(N \log P)^{(2M+1)(1/\delta)^2}})^{1-q} \sum_{\textbf{j}} \Biggl( \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^q . \nonumber \end{eqnarray} Yet another application of H\"older's inequality to the sum over $\textbf{j}$, and recalling again that the $g_j$ form a partition of unity, implies that the final line here is \begin{eqnarray} & \ll & (\frac{1}{(N \log P)^{(2M+1)(1/\delta)^2}})^{1-q} \cdot ((2N+2)^{2M+1})^{1-q} \cdot \Biggl( \sum_{\textbf{j}} \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^q \nonumber \\ & = & \Biggl( (\frac{(2N+2)}{(N \log P)^{(1/\delta)^2}})^{2M+1} \Biggr)^{1-q} \Biggl( \mathbb{E} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^q . \nonumber \end{eqnarray} This is certainly $\leq e^{-(1-q)\log\log P} x^q$, say, which is acceptable for \eqref{stpmaindisplay}. 
In summary, if we now define $\mathbb{E}^{\textbf{j}, \text{rand}} W := \sigma^{\text{rand}}(\textbf{j})^{-1} \mathbb{E} W \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f))$ for all random variables $W$, then to prove Theorem \ref{mainthmchar} it remains for us to show that \begin{equation}\label{stpnextdisplay} \sum_{-N \leq j(-M) , ..., j(0), ..., j(M) \leq N+1} \sigma^{\text{rand}}(\textbf{j}) \Biggl( \mathbb{E}^{\textbf{j}, \text{rand}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^{q} \ll \Biggl(\frac{x}{1 + (1-q)\sqrt{\log\log P}} \Biggr)^q . \end{equation} Note that we have now removed all mention of Dirichlet characters, and \eqref{stpnextdisplay} is purely a statement about random multiplicative functions. \subsection{Passing to Euler products}\label{subsecpasseuler} Our next step is to move from the left hand side of \eqref{stpnextdisplay}, which still involves {\em sums} of $f(n)$, to an expression featuring Euler products. This part of the argument will be very similar to the corresponding work in section 2.4 of Harper~\cite{harperrmflowmoments}, although not exactly the same because the way we set up our ``conditioning'' (using a smooth partition of unity) was necessarily different here. 
Firstly (and crucially), since the $f(p)$ are {\em independent} mean zero random variables, and the various expressions $\prod_{i=-M}^{M} g_{j(i)}(S_{i}(f))$ in the definition of $\mathbb{E}^{\textbf{j}, \text{rand}}$ only involve the $f(p)$ for $p \leq P$ (not those for $p > P$), we find by expanding the square that $$ \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \Biggl( \mathbb{E}^{\textbf{j}, \text{rand}} \Biggl|\sum_{\substack{n \leq x, \\ P(n) > x^{1/\log\log x}}} f(n) \Biggr|^{2} \Biggr)^{q} = \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \Biggl( \mathbb{E}^{\textbf{j}, \text{rand}} \sum_{\substack{m \leq x, \\ P(m) > x^{1/\log\log x} , \\ p|m \Rightarrow p > P}} \Biggl|\sum_{\substack{n \leq x/m, \\ n \; \text{is} \; P \; \text{smooth}}} f(n) \Biggr|^{2} \Biggr)^{q} . $$ Here we implicitly used the fact that $P < x^{1/\log\log x}$. Setting $X = e^{\sqrt{\log x}}$, and replacing the condition that $P(m) > x^{1/\log\log x}$ by the weaker condition that $m > x^{1/\log\log x}$ (for an upper bound), and introducing an integral to smooth out on a scale of $1/X$, we find the bracketed term is \begin{eqnarray}\label{smoothrandsplit} & \ll & \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} \frac{X}{m} \int_{m}^{m(1+1/X)} |\sum_{\substack{n \leq x/t, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^{2} dt \Biggr)^q \nonumber \\ && + \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} \frac{X}{m} \int_{m}^{m(1+1/X)} |\sum_{\substack{x/t < n \leq x/m, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^{2} dt \Biggr)^q . \end{eqnarray} Now observe again that $\sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) = \mathbb{E} \prod_{i=-M}^{M} (\sum_{j=-N}^{N+1} g_{j}(S_{i}(f))) = \mathbb{E} 1 = 1$. 
Thus, applying H\"{o}lder's inequality to the sum over $\textbf{j}$, the total contribution to \eqref{stpnextdisplay} from the second bracket in \eqref{smoothrandsplit} is \begin{eqnarray} & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \Biggl( \mathbb{E}^{\textbf{j}, \text{rand}} \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} \frac{X}{m} \int_{m}^{m(1+1/X)} |\sum_{\substack{x/t < n \leq x/m, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^{2} dt \Biggr)^q \nonumber \\ & \leq & \Biggl(\sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \mathbb{E}^{\textbf{j}, \text{rand}} \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} \frac{X}{m} \int_{m}^{m(1+1/X)} |\sum_{\substack{x/t < n \leq x/m, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^{2} dt \Biggr)^q . \nonumber \end{eqnarray} But recalling the definitions of everything, and then using the orthogonality of random multiplicative functions, we see this is the same as $$ \Biggl(\mathbb{E} \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} \frac{X}{m} \int_{m}^{m(1+1/X)} |\sum_{\substack{x/t < n \leq x/m, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^{2} dt \Biggr)^q \ll \Biggl(\sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} (1 + \frac{x}{mX}) \Biggr)^q . $$ Applying a standard sieve bound (e.g. Theorem 3.6 of Montgomery and Vaughan~\cite{mv}) to detect the condition that $p|m \Rightarrow p > P$ implies this quantity is $\ll (\frac{x}{\log P})^q$, which is more than good enough for us. 
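Spelling out the last step: the sieve gives $\#\{m \leq x : p|m \Rightarrow p > P\} \ll x/\log P$, whilst trivially $\sum_{m \leq x} \frac{1}{m} \ll \log x$, so $$ \sum_{\substack{x^{1/\log\log x} < m \leq x, \\ p|m \Rightarrow p > P}} (1 + \frac{x}{mX}) \ll \frac{x}{\log P} + \frac{x \log x}{X} \ll \frac{x}{\log P} , $$ since $X = e^{\sqrt{\log x}}$ grows much faster than $\log x \log P$ (recall that $\log P \asymp \log^{1/6}L \leq \log^{1/6}x$).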
Meanwhile, the total contribution to \eqref{stpnextdisplay} from the first bracket in \eqref{smoothrandsplit} is \begin{eqnarray} & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \int_{x^{1/\log\log x}}^{x} |\sum_{\substack{n \leq x/t, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^2 \sum_{\substack{t/(1+1/X) < m \leq t, \\ p|m \Rightarrow p > P}} \frac{X}{m} dt \Biggr)^q \nonumber \\ & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\frac{1}{\log P})^q \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \int_{x^{1/\log\log x}}^{x} |\sum_{\substack{n \leq x/t, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^2 dt \Biggr)^q \nonumber \\ & = & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\frac{x}{\log P})^q \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \int_{1}^{x^{1-1/\log\log x}} |\sum_{\substack{n \leq z, \\ n \; \text{is} \; P \; \text{smooth}}} f(n)|^2 \frac{dz}{z^2} \Biggr)^q . \nonumber \end{eqnarray} Here the second line follows by applying a standard sieve bound again to the sum over $m$, and the final equality follows from the substitution $z=x/t$. Using Harmonic Analysis Result \ref{harmdirichlet}, we deduce that this is all \begin{equation}\label{produpperintrand} \ll (\frac{x}{\log P})^q \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \int_{-\infty}^{\infty} \frac{|F_{P}^{\text{rand}}(1/2 + it)|^2}{|1/2+it|^2} dt)^q , \end{equation} where $F_{P}^{\text{rand}}(s) := \prod_{p \leq P} (1 - \frac{f(p)}{p^s})^{-1} = \sum_{\substack{n=1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{f(n)}{n^s}$ is the Euler product corresponding to the random multiplicative function $f(n)$ on $P$-smooth numbers. 
Since we have restricted to the range $2/3 \leq q \leq 1$, we can break up the integral in \eqref{produpperintrand} into sub-intervals of length 1 and obtain a bound \begin{eqnarray} & \leq & (\frac{x}{\log P})^q \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \sum_{v=-\infty}^{\infty} (\mathbb{E}^{\textbf{j}, \text{rand}} \int_{v-1/2}^{v+1/2} \frac{|F_{P}^{\text{rand}}(1/2 + it)|^2}{|1/2+it|^2} dt)^q \nonumber \\ & \ll & (\frac{x}{\log P})^q \sum_{v=-\infty}^{\infty} \frac{1}{(|v|+1)^{4/3}} \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \int_{v-1/2}^{v+1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt)^q . \nonumber \end{eqnarray} Those terms with $|v| > \log^{0.01}P$, say, trivially make a negligible contribution here. Indeed, using H\"{o}lder's inequality again their contribution is \begin{eqnarray} & \leq & (\frac{x}{\log P})^q \sum_{|v| > \log^{0.01}P} \frac{1}{(|v|+1)^{4/3}} (\sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \mathbb{E}^{\textbf{j}, \text{rand}} \int_{v-1/2}^{v+1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt)^q \nonumber \\ & = & (\frac{x}{\log P})^q \sum_{|v| > \log^{0.01}P} \frac{1}{(|v|+1)^{4/3}} (\mathbb{E} \int_{v-1/2}^{v+1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt)^q , \nonumber \end{eqnarray} and the orthogonality of random multiplicative functions implies this is $$ = (\frac{x}{\log P})^q \sum_{|v| > \log^{0.01}P} \frac{1}{(|v|+1)^{4/3}} ( \sum_{\substack{n = 1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n} )^q \ll (\frac{x}{\log P})^q \frac{1}{\log^{1/300}P} \log^{q}P , $$ which is acceptable for Theorem \ref{mainthmchar}. 
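For the reader's convenience, the final numerical claim here comes from the tail estimate $$ \sum_{|v| > \log^{0.01}P} \frac{1}{(|v|+1)^{4/3}} \ll \int_{\log^{0.01}P}^{\infty} \frac{du}{u^{4/3}} \ll \frac{1}{(\log^{0.01}P)^{1/3}} = \frac{1}{\log^{1/300}P} , $$ together with $\sum_{n \; P \; \text{smooth}} \frac{1}{n} = \prod_{p \leq P}(1 - \frac{1}{p})^{-1} \ll \log P$ (Mertens' theorem).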
To complete the proof of the theorem, in view of \eqref{stpnextdisplay} and \eqref{produpperintrand} it will now suffice to show that uniformly for all $|v| \leq \log^{0.01}P$, we have \begin{equation}\label{tobeprovedeqrand} \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \int_{v-1/2}^{v+1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt)^q \ll \Biggl(\frac{\log P}{1 + (1-q)\sqrt{\log\log P}}\Biggr)^{q} . \end{equation} \subsection{Strengthening the conditioning}\label{subsecrefinecond} We now embark on establishing \eqref{tobeprovedeqrand}. For notational simplicity, we will write out the details in the case where $v=0$. The treatment of all other $|v| \leq \log^{0.01}P$ is exactly similar\footnote{Observe that our ``conditioning'' in $\mathbb{E}^{\textbf{j}, \text{rand}}$ is on the $S_{k}(f)$ for all $|k| \leq M = 2\log^{1.02}P$, corresponding to imaginary parts $\leq \frac{M}{\log^{1.01}P} = 2\log^{0.01}P$, which includes (with a little room to spare) the full range $|v| \leq \log^{0.01}P$ that we must handle in \eqref{tobeprovedeqrand}. Notice also that since we assume that $\log^{0.01}P$ is an integer, the points $i(v + \frac{k'}{\log^{1.01}P})$ for $v,k' \in \mathbb{Z}$ are of the desired form $i \frac{k}{\log^{1.01}P}$ with $k \in \mathbb{Z}$, which appear inside $S_{k}(f)$.}. Firstly, since our ``conditional expectations'' $\mathbb{E}^{\textbf{j}, \text{rand}}$ contain information about all the sums $S_{k}(f)$, corresponding to the discrete points $t = \frac{k}{\log^{1.01}P}$, we need to move from the integral in \eqref{tobeprovedeqrand} to a discretised version. 
Thus we have \begin{eqnarray} && \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \int_{-1/2}^{1/2} |F_{P}^{\text{rand}}(1/2 + it)|^2 dt)^q \nonumber \\ & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) ( \mathbb{E}^{\textbf{j}, \text{rand}} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} |F_{P}^{\text{rand}}(\frac{1}{2} + i\frac{k}{\log^{1.01}P}+it) - F_{P}^{\text{rand}}(\frac{1}{2} + i\frac{k}{\log^{1.01}P})|^2 dt )^q \nonumber \\ && + \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q . \nonumber \end{eqnarray} Applying H\"older's inequality to the sum over $\textbf{j}$ once again, we see the first line here is $$ \leq \Biggl( \mathbb{E} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \int_{-\frac{1}{2\log^{1.01}P}}^{\frac{1}{2\log^{1.01}P}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P} + it) - F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 dt \Biggr)^q , $$ which is $\ll \log^{0.99q}P$ in view of Lemma \ref{disccontlem} from section \ref{subsecrandeuler}. This is more than acceptable for \eqref{tobeprovedeqrand}, so it remains to handle the discrete sum on the second line. It will also shortly be helpful to have some size restrictions on the $|F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|$, to aid us with setting the ``range'' parameter $N$ from Approximation Result \ref{apres}. So let us define the (random) ``bad set'' $$ \mathcal{T} := \{k \in \mathbb{Z} : |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})| \geq \log^{1.1}P \;\;\; \text{or} \;\;\; |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})| \leq \frac{1}{\log^{1.1}P} \} , $$ say. 
Splitting up the sum over $|k| \leq (\log^{1.01}P)/2$ according to whether $k \in \mathcal{T}$ or not, and using H\"{o}lder's inequality as (many times) before, we find that \begin{eqnarray} && \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q \nonumber \\ & \leq & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ k \notin \mathcal{T}}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q + \nonumber \\ && + \Biggl( \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} \mathbb{E} \textbf{1}_{k \in \mathcal{T}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 \Biggr)^q . \nonumber \end{eqnarray} On the final line, $\mathbb{E} \textbf{1}_{k \in \mathcal{T}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2$ is at most $(\log^{1.1}P)^{-0.2} \mathbb{E} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^{2.2} + (\frac{1}{\log^{1.1}P})^2$ (say). Standard results on the moments of random Euler products (see e.g. Euler Product Result 1 of \cite{harperrmfhigh}) imply that $(\log^{1.1}P)^{-0.2} \mathbb{E} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^{2.2} \ll (\log^{1.1}P)^{-0.2} \log^{1.21}P = \log^{0.99}P$, so we get another more than acceptable contribution $\ll \log^{0.99q}P$ to \eqref{tobeprovedeqrand}. Now the purpose of introducing the functions $g_{j(i)}$ (in the definitions of $\mathbb{E}^{\textbf{j}, \text{rand}}$ and $\sigma^{\text{rand}}(\textbf{j})$) was to approximately localise the values of the sums $S_{i}(f)$. At this stage, we will replace this approximate localisation by an exact version, with a view to ultimately producing a connection with Multiplicative Chaos Result \ref{mcres2}. 
Thus the contribution to \eqref{tobeprovedeqrand} from the sum over $k \notin \mathcal{T}$ is \begin{eqnarray} & \leq & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ k \notin \mathcal{T}}} \textbf{1}_{|S_{k}(f) - j(k)| \leq 1} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q + \nonumber \\ && + \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} \textbf{1}_{|S_{k}(f) - j(k)| > 1} \textbf{1}_{k \notin \mathcal{T}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q . \nonumber \end{eqnarray} Applying H\"{o}lder's inequality yet again to the sum over $\textbf{j}$ on the second line, and recalling the definitions of $\mathbb{E}^{\textbf{j}, \text{rand}}$ and $\sigma^{\text{rand}}(\textbf{j})$, we can bound that line by $$ (\frac{1}{\log^{1.01}P} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \sum_{\textbf{j}} \mathbb{E} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \cdot \textbf{1}_{|S_{k}(f) - j(k)| > 1} \textbf{1}_{k \notin \mathcal{T}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q . $$ Now when $-N \leq j(k) \leq N$, by property (ii) from Approximation Result \ref{apres} we have $g_{j(k)}(S_{k}(f)) \cdot \textbf{1}_{|S_{k}(f) - j(k)| > 1} \leq \delta$. Furthermore, if $k \notin \mathcal{T}$ then $|S_{k}(f)| = |\log|F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|| + O(1) \leq 1.1\log\log P + O(1)$. Then {\em provided we take $N \geq 1.2\log\log P$ (say)}, when $j(k) = N+1$ we have $g_{j(k)}(S_{k}(f)) \cdot \textbf{1}_{k \notin \mathcal{T}} \leq \delta$, by property (iii) from Approximation Result \ref{apres}. 
So in any case we have $\prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \cdot \textbf{1}_{|S_{k}(f) - j(k)| > 1} \textbf{1}_{k \notin \mathcal{T}} \leq \delta \prod_{i \neq k} g_{j(i)}(S_{i}(f))$, and then if we perform the sum over all possible values of $j(i)$ for $i \neq k$ we get $\delta \prod_{i \neq k} \sum_{-N \leq j(i) \leq N+1} g_{j(i)}(S_{i}(f)) = \delta$. Thus the second line above is at most \begin{eqnarray} (\frac{\delta}{\log^{1.01}P} \sum_{|k| \leq \frac{\log^{1.01}P}{2}} \sum_{-N \leq j(k) \leq N+1} \mathbb{E} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q & \ll & (\delta N \sum_{\substack{n = 1, \\ n \; \text{is} \; P \; \text{smooth}}}^{\infty} \frac{1}{n} )^q \nonumber \\ & \ll & (\delta N \log P)^q . \nonumber \end{eqnarray} This bound will be acceptable for \eqref{tobeprovedeqrand} {\em provided that $\delta \leq \frac{1}{N\sqrt{\log\log P}}$}. \subsection{Conclusion}\label{subsecconclusion} Finally, to establish \eqref{tobeprovedeqrand} (in the case $v=0$) it remains to prove that \begin{eqnarray} && \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) \Biggl(\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ k \notin \mathcal{T}}} \textbf{1}_{|S_{k}(f) - j(k)| \leq 1} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 \Biggr)^q \nonumber \\ & \ll & \Biggl(\frac{\log P}{1 + (1-q)\sqrt{\log\log P}}\Biggr)^{q} . \nonumber \end{eqnarray} Recall now that $|F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})| = \exp\{-\Re \sum_{p \leq P} \log(1 - \frac{f(p)}{p^{1/2 + i\frac{k}{\log^{1.01}P}}})\} \asymp \exp\{S_{k}(f)\}$ for all $k$ and all realisations of the random multiplicative function $f(n)$. And the point of our manipulations has been that the only $k$ values that now contribute to the sum over $k$ are those for which $S_{k}(f) \in [j(k)-1,j(k)+1]$. 
Noting also that we must have $|S_{k}(f)| \leq 1.1\log\log P + O(1)$ if $k \notin \mathcal{T}$, we see the left hand side is \begin{eqnarray} & \ll & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\mathbb{E}^{\textbf{j}, \text{rand}} \frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} e^{2j(k)} )^q \nonumber \\ & = & \sum_{\textbf{j}} \sigma^{\text{rand}}(\textbf{j}) (\frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} e^{2j(k)} )^q . \nonumber \end{eqnarray} Here the quantity inside the bracket is a deterministic function of $\textbf{j}$, with the only remaining appearance of any randomness coming inside the multipliers $\sigma^{\text{rand}}(\textbf{j})$ in the outer ``averaging'' over $\textbf{j}$. Thus we are very close to the structure of the quantity bounded in Multiplicative Chaos Result \ref{mcres2}, where all of the averaging $\mathbb{E}$ occurs on the outside of the $q$-th power. Indeed, recalling the definition of $\sigma^{\text{rand}}(\textbf{j})$ we can rewrite the sum here as \begin{eqnarray} && \mathbb{E} \sum_{\textbf{j}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) (\frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} e^{2j(k)} )^q \nonumber \\ & \ll & \mathbb{E} \sum_{\textbf{j}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) (\frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q + \nonumber \\ && + \mathbb{E} \sum_{\textbf{j}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) (\frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} \textbf{1}_{|S_{k}(f) - j(k)| > 1} \log^{2.2}P )^q . 
\nonumber \end{eqnarray} And since $\sum_{\textbf{j}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \equiv 1$, the first line on the right hand side is at most $\mathbb{E}( \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^2 )^q$, which is $\ll \left(\frac{\log P}{1 + (1-q)\sqrt{\log\log P}}\right)^{q}$ by Multiplicative Chaos Result \ref{mcres2}. Meanwhile, one last application of H\"older's inequality (to the expectation and sum over $\textbf{j}$ simultaneously), and using properties (ii) and (iii) from Approximation Result \ref{apres} (as in section \ref{subsecrefinecond}) to deduce that $g_{j(k)}(S_{k}(f)) \textbf{1}_{|j(k)| \leq 1.1\log\log P + O(1)} \textbf{1}_{|S_{k}(f) - j(k)| > 1} \leq \delta$, reveals the second line is \begin{eqnarray} & \leq & \Biggl( \mathbb{E} \sum_{\textbf{j}} \prod_{i=-M}^{M} g_{j(i)}(S_{i}(f)) \frac{1}{\log^{1.01}P} \sum_{\substack{|k| \leq (\log^{1.01}P)/2, \\ |j(k)| \leq 1.1\log\log P + O(1)}} \textbf{1}_{|S_{k}(f) - j(k)| > 1} \log^{2.2}P \Biggr)^q \nonumber \\ & \leq & \left( \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} \log^{2.2}P \cdot \mathbb{E} \sum_{\textbf{j}} \delta \prod_{i \neq k} g_{j(i)}(S_{i}(f)) \right)^q . \nonumber \end{eqnarray} Since the functions $g_j$ form a partition of unity, this is all $$ \leq \left( \frac{1}{\log^{1.01}P} \sum_{|k| \leq (\log^{1.01}P)/2} \log^{2.2}P \cdot \delta \sum_{-N \leq j(k) \leq N+1} 1 \right)^q \ll (\delta N \log^{2.2}P)^q , $$ which will be acceptable for \eqref{tobeprovedeqrand} {\em provided that $\delta \leq \frac{1}{N \log^{1.2}P \sqrt{\log\log P}}$}. To conclude, we need only check that we can make an acceptable choice of the parameters $N, \delta$. In section \ref{subsecmaingorandom}, we needed to have $x P^{400((2M+1)/\delta)^2 \log(N\log P)} < r$ so that Proposition \ref{condexpprop} could be applied. 
Recalling that $M = 2\log^{1.02}P$, we see this will be satisfied provided that $P^{6499 (\log^{2.04}P) (1/\delta)^2 \log(N\log P)} < r/x$, say. In section \ref{subsecrefinecond} we needed $N \geq 1.2\log\log P$ and $\delta \leq \frac{1}{N\sqrt{\log\log P}}$, and now we have the more stringent condition $\delta \leq \frac{1}{N \log^{1.2}P \sqrt{\log\log P}}$. So taking $N = \lceil 1.2\log\log P \rceil$ and $\delta = \frac{1}{\log^{1.3}P}$, say, all of our conditions will be satisfied provided that $P^{6500 (\log^{4.64}P) \log\log P} < r/x$. This indeed holds with our choice of $P$, slightly smaller than $\exp\{\log^{1/6}L\}$. \qed \section{Proofs of the other Theorems and Corollaries}\label{secotherproofs} \subsection{Proof of Theorem \ref{mainthmcont}} The neatest way to proceed is to let $\Phi : \mathbb{R} \rightarrow \mathbb{R}$ be a fixed non-negative function that is $\geq \textbf{1}_{[0,1]}$, and whose Fourier transform $\widehat{\Phi}$ is supported in $[-1/2\pi,1/2\pi]$. For example, we can take $\Phi(x)$ to be a Beurling--Selberg function, as described in Vaaler's paper~\cite{vaaler}. Then if we write $\mathbb{E}^{\text{cont}} W(t)$ to denote the continuous average $\frac{1}{T \widehat{\Phi}(0)} \int_{-\infty}^{\infty} \Phi(t/T) W(t) dt$, we see $$ \frac{1}{T} \int_{0}^{T} |\sum_{n \leq x} n^{it}|^{2q} dt \leq \widehat{\Phi}(0) \mathbb{E}^{\text{cont}} |\sum_{n \leq x} n^{it}|^{2q} \ll \mathbb{E}^{\text{cont}} |\sum_{n \leq x} n^{it}|^{2q} , $$ and so it will suffice to prove the claimed Theorem \ref{mainthmcont} bound for $\mathbb{E}^{\text{cont}} |\sum_{n \leq x} n^{it}|^{2q}$. 
The point of introducing $\mathbb{E}^{\text{cont}}$ is that we have $\mathbb{E}^{\text{cont}} 1 = \frac{1}{T \widehat{\Phi}(0)} \int_{-\infty}^{\infty} \Phi(t/T) dt = \frac{1}{\widehat{\Phi}(0)} \int_{-\infty}^{\infty} \Phi(u) du = 1$, and crucially $\mathbb{E}^{\text{cont}} U^{it} \overline{V^{it}} = \frac{1}{T \widehat{\Phi}(0)} \int_{-\infty}^{\infty} \Phi(t/T) e^{-it\log(V/U)} dt = \frac{\widehat{\Phi}((T/2\pi)\log(V/U))}{\widehat{\Phi}(0)} = \textbf{1}_{U=V}$ for all natural numbers $1 \leq U,V < T$. (If we worked with $\frac{1}{T} \int_{0}^{T}$ directly, there would be error terms rather than an exact equality here.) These are the same properties that we had for our character average $\mathbb{E}^{\text{char}}$ in the proof of Theorem \ref{mainthmchar}. In particular, the analogues of Lemma \ref{evenmomentlem} and of Propositions \ref{condexpprop} and \ref{boxprobprop} hold with $\mathbb{E}^{\text{char}}$ replaced by $\mathbb{E}^{\text{cont}}$, with $\chi(n)$ replaced by $n^{it}$, and with the various conditions $xU^k , x P^{400(Y/\delta)^2 \log(N\log P)} , P^{400(Y/\delta)^2 \log(N\log P)} < r$ replaced by their obvious substitutes $xU^k , x P^{400(Y/\delta)^2 \log(N\log P)} , P^{400(Y/\delta)^2 \log(N\log P)} < T$. This means that we can repeat the argument in sections \ref{thm1small}--\ref{subsecmaingorandom}, simply replacing $L = \min\{x,r/x\}$ by $L_T := \min\{x,T/x\}$ and replacing $S_{k}(\chi)$ by $\Re \sum_{p \leq P} (\frac{p^{it}}{p^{1/2 + ik/\log^{1.01}P}} + \frac{p^{2it}}{2p^{1 + 2ik/\log^{1.01}P}})$. At the end of section \ref{subsecmaingorandom}, the proof of Theorem \ref{mainthmcont} then reduces to establishing exactly the same bound \eqref{stpnextdisplay} for random multiplicative functions that we already proved in sections \ref{subsecpasseuler}--\ref{subsecconclusion}. \qed \subsection{Proof of Corollary \ref{cordev}} This is a simple consequence of Theorem \ref{mainthmchar} together with Markov's inequality. 
Thus for any $0 \leq q \leq 1$, the left hand side in Corollary \ref{cordev} is $$ \leq \frac{\mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q}}{(\lambda\frac{\sqrt{x}}{(\log\log(10L))^{1/4}})^{2q}} . $$ If $\lambda \geq e^{\sqrt{\log\log(10L)}}$, then simply taking $q = 1$ and using the fact that $\mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2} \leq x$ yields the desired bound. For smaller $\lambda$, if we set $q = 1 - \delta$ with $0 < \delta \leq 1$, and apply Theorem \ref{mainthmchar}, we obtain that $$ \frac{\mathbb{E}^{\text{char}} |\sum_{n \leq x} \chi(n)|^{2q}}{(\lambda\frac{\sqrt{x}}{(\log\log(10L))^{1/4}})^{2q}} \ll \frac{1}{\lambda^2} \frac{\lambda^{2\delta} (\log\log(10L))^{q/2}}{(1 + \delta\sqrt{\log\log(10L)})^q} \leq \frac{1}{\lambda^2} \frac{\lambda^{2\delta}}{\delta} = \frac{\log\lambda}{\lambda^2} \frac{e^{2\delta \log\lambda}}{\delta \log\lambda} . $$ Choosing $\delta = \frac{1}{2\log\lambda}$ yields the claimed upper bound. \qed \subsection{Proof of Corollary \ref{cortheta}} As usual, it will suffice to prove Corollary \ref{cortheta} when $2/3 \leq q \leq 1$, because when $q$ is smaller we can use H\"{o}lder's inequality to compare with the $q=2/3$ case. If $\chi$ is an even Dirichlet character mod $r$, then using the definition of the theta function and partial summation we have \begin{eqnarray} \theta(1,\chi) = \sum_{n=1}^{\infty} \chi(n) e^{-\pi n^{2}/r} & = & \sum_{n \leq \sqrt{r\log r}} \chi(n) e^{-\pi n^{2}/r} + O(1) \nonumber \\ & = & \sum_{n \leq \sqrt{r\log r}} \chi(n) \int_{n}^{\sqrt{r\log r}} \frac{2\pi u}{r} e^{-\pi u^{2}/r} du + O(1) \nonumber \\ & = & \int_{1}^{\sqrt{r\log r}} \frac{2\pi u}{r} e^{-\pi u^{2}/r} \sum_{n \leq u} \chi(n) du + O(1) . 
\nonumber \end{eqnarray} By noting that $\int_{1}^{\sqrt{r\log r}} \frac{2\pi u}{r} e^{-\pi u^{2}/r} du \asymp 1$, and applying H\"{o}lder's inequality to the integral over $u$ (here we use the fact that $2q \geq 1$), we deduce that for each even $\chi$ we have $$ |\theta(1, \chi)|^{2q} \ll \int_{1}^{\sqrt{r\log r}} \frac{2\pi u}{r} e^{-\pi u^{2}/r} |\sum_{n \leq u} \chi(n)|^{2q} du + 1 . $$ The contribution to this integral from $1 \leq u \leq r^{1/4}$ is trivially $\ll \int_{1}^{r^{1/4}} \frac{u}{r} u^{2q} du \ll 1$, which is negligible. On the remaining range $r^{1/4} \leq u \leq \sqrt{r\log r}$, Theorem \ref{mainthmchar} implies that $\mathbb{E}^{\text{char}} |\sum_{n \leq u} \chi(n)|^{2q} \ll (\frac{u}{1+(1-q)\sqrt{\log\log r}})^q$, giving an acceptable contribution $$ \ll (\frac{1}{1+(1-q)\sqrt{\log\log r}})^q \int_{r^{1/4}}^{\sqrt{r\log r}} \frac{u}{r} e^{-\pi u^{2}/r} u^{q} du \ll (\frac{\sqrt{r}}{1+(1-q)\sqrt{\log\log r}})^q $$ for Corollary \ref{cortheta}. If $\chi$ is an odd character, then by definition we have $\theta(1,\chi) = \sum_{n=1}^{\infty} n\chi(n) e^{-\pi n^{2}/r}$, and a similar partial summation as before yields that $$ \theta(1,\chi) = \int_{1}^{\sqrt{r\log r}} (\frac{2\pi u^2}{r} - 1) e^{-\pi u^{2}/r} \sum_{n \leq u} \chi(n) du + O(1) . $$ Since we now have $\int_{1}^{\sqrt{r\log r}} |\frac{2\pi u^2}{r} - 1| e^{-\pi u^{2}/r} du \asymp \sqrt{r}$, when we apply H\"older's inequality and Theorem \ref{mainthmchar} (as in the even character case) we now have an extra factor $(\sqrt{r})^{2q} = r^q$ in our bounds, producing the claimed upper bound for the average over odd characters. 
\qed \subsection{Proof of Theorem \ref{mainthmmulttwist}} Once we introduce the multiplicative twist $h(n)$, the Euler product factor corresponding to a prime $p$ changes from being $(1 - \frac{\chi(p)}{p^{1/2+it}})^{-1}$ (or $(1 - \frac{f(p)}{p^{1/2+it}})^{-1}$ on the random multiplicative side), to $$ 1 + \frac{h(p) \chi(p)}{p^{1/2+it}} + \sum_{k=2}^{\infty} \frac{h(p^k) \chi(p)^k}{p^{k(1/2+it)}} \;\;\;\;\; \text{or} \;\;\;\;\; 1 + \frac{h(p) f(p)}{p^{1/2+it}} + \sum_{k=2}^{\infty} \frac{h(p^k) f(p)^k}{p^{k(1/2+it)}} . $$ Here we have $|\frac{h(p) \chi(p)}{p^{1/2+it}} + \sum_{k=2}^{\infty} \frac{h(p^k) \chi(p)^k}{p^{k(1/2+it)}}| \leq \sum_{k=1}^{\infty} \frac{1}{p^{k/2}} = \frac{1}{\sqrt{p} - 1}$. Provided that $p \geq 5$, this is all $\leq \frac{1}{\sqrt{5} - 1} < 1$, so we can still apply Taylor expansion to analyse the logarithms of the Euler factors. This is not the case for the primes 2 and 3, but for those we still have an upper bound $\ll 1$ for the Euler factors. With the above observations, the proof of Theorem \ref{mainthmmulttwist} is a fairly obvious adjustment of the proof of Theorem \ref{mainthmchar}. Rather than $S_{k}(\chi) := \Re \sum_{p \leq P} (\frac{\chi(p)}{p^{1/2 + ik/\log^{1.01}P}} + \frac{\chi(p)^2}{2p^{1 + 2ik/\log^{1.01}P}})$, in section \ref{secproof1} we work with $S_{k,h}(\chi) := \Re \sum_{5 \leq p \leq P} (\frac{h(p) \chi(p)}{p^{1/2 + ik/\log^{1.01}P}} + \frac{(h(p^2) - (1/2)h(p)^2)\chi(p)^2}{p^{1 + 2ik/\log^{1.01}P}})$ (coming from the first and second order terms in the Taylor expansions of the logarithms of the Euler factors). Note that the primes 2 and 3 are omitted here. The calculations then proceed essentially without change, until in \eqref{produpperintrand} we end up with $|\prod_{p \leq P} (1 + \frac{h(p) f(p)}{p^{1/2 +it}} + \sum_{k=2}^{\infty} \frac{h(p^k) f(p)^k}{p^{k(1/2 +it)}})|$ in place of $|F_{P}^{\text{rand}}(1/2+it)|$. 
And this is $\ll |F_{P, h}^{\text{rand}}(1/2+it)|$, where $F_{P, h}^{\text{rand}}(s) := \prod_{5 \leq p \leq P} (1 + \frac{h(p) f(p)}{p^{s}} + \sum_{k=2}^{\infty} \frac{h(p^k) f(p)^k}{p^{ks}})$ (with the primes 2 and 3 omitted). Continuing through sections \ref{subsecrefinecond} and \ref{subsecconclusion}, we only need to verify that we have the same upper bound $\mathbb{E} |F_{P,h}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^{2.2} \ll \log^{1.21}P$ as for $\mathbb{E} |F_{P}^{\text{rand}}(1/2 + i\frac{k}{\log^{1.01}P})|^{2.2}$, and (most importantly) that Multiplicative Chaos Result \ref{mcres1} continues to hold with $F_{P}^{\text{rand}}(1/2 + it)$ replaced by $F_{P,h}^{\text{rand}}(1/2 + it)$. The former is an easy modification of Euler Product Result 1 of \cite{harperrmfhigh}. The latter is also quite straightforward to check by working through the proofs in sections 3.1--3.2 and 4.1--4.3 of \cite{harperrmflowmoments}, noting that Lemma 1 from that paper holds without change for the Euler product $F_{P, h}^{\text{rand}}(s)$ (here it is important that $|h(p)|=1$ on primes $p$, which produce the main terms there), and all subsequent results flow from Lemma 1. \qed
\section{Introduction} \label{sec:introduction} Many organizations undergo rapid digital transformation nowadays. In such a setting, a common challenge is the understanding of software development dynamics. Horizontal relations between development teams are typically obscure and hard to discern at a glance. Yet understanding them is important to manage organizational structure, plan hiring, and prioritize work. This paper introduces complementary methods to analyze contributors to open source projects. We study several orthogonal contributor features to obtain an overview of existing collaborations and evaluate our methods on the GitLab{} organization, a large software company with over 500 employees and 110 projects with more than 2,000 external contributors as of 2019. The considered features include the alignment of commit time series, the developer experience reflected in contributed lines of code (LoC), and topic models related to code naturalness~\citep{Allamanis2017Naturalness}. We harness dimensionality reduction and clustering for visualization. \section{Related Work} \label{sec:related-work} Some researchers have already explored developer collaboration and recommendation through analysis of coding activities. For example, \cite{Montandon2019IdentifyingEI} made use of supervised machine learning classifiers to identify experts in three popular JavaScript libraries. However, that approach depends on manually labeled classes, so it may be difficult to generalize. Along the same line, \cite{Greene2016CVExplorerIC} have developed a tool to extract and visualize developers' skills by using both keywords from the project's README file and commit messages combined with their portfolio metadata, which they collected from third-party Community Question Answering websites. \cite{Hauff2015MatchingGD} pursued a different objective: directly matching developers to relevant jobs by comparing their embeddings.
Those embeddings were generated from the query describing the job and the README file of the project, but they do not leverage the source code itself. Topic modeling has been applied to source code identifiers by \cite{Markovtsev2017TM}, but without analyzing individual contributions or focusing on potential coding collaborations. Differing from these, we detect collaborators by working with fine-grained information from every commit: we analyze every file in the repository's history. \section{Methodology} \label{sec:methodology} This section presents three complementary developer analyses, each of which clarifies a particular aspect of developers' characteristics. \subsection{Data Processing} \label{sec:data-processing} To gain the quantitative insight into source code that the subsequent analyses require, we employ Hercules{}\footnote{\url{https://github.com/src-d/hercules}} and Gitbase{}\footnote{\url{https://github.com/src-d/gitbase}}---tools that allow efficient and rapid mining of Git repositories. We gather the number of contributed lines per programming language per developer, the ownership information about each file (who edited each line last), and the commit dates. We also extract the identifiers from each file and store their frequencies. Git signatures must be matched to contributors, so we aggregate all co-occurring emails and names into a single identity by finding connected components in the corresponding undirected co-occurrence graph. We have to exclude signature stubs such as \texttt{gitlab@localhost}. We intentionally avoid relying on the GitLab API for the sake of versatility. \subsection{Commit Time Series} \label{sec:commit-time-series} The first analysis is based on aligning developers' commit time series. People who work in a team tend to create commits at correlated times, because they often work on related, mutually dependent features.
The relative commit frequency depends on the shared team planning, notably the deadlines. We found empirically that the absolute values of the time series are less important than the distribution shape. Thus we count how many commits each developer made per day, divide the values by the mean to normalize the resulting time series, and calculate the pairwise distance matrix using Fast Dynamic Time Warping~\citep{Salvador2007TAD}. Then we perform DBSCAN~\citep{Ester1996DAD} clustering and run UMAP~\citep{McInnes2018UMAP} dimensionality reduction to visualize the clusters. Both algorithms operate on the explicit distance matrix. \subsection{Contributions by Programming Languages} \label{sec:contributions-by-programming-languages} The second analysis groups developers by programming language and by the amount and nature of their contributions (line additions, modifications, and deletions). Intuitively, these coarsely correlate with developer seniority and activity domains; since text markup files reveal little about either, we exclude them from the analysis. We reduce the influence of outliers by saturating all the values to the 95\textsuperscript{th} percentile. Our distance metric is L\textsuperscript{2}. We visualize the distribution by applying UMAP, which preserves the local topological relations that should be highlighted in our case. We calculate clusters in the embedding space using K-Means, and their number is determined by the elbow rule. Running K-Means in the original space produces worse results due to its high dimensionality and sparsity. \subsection{Source Code Identifiers} \label{sec:source-code-identifiers} To complement the commit activity and the programming language usage analyses, we leverage the content of developers' contributions in order to assign them topics that are representative of their activity. We compute the representation for each code file based on its content as a bag of TF-IDF scores of its identifiers.
We then represent each developer by aggregating the representations of the code files they edited with a sum weighted by the proportion of the owned lines in each file. Finally, we apply ARTM~\citep{Vorontsov2015BigARTM} to find decorrelated, sparse topics fitted to those representations and label them manually having obtained their most likely and specific terms. \section{Results} \label{sec:results} We evaluate the three presented analyses on the codebase of the GitLab{} organization. That comprised \numprint{117} repositories, totaling \numprint{145} programming languages, \numprint{44272} files and \numprint{12895956} lines of code in April 2019. Among the \numprint{2956} contributors, we detected \numprint{210} current GitLab{} employees split in 37 teams (this is a lower bound given the algorithmic difficulties detailed in Section~\ref{sec:data-processing}). A mapping from developers to teams is publicly available\footnote{https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/team.yml}. \captionsetup[figure]{labelfont={bf,small},textfont={it,small}} \captionsetup[table]{labelfont={bf,small},textfont={it,small}} \captionsetup[subfloat]{labelfont={bf,small},textfont={it,small}, subrefformat=parens} \begin{figure*}[t] \centering \subfloat[]{\label{fig:commit-raw}{\includegraphics[width=0.4\linewidth]{raw_clusters_commits.eps}}}\hspace*{3em} \subfloat[]{\label{fig:commit-teams}{\includegraphics[width=0.46\linewidth]{team_clusters_commits.eps}}}\\ \hspace*{-1.8em}\subfloat[]{\label{fig:experience-raw}{\includegraphics[width=0.46\linewidth]{raw_clusters_experience_moved.eps}}}\hspace*{3em} \subfloat[]{\label{fig:experience-teams}{\includegraphics[width=0.4\linewidth]{team_clusters_experience.eps}}} \caption{Clusters of the contributors to \texttt{gitlab-org} open-source projects based on (a) commit time series and (c) contributed LoC by language. 
In (b) and (d) the same contributors are labeled not according to their cluster but according to their official team in the GitLab{} organization. The gray points stand for contributors that are either outside the company as of April 2019 or using different identities.} \label{fig:experience-and-commit} \end{figure*} Fig.~\ref{fig:commit-raw} and Fig.~\ref{fig:commit-teams} reveal the temporal commit coupling of the contributors. GitLab{} employees fall into two of the five clusters; commit activity alignment proves to be a reliable way to tell GitLab{} employees from external contributors. However, the finer structure often does not respect the teams. For example, cluster \#2 groups high-frequency committers across the company, one of them being GitLab's CEO. On the other hand, clustering developers by their experience in programming languages--- Fig.~\ref{fig:experience-raw} and~\ref{fig:experience-teams}---provides different insights. Fig.~\ref{fig:experience-teams} demonstrates that people in the same team tend to have similar experience. All the clusters but \#4 in Fig.~\ref{fig:experience-raw} contain Ruby developers. Indeed, GitLab{} is driven by Ruby on Rails and being proficient in Ruby is essential for the company. Cluster \#2 joins developers who code in both Go and Ruby, which agrees with the current partial conversion of the GitLab{} backend from Ruby on Rails to Go\footnote{https://about.gitlab.com/2018/10/29/why-we-use-rails-to-build-gitlab/}. Fig.~\ref{fig:experience-teams} suggests that the \emph{Gitaly}, \emph{Create BE} and \emph{Verify} teams are particularly involved in this effort, since their members are located together. Then, cluster \#4 emphasizes the recent switch to Vue.js in the frontend, and the \emph{Dev UX} team seems to be one of the teams in charge of this. Cluster \#4 clearly gathers all the frontend engineers, who code in JavaScript, use the Vue.js framework, and write SCSS.
Overall, the clusters match the typical Ruby on Rails relation between frontend and backend technologies. \begin{table}[htb] \begin{minipage}{.49\textwidth} {\scriptsize \captionof{table}{Teams of topic contributors.} \label{tab:topics-teams} \begin{tabular}{lll} \toprule Frontend & Config Management & Gollum (wiki) \\ \midrule Verify FE & Monitor BE & Product Management \\ Core & Plan BE & Support Department \\ Manage FE & Core Alumni & Core Alumni \\ Create FE & Create BE & Core \\ Monitor FE & Distribution BE \\ Serverless FE & & \\ Frontend & & \\ \bottomrule \end{tabular} } \end{minipage} \hspace*{1.2em} \begin{minipage}{.5\textwidth} {\scriptsize \captionof{table}{Manual labels assigned to topics.} \label{tab:topics-terms} \begin{tabular}{llll} \toprule \multicolumn{1}{c}{Topic label} & \multicolumn{3}{c}{Top terms} \\ \midrule Backend frameworks & servlet & flask & javax \\ Language detection & languag & java & linguist \\ Data mining & chartj & graphql & averag \\ Frontend + UI, CSS & modernizr & mstyle & elementn \\ Config management & chef & runner & platform \\ TODO & todo & todos & app \\ Low-level backend & btree & opclass & using \\ \bottomrule \end{tabular} } \end{minipage} \end{table} Finally, to evaluate the quality of a code topic, we retrieve the teams of the contributors for whom the evaluated topic is the preponderant one. Results shown in Table~\ref{tab:topics-teams} highlight that the topic modeling analysis strongly correlates with GitLab{} teams. Table~\ref{tab:topics-terms} shows examples of manually created labels. We also measure the agreement between topics and the clusters from Fig.~\ref{fig:commit-raw} and \ref{fig:experience-raw}. On average, developers that have a given main topic are in 1.54 and 1.63 clusters for commit activity and experience clustering respectively (the expectations are 2.11 and 2.27 respectively for random clusters). 
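As a concrete illustration of the signature-merging step from Section~\ref{sec:data-processing}, the following minimal Python sketch finds connected components of the (name, email) co-occurrence graph with a small union-find structure. The function name \texttt{merge\_identities} and the toy signatures are our own illustrative choices, not part of Hercules{} or Gitbase{}:

```python
def merge_identities(signatures):
    """Group (name, email) Git signatures into contributor identities.

    Two signatures belong to the same contributor if they share a name
    or an email; each signature links its name node to its email node,
    and identities are the connected components of that graph.
    """
    parent = {}

    def find(x):
        # Path-halving find of the union-find root of x.
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for name, email in signatures:
        union(("name", name), ("email", email))

    components = {}
    for name, email in signatures:
        root = find(("name", name))
        components.setdefault(root, set()).add((name, email))
    return list(components.values())
```

In practice, signature stubs such as \texttt{gitlab@localhost} are filtered out before merging, as described in Section~\ref{sec:data-processing}.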
\section{Conclusion} \label{sec:conclusion} This paper demonstrated how to identify existing collaborations in an organization by exploring the topological structure of three feature spaces of a codebase: commit activity, usage of programming languages, and topics of source code identifiers. Clustering proved insightful: we observed a good match with existing development teams in the GitLab{} organization, and additionally found several product-oriented inter-organizational squads. We traced the partial transition from Ruby to Go in the GitLab{} backend, which was indeed decided at the company level. Commit-series alignment distinguished well between internal and external contributors. Our future research directions include leveraging commits to external repositories, since developers usually contribute to several open source projects that do not belong to their company. Furthermore, it would be interesting to study how engineering teams have evolved as the company has grown. We believe that combining recent advances in machine learning with such expressive features can shed light on relevant coding collaboration opportunities. \bibliographystyle{ml4se2019}
\section{Introduction} \label{sec:intro} TeV gamma-ray emitting binary systems are extremely rare objects, likely corresponding to a relatively brief period in the evolution of some massive star binaries \citep{2017AnA...608A..59D}. They consist of a neutron star or black hole in a binary orbit with a massive star, and their high-energy emission provides a unique opportunity to study relativistic particle acceleration in a continuously changing physical environment. Of particular interest are systems where the compact object is a pulsar, since the pulsed emission firmly identifies the nature of the compact object, and provides an accurate determination of the orbital parameters, the available energy budget, and the likely acceleration mechanism. Prior to this work, only one TeV binary with a known pulsar had been detected: the pulsar / Be-star binary system PSR\,B1259-63 / LS\,2883 \citep{2005A&A...442....1A}. In this paper, we present the discovery of a second member of this class, PSR\,J2032+4127 / MT91\,213. Pulsed emission, with a period of $P=143\U{ms}$, was first detected from PSR\,J2032+4127\ in a blind search of \textit{Fermi}-LAT gamma-ray data \citep{2009Sci...325..840A} and was subsequently detected in radio observations with the Green Bank Telescope \citep{2009ApJ...705....1C}. These observations revealed dramatic changes in the pulsar spin-down rate, an effect most easily explained by Doppler shift due to the pulsar's motion in a long-period binary system \citep{2015MNRAS.451..581L}. The pulsar's companion was identified as a B0Ve star, MT91\,213, which has a mass of around $15\U{M_\odot}$ and a circumstellar disk which varies in radius by more than a factor of two, from $0.2\U{AU}$ to $0.5\U{AU}$ \citep{2017MNRAS.464.1211H}. The pulsar spin-down luminosity ($\dot{\mathrm{E}}$) is $1.7\times 10^{35}\U{erg}\UU{s}{-1}$, with a characteristic age of $180\U{kyr}$, and the system lies at a distance of 1.4--1.7$\U{kpc}$, in the Cyg OB2 stellar association.
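For reference, the spin-down luminosity and characteristic age quoted above follow from the standard pulsar spin-down relations (not restated in the paper; $I$ denotes the neutron-star moment of inertia, canonically $10^{45}\U{g}\UU{cm}{2}$):

```latex
\begin{equation}
  \dot{E} = 4\pi^{2} I \,\frac{\dot{P}}{P^{3}}, \qquad
  \tau_{c} = \frac{P}{2\dot{P}} .
\end{equation}
```

With $P = 143\U{ms}$, the quoted $\dot{\mathrm{E}}$ and $\tau_{c}$ are mutually consistent for a period derivative of $\dot{P} \approx 1.3\times10^{-14}$.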
Further observations refined the orbital parameters, yielding a binary period of 45--50 years, an eccentricity between 0.94 and 0.99, and a longitude of periastron between 21\arcdeg and 52\arcdeg. Periastron occurred on 2017 November 13 with a separation between PSR\,J2032+4127\ and MT91\,213\ of approximately 1~AU \citep{2017MNRAS.464.1211H, 2017ATel10920....1C}. Significant X-ray brightening from the direction of PSR\,J2032+4127\ was first detected by \citet{2017MNRAS.464.1211H}, with the X-ray flux increasing by a factor of twenty relative to 2010 measurements. \citet{2017ApJ...843...85L} used data from \textit{Chandra}, \textit{Swift}-XRT, \textit{NuSTAR}, and \textit{XMM-Newton} to conduct a detailed study of the long-term light curve, finding variability on timescales of weeks on top of the long-term increasing trend, which they attributed to clumps in the stellar wind. The structure of the stellar wind was further explored by \citet{2018MNRAS.474L..22P}, who used \textit{Swift}-XRT observations to map the circumstellar environment of the Be star. Recently, \citet{2018ApJ...857..123L} presented detailed \textit{Swift}, \textit{Fermi}-LAT and radio observations of PSR\,J2032+4127\ over the 2017 periastron period. They report strong variability in the X-ray flux, but no variability in the GeV gamma-ray flux, likely because the latter is masked by magnetospheric emission from the pulsar. PSR\,J2032+4127\ lies at the edge of the steady, extended, very-high-energy (VHE, $E>100\U{GeV}$) gamma-ray source, TeV\,J2032+4130. This object was the first VHE source to be discovered serendipitously, by HEGRA \citep{2002A&A...393L..37A}, and was not associated with any counterpart at other wavelengths. Subsequent observations by HEGRA and MAGIC revealed an extended object, with a width of approximately 6\arcmin\ and a hard power-law spectrum ($\Gamma\sim2.0$) \citep{2005A&A...431..197A, 2008ApJ...675L..25A}.
VERITAS observations have shown that the extended emission is asymmetric and coincident with a void in radio emission \citep{2014ApJ...783...16A}. Prior to the discovery of its binary nature, an association of TeV\,J2032+4130\ with the pulsar wind nebula (PWN) of PSR\,J2032+4127 seemed the most likely origin for the VHE source. Thus far, only a handful of VHE gamma-ray emitting binary systems have been detected, of which only PSR\,B1259-63 / LS\,2883 has an identified compact object: a pulsar in a highly elliptical ($e=0.87$) orbit with a period of 3.4~years and a periastron separation of about $1\U{AU}$ \citep{1992ApJ...387L..37J,2011ApJ...732L..11N}. The unpulsed radio, X-ray and VHE gamma-ray fluxes show complex light curves, with the majority of the emission occurring close to periastron in two distinct peaks, likely related to the pulsar's passage through the circumstellar decretion disk. High-energy (HE, $0.1\U{GeV}<E<100\U{GeV}$) emission, conversely, is generally weak around periastron, followed by intense flaring episodes which typically occur more than 30, and up to 70, days after periastron \citep{2011ApJ...736L..11A,2018ApJ...863...27J}. In this paper, we report on the results of extensive VHE gamma-ray observations of the 2017 periastron passage of the PSR\,J2032+4127 / MT91\,213\ system using VERITAS and MAGIC, and present contemporaneous X-ray observations with \textit{Swift}-XRT. \section{Observations and Analysis} \label{sec:observations} VHE gamma-ray observations of PSR\,J2032+4127 / MT91\,213\ were conducted by the MAGIC and VERITAS imaging atmospheric Cherenkov telescope arrays, which are sensitive to astrophysical gamma rays above $50\U{GeV}$. MAGIC consists of two 17\,m-diameter telescopes, located at the observatory of El Roque de Los Muchachos on the island of La Palma, Spain, whereas VERITAS is an array of four 12\,m-diameter telescopes at the Fred Lawrence Whipple Observatory near Tucson, Arizona.
The observatories and their capabilities are described in \citet{Aleksic2016} (MAGIC) and \citet{2015ICRC...34..771P} (VERITAS), and references therein. In this paper, we present the results of 181.3 hours of observations with VERITAS (51.6 hours of archival data taken before 2016, 30.1 hours between 2016 September and 2017 June, 99.6 hours between 2017 September and December) and 87.9 hours of observations with MAGIC (53.7 hours between 2016 May and September and 34.2 hours between 2017 June and December). Observations were conducted in ``wobble'' mode, with the source location offset from the center of the field-of-view, allowing simultaneous evaluation of the background \citep{Fomin94}. The data were analyzed using standard tools \citep{Zanin13, 2017arXiv170804048M}, in which Cherenkov images are first calibrated, cleaned and parameterized \citep{Hillas85}, then used to reconstruct the energy and arrival direction of the incident gamma ray, and to reject the majority of the cosmic ray background \citep{magic:RF, 2006APh....25..380K}. A total of 186 \textit{Swift}-XRT \citep{2005SSRv..120..165B} observations were taken between 2008 June 16 and 2018 April 15, equating to 136.4 hours of live time. The data were collected in photon-counting mode and analyzed using the HEAsoft analysis package version 6.24\footnote{\url{https://heasarc.nasa.gov/lheasoft/}}. The background was estimated from five regions equidistant from PSR\,J2032+4127, and the flux was calculated using XSPEC. \section{Variability, Morphology and Spectrum} \label{sec:results} A new, spatially-unresolved, time-varying VHE gamma-ray source was detected at a position compatible with PSR\,J2032+4127 / MT91\,213. The source is named \vername\ and MAGIC\,J2032+4127\ in the VERITAS and MAGIC source catalogs, respectively. It is also spatially coincident with TeV\,J2032+4130, the previously detected extended VHE source, but offset from the centroid of the extended emission by approximately $10\arcmin$. 
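The reflected-region background estimation described above yields on- and off-source counts, from which IACT detection significances (such as those reported for the 2017 fall data) are conventionally computed with Eq.~17 of Li \& Ma (1983). The stdlib-only sketch below implements that formula; the example counts in the accompanying note are invented for illustration, and this is our own sketch rather than the collaborations' analysis code:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance of a gamma-ray excess.

    n_on:  counts in the source (on) region
    n_off: counts in the background (off) regions
    alpha: ratio of on to off exposures (e.g. 1/5 for five
           reflected background regions of equal size)
    """
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    # Sign convention: negative significance for a deficit.
    sign = 1.0 if n_on >= alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (term_on + term_off))
```

For example, 200 on counts against 100 off counts with `alpha = 1` correspond to an excess of 100 events at roughly $5.8\U{\sigma}$.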
The complete X-ray and gamma-ray light curves are shown in \autoref{fig:LC}. Following the initial detection of PSR\,J2032+4127 / MT91\,213\ in 2017 September by VERITAS and MAGIC \citep{2017ATel10810....1V} with a flux exceeding the baseline flux from TeV\,J2032+4130, gamma-ray emission was observed to increase up to the time of periastron (2017 November 13; MJD 58070), reaching a level a factor of ten higher than the baseline. Approximately one week after periastron, the flux decreased sharply to a level compatible with the baseline emission, before recovering to the periastron level a few days later. Further observations were conducted after periastron, but the combination of low source elevation, poor weather conditions, and brief exposures resulted in relatively poor flux measurements. In total, during the 2017 fall observations (MJD 57997--58110), VERITAS detected PSR\,J2032+4127 / MT91\,213\ with a significance of 21.5 standard deviations ($\sigma$) and MAGIC with a significance of $19.5\U{\sigma}$. \begin{figure*}[ht!] \gridline{\leftfig{fig1a.pdf}{\columnwidth}{(a) Full Dataset} \rightfig{fig1b.pdf}{\columnwidth}{(b) Periastron}} \caption{Upper panels (left axes) show the 0.3--10.0~keV background-subtracted \textit{Swift}-XRT energy-flux light curve (red circles) of PSR\,J2032+4127 / MT91\,213. For clarity, observations with exposures less than 1.4 ks are excluded from the plot. Lower panels show the $>200\U{GeV}$ photon-flux light curves from VERITAS (green triangles) and MAGIC (blue squares). The left plot shows the full light curve, while the right plot shows only the months around periastron. The horizontal solid lines indicate the average flux prior to 2017 for the respective experiments.
The solid gray lines (right axes) are the energy-flux light curve predictions from \cite{2018ApJ...857..123L} for X-rays and updated predictions from \cite{2017ApJ...836..241T} using the parameters from \cite{2018ApJ...857..123L} (Takata, private communication) for VHE gamma rays. Both models assume an inclination angle of $60\arcdeg$. The vertical gray dashed line indicates periastron. \label{fig:LC}} \end{figure*} \begin{figure*}[ht!] \centering \gridline{\leftfig{fig2a.pdf}{\columnwidth}{(a) VERITAS 2017 fall sky map} \rightfig{fig2b.pdf}{\columnwidth}{(b) MAGIC 2017 fall sky map}} \caption{Significance sky maps of the region around PSR\,J2032+4127 / MT91\,213\ showing both the VERITAS (left) and MAGIC (right) results for observations during 2017 fall. The position of PSR\,J2032+4127 / MT91\,213\ is shown as a black ``{\bf+}'', the centroid of the gamma-ray emission as a black ``{\bf$\circ$}'', the position and extension for the respective telescope's measurements of TeV\,J2032+4130\ are shown as a black ``{\bf$\times$}'' and a dashed line, and the position of Cygnus X-3 is shown with a white diamond. The white circle in the lower left hand corner is of radius 0\fdg1, the approximate point spread function for these measurements at $1\U{TeV}$. The wobble positions are shown as white ``{\bf$\times$}''. \label{fig:maps}} \end{figure*} \autoref{fig:maps} shows skymaps for the complete fall 2017 VHE datasets, revealing overlapping emission from TeV\,J2032+4130\ and PSR\,J2032+4127 / MT91\,213. For both the VERITAS and MAGIC data we fit the gamma-ray excess maps with a two-component model, consisting of a bivariate Gaussian function to represent the extended source and a symmetrical Gaussian function to model the unresolved emission at the location of the binary. The parameters of the extended source model, indicated by the dashed ellipses in \autoref{fig:maps}, were constrained to match those measured prior to the appearance of the binary. 
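The two-component spatial model described above can be sketched as follows. This is a minimal scalar illustration under our own parameterization (function names, argument conventions, and the idea of freezing the extended component are assumptions; the collaborations' fitting tools differ in detail):

```python
import math

def elliptical_gaussian(x, y, amp, x0, y0, sigma_major, sigma_minor, phi):
    """Bivariate Gaussian with distinct semi-axes, rotated by phi (radians)."""
    u = (x - x0) * math.cos(phi) + (y - y0) * math.sin(phi)
    v = -(x - x0) * math.sin(phi) + (y - y0) * math.cos(phi)
    return amp * math.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))

def excess_map_model(x, y, extended, binary):
    """Extended source (elliptical Gaussian, parameters held fixed to the
    pre-binary measurement) plus an unresolved binary (symmetric Gaussian).

    extended: (amp, x0, y0, sigma_major, sigma_minor, phi)
    binary:   (amp, x0, y0, sigma)
    """
    amp, x0, y0, sigma = binary
    return (elliptical_gaussian(x, y, *extended)
            + elliptical_gaussian(x, y, amp, x0, y0, sigma, sigma, 0.0))
```

In a fit of this form, the extended component's centroid, axes, and orientation would be frozen to the pre-2017 values, leaving the two amplitudes and the binary centroid free.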
For MAGIC, the extended source has a semi-major axis of ${0\fdg125}\pm{0\fdg01}$ and a semi-minor axis of ${0\fdg08}\pm{0\fdg01}$, centered on R.A.=$20^\mathrm{h} 31^\mathrm{m} 39.7^\mathrm{s} \pm 2^\mathrm{s}$, Dec=$41\arcdeg 34\arcmin 23\arcsec \pm 20\arcsec$, at an angle of $34\arcdeg \pm 2\arcdeg$ east of north. For VERITAS, the extended source parameters are those reported in \citet{2018ApJ...861..134A}: semi-major and semi-minor axes of ${0\fdg19}\pm{0\fdg02}_\mathrm{stat}\pm{0\fdg01}_\mathrm{sys}$ and ${0\fdg08}\pm{0\fdg01}_\mathrm{stat}\pm{0\fdg03}_\mathrm{sys}$, centered on R.A.=$20^\mathrm{h} 31^\mathrm{m} 33^\mathrm{s}\pm 2^\mathrm{s}_\mathrm{stat}\pm 2^\mathrm{s}_\mathrm{sys}$, Dec=$41\arcdeg 34\arcmin 38\arcsec \pm36\arcsec_\mathrm{stat} \pm36\arcsec_\mathrm{sys}$, with an orientation of $41\arcdeg \pm{4\arcdeg}_\mathrm{stat}\pm{1\arcdeg}_\mathrm{sys}$ east of north. The centroid of the unresolved component is measured to be at R.A.=$20^\mathrm{h} 32^\mathrm{m} 10^\mathrm{s} \pm 2^\mathrm{s}_\mathrm{stat} \pm 2^\mathrm{s}_\mathrm{sys}$, Dec=$41\arcdeg 27\arcmin 34\arcsec \pm 16\arcsec_\mathrm{stat} \pm 26\arcsec_\mathrm{sys}$ for VERITAS, and R.A.=$20^\mathrm{h} 32^\mathrm{m} 7^\mathrm{s} \pm 2^\mathrm{s}_\mathrm{stat}$, Dec=$41\arcdeg 28\arcmin 4\arcsec \pm 20\arcsec_\mathrm{stat}$ for MAGIC, which are consistent, within the measured uncertainties, with the location of PSR\,J2032+4127 / MT91\,213\ ($20^\mathrm{h} 32^\mathrm{m} 13.12^\mathrm{s} \pm 0.02^\mathrm{s}$, $+41\arcdeg 27\arcmin 24.34\arcsec \pm 0.03\arcsec$; \citealt{2018yCat.1345....0G}). The source spectrum (\autoref{fig:spectra}) is also formed of two emission components: steady, baseline emission from the extended source TeV\,J2032+4130\ and variable emission associated with the binary system. Prior to 2017, only the baseline emission component was present, while the 2017 data include contributions from both the baseline and the variable binary emission.
We performed a global spectral fit to the complete dataset, in which the pre-2017 observations were fit with a pure power law for the baseline, and the 2017 data were fit with the same power law, plus an additional component for the binary emission. Two models were tested for the binary emission: a pure power law and a power law with an exponential cutoff. The VERITAS data favor the cutoff model over the power law for the binary emission, with an F-test probability of 0.997 and a cutoff energy of $0.57\pm0.20\U{TeV}$. MAGIC observations also favor an exponential cutoff, with a probability of 0.993 and a cutoff energy of $1.40\pm0.97\U{TeV}$. Full details of the fit parameters are given in \autoref{tab:spectralFits}. We note that the only other gamma-ray binary to display a spectral cutoff in the VHE regime is LS\,5039, with a cutoff at $8.7\pm2.0\U{TeV}$ in the VHE high state, close to inferior conjunction \citep{2006A&A...460..743A}. The fit process was then repeated with the 2017 data divided into two periods, to search for spectral variation with orbital phase and/or flux state of the binary system. We define a high state (MJD 58057--58074 and 58080--58110), which covers the periods around periastron where the flux above $0.2\U{TeV}$ was greater than $1.0\times10^{-11}\UU{cm}{-2}\UU{s}{-1}$ (approximately five times greater than the baseline flux from TeV\,J2032+4130), and a low state, covering the 2017 observations prior to periastron (MJD 57928--58056). We performed a global fit to the datasets, with the high and low states fit with the baseline power law plus either a pure power law or a power law with an exponential cutoff. The VERITAS data favor a cutoff model in the low state, with an F-test probability of 0.999 and a cutoff energy of $0.33\pm0.13\U{TeV}$. MAGIC observations also favor a low-state cutoff model, with a probability of 0.980 and a cutoff energy of $0.58\pm0.33\U{TeV}$.
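The F-test probabilities quoted above follow directly from the $\chi^2$/dof values in \autoref{tab:spectralFits}. Since the exponential cutoff adds a single parameter, the $F(1,\nu)$ tail reduces to a Student-$t$ tail, which the stdlib-only sketch below evaluates numerically (function names are ours, not from the analysis software, and the quadrature step count is an arbitrary choice):

```python
import math

def t_cdf(t, nu, steps=2000):
    """CDF of Student's t with nu d.o.f., via Simpson integration of the pdf."""
    norm = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    pdf = lambda u: norm * (1.0 + u * u / nu) ** (-(nu + 1) / 2)
    h = t / steps
    s = pdf(0.0) + pdf(t)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(i * h)
    return 0.5 + s * h / 3.0  # integral over [0, t] plus the lower half

def ftest_prob(chi2_pl, dof_pl, chi2_plec, dof_plec):
    """Probability that the one extra PLEC parameter (the cutoff) is
    justified, using F(1, nu) = t(nu)**2 for a single added parameter."""
    assert dof_pl - dof_plec == 1
    fstat = (chi2_pl - chi2_plec) / (chi2_plec / dof_plec)
    return 2.0 * t_cdf(math.sqrt(fstat), dof_plec) - 1.0
```

Using the fall-2017 rows of \autoref{tab:spectralFits}, `ftest_prob(40.6, 7, 8.6, 6)` reproduces the VERITAS value of 0.997, and `ftest_prob(9.6, 12, 4.8, 11)` the MAGIC value of 0.993, to the quoted precision.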
For both observatories, the high-state data are well-fit by a pure power law and including a cutoff does not significantly change the quality of the fit. \begin{figure*}[ht!] \centering \gridline{\leftfig{fig3a.pdf}{\columnwidth}{(a) VERITAS 2017 fall average\label{fig:VERAverage}} \rightfig{fig3b.pdf}{\columnwidth}{(b) MAGIC 2017 fall average}} \gridline{\leftfig{fig3c.pdf}{\columnwidth}{(c) VERITAS high \& low states} \rightfig{fig3d.pdf}{\columnwidth}{(d) MAGIC high \& low states}} \caption{Spectral energy distributions for PSR\,J2032+4127 / MT91\,213\ and TeV\,J2032+4130\ from VERITAS (left) and MAGIC (right). The blue butterflies are the spectral fits to TeV\,J2032+4130. The red butterflies in the upper plots are fits to the 2017 fall data: the sum of a power-law fit to TeV\,J2032+4130\ and a cutoff power-law fit to PSR\,J2032+4127 / MT91\,213. In the bottom plots, orange is the fit to the low-state data (PSR\,J2032+4127 / MT91\,213\ is fit with a cutoff) while green represents the high-state data (PSR\,J2032+4127 / MT91\,213\ is fit with a power law). The fit parameters are given in \autoref{tab:spectralFits} and the time periods are defined in the text. \label{fig:spectra}} \end{figure*} \begin{deluxetable*}{lccccccc} \tablenum{1} \tablecaption{VHE gamma-ray spectral fit results. Each group of rows shows the result of a simultaneous fit of both the baseline emission from the region prior to the appearance of the binary, modeled as a power law (PL), and the sum of this baseline with a new component from the binary, modeled as either a power law or a power law with an exponential cutoff (PLEC). These fits were performed across the data periods defined in \autoref{sec:results}. In each row, the parameters shown correspond to the model component listed in \textbf{bold}, where N$_0$ is the differential flux normalization (calculated at the de-correlation energy E$_0$), $\Gamma$ is the spectral index, and E$_\mathrm{C}$ is the cutoff energy for PLEC models. 
The $\chi^2$ and degrees of freedom (dof) are calculated from the joint fit across the given data. \label{tab:spectralFits}} \tablehead{ \colhead{} & \colhead{Period} & \colhead{Model Components} & \colhead{N$_0$} & \colhead{E$_0$} & \colhead{$\Gamma$} & \colhead{E$_\mathrm{C}$} & \colhead{$\chi ^2$/dof}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{[cm$^{-2}$ s$^{-1}$ TeV$^{-1}$]} & \colhead{[TeV]} & \colhead{} & \colhead{[TeV]}& \colhead{} } \startdata \multirow{13}{2cm}{\centering VERITAS ($>220$ GeV)} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & (8.78 $\pm$ 2.56) $\times 10^{-15}$ & 3.47 & 2.14 $\pm$ 0.53 & - & \multirow{2}{*} {\writeChitworow{40.6/7}} \\ & Fall 2017& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & (1.53 $\pm$ 0.14) $\times 10^{-12}$ & 0.70 & 2.81 $\pm$ 0.09 & - & \\ \cline{2-8} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & (7.62 $\pm$ 1.51) $\times 10^{-15}$ & 4.18 & 2.14 $\pm$ 0.29 & - & \multirow{2}{*}{\writeChitworow{8.6/6}} \\ & Fall 2017& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & (8.04 $\pm$ 3.37) $\times 10^{-12}$ & 0.64 & 1.26 $\pm$ 0.45 & 0.57 $\pm$ 0.20 & \\ \cline{2-8} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & (4.65 $\pm$ 1.18) $\times 10^{-15}$ & 4.98 & 2.14 $\pm$ 0.85 & - & \multirow{3}{*}{\hspace{-.5pt}\writeChithreerow{26.8/10}} \\ &Low State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & (8.12 $\pm$ 3.77) $\times 10^{-14}$ & 1.76 & 2.86 $\pm$ 0.11 & - & \\ & High State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & (9.69 $\pm$ 1.75) $\times 10^{-13}$ & 1.17 & 2.72 $\pm$ 0.15 & - & \\ \cline{2-8} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & (1.23 $\pm$ 0.24) $\times 10^{-14}$ & 3.43 & 2.14 $\pm$ 0.28 & - & \multirow{3}{*}{\writeChithreerow{7.9/9}} \\ & Low State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & (1.63 $\pm$ 1.12) $\times 10^{-11}$ & 0.56 & 0.65 $\pm$ 0.75 & 0.33 $\pm$ 0.13 & \\ & High State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$}& 
(1.45 $\pm$ 0.18) $\times 10^{-12}$ & 1.00 & 2.73 $\pm$ 0.15 & - & \\ \cline{2-8} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & (1.26 $\pm$ 0.25) $\times 10^{-14}$ & 3.39 & 2.14 $\pm$ 0.28 & - & \multirow{3}{*}{\writeChithreerow{7.2/8}} \\ & Low State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & (1.64 $\pm$ 1.12) $\times 10^{-11}$ & 0.56 & 0.65 $\pm$ 0.75 & 0.33 $\pm$ 0.13 & \\ & High State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & (1.20 $\pm$ 0.41) $\times 10^{-11}$ & 0.51 & 2.37 $\pm$ 0.50 & 2.39 $\pm$ 3.23 & \\ \hline \multirow{14}{2cm}{\centering MAGIC ($>80$ GeV)} &Pre-2017& \textbf{PL$_\mathrm{baseline}$} & $(2.04\pm0.63)\times 10^{-14}$&3.50&$2.23\pm0.17$&-& \multirow{2}{*}{\writeChitworow{9.6/12}}\\ &Fall 2017& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & $(1.65\pm0.33)\times 10^{-12}$&0.70&$2.61\pm0.18$&-& \\ \cline{2-8} &Pre-2017& \textbf{PL$_\mathrm{baseline}$} & $(2.20\pm0.64)\times 10^{-14}$&3.50&$2.17\pm0.26$&-& \multirow{2}{*}{\writeChitworow{4.8/11}}\\ &Fall 2017& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & $(3.77\pm1.68)\times 10^{-12}$&0.70&$1.74\pm0.37$&$1.40\pm0.97$& \\ \cline{2-8} &Pre-2017& \textbf{PL$_\mathrm{baseline}$} & $(2.30\pm0.67)\times 10^{-14}$&3.50&$2.15\pm0.19$&-&\multirow{3}{*}{\writeChithreerow{4.4/15}} \\ &Low State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & $(9.84\pm3.41)\times 10^{-13}$&0.70&$2.57\pm0.26$&-&\\ &High State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & $(3.69\pm0.64)\times 10^{-12}$&0.70&$2.17\pm0.23$&-& \\ \cline{2-8} &Pre-2017& \textbf{PL$_\mathrm{baseline}$} & $(2.63\pm0.60)\times 10^{-14}$&3.50&$2.06\pm0.17$&-&\multirow{3}{*}{\writeChithreerow{3.0/14}} \\ &Low State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & $(5.11\pm3.61)\times 10^{-12}$&0.70&$1.55\pm0.61$&$0.58\pm0.33$&\\ &High State& PL$_\mathrm{baseline}$ + \textbf{PL$_\mathrm{binary}$} & $(1.65\pm0.14)\times 10^{-12}$&0.70&$2.20\pm0.40$&-& \\ 
\cline{2-8} & Pre-2017& \textbf{PL$_\mathrm{baseline}$} & \multicolumn{5}{c}{\multirow{3}{*}{\textit{Insufficient data to discriminate between PL and PLEC in High State.}}} \\ & Low State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & \\ & High State& PL$_\mathrm{baseline}$ + \textbf{PLEC$_\mathrm{binary}$} & \\ \enddata \end{deluxetable*} \section{Discussion} \label{sec:discussion} PSR\,J2032+4127 / MT91\,213\ is the second TeV gamma-ray binary system to be detected in which the nature of the compact object is clearly established. Non-thermal emission from these systems likely results from the interaction of the pulsar wind with the wind and/or disk of the Be star \citep{1997ApJ...477..439T, 1999APh....10...31K, 2013AnARv..21...64D}. Particles are accelerated at the shock which forms between the pulsar and Be star winds. These subsequently produce synchrotron emission from radio to X-ray bands and inverse Compton emission at TeV energies. Numerous competing factors play a role in creating and modulating the observed emission. These include the efficiency of inverse Compton production and the degree of photon-photon absorption, which both depend upon the geometrical properties of the system with respect to the line of sight and the intensity, wavelength and spatial distribution of target photon fields \citep{2005ApJ...634L..81B}. Additional factors include: the position of the pulsar in relation to structures in the stellar wind \citep{2018MNRAS.474L..22P}; the bulk motion and cooling of the post-shocked material \citep{2006A&A...456..801D}; the structure of the magnetic field around the star \citep{2005MNRAS.356..711S}; and the degree of magnetization of the pulsar wind, and its evolution with radial distance from the pulsar \citep{2009ApJ...702..100T}. Isotropized pair cascades, triggered by misaligned VHE photons which would not otherwise be observed, can also contribute to the emission \citep{1997AnA...322..523B,2014JHEAp...3...18S}. 
Finally, interactions with the material and radiation of a circumstellar disk, the defining feature of the Be stellar class, may also modulate the X-ray and gamma-ray fluxes \citep{2008MNRAS.385.2279S}. Modeling the time-dependent broadband emission is therefore complex and challenging. \citet{2017ApJ...836..241T} have presented a model which explains the increasing X-ray flux prior to periastron as the result of the radial dependence of the pulsar wind magnetization, and the X-ray suppression at periastron as due to Doppler boosting caused by the bulk motion of the post-shocked flow, naturally leading to an emission light curve which is asymmetric with respect to periastron. A recently revised version of their model predictions is given in \citet{2018ApJ...857..123L}, and also in \autoref{fig:LC}. The model prediction matches the early part of the XRT light curve reasonably well, when scaled by a factor of 0.5, but is unable to reproduce the rapid brightening around MJD 58080, when PSR\,J2032+4127 / MT91\,213\ was at superior conjunction. This feature may be explained, at least in part, by the interaction of the pulsar with the circumstellar disk of the Be star, which could be confirmed by observations of radio pulsations during the periastron passage. Alternatively, as discussed in \citet{2018MNRAS.474L..22P}, it may be caused by geometrical effects associated with the orientation of the stellar disk with respect to the pulsar's orbit. \citet{2018JPhG...45a5201B} also calculated the gamma-ray emission from the system, including a detailed treatment of the pair cascades triggered by the absorption of primary gamma rays, and the subsequent production of inverse Compton emission. They do not calculate a detailed light curve, but conclude that the binary emission may dominate the overall VHE flux, becoming comparable to, or exceeding, the steady flux from TeV\,J2032+4130\ for a few weeks around periastron and superior conjunction.
The predicted elevated flux close to periastron of $\sim1.6\times10^{-12}\U{erg}\UU{cm}{-2}\UU{s}{-1}$ at $1\U{TeV}$ is similar to the high-state emission levels reported in this work. We also note that the VHE efficiency (L$_{>200\U{GeV}}$/$\dot{\mathrm{E}}=1.4\%$) for PSR\,J2032+4127 / MT91\,213\ is approximately the same as that of PSR\,B1259-63 / LS\,2883. In contrast, the GeV efficiency of PSR\,J2032+4127 / MT91\,213\ is significantly lower than that of PSR\,B1259-63 / LS\,2883, which can exceed 100\% \citep{2018ApJ...857..123L, 2018ApJ...863...27J}. A distinctive feature observed in the VHE light curve is a sharp flux drop around seven days after periastron, lasting just a few days. As noted in \citet{2017ApJ...836..241T}, a similar dip has been seen in the light curve of PSR\,B1259-63 / LS\,2883, which \citet{2017ApJ...837..175S} attributed to photon-photon absorption. This effect is predicted to be strongest when the interaction angle between the photons is optimal and the gamma-ray photons pass through the densest photon field, which occurs around superior conjunction, 5--15 days after periastron for PSR\,J2032+4127 / MT91\,213. Based on the detailed sampling of the VHE and X-ray light curves reported here, coupled with the measurement of an unexpected low-energy spectral cutoff in the VHE low state, it is clear that the existing models will require significant revision. Analysis of the pulsar timing evolution over periastron will provide important additional input, including more accurate measurements of the system geometry. It will also allow for more sensitive searches for GeV emission in the \textit{Fermi}-LAT data, with the dominant magnetospheric emission from the pulsar removed by a temporally-gated analysis. Finally, it is interesting to reconsider the properties of the steady VHE source, TeV\,J2032+4130, in the light of these results.
As noted in \citet{2014ApJ...783...16A}, if we assume that TeV\,J2032+4130\ is the pulsar wind nebula of PSR\,J2032+4127, then PSR\,J2032+4127\ is one of the oldest and weakest pulsars with a nebula seen in both X-ray and VHE gamma rays. In a recent population study, \citet{2017arXiv170208280H} derive empirical relations between VHE luminosity and pulsar spin-down energy, and also between PWN radius and characteristic age. For PSR\,J2032+4127, these relations predict a radius of over $20\U{pc}$ (compared to a measured extent of $4.7\times2.0\U{pc}$), and a TeV luminosity (1 to 10 TeV) of $2\times10^{33}\U{erg}\UU{s}{-1}$ (compared to the measured value of $8\times10^{32}\U{erg}\UU{s}{-1}$). However, the measured properties of VHE PWN display a large intrinsic scatter, and the physical size of the nebula can be strongly modified by the local interstellar environment. We conclude that PSR\,J2032+4127\ remains a plausible candidate for the power source driving TeV\,J2032+4130\ and note that it may be worthwhile to search for extended TeV nebulae around other known TeV binary systems -- although the formation of TeV\,J2032+4130\ may only be possible due to the exceptionally long orbital period and large eccentricity of the binary system, which allows PSR\,J2032+4127\ to spend much of its orbit effectively as an isolated pulsar. X-ray and gamma-ray monitoring of PSR\,J2032+4127 / MT91\,213\ will continue. PSR\,B1259-63 / LS\,2883 produces bright gamma-ray flares in the days and months after periastron, and it ejects rapidly moving plasma clumps generated by the interaction of the pulsar with the stellar disk \citep{2015ApJ...806..192P}. Similar phenomena may occur in the case of PSR\,J2032+4127 / MT91\,213. The ongoing observing campaigns therefore provide a rare opportunity to completely sample the high-energy behavior of this system around periastron, which will not be repeated until approximately 2067. \acknowledgments VERITAS is supported by the U.S. 
Department of Energy, the U.S. National Science Foundation, the Smithsonian Institution, and by NSERC in Canada. We acknowledge the excellent work of the support staff at the Fred Lawrence Whipple Observatory and at collaborating institutions in the construction and operation of VERITAS. We acknowledge \textit{Fermi} and \textit{Swift} GI program grants 80NSSC17K0648 and 80NSSC17K0314. The MAGIC Collaboration thanks the funding agencies and institutions listed in: \noindent \url{https://magic.mpp.mpg.de/ack_201805} \vspace{5mm} \facilities{\textit{Swift}-XRT, VERITAS, MAGIC} \software{astropy \citep{2013A&A...558A..33A}, ROOT \citep{ROOT}, XSPEC \citep{Arnaud96}}
See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. 
[21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. 
\begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. 
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ packages are loaded is unimportant. 
The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ is also loaded; instead, add the option to \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. \head{Sorting and compressing citations} Do not use the \texttt{cite} package with \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\texttt{.cfg} which is read in after the main package file. 
\head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separaters; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description} \end{document}
\section{Introduction}\label{sec:intro} To provide safer, cleaner, and more efficient transportation is the promise of autonomous driving technologies~\cite{anderson2014autonomous}. Thanks to sustained efforts in both academia and industry to pursue this goal, advances in perception, decision-making/planning, control theory, and computing systems have made fully autonomous driving possible \cite{buehler2009darpa}. However, before autonomous vehicles can be deployed at scale, their control systems need to be tested and validated to guarantee safety and performance when operating in various traffic environments, which remains a challenging problem \cite{Kalra16safe}. On the one hand, simulation tools can be used for quick and safe virtual tests of these systems and can reduce the time and cost of road tests. On the other hand, the reliability of virtual tests depends on the fidelity of the simulations in terms of modeling traffic scenarios. In the near to medium term, autonomous vehicles will operate in traffic together with human-driven vehicles, and interactions between autonomous vehicles and human-driven vehicles will constantly occur. Among different traffic scenarios, the interactive behavior of vehicles at intersections may be particularly complex. An autonomous driving system must account for these interactions to be able to operate safely at an intersection. By type of traffic control, intersections can be classified as signal-controlled, ``stop'' or ``yield'' sign-controlled, and uncontrolled \cite{UIIG}. Uncontrolled intersections are intersections without traffic signals or signs, and are common in both urban and rural settings around the world \cite{bjorklund2005driver,liu2014analyzing,patil2016microscopic}. According to the U.S. National Highway Traffic Safety Administration's fatality analysis report, more than one fourth of fatal crashes in the U.S.
occur at or are related to intersections, and about $50\%$ of these occur at uncontrolled intersections \cite{NHTSA}. At an uncontrolled intersection, due to the lack of guidance from traffic signals or signs, drivers or automated driving systems need to decide whether, when, and how to enter and pass through the intersection on their own; in this case, accounting for the interactions among vehicles is particularly important. Failing to account for these interactions may cause deadlocks if the vehicles drive overly conservatively (they may get stuck and never pass through the intersection) or collisions if they drive overly aggressively. Advanced strategies that have been proposed for handling interactive traffic at intersections include cooperative driving, where vehicles cooperate with each other and with road infrastructure to resolve traffic conflicts. They may cooperate through vehicle-to-vehicle negotiations \cite{carlino2013auction,de2013autonomous,ahmane2013modeling}, or through coordination by a centralized traffic ``manager'' in the approach called ``autonomous intersection management'' \cite{dresner2008multiagent,wu2012cooperative,lee2012development}. Although strategies based on cooperative driving have been shown to be capable of improving intersection traffic safety and efficiency, they rely on dense penetration of vehicle-to-vehicle and/or vehicle-to-infrastructure communications as well as of autonomous driving systems, which will likely not be the case in the near to medium term. Alternative strategies focus on individual control of the autonomous ego vehicle.
To account for the interactions among vehicles, approaches based on, e.g., online verification using reachability analysis \cite{althoff2009model,althoff2014online}, receding-horizon optimization \cite{liniger2015optimization,schwarting2017safe}, learning \cite{you2019advanced}, and game theory \cite{mandiau2008behaviour,sadigh2016planning,bahram2016game,yu2018human,dreves2018generalized}, may be used. Although these approaches establish theoretical foundations for creating autonomous vehicles that are capable of handling interactive traffic at uncontrolled intersections, they must be calibrated and validated to yield control systems that deliver the promised safety and performance. Simulation tools used for virtual tests of these control systems must be capable of representing the interactive behavior of vehicles with reasonable fidelity, which motivates the development of approaches to modeling vehicle interactions. In this paper, we propose a novel game-theoretic vehicle interaction modeling approach for uncontrolled intersections. Game theory is in general a suitable tool for modeling strategic interaction between rational decision-makers \cite{myerson2013game}, and has been exploited for modeling driver/vehicle interactions at intersections by several researchers. In \cite{mandiau2008behaviour}, the vehicle-to-vehicle interactions at an intersection are modeled based on normal-form games -- the vehicles select actions between ``Stop'' and ``Go'' based on their payoff matrices. The performance of the approach in \cite{mandiau2008behaviour} is limited by the small number of action choices (i.e., two) and the fact that the dynamic behavior of the vehicles is not explicitly taken into account when the payoff matrices are designed. For instance, as the number of interacting vehicles increases to $6$, almost half of the simulation runs following the approach of \cite{mandiau2008behaviour} lead to deadlocks.
In \cite{sadigh2016planning}, the interactions between a human-driven vehicle and an autonomous vehicle are modeled based on a two-player game formulation, where vehicle dynamics are explicitly accounted for. Results are reported for a two-vehicle traffic scenario at a one-lane four-way intersection where both vehicles go straight to cross the intersection. Extensions of this approach to more vehicles have not been reported and may not be straightforward due to both theoretical limitations and computational challenges. In our previous work \cite{oyler2016game,li2016hierarchical,li2018game_2}, a game-theoretic framework for modeling vehicle-to-vehicle interactions in multi-vehicle highway traffic scenarios has been proposed. The framework is based on the application of level-$\mathcal{K}$ game theory \cite{nagel1995unraveling,stahl1995players} and explicitly takes into account the dynamic behavior of the vehicles. The vehicle driving policies are determined using reinforcement learning. Once the policies have been obtained offline, highway traffic scenarios with a possibly large number of interacting vehicles can be modeled with minimal online computational effort. Such a level-$\mathcal{K}$ game-theoretic framework has also been extended to model the interactions between two vehicles at an uncontrolled two-lane four-way intersection in \cite{li2018game}. However, generalizations to more complex intersection traffic scenarios, e.g., with more than $2$ interacting vehicles and at intersections of various configurations, have not been addressed. The contributions of the present paper are: 1) We propose a novel framework based on a formulation of dynamic/sequential leader-follower games with multiple concurrent leader-follower pairs and receding-horizon optimization for modeling the interactive behavior of vehicles at uncontrolled intersections.
The framework explicitly accounts for the dynamic behavior of the vehicles, decision-making delays, and common traffic rules. It is generalizable to traffic scenarios with more than $2$ interacting vehicles (results with up to $10$ vehicles are reported) and to intersections of various configurations. 2) We describe an intersection model that parameterizes the intersection layouts and geometries so that uncontrolled intersections with a wide range of configurations can be modeled using a finite set of parameters. 3) We apply our interaction modeling approach to the intersection model to simulate the interactive behavior of vehicles in various uncontrolled intersection traffic scenarios (with various numbers of interacting vehicles, intersection layouts and geometries, etc.). 4) Based on simulation results and statistical evaluations, we show that the model exhibits reasonable behavior expected in traffic -- it can reproduce scenarios extracted from real-world traffic data and has reasonable performance in resolving traffic conflicts in complex intersection traffic scenarios. Furthermore, the model demonstrates a manageable increase in computational complexity as the number of interacting vehicles increases. 5) We also describe a generalized version of the interactive decision-making model of vehicles proposed in \cite{li2018game} based on level-$\mathcal{K}$ games, simulate the interactions between the model proposed in this paper and this alternative model, and demonstrate that the model proposed in this paper is capable of resolving conflicts with different drivers. This paper is organized as follows: In Section~\ref{sec:lf_game}, we present our game-theoretic approach to model interactive decision-making of vehicles at uncontrolled intersections. In Section~\ref{sec:intersection_path}, we describe our intersection model with parameterized layouts and geometries, to which our vehicle interaction modeling approach is applied.
In Section~\ref{sec:kinematics_rewards}, we introduce the kinematics model to represent vehicles' dynamic behavior at uncontrolled intersections and the reward function design to represent drivers' decision-making objectives. In Section~\ref{sec:additions}, we incorporate several additional considerations in our model to improve its fidelity in imitating the decision-making processes of human drivers. In Section~\ref{sec:levelK}, we consider a previously proposed interactive decision-making model of vehicles based on level-$\mathcal{K}$ game theory. In Section~\ref{sec:simulations}, we run multiple simulation case studies to comprehensively illustrate and evaluate our proposed framework for modeling vehicle interactions at uncontrolled intersections. The paper is summarized and concluded in Section~\ref{sec:sum}. \section{Vehicle interaction modeling based on leader-follower games}\label{sec:lf_game} In this section, we introduce our game-theoretic approach to model interactive decision-making of vehicles at uncontrolled intersections. We first describe the logic for leader-follower role assignment to vehicles at uncontrolled intersections in Section~\ref{sec:leader_follower}, which is the foundation for formulating our leader-follower games. We then describe our vehicle interactive decision-making model based on leader-follower games in two-vehicle interaction settings in Section~\ref{sec:game_2}, and generalize it to multi-vehicle interactions based on our proposed ``pairwise leader-follower games'' in Section~\ref{sec:game_n}. \subsection{Leader-follower role assignment to vehicles at uncontrolled intersections}\label{sec:leader_follower} Human drivers can usually resolve traffic conflicts at uncontrolled intersections by following the ``right-of-way'' rules \cite{NHTSA_2}. The right-of-way rules help the drivers decide who proceeds first at an intersection.
Motivated by the right-of-way rules, we assign a leader-follower relationship to each pair of vehicles (denoted by $(i,j)$) at an intersection based on the following logic: \begin{enumerate}[(1)] \item If vehicles $i,j$ have both entered the intersection, the vehicle with a strictly smaller signed distance to the exit of the intersection is the leader. \item If at most one of vehicles $i,j$ has entered the intersection, the vehicle with a strictly smaller signed distance to the entrance of the intersection is the leader. \item If no leader-follower relationship has been assigned based on (1) or (2), then the vehicle on the right is the leader when the two vehicles are coming from adjacent road arms. \item If no leader-follower relationship has been assigned based on (1), (2), or (3), then the vehicle going straight is the leader when the other vehicle is making a turn. \end{enumerate} We note that if a vehicle has entered (resp. exited) the intersection, then its signed distance to the entrance (resp. exit) of the intersection is the negative of the corresponding distance. The entrance and exit points of an intersection (see Fig.~\ref{fig:vehicle_kinematics}) are defined in Section~\ref{sec:intersection_path} for arbitrary intersection layouts and geometries. If vehicle $i$ is the leader of the pair $(i,j)$, we write $i \prec j$; if $i$ is not the leader (i.e., either $j$ is the leader or no leader-follower relationship has been assigned based on (1)-(4)), we write $i \succeq j$. We note that the relations $\prec$ and $\succeq$ are not (pre)orders, as they do not have the transitivity property: $i \prec j$ and $j \prec k$ (resp. $i \succeq j$ and $j \succeq k$) do not necessarily imply $i \prec k$ (resp. $i \succeq k$). This can be seen by considering the traffic scenario where four vehicles $i$, $j$, $k$, and $l$ coming from different road arms arrive at the entrances of a four-way intersection at the same time.
Then, based on the above role assignment logic we have $i \prec j$, $j \prec k$, $k \prec l$, and $l \prec i$. Indeed, this scenario and similar scenarios where such a cyclic pattern occurs are challenging scenarios for both human drivers and autonomous vehicles -- they may lead to deadlocks, i.e., no one decides to enter the intersection or everyone gets stuck in the middle of the intersection. The leader-follower role assignment is presented formally as an algorithm in Section~\ref{sec:kinematics_rewards}, which incorporates the above logic with vehicle kinematics, intersection layouts, perception imperfections, etc. \subsection{Leader-follower game to model two-vehicle interactions}\label{sec:game_2} Once a leader-follower relationship has been assigned to a pair of vehicles $(i,j)$, we use a leader-follower game (also referred to as a Stackelberg game) to model their interactive decision-making. We choose to use the Stackelberg model because it incorporates the asymmetric roles of the two players and grants one player advantages over the other \cite{basar1999dynamic}, which can be used to account for common traffic rules such as that a car arriving earlier to the intersection typically has the right of way over a car arriving later to the intersection. Let $\gamma_{l}$ (resp. $\gamma_{f}$) denote an action of the leader (resp. follower), taking values in an action set $\Gamma_{l}$ ($\Gamma_{f}$). Either player makes decisions on its action choices to maximize a reward function, denoted by $\mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f})$ for the leader and by $\mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f})$ for the follower, where ${\bf s} \in {\bf S}$ denotes the present state in which the two players are making their decisions. When modeling the interactions of two vehicles, ${\bf s}$ contains the states of these two vehicles, i.e., ${\bf s} = (s_i,s_j)$, where $s_i$ (resp. $s_j$) denotes the state of vehicle $i$ (resp. 
vehicle $j$) (its detailed definition depends on the vehicle kinematics model to be used, which is introduced in Section~\ref{sec:kinematics}). The dependence of either player's reward on both players' states and actions reflects the interactive nature of such a decision-making process. Following the concept of Stackelberg equilibrium \cite{basar1999dynamic}, one could model the leader-follower decision-making process as follows: \begin{align}\label{equ:leader_follower_2_1} \mathbb{Q}_{l}'({\bf s},\gamma_{l}) &:= \min_{\gamma_{f} \in \Gamma_{f}^{*}{\mkern-3mu'}({\bf s},\gamma_{l})} \mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\[-1pt] \Gamma_{f}^{*}{\mkern-2mu'}({\bf s},\gamma_{l}) &:= \big\{\gamma_{f}' \in \Gamma_{f} : \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}') \ge \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\ & \quad\quad\quad\quad\quad\quad\;\, \forall \gamma_{f} \in \Gamma_{f} \big\}, \nonumber \\[2pt] \gamma_{l}^* &\in \argmax_{\gamma_{l} \in \Gamma_{l}}\, \mathbb{Q}_{l}'({\bf s},\gamma_{l}), \nonumber \\ \gamma_{f}^* &\in \argmax_{\gamma_{f} \in \Gamma_{f}}\, \mathbb{R}_{f}({\bf s},\gamma^*_{l},\gamma_{f}). \end{align} The actions of the leader and the follower are interdependent. In particular, the leader has the so-called ``first mover advantage'': the leader controls the follower's set of rational actions $\Gamma_{f}^{*}{\mkern-2mu'}({\bf s},\gamma_{l})$ through the leader's own action choice $\gamma_{l}$. In such a game formulation, it is assumed that the leader is aware that the follower is capable of observing the leader's action $\gamma_{l}$ before selecting its own action $\gamma_{f}$. However, in the setting of drivers making decisions in traffic, each driver responds to the actions of other drivers with a reaction delay. More specifically, each driver can only observe the actions of other drivers that are applied at time step $t$ and take them into account in his/her own decision-making at the next time step $t+1$. 
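For concreteness, the classical formulation \eqref{equ:leader_follower_2_1} can be sketched as follows over finite action sets. This is a minimal illustration only: the reward callables \texttt{R\_l} and \texttt{R\_f} are hypothetical placeholders, not the reward design introduced in Section~\ref{sec:rewards}.

```python
def stackelberg(R_l, R_f, actions_l, actions_f, s=None):
    """Sketch of the classical Stackelberg solution: the leader
    anticipates the follower's rational (best-response) actions and
    evaluates each of its own actions pessimistically over that set."""
    def rational_responses(g_l):
        # Follower's best-response set to a fixed leader action g_l
        best = max(R_f(s, g_l, g) for g in actions_f)
        return [g for g in actions_f if R_f(s, g_l, g) == best]

    # Leader: maximize the worst-case reward over the follower's responses
    g_l_star = max(actions_l,
                   key=lambda g_l: min(R_l(s, g_l, g_f)
                                       for g_f in rational_responses(g_l)))
    # Follower: best response to the observed leader action
    g_f_star = max(actions_f, key=lambda g_f: R_f(s, g_l_star, g_f))
    return g_l_star, g_f_star
```

With a toy payoff table in which the follower's best response to the leader's highest-reward action hurts the leader, the sketch reproduces the ``first mover advantage'': the leader steers the follower toward a favorable response by its own action choice.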
From the follower's standpoint, since it cannot instantly observe and respond to the instant action of the leader, to secure its possible rewards against the uncertain action choices of the leader, we assume that it applies a ``maximin'' strategy, i.e., \begin{align}\label{equ:follower_2} \mathbb{Q}_{f}({\bf s},\gamma_{f}) &:= \min_{\gamma_{l} \in \Gamma_{l}} \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\ \gamma_{f}^* &\in \argmax_{\gamma_{f} \in \Gamma_{f}}\, \mathbb{Q}_{f}({\bf s},\gamma_{f}). \end{align} We assume that the leader is aware that the follower is using such a maximin strategy to secure its rewards. Taking this awareness into account, the leader makes rational decisions based on: \begin{align}\label{equ:leader_2} \mathbb{Q}_{l}({\bf s},\gamma_{l}) &:= \min_{\gamma_{f} \in \Gamma_{f}^*({\bf s})} \mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\ \Gamma_{f}^*({\bf s}) &:= \big\{\gamma_{f}' \in \Gamma_{f} : \mathbb{Q}_{f}({\bf s},\gamma_{f}') \ge \mathbb{Q}_{f}({\bf s},\gamma_{f}), \forall \gamma_{f} \in \Gamma_{f} \big\}, \nonumber \\[4pt] \gamma_{l}^* &\in \argmax_{\gamma_{l} \in \Gamma_{l}}\, \mathbb{Q}_{l}({\bf s},\gamma_{l}). \end{align} We now make assumptions on the uniqueness of maximizers as follows: \begin{align}\label{equ:uniqueness} &\,\, \forall\, ({\bf s},\gamma_{f}) \in {\bf S} \times \Gamma_{f},\, \exists !\, \gamma_{l}' \in \Gamma_{l} \text{ such that } \nonumber \\ &\,\, \mathbb{R}_{l}({\bf s},\gamma_{l}',\gamma_{f}) \ge \mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f}),\, \forall\, \gamma_{l} \in \Gamma_{l}; \nonumber \\[4pt] &\,\, \forall\, {\bf s} \in {\bf S},\, \exists !\, \gamma_{f}' \in \Gamma_{f} \text{ such that } \nonumber \\ & \min_{\gamma_{l} \in \Gamma_{l}} \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}') \ge \min_{\gamma_{l} \in \Gamma_{l}} \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}),\, \forall\, \gamma_{f} \in \Gamma_{f}. 
\end{align} Assumption \eqref{equ:uniqueness} means that at each traffic state ${\bf s}$, for either player ($l$ or $f$), there is one action that is strictly better than the others to use. Although not strictly required in the leader-follower decision-making process described by \eqref{equ:follower_2} and \eqref{equ:leader_2}, assumption \eqref{equ:uniqueness} can simplify the mathematical expression of \eqref{equ:follower_2}-\eqref{equ:leader_2}, i.e., \begin{align}\label{equ:leader_follower_2_2} \mathbb{Q}_{l}({\bf s},\gamma_{l}) &= \mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f}^*), \nonumber \\[2pt] \mathbb{Q}_{f}({\bf s},\gamma_{f}) &= \min_{\gamma_{l} \in \Gamma_{l}} \mathbb{R}_{f}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\ \gamma_{l}^* &= \argmax_{\gamma_{l} \in \Gamma_{l}}\, \mathbb{Q}_{l}({\bf s},\gamma_{l}), \nonumber \\ \gamma_{f}^* &= \argmax_{\gamma_{f} \in \Gamma_{f}}\, \mathbb{Q}_{f}({\bf s},\gamma_{f}). \end{align} We also note that based on our reward function design that is introduced in Section~\ref{sec:rewards}, assumption \eqref{equ:uniqueness} holds. Note that in the game formulation \eqref{equ:follower_2} and \eqref{equ:leader_2} (and in \eqref{equ:leader_follower_2_2}), the leader has been given the ``first mover advantage'' as the follower applies a maximin strategy, a conservative strategy assuming worst-case scenarios, whereas the leader is able to select comparatively more aggressive actions taking advantage of its awareness of the follower's maximin strategy. An alternative formulation is to let the leader apply a conservative maximin strategy, which corresponds to an assumption that the leader knows that it cannot control the follower's set of rational actions $\Gamma_{f}^*({\bf s})$ through its instant action choice $\gamma_{l}$ since $\gamma_{l}$ is not instantly observable to the follower. 
Then, \eqref{equ:leader_follower_2_1} becomes \begin{align}\label{equ:leader_follower_2_3} \mathbb{Q}_{l}'({\bf s},\gamma_{l}) &= \min_{\gamma_{f} \in \Gamma_{f}} \mathbb{R}_{l}({\bf s},\gamma_{l},\gamma_{f}), \nonumber \\ \gamma_{l}^* &= \argmax_{\gamma_{l} \in \Gamma_{l}}\, \mathbb{Q}_{l}'({\bf s},\gamma_{l}), \nonumber \\ \gamma_{f}^* &= \argmax_{\gamma_{f} \in \Gamma_{f}}\, \mathbb{R}_{f}({\bf s},\gamma^*_{l},\gamma_{f}). \end{align} It is clear that \eqref{equ:leader_follower_2_3} is equal to \eqref{equ:leader_follower_2_2} up to a switch of the roles ``leader'' and ``follower.'' Based on the role assignment criterion introduced in Section~\ref{sec:leader_follower}, we choose to use formulation \eqref{equ:leader_follower_2_2} as it agrees with the common traffic rule that a leader, e.g., a car arriving earlier to the intersection, typically has the right of way over a follower, e.g., a car arriving later to the intersection. \subsection{Pairwise leader-follower game to model multi-vehicle interactions}\label{sec:game_n} This section discusses a computationally scalable generalization of the vehicle interaction modeling framework based on leader-follower games proposed in Section~\ref{sec:game_2} to intersection traffic scenarios with $n$ interacting vehicles, where $n \ge 2$. Although $2$-player leader-follower games may be generalized to $n$-player games through considering a multi-level decision-making hierarchy, e.g., player~$k$ being the leader of players~$k+1,\cdots,n$ and being the follower of players~$1,\cdots,k-1$ for every $k \in \{1,\cdots,n\}$, or allowing a level to accommodate multiple players, e.g., players~$2,\cdots,n$ being the followers of player~$1$ and applying Nash equilibrium-based strategies among themselves, such generalizations require exponentially increased computational efforts to solve for solutions as the number of players increases. 
For instance, a Stackelberg equilibrium solution can be difficult to compute when $n > 3$ \cite{yoo2018predictive}. Therefore, to handle intersection traffic scenarios with a possibly large number of traffic participants, we propose an alternative generalization approach. Our approach relies on pairwise leader-follower relationships defined for all vehicle pairs at the intersection, and each vehicle's decision-making accounts for all the pairwise leader-follower relationships related to itself. In particular, vehicle $i$ makes decisions on its action choices according to: \begin{align}\label{equ:leader_follower_n_1} \underline{\mathbb{Q}}_{i}({\bf s}_{\text{traffic}},\gamma_{i}) &:= \min_{j \in \{1,\cdots,n\},\, j \neq i} \mathbb{Q}_{i,j}({\bf s}_{i,j},\gamma_{i}), \nonumber \\ \mathbb{Q}_{i,j}({\bf s}_{i,j},\gamma_{i}) &:= \begin{cases} \mathbb{Q}_{l}({\bf s}_{i,j},\gamma_{i}) & \text{if } i \prec j, \nonumber \\ \mathbb{Q}_{f}({\bf s}_{i,j},\gamma_{i}) & \text{if } i \succeq j, \end{cases} \\[2pt] \gamma_{i}^* &\in \argmax_{\gamma_i \in \Gamma_i}\, \underline{\mathbb{Q}}_{i}({\bf s}_{\text{traffic}},\gamma_{i}), \end{align} where $\mathbb{Q}_{l}({\bf s}_{i,j},\gamma_{i})$ (resp. $\mathbb{Q}_{f}({\bf s}_{i,j},\gamma_{i})$) is defined in \eqref{equ:leader_follower_2_2} with player $i$ being the leader $l$ (resp. the follower $f$); the traffic state ${\bf s}_{\text{traffic}}$ contains the states of all interacting vehicles at the intersection, i.e., ${\bf s}_{\text{traffic}} = (s_1,\cdots,s_n)$; and ${\bf s}_{i,j} = (s_i,s_j)$ represents the state of the vehicle pair $(i,j)$. 
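To make the decision rules concrete, the following sketch implements the pairwise solution \eqref{equ:leader_follower_2_2} and the aggregation rule \eqref{equ:leader_follower_n_1}. All reward tables, action names, and numbers are illustrative placeholders rather than values from the paper.

```python
# Minimal sketch of the pairwise leader-follower decision rules. Rewards for a
# vehicle pair are dictionaries keyed by (gamma_l, gamma_f); all action names
# and numeric values below are illustrative, not taken from the paper.

def pair_q(R_l, R_f, actions_l, actions_f):
    """Q-values of the two-player game for one pair at one state.

    The follower plays maximin; the leader best-responds to the follower's
    maximin action (its "first mover advantage").
    """
    q_f = {gf: min(R_f[(gl, gf)] for gl in actions_l) for gf in actions_f}
    gf_star = max(actions_f, key=q_f.get)
    q_l = {gl: R_l[(gl, gf_star)] for gl in actions_l}
    return q_l, q_f

def decide(i, others, actions, q_pair):
    """Pairwise aggregation: maximize the minimum pairwise Q-value.

    q_pair(i, j, gamma) must return Q_l(s_ij, gamma) when i leads j and
    Q_f(s_ij, gamma) otherwise (both are placeholders here).
    """
    return max(actions, key=lambda g: min(q_pair(i, j, g) for j in others))

# Two-action "who goes first" example for a single pair.
A = ['go', 'yield']
R_l = {('go', 'go'): -10, ('go', 'yield'): 5,
       ('yield', 'go'): 2, ('yield', 'yield'): 0}
R_f = {('go', 'go'): -10, ('go', 'yield'): 2,
       ('yield', 'go'): 5, ('yield', 'yield'): 0}
q_l, q_f = pair_q(R_l, R_f, A, A)
gl_star = max(A, key=q_l.get)  # the leader decides to go first
gf_star = max(A, key=q_f.get)  # the follower decides to yield
```

In the example, the follower's maximin evaluation makes ``yield'' its rational choice, and the leader, anticipating this, selects the more aggressive ``go'' action, which matches the qualitative behavior described in the text.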
The decision-making model \eqref{equ:leader_follower_n_1} can be interpreted as follows: If $i$ is the follower of $j$, the secured reward of action $\gamma_{i}$ is the least reward $i$ may get due to the uncertain action choice of $j$; if $i$ is the leader of $j$, $i$ is aware that the most aggressive action that $j$ can choose is subject to $j$'s maximin principle between their pairwise interactions, and thus $i$ predicts the reward of action $\gamma_{i}$ by assuming $j$ to apply its maximin action of their pair. On top of this, to account for its interactions with all other players, $i$ maximizes the minimum of its secured/predicted rewards over all pairwise interactions. We note that when $i$ is the leader of $j$, its reward prediction may be inaccurate as the action actually applied by $j$ is not only subject to $j$'s maximin principle between their pairwise interactions but also subject to $j$'s interactions with the other players. However, in the setting of vehicle interactions at uncontrolled intersections where the central question is ``who goes first,'' the above strategy for the leader, i.e., predicting action rewards by assuming the follower to apply the maximin action of their pair, is reasonable -- if the follower's maximin action of their pair is not to go first, the follower will likely not choose to go first when it maximizes the minimum of its secured/predicted rewards over all pairwise interactions. With this strategy, the pairwise leader is able to select comparatively more aggressive actions than the pairwise follower. As a result, if there is an overall leader, i.e., the leader in every pairwise leader-follower relationship related to itself, it can take comparatively more aggressive actions than all other players, e.g., to go first.
The effectiveness of decision-making model \eqref{equ:leader_follower_n_1} in resolving traffic conflicts at uncontrolled intersections, i.e., driving every vehicle safely through the intersection without causing collisions or deadlocks, is illustrated through multiple simulation case studies in Section~\ref{sec:simulations}. Furthermore, decision-making model \eqref{equ:leader_follower_n_1} decouples the $n$-player interactions into pairwise interactions, and thus significantly decreases the computational complexity in solving for solutions. It also agrees with intuition -- when driving in traffic, a driver may focus more on the interactions between each neighbouring driver and him/herself than on the interactions among the other drivers. We note also that our strategy based on leader-follower games is not equivalent to a rule-based strategy where ``who goes first'' is determined by specified rules or logic, e.g., a strategy where a follower always waits until a leader passes through the intersection before the follower itself enters the intersection. Our leader-follower game based strategy allows a follower to enter the intersection even when a leader is still in the intersection, for instance, in situations where the follower's action choices have minor conflicts with the leader's action choices. Whether or not the follower's action choices have conflicts with the leader's action choices and how these conflicts influence their interactive decision-making are represented and automatically handled by the decision-making process \eqref{equ:leader_follower_n_1}. \section{Parameterized intersection and vehicle path modeling} \label{sec:intersection_path} Simulation tools used for verification and validation of autonomous vehicles are supposed to cover a sufficiently rich set of traffic scenarios. 
For instance, intersections in real-world road networks can have different layouts (e.g., number of road arms) and geometries (e.g., angles between road arms and lane width). To model traffic scenarios at intersections of various layouts and geometries, in this section, we first describe an intersection model that parameterizes the intersection layouts and geometries and then present an approach to model the paths of vehicles at the intersections. Although there has been a rich literature on path planning for vehicles \cite{schwarting2018planning}, our vehicle path model is simple but sufficient for our purpose. Moreover, both the intersection and vehicle path models described in this section are designed in such a way that they are convenient for the application of our vehicle interaction model described in Section~\ref{sec:lf_game}. \subsection{Parameterized intersection modeling}\label{sec:intersection} We characterize the layout and geometry of an intersection using a set of parameters, i.e., \begin{equation}\label{equ:inter_para} \big(N, \{M_{\text{f}}^{(m)}\}_{m=1}^N, \{M_{\text{b}}^{(m)}\}_{m=1}^N, \{\phi^{(m)}\}_{m=1}^N, w_{\text{lane}} \big), \end{equation} where $N$ is the number of road arms of the intersection, $M_{\text{f}}^{(m)}\in\{0,1,2,\cdots\}$ and $M_{\text{b}}^{(m)}\in\{0,1,2,\cdots\}$ are, respectively, the numbers of forward and backward lanes\footnote{A ``forward'' lane (resp. a ``backward'' lane) is a lane for traffic ``entering the intersection'' (resp. ``moving away from the intersection'').} of the $m$th arm, $\phi^{(m)}$ is the counter-clockwise angle of the $m$th arm with respect to the $x$-axis, and $w_{\text{lane}}$ is the lane width\footnote{We assume that all of the lanes have the same width although in principle they do not have to.} (see Fig.~\ref{fig:intersection}(a)). 
We note that $M_{\text{f}}^{(m)} = 0$ (or $M_{\text{b}}^{(m)} = 0$) represents one-way road, and $M_{\text{f}}^{(m)}=0$ and $M_{\text{b}}^{(m)}=0$ should not happen at the same time. We assume that the road centerlines\footnote{The road centerlines are the lane markings that separate lanes of traffic moving in the opposite directions.} of all the road arms intersect at the same point, which is referred to as the intersection center with coordinates $(x_\text{o},y_\text{o}) = (0,0)$. In this paper, we consider three-way, four-way, and five-way intersections (see Fig.~\ref{fig:intersection}(b)), i.e., $N \in \{3,4,5\}$, as they are most common in real-world road networks. \begin{figure}[h!] \begin{center} \begin{picture}(195.0, 246.0) \put( 5, 90){\epsfig{file=Figures/parameterized_intersection.pdf,height=2.1in}} \put( 0, 0){\epsfig{file=Figures/intersection_types.pdf,height=0.93in}} \small \put( 152.5, 167){$x$-axis} \put( 93, 228){$y$-axis} \put( 102, 158){$\phi^{(1)}$} \put( 140, 158){$\phi^{(2)}$} \put( 100, 178){$\phi^{(3)}$} \put( 96, 194){$\phi^{(4)}$} \put( 126, 100){$(x(0),y(0))$} \put( 126, 116){$(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$} \put( 18, 210){$(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$} \put( 80, 130){$w_{\text{lane}}$} \put( 125, 102){\vector(-1,0){25}} \put( 125, 122){\vector(-1,1){25}} \put( 70, 206){\vector(0,-1){25}} \put( 175, 246){(a)} \put( 175, 70){(b)} \normalsize \end{picture} \end{center} \caption{Intersection geometry and topology. (a) A four-way intersection, where the orange dashed lines are the road centerlines, the black dashed lines are the lane markings that separate lanes of traffic moving in the same directions, the black solid lines are the road boundaries, and the shaded polygons are off-road regions. 
(b) The topologies of three-way, four-way, and five-way intersections, where the black sticks indicate road arms in their nominal directions and the shaded areas indicate the admissible directional variations for the road arms.} \label{fig:intersection} \end{figure} Given a set of parameters \eqref{equ:inter_para}, the lane markings and road boundaries of the $m$th arm can be expressed according to \begin{equation}\label{equ:lane_func} x\sin(\phi^{(m)}) - y\cos(\phi^{(m)}) + \frac{k w_{\text{lane}}}{2} = 0, \end{equation} where $k \in \{-2M_{\text{b}}^{(m)}, \cdots, 2M_{\text{f}}^{(m)}\}$. When $k = 2M_{\text{f}}^{(m)}$ (resp. $k = -2M_{\text{b}}^{(m)}$), \eqref{equ:lane_func} represents the right-hand-side road boundary when looking in the forward direction (resp. in the backward direction); when $k \in 2\{M_{\text{f}}^{(m)}-1, \cdots, 1\}$ (resp. $k \in 2\{-M_{\text{b}}^{(m)}+1, \cdots, -1\}$), \eqref{equ:lane_func} represents a lane marking that separates two lanes of traffic moving in the forward direction (resp. in the backward direction); when $k = 0$, \eqref{equ:lane_func} represents the road centerline; and when $k \in 2\{M_{\text{f}}^{(m)}, \cdots, 1\}-1$ (resp. $k \in 2\{-M_{\text{b}}^{(m)}, \cdots, -1\}+1$), \eqref{equ:lane_func} represents the center of a forward lane (resp. backward lane). On the basis of \eqref{equ:lane_func}, we assign an ``entrance point'' to each forward lane, $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$, which indicates entering the intersection, as follows: We first locate the $N$ intersection corners\footnote{An intersection corner is the intersection point of two adjacent road boundaries.}. The line segment connecting each pair of adjacent corners is referred to as the ``entrance line'' of the corresponding road arm. The entrance point of each forward lane is determined as the intersection point of its center and the entrance line of the road arm it belongs to. 
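To make the lane-line family \eqref{equ:lane_func} concrete, the sketch below evaluates the line residual for a given index $k$ and enumerates the $k$-indices of road boundaries, centerline, and lane centers for an arm with $M_{\text{f}}$ forward and $M_{\text{b}}$ backward lanes. It is an illustrative helper, not code from the paper.

```python
import math

# Sketch of the lane-line family: for arm angle phi, each index k gives the
# line x*sin(phi) - y*cos(phi) + k*w_lane/2 = 0. lane_indices() lists which
# k-values correspond to boundaries, the centerline, and lane centers.

def line_offset(x, y, phi, w_lane, k):
    """Residual of the lane-line equation; zero iff (x, y) lies on line k."""
    return x * math.sin(phi) - y * math.cos(phi) + k * w_lane / 2.0

def lane_indices(M_f, M_b):
    boundaries = [2 * M_f, -2 * M_b]                       # road boundaries
    centerline = [0]                                       # road centerline
    fwd_centers = [2 * m - 1 for m in range(1, M_f + 1)]   # odd positive k
    bwd_centers = [-(2 * m - 1) for m in range(1, M_b + 1)]  # odd negative k
    return boundaries, centerline, fwd_centers, bwd_centers
```

For example, an arm along the $x$-axis ($\phi^{(m)} = 0$) with lane width $4$ has its first forward lane center ($k = 1$) on the line $y = 2$.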
On the other hand, we also assign ``exit points'' indicating exiting the intersection, $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$, which are used in determining the leader-follower relationships between vehicles (see Section~\ref{sec:leader_follower}). In particular, the determination of an exit point is coupled with our model for vehicle path, which is described in the next section. We note that the parameterized intersection model described above corresponds to right-hand traffic \cite{kincaid1986rule}. Modeling intersections in the context of left-hand traffic requires corresponding, straightforward modifications. \subsection{Vehicle path modeling}\label{sec:path} We assume that vehicles can plan their paths according to their origin lanes and target lanes\footnote{The origin lane and the target lane of a vehicle are, respectively, the lane where it is driving before entering the intersection and the lane that it is going to after exiting the intersection.} before entering the intersection and follow these pre-planned paths to pass through the intersection. Such an assumption is often adopted in the literature \cite{chen2016cooperative}. When there are conflicts between vehicles, they can adjust their speeds along the paths according to their interactions with each other. In this section, we describe our vehicle path model. First, we specify the origin lane and target lane of each vehicle to be modeled. When specifying the origin lane and target lane, some constraints representing common traffic rules can be enforced, including: 1) given an origin road arm, if the target road arm corresponds to a left turn, then the origin lane (resp. the target lane) must be the leftmost forward lane of the origin road arm (resp. the leftmost backward lane of the target road arm); 2) given an origin road arm, if the target road arm corresponds to a right turn, then the origin lane (resp.
the target lane) must be the rightmost forward lane of the origin road arm (resp. the rightmost backward lane of the target road arm); and 3) when going straight, if the origin lane is the $\eta$th forward lane from the left of the origin road arm, then the target lane must be the $\eta'$th backward lane from the left of the target road arm $m$, where $\eta' = \min(\eta,M_{\text{b}}^{(m)})$. In particular, given the intersection layout and geometry, we determine whether a vehicle is ``making a left turn,'' ``going straight,'' or ``making a right turn'' based on the angle between its origin road arm and target road arm. When the clockwise angle from its origin road arm to its target road arm is in the interval $(0,\frac{3\pi}{4}]$, the vehicle is ``making a left turn''; when the angle is in the interval $(\frac{3\pi}{4},\frac{5\pi}{4})$, it is ``going straight''; otherwise, it is ``making a right turn.'' We also note that U-turns are not considered in this paper, i.e., the origin lane and the target lane of a vehicle must belong to two different road arms. Once the origin lane and target lane of a vehicle are specified, we assign the vehicle an ``initial point,'' $(x^{\text{ini}},y^{\text{ini}})$, located in the center of its origin lane and a ``terminal point,'' $(x^{\text{term}},y^{\text{term}})$, located in the center of its target lane. After that, we model the vehicle's path, $\mathcal{P}$, as a curve composed of three segments. The first segment is a line segment with the two end points being $(x^{\text{ini}},y^{\text{ini}})$ and the intersection entrance point $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ of the origin lane (see Section~\ref{sec:intersection}). Similarly, the third segment is a line segment with $(x^{\text{term}},y^{\text{term}})$ as one of its end points and extending in the direction of the target lane.
The second segment is an arc that connects the first segment and the third segment, tangential to the first segment at $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ and tangential to the third segment, where the point of tangency is defined as the intersection exit point, $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$, associated with path $\mathcal{P}$\footnote{So $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$ is the other end point of the third segment.}. Thus, $\mathcal{P}$ is a smooth curve. For any point on the curve, $(x,y) \in \mathcal{P}$, we define $\rho$ as the length of the curve piece from $(x^{\text{ini}},y^{\text{ini}})$ to $(x,y)$. Note that the curve $\mathcal{P}$ can be expressed as an injective function of $\rho$ since $\mathcal{P}$ does not intersect itself, i.e., any point on the curve can be determined by a unique $\rho$. In particular, we have $(x(0),y(0)) = (x^{\text{ini}},y^{\text{ini}})$, i.e., the initial point is the location of the vehicle when its traveled distance along the path is zero. Also, we let $\rho^{\text{en}}$ (resp. $\rho^{\text{ex}}$) denote the value of $\rho$ corresponding to the point on $\mathcal{P}$ with coordinates $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ (resp. $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$)\footnote{So it is reasonable to name the intersection entrance (resp. exit) point as $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ (resp. $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$) in the first place.}, i.e., the vehicle enters (resp. exits) the intersection when its traveled distance along the path is $\rho^{\text{en}}$ (resp. $\rho^{\text{ex}}$). To facilitate reproducing the results of this paper, the functions used to generate $\mathcal{P}$ given the coordinates of $(x^{\text{ini}},y^{\text{ini}})$, $(x^{\text{term}},y^{\text{term}})$, and $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ are explicitly provided in Appendix~A. 
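The turn classification used when specifying origin and target lanes can be sketched as follows. It is a minimal implementation of the stated angle intervals, where the arm angles $\phi^{(m)}$ are measured counter-clockwise from the $x$-axis as in Section~\ref{sec:intersection}; nothing beyond those intervals is taken from the paper.

```python
import math

# Sketch of the turn-type rule: the maneuver is decided by the clockwise
# angle from the origin road arm to the target road arm.

def classify_turn(phi_origin, phi_target):
    # Clockwise angle from origin arm to target arm, normalized to [0, 2*pi).
    cw = (phi_origin - phi_target) % (2.0 * math.pi)
    if 0.0 < cw <= 0.75 * math.pi:
        return 'left'
    if 0.75 * math.pi < cw < 1.25 * math.pi:
        return 'straight'
    return 'right'
```

For a four-way intersection with arms at $0$, $\frac{\pi}{2}$, $\pi$, and $\frac{3\pi}{2}$, a vehicle entering from the arm at $\frac{3\pi}{2}$ and exiting on the arm at $\pi$ is classified as making a left turn, consistent with the intervals above.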
Based on the above descriptions, the path of a vehicle can be determined given its origin lane and target lane and represented as \begin{align}\label{equ:center_coordinate} \mathcal{P}:&\,\,\, \mathbf{R} \to \mathbf{R}^2, \quad\quad \rho \mapsto \begin{bmatrix} x(\rho) \\ y(\rho) \end{bmatrix}, \end{align} where $\rho$ represents the distance the vehicle has traveled along the path. In particular, we define the signed distance of the vehicle to the entrance (resp. exit) of the intersection as $\Delta \rho^{\text{en}} = \rho^{\text{en}} - \rho$ (resp. $\Delta \rho^{\text{ex}} = \rho^{\text{ex}} - \rho$) so that $\Delta \rho^{\text{en}}(t) < 0$ (resp. $\Delta \rho^{\text{ex}}(t) < 0$) represents that the vehicle has entered (resp. exited) the intersection. \section{Vehicle kinematics and rewards at uncontrolled intersections}\label{sec:kinematics_rewards} In this section, we introduce the model to represent vehicles' dynamic behavior and the reward function to represent drivers' objectives at uncontrolled intersections. \subsection{Vehicle kinematics}\label{sec:kinematics} We represent a vehicle using a rectangle bounding the vehicle's geometric contour projected onto the ground. This rectangle is referred to as the ``collision zone'' ($c$-zone) as an overlap of two vehicles' $c$-zones indicates a danger of collision. To fully characterize the $c$-zone of a vehicle, we need a $5$-tuple, $(x,y,\theta,l_c,w_c)$, where $(x,y)$ are the coordinates of its geometric center, $\theta$ is the vehicle's heading angle (the counter-clockwise angle of the vehicle's heading direction with respect to the $x$-axis), and $l_c$ (resp. $w_c$) is the length (resp. width) of the rectangle. 
On the basis of the vehicle path model \eqref{equ:center_coordinate} and the assumption that the vehicle can follow its pre-planned path $\mathcal{P}$ perfectly, $(x,y)$ can be written as functions of $\rho$, i.e., $(x(\rho),y(\rho))$, and the vehicle's heading angle $\theta(\rho)$ can be computed using the path geometry as follows: \begin{equation}\label{equ:heading_angle_1} \theta(\rho) = \lim_{h \to 0^+} \text{arctan2} \Big(y(\rho + h) -y(\rho),x(\rho + h) - x(\rho)\Big), \end{equation} which, using the fact that $\mathcal{P}$ is smooth, can be written as \begin{equation}\label{equ:heading_angle_2} \theta(\rho) = \text{arctan2} \Big(\frac{\text{d}y}{\text{d}\rho},\frac{\text{d}x}{\text{d}\rho}\Big). \end{equation} Based on \eqref{equ:center_coordinate}-\eqref{equ:heading_angle_2} and the assumption above, the dynamic behavior of a vehicle can be fully characterized by the dynamics of $\rho$ as follows: \begin{align}\label{equ:kinematics} \rho(t+1) &= \rho(t) + v(t)\, \Delta t, \nonumber \\ v(t+1) &= v(t) + a(t)\, \Delta t, \end{align} where $t$ denotes the discrete time instant, $v(t) \in [v_{\min},v_{\max}]$ and $a(t)$ denote, respectively, the vehicle's speed and acceleration at $t$, and $\Delta t$ is the sampling period. We collect all relevant variables and define the state of a vehicle as an $8$-tuple, i.e., \begin{align}\label{equ:state} s(t) =&\, \big(\mathcal{P},\rho(t),v(t),x(\rho(t)),y(\rho(t)),\theta(\rho(t)), \nonumber \\ &\,\,\,\, \Delta \rho^{\text{en}}(t),\Delta \rho^{\text{ex}}(t)\big). \end{align} The vehicle kinematics at a typical two-lane four-way intersection are illustrated in Fig.~\ref{fig:vehicle_kinematics}. The blue rectangle represents the vehicle's $c$-zone where the end with double lines is the vehicle's front end. The blue dotted curve represents the pre-planned path $\mathcal{P}$. The states $x(t)$, $y(t)$ and $\theta(t)$ can be computed using the traveled distance along the path $\rho(t)$ and the path geometry. 
The green triangles represent the intersection entrance points $(x(\rho^{\text{en}}), y(\rho^{\text{en}}))$ and the red triangles represent the intersection exit points $(x(\rho^{\text{ex}}), y(\rho^{\text{ex}}))$. \begin{figure}[h!] \begin{center} \begin{picture}(188.0, 163.0) \put( 0, -10){\epsfig{file=Figures/Kinematics.pdf,height=2.4in}} \put( 175, 0){$x$-axis} \put( 6, 157){$y$-axis} \small \put( 130, 63){$x$-axis} \put( 72, 97.5){$y$-axis} \put( 134, 97.5){$v(t)$} \put( 112, 74.5){$\theta(t)$} \put( 74, 59){$(x(t),y(t))$} \put( 100.5, 136){$\mathcal{P}(\rho)$} \normalsize \end{picture} \end{center} \caption{Vehicle kinematics model.} \label{fig:vehicle_kinematics} \end{figure} To adjust speed along the path, we assume that a vehicle has a finite number of acceleration levels to choose from at each time step, i.e., \begin{equation}\label{equ:accelerations} a(t) \in A = \big\{a^1, \cdots, a^{\mathcal{M}}\big\}, \quad \forall\, t. \end{equation} On the basis of the model representing vehicles' dynamic behavior at uncontrolled intersections described above, we now present the leader-follower role assignment logic in Section~\ref{sec:leader_follower} formally as Algorithm~\ref{alg:Role}. 
\begin{algorithm} \caption{Leader-follower role assignment} \label{alg:Role} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{an ordered pair of vehicles $(i,j)$ and their states $\big(s_i(t),s_j(t)\big)$ } \Output{whether $i$ is the leader of $j$ } \lIf{($\Delta \rho_i^{\text{en}}(t) \le 0$ and $\Delta \rho_j^{\text{en}}(t) \le 0$) and $\Delta \rho_i^{\text{ex}}(t) < \Delta \rho_j^{\text{ex}}(t) - \delta$}{$i \prec j$} \lElseIf{($\Delta \rho_i^{\text{en}}(t) > 0$ or $\Delta \rho_j^{\text{en}}(t) > 0$) and $\Delta \rho_i^{\text{en}}(t) < \Delta \rho_j^{\text{en}}(t) - \delta$}{$i \prec j$} \lElseIf{$i$ and $j$ are coming from adjacent ways and $i$'s way is on the right of $j$'s way}{$i \prec j$} \lElseIf{$i$ is going straight and $j$ is making a turn}{$i \prec j$} \lElse{$i \succeq j$} \end{algorithm} In Algorithm~\ref{alg:Role}, $\delta \ge 0$ is a threshold for differentiating the distances, accounting for the fact that drivers can only estimate the distances with limited accuracy. In particular, we assume that drivers cannot recognize which distance is smaller when $|\Delta \rho_i^{\text{en}}(t) - \Delta \rho_j^{\text{en}}(t)| \le \delta$ (resp. $|\Delta \rho_i^{\text{ex}}(t) - \Delta \rho_j^{\text{ex}}(t)| \le \delta$). On the basis of Algorithm~\ref{alg:Role}, at most one of the outcomes $i \prec j$ or $j \prec i$ can take place. It may happen that $i \succeq j$ and $j \succeq i$. In such a case, both vehicles will view themselves as followers and thus make conservative decisions. In line~4, ``going straight'' and ``making a turn'' need to be differentiated, which has been described for arbitrary intersection layouts and geometries in Section~\ref{sec:path}.
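The scalar kinematics \eqref{equ:kinematics}, the signed distances they induce, and the four rules of Algorithm~\ref{alg:Role} can be sketched as follows. The dictionary keys and the precomputed geometric flags (adjacency, right-of, straight/turning) are our own illustrative encoding, not notation from the paper.

```python
# Sketch of the scalar kinematics and of Algorithm 1. A vehicle is a dict
# holding the signed distances d_en = Delta rho^en and d_ex = Delta rho^ex,
# plus precomputed geometric flags; all key names are illustrative.

def step(rho, v, a, dt, v_min=0.0, v_max=10.0):
    """Advance (rho, v) one sampling period, clamping speed to its bounds."""
    return rho + v * dt, min(max(v + a * dt, v_min), v_max)

def signed_distances(rho, rho_en, rho_ex):
    """Negative values mean the vehicle has entered / exited the intersection."""
    return rho_en - rho, rho_ex - rho

def leads(i, j, delta=0.5):
    """True iff i < j (i is the leader of j) under Algorithm 1's four rules."""
    both_inside = i['d_en'] <= 0.0 and j['d_en'] <= 0.0
    if both_inside and i['d_ex'] < j['d_ex'] - delta:
        return True                      # line 1: i clearly exits first
    if not both_inside and i['d_en'] < j['d_en'] - delta:
        return True                      # line 2: i clearly arrives first
    if i.get('adjacent_to_j') and i.get('on_right_of_j'):
        return True                      # line 3: i's way is on j's right
    if i.get('straight') and j.get('turning'):
        return True                      # line 4: straight before turning
    return False                         # otherwise i >= j: i acts as follower
```

Note that `leads(i, j)` and `leads(j, i)` can both return `False`, in which case both vehicles treat themselves as followers, as discussed above.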
\subsection{Reward function}\label{sec:rewards} Basic goals of a driver at an intersection include: 1) to maintain safety, e.g., to not have a collision with another vehicle, 2) to keep a reasonable distance from other vehicles to improve safety and comfort, and 3) to pass through the intersection and get to his/her target under traffic rules and in a timely manner. We assume that common traffic rules, such as that a left turn can only be made when the vehicle is entering the intersection from a left-turn lane (usually the leftmost forward lane), and speed limits, have been incorporated in path planning (see Section~\ref{sec:path}) and in speed bounds $v(t) \in [v_{\min},v_{\max}]$. Then, the other goals can be represented using a reward function as follows: \begin{equation}\label{equ:reward} \mathbb{R}(t) = \sum_{\tau = 1}^{\mathcal{N}} \lambda^{\tau-1} R(\tau|t), \end{equation} where $R(\tau|t)$ is a predicted stage reward at time instant $t+\tau$ with the prediction made at the current time instant $t$, $\mathcal{N}$ is the prediction horizon, and $\lambda \in [0,1]$ is a factor discounting future rewards. The stage reward is defined as a linear combination of three terms, each of which represents a goal introduced above, i.e., \begin{equation}\label{equ:stage_reward} R(\tau|t) = w_1 \hat{c}(\tau|t) + w_2 \hat{s}(\tau|t) + w_3 \hat{v}(\tau|t), \end{equation} where $w_i > 0$, $i \in \{1,2,3\}$, are weighting factors, and the terms $\hat{c}(\tau|t)$, $\hat{s}(\tau|t)$, and $\hat{v}(\tau|t)$ are further explained below. On the basis of our decision-making model \eqref{equ:leader_follower_n_1}, the ego vehicle making decisions considers its interactions with each of the other vehicles separately. Let vehicle $i$ denote the ego vehicle and vehicle $j$ denote the vehicle in the pairwise interaction with vehicle $i$. 
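The cumulative reward \eqref{equ:reward} with stage rewards \eqref{equ:stage_reward} can be sketched as follows; the weights and stage values in the example are illustrative placeholders, and the stage terms $\hat{c}$, $\hat{s}$, $\hat{v}$ are assumed to be supplied externally.

```python
# Sketch of the discounted cumulative reward: a discounted sum of stage
# rewards, each a weighted combination of the collision (c_hat), separation
# (s_hat), and velocity (v_hat) terms. Weights here are illustrative.

def cumulative_reward(stages, lam=0.9, w=(10.0, 1.0, 0.1)):
    """stages: list of (c_hat, s_hat, v_hat) tuples for tau = 1..N."""
    w1, w2, w3 = w
    return sum(
        lam ** (tau - 1) * (w1 * c + w2 * s + w3 * v)
        for tau, (c, s, v) in enumerate(stages, start=1)
    )
```

A larger collision weight relative to the velocity weight (as in the default above) reflects the natural priority of safety over travel speed, although the actual tuning is a design choice.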
\noindent $\boldsymbol{\cdot}$ Collision avoidance, $\hat{c}\,$: \begin{equation*} \hat{c}(\tau|t) = - \big(1 + S_c(\tau|t) + \hat{w} |v_i(\tau|t) v_j(\tau|t)| \big)\, \mathbb{I}\big(S_c(\tau|t)>0\big), \end{equation*} where $S_c(\tau|t) \ge 0$ is the predicted area of the intersection of vehicle $i$ and $j$'s $c$-zones, $v_i(\tau|t)$ and $v_j(\tau|t)$ are, respectively, vehicle $i$ and $j$'s predicted speeds, $\hat{w}>0$ is a tunable parameter, and $\mathbb{I}(\cdot)$ is an indicator function taking $1$ if $(\cdot)$ holds and $0$ otherwise. The $c$-zone of a vehicle is defined at the beginning of Section~\ref{sec:kinematics}, which is a rectangle characterized by the 5-tuple $(x,y,\theta,l_c,w_c)$. Thus, $S_c(\tau|t)$ is determined by the predicted states of vehicle $i$ and $j$, i.e., $S_c(\tau|t) = S_c\big(s_i(\tau|t),s_j(\tau|t)\big)$. The term $\hat{c}$ is designed in the above form so that if $S_c(\tau|t) = 0$, i.e., there is no danger of collision, then $\hat{c}(\tau|t) = 0$; if $S_c(\tau|t) > 0$, i.e., there is a danger of collision, then the penalty depends on the $c$-zone intersection area $S_c$ and the vehicle speeds $v_i$ and $v_j$. In particular, larger penalties are imposed for larger $S_c$ values and for larger absolute values of $v_i$ and $v_j$ as they imply more severe collisions; the parameter $\hat{w}>0$ adjusts the relative contribution of intersection area versus speeds; and the addition of $1$ ensures a minimum penalty for collisions. \noindent $\boldsymbol{\cdot}$ Separation, $\hat{s}$: \begin{equation*} \hat{s}(\tau|t) = - \big(1 + S_s(\tau|t) + \hat{w} |v_i(\tau|t) v_j(\tau|t)| \big)\, \mathbb{I}\big(S_s(\tau|t)>0\big), \end{equation*} where $S_s(\tau|t) \ge 0$ is the predicted area of the intersection of vehicle $i$ and $j$'s ``separation zones'' ($s$-zones). 
The $s$-zone of a vehicle is defined as a rectangle that shares the same longitudinal line of symmetry with the vehicle's $c$-zone and over-bounds the $c$-zone with a safety margin (see Fig.~\ref{fig:vehicle_zones}). It can be fully characterized by a 6-tuple $(x,y,\theta,l_{s,\text{f}},l_{s,\text{r}},w_s)$, where $l_{s,\text{f}},l_{s,\text{r}} \ge l_c/2$ and $w_s \ge w_c$. In particular, when vehicle $i$ is the leader (resp. the follower) in its pairwise interaction with vehicle $j$, vehicle $i$ assumes that both vehicles have their $s$-zones of the same size, denoted by $(l_{s,\text{f}}^l,l_{s,\text{r}}^l,w_s^l)$ (resp. $(l_{s,\text{f}}^f,l_{s,\text{r}}^f,w_s^f)$). We let $l_{s,\text{f}}^l \le l_{s,\text{f}}^f$, $l_{s,\text{r}}^l \le l_{s,\text{r}}^f$, and $w_s^l \le w_s^f$ to further encourage the leader to choose comparatively more aggressive actions than the follower. \begin{figure}[h!] \begin{center} \begin{picture}(186.0, 105.0) \put( 0, -5){\epsfig{file=Figures/Zones.pdf,height=1.6in}} \small \put( 150, 34){$x$-axis} \put( 68, 96){$y$-axis} \put( 164, 76){$v(t)$} \put( 122, 52){$\theta(t)$} \put( 36, 35){$(x(t),y(t))$} \put( 86, 16){$l_c$} \put( 13.5, 17.5){$w_c$} \put( 94, 88.5){$l_{s,\text{f}}$} \put( 36, 64){$l_{s,\text{r}}$} \put( 2, 12){$w_s$} \normalsize \end{picture} \end{center} \caption{The $c$-zone (dark blue rectangle) and $s$-zone (light blue rectangle) of a vehicle.} \label{fig:vehicle_zones} \end{figure} \noindent $\boldsymbol{\cdot}$ Velocity, $\hat{v}$: As the vehicle is assumed to follow its pre-planned path, its status of approaching its target can be characterized by its speed along the path. In particular, we set the term $\hat{v}(\tau|t) = v_i(\tau|t)$. Note that the predicted stage reward \eqref{equ:stage_reward} depends on the predicted states of the two vehicles $\big(s_i(\tau|t),s_j(\tau|t)\big)$. 
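A minimal sketch of the penalty structure shared by $\hat{c}$ and $\hat{s}$, together with a separating-axis overlap test for oriented rectangular zones $(x,y,\theta,l,w)$, is given below. Only a boolean overlap test is shown; computing the exact overlap areas $S_c$ and $S_s$ would additionally require polygon clipping, which we omit. All numeric values are illustrative.

```python
import math

# penalty() is the shared form of c_hat and s_hat: zero without overlap,
# otherwise growing with the overlap area S and with both speeds.
# rects_overlap() is a separating-axis test for two oriented rectangles
# (x, y, theta, length, width); the overlap area itself is not computed.

def penalty(S, v_i, v_j, w_hat=0.1):
    if S <= 0.0:
        return 0.0
    return -(1.0 + S + w_hat * abs(v_i * v_j))

def corners(x, y, th, l, w):
    c, s = math.cos(th), math.sin(th)
    return [(x + c * dx - s * dy, y + s * dx + c * dy)
            for dx, dy in ((l / 2, w / 2), (l / 2, -w / 2),
                           (-l / 2, -w / 2), (-l / 2, w / 2))]

def rects_overlap(r1, r2):
    for th in (r1[2], r2[2]):  # candidate separating axes of both rectangles
        for ax in ((math.cos(th), math.sin(th)), (-math.sin(th), math.cos(th))):
            p1 = [ax[0] * cx + ax[1] * cy for cx, cy in corners(*r1)]
            p2 = [ax[0] * cx + ax[1] * cy for cx, cy in corners(*r2)]
            if max(p1) < min(p2) or max(p2) < min(p1):
                return False  # projections are disjoint on this axis
    return True
```

The same test applies to $c$-zones and $s$-zones alike, since an $s$-zone is simply a larger rectangle sharing the $c$-zone's orientation.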
On the basis of the vehicle kinematics model \eqref{equ:center_coordinate}-\eqref{equ:state}, $s_\xi(\tau|t)$, $\xi \in \{i,j\}$, is uniquely determined by the current state of vehicle $\xi$, $s_\xi(t)$, and the predicted acceleration sequence of vehicle $\xi$, $\{a_\xi(k|t)\}_{k=0}^{\tau-1}$, i.e., $s_\xi(\tau|t) = s_\xi \big(s_\xi(t),\{a_\xi(k|t)\}_{k=0}^{\tau-1}\big)$. We define an action choice of vehicle $\xi$, $\xi \in \{i,j\}$, at the current time instant $t$ as $\gamma_\xi(t) = \{a_\xi(\tau|t)\}_{\tau=0}^{\mathcal{N}-1}$, taking values in the action set $\Gamma = A^\mathcal{N}$. Therefore, the cumulative reward \eqref{equ:reward} is a function of ${\bf s}_{i,j}(t)$, $\gamma_i(t)$, and $\gamma_j(t)$, consistent with the expressions in our decision-making model \eqref{equ:leader_follower_n_1}. The closed-loop operation of each vehicle is based on receding-horizon optimization, i.e., once an acceleration sequence $\gamma_i(t) = \{a_i(\tau|t)\}_{\tau=0}^{\mathcal{N}-1}$ is determined, the ego vehicle $i$ applies the first acceleration value $a_i(0|t)$ for one time step, i.e., $a_i(t) = a_i(0|t)$, and then repeats the decision-making process \eqref{equ:leader_follower_n_1} at the next time instant. \section{Additional modeling considerations}\label{sec:additions} To model the interactive behavior of human-driven vehicles with higher fidelity, we incorporate several additional considerations in our model. They are discussed in this section. \subsection{Courteous driving} A vehicle is not supposed to interrupt other vehicles' nominal driving. More specifically, a vehicle is not supposed to intentionally choose an action that would cause a collision when other vehicles maintain their speeds.
We account for this by adjusting the action set for each vehicle, i.e., modify the decision-making model \eqref{equ:leader_follower_n_1} according to \begin{equation}\label{equ:leader_follower_n_2} \gamma_{i}^*(t) \in \argmax_{\gamma_i \in \Gamma_i(t)}\, \underline{\mathbb{Q}}_{i}({\bf s}_{\text{traffic}}(t),\gamma_{i}), \end{equation} where $\Gamma_i(t) \subset \Gamma = A^\mathcal{N}$ is defined as \begin{align}\label{equ:courteous} \Gamma_i(t) &:= A_i(t) \times A^{\mathcal{N}-1}, \nonumber \\ A_i(t) &:= \Big\{a_i(t) \in A \,\big|\, a_i(t) \text{ satisfies either } \forall j \neq i, a_j(t) = 0 \nonumber \\ &\, \implies S_c\big({\bf s}_{i,j}(t+1)\big) = S_c\big({\bf s}_{i,j}(t),a_i(t),a_j(t)\big) = 0, \nonumber \\ &\,\,\, \text{ or } a_i(t) = \min \{a \in A\} \Big\}. \end{align} \subsection{Limited perception ranges}\label{sec:ranges} Human drivers have limited ranges of visual perception. To account for this, we assume that a driver only considers his/her interactions with the other vehicles that are in a certain vicinity of his/her own. In particular, we further modify the decision-making model \eqref{equ:leader_follower_n_1} according to \begin{equation}\label{equ:leader_follower_n_3} \underline{\mathbb{Q}}_{i}({\bf s}_{\text{traffic}}(t),\gamma_{i}) = \min_{j \in \Omega_i(t)} \mathbb{Q}_{i,j}({\bf s}_{i,j}(t),\gamma_{i}), \end{equation} where $\Omega_i(t) \subset \{1,\cdots,n\}$ is defined as \begin{align} \Omega_i(t) &:= \Big\{j \in \{1,\cdots,n\} \,\big|\, j \neq i \text{ and} \nonumber \\ &\, \sqrt{\big(x_j(t)-x_i(t)\big)^2+\big(y_j(t)-y_i(t)\big)^2} \le \omega_i \Big\}, \end{align} with $\omega_i > 0$ representing vehicle $i$'s maximum perception distance. 
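The two modifications above, i.e., the courteous action-set restriction \eqref{equ:courteous} and the perception-range filter in \eqref{equ:leader_follower_n_3}, can be sketched as follows; the collision predicate is a placeholder for the $S_c$ check, and all names are illustrative.

```python
import math

# Sketch of the courteous action-set restriction and the perception-range
# filter. `no_collision_next(a)` is a placeholder predicate standing in for
# the check S_c(s_ij(t+1)) = 0 when all other vehicles hold their speeds.

def courteous_actions(A, no_collision_next):
    """Keep accelerations that are collision-free (others coasting);
    fall back to maximum braking if no such acceleration exists."""
    kept = [a for a in A if no_collision_next(a)]
    return kept if kept else [min(A)]

def perceived(i, positions, omega_i):
    """Indices of vehicles within vehicle i's perception radius omega_i."""
    xi, yi = positions[i]
    return [j for j in positions if j != i and
            math.hypot(positions[j][0] - xi, positions[j][1] - yi) <= omega_i]
```

In the closed loop, only the first-step acceleration is restricted, matching the structure $\Gamma_i(t) = A_i(t) \times A^{\mathcal{N}-1}$ in \eqref{equ:courteous}.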
We also note that such a modification further decreases the computational complexity of our model by reducing the number of interacting vehicle pairs $(i,j)$ in each vehicle's decision-making process \eqref{equ:leader_follower_n_1} and, as a result, further improves the scalability of our modeling framework. \subsection{Breakage of deadlocks via exploratory actions}\label{sec:randomness} As discussed at the end of Section~\ref{sec:leader_follower}, in some scenarios cyclic patterns, such as $i \prec j$, $j \prec k$, $k \prec l$, and $l \prec i$, may occur and lead to deadlocks -- no one decides to enter the intersection or everyone gets stuck in the middle of the intersection. Such cyclic patterns also exist in real-world traffic. However, human drivers can usually break a deadlock. When a deadlock occurs, we usually observe that one or more human drivers will probe the possibility of going first, and such probes can often help the drivers reach an agreement on their order of passing through the intersection. On the basis of such an observation, we propose a strategy to break deadlocks via random exploratory actions, which is presented as Algorithm~\ref{alg:Break_D}.
\begin{algorithm} \caption{Breakage of deadlocks via exploratory actions} \label{alg:Break_D} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{the states of all vehicles ${\bf s}_{\text{traffic}}(t) = \big(s_1(t),\cdots,s_n(t)\big)$ and the acceleration choices of all vehicles $\big(a_1(t),\cdots,a_n(t)\big)$ obtained based on \eqref{equ:leader_follower_n_1}} \Output{the modified acceleration choices of all vehicles $\big(a_1(t),\cdots,a_n(t)\big)$} $\Omega_{\text{conflict}} = \text{Null}$; \\ \For{$i = 1,\cdots,n$}{ \lIf{$i$ is the first vehicle coming from its origin lane that has not exited the intersection ($\Delta \rho_i^{\text{ex}}(t) > 0$)}{add $i$ to $\Omega_{\text{conflict}}$}} \If{$v_i(t) = 0$ and $a_i(t) = 0,\, \forall i \in \Omega_{\rm{conflict}}$}{ \For{$i \in \Omega_{\rm{conflict}}$}{\If{$\{a \in A_i(t) \,|\, a > 0\} \neq \emptyset$}{reset $a_i(t)$ based on $ a_i(t) = \begin{cases} \min \{a \in A_i(t) \,|\, a > 0\}, & \text{with prob.} = p_i, \\ 0, &\!\!\!\!\!\!\!\!\!\! \text{with prob.} = 1-p_i. \end{cases} $ \\ }}} \end{algorithm} In Algorithm~\ref{alg:Break_D}, lines~2-4 are used to identify the vehicles that are in conflict. For example, vehicle $i$ that has exited the intersection is not a vehicle in conflict. As another example, if there is a vehicle $j$ that is entering/has entered the intersection from the same lane as $i$, drives in front of $i$\footnote{There shall be no ambiguity in ``in front of'' here since $i$ and $j$ are entering/have entered the intersection from the same lane.}, and has not exited the intersection, then vehicle $i$ is not a vehicle in conflict. Line~5 is used to identify the occurrence of a deadlock, i.e., all of the vehicles in conflict have stopped and no one decides to move according to decisions of \eqref{equ:leader_follower_n_1}. Then, lines~6-10 assign the vehicles in conflict probabilities of making slight movements to probe the possibility of going first. 
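The probing step of Algorithm~\ref{alg:Break_D} (lines~6-10) can be sketched as follows. This is a simplified Python rendering with illustrative names, where `A` holds the admissible action sets $A_i(t)$ and `p` the probing probabilities $p_i$.

```python
import random

def break_deadlock(conflict_set, v, a, A, p):
    """If all vehicles in conflict are stopped with zero chosen
    acceleration, let each one probe a slight forward movement
    with its own probability p[i]; otherwise leave `a` unchanged.
    v, a: dicts of current speeds and chosen accelerations;
    A: dict of admissible action sets A_i(t); p: probe probabilities."""
    deadlock = all(v[i] == 0 and a[i] == 0 for i in conflict_set)
    if not deadlock:
        return a
    for i in conflict_set:
        positive = [ai for ai in A[i] if ai > 0]
        if positive and random.random() < p[i]:
            a[i] = min(positive)  # smallest positive acceleration
    return a
```

Because only the smallest admissible positive acceleration is applied, a probe is a slight movement rather than a full commitment to entering the intersection.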
The effectiveness of Algorithm~\ref{alg:Break_D} in terms of breaking deadlocks is illustrated through the simulation case studies in Section~\ref{sec:simulations}. \section{An alternative vehicle interactive decision-making model}\label{sec:levelK} Human drivers can usually resolve traffic conflicts even when they are interacting with other drivers whose driving styles are a priori unknown. A model representing driver decision-making should therefore exhibit reasonable robustness against uncertainties in the behavior of interacting drivers. To illustrate the robustness of decision-making using the proposed model \eqref{equ:leader_follower_n_1} based on pairwise leader-follower games, we also consider another interactive decision-making model, which is based on level-$\mathcal{K}$ game theory \cite{nagel1995unraveling,stahl1995players} and has been developed in \cite{li2018game}. We simulate the interactions between model \eqref{equ:leader_follower_n_1} and this alternative model in the simulation case studies in Section~\ref{sec:simulations} and compare them. The level-$\mathcal{K}$ model is premised on the idea that each strategic agent in a non-cooperative multi-agent setting (each vehicle at an uncontrolled intersection, in our setting) makes decisions through a finite number of reasoning steps (called ``levels''), and different agents may have different reasoning levels. The reasoning process starts from level-$0$. Level-$0$ typically represents an agent's decisions of minimal rationality, e.g., instinctive decisions, pursuing its goals without strategically accounting for its interactions with the other agents. In contrast, level-$\mathcal{K}$, $\mathcal{K} \ge 1$, represents an agent's decisions optimally responding to the level-($\mathcal{K}-1$) decisions of the other agents.
In particular, once the level-$0$ decisions, either as time series or as a function of a vehicle's own state and its interacting vehicles' states (denoted respectively by $s_i$ and $\mathbf{s}_{-i} = (s_j)_{j \neq i}$ for vehicle $i$), are formulated, the corresponding level-$\mathcal{K}$, $\mathcal{K} \ge 1$, decisions can be computed through solving the optimization problems \begin{equation}\label{equ:levelK} \gamma_i^{\mathcal{K}} \in \argmax_{\gamma_i \in \Gamma_i}\, \mathbb{R}_i(s_i,\mathbf{s}_{-i},\gamma_i,{\bm \gamma}_{-i}^{\mathcal{K}-1}) \end{equation} sequentially for $\mathcal{K} = 1,2,\cdots$, where ${\bm \gamma}_{-i}^{\mathcal{K}-1} = (\gamma_j^{\mathcal{K}-1})_{j \neq i}$ denotes the level-($\mathcal{K}-1$) decisions of the vehicles interacting with vehicle $i$, and $\gamma_j^0 = \gamma_j^0(s_j,\mathbf{s}_{-j})$ is the formulated level-$0$ decision of vehicle $j$. To make this level-$\mathcal{K}$ reasoning and decision-making process more explicit, we present it for the case of $2$-agent interactions formally as Algorithm~\ref{alg:levelK_2agents}. Algorithm~\ref{alg:levelK_2agents} can be generalized to the case of $n$-agent interactions straightforwardly. 
\begin{algorithm} \caption{Level-$\mathcal{K}$ decision-making process in $2$-agent interactions} \label{alg:levelK_2agents} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{the states of the ego agent $s_1$ and the other agent $s_2$, the level of the ego agent $\mathcal{K}$, a level-$0$ decision rule} \Output{the level-$\mathcal{K}$ decision of the ego agent} $k = 1$; \\ \eIf{$\text{mod}(\mathcal{K},2)=0$}{\While{$k < \mathcal{K}$}{ $\gamma_2^{k} \in \argmax_{\gamma_2 \in \Gamma_2} \mathbb{R}_2(s_1,s_2,\gamma_1^{k-1},\gamma_2)$; \\ $\gamma_1^{k+1} \in \argmax_{\gamma_1 \in \Gamma_1} \mathbb{R}_1(s_1,s_2,\gamma_1,\gamma_2^{k})$; $k \leftarrow k+2$; }}{\While{$k < \mathcal{K}-1$}{ $\gamma_1^{k} \in \argmax_{\gamma_1 \in \Gamma_1} \mathbb{R}_1(s_1,s_2,\gamma_1,\gamma_2^{k-1})$; \\ $\gamma_2^{k+1} \in \argmax_{\gamma_2 \in \Gamma_2} \mathbb{R}_2(s_1,s_2,\gamma_1^{k},\gamma_2)$; $k \leftarrow k+2$; } $\gamma_1^{\mathcal{K}} \in \argmax_{\gamma_1 \in \Gamma_1} \mathbb{R}_1(s_1,s_2,\gamma_1,\gamma_2^{\mathcal{K}-1})$; } output $\gamma_1^{\mathcal{K}}$. \end{algorithm} In this paper, the state $s_i(t)$, the action $\gamma_i(t) = \{a_i(\tau|t)\}_{\tau=0}^{\mathcal{N}-1}$, the kinematics model relating the predicted states $s_i(\tau|t)$ to the action $\gamma_i(t)$, and the reward function $\mathbb{R}_i(t)$ of vehicle $i$ are defined in the same way as in the leader-follower game based decision-making model \eqref{equ:leader_follower_n_1} (see Section~\ref{sec:kinematics_rewards}). In particular, since there are no leader-follower relationships defined in the scheme of level-$\mathcal{K}$ models, when computing its rewards, a level-$\mathcal{K}$ vehicle assumes that the $s$-zones of all vehicles (including itself and all other vehicles in interaction) are of the same size, denoted by $(l_{s,\text{f}}^\mathcal{K},l_{s,\text{r}}^\mathcal{K},w_s^\mathcal{K})$. 
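Conceptually, Algorithm~\ref{alg:levelK_2agents} is an alternating best-response recursion seeded by the level-$0$ rule. The following Python sketch, with placeholder objects standing in for $\Gamma_i$, $\mathbb{R}_i$, and the level-$0$ rule, compresses the two branches of the algorithm into a single parity-indexed loop:

```python
def level_k_decision(K, s1, s2, Gamma1, Gamma2, R1, R2, level0):
    """Return the ego agent's level-K decision in a 2-agent
    interaction. `level0(s_own, s_other)` is the level-0 rule;
    R1/R2 map (s1, s2, gamma1, gamma2) to scalar rewards."""
    g1 = level0(s1, s2)  # ego agent's level-0 decision
    g2 = level0(s2, s1)  # other agent's level-0 decision
    for k in range(1, K + 1):
        if (K - k) % 2 == 0:
            # refine the ego's decision against the other's latest
            g1 = max(Gamma1, key=lambda g: R1(s1, s2, g, g2))
        else:
            # refine the other agent's predicted decision
            g2 = max(Gamma2, key=lambda g: R2(s1, s2, g1, g))
    return g1
```

At step $k$, the agent being refined best-responds to the other's most recent decision, so the parity choice guarantees that the final refinement produces the ego agent's level-$K$ decision, matching the even and odd branches of Algorithm~\ref{alg:levelK_2agents}.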
In this paper, a level-$0$ decision of vehicle $i$ is to maximize the reward $\mathbb{R}_i(t)$ while treating the other vehicles as stationary obstacles, i.e., $v_j(\tau|t) = 0$ for all $j \neq i$ and $\tau = 0,\cdots, \mathcal{N}-1$. In this setting, a level-$0$ vehicle may represent an aggressive driver in real-world traffic who assumes that the other drivers will yield the right of way. Note that a level-$\mathcal{K}$ agent optimally responds to level-($\mathcal{K}-1$) interacting agents. When the levels of interacting agents are not known a priori, online estimation and adaptation can be incorporated in decision-making. In particular, we consider the following decision-making model \cite{li2018game}: \begin{align}\label{equ:model_D} & \gamma_i^{\mathfrak{K}} \in \argmax_{\gamma_i \in \Gamma_i} \\ &\! \sum_{\substack{k_j \in \{0,\cdots,k_{\max}\}, \\ j \neq i }}\!\! \bigg[ \Big(\prod_{j \neq i} \mathbb{P}(\mathcal{K}_j = k_j) \Big) \mathbb{R}_i \Big(s_i,\mathbf{s}_{-i},\gamma_i,(\gamma_j^{k_j})_{j \neq i}\Big) \bigg], \nonumber \end{align} where $\mathbb{P}(\mathcal{K}_j = k_j)$ represents vehicle $i$'s belief that vehicle $j$ can be modeled as level-$k_j$, which is updated after each time step $t$ according to the following algorithm: For each $j \neq i$, if there exist $k_j, k_j' \in \{0,\cdots,k_{\max}\}$ such that $a_j^{k_j}(0|t) \neq a_j^{k_j'}(0|t)$, then \begin{align}\label{equ:model_estimation} & \tilde{k}_j \in \argmin_{k_j \in \{0,\cdots,k_{\max}\}} \big|a_j^{k_j}(0|t) - a_j^{\text{actual}}(t)\big|, \nonumber \\ & \mathbb{P}(\mathcal{K}_j = \tilde{k}_j) \leftarrow \mathbb{P}(\mathcal{K}_j = \tilde{k}_j) + \Delta \mathbb{P}, \nonumber \\ & \mathbb{P}(\mathcal{K}_j = k_j) \leftarrow \mathbb{P}(\mathcal{K}_j = k_j) \bigg/ \Big(\sum_{k_j' = 0}^{k_{\max}} \mathbb{P}(\mathcal{K}_j = k_j')\Big), \nonumber \\[-2pt] & \quad\quad\quad\quad\quad\quad\;\; \forall\, k_j = 0,\cdots,k_{\max}; \end{align} otherwise, $\mathbb{P}(\mathcal{K}_j = k_j) \leftarrow
\mathbb{P}(\mathcal{K}_j = k_j)$ for all $k_j = 0,\cdots,k_{\max}$, where $a_j^{k_j}(0|t)$ is vehicle $j$'s acceleration over time step $t$ as predicted by vehicle $i$ at the time instant $t$ using the level-$k_j$ model, and $a_j^{\text{actual}}(t)$ is vehicle $j$'s actual acceleration over time step $t$, observed by vehicle $i$ after the time step $t$ is over. The model estimation algorithm \eqref{equ:model_estimation} increases the belief in the level-$\tilde{k}_j$ model whose prediction best matches the actual behavior of vehicle $j$. It is triggered only when the predicted value $a_j^{k_j}(0|t)$ of at least one level-$k_j$ model differs from the others; otherwise, when the predictions of all models are the same, the ego vehicle has no information to improve its beliefs \cite{li2018game,tian2018adaptive}. The decision-making process \eqref{equ:model_D} selects actions to maximize the weighted sum of rewards corresponding to all possible level combinations of the interacting vehicles, where the weights are the ego vehicle's beliefs in each level combination, i.e., to maximize the expected reward. Since the decision-making model \eqref{equ:model_D} is developed upon the set of decision-making models \eqref{equ:levelK} induced from level-$\mathcal{K}$ game theory, and it adapts itself to uncertain interacting agents using online model estimation \eqref{equ:model_estimation}, we refer to it as an ``adaptive level-$\mathcal{K}$'' decision-making model, or model $\mathfrak{K}$. \section{Simulation case studies}\label{sec:simulations} In this section, we illustrate our game-theoretic framework for modeling vehicle interactions at uncontrolled intersections through multiple simulation case studies. The values of the parameters used in all of the reported simulation results are collected in Table~\ref{tab:para} in Appendix~B.
\subsection{Case study 1: Reproducing real-world traffic scenarios} We first show that our proposed vehicle interaction model can reproduce real-world traffic scenarios. The scenarios we consider are extracted from the video dataset used in \cite{ren2018learning}. We note that although the traffic data are collected at a signalized intersection in Canmore, Alberta, we consider only the vehicles that are legally allowed to enter the intersection (i.e., under a green light or making a right turn), the behavior of which can be modeled similarly to that at uncontrolled intersections. In particular, we initialize the vehicles in our simulation based on the positions and velocities of the corresponding vehicles in the video, and compare the evolution of the scenario simulated using our model to that provided by the video. The results of a scenario involving 3 interacting vehicles are shown in Fig.~\ref{fig:real_1} and those of a scenario involving 4 interacting vehicles are shown in Fig.~\ref{fig:real_2}. It can be observed that our model reproduces the scenarios with satisfactory accuracy. \begin{figure}[h!] 
\begin{center} \begin{picture}(204.0, 310) \put( 0, 226){\epsfig{file=Figures/validation/Case_1/Test1Critical1Step2.pdf,height=1.1in}} \put( 98, 236){\epsfig{file=Figures/validation/Case_1/Case1_2.pdf,height=0.85in}} \put( 0, 149){\epsfig{file=Figures/validation/Case_1/Test1Critical1Step3.pdf,height=1.1in}} \put( 98, 159){\epsfig{file=Figures/validation/Case_1/Case1_3.pdf,height=0.85in}} \put( 0, 72){\epsfig{file=Figures/validation/Case_1/Test1Critical1Step4.pdf,height=1.1in}} \put( 98, 82){\epsfig{file=Figures/validation/Case_1/Case1_5.pdf,height=0.85in}} \put( 0, -5){\epsfig{file=Figures/validation/Case_1/Test1Critical1Step5.pdf,height=1.1in}} \put( 98, 5){\epsfig{file=Figures/validation/Case_1/Case1_6.pdf,height=0.85in}} \small \put(195, 301){(a)} \put(195, 224){(b)} \put(195, 147){(c)} \put(195, 70){(d)} \normalsize \end{picture} \end{center} \caption{Real-world traffic scenario with 3 interacting vehicles.} \label{fig:real_1} \end{figure} \begin{figure}[h!] \begin{center} \begin{picture}(204.0, 235) \put( 0, 149){\epsfig{file=Figures/validation/Case_2/Test1Critical1Step1.pdf,height=1.1in}} \put( 98, 159){\epsfig{file=Figures/validation/Case_2/Case2_2.pdf,height=0.85in}} \put( 0, 72){\epsfig{file=Figures/validation/Case_2/Test1Critical1Step5.pdf,height=1.1in}} \put( 98, 82){\epsfig{file=Figures/validation/Case_2/Case2_8.pdf,height=0.85in}} \put( 0, -5){\epsfig{file=Figures/validation/Case_2/Test1Critical1Step6.pdf,height=1.1in}} \put( 98, 5){\epsfig{file=Figures/validation/Case_2/Case2_9.pdf,height=0.85in}} \small \put(195, 224){(a)} \put(195, 147){(b)} \put(195, 70){(c)} \normalsize \end{picture} \end{center} \caption{Real-world traffic scenario with 4 interacting vehicles.} \label{fig:real_2} \end{figure} \subsection{Case study 2: Completely symmetric} As discussed at the end of Section~\ref{sec:leader_follower} and at the beginning of Section~\ref{sec:randomness}, one type of challenging scenarios at uncontrolled intersections for both human drivers and 
autonomous vehicles consists of scenarios in which no vehicle has a determinable leader role. Among these scenarios, the ones where all the vehicles arrive at the entrances of a geometrically symmetrical intersection at the same time with the same speed may be particularly challenging. In this section, we show the simulation results of two such ``completely symmetric'' cases. Both cases involve a geometrically symmetrical four-lane (two for each direction) four-way intersection. In the first case, 8 vehicles approach the entrances of the intersection, one from each of the eight forward lanes, with the same initial distance to their corresponding entrance points $\Delta \rho^{\text{en}}(0)$ and the same initial speed $v(0)$. Their target lanes all correspond to maneuvers of going straight to cross the intersection. In the second case, 4 vehicles approach the entrances of the intersection, one from each of the four leftmost forward lanes of the road arms, with the same $\Delta \rho^{\text{en}}(0)$ and $v(0)$. Their target lanes all correspond to maneuvers of making left turns. In both cases, all of the vehicles use the decision-making model \eqref{equ:leader_follower_n_1} based on pairwise leader-follower games, combined with Algorithm~\ref{alg:Break_D} to handle deadlocks. The simulation results are shown in Figs.~\ref{fig:completely_symmetric_1} and \ref{fig:completely_symmetric_2}. \begin{figure}[h!]
\begin{center} \begin{picture}(234.0, 312.0) \put( 0, 206){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step1.pdf,height=1.52in}} \put( 112, 206){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step3.pdf,height=1.52in}} \put( 0, 100){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step4.pdf,height=1.52in}} \put( 112, 100){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step9.pdf,height=1.52in}} \put( 0, -6){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step14.pdf,height=1.52in}} \put( 112, -6){\epsfig{file=Figures/experiment/Case_1/Test1Critical1Step23.pdf,height=1.52in}} \small \put(98, 310){(a)} \put(210, 310){(b)} \put(98, 204){(c)} \put(210, 204){(d)} \put(98, 98){(e)} \put(210, 98){(f)} \normalsize \end{picture} \end{center} \caption{Completely symmetric case~1. Figures~(a-f) show the simulation snapshots at a series of sequential steps.} \label{fig:completely_symmetric_1} \end{figure} In Fig.~\ref{fig:completely_symmetric_1}, the eight vehicles arrive at the entrances of the intersection at the same time (Fig.~\ref{fig:completely_symmetric_1}(a)). According to Algorithm~\ref{alg:Role}, no vehicle holds an overall leader role (being the leader in every pairwise interaction). As a result, all of the eight vehicles stop at the intersection entrances (Fig.~\ref{fig:completely_symmetric_1}(b)). According to Algorithm~\ref{alg:Break_D}, a deadlock is detected and the blue vehicle makes a probing acceleration (Fig.~\ref{fig:completely_symmetric_1}(c)). Then, the symmetry is broken: the blue vehicle becomes the overall leader and crosses the intersection first (Fig.~\ref{fig:completely_symmetric_1}(d)). The other vehicles cross the intersection in a clockwise order (Figs.~\ref{fig:completely_symmetric_1}(e)(f)). 
In Fig.~\ref{fig:completely_symmetric_2}, after the four vehicles arrive at the entrances of the intersection at the same time, the left purple vehicle makes a probing acceleration (Fig.~\ref{fig:completely_symmetric_2}(a)) and gets to its target lane first (Fig.~\ref{fig:completely_symmetric_2}(b)). Similar to Fig.~\ref{fig:completely_symmetric_1}, the other vehicles then pass through the intersection in a clockwise order (Figs.~\ref{fig:completely_symmetric_2}(c)(d)). \begin{figure}[h!] \begin{center} \begin{picture}(234.0, 214.0) \put( 0, 100){\epsfig{file=Figures/experiment/Case_2/Test1Critical1Step4.pdf,height=1.52in}} \put( 112, 100){\epsfig{file=Figures/experiment/Case_2/Test1Critical1Step9.pdf,height=1.52in}} \put( 0, -6){\epsfig{file=Figures/experiment/Case_2/Test1Critical1Step14.pdf,height=1.52in}} \put( 112, -6){\epsfig{file=Figures/experiment/Case_2/Test1Critical1Step18.pdf,height=1.52in}} \small \put(98, 204){(a)} \put(210, 204){(b)} \put(98, 98){(c)} \put(210, 98){(d)} \normalsize \end{picture} \end{center} \caption{Completely symmetric case~2. Figures~(a-d) show the simulation snapshots at a series of sequential steps.} \label{fig:completely_symmetric_2} \end{figure} From the above ``completely symmetric'' cases we can observe that our model exhibits reasonable behavior expected in traffic and has good capability in resolving traffic conflicts in challenging scenarios. We note that there is no centralized control or management in our model to guide the vehicles to resolve their conflicts -- the vehicles make their decisions independently. In particular, they take into account their interactions when making decisions. Such a process is consistent with drivers' decision-making in real-world traffic. Moreover, when a deadlock occurs, the vehicles try to resolve it through exploratory movements, which is also consistent with the way in which human drivers resolve deadlocks in real-world traffic. 
\subsection{Case study 3: Leader-follower versus adaptive level-$\mathcal{K}$} As discussed at the beginning of Section~\ref{sec:levelK}, a model representing driver interactive decision-making is supposed to have a reasonable capability of resolving traffic conflicts when interacting with other drivers whose driving styles are a priori unknown. To illustrate such capability of the decision-making model \eqref{equ:leader_follower_n_1} based on pairwise leader-follower games, we let vehicles that make decisions using \eqref{equ:leader_follower_n_1} interact with vehicles that make decisions using models different from \eqref{equ:leader_follower_n_1}, in particular, using the adaptive level-$\mathcal{K}$ decision-making model \eqref{equ:model_D}. Note that \eqref{equ:model_D} does not take into account the leader-follower relationships among vehicles, and thus, a vehicle using \eqref{equ:model_D} may not yield the right of way to another vehicle that is supposed to be the leader in their interactions. We initialize the traffic scenario as follows: Three vehicles are approaching the entrances of a geometrically symmetrical four-lane four-way intersection from three different road arms. Vehicle~1 (blue) is coming from the bottom arm and is making a left turn to the left arm; vehicle~2 (red) is coming from the right arm and is making a left turn to the bottom arm; and vehicle~3 (green) is coming from the top arm and is going straight to the bottom arm. The three vehicles are initialized with the same $\Delta \rho^{\text{en}}(0)$ and $v(0)$. At first, we let vehicle~1 using \eqref{equ:leader_follower_n_1} interact with vehicles~2 and 3 using \eqref{equ:model_D}, and the simulation results are shown in Fig.~\ref{fig:lf_vs_k} (left column). Then, we let vehicle~1 using \eqref{equ:model_D} interact with vehicles~2 and 3 using \eqref{equ:leader_follower_n_1}, and the simulation results are shown in Fig.~\ref{fig:lf_vs_k} (right column). \begin{figure}[h!] 
\begin{center} \begin{picture}(234.0, 522.0) \put( 56, 412){\epsfig{file=Figures/experiment/Case_4/Test1Critical1Step1.pdf,height=1.52in}} \put( 0, 307){\epsfig{file=Figures/experiment/Case_3/Test1Critical1Step2.pdf,height=1.52in}} \put( 112, 307){\epsfig{file=Figures/experiment/Case_4/Test1Critical1Step3.pdf,height=1.52in}} \put( 0, 202){\epsfig{file=Figures/experiment/Case_3/Test1Critical1Step8.pdf,height=1.52in}} \put( 112, 202){\epsfig{file=Figures/experiment/Case_4/Test1Critical1Step7.pdf,height=1.52in}} \put( 0, 97){\epsfig{file=Figures/experiment/Case_3/Test1Critical1Step15.pdf,height=1.52in}} \put( 112, 97){\epsfig{file=Figures/experiment/Case_4/Test1Critical1Step12.pdf,height=1.52in}} \put( 0, -8){\epsfig{file=Figures/experiment/Case_3/level_estimation.pdf,height=1.52in}} \put( 112, -8){\epsfig{file=Figures/experiment/Case_4/level_estimation.pdf,height=1.52in}} \small \put(165, 506){(a)} \put(98, 411){(b)} \put(210, 411){(f)} \put(98, 306){(c)} \put(210, 306){(g)} \put(98, 201){(d)} \put(210, 201){(h)} \put(98, 96){(e)} \put(210, 96){(i)} \footnotesize \put(18, 87){level-0} \put(18, 54){level-1} \put(18, 21){level-2} \put(130, 87){level-0} \put(130, 54){level-1} \put(130, 21){level-2} \normalsize \end{picture} \end{center} \caption{Leader-follower versus adaptive level-$\mathcal{K}$ case~1 (left): the blue car is using the decision-making model \eqref{equ:leader_follower_n_1} and the other two cars are using \eqref{equ:model_D} and \eqref{equ:model_estimation}, and case~2 (right): the blue car is using \eqref{equ:model_D} and \eqref{equ:model_estimation} and the other two cars are using \eqref{equ:leader_follower_n_1}. 
Figures~(a-d) show the simulation snapshots at a series of sequential steps of case~1, and figure~(e) shows the corresponding model estimation time histories of the three vehicles (the blue curves correspond to the level-$0$, $1$, and $2$ belief histories of the blue car, etc); figures~(a) and (f-h) show the simulation snapshots at a series of sequential steps of case~2, and figure~(i) shows the corresponding model estimation time histories of the three vehicles.} \label{fig:lf_vs_k} \end{figure} In Figs.~\ref{fig:lf_vs_k}(a-d) where vehicle~1 (blue) is using \eqref{equ:leader_follower_n_1}, according to Algorithm~\ref{alg:Role}, it is the follower of vehicles~2 (red) and 3 (green). As a result, it yields the right of way to both vehicles~2 and 3. Vehicles~2 and 3 pass through the intersection ahead of vehicle~1 based on \eqref{equ:model_D} and \eqref{equ:model_estimation}. Fig.~\ref{fig:lf_vs_k}(e) shows the model estimation time histories of the three vehicles: it can be observed that as vehicle~1 decides to yield the right of way, the belief that it can be modeled as level-$1$ (corresponding to the most conservative driver) increases over $7-12\,$[s] (when vehicles~2 and 3 are inside the intersection). In Figs.~\ref{fig:lf_vs_k}(a) and (f-i) where vehicles~2 (red) and 3 (green) are using \eqref{equ:leader_follower_n_1}, according to Algorithm~\ref{alg:Role}, vehicle~3 is the leader in every pairwise interaction. As a result, it decides to cross the intersection first. Such a result illustrates that our model \eqref{equ:leader_follower_n_1} is not conservative when it has the legitimate right of way. Different from Figs.~\ref{fig:lf_vs_k}(a-d), vehicle~1 decides to pass through the intersection ahead of vehicle~2 based on \eqref{equ:model_D} and \eqref{equ:model_estimation}. 
This is because when vehicle~2 yields the right of way to vehicle~3, vehicle~1 thinks that vehicle~2 is a conservative driver (which can be seen from the model estimation time histories Fig.~\ref{fig:lf_vs_k}(i), where the belief that vehicle~2 can be modeled as level-$1$ increases at the beginning). Thus, vehicle~1 decides to go ahead of this conservative driver. Note that the adaptive level-$\mathcal{K}$ decision-making model \eqref{equ:model_D} and \eqref{equ:model_estimation} does not account for the right-of-way traffic rules. Although vehicle~2 is supposed to have the right of way over vehicle~1 when they arrive at the entrances of the intersection, it decides to wait until vehicle~1 passes because of vehicle~1's aggressive preemption. The above ``leader-follower versus adaptive level-$\mathcal{K}$'' cases illustrate that our model \eqref{equ:leader_follower_n_1} is capable of making effective use of its legitimate rights of way in resolving traffic conflicts. Nevertheless, when encountering unexpected disruptions, it can quickly adapt its strategy to avoid traffic incidents. \subsection{Case study 4: Randomized traffic scenarios} To show the capability of the proposed framework to model vehicle interactions at a wide range of uncontrolled intersection traffic scenarios and statistically evaluate its performance, we randomly generate the layout and geometry parameters $\{M_{\text{f}}^{(m)}\}_{m=1}^N$, $\{M_{\text{b}}^{(m)}\}_{m=1}^N$, and $\{\phi^{(m)}\}_{m=1}^N$ of the intersections for each fixed number of intersection arms $N \in \{3,4,5\}$, randomly generate the origin lanes, target lanes, initial distances to intersection entrances $\Delta \rho^{\text{en}}(0)$, and initial speeds $v(0)$ of the vehicles for each fixed number of vehicles $n \in \{2,4,6,8,10\}$, and have 100 simulation runs for each pair of $(N,n)$. 
In particular, we sample $\{M_{\text{f}}^{(m)}\}_{m=1}^N$, $\{M_{\text{b}}^{(m)}\}_{m=1}^N$ based on categorical distributions and $\{\phi^{(m)}\}_{m=1}^N$ based on truncated normal distributions\footnote{with mean $\frac{2m\pi}{N}$ and standard deviation $\frac{\pi}{24}$ truncated to the range $\big[\frac{2m\pi}{N}-\frac{\pi}{8},\frac{2m\pi}{N}+\frac{\pi}{8}\big]$.} as follows: \begin{align*} M_{\xi}^{(m)} & \sim \text{Cat} \Big(\{1,2,3\}, \{0.15,0.7,0.15\}\Big), \quad \xi \in \{\text{f},\text{b}\}, \nonumber \\ \phi^{(m)} & \sim \text{Normal} \Big(\frac{2m\pi}{N}, \frac{\pi}{24}, \big[\frac{2m\pi}{N}-\frac{\pi}{8},\frac{2m\pi}{N}+\frac{\pi}{8}\big]\Big), \end{align*} for each $m = 1,\cdots,N$. Once the intersection has been created, we assign each vehicle's origin road arm based on a uniform distribution over all road arms, and assign its origin lane based on a uniform distribution over all forward lanes of its origin road arm. After that, we assign its target road arm and target lane based on uniform distributions over, respectively, all acceptable road arms and all acceptable backward lanes of the assigned target road arms, where ``acceptable'' means satisfying the traffic rule constraints described at the beginning of Section~\ref{sec:path}. Then, each vehicle's $\Delta \rho^{\text{en}}(0)$ and $v(0)$ are initialized based on uniform distributions over the ranges $[10, 28]\,$[m] and $[2, 4]\,$[m/s]. Furthermore, we enforce a minimum initial separation, $\rho^{\text{sep}}$, between any two vehicles that are initialized on the same origin lane -- if $\Delta \rho^{\text{en}}_i(0)$ of vehicle $i$ is in the range of $[\Delta \rho^{\text{en}}_j(0) - \rho^{\text{sep}}, \Delta \rho^{\text{en}}_j(0) + \rho^{\text{sep}}]$ for any vehicle $j$ that has been initialized before vehicle $i$ and is on the same origin lane of vehicle $i$, then $\Delta \rho^{\text{en}}_i(0)$ is re-sampled as above. 
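The scenario randomization above can be sketched in a few lines of Python. The distribution constants mirror those stated in the text, and rejection sampling is one simple way to realize the truncated normal; the function names are illustrative.

```python
import math
import random

def sample_lane_count():
    """Categorical over {1,2,3} with probabilities {0.15, 0.7, 0.15}."""
    return random.choices([1, 2, 3], weights=[0.15, 0.7, 0.15])[0]

def sample_arm_angle(m, N):
    """Truncated normal for phi^(m): mean 2*m*pi/N, std pi/24,
    truncated to +/- pi/8 around the mean (rejection sampling)."""
    mu, sigma, half_width = 2 * m * math.pi / N, math.pi / 24, math.pi / 8
    while True:
        phi = random.gauss(mu, sigma)
        if abs(phi - mu) <= half_width:
            return phi

def sample_initial_state(rho_range=(10.0, 28.0), v_range=(2.0, 4.0)):
    """Uniform initial distance to the entrance and initial speed."""
    return (random.uniform(*rho_range), random.uniform(*v_range))
```

The minimum-separation constraint on vehicles sharing an origin lane is then enforced by re-drawing from `sample_initial_state` until the sampled distance clears every previously placed vehicle on that lane.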
Some of the simulated traffic scenarios are shown in Fig.~\ref{fig:success_cases}. \begin{figure}[h!] \begin{center} \begin{picture}(230.0, 312.0) \put( 0, 206){\epsfig{file=Figures/experiment/random/3_way_10_car_deadlock_1.pdf,height=1.52in}} \put( 112, 206){\epsfig{file=Figures/experiment/random/3_way_10_car_deadlock_3.pdf,height=1.52in}} \put( 0, 100){\epsfig{file=Figures/experiment/random/4_way_10_car_deadlock_1.pdf,height=1.52in}} \put( 112, 100){\epsfig{file=Figures/experiment/random/4_way_10_car_deadlock_2.pdf,height=1.52in}} \put( 0, -6){\epsfig{file=Figures/experiment/random/5_way_10_car_deadlock_1.pdf,height=1.52in}} \put( 112, -6){\epsfig{file=Figures/experiment/random/5_way_10_car_deadlock_3.pdf,height=1.52in}} \small \put(98, 310){(a)} \put(210, 310){(b)} \put(98, 204){(c)} \put(210, 204){(d)} \put(98, 98){(e)} \put(210, 98){(f)} \normalsize \end{picture} \end{center} \caption{Randomized traffic scenarios at randomized intersections. Figures~(a-b) show the snapshots of a three-way intersection simulation at two sequential steps, figures~(c-d) show the snapshots of a four-way intersection simulation at two sequential steps, and figures~(e-f) show the snapshots of a five-way intersection simulation at two sequential steps.} \label{fig:success_cases} \end{figure} \subsubsection{Statistical evaluation} We define several statistical metrics to evaluate the proposed framework to model vehicle interactions at uncontrolled intersections, including the rate of success (SR), the rate of collision (CR), and the rate of deadlock (DR). The rate of success is defined as the proportion of simulation runs where all the vehicles safely (without colliding with any other vehicles) reach their terminal points $(x^{\text{term}},y^{\text{term}})$ within $60\,$[s] of simulation time. 
The rate of collision is defined as the proportion of simulation runs where at least one vehicle collision occurs (once a vehicle collision occurs at a simulation step, the simulation run stops at that step). The rate of deadlock is defined as the proportion of simulation runs where no vehicle collision occurs but there is at least one vehicle that does not reach its terminal point $(x^{\text{term}},y^{\text{term}})$ within $60\,$[s] of simulation time. We note that based on their definitions, $\text{SR}+\text{CR} + \text{DR} = 1$. A model representing driver interactive decision-making is supposed to have reasonably high SR, and reasonably low CR and DR. The evaluation results of our model are shown in Fig.~\ref{fig:results}. It can be observed that as the numbers of road arms and of vehicles increase, which correspond to traffic scenarios of increased complexity, the rates of collision and of deadlock also increase. In three-way and four-way intersection traffic scenarios with $2$ or $4$ vehicles, no collisions or deadlocks are observed. When up to $10$ vehicles are interacting at three-way or four-way intersections, the rates of success are higher than $0.9$. It can also be observed that five-way intersections are more challenging (with higher rates of collision and of deadlock compared to three-way and four-way intersections) -- the rate of success drops to $0.84$ for the case of $10$ interacting vehicles. This is because more vehicles may get to the entrances of the intersection or be inside the intersection at the same time for five-way intersections compared to three-way and four-way intersections, which may lead to higher chances of traffic conflicts. Indeed, five-way intersections are also more challenging to drivers compared to three-way and four-way intersections in real-world traffic scenarios. \begin{figure}[h!] 
\begin{center} \begin{picture}(210.0, 152.0) \put( 0, 5){\epsfig{file=Figures/statistics/success_deadlock_collision_rates.pdf,height=2.1in}} \small \put( -8, -3){$\#$ of arms} \put( -10, 65){\rotatebox{90}{Rate}} \put( 40, -3){$3$} \put( 112, -3){$4$} \put( 184, -3){$5$} \normalsize \end{picture} \end{center} \caption{Statistical evaluation of the vehicle interaction model. Light color: SR, medium color: DR, dark color: CR.} \label{fig:results} \end{figure} Furthermore, by watching the animations of the simulation runs, we observe that most collisions are caused by simultaneous exploratory actions of two or more vehicles in deadlock scenarios. Note that in Algorithm~\ref{alg:Break_D}, a vehicle is not permitted to accelerate if its acceleration would lead to a collision when the other vehicles in conflict remain stopped. However, if two or more vehicles accelerate at the same time, it is possible that their simultaneous accelerations lead to a collision although each single acceleration would not. Two of the failure scenarios are shown in Fig.~\ref{fig:failure_cases}. The rates of failures ($\text{CR} + \text{DR}$) resulting from our framework are much lower than those resulting from the scheme in \cite{mandiau2008behaviour} ($3\%$ versus almost $50\%$ for the case of $6$ vehicles at four-way intersections). Note that communications and negotiations among human drivers in deadlock scenarios, such as through eye contacts or gestures, are not considered in our framework. \begin{figure}[h!] \begin{center} \begin{picture}(230.0, 98.0) \put( 0, 0){\epsfig{file=Figures/experiment/random/failure_1.pdf,height=1.3in}} \put( 120, 0){\epsfig{file=Figures/experiment/random/failure_3.pdf,height=1.3in}} \small \put(96, 96){(a)} \put(215, 96){(b)} \normalsize \end{picture} \end{center} \caption{Two failure cases. (a) A deadlock scenario. (b) A collision scenario. 
} \label{fig:failure_cases} \end{figure} For the vehicles that safely reach their terminal points within $60\,$[s] of simulation time, we compute their average completion time (CT), where a vehicle's CT is defined as the duration (in [s] of simulation time) from the simulation initialization to the time instant when the vehicle reaches its terminal point. The average CT can reflect how conservative the decision-making model is. The average CTs for different numbers of road arms and vehicles are shown in Fig.~\ref{fig:ACT}. It can be observed that as the number of vehicles increases, the vehicles need more time to pass through the intersections. In particular, for the cases of $2$ and $4$ interacting vehicles, the average CTs exhibited by our model correspond to level-B in the real-world traffic quality rating system called the ``level-of-service'' (LOS) for unsignalized intersections, defined based on the average control delay \cite{quiroga1999measuring}. LOS-B corresponds to traffic with a high degree of freedom and a small amount of interactions and is characterized by $10$-$15\,$[s] of average control delay \cite{HCM2000}. For the cases of $6$-$10$ interacting vehicles, the average CTs exhibited by our model correspond to LOS-C, which corresponds to traffic with restricted freedom due to significant interactions and is characterized by $15$-$25\,$[s] of average control delay. Furthermore, among three-way, four-way, and five-way intersections, the vehicles take the least time to pass through four-way intersections -- this may be explained by the observation that the ``right-of-way'' rules (see Section~\ref{sec:leader_follower}) may function best for four-way intersections, which are, as a matter of fact, the most common in real-world road networks. \begin{figure}[h!]
\begin{center} \begin{picture}(240.0, 142.0) \put( 10, 8){\epsfig{file=Figures/statistics/complete_time_per_car.pdf,height=1.8in}} \small \put( 2, 2){$\#$ of arms} \put( 2, 60){\rotatebox{90}{ACT [s]}} \put( 56, 2){$3$} \put( 124, 2){$4$} \put( 194, 2){$5$} \normalsize \end{picture} \end{center} \caption{Average completion time (ACT). The black vertical bars represent the standard deviations.} \label{fig:ACT} \end{figure} \subsubsection{Computational complexity} In addition to the rate of success and the average completion time, we are also interested in the computational effort required by the proposed framework to model vehicle interactions at uncontrolled intersections, as this determines the framework's scalability to traffic scenarios of increased complexity. As mentioned in Section~\ref{sec:game_n}, traditional generalizations of leader-follower game-theoretic decision-making models to $n$-player settings require exponentially increased computational efforts to solve for solutions as the number of players increases \cite{yoo2018predictive}. However, thanks to the pairwise decoupling of vehicle interactions, the computational complexity of our decision-making model \eqref{equ:leader_follower_n_1} (solved using a tree-search method) increases only linearly as the number of interacting vehicles increases. We use the average and the worst computation times per vehicle per step (in [s] of real time) to represent our model's computational complexity, which are, respectively, the average and the worst CPU times for one vehicle to confirm its action choice over one step (including the time to compute the initial action choice using \eqref{equ:leader_follower_n_1} and the time to adjust the action choice using Algorithm~\ref{alg:Break_D} if a deadlock is detected). The results for different numbers of road arms and vehicles are shown in Fig.~\ref{fig:CPUT}.
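As a side note, the statistics reported in this evaluation (SR, CR, DR, and the average CT) amount to simple bookkeeping over the per-run outcomes. The following Python sketch illustrates this bookkeeping only; our simulations were implemented in Matlab, and the record format `(outcome, completion_times)` used below is a hypothetical illustration, not the format of our code.

```python
# Tally success/collision/deadlock rates and average completion time
# from a list of simulation-run records. Each record is assumed to be
# (outcome, completion_times): outcome is one of "success", "collision",
# "deadlock", and completion_times lists the CT (in seconds of simulation
# time) of every vehicle that reached its terminal point within 60 s.

def tally(runs):
    n = len(runs)
    counts = {"success": 0, "collision": 0, "deadlock": 0}
    cts = []
    for outcome, completion_times in runs:
        counts[outcome] += 1
        cts.extend(completion_times)
    sr = counts["success"] / n
    cr = counts["collision"] / n
    dr = counts["deadlock"] / n
    act = sum(cts) / len(cts) if cts else float("nan")
    return sr, cr, dr, act

# Hypothetical outcomes of four runs.
runs = [("success", [12.0, 14.5]),
        ("success", [11.0, 13.0]),
        ("deadlock", [12.5]),      # one vehicle finished, one did not
        ("collision", [])]
sr, cr, dr, act = tally(runs)
assert abs(sr + cr + dr - 1.0) < 1e-12  # SR + CR + DR = 1 by definition
```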
The simulations are performed on the Matlab~R2016a platform using an Intel Core i7-4790 3.60~GHz PC with Windows~10 and 16.0~GB of RAM. The computation times are calculated using the Matlab \textit{tic-toc} command. \begin{figure}[h!] \begin{center} \begin{picture}(240.0, 156.0) \put( 11, 10){\epsfig{file=Figures/statistics/computation_time_per_step_per_car.pdf,height=2.1in}} \small \put( 0, 2){$\#$ of arms} \put( 0, 50){\rotatebox{90}{CPU time [s]}} \put( 52, 2){$3$} \put( 122, 2){$4$} \put( 193, 2){$5$} \normalsize \end{picture} \end{center} \caption{Average (dark-colored bars) and worst (light-colored bars) computation times per vehicle per step.} \label{fig:CPUT} \end{figure} It can be observed that: 1) as the number of vehicles increases, the computation time increases; 2) for four-way and five-way intersections, the increase in computation time slows down as the number of vehicles increases; and 3) the increase in computation time slows down as the number of road arms increases from three to four and five. The explanation for 2) and 3) may be that as the numbers of vehicles and road arms increase, the vehicles are more likely to be outside each other's perception range, and as a result, the number of vehicles involved in the computation of model \eqref{equ:leader_follower_n_1} decreases (see Section~\ref{sec:ranges}). In general, the increase in computational effort for our decision-making model \eqref{equ:leader_follower_n_1} to solve for solutions is only linear in the increase in the number of interacting vehicles. In turn, the total computation time for all vehicles to solve for decisions is quadratic in the number of vehicles. This gives our framework reasonably good scalability for modeling uncontrolled intersection traffic scenarios of increased complexity.
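The linear, quadratic, and exponential growth rates above can be made concrete by counting decision problems per simulation step. The sketch below only counts problem instances; the function names and the action-set size `m` are introduced solely for this illustration and do not appear in our implementation.

```python
# Count decision-problem evaluations per simulation step.
# n: number of interacting vehicles; m: number of actions available to
# one vehicle (an illustrative symbol introduced only for this sketch).

def pairwise_games_per_vehicle(n):
    # Each vehicle solves one pairwise leader-follower game against
    # every other vehicle it interacts with: linear in n.
    return n - 1

def pairwise_games_total(n):
    # Summed over all vehicles: quadratic in n.
    return n * (n - 1)

def joint_game_action_profiles(n, m):
    # A traditional n-player generalization searches over joint action
    # profiles: exponential in n.
    return m ** n

for n in (2, 6, 10):
    print(n, pairwise_games_per_vehicle(n),
          pairwise_games_total(n), joint_game_action_profiles(n, 5))
```

For instance, with $10$ vehicles and $5$ actions each, the pairwise decoupling requires $9$ games per vehicle ($90$ in total), whereas a joint $10$-player game would range over $5^{10}$ action profiles.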
In contrast, the adaptive level-$\mathcal{K}$ decision-making model described in Section~\ref{sec:levelK} may not scale as well as the decision-making model \eqref{equ:leader_follower_n_1} based on pairwise leader-follower games. This is because the adaptive level-$\mathcal{K}$ model needs to first compute the level-$k$ decision of each of the interacting vehicles for $k=0,1,\cdots,k_{\max}$ using \eqref{equ:levelK} in order to estimate their levels using \eqref{equ:model_estimation}, and then compute the optimal decision of the ego vehicle using \eqref{equ:model_D}, whose computational complexity itself grows combinatorially with the number of interacting vehicles. \section{Summary}\label{sec:sum} In this paper, we proposed a game-theoretic framework for modeling the interactive behavior of vehicles in multi-vehicle traffic scenarios at uncontrolled intersections. Our approach takes into account common traffic rules to designate a leader-follower relationship between each pair of interacting vehicles. A decision-making process based on pairwise leader-follower relationships is used to represent interactive decision-making of vehicles. Additional modeling considerations, representing courteous driving, limited perception ranges, and the capability of human drivers to resolve deadlock scenarios through probing, are also accounted for in the process. In particular, uncontrolled intersections were modeled based on a parametrization scheme, to which the proposed vehicle interaction modeling approach was applied. This way, the interactive behavior of vehicles in a rich set of uncontrolled intersection traffic scenarios (with various numbers of interacting vehicles, intersection layouts and geometries, etc.) could be modeled. Simulation results were reported and showed that the vehicle interaction model exhibited reasonable behavior expected in traffic.
The performance of the model was then evaluated based on several statistics, including the rate of success, the rate of collision, the rate of deadlock, the average completion time, as well as the average and the worst computation times. It was shown that the model had reasonably high rates of success in resolving traffic conflicts and average completion times matching the level-of-service criteria used for rating real-world traffic. Moreover, thanks to the pairwise decoupling of vehicle interactions, the computational complexity of the decision-making model increases linearly as the number of interacting vehicles increases, which improves the model's scalability. Furthermore, another game-theoretic model to represent interactive decision-making of vehicles was considered. The approaches to modeling vehicle interactions based on the model proposed in this paper and this alternative model were discussed and compared in simulations. The framework proposed in this paper for modeling multi-vehicle interactions at uncontrolled intersections can be used as a simulation tool for calibration, validation and verification of autonomous driving systems \cite{li2018game_2,li2018game_3,li2019game}. In addition, it may also be used in high-level decision-making algorithms of autonomous vehicles \cite{li2018game,tian2018adaptive}, and to support intersection automation/autonomous intersection management \cite{chen2016cooperative}. Moreover, vehicle interactions in some other traffic scenarios, such as highway merging and driving in parking lots, may be modeled based on the proposed framework with modified road layouts and geometries. These are left as topics for our future research. \bibliographystyle{IEEEtran}
\section{Introduction} Let $p: X\to Y$ be a holomorphic map of relative dimension $n$ between two non-singular, compact K\"ahler manifolds. We denote by $\Delta\subset Y$ the union of codimension one components of the set of singular values of $p$, and we will use the same notation for the associated divisor. The reduced divisor corresponding to the support of the inverse image $p^{-1}(\Delta)$ will be denoted by $W$. In order to formulate our results, we will assume from the very beginning that $\Delta$ and $W$ have simple normal crossings. \medskip \noindent We denote by $\Omega^1_X\langle W\rangle$ and $\Omega^1_Y\langle \Delta\rangle$ the logarithmic cotangent bundles associated with $(X, W)$ and $(Y, \Delta)$, respectively (cf. \cite{Del} and the references therein). The pull-back of forms with log poles along $\Delta$ gives an injective map of sheaves \begin{equation}\label{higher0} p^\star\Omega^1_Y\langle \Delta\rangle\to \Omega^1_X\langle W\rangle. \end{equation} Let $\displaystyle \Omega^1_{X/Y}\langle W\rangle$ be the co-kernel of the map \eqref{higher0}. \noindent We consider a Hermitian line bundle $(L, h_L)$ on $X$. The metric $h_L$ is assumed to be smooth when restricted to the generic fibers of $p$. The curvature of $(L, h_L)$ will be denoted by $\Theta_L$. \smallskip \noindent In this article our primary goal is to explore the differential-geometric properties of the higher direct images \begin{equation}\label{higher1} {\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right) \end{equation} where $n\geq i\geq 1$, together with some applications. We will proceed in two steps. \medskip \noindent (A) \emph{Restriction to the set of regular values.} First we will analyze the properties of \eqref{higher1} when restricted to the complement of a closed analytic subset of $Y$ which we now describe. By a fundamental result of Grauert \cite{BaSt}, the sheaves \eqref{higher1} are coherent, thus locally free on a Zariski open subset of $Y$.
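For later reference, the definition of $\Omega^1_{X/Y}\langle W\rangle$ above can be packaged as a short exact sequence of sheaves; this is a mere restatement of \eqref{higher0} together with the projection onto its cokernel.

```latex
% Short exact sequence restating the definition of the relative
% logarithmic cotangent sheaf via \eqref{higher0} and its cokernel.
\begin{equation*}
0 \longrightarrow p^\star\Omega^1_Y\langle \Delta\rangle
  \longrightarrow \Omega^1_X\langle W\rangle
  \longrightarrow \Omega^1_{X/Y}\langle W\rangle
  \longrightarrow 0.
\end{equation*}
```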
The set of singular values of the map $p$ will be denoted by $\Sigma\subset Y$. Then the complement $\Sigma\setminus\Delta$ is a union of algebraic subsets of $Y$ of codimension at least two. For each $i= 1,\dots, n$ we consider the function \begin{equation}\label{higher50} t\to h^{i}(X_t, \Omega^{n-i}_{X_t}\otimes L_t), \end{equation} defined on $Y_0:= Y\setminus \Sigma$. These functions are upper semicontinuous, hence constant on a Zariski open subset of $Y$ (in the analytic topology) which will be denoted by ${\mathcal B}$. The set $p^{-1}({\mathcal B})\subset X$ will be denoted by ${\mathcal X}$, and we will use the same symbol $p:{\mathcal X}\to {\mathcal B}$ for the restriction to ${\mathcal X}$ of our initial map. It is a proper holomorphic submersion, and the restriction of \eqref{higher1} to ${\mathcal B}$ is equal to \begin{equation}\label{higher2} {\mathcal R}^{i}p_\star\big(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\big)|_{{\mathcal B}} = {\mathcal R}^{i}p_\star\big(\Omega^{n-i}_{{\mathcal X}/{\mathcal B}}\otimes L\big). \end{equation} We denote the vector bundle \eqref{higher2} by ${\mathcal H}^{i, n-i}$. Its fiber at $t$ is isomorphic to the Dolbeault cohomology space $\displaystyle H^{i}(X_t, \Omega^{n-i}_{X_t}\otimes L_t)$. \smallskip \noindent Let $\displaystyle [u]\in \Gamma(V, {\mathcal H}^{i, n-i})$ be a local holomorphic section of \eqref{higher2}, where $V\subset {\mathcal B}$ is a coordinate open subset and $u$ is a $\bar\partial$-closed $(0,i)$-form with values in the bundle $\displaystyle \Omega^{n-i}_{{\mathcal X}/{\mathcal B}}\otimes L|_{p^{-1}(V)}$. The restriction $u_t$ of $u$ to $X_t$ is a $\bar\partial$-closed $L_t$-valued $(i,n-i)$-form on $X_t$. We denote by $[u_t]$ its $\bar\partial$-cohomology class. Let $\omega$ be a smooth $(1,1)$-form on $X$, whose restriction to the fibers of $p$ is positive definite. Then one may use $\omega|_{X_t}$ and $h_L$ to define a Hermitian norm on ${\mathcal H}^{i,n-i}$.
More precisely, the norm of a class $[u_t]$ will be defined as the $L^2$-norm of its unique $\bar\partial$-harmonic representative, say $u_{t,h}$, i.e. \begin{equation}\label{higher3} \Vert [u]\Vert^2_t:= \int_{{\mathcal X}_t}|u_{t, h}|^2e^{-\varphi}\omega^n/{n!}. \end{equation} By a fundamental result of Kodaira-Spencer, the variation of $u_{t, h}$ with respect to $t$ is smooth. \smallskip Since the bundle $\Omega^{n-i}_{{\mathcal X}/{\mathcal B}}\otimes L$ is a quotient of $\Omega^{n-i}_{{\mathcal X}}\otimes L$, we can construct (cf. section 2) a smooth $(0,i)$-form $\widetilde u$ with values in $\Omega^{n-i}_{{\mathcal X}}\otimes L|_{p^{-1}(V)}$ which is $\bar\partial$-closed on the fibers of $p$ and whose restriction to each $X_t$ belongs to the cohomology class $[u_{t}]$. In what follows $\widetilde u$ will be called a \emph{representative} of $[u]$. \smallskip \noindent Summing up, ${\mathcal H}^{i, n-i}$ is a holomorphic Hermitian vector bundle. The sign of the corresponding curvature form is of fundamental importance in applications. Certainly, the curvature tensor is an intrinsic object, completely determined by the complex and the Hermitian structure of the bundle. Nevertheless, it can be expressed in several ways, depending on the choice of the representatives $\widetilde u$ mentioned above. An important question is therefore how to choose ``the best'' representative for the holomorphic sections of ${\mathcal H}^{i, n-i}$. \smallskip \noindent We first assume that the curvature form $\Theta_L$ of $(L, h_L)$ is semi-negative on $X$ and strictly negative on the fibers of $p:{\mathcal X}\to {\mathcal B}$. In this case, we can take $\omega:= -\Theta_L$. Another consequence of this assumption is the existence of a \emph{vertical representative} corresponding to any section $u$ (cf. section 5). The vertical representatives are used in the proof of our first main result, namely Theorem \ref{curvature}.
We show that the curvature of ${\mathcal H}^{i, n-i}$ evaluated in a direction $u$ can be written as a difference of two semi-positive forms on the base ${\mathcal B}$. Roughly speaking, the positive contribution in the expression of the curvature is given by the contraction (or cup-product) with the Kodaira-Spencer class. \smallskip \noindent Rather than stating Theorem \ref{curvature} here, we mention one of its consequences, which will be important in applications. Let ${\mathcal K}^i$ be the kernel of the map \begin{equation}\label{higher4} {\mathcal R}^{i}p_\star\big(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\big) \to {\mathcal R}^{i+1}p_\star\big(\Omega^{n-i-1}_{X/Y}\langle W\rangle\otimes L\big)\otimes \Omega^1_Y\langle \Delta\rangle \end{equation} defined by contraction with the Kodaira-Spencer class. By shrinking ${\mathcal B}$, we can assume that the restriction ${\mathcal K}^i|_{\mathcal B}$ is a sub-bundle of ${\mathcal R}^{i}p_\star\big(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\big)|_{\mathcal B}$. We have the following statement. \begin{thm}\label{main, 0} We assume that $\Theta_L\leq 0$ on $X$ and that $\Theta_L|_{{\mathcal X}_t}< 0$ for any $t\in {\mathcal B}$. Then the bundle $\displaystyle {\mathcal K}^i|_{\mathcal B}$ endowed with the metric induced from \eqref{higher3} is semi-negative in the sense of Griffiths, provided that we choose $\omega|_{{\mathcal X}_t}= -\Theta_L|_{{\mathcal X}_t}$. \end{thm} \medskip We next list a few other results related to Theorem \ref{curvature}. For example, assume that the canonical bundle $K_{{\mathcal X}_t}$ of ${\mathcal X}_t$ is ample (i.e. the family $p$ is canonically polarized). Then, by a crucial theorem of Aubin and Yau (\cite{Aubin}, \cite{Yau}), we can construct a metric $h_t= e^{-\psi_t}$ on $\displaystyle K_{{\mathcal X}_t}$ such that $\omega_t= \sqrt{-1}\partial\dbar\psi_t$ is K\"ahler-Einstein.
By a result of Schumacher, \cite{Schumacher}, the metric on the relative canonical bundle $\displaystyle K_{{\mathcal X}/{\mathcal B}}$ induced by the family $\left(e^{-\psi_t}\right)_{t\in {\mathcal B}}$ is semi-positively curved. We then consider the induced metric on $\displaystyle L:= -K_{{\mathcal X}/{\mathcal B}}$. Our formula in this case coincides with earlier results of Siu, \cite{Siu}, Schumacher, \cite{Schumacher} and To-Yeung, \cite{To-Yeung}. Following the arguments in \cite{Schumacher} and \cite{To-Yeung} (with some variations), we show that this leads to the construction of a metric of strictly negative sectional curvature on the base manifold $Y$. During the (long) preparation of this paper, the article \cite{Naumann} by Naumann was posted on arXiv. In that work, Theorem \ref{curvature} is also established. Naumann shows that the expression of the curvature of ${\mathcal H}^{i, n-i}$ can be derived by a method similar to Schumacher's, \cite{Schumacher}. Our approach here is very different and perhaps technically lighter. It can be seen as a generalization of the computations in \cite{Berndtsson1} and \cite{Berndtsson2} for the case of ${\mathcal H}^{n,0}$ (or rather, ${\mathcal H}^{0,n}$). \medskip \noindent In section 7 we adapt the previous arguments to the case when the curvature of the bundle $L$ is not strictly negative but instead identically zero on the fibers, and semi-negative on the total space. We then establish a variant of Theorem \ref{curvature} in this setting. This formula generalizes the classical formula of Griffiths for the case when $L$ is trivial. As it turns out, the only difference between our formula and Griffiths' is an additional term, coming from the curvature of $L$ on the total space. In previous work, Nannicini \cite{Nannicini} generalized Siu's computations from \cite{Siu} for ${\mathcal H}^{n-1,1}$ under related conditions.
Here we also mention that the general case, when the curvature of $L$ is semi-negative on the fibers, is treated in a somewhat different way by the third author in \cite{Wang-h}. \medskip \noindent A projective family $p:X\to Y$ is called \emph{Calabi-Yau} if $c_1(X_y)= 0$, which is equivalent to the fact that some multiple of the canonical bundle of the generic fibers is trivial. In this case $L=K_{{\mathcal X}/Y}$ is fiberwise flat, and we show that $L$ has a metric which is moreover semi-negative on the total space. We say that $p$ has \emph{maximal variation} if the Kodaira-Spencer map is injective. \smallskip \noindent The notion of \emph{Kobayashi hyperbolicity} describes in a precise manner whether a complex manifold contains ``large'' discs. In this setting we have the following statement, obtained as a corollary of our curvature formulas combined with a few ideas from \cite{To-Yeung}, \cite{Schumacher}. \begin{thm}\label{hypb1} Let $p:X\to Y$ be a canonically polarized or Calabi-Yau family. We assume that $p$ has maximal variation. Let $\omega_Y$ be a Hermitian metric on $Y$. Then there exists a constant $C> 0$ such that given a holomorphic map $\gamma: \mathbb{D}\to Y\setminus\Sigma$ we have $\displaystyle \Vert\gamma^\prime(0)\Vert_{\omega_Y}\leq C$. \end{thm} \noindent In Theorem \ref{hypb1} we denote by $\mathbb{D}\subset {\mathbb C}$ the unit disk. Our result states that the Kobayashi infinitesimal pseudo-metric of the complement $Y\setminus \Sigma$ is non-degenerate. \noindent We note that Theorem \ref{hypb1} (and a few other results we prove in this article) has also been announced by S.-K. Yeung and W. To in their joint project \cite{Yeung-Ober}. For questions related to Brody hyperbolicity we refer to the very recent preprint \cite{PoTaji} by M.~Popa, B.~Taji and L.~Wu (and the references therein).
\bigskip \noindent (B) \emph{Extension across the singularities.} The kernel \begin{equation}\label{higher5} {\mathcal K}^i\subset {\mathcal R}^{i}p_\star\big(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\big) \end{equation} of the map \eqref{higher4} is a coherent sheaf of ${\mathcal O}_Y$-modules whose sections are called \emph{quasi-horizontal} in \cite{Griff84} (p.~26). Unlike in the case $L= {\mathcal O}_X$, it is not clear whether ${\mathcal K}^i$ is torsion-free or not (cf. \cite{Kang}). Fortunately, this is not a problem for us. We define \begin{equation}\label{tquot} {\mathcal K}^i_f:= {\mathcal K}^i/T({\mathcal K}^i) \end{equation} where $T({\mathcal K}^i)\subset {\mathcal K}^i$ is the torsion subsheaf. It turns out that the metric induced on ${\mathcal K}^i_f|_{\mathcal B}$ is also semi-negatively curved in the sense of Griffiths (the quotient map ${\mathcal K}^i\to {\mathcal K}^i_f$ is an isometry). \medskip \noindent Our goal is to show that the metric defined on the bundles ${\mathcal K}^i_f|_{\mathcal B}$ extends as a semi-negatively curved singular Hermitian metric (in the sense of \cite{BP}, \cite{PT}) on the torsion-free sheaf ${\mathcal K}^i_f$, provided that the metric $h_L$ satisfies one of the two conditions below. \smallskip \noindent $\left({\mathcal H}_1\right)$ We have $\displaystyle \Theta_L\leq 0$ on $X$ and moreover $\displaystyle \Theta_L|_{X_y}= 0$ for each $y$ in the complement of some Zariski closed set. \smallskip \noindent $\left({\mathcal H}_2\right)$ We have $\displaystyle \Theta_L\leq 0$ on $X$ and moreover there exists a K\"ahler metric $\omega_Y$ on $Y$ such that we have $\displaystyle \Theta_L\wedge p^\star\omega_Y^m\leq -\varepsilon_0\omega\wedge p^\star\omega_Y^m$ on $X$.
\smallskip \noindent Thus the first condition requires $L$ to be semi-negative and trivial on the fibers, whereas in $\left({\mathcal H}_2\right)$ we assume that $L$ is uniformly strictly negative on the fibers, in the sense that we have \begin{equation}\label{mp0301'} \Theta_{h_L}(L)|_{X_y}\leq - \varepsilon_0\omega|_{X_y} \end{equation} for any regular value $y$ of the map $p$. \medskip \noindent In this context we have the following result. \begin{thm}\label{main, I} Let $p:X\to Y$ be an algebraic fiber space, and let $(L, h_L)$ be a Hermitian line bundle which satisfies one of the hypotheses $\left({\mathcal H}_i\right)$ above. We assume that the restriction of $h_L$ to the generic fiber of $p$ is non-singular. Then: \begin{enumerate} \item[{\rm (i)}] For each $i\geq 1$ the sheaf $\displaystyle {\mathcal K}^i_f$ admits a semi-negatively curved singular Hermitian metric. \smallskip \item[{\rm (ii)}] We assume that the curvature form of $L$ is smaller than $-\varepsilon_0p^\star\omega_Y$ on the $p$-inverse image of some open subset $\Omega\subset Y$. Then the metric on $\displaystyle {\mathcal K}^i_f$ is strongly negatively curved on $\Omega$ (and semi-negatively curved outside a subset of $Y$ of codimension greater than two). \end{enumerate} \end{thm} \noindent As a consequence of point (i), we infer e.g. that the dual sheaf $\displaystyle {\mathcal K}^{i\star}_f$ is weakly semi-positive in the sense of Viehweg, cf. \cite{Vbook}, \cite{Kang}, \cite{BruneB}. We note that similar statements appear in various contexts in algebraic geometry, cf. \cite{BruneB}, \cite{GT}, \cite{Fujino}, \cite{KK08}, \cite{Kol87}, \cite{MT07}, \cite{MT08}, \cite{PY}, \cite{PoSch}, \cite{VZ02}, \cite{VZ03}, \cite{Kang} among many others. The fundamental results of Cattani-Kaplan-Schmid \cite{CKS} are an indispensable tool in the arguments of most of the articles quoted above.
Unfortunately, the analogue of the period mapping is not defined in our context (because of the twisting with the bundle $L$), so we do not have these techniques at our disposal. \smallskip \noindent Part of the motivation for the analysis of the curvature properties of the kernels ${\mathcal K}^i$ is the existence of a non-trivial map $${\mathcal K}^{i\star}_f\to \Sym^i\Omega^1_Y\langle \Delta\rangle$$ for some $i$ such that $\displaystyle {\mathcal K}^{i\star}_f\neq 0$, provided that $L^\star:= \det \Omega^1_{X/Y}\langle W\rangle$ and that $p$ is either Calabi-Yau or canonically polarized with maximal variation. This is a well-known and very important fact. We refer to \cite{VZ03}, \cite{PoSch} and the references therein for related results. \smallskip \noindent The point is that here we construct the so-called \emph{Viehweg-Zuo sheaf} ${\mathcal K}^{i\star}$ in a very direct and explicit manner, without the use of ramified covers as in the original approach. Indeed, in the case of a canonically polarized or Calabi-Yau family $p$ with maximal variation, we show in section 9 that the bundle $\det \Omega^1_{X/Y}\langle W\rangle$ has a property which is very similar to the hypotheses $\left({\mathcal H}_i\right)$ above. Hence Theorem \ref{main, I} applies, and we obtain a new proof of the existence of the Viehweg-Zuo sheaf, cf. \cite{VZ03}. Our arguments also work in the case of Calabi-Yau families, where we obtain a similar result. We remark that the proof is much simpler than in the canonically polarized case. \smallskip On the other hand, in the article \cite{PoSch}, Popa-Schnell obtain a vast generalization of the result of Viehweg-Zuo \cite{VZ03} in the case of a family $p$ whose generic fiber is of general type. To this end, they use deep results from the theory of Hodge modules. It would be very interesting to see if our explicit considerations here could provide an alternative argument. \medskip \noindent This paper is organized as follows.
\tableofcontents \subsection*{Acknowledgments} We would like to thank St\'ephane Druel, Mihnea Popa, Christian Schnell and Valentino Tosatti for numerous useful discussions about the topics of this paper. M.P. was partially supported by NSF Grant DMS-1707661 and Marie S. Curie FCFP. \section{Preliminaries} As in the introduction, we let $p:{\mathcal X}\to {\mathcal B}$ be a smooth proper fibration of relative dimension $n$ over an $m$-dimensional base ${\mathcal B}$. We let $L\to {\mathcal X}$ be a holomorphic line bundle, equipped with a smooth metric $\phi$. Mostly we will assume that $\phi$ has semi-negative curvature, and that $\Omega:=-i\partial\dbar\phi$ is strictly positive on each fiber $X_t$. Sometimes we let $\omega_t$ denote the restriction of $\Omega$ to the fiber $X_t$, sometimes we just write $\omega$. We consider maps $t\to u_t$ defined for $t$ in ${\mathcal B}$, where $u_t$ is an $n$-form on $X_t$ for each $t$. We call such a map a {\it section} of the bundle of $n$-forms on the fibers. We will need to define what it means for such a map to be smooth. If $t=(t_1,...,t_m)$ are local coordinates, we can, via the map $p$, consider the $t_j$ to be functions on ${\mathcal X}$, and similarly we have the forms $dt_j$, $d\bar t_j$, $dt=dt_1\wedge...\wedge dt_m$ and $d\bar t$ on ${\mathcal X}$. That $p$ is a smooth fibration means that its differential is surjective everywhere, so locally near a point $x$ in ${\mathcal X}$ we can complete the $t_j$'s with functions $z_1, ...,z_n$ to get local coordinates on ${\mathcal X}$. Then, for $t$ fixed, the $z_j$ are local coordinates on $X_t$, and we can write $$ u_t= \sum_{|I|=p, |J|=q} c_{I,J}(t,z) dz_{I}\wedge d\bar z_J. $$ We say that $u_t$ is smooth if the coefficients $c_{I,J}$ are smooth. Since this amounts to saying that the form $$ u_t\wedge dt\wedge d\bar t $$ on ${\mathcal X}$ is smooth, it does not depend on the choice of coordinates.
\begin{lma} A section $u_t$ is smooth if and only if there is a smooth $n$-form $\tilde u$ on ${\mathcal X}$ such that the restriction of $\tilde u$ to each $X_t$ equals $u_t$. \end{lma} \begin{proof} Locally, this is clear from the discussion above: we may take $\tilde u=\sum c_{I,J} dz_I\wedge d\bar z_J$. The global case follows via a partition of unity. \end{proof} These definitions extend to forms with values in $L$. We will call a form $\tilde u$ as in the lemma a {\it representative} of the section $u_t$. \begin{lma} Two forms, $\tilde u$ and $\tilde u'$, on ${\mathcal X}$ have the same restriction to the fiber $X_t$, i.e. represent the same section $u_t$, if and only if $$ (\tilde u -\tilde u')\wedge dt\wedge d\bar t=0 $$ on $X_t$, where $dt=dt_1\wedge...\wedge dt_m$ for some choice of local coordinates on the base. This holds if and only if $$ \tilde u'=\tilde u +\sum_1^m a_j\wedge dt_j+\sum_1^m b_j\wedge d\bar t_j $$ for some $(n-1)$-forms $a_j$ and $b_j$ on ${\mathcal X}$. \end{lma} \begin{proof} Both statements follow by choosing local coordinates $(t,z)$ as above. \end{proof} Notice that the forms $a_j$ and $b_j$ are not uniquely determined, but their restrictions to the fibers are uniquely determined. This follows since, if $$ \sum a_j\wedge dt_j+\sum b_j\wedge d\bar t_j=0, $$ then wedging with e.g. $\widehat{dt_j}\wedge d\bar t$ we get $$ a_j\wedge dt\wedge d\bar t=0, $$ which means that $a_j$ vanishes on the fibers by the lemma. We will mainly be interested in the case when the $u_t$ are $\bar\partial$-closed of a fixed bidegree $(p,q)$, and in their cohomology classes $[u_t]$ in $H^{p,q}(L_t)$. By Hodge theory, each such class has a unique harmonic representative for the $\bar\partial$-Laplacian $\Box''$ defined by the fiber metric $\phi$ (restricted to $X_t$) and the K\"ahler form $\omega_t$. Since $\Box''=\bar\partial\dbar^*+\bar\partial^*\bar\partial$ and $X_t$ is compact, a form $u$ is harmonic if and only if $\bar\partial u=0$ and $\bar\partial^* u=0$.
In the next lemma we give an alternative characterization of harmonic forms that will be useful in our computations. \begin{lma} Let $X$ be an $n$-dimensional compact complex manifold and $L\to X$ a holomorphic line bundle with a negatively curved metric $\phi$, i.e. $\omega:=-i\partial\dbar\phi>0$ on $X$. Let $u$ be a $(p,q)$-form on $X$ with values in $L$, where $p+q=n$. Then $u$ is harmonic for the K\"ahler metric $\omega$ on $X$ and the metric $\phi$ on $L$ if and only if $\bar\partial u=0$ and $\partial^{\phi} u=0$, where $\partial^{\phi}=e^\phi\partial e^{-\phi}$ is the $(1,0)$-part of the connection on $L$ induced by the metric $\phi$. Moreover, a harmonic $(p,q)$-form with $p+q=n$ is primitive, i.e. satisfies $\omega\wedge u=0$. \end{lma} \begin{proof} Let $\Box'=\partial^{\phi}(\partial^{\phi})^*+(\partial^{\phi})^*\partial^{\phi}$ be the $(1,0)$-Laplacian. By the Kodaira-Nakano formula $$ \Box''=\Box'+[i\partial\dbar\phi,\Lambda_\omega]. $$ Since $i\partial\dbar\phi=-\omega$ and the commutator $[\omega,\Lambda_\omega]=0$ on $n$-forms, we see that $\Box'=\Box''$ on $n$-forms. Hence, $\partial^{\phi} u=0$ if $u$ is harmonic, which proves one direction in the lemma. We also have, when $\bar\partial u=\partial^{\phi} u=0$, that $\bar\partial\partial^{\phi} u +\partial^{\phi}\bar\partial u=0$, which gives that $\omega\wedge u=0$, so $u$ is primitive. Then $* u=c u$, where $c\in\mathbb C$ with $|c|=1$, so when $\partial^{\phi} u=0$, $\bar\partial^* u=- *\partial^{\phi} * u=0$. Hence $\Box'' u=0$ if $\bar\partial u=0$ and $\partial^{\phi} u=0$, which proves the converse direction of the lemma. \end{proof} A similar result of course holds (with the same proof) in case $L$ is positive and we use $\omega=i\partial\dbar\phi$ as our K\"ahler metric. At this point we also insert an estimate that will be useful later when we simplify the curvature formula. It generalizes a formula in \cite{Berndtsson2} (see section 4 in that paper).
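The degree bookkeeping used in the proof above, and again in the next proposition (where $\Box'=\Box''+1$ on forms of total degree $n+1$), follows from the standard Lefschetz commutator identity; we recall it for the reader's convenience (standard material, cf. \cite{Griffiths-Harris}).

```latex
% With L_\omega = \omega\wedge\cdot, the Lefschetz identity reads
% [L_\omega, \Lambda_\omega] = (p+q-n)\,\mathrm{Id} on (p,q)-forms.
% Since i\partial\dbar\phi = -\omega here, Kodaira-Nakano gives
\begin{equation*}
\Box'' \;=\; \Box' + [\,i\partial\dbar\phi,\Lambda_\omega\,]
       \;=\; \Box' - (p+q-n)\,\mathrm{Id},
\end{equation*}
% so \Box''=\Box' in total degree n, while in total degree n+1 we get
% \Box''=\Box'-\mathrm{Id}, i.e. \Box'=\Box''+1.
```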
\begin{prop} With notation and assumptions as in Lemma 2.3, let $g$ be an $L$-valued form of bidegree $(p,q)$ and $f$ an $L$-valued form of bidegree $(p+1,q-1)$, where $p+q=n$. Assume $$ \partial^{\phi} g=\bar\partial f $$ and that $f\perp Range(\bar\partial)$ and $g\perp Ker(\partial^{\phi})$. Then $$ \|f\|^2-\|g\|^2=\langle (\Box''+1)^{-1}f,f\rangle. $$ Moreover, $g\in R(\Box''-1)$ and $$ \|f\|^2-\|g\|^2=\langle (\Box''-1)^{-1}g,g\rangle. $$ \end{prop} \begin{proof} Since $g$ is orthogonal to the space of harmonic forms, $g$ lies in the domain of $(\Box'')^{-1}$. Since the spectrum of $\Box''$ is discrete, $g$ also lies in the domain of $(\Box''-\lambda)^{-1}$ for $\lambda$ sufficiently small and positive. We have $$ \lambda\langle (\Box''-\lambda)^{-1} g,g\rangle +\|g\|^2= \langle \Box''(\Box''-\lambda)^{-1} g,g\rangle= $$ $$ \langle \Box'(\Box'-\lambda)^{-1} g,g\rangle = \langle (\Box'-\lambda)^{-1} \partial^\phi g,\partial^\phi g\rangle=\langle (\Box'-\lambda)^{-1}\bar\partial f,\bar\partial f\rangle. $$ Here we have used in the first equality on the second line that $\Box'=\partial^\phi(\partial^\phi)^*+(\partial^\phi)^*\partial^\phi$ and $g$ is orthogonal to $R(\partial^\phi)$. Since $\Box'=\Box''+1$ on forms of degree $(n+1)$, this equals $$ \langle\bar\partial^*\bar\partial(\Box''+1-\lambda)^{-1} f,f\rangle=\langle\Box''(\Box''+1-\lambda)^{-1} f,f\rangle, $$ where the last equality follows since $f$ is orthogonal to $R(\bar\partial)$. Clearly this expression stays bounded as $\lambda$ increases to 1, so the expansion of $g$ in eigenforms of $\Box''$ cannot have any terms corresponding to eigenvalues less than or equal to 1. Hence $g$ lies in the domain of $(\Box''-\lambda)^{-1}$ for all $0\leq\lambda\leq 1$ and $$ \lambda \langle (\Box''-\lambda)^{-1} g,g\rangle +\|g\|^2= \langle \Box''(\Box''+1-\lambda)^{-1} f,f\rangle.
$$ Putting $\lambda=1$ we get $$ \|g\|^2+\langle (\Box''-1)^{-1}g,g\rangle =\|f\|^2, $$ and putting $\lambda=0$ we get $$ \|g\|^2= \langle \Box''(\Box''+1)^{-1} f,f\rangle=\|f\|^2 - \langle (\Box''+1)^{-1} f,f\rangle. $$ \end{proof} Note that in the case of positive curvature, when $i\partial\dbar\phi=\omega$, we get with the same proof that $$ \|g\|^2-\|f\|^2=\langle (\Box''+1)^{-1}g,g\rangle, $$ when $\bar\partial f=\partial^\phi g$, $g$ is orthogonal to the range of $\partial^\phi$ and $f$ is orthogonal to the kernel of $\bar\partial$. For future use in the case of Calabi-Yau families we also record the following counterpart of the previous proposition, which is proved in a similar (but simpler) way. \begin{prop} Let $(X,\omega)$ be a compact K\"ahler manifold and $(L,e^{-\phi})$ a hermitian holomorphic line bundle over $X$ with $i\partial\dbar\phi=0$. Let $g$ be an $L$-valued form of bidegree $(p,q)$ and $f$ an $L$-valued form of bidegree $(p+1,q-1)$, where $p+q=n$. Assume $f\perp Range(\bar\partial)$ and $g\perp Ker(\partial^\phi)$ and that $$ \partial^\phi g=\bar\partial f. $$ Then, $$ \|f\|^2=\|g\|^2. $$ \end{prop} The next lemma gives in particular a simple way to compute norms of harmonic forms. \begin{lma} Let $c_n=(i)^{n^2}$. If $u$ is a primitive $(p,q)$-form, with $p+q=n$, on an $n$-dimensional K\"ahler manifold $(X,\omega)$, $$ (-1)^q c_n u\wedge\bar u=|u|^2_\omega \omega^n/n!. $$ \end{lma} The proof can be found in \cite{Griffiths-Harris}. This means that the $L^2$-norms of primitive forms $u_t$ over a fiber $X_t$ can be expressed in terms of the fiberwise integrals of $u_t\wedge\bar u_t$. When we differentiate such integrals with respect to $t$ it is convenient to express them as pushforwards. \begin{lma} If $u_t$ is a smooth section of primitive $(p,q)$-forms on $X_t$, where $p+q=n$, we have $$ \int_{X_t}|u_t|^2_{\omega_t} \omega_t^n/n! \,e^{-\phi}=(-1)^q c_n p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi}), $$ if $\tilde u$ is any representative of $u_t$.
\end{lma} We will also need the following lemma, which follows from the fundamental theorem of Kodaira-Spencer in \cite{KSp}. \begin{lma}\label{KodSpe} Let ${\mathcal X}\to {\mathcal B}$ be a smooth proper fibration, and $L\to {\mathcal X}$ a holomorphic line bundle. Assume that the dimension of $H^{p,q}(L_t)$ is constant. Let $t\to u_t$ be a smooth section of $(p,q)$-forms, $\bar\partial$-closed on $X_t$, and let $u_{t,h}$ be the harmonic representatives of the classes $[u_t]$ in $H^{p,q}(L_t)$. Then $t\to u_{t, h}$ is smooth. \end{lma} In the computations below we will frequently use the {\it interior multiplication} of a form with a vector (field), defined as the adjoint of exterior multiplication under the natural duality between forms and the exterior algebra of tangent vectors. If $V$ is a vector and $\theta$ is a form, it is denoted as $V\rfloor\theta$. Interior multiplication is an antiderivation, so $V\rfloor (a\wedge b)= (V\rfloor a)\wedge b+ (-1)^{deg(a)}a\wedge (V\rfloor b)$. More generally, we can take the interior product of a wedge product of vectors with a form, and it holds that $$ (V_1\wedge...V_k)\rfloor \theta=V_1\rfloor...(V_k\rfloor \theta). $$ \section{The bundles ${\mathcal H}^{n-q,q}$} We will now consider the vector bundles ${\mathcal H}^{n-q,q}$ over the base ${\mathcal B}$ whose fibers are $${\mathcal H}^{n-q,q}_t:= H^{n-q,q}(L_t).$$ A smooth section of this bundle can be written as $t\to [u_t]$ such that $t\to u_{t,h}$ (see Lemma \ref{KodSpe}) is a smooth section of the bundle of $(n-q,q)$-forms on $X_t$ as in Lemma 2.1 (when we write $[u_t]$ it always means that $u_t$ is $\bar\partial$-closed on $X_t$). We will first give a way to express the $(0,1)$-part of the connection on ${\mathcal H}^{n-q,q}$. For this we let $\tilde u$ be any representative of the section $[u_t]$, i.e.\ an $(n-q,q)$-form on ${\mathcal X}$ whose restriction to each $X_t$ is cohomologous to $u_t$.
Since $\bar\partial\tilde u=0$ on fibers, Lemma 2.2 shows that $$ \bar\partial\tilde u=\sum_1^m dt_j\wedge \eta_j +\sum_1^m d\bar t_j\wedge \nu_j , $$ for some forms $\eta_j$ and $\nu_j$. Taking $\bar\partial$ of this equation and wedging with $\widehat{d\bar t_j}\wedge dt$ we see that $\bar\partial\nu_j=0$ on fibers. We can then define \begin{equation}\label{high0} D''[u_t]=\sum [\nu_j|_{X_t}]\otimes d\bar t_j. \end{equation} For this to make sense we have to check that it does not depend on the various choices made. First, if $\tilde u'$ is smooth and restricts to each $X_t$ to a form $u_t'$ cohomologous to $u_t$, we can write $u_t'=u_t +\bar\partial v_t$, where $v_t$ can be taken to depend smoothly on $t$. If $\tilde v$ is a representative of $v_t$, then $\tilde u'-\bar\partial \tilde v$ is a representative of $u_t$. We therefore only have to check that the definition does not depend on the choice of representatives in the sense of Lemma 2.1. But if we change a representative $\tilde u$ by adding $\sum a_j\wedge dt_j+\sum b_j\wedge d\bar t_j$, $\nu_j$ changes only by the $\bar\partial$-exact term $\bar\partial b_j$ so the cohomology class $[\nu_j|_{X_t}]$ does not change. Thus, the definition of $D''[u_t]$ does not depend on the choice of representative of the section $u_t$ nor on the choice of representative in the cohomology class. \medskip \begin{prop} The operator $D''$ defines an integrable complex structure on ${\mathcal H}^{n-q,q}$. \end{prop} \begin{proof} Let $[u_t]$ be a smooth section and $\tilde u$ a representative of $[u_t]$. Then $$ D''[u_t]=\sum [\nu_j]\otimes d\bar t_j, $$ where $$ \bar\partial\tilde u=\sum dt_j\wedge \eta_j +\sum d\bar t_j\wedge \nu_j. $$ Since $\bar\partial\nu_j=0$ on fibers we get $$ \bar\partial\nu_j=\sum dt_k\wedge\eta_{j k}+\sum d\bar t_k\wedge \nu_{j k}, $$ so $$ D''[\nu_j]=\sum[\nu_{j k}]\otimes d\bar t_k.
$$ Using that $\bar\partial^2\tilde u=0$ we get that $ \nu_{j k}=\nu_{k j}$ on fibers, which gives $(D'')^2[u_t]=0$. \end{proof} Our next goal is to show that the holomorphic structure introduced above on ${\mathcal H}^{n-q,q}$ coincides with the usual complex structure of the higher direct image \begin{equation}\label{high1} {\mathcal R}^qp_\star(\Omega^{n-q}_{X/Y}\otimes L)|_{{\mathcal B}}. \end{equation} By definition, the \emph{holomorphic} sections of the vector bundle \eqref{high1} on a coordinate open subset $V\subset {\mathcal B}$ are elements of \begin{equation}\label{high2} H^{q}\left(p^{-1}(V), \Omega^{n-q}_{{\mathcal X}/{\mathcal B}}\otimes L|_{p^{-1}(V)}\right). \end{equation} \smallskip \noindent We recall (cf. introduction) that we can choose a maximal open subset ${\mathcal B}\subset Y$ of the base of the map $p:X\to Y$ such that the restriction sheaf \eqref{high1} is a holomorphic vector bundle whose fiber at $t\in {\mathcal B}$ is the cohomology group $\displaystyle H^{q}(X_t, \Omega^{n-q}_{X_t}\otimes L)$ (we refer to \cite{BaSt}, Theorem 4.12, (ii) for the result establishing the existence of ${\mathcal B}$). Let $(u_j)$ be a set of $\bar\partial$-closed $(0, q)$--forms with values in $\Omega^{n-q}_{{\mathcal X}/{\mathcal B}}\otimes L|_{p^{-1}(V)}$ such that their corresponding Dolbeault cohomology classes give a basis for \eqref{high2}. Then for each $t\in V$ the restrictions $\displaystyle(u_j|_{X_t})$ induce a basis for $\displaystyle H^{q}(X_t, \Omega^{n-q}_{X_t}\otimes L)$. In particular we infer that the space of smooth sections of \eqref{high1} coincides with the space of smooth sections of ${\mathcal H}^{n-q,q}$ (as defined at the beginning of the current section), since we can express $[u_t]$ as a linear combination of $\displaystyle(u_j|_{X_t})$ for $t\in V$. \medskip \noindent Let $\displaystyle [u]\in {\mathcal C}^\infty(V, {\mathcal H}^{n-q,q})$ be a smooth section of the bundle ${\mathcal H}^{n-q,q}$.
We then have: \begin{lma} $D^{\prime\prime}([u])= 0$ on $V$ if and only if $[u]$ is holomorphic as a section of the $q^{\rm th}$ direct image bundle. \end{lma} \begin{proof} We first show that we have \begin{equation}\label{high3} D^{\prime\prime}([u])=0 \end{equation} if $[u]$ is a holomorphic section of \eqref{high1}. This can be seen as follows. By a partition of unity we can construct a smooth $(n-q, q)$-form $\widetilde u$ with values in $L$ whose image by the quotient map \begin{equation}\label{high4} \Omega^{n-q}_{{\mathcal X}}\otimes L|_{p^{-1}(V)}\to \Omega^{n-q}_{{\mathcal X}/{\mathcal B}}\otimes L|_{p^{-1}(V)} \end{equation} is equal to $u$. This implies that $\widetilde u$ is a representative of $[u]$, and moreover we have \begin{equation}\label{high5} \widetilde u\wedge dt= u\wedge dt. \end{equation} Notice that in \eqref{high5} the expression $u\wedge dt$ is a well-defined $(n-q+m, q)$-form with values in $L$. We now apply $\bar\partial$ to \eqref{high5}, which gives \begin{equation}\label{high6} \bar\partial \widetilde u\wedge dt= 0 \end{equation} since $\bar\partial u\wedge dt$ equals zero. Thus the forms $\nu_j$ defined by the representative $\widetilde u$ in fact vanish identically, so $D''[u]=0$. \smallskip \noindent Let now $[u]$ be a smooth section of ${\mathcal H}^{n-q,q}$ such that \eqref{high3} holds at each point of $V$. Fix a holomorphic frame $[u_j]$ of ${\mathcal R}^qp_\star(\Omega^{n-q}_{X/Y}\otimes L)|_{{\mathcal B}}$. We have already shown that the $[u_j]$ satisfy $D''[u_j]=0$, so they form a holomorphic frame for the bundle ${\mathcal H}^{n-q,q}$ as well. Then \begin{equation}\label{high20} [u]= \sum_j f_j[u_j] \end{equation} for some smooth functions $f_j$ defined on $V$. We have \begin{equation}\label{high21} 0=D^{\prime\prime}([u])= \sum_j [u_j]\otimes\bar\partial f_j, \end{equation} so the $f_j$ are holomorphic. Then clearly $[u]$ is holomorphic as a section of $ {\mathcal R}^qp_\star(\Omega^{n-q}_{X/Y}\otimes L)|_{{\mathcal B}}$ as well.
\end{proof} \medskip \noindent We next equip ${\mathcal H}^{p,q}$ with the hermitian metric $$ \|[u_t]\|_t^2=\int_{X_t} |u_{t,h}|^2_\omega e^{-\phi} \omega^n/n! $$ (where we have written $\omega$ instead of $\omega_t$), where $u_{t,h}$ is the harmonic representative of $[u_t]$. By Lemmas 2.3 and 2.5, this equals $$ (-1)^q c_n\int_{X_t} u_{t,h}\wedge\bar u_{t,h}e^{-\phi}, $$ which by Lemma 2.6 can also be written as \begin{equation} \|[u_t]\|_t^2=(-1)^q c_n p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi}), \end{equation} for any choice of representative of the smooth section $u_{t,h}$. Let now $\tilde u$ be any representative of a smooth section $u_t$ such that each $u_t$ is harmonic on $X_t$. By Lemma 2.3, this means that $\bar\partial u_t=0$ on $X_t$ and $\partial^{\phi} u_t=0$ on $X_t$. We then have that $$ \partial^{\phi} \tilde u=\sum dt_j \wedge \mu_j +\sum d\bar t_j\wedge \xi_j, $$ by Lemma 2.2, since the left hand side vanishes on fibers. We thus have two sets of equations for $\tilde u$, \begin{equation} \bar\partial\tilde u=\sum_1^m dt_j\wedge \eta_j +\sum_1^m d\bar t_j\wedge \nu_j , \end{equation} and \begin{equation} \partial^{\phi} \tilde u=\sum dt_j \wedge \mu_j +\sum d\bar t_j\wedge \xi_j. \end{equation} Taking $\bar\partial$ of the first equation and wedging with appropriate forms on the base, we see that $\eta_j$ and $\nu_j$ are $\bar\partial$-closed on fibers, and taking $\partial^{\phi}$ of the second we see that $\mu_j$ and $\xi_j$ are $\partial^{\phi}$-closed. We stress that the forms $\eta_j$ etc., and even their restrictions to fibers, depend on the choice of representative $\tilde u$, but the cohomology classes they define do not.
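\medskip

To spell out one instance of the wedging argument (with the sign convention $\bar\partial(dt_j\wedge\eta_j)=-dt_j\wedge\bar\partial\eta_j$): applying $\bar\partial$ to the first of the two equations above gives
$$
0=\bar\partial^2\tilde u=-\sum_1^m dt_j\wedge\bar\partial\eta_j-\sum_1^m d\bar t_j\wedge\bar\partial\nu_j,
$$
and wedging with $\widehat{dt_k}\wedge d\bar t$ kills all terms except the one containing $dt_k$, so $\bar\partial\eta_k\wedge dt\wedge d\bar t=0$. By Lemma 2.2 this means precisely that $\bar\partial\eta_k=0$ on each fiber; the forms $\nu_j$, $\mu_j$ and $\xi_j$ are treated in the same way.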
We now polarize (3.1) and use it to compute $$ \bar\partial\langle u_t,v_t\rangle_t=(-1)^qc_n \left(\sum p_*(d\bar t_j\wedge\nu_j \wedge\overline{\tilde v} e^{-\phi}) +\sum (-1)^n p_*(\tilde u\wedge\overline{dt_j\wedge \mu_j} e^{-\phi}) \right ) $$ (the $\eta_j$ and the $\xi_j$ give no contribution for bidegree reasons). From this we see that $\sum [\mu_{j}]\otimes dt_j=D'[u_t]$, so we now also have an expression for the $(1,0)$-part of the connection on $\mathcal H^{p,q}$ in terms of representatives. To compute the curvature of the connection $D=D'+D''$ we will compute $i\partial\dbar\|u_t\|_t^2$ and use the classical formula $$ -\langle \Theta^E_{W,\bar W} u_t, u_t\rangle_t +\|D'_W u_t\|^2_t=\bar W \rfloor (W \rfloor \partial\dbar \|u_t\|^2_t), $$ valid for any holomorphic section of a hermitian vector bundle $E$ and any $(1,0)$ vector in the base, where \begin{equation}\label{eq:theta-E} \Theta^E_{W,\bar W}:= \bar W \rfloor (W \rfloor \Theta^E), \ D'_W:= W\rfloor D', \end{equation} and the Chern curvature $\Theta^E$ denotes the square of the Chern connection of the hermitian vector bundle $E$ (in our case $E=\mathcal H^{p,q}$). Here it will be convenient to choose our representatives in a special way, so we first discuss this issue in the next sections. \section{The horizontal lift.} Recall that we assume that $\Omega=-i\partial\dbar\phi$ is strictly positive on fibers. In this situation we can, following Schumacher, \cite{Schumacher}, define the {\it horizontal lift} of a vector field on the base. This represents a variation of the notion of {\it harmonic lift}, introduced by Siu in \cite{Siu}. \begin{prop} Let $W$ be a $(1,0)$ vector field on the base $B$. If $\Omega>0$ on fibers, there is a unique vector field $V=V_W$ on ${\mathcal X}$ such that \begin{equation} dp(V)=W \end{equation} and \begin{equation} \Omega(V,\bar U)=0, \end{equation} for any field $U$ on ${\mathcal X}$ that is vertical, i.e.\ satisfies $dp(U)=0$.
$V$ is called the horizontal lift of $W$. \end{prop} \begin{proof} We begin with uniqueness. Assume $V$ and $V'$ both satisfy (4.1) and (4.2). Then $V-V'$ is vertical, so $$ \Omega(V-V',\overline{V-V'})=\Omega(V,\overline{V-V'})- \Omega(V',\overline{V-V'})=0. $$ Since we have assumed that $\Omega$ is strictly positive on fibers, $V-V'=0$. For the existence we note that the proposition is a purely pointwise, linear algebra statement. Given an arbitrary lift $V'$ at a point $x$ in ${\mathcal X}$, a general lift can be written $V=V'+U$, where $U$ is vertical. We therefore have $n$ equations for $n$ unknowns, so uniqueness implies existence. \end{proof} Expressed differently, the condition (4.2) means that $$ V\rfloor \Omega(\bar U)=0 $$ for all vertical $U$, so \begin{equation} V_W\rfloor\Omega =\sum b_j d\bar t_j=:b, \end{equation} for some functions $b_j$ if $t_j$ are local coordinates on the base. Clearly $b$ depends linearly on $W$, so if $W=\sum W_j\partial/\partial t_j$, $$ V_W\rfloor\Omega=i\sum c_{j,k}(\Omega) W_j d\bar t_k= V_W\rfloor C(\Omega), $$ where $$ C(\Omega)=i\sum c_{j,k}(\Omega) dt_j\wedge d\bar t_k. $$ (Here we have also used that $V_W\rfloor dt_j=W_j$.) We say that a $(1,0)$ vector (field) on ${\mathcal X}$ is horizontal if it is the horizontal lift of some vector (field) on the base. The dimension of the space of horizontal vectors at a point is the dimension of the base, and the intersection between the horizontal vectors and the vertical vectors is just the zero vector. Hence any vector at a point in ${\mathcal X}$ has a unique decomposition as a sum of a vertical and a horizontal vector. Similarly, a $(0,1)$ vector (field) $V$ is horizontal respectively vertical if $\bar V$ is horizontal or vertical. We next introduce the corresponding notions for forms. \begin{df} A form $\theta$ is {\it horizontal} if $U\rfloor \theta=0$ for any vertical $U$, and $\theta$ is {\it vertical} if $V\rfloor \theta=0$ for any horizontal vector (field) $V$.
\end{df} To make this a little bit more concrete, let $V_j$ be the horizontal lift of $\partial/\partial t_j$, where $t_j$ are local coordinates on the base for $j=1, ...m$. Let $W_k$ for $k=1,...n$ be a system of vertical $(1,0)$-fields such that $V_j$ and $W_k$ together form a basis for the space of $(1,0)$ vectors near a given point $x$ in ${\mathcal X}$. Then we may find local coordinates $(s,z)$ near $x$ such that $W_k=\partial/\partial z_k$ for $k=1,...n$ and $V_j=\partial/\partial s_j$ for $j=1,...m$ at $x$. (Then, still at $x$, $ds_j(V_k)=\delta_{j k}$ and $ds_j(W_k)=0$, from which it follows that $ds_j=p^*(dt_j)(=dt_j)$.) Then $V_j\rfloor dz_k=0$ so all $dz_k$ are vertical forms, and similarly $W_k\rfloor ds_j=0$, so $dt_j=ds_j$ are horizontal (which is easy to see directly). Therefore the horizontal forms are of the type $$ \sum a_{J,K} dt_J\wedge d\bar t_K, $$ and vertical forms are of the type $$ \sum b_{J,K} dz_J\wedge d\bar z_K. $$ In this terminology, $C(\Omega)$ is horizontal, and the relation $V_W\rfloor\Omega=V_W\rfloor C(\Omega)$ for all horizontal $W$ implies that $\Omega=C(\Omega) + \Omega_v$, where $\Omega_v$ is vertical. Next, let $V_1, V_2, ...$ be the horizontal lifts of $\partial/\partial t_1, \partial/\partial t_2,...$, where $t_j$ are local coordinates on the base. Let ${\mathcal V}=V_m\wedge... V_1$ and put for an arbitrary form $\theta$ on ${\mathcal X}$, \begin{equation} P_v(\theta)=(\bar{\mathcal V}\wedge {\mathcal V})\rfloor(dt\wedge d\bar t\wedge \theta). \end{equation} Clearly, $P_v(\theta)$ is always vertical and does not depend on the choice of local coordinates. Moreover, if $\theta$ is already vertical, then $P_v(\theta)=\theta$, so $P_v$ is a projection from the space of all forms on ${\mathcal X}$ to the space of vertical forms. \begin{prop} If $\theta$ is an arbitrary form on ${\mathcal X}$, $P_v(\theta)-\theta$ vanishes on fibers. If a vertical form vanishes on fibers it vanishes identically.
\end{prop} \begin{proof} The first statement means that $$ P_v(\theta)\wedge dt\wedge d\bar t=\theta\wedge dt\wedge d\bar t. $$ This follows from the definition of $P_v$ if we use that contraction is an antiderivation and $ (\bar{\mathcal V}\wedge{\mathcal V}) \rfloor (dt\wedge d\bar t)=1$. For the second statement we use that if $\theta$ is vertical, $P_v(\theta)=\theta$. On the other hand, $P_v(\theta)=0$ if $\theta$ vanishes on fibers, since $dt\wedge d\bar t\wedge \theta=0$ then. \end{proof} Finally we note that since $C(\Omega)^{m+1}=0$ and $\Omega_v^{n+1}=0$, $$ \Omega^{n+m}/(n+m)!=C(\Omega)^m/m!\wedge \Omega_v^n/n!=C(\Omega)^m/m!\wedge\Omega^n/n!. $$ Thus, if $m=1$, and we write $C(\Omega)=c(\Omega)idt\wedge d\bar t$ we have: \begin{prop} If $m=1$, $$ c(\Omega)=\frac{\Omega^{n+1}/(n+1)!}{\Omega^n/n!\wedge idt\wedge d\bar t}. $$ \end{prop} This is the well-known geodesic curvature of the curve of metrics $\phi_t$ in Mabuchi space (if $\phi_t$ depends only on the real part of $t$, and by extension in general). \section{Vertical representatives} Let $[u]$ be a section of the bundle of smooth $n$-forms on the fibers. Recall that this means that for each $t$ in the base, $u_t$ is an $n$-form on the fiber $X_t$, and there is a smooth $n$-form $\tilde u$ on the total space ${\mathcal X}$ (a representative of $u_t$) which restricts to $u_t$ on each $X_t$. \begin{prop} Any smooth section $u_t$ has a unique vertical representative. \end{prop} \begin{proof} Let $\tilde u$ be an arbitrary representative and let $\hat u:=P_v(\tilde u)$. Then Proposition 4.2 implies that $\hat u$ is also a representative of $u_t$ (since $\hat u-\tilde u$ vanishes on fibers), and $\hat u$ is by definition vertical. Uniqueness also follows from Proposition 4.2. \end{proof} In the next proposition we shall consider sections that are primitive on fibers, i.e.\ are such that on each $X_t$, $u_t\wedge \Omega =0$.
In terms of representatives, this means that $$ \Omega\wedge\tilde u\wedge dt\wedge d\bar t=0 $$ if $dt=dt_1\wedge...dt_m$ for a system of local coordinates on the base. \begin{prop} If $\hat u$ is a vertical representative of a section that is primitive on fibers, then $$ \Omega\wedge\hat u=C(\Omega)\wedge\hat u. $$ \end{prop} \begin{proof} The form $$ \theta:=(\Omega-C(\Omega))\wedge\hat u $$ is a product of vertical forms, hence vertical. Since $C(\Omega)$ vanishes on fibers and $\hat u$ is primitive on fibers, $\theta$ vanishes on fibers. Hence $\theta=0$ by Proposition 4.2. \end{proof} Notice that this means in particular that $\Omega\wedge\hat u\wedge dt=\Omega\wedge\hat u\wedge d\bar t=0$ since $C(\Omega)\wedge dt=C(\Omega)\wedge d\bar t=0$. \begin{prop} Let $\hat u$ be a vertical representative of a section $u_t$ that satisfies the equation $$ (\bar\partial +\partial^\phi)u_t=0 $$ on each fiber $X_t$. Let $\mathcal{D}:=(\bar\partial+\partial^\phi)$ on the total space ${\mathcal X}$. Then $$ \mathcal{D}\hat u=\sum a_j\wedge dt_j+ \sum b_j\wedge d\bar t_j, $$ where $a_j$ and $b_j$ are primitive and satisfy $(\bar\partial+\partial^\phi)a_j=(\bar\partial+\partial^\phi) b_j=0$ on fibers. \end{prop} \begin{proof} Since $(\bar\partial +\partial^\phi) \hat u$ vanishes on fibers, $$ \mathcal{D}\hat u=\sum a_j\wedge dt_j+ \sum b_j\wedge d\bar t_j, $$ for some $n$-forms $a_j$ and $b_j$. Since $\mathcal{D}^2=\Omega\wedge$ we get $$ \Omega\wedge\hat u=\sum \mathcal{D} a_j\wedge dt_j+\sum \mathcal{D} b_j\wedge d\bar t_j. $$ Wedging with $\widehat{dt_j}\wedge d\bar t$ we find that the restriction of $\mathcal{D} a_j$ to fibers vanishes since $\hat u\wedge\Omega\wedge d\bar t=0$, and similarly we find that $\mathcal{D} b_j$ vanishes on fibers. Hence $\mathcal{D}^2 a_j$ and $\mathcal{D}^2 b_j$ also vanish when restricted to fibers, which gives that $a_j$ and $b_j$ are primitive. \end{proof} Notice the particular case when $u_t$ has pure bidegree $(p,q)$.
Then the assumption means that $\bar\partial \hat u=\partial^\phi \hat u=0$ on fibers, i.e.\ that $u_t$ is harmonic on each fiber. The first conclusion is then that \begin{equation} \bar\partial\hat u=\sum_1^m dt_j\wedge \eta_j +\sum_1^m d\bar t_j\wedge \nu_j , \end{equation} and \begin{equation} \partial^{\phi} \hat u=\sum dt_j \wedge \mu_j +\sum d\bar t_j\wedge \xi_j, \end{equation} where all forms $\eta_j$, $\nu_j$, $\mu_j$ and $\xi_j$ are primitive. This follows from Proposition 5.4 since e.g.\ $(-1)^n a_j=\eta_j+\mu_j$ and $\eta_j$ and $\mu_j$ have different bidegrees so each of them must be primitive. We also get \begin{equation} \partial^\phi\eta_j=-\bar\partial\mu_j \end{equation} and \begin{equation} \partial^\phi\nu_j=-\bar\partial\xi_j \end{equation} from $(\bar\partial+\partial^\phi) a_j=0$ and $(\bar\partial+\partial^\phi) b_j=0$. \subsection{Vertical representatives and the Kodaira-Spencer tensor.} We first recall the definition of the Kodaira-Spencer class. Let $W$ be a holomorphic $(1,0)$-vector field on $B$ and let $V$ be a smooth lift of $W$ to ${\mathcal X}$ (so that $dp(V)=W$). In general, unless the fibration is locally trivial, we can not find a holomorphic lift, and the Kodaira-Spencer class is an obstruction to this. First we define $$ \kappa_V=\bar\partial V, $$ if $V$ is any smooth lift of $W$. Then $\kappa_V$ is a $(0,1)$-form on ${\mathcal X}$ with coefficients in $T^{(1,0)}({\mathcal X})$. Since $dp(V)=W$ is holomorphic, $\kappa_V$ actually takes values in the subbundle, $F$, of $T^{(1,0)}$ of vertical vectors, i.e.\ vectors $V'$ such that $dp(V')=0$. Any other lift, say $V'$, of $W$ can be written $V+V''$ where $V''$ is vertical and $$ \kappa_{V'}=\kappa_V+\bar\partial V'', $$ so the cohomology class of $\kappa_V$ in $H^{(0,1)}({\mathcal X}, F)$ is well defined. We will call it $[\kappa_W]$. Similarly, we let $[\kappa_W^t]$ be the cohomology class of $\kappa_V$ restricted to $X_t$.
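\medskip

For the horizontal lift of section 4 one can write $\kappa_V$ down explicitly in local coordinates; we sketch this for $m=1$, although the explicit formula will not be used below. Put $\psi=-\phi$, so that $\Omega=i\partial\dbar\psi$ and, in local coordinates $(t,z)$, the matrix $(\psi_{j\bar k})=(\partial^2\psi/\partial z_j\partial\bar z_k)$ is positive definite along fibers. The condition $\Omega(V,\bar U)=0$ for all vertical $U$ shows that the horizontal lift of $\partial/\partial t$ is
$$
V=\frac{\partial}{\partial t}-\sum_{j,k}\psi^{\bar kj}\psi_{t\bar k}\frac{\partial}{\partial z_j},
$$
where $(\psi^{\bar kj})$ denotes the inverse matrix of $(\psi_{j\bar k})$, so that on a fiber
$$
\kappa^t_V=\bar\partial V|_{X_t}=-\sum_j \bar\partial\Big(\sum_k\psi^{\bar kj}\psi_{t\bar k}\Big)\Big|_{X_t}\otimes \frac{\partial}{\partial z_j},
$$
which is essentially the representative of the Kodaira-Spencer class appearing in \cite{Schumacher}. The same computation gives $c(\Omega)=\psi_{t\bar t}-\sum_{j,k}\psi^{\bar kj}\psi_{t\bar k}\psi_{j\bar t}$, the classical expression for the geodesic curvature.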
In the sequel we will assume that the lift of $W$ is taken as the horizontal lift. If $W=\partial/\partial t_j$ for a given system of coordinates on the base we will sometimes write $V_j$ for the horizontal lift of $\partial/\partial t_j$ and just $\kappa_j$ for $\kappa_{V_j}$. This depends of course on $\Omega$, so it would be more proper to write $\kappa_j(\Omega)$, but we will use the lighter notation instead. Locally, using local coordinates $x=(t,z)$ such that $p(x)=t$, $\kappa_V$ can be written $$ \kappa_V=\sum_1^n Z_j\otimes d\bar z_j +\sum_1^m T_k \otimes d\bar t_k, $$ where $Z_j$ and $T_k$ are vector fields tangential to fibers. Its restriction to a fiber $X_t$ is $$ \kappa^t_V=\sum_1^n Z_j\otimes d\bar z_j, $$ where we interpret $z_j$ as local coordinates on $X_t$. If $\theta$ is a form on ${\mathcal X}$, $\kappa_V$ operates on $\theta$ by letting the vector part of $\kappa_V$ operate by contraction, followed by taking the wedge product with the form part. The result is called $\kappa_V\cup \theta$, which we sometimes write $\kappa_V.\theta$. In the same way, $\kappa^t_V$ operates on forms on a fiber $X_t$. Notice that since the vector parts of the Kodaira-Spencer forms are vertical, the cup product commutes with restriction to fibers: $(\kappa_V\cup \theta)|_{X_t}= \kappa^t_V\cup (\theta|_{X_t})$. \begin{prop} Let $u_t$ be a smooth section of $(p,q)$-forms such that $u_t$ is harmonic on each fiber $X_t$. Let $\eta_j$ and $\xi_j$ be defined as in (5.1) and (5.2), where $\tilde u=\hat u$ is the vertical representative. Then, on each fiber $$ \eta_j=\kappa^t_j\cup u_t $$ and $$ \xi_j=\overline{\kappa^t_j}\cup u_t. $$ \end{prop} \begin{proof} In the proofs here we will use the easily verified formulas $\bar\partial(V\rfloor\theta)=(\bar\partial V).\theta -V\rfloor(\bar\partial\theta)$ and $\partial^\phi(\bar V\rfloor\theta)=(\partial\bar V).\theta-\bar V\rfloor (\partial^\phi\theta)$ for any form $\theta$ if $V$ is of type $(1,0)$. 
(Notice however that there are no similar formulas when $V$ is of type $(0,1)$.) We have $V_j\rfloor \hat u=0$, so taking $\bar\partial$ we get $$ (\bar\partial V_j).\hat u=V_j\rfloor\bar\partial\hat u=\eta_j +... $$ where the dots indicate forms that contain $dt_k$ or $d\bar t_k$ and therefore vanish when restricted to fibers. Restricting to fibers we then get (in view of the remarks above) that $$ \kappa^t_j\cup u_t=\eta_j. $$ For the second statement we use that $\bar V_j\rfloor \hat u=0$ and take $\partial^\phi$. Then $$ (\partial\bar V_j).\hat u=\bar V_j\rfloor\partial^\phi\hat u= \xi_j +..., $$ which gives that $\xi_j=\overline{\kappa_j^t}\cup u_t$ on fibers. \end{proof} The final result in this section is a reflection of the familiar fact that the Kodaira-Spencer class defines a holomorphic section of the bundle with fibers $H^{0,1}(T^{1,0}(X_t))$. \begin{prop} Let $[u_t]$ be a holomorphic section of the bundle ${\mathcal H}^{n-q,q}$. Let $W$ be a holomorphic vector field on $B$. Then $$ [\kappa^t_W\cup u_t] $$ is a holomorphic section of ${\mathcal H}^{n-q-1,q+1}$. \end{prop} \begin{proof} It is enough to prove this when $W=\partial/\partial t_j$, so that $\kappa^t_j\cup u_t=\eta_j$ by Proposition 5.5. That $u_t$ is holomorphic means that we can find some representative $\tilde u$ such that $$ \bar\partial\tilde u=\sum dt_j\wedge\eta_j +\sum d\bar t_j \wedge \nu_j, $$ and each $\nu_j$ is zero on fibers, i.e.\ we can write $$ \nu_j=\sum dt_k \wedge a_{j}^k +\sum d\bar t_k \wedge b_j^k. $$ Taking $\bar\partial$ of $\bar\partial \tilde u$, we get that $$ 0= \sum dt_j\wedge \bar\partial \eta_j +\sum d\bar t_j \wedge \bar\partial \nu_j. $$ Thus each $\bar\partial \eta_j$ is zero on fibers and we can write $$ \bar\partial \eta_j=\sum dt_k \wedge \eta_j^k +\sum d\bar t_k \wedge \nu_j^k.
$$ Thus we have \begin{eqnarray*} 0 & = & \sum dt_j \wedge dt_k \wedge \eta_j^k + \sum dt_j \wedge d\bar t_k \wedge \nu_j^k \\ & & +\sum dt_k \wedge d\bar t_j \wedge \bar\partial a_{j}^k + \sum d\bar t_k \wedge d\bar t_j \wedge \bar\partial b_j^k, \end{eqnarray*} which implies that $\nu_j^k=-\bar\partial a_k^j$ on fibers. Thus each $\nu_j^k$ is $\bar\partial$-exact on fibers, and hence $\eta_j$ defines a holomorphic section of ${\mathcal H}^{n-q-1,q+1}$. \end{proof} As an example of this we consider the case when $L=-K_{{\mathcal X}/B}$ and $K_{{\mathcal X}/B}$ is positive. Then ${\mathcal H}^{n,0}$ has a canonical trivializing section, $u^0_t$, defined as follows. On any complex manifold $X$ the bundle $K_X-K_X$ is of course trivial and has a canonical trivializing section defined locally as $s=dz\otimes (dz)^{-1}$. Applying this to each fiber $X_t$ we get a trivializing section $u^0_t$ of the line bundle ${\mathcal H}^{n,0}$. Wedging with the corresponding trivializing section of the base, which is locally $dt\otimes (dt)^{-1}$, we get an $(n+m,0)$-form, $U^0$ on ${\mathcal X}$ with values in $K_{{\mathcal X}/B}^{-1}\otimes K_B^{-1}=K_{\mathcal X}^{-1}$ (by the adjunction isomorphism). The form $U^0$ is easily seen to be the canonical trivializing section of $K_{\mathcal X}-K_{\mathcal X}$; in particular it is holomorphic. This means that $\bar\partial u^0\wedge dt=0$, so $\bar\partial u^0=\sum dt_j\wedge\eta_j$, so $u^0_t$ is a holomorphic section of ${\mathcal H}^{n,0}$. Applying the proposition iteratively we get holomorphic sections of ${\mathcal H}^{n-q,q}$ as $$ \kappa^t_{i_1}\cup...\kappa^t_{i_q}\cup u^0_t $$ for any multi-index $I=(i_1, ...i_q)$. \section{The curvature formulas} We now have all the ingredients to begin computing the curvature of ${\mathcal H}^{n-q,q}$ with the $L^2$-metric. Let $u_t$ be a smooth section of this bundle, which now means that $\bar\partial u_t=0$ on $X_t$ for each $t$ and $\partial^\phi u_t=0$ on $X_t$ for each $t$.
We take $\hat u$ to be a vertical representative of the section $u_t$ so that \begin{equation} \bar\partial\hat u=\sum dt_j\wedge\eta_j +\sum d\bar t_j\wedge \nu_j \end{equation} and \begin{equation} \partial^\phi\hat u=\sum dt_j\wedge\mu_j +\sum d\bar t_j\wedge\xi_j. \end{equation} Then $\nu_j$ and $\mu_j$ have bidegree $(p,q)$, the $\eta_j$ have bidegree $(p-1,q+1)$ and the $\xi_j$ have bidegree $(p+1,q-1)$ (forms of negative degree should be interpreted as zero). We decompose each of these forms on each fiber into one harmonic part and one part which is orthogonal to harmonic forms, so that e.g. $(\eta_j)_h$ is the harmonic part of $\eta_j$ and $(\mu_j)_\perp$ is the part of $\mu_j$ which is orthogonal to harmonic forms. Our main curvature formula is as follows. \begin{thm}\label{curvature} Let $L\to {\mathcal X}$ be a line bundle with metric $e^{-\phi}$ where $i\partial\dbar\phi<0$ on fibers and define the hermitian structure on ${\mathcal H}^{n-q,q}$ using the K\"ahler forms $\omega_t=-i\partial\dbar\phi|_{X_t}$ on fibers. Then $$ \langle\Theta_{\partial/\partial t_j,\partial/\partial \bar t_k}\, u_t,u_t\rangle_t = $$ $$ -\langle (\Box''+1)^{-1}(\mu_j)_\perp, (\mu_k)_\perp\rangle_t -\langle (\Box'' +1)^{-1}(\xi_k),(\xi_j)\rangle_t +\langle(\eta_j)_h,(\eta_k)_h\rangle_t -c_n(-1)^q\int_{X_t} c_{j,k}(\Omega)u_t\wedge \bar u_t e^{-\phi}, $$ where $\mu,\xi,\eta$ are defined in formulas (5.1) and (5.2), using a vertical representative of $u_t$. \end{thm} Recall that by Proposition 5.5, $\eta_j=\kappa_j\cup u_t$, and that $(\eta_j)_h$ is the harmonic representative of $\eta_j$. Similarly, $\xi_j=\overline{\kappa_j}\cup u_t$. As for the term containing $\mu_j$, we can use the second part of Proposition 2.4 and write $$ \langle (\Box''+1)^{-1} (\mu_j)_\perp,(\mu_k)_\perp\rangle= \langle (\Box''-1)^{-1}(\eta_j)_\perp,(\eta_k)_\perp\rangle. $$ In this way we see that the whole formula for the curvature can be expressed in terms of $u_t$ and $\kappa$.
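\medskip

For orientation we record what the formula gives in the extreme case $q=0$, i.e.\ for the bundle ${\mathcal H}^{n,0}$. Then $\xi_j$ has bidegree $(n+1,-1)$, hence vanishes, and the formula reduces to
$$
\langle\Theta_{\partial/\partial t_j,\partial/\partial \bar t_k}\, u_t,u_t\rangle_t = -\langle (\Box''+1)^{-1}(\mu_j)_\perp, (\mu_k)_\perp\rangle_t +\langle(\eta_j)_h,(\eta_k)_h\rangle_t -c_n\int_{X_t} c_{j,k}(\Omega)u_t\wedge \bar u_t e^{-\phi},
$$
with $\eta_j=\kappa_j\cup u_t$ of bidegree $(n-1,1)$. This is simply Theorem \ref{curvature} with the terms of negative bidegree removed.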
Notice that all the terms in the curvature formula are negative, except the one coming from the harmonic part of $\eta$. This ``bad term'' is clearly zero when $(p,q)=(0,n)$, for bidegree reasons. It also vanishes when the harmonic part of $\kappa$ vanishes, i.e. when the fibration is isotrivial. More generally, if $W=\sum a_j\partial/\partial t_j$ is a vector at a point $t$ in the base, and the cohomology class $[\kappa_W]\cup [u_t]$ vanishes, then $\sum a_j(\eta_j)_h=0$, so the only positive term, $\|\sum a_j(\eta_j)_h\|^2_t$, drops out. Therefore we get that $$ \langle \Theta_{W,\bar W} u_t,u_t\rangle_t \leq 0. $$ In particular, if $\kappa_j\cup u_t=0$ in cohomology for all $j$, then the full curvature operator acting on $u_t$ is negative. This will play an important role in section 9. \medskip In the proof of Theorem 6.1 we may assume that the base dimension $m$ is equal to 1. Indeed, once the case $m=1$ is settled, the formula for the curvature form implicit in Theorem 6.1 holds when restricted to any disk in the base, and then it must hold on all of the base. We start from the classical fact that if $u_t$ is a holomorphic section of an hermitian holomorphic vector bundle $E$, then \begin{equation} \frac{\partial^2}{\partial t\partial\bar t} \|u_t\|^2_t= -\langle \Theta^E_{\partial/\partial t,\partial/\partial \bar t} u_t, u_t\rangle_t + \|D'_{\partial/\partial t} u_t\|^2_t. \end{equation} Thus we need to compute $i\partial\dbar \|u_t\|^2_t$ when $u_t$ is a holomorphic section and $u_t$ is harmonic on fibers. We choose some representative $\tilde u$. By Lemma 2.6 we have $$ \|u_t\|^2_t=(-1)^q c_n p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi} ). $$ Since the pushforward commutes with $\bar\partial$ we get $$ \bar\partial p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi} )= p_*(\bar\partial\tilde u\wedge\overline{\tilde u} e^{-\phi})+ (-1)^n p_*(\tilde u\wedge\overline{\partial^{\phi}\tilde u} e^{-\phi}). $$ We claim that the first term vanishes. For this we use that $\bar\partial\tilde u=dt \wedge \eta +d\bar t \wedge \nu$.
The term containing $\eta$ gives no contribution for bidegree reasons. For the second term we use that $\nu$ is $\bar\partial$-exact on fibers, which gives that the fiber integral $$ \int_{X_t} \tilde u\wedge \bar\nu e^{-\phi}=0 $$ since $\tilde u=u_t$ on $X_t$ is harmonic on $X_t$. Thus $$ \bar\partial p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi} )= (-1)^n p_*(\tilde u\wedge\overline{\partial^{\phi}\tilde u} e^{-\phi}). $$ Taking $\partial$ we find $$ \partial\dbar p_*(\tilde u\wedge\overline{\tilde u} e^{-\phi} )=(-1)^np_*(\partial^{\phi}\tilde u\wedge\overline{\partial^{\phi}\tilde u}e^{-\phi})+ p_*(\tilde u\wedge\overline{\bar\partial\partial^{\phi}\tilde u} e^{-\phi})=:A+B. $$ For the first term we recall that $$ \partial^{\phi}\tilde u=dt \wedge \mu + d\bar t \wedge \xi. $$ Clearly the mixed terms containing $\mu\wedge dt\wedge\bar\xi\wedge dt$ vanish, so \begin{equation} A=[p_*(\mu\wedge\bar\mu e^{-\phi}) - p_*(\xi\wedge\bar\xi e^{-\phi})] dt\wedge d\bar t. \end{equation} For the $B$-term we use $\bar\partial\partial^{\phi}=-\partial^{\phi}\bar\partial +\partial\dbar\phi$. Hence $$ B= -p_*(\tilde u\wedge\overline{\partial^{\phi}\bar\partial\tilde u} e^{-\phi})- p_*(\partial\dbar\phi\wedge \tilde u\wedge\overline{\tilde u}e^{-\phi}). $$ As we have seen $$ p_*(\tilde u\wedge\overline{\bar\partial\tilde u}e^{-\phi})=0. $$ Taking $\bar\partial$ of this we get $$ p_*(\bar\partial\tilde u\wedge\overline{\bar\partial\tilde u}e^{-\phi})= (-1)^{n+1} p_*(\tilde u\wedge\overline{\partial^{\phi}\bar\partial\tilde u}e^{-\phi}), $$ so $$ B= - p_*(\partial\dbar\phi\wedge \tilde u\wedge\overline{\tilde u}e^{-\phi})+(-1)^n p_*(\bar\partial\tilde u\wedge\overline{\bar\partial\tilde u}e^{-\phi}). $$ Now we use that $$ \bar\partial\tilde u=dt \wedge \eta+d\bar t \wedge \nu. 
$$ As before, the mixed terms give no contribution so $$ (-1)^n p_*(\bar\partial\tilde u\wedge\overline{\bar\partial\tilde u}e^{-\phi})= p_*(\eta\wedge\bar \eta e^{-\phi}) -p_*(\nu\wedge \bar\nu e^{-\phi}). $$ Putting all this together (and using that $\Omega=-i\partial\dbar\phi$) we finally get $$ i\partial\dbar\|u_t\|^2_t= $$ $$(-1)^q c_n [p_*(\mu\wedge\bar \mu e^{-\phi}) -p_*(\xi\wedge\bar\xi e^{-\phi})+ p_*(\eta\wedge\bar \eta e^{-\phi}) - p_*(\nu\wedge\bar \nu e^{-\phi})] idt\wedge d\bar t + $$ $$ (-1)^q c_np_*(\Omega\wedge\tilde u\wedge\overline{\tilde u} e^{-\phi}). $$ Up to this point the formula holds for any choice of representative. Now we take $\tilde u=\hat u$ to be the vertical representative. Then, by Proposition 5.4 and the comments after it, all forms $\eta,\mu,\xi$ and $\nu$ are primitive so by Lemma 2.6 the pushforward terms can be expressed as norms. Since $\mu$ and $\nu$ are of bidegree $(p,q)$ whereas $\xi$ is $(p+1,q-1)$ and $\eta$ is $(p-1,q+1)$ we get \begin{equation} i\partial\dbar\|u_t\|^2_t= \left(\|\mu\|_t^2 +\|\xi\|^2_t -\|\nu\|^2_t -\|\eta\|^2_t +c_n(-1)^q\int_{X_t} c(\Omega)u_t\wedge \bar u_t e^{-\phi} \right) idt\wedge d\bar t, \end{equation} where we have also used Proposition 5.2 in the last term. Here we decompose $\mu=\mu_h+\mu_\perp$, where $\mu_h$ is harmonic and $\mu_\perp$ is orthogonal to harmonic forms. Then $\mu_h=D'_{\partial/\partial t} u_t$, so comparing with the general curvature formula (6.1) we get \begin{equation}\label{bo_big_formula} \langle\Theta_{\partial/\partial t,\partial/\partial \bar t}\, u_t,u_t\rangle_t= -\|\mu_\perp\|^2 -\|\xi\|_t^2 +\|\nu\|^2_t+\|\eta\|^2_t - c_n(-1)^q\int_{X_t} c(\Omega)u_t\wedge \bar u_t e^{-\phi}. \end{equation} This formula contains two positive contributions to the curvature, one coming from the norm of $\nu$ and one coming from the norm of $\eta$. We shall now see that the first of these can be eliminated and the second can be improved by replacing $\eta$ by its harmonic part. 
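In detail, the passage from (6.5) to (6.6) is the following bookkeeping step, recorded here for convenience. Since $\|\mu\|^2_t=\|\mu_h\|^2_t+\|\mu_\perp\|^2_t$ and $\mu_h=D'_{\partial/\partial t} u_t$, formula (6.5) can be rewritten as $$ \frac{\partial^2}{\partial t\partial\bar t}\|u_t\|^2_t= \|D'_{\partial/\partial t} u_t\|^2_t+\|\mu_\perp\|^2_t +\|\xi\|^2_t -\|\nu\|^2_t -\|\eta\|^2_t +c_n(-1)^q\int_{X_t} c(\Omega)u_t\wedge \bar u_t e^{-\phi}, $$ and comparing with the general formula (6.1) isolates $-\langle\Theta_{\partial/\partial t,\partial/\partial \bar t}\, u_t,u_t\rangle_t$ as the sum of the remaining terms, which is exactly formula (6.6).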
For this we decompose the forms $\eta$, $\xi$ and $\nu$, as we did with $\mu$, into one harmonic part and one part orthogonal to harmonic forms. Since $\nu$ is exact, $\nu=\nu_\perp$. We now use formulas (5.3) and (5.4), which clearly also hold for the non-harmonic parts of $\eta,\xi,\mu$ and $\nu$. By Proposition 2.4, we can now rewrite formula (6.6). First we note that since $\bar\partial\eta_\perp=0$ and $\bar\partial\nu_\perp=0$, they are not only orthogonal to harmonic forms, but also to all of the kernel of $\partial^{\phi}$. In the same way, $\mu_\perp$ and $\xi_\perp$ are orthogonal to the kernel of $\bar\partial$. Therefore we can apply Proposition 2.4 with $g= \eta_\perp$ and $f=-\mu_\perp$, and with $g=\nu_\perp$ and $f=-\xi_\perp$. Finally we use that $$ \langle (\Box''+1)^{-1}\xi_\perp,\xi_\perp\rangle_t +\|\xi_h\|^2= \langle (\Box''+1)^{-1}\xi,\xi\rangle_t. $$ Inserting this in formula (6.6) we obtain Theorem 6.1. We finally record the counterpart of Theorem 6.1 for positively curved bundles. This can be proved in the same way as Theorem 6.1 (using the analog of Proposition 2.4 for positively curved metrics as commented in section 2), or by Serre duality, using Theorem 6.1. \begin{thm}\label{curvaturepos} Let $L\to {\mathcal X}$ be a line bundle with metric $e^{-\phi}$ where $i\partial\dbar\phi>0$ on fibers and define the hermitian structure on ${\mathcal H}^{n-q,q}$ using the K\"ahler forms $\omega_t=i\partial\dbar\phi|_{X_t}$ on fibers. Then $$ \langle\Theta_{\partial/\partial t_j,\partial/\partial \bar t_k}\, u_t,u_t\rangle_t = $$ $$ \langle (\Box''+1)^{-1}\eta_j, \eta_k\rangle_t +\langle (\Box'' +1)^{-1}\nu_k,\nu_j\rangle_t -\langle(\xi_k)_h,(\xi_j)_h\rangle_t +c_n(-1)^q\int_{X_t} c_{j,k}(\Omega)u_t\wedge \bar u_t e^{-\phi}, $$ where $\eta,\nu$ and $\xi$ are defined in formulas (5.1) and (5.2), using a vertical representative of $u_t$. \end{thm} This formula generalizes Theorem 1.2 in \cite{Berndtsson2}, which deals with the case $p=n, q=0$.
Then $\xi=0$ (since it is of bidegree $(p+1,q-1)$) and $\nu=0$ since, $u_t$ being a holomorphic section, $\nu$ is $\bar\partial$-exact on fibers, and a $\bar\partial$-exact form of bidegree $(n,0)$ vanishes identically. This corresponds to the simplest case of Theorem 6.1, when $(p,q)=(0,n)$. Then the only contribution with ``bad sign'' in that theorem, coming from $\eta_h$, must vanish for bidegree reasons. \section{Fiberwise flat metrics} In this section we will consider the case when the bundle $L$ has a metric $e^{-\phi}$ with $i\partial\dbar\phi\leq 0$ on ${\mathcal X}$ and $i\partial\dbar\phi=0$ on each fiber $X_t$. We then assume given an auxiliary K\"ahler form $\Omega$ on ${\mathcal X}$. (When we assumed $i\partial\dbar\phi<0$ on fibers we could take $\Omega=-i\partial\dbar\phi$, or $\Omega=-i\partial\dbar\phi+p^*(\beta)$ where $\beta$ is a local K\"ahler form on the base.) We denote by $\omega_t$, or just $\omega$, the restriction of $\Omega$ to $X_t$. The main difference as compared to the previous case is that the cohomology is no longer necessarily primitive (with respect to $\omega$). Let us first pause to discuss the notion of primitive cohomology classes in $H^{p,q}(X, L)$, where $(X,\omega)$ is a compact K\"ahler manifold and $(L,e^{-\phi})$ is an Hermitian holomorphic line bundle. If $p+q=n-k$, we say that a class $[u]$ in $H^{p,q}(X, L)$ is primitive if the class $[\omega^{k+1}\wedge u]$ vanishes. This means that $\omega^{k+1}\wedge u=\bar\partial v$ for some $L$-valued form $v$ of degree $n+k+1$. By the pointwise Lefschetz theorem, $v=\omega^{k+1}\wedge v'$ and it follows that $$ \omega^{k+1}\wedge (u-\bar\partial v')=0 $$ so we could equivalently have taken as our definition that the class $[u]$ has a representative that is pointwise primitive. Thus, there is a natural notion of primitive classes also for cohomology with values in a line bundle. However, in general, the Lefschetz decomposition on forms does not induce a Lefschetz decomposition on cohomology.
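For later use we record the middle-degree instance of this definition, the one relevant below: when $p+q=n$, so $k=0$, a class $[u]$ in $H^{p,q}(X,L)$ is primitive precisely when $$ [\omega\wedge u]=0 \quad {\rm in}\quad H^{p+1,q+1}(X,L), $$ or equivalently, by the discussion above, when $[u]$ admits a representative $u-\bar\partial v'$ that is pointwise primitive, $\omega\wedge (u-\bar\partial v')=0$.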
We now assume that $\partial\dbar\phi=0$ on $X$. Then it follows from the Kodaira-Nakano formula that $\Box'_\phi=\Box''_\phi=:\Box$. Moreover, the commutator $[\Box,\omega\wedge]$ vanishes. Indeed, this is well known when $L$ is trivial and $\phi=0$. Since the statement is local, it also holds when $\partial\dbar\phi=0$, because we can then always find a local trivialization with $\phi=0$. The first formula implies that a harmonic form $u$ satisfies $\partial^\phi u=0$, and the second formula shows that if a class $[u]$ is primitive, then its harmonic representative is (pointwise) primitive. Indeed $$ \omega^{k+1}\wedge u_h=\bar\partial v $$ implies $\omega^{k+1}\wedge u_h=0$ since $\omega^{k+1}\wedge u_h$ is harmonic. We then have a Lefschetz decomposition of cohomology classes $$ H^{p,q}=P^{p,q} +\omega\wedge P^{p-1,q-1} +\omega^2 \wedge P^{p-2,q-2} +\dots $$ where $P^{l,m}$ denotes the space of primitive classes, coming from the corresponding decomposition of harmonic forms. This implies $$ H^{p,q}=P^{p,q}+\omega\wedge H^{p-1,q-1} $$ and in particular $h^{p,q}=p^{p,q}+h^{p-1,q-1}$ for the dimensions of the corresponding spaces. We now apply this fiberwise to our fiber space $p:{\mathcal X}\to\mathcal{B}$, where $h^{p,q}_t$ is constant over $\mathcal{B}$ and $p+q=n$. Then $h^{p-1,q-1}_t$ is upper semicontinuous and, since $P^{p,q}$ is the kernel of a smoothly varying homomorphism, its dimension is also upper semicontinuous. Therefore, since their sum is constant, both $p^{p,q}_t$ and $h^{p-1,q-1}_t$ are constant on $\mathcal{B}$. This implies in particular \begin{prop} $\mathcal{P}^{p,q}$ is a smooth subbundle of ${\mathcal H}^{p,q}$. \end{prop} We also have \begin{prop} $\mathcal{P}^{p,q}$ is a complex subbundle of ${\mathcal H}^{p,q}$. \end{prop} \begin{proof} For this it is enough to show that if $u_t$ is in $\Gamma(\mathcal{P}^{p,q})$, then $D''u_t\in \Gamma^{0,1}(\mathcal{P}^{p,q})$.
Let $\hat u$ be the vertical representative of $u_t$ with respect to the K\"ahler form $\Omega$, as defined in section 5. Recall that $$ D'' u_t= \sum [\nu_j] d\bar t_j, $$ where $$ \bar\partial\hat u=\sum\eta_j\wedge dt_j+\sum \nu_j\wedge d\bar t_j. $$ We have $$ \nu_j\wedge dt\wedge d\bar t\wedge\Omega= \pm \bar\partial\hat u\wedge \widehat{d\bar t_j}\wedge dt\wedge \Omega=0, $$ since $\Omega\wedge dt\wedge \hat u=0$. Hence $\nu_j$ is a primitive representative of $[\nu_j]$. \end{proof} It follows in the same way that $\eta_j$ are primitive. As before we can now define a hermitian metric on our bundle $\mathcal{P}^{p,q}$ by $$ \|[u]\|^2_t=(-1)^q c_n\int_{X_t} u_h\wedge\bar u_h e^{-\phi_t}, $$ i.e. the norm of a class is the norm of its harmonic representative, which is given by the above integral since the harmonic representative is primitive. As we have seen, if $[u_t]$ is a section of our bundle, then $\partial^\phi(u_t)_h=0$ on fibers. Hence, if $\hat u$ is the vertical representative of $(u_t)_h$, $$ \partial^\phi\hat u=\sum\mu_j\wedge dt_j+\sum\xi_j\wedge d\bar t_j $$ as before, and we see that $\mu_j$ and $\xi_j$ are primitive on fibers in the same way that we proved that $\nu_j|_{X_t}$ is primitive. \begin{lma} $\partial\dbar\phi=p^*(C(\phi))$, where $C(\phi)$ is a $(1,1)$-form on the base. \end{lma} \begin{proof} Choose local coordinates $(t,z)$ on ${\mathcal X}$ such that $p(t,z)=t$. Since $\partial\dbar\phi$ vanishes on fibers we have $$ \partial\dbar\phi=\partial\dbar_{t,\bar t}\phi +\partial\dbar_{t, \bar z}\phi+ \partial\dbar_{z, \bar t}\phi. $$ Since $i\partial\dbar\phi\leq 0$, Cauchy's inequality applied to the semidefinite form $-i\partial\dbar\phi$ gives $|\partial\dbar\phi(v,\bar w)|^2\leq \partial\dbar\phi(v,\bar v)\,\partial\dbar\phi(w,\bar w)=0$ whenever $w$ is vertical, so the mixed terms vanish and $$ \partial\dbar\phi=\sum \phi_{j k}(t,z) dt_j\wedge d\bar t_k. $$ Finally, the condition that $\partial\dbar\phi$ is $d$-closed gives that the coefficients are independent of $z$, which proves the lemma.
\end{proof} \begin{prop} On each fiber $$ \bar\partial\mu_j+\partial^\phi\eta_j=0 $$ and $$ \bar\partial\xi_j+\partial^\phi\nu_j=0. $$ \end{prop} \begin{proof} We have $$ (\bar\partial\mu_j+\partial^\phi\eta_j)\wedge dt\wedge d\bar t= (\bar\partial\partial^\phi+\partial^\phi\bar\partial)\hat u\wedge \widehat{dt_j}\wedge d\bar t= \partial\dbar\phi\wedge \hat u\wedge \widehat{ dt_j}\wedge d\bar t=0, $$ by the previous lemma. \end{proof} For a moment we now assume that the base is one dimensional and get exactly as in section 6 that formula (6.5) still holds, $$ \langle \Theta u,u\rangle= ( -\|\mu\|^2-\|\xi\|^2+\|\nu\|^2+\|\eta\|^2)dt\wedge d\bar t -c_n(-1)^q p_*(C(\phi)\wedge \hat u\wedge\overline{\hat u} e^{-\phi}). $$ We then decompose the forms $\mu, \xi,\eta$ and $\nu$ into a harmonic part and one part that is orthogonal to harmonic forms. The harmonic part of $\nu$ is zero since the section is holomorphic, and the harmonic part of $\mu$ is also zero if we assume that $D' u_t=0$ at the given point. We then apply Proposition 2.5, and conclude that $\|\mu_\perp\|=\|\eta_\perp\|$ and $\|\xi_\perp\|=\|\nu_\perp\|$. Hence, the curvature formula becomes $$ \langle \Theta u,u\rangle= -\|\xi_h\|^2+\|\eta_h\|^2 -c_n(-1)^q p_*(C(\phi)\wedge \hat u\wedge\overline{\hat u} e^{-\phi}). $$ Since this holds for the restriction of $\Theta$ to any line in the base we finally get the curvature formula: \begin{thm} Let $L\to{\mathcal X}$ be a line bundle with hermitian metric $e^{-\phi}$ where $i\partial\dbar\phi\leq 0$ and $i\partial\dbar\phi=0$ on fibers. Then the curvature of the $L^2$ metric on $\mathcal{P}^{p,q}$, $p+q=n$, is $$ \langle\Theta u,u\rangle_t= \sum \langle (\eta_j)_h,(\eta_k)_h\rangle_t dt_j\wedge d\bar t_k- \sum \langle (\xi_j)_h,(\xi_k)_h\rangle_t dt_j\wedge d\bar t_k + C(\phi)\|u\|^2_t. $$ \end{thm} We could now have continued and applied similar arguments to ${\mathcal H}^{p-i,q-i}$.
Summing up the results we get that the same curvature formula holds for the entire bundle ${\mathcal H}^{p,q}$. Notice that when $L$ is trivial and $\phi=0$, this is a classical formula of Griffiths, \cite{GT}. Here, however, we will be content with the subbundle $P^{p,q}$ since that is enough for our applications. \section{Estimates of the holomorphic sectional curvature and hyperbolicity} \noindent Let $p:X\to Y$ be a surjective holomorphic map between two compact K\"ahler manifolds. We denote by $Y_0\subset Y$ the set of regular values of $p$, so that if $X_0:= p^{-1}(Y_0)$ the restriction $p:X_0\to Y_0$ becomes a proper submersion. Our goal in this section is to analyze the properties of the base $Y_0$ induced by the variation of the complex structure of the fibers $X_y$ of $p$, together with the semi-positivity properties of the canonical bundle of these fibers. Under certain hypotheses we will construct a Finsler metric on the subset ${\mathcal B}\subset Y_0$ (cf. section 1) whose holomorphic sectional curvature is bounded from above by a negative constant. To this end we follow the same line of argument as in the works of To-Yeung \cite{To-Yeung} and Schumacher \cite{Schumacher}. We repeat the arguments here (with some modifications) to see how they adapt to our more general setting. The main conclusion is that when the base is one dimensional we get a metric on the base with curvature bounded from above by a negative constant, provided that our metrics satisfy an integral bound which is automatic in the K\"ahler-Einstein case. We also take the opportunity to include the case of families of Calabi-Yau manifolds, as this does not seem to have appeared in the literature yet. For simplicity of formulation we will assume that the base is one dimensional. \subsection{The canonically polarized case}\label{CaPo} We start with the case of a family of canonically polarized manifolds, $p:{\mathcal X}\to {\mathcal B}$.
We let $L= -K_{{\mathcal X}/{\mathcal B}}$ and let $h=e^{-\phi}$ be a smooth metric on $L$ with $\Omega=-i\partial\dbar\phi$ strictly positive on each fiber and semipositive on the total space. By a result of Schumacher this holds if $\phi$ is a (normalized) potential of the K\"ahler-Einstein metric on each fiber, but for the moment we make no such assumption. Let $t$ be a local coordinate on the base ${\mathcal B}$, $V$ be the horizontal lift of $\partial/\partial t$, and $\kappa$ be the section of the bundle with fibers $Z^{0,1}(X_t, T^{1,0}(X_t))$, defined by $\kappa_t=\bar\partial_{X_t}V$. As in the last paragraph of section 5, we let $u^0$ be the canonical trivializing section of ${\mathcal H}^{n,0}$ (the bundle with fibers $H^{n,0}(X_t, -K_{X_t})\sim {\mathbb C}$), and then define $u^q$ inductively by $u^q=(\kappa\cup u^{q-1})_h$. As we have seen in section 5, $u^q$ is a holomorphic section of ${\mathcal H}^{n-q,q}$. It follows from Theorem 6.1 that \begin{equation} \langle i\Theta u^q,u^q\rangle \leq (-\|\xi_h\|^2 +\|\eta_h\|^2) (idt\wedge d\bar t) \end{equation} since $\langle (1+\Box)^{-1}\xi,\xi\rangle\geq \|\xi_h\|^2$, and $c(\Omega)\geq 0$ since we have assumed that $\Omega\geq 0$. Recall that $\eta=\kappa\cup u^q$, $\xi=\bar\kappa\cup u^q$, and the subscript $h$ means that we have taken the harmonic part. We start with the following important observation by To-Yeung, \cite{To-Yeung}. \begin{prop} $$ \|\xi_h\|^2\geq \frac{\|u^q\|^4}{\|u^{q-1}\|^2} $$ if $u^{q-1}\neq 0$. \end{prop} \begin{proof} We have $$ \langle\xi_h, u^{q-1}\rangle=\langle\bar\kappa\cup u^q,u^{q-1}\rangle= \|u^q\|^2. $$ Hence Cauchy's inequality gives $$ \|u^q\|^2\leq\|\xi_h\|\|u^{q-1}\| $$ which gives the claim. \end{proof} Next we introduce the notation $$ \phi_q=\log\|u^q\|^2. 
$$ Then $$ i\partial\dbar\phi_q\geq -\langle i\Theta u^q,u^q\rangle/\|u^q\|^2, $$ so formula (8.1) together with the proposition gives that \begin{equation} i\partial\dbar\phi_q\geq (e^{\phi_q-\phi_{q-1}} -e^{\phi_{q+1}-\phi_q})(idt\wedge d\bar t), \end{equation} since $\|\eta_h\|^2=e^{\phi_{q+1}}$. A few comments are in order. Notice that $\phi_q$ is only locally defined since it depends on the local coordinate $t$ in the base. Changing local coordinates we see that $e^{\phi_q}$ transforms as a metric on the $q$-th power of the tangent bundle of ${\mathcal B}$, and $e^{\phi_q-\phi_{q-1}}$ defines a metric on the tangent bundle. In particular $e^{\phi_1}$ is the generalized Weil-Petersson metric on ${\mathcal B}$, and it is the genuine Weil-Petersson metric when the metric $\phi$ on $K_{{\mathcal X}/{\mathcal B}}$ is the (normalized) K\"ahler-Einstein potential on each fiber. Notice also that $u^q$ may be identically 0 on all fibers. If this happens for some $q$ we let $m$ be the maximal $q$ such that $u^q$ is not identically 0. Since $u^m$ is a holomorphic section of a vector bundle it can then only vanish on an analytic set. Hence $e^{\phi_m}$ defines a singular metric on the $m$-th power of the tangent bundle of ${\mathcal B}$. We will assume that the family is effectively parametrized, which means that $m$ is at least equal to 1. Multiplying (8.2) by $q$ and summing from 1 to $m$ we get $$ i\partial\dbar\sum_1^m q\phi_q\geq \sum_1^m (e^{\phi_q-\phi_{q-1}})(idt\wedge d\bar t). $$ If $a_q>0$ and $\sum a_q=1$, then since each $a_q\leq 1$ the right hand side here dominates $$ \sum_1^m a_q e^{\phi_q-\phi_{q-1}}\geq e^{\sum a_q(\phi_q-\phi_{q-1})}, $$ where the last inequality follows from the convexity of the exponential function. Now take $a_q=c(m+(m-1)+\dots+q)$, with $c$ chosen so that $\sum_1^m a_q=1$. Then $a_{q-1}-a_q=c(q-1)$, so since $\sum_1^m a_q(\phi_q-\phi_{q-1})= a_m\phi_m+(a_{m-1}-a_m)\phi_{m-1} +\dots-a_1\phi_0 $ we get $\sum a_q(\phi_q-\phi_{q-1})= c\sum q\phi_q- a_1\phi_0$.
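As a concrete check of this telescoping identity, consider the lowest nontrivial case $m=2$. Then $a_1=c(2+1)=3c$ and $a_2=2c$, so $c=1/5$, and $$ a_1(\phi_1-\phi_0)+a_2(\phi_2-\phi_1)= (a_1-a_2)\phi_1+a_2\phi_2-a_1\phi_0= c\,\phi_1+2c\,\phi_2-a_1\phi_0= c\sum_{q=1}^2 q\phi_q-a_1\phi_0, $$ in accordance with the general formula.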
Hence $$ i\partial\dbar\sum_1^m cq\phi_q\geq c e^{c\sum q\phi_q} e^{-a_1\phi_0} (idt\wedge d\bar t). $$ Moreover, since $e^{\phi_q-\phi_{q-1}}$ is a metric on the tangent bundle of ${\mathcal B}$, $$ e^{\sum a_q(\phi_q-\phi_{q-1})}= e^{c\sum q\phi_q} e^{-a_1\phi_0} $$ is also a metric on the tangent bundle of ${\mathcal B}$, and so is $ e^{c\sum q\phi_q}=:e^\Phi$, since $\phi_0$ is a function. In conclusion, there is a metric with fundamental form $e^\Phi idt\wedge d\bar t$ on ${\mathcal B}$ which satisfies $$ i\partial\dbar\Phi\geq c e^\Phi e^{-a_1\phi_0}idt\wedge d\bar t, $$ and thus has curvature bounded from above by $$ -ce^{-a_1\phi_0}, $$ and so by a fixed negative constant if $$ \phi_0=\log c_n\int_{X_t} u^0\wedge \bar u^0 e^{-\phi} $$ is bounded from above. The last requirement is automatic if $\phi$ is a normalized K\"ahler-Einstein potential, which means that $$ u^0\wedge \bar u^0 e^{-\phi}=(i\partial\dbar\phi)^n/n! $$ on each fiber. \subsection{The Calabi-Yau case}\label{CY} Let $\omega>0$ be an arbitrary K\"ahler form on the total space $X$ of the fibration $p$. As before $t$ denotes a local coordinate on the base, and we now let $V$ be the horizontal lift of $\partial/\partial t$ with respect to $\omega$. The Kodaira-Spencer representative $\kappa$, and the holomorphic sections $u^q$ of ${\mathcal H}^{n-q,q}$, are defined as in the canonically polarized case. \medskip \noindent We will use the fact that the fibers $X_t$ are Calabi-Yau as follows. \begin{prop}\label{m-Ber} There exists a metric $e^{-\phi_{X/Y}}$ on the relative canonical bundle $K_{X/Y}$ such that $i\partial\dbar\phi_{X/Y}\geq 0$ and such that \begin{equation}\label{cy0} c_n\int_{X_t} u^0\wedge \bar u^0 e^{\phi_t}=1, \end{equation} for each $t\in Y_0$. Moreover, we have $\displaystyle \partial\dbar\phi_t:= \partial\dbar\phi_{X/Y}|_{X_t}= 0$. \end{prop} \begin{proof} We first remark that by a result due to V. Tosatti (cf.
\cite{Tosatti}) for any fixed fiber $X_{t_0}$ there is a positive integer $m$ such that $\displaystyle mK_{X_{t_0}}$ is trivial. This implies that $\displaystyle h^0(mK_{X_t})$ is then equal to 1 for every $t$. Indeed, in the complement of a closed analytic subset of the base $Y$ this is true by general semi-continuity arguments, cf. \cite{BaSt}. At special points $\tau$ we have $\displaystyle h^0(mK_{X_\tau})\geq 1$, hence it equals one because $c_1(X_\tau)= 0$. In conclusion there exists a positive integer $m$ such that $\displaystyle h^0(mK_{X_t})= 1$ for all $t\in Y_0$. Moreover, the group $H^0(X_t, mK_{X_t})$ is generated by a nowhere-vanishing section (by the Poincar\'e-Lelong formula) which extends locally near $t$. \smallskip \noindent We show next that the metric $e^{-\phi_{X/Y}}$ with the properties stated in Proposition \ref{m-Ber} is simply the $m$-Bergman metric, cf. \cite{BP}. Let $t\in Y_0$ be a regular value of the map $p$, and let $x\in X_t$ be a point of the fiber $X_t$. If $(z_1,\dots, z_n)$ is a coordinate system on $X_t$ centered at $x$, then we obtain a coordinate system $(z_1,\dots, z_n, t)$ of the total space $X_0$ at $x$ by adding $t$. This induces in particular a trivialization of the relative canonical bundle with respect to which the expression of the $m$-Bergman metric becomes \begin{equation}\label{cy1} e^{\phi_{X/Y}(z, t)}=\frac{|f(z, t)|^{2/m}}{\int_{X_t}|u|^{2/m}}\, , \end{equation} where the notations are as follows: $u$ is any non-zero section of $mK_{X_t}$, and $|u|^{2/m}$ is the corresponding volume element on $X_t$. As we have seen, $u$ admits an extension to the $p$-inverse image of a small open set centered at $t$. Locally near $x$ we write $\displaystyle u= f \frac{\left(dz\wedge dt\right)^{\otimes m}}{\left(dt\right)^{\otimes m}}$ for some holomorphic function $f$.
This identifies with $f\left(dz\right)^{\otimes m}$, so we see that we have \begin{equation}\label{cy2} c_nu^0\wedge \bar u^0 e^{\phi_t(z)}= \frac{|f(z, t)|^{2/m}}{\int_{X_t}|u|^{2/m}} d\lambda(z), \end{equation} from which the normalization condition \eqref{cy0} follows by integrating over $X_t$, since locally $|u|^{2/m}=|f|^{2/m}\,d\lambda$. It follows from \cite{BP} that $\phi_{X/Y}$ is psh. The restriction $\displaystyle i\partial\dbar\phi_{X/Y}|_{X_t}$ is a closed positive current whose cohomology class is zero. We therefore infer that $\displaystyle i\partial\dbar\phi_{X/Y}|_{X_t}= 0$ and Proposition \ref{m-Ber} is proved.\end{proof} \noindent The rest of the argument now goes as in the canonically polarized case. We put $$ \phi_q=\log \|u^q\|^2. $$ Then $\phi_0=0$ by construction, and for $q\geq 1$ we get from Theorem 7.10 that $$ i\partial\dbar\phi_q\geq (e^{\phi_q-\phi_{q-1}}-e^{\phi_{q+1}-\phi_q})(idt\wedge d\bar t). $$ Defining $\Phi=c\sum_1^m q\phi_q$ again, we find that $$ i\partial\dbar\Phi\geq ce^{\Phi}i dt\wedge d\bar t, $$ and that $e^\Phi$ defines a metric on the tangent bundle of $Y$ with curvature bounded from above by a strictly negative constant. \medskip \begin{remark} We refer the reader to the preprint \cite{Xu} by the third author where it is shown that the curvature formula in the fiberwise flat case can be used in a different way to produce a Hermitian metric on $Y$ of strictly negative {\it bisectional} curvature. \end{remark} \subsection{Hyperbolicity} In this section we prove Theorem \ref{hypb1}. \noindent If $M$ is a complex manifold, the \emph{Kobayashi-Royden infinitesimal metric} is a Finsler metric on $T_M$ defined as \begin{equation}\label{hyp1} k(x, v):= \inf\{\lambda> 0 : \exists\, \gamma: \mathbb{D}\to M,\ \gamma(0)= x,\ \gamma^\prime(0)= v/\lambda\}. \end{equation} We take $M:= Y_0$, where we recall that $Y_0\subset Y$ was the set of regular values of the map $p$.
Then we have to show that \begin{equation}\label{hyp2} \frac{k(x, v)}{|v|}\geq C_0 > 0, \end{equation} where the norm of the vector $v$ in \eqref{hyp2} is measured with respect to a smooth metric on $Y$. By a result of H. Royden, cf. \cite{Roy}, the function $k$ is upper semi-continuous on $T_{Y_0}$. As a consequence, it would be enough to obtain a lower bound as required in \eqref{hyp2} on a dense subset of $Y_0$. \medskip \noindent This, however, is a direct consequence of the results in subsections \ref{CaPo} and \ref{CY} respectively, as we see next. \begin{thm}\label{consec} We assume that the hypotheses of Theorem \ref{hypb1} are satisfied. Then there exists a subset ${\mathcal B}\subset Y$ such that: \begin{enumerate} \smallskip \item[{\rm (1)}] The complement $Y\setminus {\mathcal B}$ is a closed analytic set. \smallskip \item[{\rm (2)}] There exists a Finsler metric on ${\mathcal B}$, locally bounded from above at each point of $Y_0\setminus{\mathcal B}$, whose holomorphic sectional curvature is bounded from above by $-C$. \end{enumerate} \end{thm} \noindent It is clear that the properties (1) and (2) above together with the Ahlfors-Schwarz lemma (see e.g. \cite{JPhyp}) imply Theorem \ref{hypb1}. In what follows we indicate the proof of Theorem \ref{consec}. \begin{proof} We define the set ${\mathcal B}\subset Y_0$ as in the introduction, such that the restriction ${\mathcal H}^{n-i, i}|_{{\mathcal B}}$ is a vector bundle whose fiber at $t$ is $H^{n-i,i}(X_t, L|_{X_t})$. \noindent The Finsler metric defined in \ref{CaPo} and \ref{CY} has the required curvature property. The only thing to be checked is that the functions $\phi_q$ are locally bounded from above near each point of $Y_0\setminus {\mathcal B}$. If the base $Y$ has dimension one, we argue as follows. The function $\phi_q$ is the logarithm of the squared norm of the $q^{\rm th}$ contraction of the tautological section $u^0$ with $\bar\partial V$, where $V$ is the horizontal lift of $\frac{\partial}{\partial t}$.
The norm of the harmonic representative of a cohomology class is smaller than the norm of any other representative, and since we are ``far'' from the singularities of $p$, the boundedness statement in (2) follows. These arguments are easily adapted in the case of a higher dimensional base, and Theorem \ref{consec} is proved. \end{proof} \section{Extension of the metric} \noindent In this section the set-up is as follows. We are given a surjective map $p: X\to Y$ between two smooth projective manifolds, such that the fibers of $p$ are connected (this last condition is not really necessary). Such a map will be referred to as an \emph{algebraic fiber space}. A map $p$ of this kind will not be a submersion in general, so we cannot directly use the results obtained in the previous sections. In order to formulate our next results, we will recall next the notion of the \emph{logarithmic tangent bundle}, cf. \cite{Del}, which offers the right context to deal with the possible singularities of $p$. \subsection{Notations, conventions and statements} Let $(W_\alpha)_{\alpha\in J}$ be a finite set of non-singular hypersurfaces of $X$ which have transverse intersections. The logarithmic tangent bundle $T_X\langle W\rangle$ is the vector bundle whose local frame in a coordinate set $U$ is given by \begin{equation}\label{mmp1} z_1\frac{\partial}{\partial z_1},\dots, z_k\frac{\partial}{\partial z_k}, \frac{\partial}{\partial z_{k+1}},\dots, \frac{\partial}{\partial z_{n}} \end{equation} where $z_1,\dots, z_n$ are coordinates defined on $U$, such that $W_j\cap U= (z_j= 0)$ for $j=1,\dots, k$. Here we assume implicitly that only $k$ among the hypersurfaces $W_\alpha$ intersect the coordinate set $U$. We remark that the logarithmic tangent bundle is a subsheaf of $T_X$; its dual is the logarithmic cotangent bundle $\Omega_X\langle W\rangle$. \smallskip \noindent Throughout the current section we will observe the following conventions.
\noindent $\bullet$ We denote by $\Sigma\subset Y$ the set of singular values of $p$. Let $\Delta\subset \Sigma$ be the union of the codimension-one components of $\Sigma$. We assume that the components of $\Delta$ are smooth and have transverse intersections. Note that the closure of the difference $\Sigma\setminus \Delta$ is a set of codimension at least two. \noindent $\bullet$ Let $B\subset Y\setminus \Sigma$ be the Zariski open subset of $Y$ on which the dimension of the relevant cohomology groups \eqref{higher50} is constant. \smallskip \noindent $\bullet$ We assume that there exists a Zariski open subset ${\mathcal Y}_0\subset Y$ whose complement has codimension at least two, such that if we denote ${\mathcal X}_0:= p^{-1}({\mathcal Y}_0)$ then the following holds. For any $x_0$ and $y_0$ in ${\mathcal X}_0$ and ${\mathcal Y}_0$ respectively, such that $p(x_0)= y_0$, we have local coordinates $(z_1,\dots, z_{n+m})$ centered at $x_0$ and $(t_1,\dots, t_m)$ centered at $y_0$ with respect to which the map $p$ is given by \begin{equation}\label{mmp2} (z_1,\dots, z_{m+n})\to (z_{n+1},\dots, z_{n+m-1}, z_{n+m}^{b_{n+m}}\prod_{j=1}^qz_j^{b_j}) \end{equation} where the $b_l\geq 1$ above are positive integers and $q\leq n$. Moreover, locally near $y_0$ the divisor $\Delta$ is given by the equation $t_m= 0$. \smallskip \begin{remark} Given an algebraic fiber space $f:{\widetilde X}\to Y$, part of the properties above can be achieved after a modification $\pi: X\to {\widetilde X}$ of the total space along the inverse image of the discriminant of $f$. Also, if the discriminant $\Delta$ is not a simple normal crossing divisor, we consider a log-resolution of $(Y, \Delta)$, together with the corresponding fibered product. Hence up to such transformations we can assume without loss of generality that the bullets above hold true for the maps $p$ we are considering.
\end{remark} \medskip \noindent We will denote by $T_Y\langle\Delta\rangle$ the logarithmic tangent bundle of $(Y, \Delta)$ described locally as in \eqref{mmp1}; let $\Omega_Y\langle\Delta\rangle$ be its dual. \medskip \noindent We have the following easy and well-known statement, a consequence of the bullets above. \begin{lma} We have a natural morphism of vector bundles \begin{equation}\label{mmp3} p^\star\left(\Omega_Y\langle\Delta\rangle\right)\to \Omega_X\langle W\rangle \end{equation} which is defined and injective on ${\mathcal X}_0$. \end{lma} \begin{proof} The verification is immediate: in the complement of the divisor $\Delta$ things are clear. Let $x_0$ and $y_0$ in $X$ and $Y$ respectively be two points as in the third bullet above. We have the coordinates $(z_i)$ and $(t_j)$ with respect to which the map $p$ is expressed as in \eqref{mmp2}. Then we have \begin{equation}\label{mmp50'} p^\star(dt_j)= {dz_{n+j}} \end{equation} for $j= 1,\dots, m-1$ and, by taking the logarithmic differential of the relation $t_m\circ p= z_{n+m}^{b_{n+m}}\prod_{j=1}^qz_j^{b_j}$, \begin{equation}\label{mmp51} p^\star\left(\frac{dt_m}{t_m}\right)= b_{n+m}\frac{dz_{n+m}}{z_{n+m}}+ \sum_{i=1}^qb_{i}\frac{dz_{i}}{z_{i}}. \end{equation} Thus the lemma is proved, given that the right hand side of \eqref{mmp51} is a local section of $\Omega_X\langle W\rangle$. \end{proof} \noindent We note that the map \eqref{mmp3} is an injection of sheaves on $X$, but in order to obtain an injection of vector bundles, in general we have to restrict to ${\mathcal X}_0$ (as one sees by considering the blow-up of a point in ${\mathbb C}^2$). \medskip \noindent Let $\Omega_{X/Y}\langle W\rangle$ be the cokernel of \eqref{mmp3}, so that we have the exact sequence \begin{equation}\label{mmp4} 0\to p^\star\left(\Omega_Y\langle\Delta\rangle\right)\to \Omega_X\langle W\rangle\to \Omega_{X/Y}\langle W\rangle\to 0 \end{equation} on the open set ${\mathcal X}_0\subset X$.
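\medskip \noindent To illustrate the sequence \eqref{mmp4}, consider the simplest model case $n= m= q= 1$: here $X$ is a surface, $Y$ is a curve, and the map is given locally by $p(z_1, z_2)= z_2^{b_2}z_1^{b_1}$. The bundle $\Omega_X\langle W\rangle$ has the local frame $\displaystyle \frac{dz_1}{z_1}, \frac{dz_2}{z_2}$, and the image of \eqref{mmp3} is generated by
\[
p^\star\left(\frac{dt}{t}\right)= b_1\frac{dz_1}{z_1}+ b_2\frac{dz_2}{z_2}.
\]
The quotient $\Omega_{X/Y}\langle W\rangle$ is therefore a line bundle, generated by the class of $\displaystyle \frac{dz_1}{z_1}$ or, equivalently, by the class of $\displaystyle \frac{dz_2}{z_2}$, the two being proportional modulo the relation above.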
\noindent For further use, we next give the expression of a local frame of the bundle $\displaystyle \Omega_{X/Y}\langle W\rangle$ with respect to the coordinates $z$ and $t$ considered above. Let $U\subset {\mathcal X}_0$ be the open set on which the functions $z_j$ are defined. Then the local frame of $\displaystyle \Omega_{X}\langle W\rangle|_U$ is given by \begin{equation}\label{mmp8} \frac{dz_1}{z_1},\dots, \frac{dz_q}{z_q}, dz_{q+1},\dots, dz_{n+m-1}, \frac{dz_{n+m}}{z_{n+m}}. \end{equation} Thus, the local frame of $\displaystyle \Omega_{X/Y}\langle W\rangle|_U$ is given by the symbols \begin{equation}\label{mmp9} \frac{dz_1}{z_1},\dots, \frac{dz_q}{z_q}, dz_{q+1},\dots, dz_{n}, \frac{dz_{n+m}}{z_{n+m}} \end{equation} modulo the relation \begin{equation}\label{mmp10} b_{n+m}\frac{dz_{n+m}}{z_{n+m}}+ \sum_{j=1}^qb_j\frac{dz_{j}}{z_{j}}= 0. \end{equation} The edge morphism corresponding to the direct image of the dual of \eqref{mmp3} gives the analogue of the Kodaira-Spencer map in the logarithmic setting, as follows \begin{equation}\label{mmp5} ks: T_Y\langle \Delta\rangle\to {\mathcal R}^1p_\star T_{X/Y}\langle W\rangle. \end{equation} It turns out that we have \begin{equation}\label{mmp6} T_{X/Y}\langle W\rangle\simeq \Omega_{X/Y}^{n-1}\langle W\rangle\otimes K_{X/Y}^{-1}\otimes {\mathcal O}(p^\star(\Delta)- W), \end{equation} on ${\mathcal X}_0$, hence we can rewrite \eqref{mmp5} as follows \begin{equation}\label{mmp5'} ks: T_Y\langle \Delta\rangle\to {\mathcal R}^1p_\star \left(\Omega_{X/Y}^{n-1}\langle W\rangle\otimes L\right), \end{equation} where the twisting bundle is given by \begin{equation}\label{mmp10'} L:= K_{X/Y}^{-1}\otimes {\mathcal O}\left(p^\star(\Delta)- W\right). \end{equation} Hence, this fits perfectly into the framework developed in this article.
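\medskip \noindent We note that the isomorphism \eqref{mmp6} follows from standard linear algebra. For the rank $n$ bundle $E:= \Omega_{X/Y}\langle W\rangle$ we have $E^\star\simeq \Lambda^{n-1}E\otimes \left(\det E\right)^{-1}$, and by taking determinants in the exact sequence \eqref{mmp4} we obtain
\[
\det \Omega_{X/Y}\langle W\rangle\simeq K_{X/Y}\otimes {\mathcal O}\left(W- p^\star(\Delta)\right)
\]
on ${\mathcal X}_0$, given that $\det \Omega_X\langle W\rangle= K_X\otimes{\mathcal O}(W)$ and $\det \Omega_Y\langle\Delta\rangle= K_Y\otimes{\mathcal O}(\Delta)$. The combination of these two remarks gives \eqref{mmp6}.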
\smallskip In general, given any bundle $L$ and an index $i\geq 0$ we have a map \begin{equation}\label{mmp6'} \tau^i: {\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right)\to {\mathcal R}^{i+1}p_\star\left(\Omega^{n-i-1}_{X/Y}\langle W\rangle\otimes L\right)\otimes \Omega_Y\langle\Delta\rangle \end{equation} obtained by contraction with \eqref{mmp5}. A local section of ${\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right)$ is represented by a $(0,i)$-form with values in the $(n-i)$-th exterior power of the relative logarithmic cotangent bundle twisted with $L$. We couple this form with the $(0,1)$-form with values in the logarithmic tangent bundle of $X$ corresponding to a local section of $T_Y\langle \Delta\rangle$ via \eqref{mmp5}. The result is a $(0,i+1)$-form with values in the $(n-i-1)$-th exterior power of the logarithmic cotangent bundle twisted with $L$. This is the map \eqref{mmp6'}; as we see, it is only defined in the complement of a set of codimension at least two in $Y$. We denote by ${\mathcal K}^i$ its kernel; it is a subsheaf of $\displaystyle {\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right)$. We can assume that the restriction ${\mathcal K}^i|_B$ is a subbundle of $\displaystyle {\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right)\big|_B$. \begin{remark}\label{noyau} In view of our main formula (cf. Theorem \ref{curvature}), it is clear that the curvature of ${\mathcal K}^i|_B$ has a better chance of being semi-negative than that of the full bundle $\displaystyle {\mathcal R}^{i}p_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right)$. Indeed, if the section $[u]$ in Theorem \ref{curvature} is a local holomorphic section of ${\mathcal K}^i|_B$, then the fiberwise projection of the $\eta_j$'s on the space of harmonic forms is identically zero.
\end{remark} \medskip \noindent The sheaves ${\mathcal K}^i$ have been extensively studied in algebraic geometry, cf. \cite{Kang}, \cite{BruneB} and the references therein. We will be concerned here with their differential geometric properties. To this end, we formulate the requirements below concerning the Hermitian bundle $(L, h_L)$. \smallskip \noindent $\left({\mathcal H}_1\right)$ We have $\displaystyle i\Theta_{h_L}(L)\leq 0$ on $X$ and moreover $\displaystyle i\Theta_{h_L}(L)|_{X_y}= 0$ for each $y$ in the complement of some Zariski closed set. \smallskip \noindent $\left({\mathcal H}_2\right)$ We have $\displaystyle i\Theta_{h_L}(L)\leq 0$ on $X$ and moreover there exists a K\"ahler metric $\omega_Y$ on $Y$ such that we have $\displaystyle i\Theta_{h_L}(L)\wedge p^\star\omega_Y^m\leq -\varepsilon_0\omega\wedge p^\star\omega_Y^m$ on $X$. \smallskip \noindent Thus the first condition requires $L$ to be semi-negative and trivial on fibers, whereas in $\left({\mathcal H}_2\right)$ we assume that $L$ is uniformly strictly negative on fibers, in the sense that we have \begin{equation}\label{mp0301} i\Theta_{h_L}(L)|_{X_y}\leq - \varepsilon_0\omega|_{X_y} \end{equation} for any regular value $y$ of the map $p$. \medskip \noindent As in the introduction, we denote by ${\mathcal K}^i_f$ the quotient of ${\mathcal K}^i$ by its torsion subsheaf. In this context we have the following result. \begin{thm}\label{kernels, I} Let $p:X\to Y$ be an algebraic fiber space, and let $(L, h_L)$ be a Hermitian line bundle which satisfies one of the hypotheses $\left({\mathcal H}_i\right)$ above. We assume that the restriction of $h_L$ to the generic fiber of $p$ is non-singular. Then: \begin{enumerate} \item[(a)] For each $i\geq 1$ the bundle $\displaystyle {\mathcal K}^i_f$ admits a semi-negatively curved singular Hermitian metric. Moreover, this metric is smooth on a Zariski open subset of $Y$.
\smallskip \item[(b)] We assume that the curvature form of $L$ is smaller than $-\varepsilon_0p^\star\omega_Y$ on the $p$-inverse image of some open subset $\Omega\subset Y$. Then the metric on $\displaystyle {\mathcal K}^i_f$ is strongly negatively curved on $\Omega$ (and semi-negatively curved in the complement of a subset of $Y$ of codimension at least two). \end{enumerate} \end{thm} \smallskip \noindent The method of proof of Theorem \ref{kernels, I} also gives the following statement, which is potentially important in the analysis of families of holomorphic disks tangent to the pair $(Y, \Delta)$. Prior to stating this result, we define the ``iterated Kodaira-Spencer map'', cf. \cite{Schumacher} \begin{equation}\label{mmp58} ks^{(i)}: \Sym^iT_Y\langle \Delta\rangle\otimes {\mathcal R}^0p_\star\left(\Omega^{n}_{X/Y}\langle W\rangle\otimes L\right) \to {\mathcal R}^ip_\star\left(\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L\right) \end{equation} by contracting successively the sections of $\displaystyle \Omega^{\bullet}_{X/Y}\langle W\rangle\otimes L$ with the {$T_{X/Y}\langle W\rangle$-valued $(0,1)$-forms} given by $ks$ in \eqref{mmp5} (we remark that the contraction operations are commutative, and this is the reason why the map \eqref{mmp58} is defined on $\Sym^i$ rather than $\otimes^i$ of the log tangent bundle of the base). \begin{prop}\label{ks} We assume that the singularities of the map $p$ are contained in the snc divisor $\Delta$. Let $\omega_{{\mathcal P}, X}$ and $\omega_{{\mathcal P}, Y}$ be two K\"ahler metrics with Poincar\'e singularities on $(X, W)$ and $(Y, \Delta)$ respectively. We assume that the metric $h_L$ of $L$ is smooth when restricted to the $p$-inverse image of a Zariski open subset of $Y$, and that its weights are bounded from below.
Then for each $i\geq 1$ we have \begin{equation}\label{mmp52} \Vert ks^{(i)}\Vert^{2/i}_y\leq C\log^N\frac{1}{|s_{\Delta, y}|^2}, \end{equation} where the norm \eqref{mmp52} is induced by the metrics $\omega_{{\mathcal P}, X}$ and $\omega_{{\mathcal P}, Y}$. The constants $C, N$ depend on the various choices involved, but not on the point $y\in Y$. \end{prop} \begin{remark} The proof of Proposition \ref{ks} is very similar to the arguments we will give for Theorem \ref{kernels, I}, and we have decided that we can afford to skip it. We will however highlight the main points after completing the arguments for Theorem \ref{kernels, I}. \end{remark} \medskip \noindent Assume that the bundle $(L, h_L)$ verifies one of the hypotheses $\left({\mathcal H}_i\right)$, and that moreover we have \begin{equation}\label{mmp30} H^0\left(X, \Omega^{n}_{X/Y}\langle W\rangle\otimes L\right)\neq 0. \end{equation} If $\sigma$ is a holomorphic section of $\displaystyle \Omega^{n}_{X/Y}\langle W\rangle\otimes L$, then we have a holomorphic map \begin{equation}\label{mmp31} ks^{(1)}_\sigma: {\mathcal O}_Y\to {\mathcal R}^{1}p_\star\left(\Omega^{n-1}_{X/Y}\langle W\rangle\otimes L\right)\otimes \Omega_Y\langle\Delta\rangle \end{equation} induced by the map $ks^{(1)}$, cf. \eqref{mmp58}. \medskip \noindent We formulate yet another hypothesis. \smallskip \noindent $\left({\mathcal H}_3\right)$ There exists a holomorphic section $\sigma$ of $\displaystyle \Omega^{n}_{X/Y}\langle W\rangle\otimes L$ such that the map $\displaystyle ks^{(1)}_\sigma$ is not identically zero when restricted to a non-empty open subset of $B$. \medskip \noindent The following result is a corollary of Theorem \ref{kernels, I}. \begin{thm}\label{kernels, II} Let $p:X\to Y$ be an algebraic fiber space, and let $(L, h_L)$ be a Hermitian line bundle which satisfies one of the hypotheses $\left({\mathcal H}_j\right)$ for $j=1, 2$. Moreover, we assume that $\left({\mathcal H}_3\right)$ is equally satisfied.
Then there exists $s\leq n= \dim (X_y)$ and a non-trivial map \begin{equation}\label{mmp31'} {\mathcal K}^{s\star}_f\to \Sym^s \Omega_Y\langle \Delta\rangle. \end{equation} In addition, if the curvature of $L$ is smaller than $-\varepsilon_0p^\star\omega_Y$ on the pre-image of some non-empty open subset $V$ of $Y$, then there exists an ample line bundle $A$ on $Y$ such that the bundle $$\otimes^M\Omega_Y\langle \Delta\rangle\otimes A^{-1}$$ has a (non-identically zero) global section, for some $M\gg 0$. \end{thm} \medskip \noindent Our next results will show that in some interesting geometric circumstances a version of the hypothesis $\left({\mathcal H}_2\right)$ is verified. If the canonical bundle of the generic fiber of $p: X\to Y$ is ample, then we say that the family defined by $p$ is \emph{canonically polarized}. If the Chern class of the canonical bundle of the generic fiber of $p$ equals zero, then we say that $p$ defines a \emph{Calabi-Yau} family. Next, a canonically polarized or Calabi-Yau family $p$ has \emph{maximal variation} if the Kodaira-Spencer map \eqref{mmp5} is injective when restricted to a non-empty open subset of $Y$. \medskip \noindent Then we have the following result, which provides a (geometric) sufficient condition under which the anti-canonical bundle of $p$ has strong curvature properties. \begin{thm}\label{kernels} Let $p:X\to Y$ be a family of canonically polarized manifolds of maximal variation, and let $\omega$ be a reference metric on $X$. Then there exists an effective divisor $\Xi$ in $X$ and a metric $h_{X/Y}$ on the twisted relative canonical bundle $K_{X/Y}+ \Xi$ such that the following hold true. \begin{enumerate} \item[(i)] The curvature corresponding to $h_{X/Y}$ is greater than $\varepsilon_0\omega$ for some $\varepsilon_0> 0$ and the restriction $h_{X/Y}|_{X_y}$ is non-singular for all $y$ in the complement of a Zariski closed subset of $Y$.
\smallskip \item[(ii)] The codimension of the direct image $p(\Xi)$ is greater than two. \end{enumerate} \end{thm} \begin{remark}Given that $\displaystyle K_{X_y}$ is ample, a natural choice for the construction of a positively curved metric on $K_{X/Y}$ would be the fiberwise KE metric, cf. the previous section; this is well defined in the complement of the singular locus of $p$, and it extends across the singularities, cf. \cite{Paun}. However, it is not clear to us whether it is possible to obtain a \emph{useful} lower bound of the eigenvalues of the KE metric with respect to a fixed K\"ahler metric on $X$ as we are approaching the singular locus of $p$. This information is critical in the process of extending the metric, as we will see below. \end{remark} \medskip \noindent In order to derive an interesting consequence of Theorem \ref{kernels} we recall the following result. \begin{thm}\label{bo_powers}\cite{BP} We assume that the hypotheses of Theorem \ref{kernels} are satisfied. Then there exists a metric $h_{X/Y}^{(1)}$ on the relative canonical bundle $K_{X/Y}$ with the following properties. \begin{enumerate} \item[(i)] The restriction of $h_{X/Y}^{(1)}$ to the fiber $X_y:= p^{-1}(y)$ is smooth. It is induced by the sections of $\displaystyle mK_{X_y}$ for some $m\gg 0$, for all $y$ in the complement of a proper algebraic subset $Z\subset Y$. \smallskip \item[(ii)] The corresponding curvature current $\Theta$ is positive definite on each compact subset of $p^{-1}(Y\setminus Z)$, and moreover we have \begin{equation}\label{031001mp} \Theta\geq \sum(t^j- 1)[W_j] \end{equation} in the sense of currents on $X$. \end{enumerate} \end{thm} \noindent In the statement of Theorem \ref{bo_powers} we denote by $t^j$ the multiplicity of the hypersurface $W_j$ in the inverse image $p^{-1}(\Delta)$. By combining Theorem \ref{kernels} and Theorem \ref{bo_powers}, we obtain the following result. \begin{cor}\label{approx} Let $\eta> 0$ be a positive real number.
We assume that the hypotheses of Theorem \ref{kernels} are satisfied, and we define the following family of metrics: \begin{equation}\label{031002mp} \varphi_{X/Y}^{(\eta)}:= \eta\varphi_{X/Y}+ (1- \eta)\varphi_{X/Y}^{(1)}- \sum(t^j-1)\log |f_j|^2 \end{equation} on the bundle $K_{X/Y}- \sum (t^j-1)W_j$, where $\varphi_{X/Y}$ and $\varphi_{X/Y}^{(1)}$ are the weights given by Theorem \ref{kernels} and Theorem \ref{bo_powers}, respectively, and $f_j$ is a local equation of the hypersurface $W_j$. The resulting metrics $h_{X/Y}^{(\eta)}$ have the following properties. \begin{enumerate} \item[(a)] The curvature current $\Theta_\eta$ verifies $$\Theta_\eta\geq \eta\varepsilon_0\omega- \eta\sum(t^j- 1)[W_j]- \eta[\Xi].$$ \smallskip \item[(b)] Each of the metrics $h_{X/Y}^{(\eta)}$ is smooth when restricted to the generic fiber of $p$. \smallskip \item[(c)] For each compact subset $K\subset X\setminus p^{-1}(\Delta)$ we have $\Theta_\eta|_K\geq C_K\omega|_K$, where $C_K> 0$ is independent of $\eta$. \end{enumerate} \end{cor} \noindent Thus, in the set-up of Theorem \ref{kernels} the point (a) of Corollary \ref{approx} shows that a version of hypothesis $\left({\mathcal H}_2\right)$ is satisfied. We will show at the end of this section that this suffices to infer that Theorem \ref{kernels, I} holds true for $L:= -K_{X/Y}+ \sum (t^j-1)W_j$, endowed with the metric deduced from $h_{X/Y}^{(1)}$. \medskip \noindent The following important result due to Viehweg-Zuo (cf. \cite{VZ03}) can now be seen as a direct consequence of Theorems \ref{kernels, I}, \ref{kernels, II} and \ref{kernels}. \begin{thm}\label{VZ}\cite{VZ03} Let $p:X\to Y$ be a family of canonically polarized manifolds of maximal variation. Then there exists a positive integer $q\leq \dim(Y)$ such that the bundle $\Sym^q\Omega_Y\langle \Delta\rangle$ contains a non-trivial big coherent subsheaf.
\end{thm} \medskip \noindent As a by-product of our methods it turns out that a completely similar result holds in the context of Calabi-Yau families. \begin{thm}\label{CY17} Let $p:X\to Y$ be a Calabi-Yau family. Assume that $p$ has maximal variation. Then there exists a positive integer $q\leq \dim(Y)$ such that the bundle $\Sym^q\Omega_Y\langle \Delta\rangle$ contains a non-trivial big coherent subsheaf. \end{thm} \medskip \noindent In the following subsections we will establish the results stated above. \subsection{Proof of Theorem \ref{kernels, I}} Our first task will be to construct a metric on the quotient \begin{equation}\label{corrmp1} {\mathcal K}^i_f:= {\mathcal K}^i/T({\mathcal K}^i) \end{equation} of the kernel by its torsion subsheaf. This will be naturally induced by the metric on the direct image, thanks to the following simple observation. \begin{lma}\label{torsquot} The support of the torsion subsheaf $T({\mathcal K}^i)$ is contained in $Y\setminus B$. \end{lma} \noindent This is clear, since ${\mathcal K}^i|_B$ is a vector bundle. \smallskip \noindent Next, the sheaf ${\mathcal K}^i_f$ is coherent and torsion-free, therefore it is locally free on an open subset ${\mathcal Y}_0$ whose complement in $Y$ has codimension at least two. We then define a metric on the restriction ${\mathcal K}^i_f|_B$ as follows. Let $s_1, s_2$ be local sections of ${\mathcal K}^i_f$ defined on an open set $V$ such that the sequence \begin{equation}\label{corrmp3} 0\to \Gamma\left(V, T({\mathcal K}^i)\right)\to\Gamma\left(V, {\mathcal K}^i\right)\to \Gamma\left(V, {\mathcal K}^i_f\right)\to 0 \end{equation} is exact. We consider two sections $u_1, u_2\in \Gamma\left(V, {\mathcal K}^i\right)$ projecting into $s_1$ and $s_2$, respectively, via \eqref{corrmp3}. Then for each $t\in B\cap V$ we define \begin{equation}\label{corrmp4} \langle s_1, s_2\rangle_t:= \langle u_1, u_2\rangle_t.
\end{equation} We note that this does not depend on the choice of the $u_i$'s, by Lemma \ref{torsquot}, and therefore we have a well-defined Hermitian structure on ${\mathcal K}^i_f|_B$. \smallskip \noindent Next, we will use the Leray isomorphism in order to construct a representative of a local section $[u]$ of ${\mathcal K}^i$. Since this is slightly different from the convention adopted in the previous sections, we give a few details in what follows. Assume that $[u]$ is defined on a small enough open subset $V\subset Y$ containing a smooth point of the divisor $\Delta$. In particular, $[u]$ corresponds to an element of the cohomology group \begin{equation}\label{corrmp10} H^i\left(p^{-1}(V), \Omega^{n-i}_{X/Y}\langle W\rangle\otimes L|_{p^{-1}(V)}\right). \end{equation} Let ${\mathcal U}= (U_\alpha)$ be a finite cover of $X$, such that the intersections $U_{I}:= U_{\alpha_0}\cap\dots\cap U_{\alpha_s}$ are contractible for all multi-indices $I= (\alpha_0<\dots< \alpha_s)$ having $s+1$ components and all $s$. The section $[u]$ corresponds to a collection of holomorphic sections $(u_I)$ of the bundle $$\displaystyle \Omega^{n-i}_{X/Y}\langle W\rangle\otimes L|_{U_I}$$ whose \v{C}ech coboundary is equal to zero (i.e. it is a cocycle). Here the multi-index $I$ has $i+1$ components. We can assume that the map \begin{equation}\label{corrmp11} \Omega^{n-i}_{X}\langle W\rangle\otimes L|_{U_I}\to \Omega^{n-i}_{X/Y}\langle W\rangle\otimes L|_{U_I} \end{equation} is surjective at the level of sections. Let $\widetilde u_I$ be a lifting of the section $u_I$ via the map \eqref{corrmp11}. Each $\widetilde u_I$ is an $L$-valued $(n-i)$-form with log poles on $U_I$. We note that in general the collection $(\widetilde u_I)$ is not necessarily a cocycle. However, this is the case for the restriction $\displaystyle (\widetilde u_I|_{U_I\cap X_t})$ for each $t\in B$, and this identifies with the class $[u_t]$.
We apply the Leray isomorphism procedure to $(\widetilde u_I)$; this gives a $(0, i)$-form with values in $\displaystyle \Omega^{n-i}_{X}\langle W\rangle\otimes L|_{p^{-1}(V)}$ which we denote by $\widetilde u$. The exact formula is as follows \begin{equation}\label{corrmp12} \widetilde u= \sum_I\rho_{\alpha_s}\widetilde u_{\alpha_0\dots \alpha_s}\bar\partial \rho_{\alpha_0}\wedge \dots\wedge \bar\partial \rho_{\alpha_{s-1}} \end{equation} where $(\rho_\alpha)$ is a partition of unity subordinate to the Leray cover ${\mathcal U}$. In the usual formula, we do not have the factor $\rho_{\alpha_s}$ in \eqref{corrmp12}, since the forms defined on the open sets $U_{\alpha_s}$ coincide on intersections. As already mentioned, here $(\widetilde u_I)$ is not necessarily a cocycle, but this is the case for the projection $(u_I)$. Since $\sum_\alpha \rho_\alpha=1$, the image of $\widetilde u$ in the space of $(0,i)$-forms with values in $\Omega^{n-i}_{X/Y}\langle W\rangle\otimes L|_{p^{-1}(V)}$ is $\bar\partial$-closed and it represents the class $[u]$. \smallskip \noindent -- \emph{Convention.}-- For the rest of this section, we will call $\widetilde u$ a representative of $[u]$. Also, the representative of a section $u$ of ${\mathcal K}^i_f$ will be by definition the representative of a section of ${\mathcal K}^i$ projecting into $u$. \smallskip \noindent Let $\widetilde u$ be a fixed representative of a local section of the bundle ${\mathcal K}^i_f$, which is defined near a smooth point $y_0\in \Delta$. Thus $\widetilde u$ is a $(0, i)$-form with values in the bundle $\displaystyle \Omega^{n-i}_{X}\langle W\rangle\otimes L|_{p^{-1}(V)}$, such that the following properties are satisfied for any $t\in B\cap V$. \noindent $\bullet$ The restriction of the form $\widetilde u$ to the fibers $X_t$ is $\bar\partial$-closed. \smallskip \noindent $\bullet$ The cup-product of $\widetilde u$ with any element in the image of $ks$ is cohomologically zero when restricted to $X_t$.
\smallskip \noindent We first assume that the bundle $(L, h_L)$ verifies the hypothesis $\left({\mathcal H}_2\right)$. Let $\theta:= -\Theta_{h_L}(L)$ be the corresponding curvature form. Then we have \begin{equation}\label{mmp11} \sqrt{-1}c_{jk}(\theta)dt_j\wedge d\overline{t}_k\geq 0 \end{equation} and given that $\widetilde u$ is a section of the kernel, the fiberwise projection of the (troublesome) forms $\eta_j$ on the space of harmonic forms is equal to zero (as above, we assume that $t\in B$). Thus, for any local holomorphic section $u$ of ${\mathcal K}^i_f$ defined on an open set $V$ centered at the point $y_0$, the function \begin{equation}\label{mmp12} t\to \log\Vert [u]\Vert_t^2 \end{equation} is psh on $V\setminus \Delta$, as a consequence of Theorem \ref{curvature} (the bracket notation in \eqref{mmp12} has the same meaning as in Section 2). We will show next that we have \begin{equation}\label{mmp13} \sup_{t\in V\setminus \Delta}\log\Vert [u]\Vert_t^2<\infty \end{equation} and then Theorem \ref{kernels, I} follows as a consequence of elementary properties of psh functions. In order to establish \eqref{mmp13}, we note that by the definition of the norm of a section of ${\mathcal H}^{p, q}$ we have the inequality \begin{equation}\label{mmp14} \Vert [u]\Vert_t^2\leq \int_{X_t}\left|\widetilde u_{|X_t}\right|^2_{\theta, h_L}\theta^n \end{equation} for each $t\in V\setminus \Delta$. Hence it would be enough to bound the right hand side term of \eqref{mmp14}. \smallskip \noindent Unfortunately we cannot do this directly (as the following computations will show, the technical reason is that we do not have an upper bound for the eigenvalues of $\theta$ at our disposal) and we will proceed as follows. By hypothesis $\left({\mathcal H}_2\right)$ the form $\theta$ is semi-positive on $X$ and greater than $\varepsilon_0\omega$ on the fibers of $p$.
For each $j$ let $s_j$ be the global section whose set of zeroes is the hypersurface $W_j$ (recall that the $W_j$ are the support of the inverse image $p^{-1}(\Delta)$). They correspond locally on the open set $U$ (cf. \eqref{mmp2}) to the coordinates $z_1,\dots, z_q, z_{m+n}$. We have the decomposition $\displaystyle U:= \bigcup U_j$, where $U_j\subset U$ corresponds to the set of points for which $|z_j|$ is minimal among $|z_1|,\dots, |z_q|, |z_{n+m}|$. If $j=1,\dots, q$ then on the open set $U_j\cap X_t$ of the fiber $X_t$ we take the local coordinates $z_1,\dots, z_{j-1}, z_{n+m}, z_{j+1},\dots, z_n$. On the set $X_t\cap U_{n+m}$ the coordinates are $z_1,\dots, z_n$. \noindent There exists a constant $C> 0$ such that for each $\varepsilon> 0$, on $X_t\cap U_{n+m}$ we have \begin{equation}\label{mmp15} \theta-\varepsilon\sum_{j}dd^c\log\log\frac{1}{|s_j|^2}\geq C\sqrt{-1}\left(\sum_{j=1}^ndz_j\wedge d\overline z_j+ \varepsilon\sum_{j=1}^q\frac{dz_j\wedge d\overline z_j} {|z_j|^2\log^2|z_j|^2}\right). \end{equation} \smallskip \noindent The inequality \eqref{mmp15} is a direct consequence of the fact that $\theta$ is greater than $\varepsilon_0\omega$ on the fibers of $p$, by hypothesis. We thus change the metric on $L$ in order to have Poincar\'e singularities on the fibers. In order to apply Theorem \ref{curvature}, we perturb the metric of $L$ as follows \begin{equation}\label{mmp16} h_{L,\varepsilon}:= \left(\prod_j \log^\varepsilon\frac{1}{|s_j|^2}\right)^{-1} \left(\log^{C\sqrt \varepsilon}\frac{1}{|t_m|^2}\right)^{-1} e^{C\sqrt{\varepsilon}|t|^2}h_L \end{equation} and let $\theta_\varepsilon$ be the opposite of the resulting curvature form (whose restriction to the fibers is none other than the left hand side of \eqref{mmp15}). The constant $C\gg 0$ in \eqref{mmp16} is chosen so that $\theta_\varepsilon$ still verifies $\left({\mathcal H}_2\right)$, for each positive $\varepsilon> 0$.
We remark that the negativity induced by the first factor of \eqref{mmp16} in the expression of $\theta_\varepsilon$ is of order $\displaystyle \sum_j\frac{\varepsilon}{\log1/|s_j|^2}\omega$. This is compensated by the factors of \eqref{mmp16} depending on $t$ only. \noindent We clearly have \begin{equation}\label{mmp17} \lim_{\varepsilon\to 0}\Vert [u]\Vert^2_{\varepsilon, t}= \Vert [u]\Vert^2_t \end{equation} for every $t\in V\setminus \Delta$. In \eqref{mmp17} above we denote by $\displaystyle \Vert \cdot\Vert^2_{\varepsilon, t}$ the norm induced by the perturbed metric of $L$. The equality \eqref{mmp17} is a consequence of the usual elliptic theory, cf. \cite{KSp}, applied for each $t$. \medskip \noindent We formulate the following claim, where we recall that $\widetilde u$ is a representative of the section $u$. \smallskip \noindent{\bf Claim.} For each $\varepsilon> 0$ there exists a constant $C_\varepsilon> 0$ such that we have \begin{equation}\label{mmp18} \sup_{t\in V\setminus \Delta}|t_m|^{2\varepsilon}\int_{X_t}\left|\widetilde u_{|X_t}\right|^2_{\theta_\varepsilon, h_{L, \varepsilon}}\theta^n_\varepsilon \leq C_\varepsilon< \infty. \end{equation} \medskip \noindent If we are able to show that \eqref{mmp18} holds true, then we are done (despite the fact that the constant $C_\varepsilon$ above may not be bounded as $\varepsilon\to 0$). Indeed, this would imply that the log of the expression \begin{equation}\label{mmp22} t\to |t_m|^{2\varepsilon}\Vert [u]\Vert^2_{\varepsilon, t} \end{equation} defines a psh function on $V$ for each $\varepsilon > 0$.
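\noindent To spell out this last implication: on $V\setminus \Delta$ we write
\[
\log \left(|t_m|^{2\varepsilon}\Vert [u]\Vert^2_{\varepsilon, t}\right)= \varepsilon\log |t_m|^{2}+ \log \Vert [u]\Vert^2_{\varepsilon, t},
\]
where the first term on the right hand side is pluriharmonic and the second one is psh, by Theorem \ref{curvature} applied with the perturbed metric (recall that $\theta_\varepsilon$ still verifies $\left({\mathcal H}_2\right)$). Since the sum is bounded from above near $\Delta$ by \eqref{mmp18}, it extends as a psh function on $V$.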
We write the corresponding mean inequality \begin{equation}\label{mp0309} \log \left(|\tau|^{2\varepsilon}\Vert [u]\Vert^2_{\varepsilon, {(t', \tau)}}\right)\leq \varepsilon C+ \int_0^{2\pi} \!\!\!\log \Vert [u]\Vert^2_{\varepsilon, {(t', \tau_\theta)}}\frac{d\theta}{2\pi} \end{equation} where we use the notation $\tau_\theta:= \tau+ re^{\sqrt{-1}\theta}$ under the integral sign in \eqref{mp0309}, and $0< r\ll 1$. On the other hand, if $|t_m|= \delta_0> 0$, then we have \begin{equation}\label{mmp19} \sup_{|t_m|= \delta_0,\hskip 2pt t\in V}\int_{X_t}|\widetilde u_{|X_t}|^2_{\theta_\varepsilon, h_{L, \varepsilon}}\theta^n_\varepsilon \leq C_0 \end{equation} \emph{uniformly with respect to $\varepsilon$}, since we are ``away'' from the singularities of the map $p$. The mean inequality \eqref{mp0309}, combined with \eqref{mmp19}, \eqref{mmp17} and the concavity of the log function, gives the uniform boundedness of the initial norm, which is what we wanted. \smallskip \noindent In conclusion, the argument is that we will be using the non-effective estimate \eqref{mmp18} in order to infer that the function \eqref{mmp22} is log-psh. Then the uniform boundedness of our initial norm follows from the mean inequality, together with the convergence property \eqref{mmp17} as $\varepsilon\to 0$. \begin{proof} To establish the claim we will use the local frame \eqref{mmp9} in order to do a few local computations. Assume that $z$ belongs to the coordinate set $U_{n+m}\subset U$. For each $t\in V\setminus \Delta$ the local coordinates will be $z_1,\dots, z_n$. We write \begin{equation}\label{mmp23} \widetilde u|_{X_t}= \sum_{I, J}\xi_{I\overline J}(z, t)\frac{dz_\alpha}{z_\alpha}\wedge dz_\beta\wedge d\overline z_J\otimes e_L \end{equation} where $\alpha, \beta$ above are multi-indices whose union is equal to $I$; we assume that $\alpha\subset \{1,\dots q\}$ and that $\beta\subset \{q+1,\dots, n\}$. The length of $I$ and $J$ is $n-i$ and $i$, respectively.
The important remark is that the absolute values of the coefficients $\xi_{I\overline J}(\cdot, t)$ of the restriction \eqref{mmp23} are bounded uniformly with respect to $t\in V\setminus \Delta$. This is quickly seen thanks to the relation \eqref{mmp10}, and the fact that the quotients $|z_{n+m}|/|z_j|$ are bounded from above on $U_{n+m}$. \smallskip \noindent In general, given two K\"ahler metrics $g_1$ and $g_2$ such that $g_1\geq g_2$, for any form $\gamma$ of type $(n-i, i)$ we have \begin{equation}\label{mp63} |\gamma|_{g_1}\leq |\gamma|_{g_2} \end{equation} and in our setting this implies the inequality \begin{equation}\label{mmp24} \left|\widetilde u_{|X_t}\right|_{\theta_\varepsilon, h_{L,\varepsilon}}^2\leq C\frac{e^{-\varphi_{L, \varepsilon}}}{\varepsilon^2}\sum_{I, J} |\xi_{I{\overline J}}|^2 \prod_{j\in \alpha} \log^2 \left|z_j\right|^2 \end{equation} at each point of the intersection $U_{n+m}\cap X_t$, cf. \eqref{mmp15}. Next we remark that the local weights of the metric $h_L$ are bounded from below, given the curvature hypothesis; this is also true for the perturbed metric $h_{L, \varepsilon}$ (and actually the lower bound is independent of $\varepsilon$). On the other hand, for each $\varepsilon> 0$ there exists a constant $C_\varepsilon> 0$ such that we have \begin{equation}\label{mmp25} |t_m|^{2\varepsilon}\prod_{j\in \alpha} \log^2 \left|z_j\right|^2\leq C_\varepsilon \end{equation} for any $z\in X_t\cap U$, cf. \eqref{mmp2}. We thus obtain \begin{equation}\label{mmp26} |t_m|^{2\varepsilon} \left|\widetilde u_{|X_t}\right|_{\theta_\varepsilon, h_{L,\varepsilon}}^2\leq C_\varepsilon(\xi) \end{equation} where $C_\varepsilon(\xi)$ is uniform with respect to the point $t\in V\setminus \Delta$, as a consequence of \eqref{mmp24} and \eqref{mmp25}.
\noindent All in all, we infer the existence of a constant $C_\varepsilon(\xi)$ such that we have \begin{equation}\label{mmp27} |t_m|^{2\varepsilon} \int_{X_t}\left|\widetilde u_{|X_t}\right|_{\theta_\varepsilon, h_{L,\varepsilon}}^2\theta_{\varepsilon}^n\leq C_\varepsilon(\xi) \end{equation} because the volume of each fiber $(X_t, \theta_{\varepsilon})$ is bounded from above by a positive constant independent of $t$ (and $\varepsilon$). Therefore the claim is proved, and so is Theorem \ref{kernels, I} in case the line bundle $(L, h_L)$ verifies the hypothesis $\left({\mathcal H}_2\right)$. \medskip \noindent If the curvature of $L$ is trivial when restricted to the fibers of $p$, then we can use an arbitrary K\"ahler metric on $X$ to define the norm on ${\mathcal K}^i$. Given this, the proof is completely identical, and we will not provide further details. \end{proof} \begin{remark} One can also prove the claim by a slightly different argument, which avoids the use of the restriction \eqref{mmp23}. The observation is that we have \begin{equation}\label{corrmp100} \left|\widetilde u_{|X_t}\right|_{\theta_\varepsilon, h_{L,\varepsilon}}^2\leq \left|\widetilde u\right|_{\theta_\varepsilon, h_{L,\varepsilon}}^2 \end{equation} at each point of the fiber $X_t$. This follows by a simple linear algebra calculation. The right hand side of \eqref{corrmp100} can be bounded from above by using the fact that $\theta_\varepsilon$ is a metric with Poincar\'e singularities when restricted to $U$. In the inequality \eqref{mmp19}, however, we have to work with the left hand side term of \eqref{corrmp100} directly, mainly because we have to obtain an upper bound which is independent of $\varepsilon$, and the metric $h_L$ is only assumed to be strictly positively curved in the fiber directions.
\end{remark} \begin{remark} The proof of Theorem \ref{kernels, I} is much easier than the one establishing the \emph{positivity} of the direct image sheaf $${\mathcal F}:= p_\star(K_{X/Y}+ L)$$ in case $(L, h_L)$ is semi-positively curved. The reason is that in order to show that the natural metric of ${\mathcal F}$ extends across the possible singularities of the map $p$ one has to show that the function $$\displaystyle t\to \Vert [u_t]\Vert_t^2$$ is bounded \emph{from below} by a strictly positive constant, as soon as we have $\displaystyle [u_t]_{t_0}\not\in m_{0}{\mathcal F}_{t_0}$ where $t_0\in V\cap \Delta$ and $m_{0}\subset {\mathcal O}_{Y, t_0}$ is the maximal ideal associated to this point. \noindent On the other hand, in Theorem \ref{kernels, I} we assume that $L$ is endowed with a metric whose curvature satisfies very strong uniformity requirements (with respect to the fibers of $p$). As we will see in Theorem \ref{kernels}, it is not always very easy to construct such metrics, e.g. if $L= -K_{X/Y}$. \end{remark} \medskip \noindent As already mentioned, the proof of Proposition \ref{ks} is practically contained in the arguments invoked for Theorem \ref{kernels, I}. The only additional tool would be the following statement, generalizing \cite{Del}. \begin{lma}\label{mp6} Let $\Omega\subset Y$ be a small coordinate set centered at the point $y_0$. For each $i=1,\dots, m$ there exists a vector field $v_i$ defined on $p^{-1}(\Omega)$ with the following properties. \begin{enumerate} \item[(i)] We have $\displaystyle dp(v_i)= \frac{\partial}{\partial t_i}$ for $i=1,\dots, m-1$ and $\displaystyle dp(v_m)= t_m\frac{\partial}{\partial t_m}$ on $\Omega$.
\smallskip \item[(ii)] On the open set $U\subset X$ as above we have \begin{equation}\label{mp10} v_i= \frac{\partial}{\partial z_{n+i}}+ \sum_{j= 1}^q\psi^j_i\left(b_{m+n}z_j\frac{\partial}{\partial z_j}- {b_jz_{m+n}}\frac{\partial}{\partial z_{m+n}}\right)+ \sum_{l= q+1}^n\psi^l_i\frac{\partial}{\partial z_l} \end{equation} for $i=1,\dots, m-1$, as well as \begin{equation}\label{mp11} v_m= \frac{1}{b_{m+n}}z_{m+n}\frac{\partial}{\partial z_{n+m}}+ \sum_{j= 1}^q\psi^j_m\left(b_{m+n}z_j\frac{\partial}{\partial z_j}- {b_jz_{m+n}}\frac{\partial}{\partial z_{m+n}}\right)+ \sum_{l= q+1}^n\psi^l_m\frac{\partial}{\partial z_l}, \end{equation} where the functions $\psi^i_j$ are smooth on $U$. \end{enumerate} \end{lma} \begin{proof} The proof is straightforward: locally the liftings in (i) and (ii) clearly exist, given the structure of the map $p$. We glue them by a partition of unity; the logarithmic tangent bundle being invariant with respect to changes of coordinates, the conclusion follows. \end{proof} \smallskip \noindent We remark that in order to construct the vector fields as in Lemma \ref{mp6} it is enough to assume the following: \begin{enumerate} \item[{($s_1$)}] The singular values of the map $p$ are contained in a snc divisor $\Delta$; \smallskip \item[{($s_2$)}] The inverse image $p^{-1}(\Delta)$ is a divisor $W$ with snc support. \end{enumerate} \medskip \subsection{Proof of Theorem \ref{kernels, II}.} We will follow the method of Viehweg-Zuo, cf. \cite{VZ03}. \noindent In order to simplify the writing, we introduce the following notation \begin{equation}\label{mp76} E^s_{X/Y}:= {\mathcal R}^sp_\star\left(\Omega_{X/Y}^{n-s}\langle W\rangle\otimes L\right) \end{equation} for $s\geq 0$. \noindent Then for each $s\geq 0$ we have the holomorphic map \begin{equation}\label{mp69} \tau^s: E^s_{X/Y}\to E^{s+1}_{X/Y}\otimes \Omega^1_Y\langle \Delta\rangle \end{equation} as recalled at the beginning of the current section.
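\noindent For the reader's convenience, we briefly sketch the standard construction of the maps \eqref{mp69} (cf. \cite{VZ03} and the references therein). One starts with the exact sequence of logarithmic differentials \begin{equation} 0\to p^\star\Omega^1_Y\langle \Delta\rangle\to \Omega^1_X\langle W\rangle\to \Omega^1_{X/Y}\langle W\rangle\to 0 \end{equation} associated to the log smooth family $p$. Taking the $(n-s)$-th wedge power and extracting the first graded piece of the induced filtration, one obtains \begin{equation} 0\to p^\star\Omega^1_Y\langle \Delta\rangle\otimes \Omega^{n-s-1}_{X/Y}\langle W\rangle\to {\mathcal G}\to \Omega^{n-s}_{X/Y}\langle W\rangle\to 0, \end{equation} where ${\mathcal G}$ is the corresponding quotient of $\Omega^{n-s}_X\langle W\rangle$. Twisting by $L$ and taking the connecting homomorphism in the long exact sequence of higher direct images, the projection formula then yields the map $\tau^s$ in \eqref{mp69}.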
By hypothesis, there exists a section $\sigma$ of the bundle \begin{equation}\label{mmp40} K_{X/Y}+ W- p^\star(\Delta)+ L \end{equation} such that the map \begin{equation}\label{mp71} ks^{(1)}_\sigma: {\mathcal O}_Y\to E^1_{X/Y}\otimes \Omega^1_Y\langle \Delta\rangle \end{equation} given by the cup-product of $\sigma$ with the Kodaira-Spencer class is non-identically zero on a non-empty open subset of $Y$. \smallskip \noindent Then the proof of Theorem \ref{kernels, II} is obtained as follows. Let \begin{equation}\label{mp81} ks^{(j)}_\sigma: {\mathcal O}_Y\to E^{j}_{X/Y}\otimes \Sym^{j}\Omega^1_Y\langle \Delta\rangle \end{equation} be the map deduced from the iterated Kodaira-Spencer map \eqref{mmp58}. We denote by $\Pi_j$ the projection \begin{equation}\label{higher81} E^{j}_{X/Y}\otimes \Sym^{j}\Omega^1_Y\langle \Delta\rangle\to \left(E^{j}_{X/Y}/{\mathcal K}^j\right)\otimes \Sym^{j}\Omega^1_Y\langle \Delta\rangle, \end{equation} and let $i$ be the smallest integer such that $\displaystyle \Pi_{i}\circ ks^{(i)}_\sigma$ is identically zero. This means that the image of $ks^{(i)}_\sigma$ is contained in \begin{equation}\label{mp83}\displaystyle {\mathcal K}^{i}\otimes\Sym^{i}\Omega^1_Y\langle \Delta\rangle \end{equation} and that $ks^{(i)}_\sigma$ is not identically zero. Indeed, if $ks^{(i)}_\sigma$ were identically zero, then the image of $ks^{(i-1)}_\sigma$ would be contained in $\displaystyle {\mathcal K}^{i-1}\otimes\Sym^{i-1}\Omega^1_Y\langle \Delta\rangle$, i.e. $\Pi_{i-1}\circ ks^{(i-1)}_\sigma$ would vanish identically. This contradicts our choice of $i$. Note that we have $i\geq 1$ thanks to the assumption $({\mathcal H})_3$. We obtain a non-identically zero section of the bundle \eqref{mp83}. This in turn defines a non-trivial map \begin{equation}\label{mp84} {\mathcal K}_f^{i \star}\to \Sym^{i}\Omega^1_Y\langle \Delta\rangle, \end{equation} since the dual of ${\mathcal K}^i$ is the same as the dual of ${\mathcal K}^i_f$.
We remark that this already shows that the bundle $\Sym^{i}\Omega^1_Y\langle \Delta\rangle$ has a semi-positively curved subsheaf (i.e. the image of the map \eqref{mp84}). \smallskip \noindent If the curvature of $(L, h_L)$ satisfies the hypothesis $({\mathcal H}_2)$ and if moreover there exists an open subset $\Omega\subset Y$ together with some $\varepsilon_0> 0$ such that $$\Theta_{h_L}(L)\leq -\varepsilon_0p^\star (\omega_Y)$$ on $p^{-1}(\Omega)$, then the curvature properties of the kernels ${\mathcal K}^i$ are considerably better: \smallskip \noindent $\bullet$ The sheaf ${\mathcal K}^i$ admits a singular semi-negatively curved Hermitian metric which is smooth on a Zariski open subset of $Y$. \smallskip \noindent $\bullet$ For any local holomorphic section $[u]$ of the bundle $\displaystyle {\mathcal K}^i|_{\Omega}$ we have \begin{equation}\label{mmp50} dd^c\log \Vert [u]\Vert^2\geq \varepsilon_0\sum_{j=1}^m \sqrt{-1}dt_j\wedge d\overline t_j. \end{equation} \medskip \noindent In this context we recall the following result, which concludes the proof of the last part of Theorem \ref{kernels, II}. Let $Y$ be a smooth projective variety, and let ${\mathcal F}$ be a torsion free coherent sheaf on $Y$. We consider ${\mathbb P}({\mathcal F}):= {\mathcal Proj} (\bigoplus_{m \ge 0} S^m({\mathcal F}))$ the scheme over $Y$ associated to ${\mathcal F}$, together with the projection $\pi : {\mathbb P}({\mathcal F}) \to Y$. We denote by ${\mathcal O}_{{\mathcal F}}(1)$ the tautological line bundle on ${\mathbb P}({\mathcal F})$. Let $Y_1 \subset Y$ be a Zariski open subset on which ${\mathcal F}$ is locally free (in particular ${\mathbb P}({\mathcal F})$ is smooth over $Y_1$) and ${\rm codim}_{Y}(Y\setminus Y_{1}) \ge 2$. \medskip \noindent The following statement is established in \cite{PT}.
\begin{thm} \label{bigness}\cite{PT} We suppose that ${\mathcal O}_{{\mathcal F}}(1)|_{\pi^{-1}(Y_{1})}$ admits a singular Hermitian metric $g$ with semi-positive curvature, and that there exists a point $y \in Y_1$ such that ${\mathcal I}(g^k|_{{\mathbb P}({\mathcal F}_y)})={\mathcal O}_{{\mathbb P}({\mathcal F}_y)}$ for any $k>0$, where ${\mathbb P}({\mathcal F}_{y})=\pi^{-1}(y)$. Assume moreover that there exists an open neighborhood $\Omega$ of $y$ such that $\Theta_{g}\left({\mathcal O}(1)\right) - \pi^{\star}\omega_Y \ge 0$ on $\pi^{-1}(\Omega)$. Then ${\mathcal F}$ is big. \end{thm} \medskip \noindent This result together with the properties of the kernels ${\mathcal K}^j_f$ summarized in the bullets above shows that each of the ${\mathcal K}^j_f$ is big; in particular this applies to ${\mathcal K}^{i}_f$. \qed \begin{remark} The hypothesis $({\mathcal H}_i)$ (for $i=1,2$) together with the existence of a section $\sigma$ of $K_{X/Y}+ W- p^\star(\Delta)+ L$ have a few consequences for the map $p$ itself which we will now discuss. We write \begin{equation}\label{higher100} K_{X/Y}= L_1- L \end{equation} where $\displaystyle L_1:= \big(K_{X/Y}+ W- p^\star(\Delta)+ L\big)+ p^\star(\Delta)- W$. In particular, $L_1$ is an effective line bundle. As for $-L$, it is semi-positive, and either strictly positive or trivial on the fibers of $p$. Therefore, the relative canonical bundle $K_{X/Y}$ is relatively big in the first case, and pseudo-effective in the second. If moreover the curvature of $L$ verifies the requirements in the second part of Theorem \ref{kernels, II}, then for any $m$ large enough the bundle $\det p_\star(mK_{X/Y})$ is big. From this perspective, Theorem \ref{kernels, II} is similar to (but slightly weaker than) the main statement of Popa-Schnell in \cite{PoSch}. \end{remark} \subsection{Metrics on the relative canonical bundle} \noindent In this section we will establish the following result; afterwards we will show that it implies Theorem \ref{kernels}.
\begin{thm}\label{mp5} Let $p:X\to Y$ be an algebraic fiber space, and let $\omega$ be a fixed, reference metric on $X$. We assume that the following properties are satisfied. \begin{enumerate} \item[(1)] The canonical bundle of the generic fiber of $p$ is ample. \smallskip \item[(2)] There exists a positive integer $m\gg 0$ such that the line bundle $$\det p_\star(mK_{X/Y})$$ is big. \end{enumerate} \noindent Then there exists an effective, $p$-contractible divisor $\Xi$ on $X$ and a singular metric $\displaystyle h_{X/Y}= e^{-\varphi_{X/Y}}$ on $K_{X/Y}+ \Xi$ such that: \begin{enumerate} \item[(i)] There exists $\varepsilon_0> 0$ for which we have $dd^c\varphi_{X/Y}\geq \varepsilon_0\omega$ in the sense of currents on $X$, i.e. the curvature of $(K_{X/Y}+ \Xi, h_{X/Y})$ is strongly positive. \smallskip \item[(ii)] The restriction $\displaystyle h_{X/Y}|_{X_y}$ is non-singular for each $y\in B$, where $B$ is a Zariski open subset of $Y$ (which can be described in a precise manner). \end{enumerate} \end{thm} \noindent Before presenting the arguments of the proof, let us make a few comments on the preceding statement. \begin{remark} If instead of the hypothesis (1) above we assume that $\displaystyle K_{X_y}$ is big, then the metric $e^{-\varphi_{X/Y}}$ can be constructed so that it has the same singularities as the \emph{metric with minimal singularities} on this bundle. It would be really interesting to develop the techniques in sections (1)-(6) under the assumption that the bundle $L$ is endowed with a sequence of smooth metrics $h_\varepsilon$ such that the curvature $\theta_\varepsilon$ admits a lower bound $$\theta_\varepsilon \geq (\varepsilon_0- \lambda_\varepsilon)\omega$$ where $\varepsilon_0> 0$ is fixed and $\lambda_\varepsilon$ is a family of positive functions uniformly bounded from above and converging to zero almost everywhere.
\end{remark} \begin{remark} If the Kodaira dimension of the fibers of $p$ is not maximal we can derive a similar result --of course, the conclusion (i) has to be modified accordingly. However, in the absence of the hypothesis (2) it is not clear what we can expect. \end{remark} \begin{proof} The arguments which will follow rely on two results. We recall next the first one, cf. \cite{CP}. \begin{thm}\label{mp31} \cite{CP} We assume that $K_X$ is ${\mathbb Q}$-effective when restricted to the generic fiber of $p$. For any positive integer $m$ there exists a real number $\eta_0> 0$ and a divisor $\Xi$ on $X$ such that the codimension of $p(\Xi)$ is at least two, and such that the difference \begin{equation}\label{mp2} K_{X/Y}+ \Xi - \eta_0p^\star\left(\det\big(p_\star(mK_{X/Y})\big)\right) \end{equation} is pseudo-effective. \end{thm} \noindent Actually what we will use is not this result in itself, but its proof --this will be needed in order to establish (ii) above. Hence the plan for the remaining part of this subsection is to review the main parts of the proof as in \cite{CP} (see also the references therein), and to extract the statement we want --this concerns the singularities of the current in \eqref{mp2}. \medskip \noindent $\bullet$ If the map $p$ is a smooth submersion, then the arguments in \cite{CP} are borrowed from the proof of Viehweg's weak semi-stability theorem, cf. \cite{Vbook}. The first observation is that given any vector bundle $E$ of rank $r$ we have a canonical injection \begin{equation}\label{mp32} \det(E)\to \otimes^rE, \end{equation} and hence a nowhere vanishing section of the bundle \begin{equation}\label{mp33} \otimes^rE\otimes \left(\det(E)\right)^\star. \end{equation} We apply this to the direct image \begin{equation}\label{mp34} E:= p_\star\left(m_0K_{X/Y}\right) \end{equation} which is indeed a vector bundle, by the invariance of plurigenera \cite{Siupg}.
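\noindent For the reader's convenience, we recall that the injection \eqref{mp32} is induced by the identification $\det(E)\simeq \Lambda^rE$ together with the antisymmetrization map \begin{equation} e_1\wedge\dots\wedge e_r\mapsto \sum_{\sigma\in {\mathfrak S}_r}\varepsilon(\sigma)\, e_{\sigma(1)}\otimes\dots\otimes e_{\sigma(r)}. \end{equation} Evaluated on a local frame of $E$, the right hand side is nowhere zero, and this accounts for the nowhere vanishing section of the bundle \eqref{mp33}.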
Here $m_0$ is a positive integer, large enough so that the multiple $\displaystyle m_0K_{X_t}$ is very ample, and $r$ is the rank of the direct image \eqref{mp34}. In this case the bundle $\otimes^rE$ can be interpreted as the direct image of the relative pluricanonical bundle of the map \begin{equation}\label{mp35} p^{(r)}:X^{(r)}\to Y \end{equation} where $\displaystyle X^{(r)}:= X\times_Y\dots\times_Y X$ is the $r^{\rm th}$ fibered product corresponding to the map $p:X\to Y$. Note that $X^{(r)}\subset \times^rX$ is a non-singular submanifold of the $r$-fold product of $X$ with itself (thanks to the assumption that $p$ is a submersion). The important observation is that we have the formulas \begin{equation}\label{mp36} p^{(r)}_\star\left(m_0K_{X^{(r)}/Y}\right)= \otimes^rp_\star\left(m_0K_{X/Y}\right) \end{equation} as well as \begin{equation}\label{mp37} K_{X^{(r)}/Y}= \sum_{i=1}^{r} \pi_i^\star(K_{X/Y}) \end{equation} where $\pi_i:X^{(r)}\to X$ is induced by the projection on the $i^{\rm th}$ factor. By relations \eqref{mp33} and \eqref{mp36} we have a section, say $\sigma$, of the bundle \begin{equation}\label{mp38} m_0K_{X^{(r)}/Y}-p^{(r)\star}\left(\det p_\star\left(m_0K_{X/Y}\right) \right). \end{equation} The formula \eqref{mp37} shows that the restriction of the bundle $K_{X^{(r)}/Y}$ to the diagonal $X\subset X^{(r)}$ is precisely $rK_{X/Y}$, so all in all the restriction of the bundle in \eqref{mp38} to $X$ coincides with \eqref{mp2}. However, we cannot use directly the section $\sigma$ in order to conclude, since by construction we have $\displaystyle \sigma|_{X}\equiv 0$. \smallskip To bypass this difficulty, we will use the following superb trick invented by E. Viehweg, cf. \cite{Vbook}. Let $\varepsilon_0> 0$ be a small enough positive rational number such that the pair \begin{equation}\label{mp39} \left(X^{(r)}, B\right) \end{equation} is klt, where the boundary $B:= \varepsilon_0Z_\sigma$ is the $\varepsilon_0$-multiple of the divisor $Z_\sigma:= (\sigma= 0)$.
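\noindent We note that the klt condition for the pair \eqref{mp39} simply means that the multiplier ideal sheaf of the boundary is trivial, i.e. \begin{equation} {\mathcal I}(\varepsilon_0 Z_\sigma)= {\mathcal O}_{X^{(r)}}, \end{equation} and this holds true as soon as $\varepsilon_0$ is strictly smaller than the log-canonical threshold of the divisor $Z_\sigma$.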
The next step is to apply the results in \cite{BP} and deduce that there exists a fixed ample line bundle $A_Y$ on $Y$ such that for any $k\geq 1$ divisible enough the restriction map \begin{equation}\label{mp40} H^0\left(X^{(r)}, k\left(K_{X^{(r)}/Y}+ B\right)+ p^{(r)\star}A_Y\right) \to H^0\big(X^{(r)}_y, k(K_{X^{(r)}_y}+ B|_{X^{(r)}_y})\big) \end{equation} is surjective, for any $y\in Y\setminus \Delta$. Now the bundle on the right hand side of \eqref{mp40} is equal to \begin{equation} k(1+ \varepsilon_0m_0)\sum_{i=1}^r \pi_i^\star \left(K_{X_y}\right), \end{equation} so it has many sections, given that $K_{X_y}$ is ample. Moreover, all the sections in question are automatically $L^{2/k}$ integrable, given that $(X^{(r)}, B)$ is a klt pair, and so is its restriction to a generic fiber $X^{(r)}_y$ of our map $p^{(r)}$. For the complete argument concerning the surjectivity of the map \eqref{mp40} we refer to \cite{CP} (and the references therein). In particular, for each $k$ divisible enough the bundle \begin{equation}\label{mp41} k\left(K_{X^{(r)}/Y}+ B\right)+ p^{(r)\star}A_Y \end{equation} admits a holomorphic section whose restriction to the diagonal $X\subset X^{(r)}$ is not identically zero. Indeed, we have (many) sections of the bundle on the right hand side of \eqref{mp40} on the fiber $X_y\times\dots\times X_y$ whose restriction to the diagonal $X_y\subset X_y\times\dots\times X_y$ is not identically zero. \smallskip \noindent In conclusion, this shows the existence of a positively-curved metric on the bundle \begin{equation}\label{mp42} r(1+\varepsilon_0m_0)K_{X/Y}-\varepsilon_0p^\star\det \big(p_\star(m_0K_{X/Y})\big)+ \frac{1}{k}p^\star A_Y \end{equation} whose restriction to the generic fiber $X_y$ of $p$ is induced by the sections of multiples of $\displaystyle K_{X_y}$.
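\noindent For the record, here is the numerology behind \eqref{mp42}: since $B= \varepsilon_0Z_\sigma$ and $Z_\sigma$ is the zero divisor of a section of \eqref{mp38}, we have the (${\mathbb Q}$-linear) equivalence \begin{equation} K_{X^{(r)}/Y}+ B\equiv (1+ \varepsilon_0m_0)K_{X^{(r)}/Y}- \varepsilon_0\, p^{(r)\star}\det p_\star\left(m_0K_{X/Y}\right), \end{equation} and by \eqref{mp37} the restriction of $K_{X^{(r)}/Y}$ to the diagonal $X\subset X^{(r)}$ equals $rK_{X/Y}$. Hence the restriction of the bundle \eqref{mp41} to the diagonal is precisely $k$ times the bundle \eqref{mp42}.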
\medskip \noindent $\bullet$ The general case where $p$ is simply a surjective map is much more involved; the complications arise from the fact that the fibered product $X^{(r)}$ is no longer smooth. However, as shown in \cite{CP} one obtains a similar result, modulo adding the divisor $\Xi$ which projects into a subset of codimension at least two. We will not reproduce here the proof, but just mention that modulo the singularities of $X^{(r)}$, the structure of the argument is identical to the case detailed in the preceding bullet --in particular, we control the singularities of the restriction of the metric on \eqref{mp42} to the generic fiber $X_y$. \begin{remark} It would be really interesting to have a more direct proof of Theorem \ref{mp31}, i.e. without using Viehweg's trick. \end{remark} \bigskip \noindent Recall that we have an integer $m_0$ above such that the following conditions are satisfied. \begin{enumerate} \item[(i)] The bundle $\displaystyle m_0K_{X_y}$ is very ample. \smallskip \item[(ii)] The bundle $\displaystyle \det \left(p_\star(m_0K_{X/Y})\right)$ is big. \end{enumerate} \noindent By the point (ii) above, there exists a positive integer $m_1\gg 0$ such that \begin{equation}\label{mp47} m_1\det \left(p_\star(m_0K_{X/Y})\right)\simeq A_Y+ E_Y \end{equation} where $E_Y$ is an effective divisor on $Y$. The properties of the bundle \eqref{mp42} combined with our previous considerations show the existence of a metric $e^{-\psi_{X/Y}}$ on the bundle $K_{X/Y}+ \Xi$ with the following properties. \begin{enumerate} \item[(a)] The metric $e^{-\psi_{X/Y}}$ is semi-positively curved, and it has algebraic singularities (meaning that its local weights are of the form log of a sum of squares of holomorphic functions, modulo a smooth function). Moreover, the restriction $\displaystyle e^{-\psi_{X/Y}}|_{X_y}$ is smooth, for any $y$ belonging to a Zariski open subset of $Y$.
\smallskip \item[(b)] The form $dd^c\psi_{X/Y}$ is smooth and positive definite on the $p$-inverse image of a (non-empty) Zariski open subset. \end{enumerate} \medskip \noindent We see that the main difference between the properties (a) and (b) of the metric $e^{-\psi_{X/Y}}$ and the conclusion of Theorem \ref{mp5} is the strict positivity, i.e. the existence of $\varepsilon_0$ as in (i). In order to finish the proof, we will use a result due to M. Nakamaye, cf. \cite{Nak}, concerning the augmented base loci of nef and big line bundles. As a preparation for this, we blow up the ideal corresponding to the singularities of $e^{-\psi_{X/Y}}$; let $\pi: \widetilde X\to X$ be the associated map. We write \begin{equation}\label{frei1} \pi^{\star}\left(K_{X/Y}+ \Xi\right)\simeq {\mathcal L_1}+ {\mathcal L_2} \end{equation} such that ${\mathcal L_j}$ above are ${\mathbb Q}$-line bundles induced by the decomposition of the inverse image of the curvature of $e^{-\psi_{X/Y}}$ into smooth and singular parts denoted by $\theta_1$ and $\theta_2$, respectively. By the properties (a) and (b) above, $\theta_1$ is smooth and positive definite in the complement of an algebraic set which projects into a strict subset of $Y$. As for $\theta_2$, it is simply the current of integration on an algebraic subset of $\widetilde X$ which equally projects properly into $Y$. \smallskip \noindent In particular, the bundle ${\mathcal L_1}$ is nef and big (given that it admits a metric whose curvature is $\theta_1$). We denote by $\displaystyle {\rm B}_+({\mathcal L_1})$ the augmented base locus of ${\mathcal L_1}$, i.e. the stable base locus of the ${\mathbb Q}$-bundle ${\mathcal L_1}- A$, where $A$ is any small enough ample line bundle on $\widetilde X$. We recall the following result, cf. \cite{Nak}. \begin{thm}\cite{Nak} The algebraic set $\displaystyle {\rm B}_+({\mathcal L_1})$ is the union of the subvarieties $W\subset {\widetilde X}$ such that $\displaystyle \int_W\theta_1^d= 0$, where $d$ is the dimension of $W$.
\end{thm} \noindent Given the positivity properties of the form $\theta_1$, we infer that the set $p\circ \pi \left({\rm B}_+({\mathcal L_1})\right)$ is strictly contained in $Y$. This then implies that we can endow ${\mathcal L_1}$ with a metric whose curvature can be written as the sum of a K\"ahler metric (given by the ample $A$ above) and a closed positive current whose singularities are $p\circ \pi$-vertical. By combining it with the metric on ${\mathcal L_2}$ we get a metric with the same properties on the inverse image $\pi^\star(K_{X/Y}+ \Xi)$. \smallskip \noindent The metric $h_{X/Y}$ is obtained by push-forward, and Theorem \ref{mp5} is proved. \end{proof} \medskip \subsection{Proof of Theorem \ref{kernels}} The hypothesis of Theorem \ref{kernels} together with the results in \cite{Kol87}, \cite{Kawa}, \cite{PoSch} implies that for each $m\gg 0$ the bundle \begin{equation}\label{mmp60} \det\left(p_\star(mK_{X/Y})\right) \end{equation} is big. This can equally be obtained along the line of arguments in this paper as follows. It is shown in \cite{Berndtsson2} that the curvature form of the vector bundle $p_\star(mK_{X/Y})$ has no zero eigenvalue --as if not, the ``maximal variation'' hypothesis would be contradicted. Thus the curvature form of its determinant is semi-positive, and strictly positive at some point. The fact that it is big follows e.g. by holomorphic Morse inequalities. We now apply Theorem \ref{mp5}, and Theorem \ref{kernels} is proved. \subsection{Proof of Theorem \ref{VZ}} This is almost a linear combination of the results obtained in the previous sections: we define $L$ as follows \begin{equation}\label{031005mp} L:= -K_{X/Y}+ \sum(t^i-1)W_i \end{equation} so that the bundle \eqref{mmp40} is trivial (in particular, it has a section). \smallskip \noindent We intend to use Theorem \ref{kernels, II} in order to conclude, but we are not able to prove that the bundle \eqref{031005mp} admits a metric satisfying the hypothesis $\left({\mathcal H}_2\right)$.
However, in Corollary \ref{approx} we construct a family of metrics whose curvature properties represent an approximation of this hypothesis. We will show next that we can adapt the proof of Theorem \ref{kernels}, and obtain the same conclusion by using Corollary \ref{approx} instead of $\left({\mathcal H}_2 \right)$. We will denote by $h_L$ the metric on \eqref{031005mp} induced by the metric $h_{X/Y}^{(1)}$. Note that the curvature current corresponding to this metric is semi-positive on $X$, and strictly positive on generic fibers of $p$ by Theorem \ref{bo_powers}. Let $[u_t]$ be a local holomorphic section of the bundle ${\mathcal K}^i$, defined on the open co-ordinate subset $V\subset Y$ centered at a generic point of $\Delta$. We have to show that \begin{equation}\label{031101mp} \sup_{t\in V\setminus \Delta}\log\Vert [u_t]\Vert^2_t< \infty. \end{equation} To this end, we follow the same path as in the proof of Theorem \ref{kernels, I}, so we will only highlight the main differences next. By Corollary \ref{approx} we have \begin{equation}\label{031102mp} h_L|_{X_t}= \lim_{\eta\to 0}h_{X/Y}^{(\eta)}|_{X_t} \end{equation} for each $t\in V\setminus \Delta$. For each $0<\varepsilon\ll \eta$, we define the perturbation $\displaystyle h_{L, (\varepsilon,\eta)}$ of the metric $h_{X/Y}^{(\eta)}$ precisely as in \eqref{mmp16} (meaning that $h_L$ in \eqref{mmp16} is replaced with $h_{X/Y}^{(\eta)}$ in the actual context). Then we have \begin{equation}\label{031103mp} \sup_{t\in V\setminus \Delta}|t_m|^{C\eta}\Vert [u_t]\Vert^2_{t, (\varepsilon, \eta)}\leq C(\varepsilon, u)< \infty, \end{equation} where $C$ in \eqref{031103mp} is a fixed constant, large enough compared with the multiplicities $b^i$ in the expression of the divisor $p^{-1}(\Delta)$.
The inequality \eqref{031103mp} above is established by the same procedure as the proof of the Claim --the only slight difference here is the presence of a factor of order ${\mathcal O}(\eta)$ with the wrong sign in the metric $h_{X/Y}^{(\eta)\star}$, accounting for the first negative term in the curvature estimate (a) in Corollary \ref{approx}. This term is tamed by the factor $|t_m|^{C\eta}$. The inequality \eqref{031103mp} together with the mean inequality show that \eqref{031101mp} holds true. \smallskip \noindent Hence, under the hypothesis of Theorem \ref{VZ} the conclusions of Theorems \ref{kernels, I} and \ref{kernels, II}, respectively, hold true provided that the bundle $(L, h_L)$ is chosen as in \eqref{031005mp} above. \subsection{Proof of Theorem \ref{CY17}} Let $p:X\to Y$ be a Calabi-Yau family which has maximal variation. By the results in \cite{Kol87}, \cite{Kawa}, \cite{PoSch} we infer that for all $m\gg 0$ divisible enough the bundle \begin{equation}\label{mmp100} {\mathcal L}:= \left(p_\star(mK_{X/Y})\right)^{\star\star} \end{equation} is big. Actually the metric version of this statement is true, as we will see after recalling a few facts. Let $y\in Y$ be a regular value of the map $p$. Since $c_1(X_y)= 0$, there exists a positive integer $m$ such that the bundle $\displaystyle mK_{X_y}$ admits a nowhere vanishing section, say $s_y$. The invariance of plurigenera \cite{Siupg} shows that for any co-ordinate open subset $V$ containing $y$ there exists a section $s$ of the bundle $\displaystyle mK_{X/Y}|_{p^{-1}(V)}$ whose restriction to the fiber $X_y$ equals $s_y$. By shrinking $V$, we can assume that $s$ is nowhere vanishing, and therefore it gives a trivialization of the bundle ${\mathcal L}|_V$.
With respect to this trivialization, the local weight of the metric on ${\mathcal L}$ in \cite{Berndtsson1}, \cite{PT} is given by the function \begin{equation}\label{mmp101} \varphi_{\mathcal L}(w)= \log\int_{X_w}|s|^{2/m} \end{equation} where the expression under the integral in \eqref{mmp101} is the volume element on $X_w$ induced by the restriction of $s$. \medskip \noindent It is proved in \cite{PT} that the metric \eqref{mmp101} is semi-positively curved. Under the hypothesis of Theorem \ref{CY17} the following stronger result holds true. \begin{lma}\label{cymetric} We assume that the Calabi-Yau family $p$ has maximal variation. Then the curvature of $({\mathcal L}, e^{-\varphi_{\mathcal L}})$ is positive definite on each compact subset of a Zariski open subset of $Y$. In particular, ${\mathcal L}$ is big. \end{lma} \begin{proof} We observe that the metric $e^{-\varphi_{\mathcal L}}$ is smooth on a Zariski open subset $Y_0\subset Y$. Let $\Theta_{\mathcal L}$ be the corresponding curvature form. We claim that $\Theta_{\mathcal L}^{\dim (Y)}> 0$ on $Y_0\cap U$, where $U\subset Y$ is such that the restriction $ks|_U$ of the Kodaira-Spencer map to $U$ is injective. This is standard: if we had $\Theta_{\mathcal L}^{\dim (Y)}= 0$, then the kernel of $\Theta_{\mathcal L}$ would define a foliation which is not necessarily holomorphic, but whose leaves are holomorphic. We consider a holomorphic disk $\mathcal{D}$ contained in a generic leaf of this (local) foliation. Then the family $f: X_\mathcal{D}\to \mathcal{D}$ induced by $p$ is a submersion, where $X_\mathcal{D}:= p^{-1}(\mathcal{D})$, and moreover the curvature of the direct image $f_\star(mK_{X_\mathcal{D}/\mathcal{D}})$ is equal to zero. By the results in \cite{Choi} this forces the vanishing of the Kodaira-Spencer class of $f$. We therefore obtain a contradiction, so Lemma \ref{cymetric} is proved. \end{proof} \smallskip \noindent As a consequence, we obtain the following statement.
\begin{cor}\label{cymetricII} Let $p:X\to Y$ be a Calabi-Yau family which has maximal variation. Then the relative canonical bundle $K_{X/Y}$ has the following property. The curvature current $\Theta$ of the $m$-Bergman metric $\displaystyle e^{-\varphi^{(m)}_{X/Y}}$ is semi-positive on $X$ and strongly positive on the pre-image of some open subset of $Y$. Moreover, we have \begin{equation}\label{mmp102} \Theta\geq \sum_i(t^i- 1)[W_i] \end{equation} on $X$. \end{cor} \begin{proof} Indeed, in our set-up the $m$--Bergman metric has the following expression \begin{equation}\label{mmp103} e^{\varphi^{(m)}_{X/Y}(x)}= \frac{|s|_x^{2/m}}{\int_{X_y}|s|^{2/m}} \end{equation} where $y= p(x)$. Therefore we have $\Theta= p^\star(\Theta_{\mathcal L})$ and our statement follows from Lemma \ref{cymetric}. \end{proof} \noindent Theorem \ref{CY17} is a direct consequence of Theorem \ref{kernels, I} together with Theorem \ref{kernels, II}.
\section{Introduction} Dark Energy is the dominant source causing the current accelerated expansion of the universe, as has been confirmed by observations~\cite{Riess:1998cb,Schmidt:1998ys,Bennett:2012zja,Ade:2013zuv}. Although there exist several possible explanations of Dark Energy, a tiny positive cosmological constant is the prime candidate, in perfect agreement with recent observations~\cite{Bennett:2012zja,Ade:2013zuv}. If one wants to understand the purely theoretical origin of this cosmological constant, one should promote Einstein gravity to be consistent with its quantum formulation. String theory is well motivated for this purpose as it is expected to describe the quantum nature of gravity as well as particle physics. A cosmological constant could be realized in the context of flux compactifications~\cite{Giddings:2001yu,Dasgupta:1999ss} of 10D string theories, where the vacuum expectation value of the moduli potential at a minimum contributes to the vacuum energy in the four-dimensional space-time. Since there exist many possible choices of quantized fluxes and also a number of types of compactifications, the resultant moduli potential, including a variety of minima, forms the string theory landscape (see reviews~\cite{Douglas:2006es,Grana:2005jc,Blumenhagen:2006ci,Silverstein:2013wua,Quevedo:2014xia,Baumann:2014nda}). Although there exist many vacua in the string theory landscape, when we naively stabilize the moduli and obtain the minima, negative cosmological constants typically arise. Hence an ``uplift'' mechanism that raises the negative vacuum energy while maintaining stability is important in order to realize an accelerating universe. Several possible uplift mechanisms have been proposed in string compactifications. \begin{itemize} \item Explicit SUSY breaking achieved by brane anti-brane pairs contributes positively to the potential, and thus can be used for the uplift~\cite{Kachru:2002gs,Kachru:2003aw,Kachru:2003sx}.
When the D3 brane anti-brane pairs are localized at the tip of a warped throat, the potential energy may be controllable thanks to the warp factor. As the uplift term contributes to the potential at ${\cal O}({\cal V}^{-4/3})$, which appears larger than the F-term potential for stabilization which is in general $\mathcal{O}(\ll \mathcal{V}^{-2})$, de Sitter ($dS$) vacua with a tiny positive cosmological constant may be achieved by tuning the warping. A caveat of this proposal is that the SUSY breaking term needs to compensate the entire Anti-de Sitter ($AdS$) energy, so it is an open question whether the SUSY breaking term, originally treated in a probe approximation, can be included appropriately as a backreaction in supergravity. \item As an alternative uplift mechanism, one may use the complex structure sector~\cite{Saltman:2004sn}. In the type IIB setup, the complex structure moduli as well as the dilaton are often stabilized at a supersymmetric point. Owing to the no-scale structure, the potential for the complex structure sector is positive definite, $V_{\rm c.s.} = e^{K} |DW|_{\rm c.s.}^2 \sim {\cal O} (\mathcal{V}^{-2})$. So when we stay at the SUSY loci, the potential is in general convex around these loci and thus tractable. However, if one stabilizes the complex structure sector at non-supersymmetric points, then there appears a chance to have a positive contribution in the potential without tachyons, which may be applied for the uplift after some tuning. See also recent applications of this mechanism~\cite{Danielsson:2013rza,Blaback:2013qza,Kallosh:2014oja}. \item When we include the leading order $\alpha'$-correction, of ${\cal O}(\alpha'^3)$, in the K\"ahler potential~\cite{Becker:2002nn}, which breaks the no-scale structure, this generates a positive contribution to the effective potential if the Euler number $\chi$ of the Calabi-Yau is negative~\cite{Balasubramanian:2004uy}.
This positive term can balance the non-perturbative terms in the superpotential such that stable $dS$ vacua can be achieved in this K\"ahler Uplift model~\cite{Westphal:2006tn,Rummel:2011cd,deAlwis:2011dp} (see also~\cite{Westphal:2005yz}). In the simplest version of the K\"ahler uplifting scenario, there is an upper bound on the overall volume of the Calabi-Yau, so one may worry about higher order $\alpha'$-corrections. However, this bound can be significantly relaxed when the scenario is embedded in a racetrack model~\cite{Sumitomo:2013vla}. \item It has been proposed that the negative curvature of the internal manifold may be used for $dS$ constructions, as it contributes positively to the scalar potential~\cite{Silverstein:2007ac}. Motivated by this setup, there have been many attempts at constructing $dS$ vacua~\cite{Haque:2008jz,Flauger:2008ad,Danielsson:2009ff,Caviezel:2009tu,deCarlos:2009fq,deCarlos:2009qm,Dong:2010pm,Andriot:2010ju,Danielsson:2010bc,Danielsson:2011au,Danielsson:2012et,Danielsson:2012by,Blaback:2013ht}. Using the necessary conditions for extrema~\cite{Hertzberg:2007wc,Wrase:2010ew} and for stability~\cite{Shiu:2011zt}, one sees that the existence of minima requires not only negative curvature, but also the presence of orientifold planes. \item When the stabilization mechanism does not respect SUSY, D-terms can provide a positive contribution to the potential if the corresponding D7-brane is magnetically charged under an anomalous $U(1)$~\cite{Burgess:2003ic,Cremades:2007ig,Krippendorf:2009zza}. The D-term potential arises at order ${\cal O} ( {\cal V}^{-n})$ with $n\geq 2$, depending on the cycle that the D7-brane wraps. If we take into account the stabilization of matter fields with non-trivial VEVs, originating from fluxed D7-branes wrapping the large four-cycle, the uplift contribution becomes ${\cal O} ({\cal V}^{-8/3})$~\cite{Cremades:2007ig}.
So a relatively mild suppression, for instance by warping, is required for this volume dependence. See also recent applications to explicit scenarios in~\cite{Cicoli:2012vw,Cicoli:2013mpa}, and in combination with a string-loop correction in fibred Calabi-Yaus~\cite{Cicoli:2011yh}. \item Recently, it has been proposed that a dilaton-dependent non-perturbative term can also provide an uplift toward $dS$ vacua~\cite{Cicoli:2012fh}. The non-perturbative term depends on both the dilaton and a vanishing blow-up mode which is stabilized by a D-term. Since the D-term turns out to be trivial at the minimum due to the vanishing cycle, the non-trivial dilaton dependence as well as the vanishing-cycle dependence generate the uplift term within the F-term potential. In this setup, the uplift term is proportional to $e^{-2b \langle s \rangle}/{\cal V}$. Although the volume only mildly suppresses the uplift term, an exponential fine-tuning through the dilaton dependence can balance it against the moduli-stabilizing F-term potential. \end{itemize} In this paper, we introduce an uplifting term of the form $e^{-a_s \tau_s}/{\cal V}^2$, where $\tau_s$ is the volume of a small 4-cycle, which naturally balances against the stabilizing F-term potential in the Large Volume Scenario (LVS). The following ingredients are necessary for this mechanism: \begin{itemize} \item one non-perturbative effect on a 4-cycle $D_2$ to realize the standard LVS moduli stabilization potential, \item another non-perturbative effect on a different cycle $D_3$, \item a D-term constraint that forces the volumes of the two 4-cycles to be proportional, $\tau_s \sim \tau_2 \propto \tau_3$, via a vanishing D-term potential. \end{itemize} Hence, the minimal number of K\"ahler moduli for this uplifting scenario is $h^{1,1}_+=3$.
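As a quick sanity check of this scaling (with purely illustrative numbers, not taken from any specific compactification), one can verify numerically that a term $e^{-a_s \tau_s}/{\cal V}^2$ evaluated on the LVS locus ${\cal V} \sim e^{a_s \tau_s}$ automatically tracks the ${\cal O}({\cal V}^{-3})$ stabilizing potential:

```python
import math

# Toy scaling check (illustrative numbers, not from the text's numerics):
# in the LVS the volume is stabilized at V ~ exp(a_s * tau_s), so a term
# exp(-a_s * tau_s)/V^2 automatically scales like the O(V^-3) LVS potential,
# with no extra warp or dilaton suppression needed.
a_s = 2 * math.pi / 10   # hypothetical exponent, e.g. gaugino condensation with N = 10

for V in (1e3, 1e5, 1e7):
    tau_s = math.log(V) / a_s              # LVS locus V ~ e^{a_s tau_s}
    uplift = math.exp(-a_s * tau_s) / V**2
    lvs = 1.0 / V**3                       # scale of the stabilizing F-term potential
    assert abs(uplift / lvs - 1) < 1e-9    # ratio is O(1) for every volume
```

No tuning of $a_s$ enters this statement: the exponent cancels between the locus and the uplift term, which is the sense in which the balancing is "natural".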
At the level of the F-term potential the effective scalar potential reduces to the standard LVS moduli stabilization potential plus the mentioned uplifting term, yielding metastable $dS$ vacua. The K\"ahler moduli are stabilized at large values, avoiding dangerous string- and $\alpha'$-corrections. Compared to~\cite{Cicoli:2012fh}, the dilaton can take rather arbitrary values determined by fluxes, as no tuning is required to keep the uplifting term suppressed. Note that for $h^{1,1}_+=2$, a racetrack setup with two non-perturbative effects on one small cycle does not allow stable $dS$ vacua in the LVS. Hence, we have to consider at least two small cycles and a D-term constraint relating them to construct $dS$ vacua. We also have to consider a necessary condition for the coexistence of the vanishing D-term constraint with the non-perturbative terms in the superpotential. If the rigid divisors supporting the two non-perturbative effects intersect the divisor on which the D-term constraint is generated via magnetic flux, we have to ensure that the VEVs of the matter fields generated by this magnetic flux are such that the coefficients of the non-perturbative terms remain non-zero. On the other hand, we may avoid additional zero mode contributions in a setup with minimal intersections. A general constraint is that the non-perturbative effects and the D-term potential have to fulfill all known consistency conditions, for instance requiring rigid divisors, avoiding Freed-Witten anomalies~\cite{Minasian:1997mm,Freed:1999vc} and saturating the D3, D5, and D7 tadpole constraints. We expect these constraints to become less severe as the number of K\"ahler moduli increases beyond $h^{1,1}_+=3$, as in principle the degrees of freedom, such as flux choices and rigid divisors, increase. This paper is organized as follows.
We illustrate the uplift proposal generated through the multi-K\"ahler moduli dependence in the F-term potential and the required general geometric configuration in Section~\ref{genmech_sec}, and give some computational details in Appendix~\ref{app_fluxconstraint}. We further discuss the applicability of the uplift mechanism in more general Swiss-Cheese type Calabi-Yau manifolds in Section~\ref{sec:real-more-moduli}. \section{D-term generated racetrack uplift -- general mechanism} \label{genmech_sec} We illustrate the uplift mechanism by a D-term generated racetrack in Calabi-Yaus with the following properties: there are two small 4-cycles, and two linear combinations of these small cycles are rigid, such that the existence of two non-perturbative terms in the superpotential is guaranteed, avoiding additional fermionic zero modes from cycle deformations or Wilson lines. We show that this setup in general allows us to stabilize the moduli in a $dS$ vacuum at large volume. \subsection{Geometric setup and superpotential} We consider an orientifolded Calabi-Yau $X_3$ with $h^{1,1}_+\geq3$ with the following general volume form of the divisors $D_i$ \begin{equation} \mathcal{V} = \frac16 \left(\sum_{i,j,k=1}^{h^{1,1}_+}\kappa_{ijk} t_i t_j t_k \right)\,,\label{gen2cycles} \end{equation} in terms of 2-cycle volumes $t_i$ and intersection numbers \begin{equation} \kappa_{ijk} = \int_{X_3} D_i \wedge D_j \wedge D_k\,.
\end{equation} The 4-cycle volumes are given as \begin{equation} \tau_i = \frac{\partial \mathcal{V}}{\partial t_i} = \frac12 \kappa_{ijk} t_j t_k\,.\label{tauandt} \end{equation} We assume that $X_3$ has a Swiss-Cheese structure with a big cycle named $D_a$ and at least two small cycles $D_b$ and $D_c$, i.e., its volume form can be brought to the form \begin{equation} \mathcal{V} = \gamma_a \tau_a^{3/2} - \gamma_b \tau_b^{3/2} - \gamma_c \tau_c^{3/2} - \mathcal{V}_{\text{rest}}\,, \end{equation} with $\mathcal{V}_{\text{rest}}$ parametrizing the dependence of the volume on the remaining $h^{1,1}_+-3$ moduli. Now let us assume there are two rigid divisors $D_2$ and $D_3$ which are linear combinations of the small cycles $D_b$ and $D_c$: \begin{align} \begin{aligned} D_2 &= d_{2b} D_b + d_{2c} D_c\,,\\ D_3 &= d_{3b} D_b + d_{3c} D_c\,. \end{aligned}\label{tau2taua} \end{align} Even if there do not exist two divisors $D_2$ and $D_3$ that are rigid, one might still be able to effectively `rigidify' one or more divisors by fixing all the deformation moduli of the corresponding D7-brane stacks via a gauge flux choice~\cite{Bianchi:2011qh,Cicoli:2012vw,Louis:2012nb}. Under these assumptions, the superpotential in terms of the K\"ahler moduli $T_i = \tau_i + i\, \zeta_i$ is of the form \begin{equation} W = W_0 + A_2 e^{-a_2 T_2} + A_3 e^{-a_3 T_3} = W_0 + A_2 e^{-a_2 \left(d_{2b} T_b + d_{2c} T_c \right)} + A_3 e^{-a_3 \left(d_{3b} T_b + d_{3c} T_c \right)}\,,\label{Wnonpert} \end{equation} with non-zero $A_2$, $A_3$ and $W_0$ being the Gukov-Vafa-Witten flux superpotential~\cite{Gukov:1999ya}. \subsection{D7-brane and gauge flux configuration} \label{d7config3moduli_sec} The orientifold plane O7 induces a negative D7 charge of $-8[O7]$ that has to be compensated by the positive charge of D7-branes. In general the O7 charge can be cancelled by introducing a Whitney brane with charge $8[O7]$~\cite{Collinucci:2008pf}.
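As an aside, the relations above between 2-cycle and 4-cycle volumes can be checked in a small numerical sketch (the intersection numbers and 2-cycle values below are arbitrary toy inputs, not those of an actual Calabi-Yau); since ${\cal V}$ is homogeneous of degree three in the $t_i$, Euler's relation gives $\sum_i t_i \tau_i = 3{\cal V}$:

```python
import itertools

# Toy check of the volume relations (illustrative intersection numbers and
# 2-cycle values, not those of an actual Calabi-Yau), for h^{1,1} = 2.
n = 2
kappa = [[[0.0] * n for _ in range(n)] for _ in range(n)]

def set_kappa(i, j, k, val):
    # intersection numbers are totally symmetric
    for p in itertools.permutations((i, j, k)):
        kappa[p[0]][p[1]][p[2]] = val

set_kappa(0, 0, 0, 9)
set_kappa(1, 1, 1, -6)
set_kappa(0, 0, 1, 1)

t = [3.0, 0.5]  # 2-cycle volumes

# V = (1/6) kappa_{ijk} t_i t_j t_k  and  tau_i = (1/2) kappa_{ijk} t_j t_k
V = sum(kappa[i][j][k] * t[i] * t[j] * t[k]
        for i in range(n) for j in range(n) for k in range(n)) / 6.0
tau = [sum(kappa[i][j][k] * t[j] * t[k]
           for j in range(n) for k in range(n)) / 2.0 for i in range(n)]

# V is homogeneous of degree 3 in the t_i, so Euler's relation gives
# sum_i t_i tau_i = 3 V:
assert abs(sum(ti * taui for ti, taui in zip(t, tau)) - 3.0 * V) < 1e-12
```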
The non-perturbative effects of~\eqref{Wnonpert} can be generated either by ED3-instantons or by gaugino condensation. For the latter, we choose a configuration with $N_2$ D7-branes on $D_2$ and $N_3$ D7-branes on $D_3$. In this case, the exponential coefficients of the non-perturbative terms in~\eqref{Wnonpert} are $a_2 = 2 \pi / N_2$ and $a_3 = 2 \pi / N_3$. The corresponding gauge group is either $SO(N)$ or $Sp(N)$ (which becomes $SU(N)$ if gauge flux is introduced), depending on whether or not the divisor lies on the orientifold plane. Furthermore we introduce a third stack of $N_D$ branes on a general linear combination $D_D$ of basis divisors that is neither $D_2$ nor $D_3$. This stack will introduce a D-term constraint that reduces the F-term effective scalar potential by one degree of freedom, i.e., one K\"ahler modulus. In the case of $h^{1,1}_+=3$ this corresponds to a two K\"ahler moduli LVS potential plus an uplift term that allows $dS$ vacua, as we will show in Section~\ref{effectiveVgen_sec}.\footnote{In the case of $D_D$ being a linear combination of only $D_2$ and $D_3$, this divisor is only meaningful if $D_2$ and $D_3$ intersect, as a linear combination of non-intersecting and rigid, i.e., local, four-cycles would not make sense. This is the reason we consider non-zero intersections between $D_2$ and $D_3$ in the first place, as opposed to the simpler setup $\mathcal{V} \sim \tau_1^{3/2} - \tau_2^{3/2} - \tau_3^{3/2} - \mathcal{V}_{\text{rest}}$.} Note that in general all required D7-brane stacks have to be consistent with possible factorizations of the Whitney brane that cancels the O7 charge~\cite{Collinucci:2008pf,Cicoli:2011qg}.
The D-term constraint is enforced via a Fayet-Iliopoulos (FI) term \begin{equation} \xi_D = \frac{1}{\mathcal{V}} \int_{X_3} D_D \wedge J \wedge \mathcal{F}_D = \frac{1}{\mathcal{V}} q_{Dj} t_j\,,\label{FID} \end{equation} where $J=t_i D_i$ is the K\"ahler form on $X_3$ and $q_{Dj} = \tilde f^k_D \kappa_{Djk}$ is the anomalous $U(1)$-charge of the K\"ahler modulus $T_j$ induced by the magnetic flux $\mathcal{F}_D = \tilde f^k_D D_k$ on $D_D$. We choose flux quanta $\tilde f^k_D$ such that $\xi_D = 0$ in~\eqref{FID} implies \begin{equation} \tau_c = c\, \tau_b\,,\label{tauctaubprop} \end{equation} with a constant $c$ depending on the flux quanta and triple intersection numbers. In a concrete example it is important to check that the constant $c$ realized in~\eqref{tauctaubprop} is consistent with stabilizing the moduli inside the K\"ahler cone of the manifold. An important constraint arises from the requirement of two non-vanishing non-perturbative effects $A_2, A_3\neq 0$ on generally intersecting cycles $D_2$ and $D_3$. The cancellation of Freed-Witten anomalies requires the presence of fluxes $\mathcal{F}$ on the D7-branes wrapping these divisors, which can potentially forbid the contribution from gaugino condensation in the superpotential. This gauge invariant magnetic flux $\mathcal{F}$ is determined by the gauge flux $F$ on the corresponding D7-brane and the pull-back of the bulk $B$-field on the wrapped four-cycle via \begin{equation} \mathcal{F} = F - B\,. \end{equation} If $D_2$ and $D_3$ intersect each other, the $B$-field cannot in general be used to set both of these fluxes to zero. However, it is still possible that both fluxes $\mathcal{F}_2$ and $\mathcal{F}_3$ can be chosen to be effectively trivial, such that no additional zero modes and FI-terms are introduced. These zero modes would be generated via charged matter fields arising at the intersection of D7-brane stacks or from the bulk D7 spectrum.
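A toy illustration of how a vanishing FI-term enforces a constant ratio of the two small 4-cycle volumes (all intersection numbers and charges below are hypothetical): with diagonal intersections, $\xi_D = 0$ fixes $t_c \propto t_b$ and hence $\tau_c/\tau_b$ to the same constant $c$ along the whole locus:

```python
# Toy model of the FI constraint (all intersection numbers and charges below
# are hypothetical): with diagonal intersections kappa_bbb, kappa_ccc and
# flux-induced charges q_b, q_c, the condition xi_D ~ q_b t_b + q_c t_c = 0
# fixes a constant ratio c = tau_c / tau_b along the whole locus.
kappa_bbb, kappa_ccc = 2.0, 3.0
q_b, q_c = 1.0, -2.0   # opposite-sign charges, so xi_D = 0 has a solution

ratios = []
for t_b in (0.5, 1.0, 4.0):             # different points on the xi_D = 0 locus
    t_c = -(q_b / q_c) * t_b            # solves q_b t_b + q_c t_c = 0
    tau_b = 0.5 * kappa_bbb * t_b**2    # tau = (1/2) kappa t^2 for diagonal kappa
    tau_c = 0.5 * kappa_ccc * t_c**2
    ratios.append(tau_c / tau_b)

# c = (kappa_ccc / kappa_bbb) * (q_b / q_c)^2 is the same at every point:
assert all(abs(r - 0.375) < 1e-12 for r in ratios)
```

The quadratic dependence of $\tau$ on $t$ is what turns the linear FI condition into the quadratic-in-flux constant $c$; in a concrete manifold the off-diagonal $\kappa_{ijk}$ modify the value of $c$ but not this mechanism.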
This constraint has to be checked on a case-by-case basis. We work out a sufficient condition on the intersections $\kappa_{ijk}$ for $\mathcal{F}_2$ and $\mathcal{F}_3$ to be trivial in the case where $D_2$ and $D_3$ do not intersect any other divisors, i.e., $\kappa_{2jk} = \kappa_{3jk} =0$ for $j,k\neq 2,3$, in Appendix~\ref{app_fluxconstraint}. Furthermore, it has to be checked that $\mathcal{F}_D$ does not generate any additional zero modes at the intersections of $D_D$ with $D_2$ and $D_3$. Finally, the chosen D7-brane and gauge flux setup has to be consistent with D3, D5 and D7 tadpole cancellation. As for every explicit construction, this has to be checked on a case-by-case basis for the particular manifold under consideration. We do not expect tadpole cancellation to be in general more restrictive than in, e.g., the $AdS$ LVS. In particular, we do not require a large number of D7-branes~\cite{Westphal:2006tn} and/or a racetrack effect~\cite{Sumitomo:2013vla} on a particular single divisor to achieve a large volume, as in the K\"ahler Uplifting scenario. \subsection{Effective potential of the K\"ahler moduli} \label{effectiveVgen_sec} We start with a slightly simplified model where the F-term potential \begin{equation} V_F = e^{K} \left( K^{\alpha\bar{\beta}} D_\alpha W \overline{D_\beta W} - 3 |W|^2 \right)\,,\label{VF} \end{equation} is given by \begin{equation} \begin{split} K=& - 2 \ln \left({\cal V} + {\xi \over 2}\right), \quad {\cal V} = (T_a + \bar{T}_a)^{3/2} - (T_b + \bar{T}_b)^{3/2} - (T_c + \bar{T}_c)^{3/2},\\ W =& W_0 + A_2 e^{-a_2 T_b} + A_3 e^{-a_3(T_b + T_c)}, \end{split} \label{F-term potential-simplified} \end{equation} where we have set the intersection numbers $\gamma$ equal (and normalized them to one) and assumed stabilization of the dilaton and complex structure moduli via fluxes~\cite{Giddings:2001yu}. The values of these parameters are not essential for the uplift dynamics we illustrate in this paper.
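Before the $\alpha'$-correction and the non-perturbative terms are switched on, the K\"ahler potential above is of the no-scale type, $K^{\alpha\bar{\beta}} K_\alpha K_{\bar{\beta}} = 3$, so the $|W_0|^2$ piece of the F-term potential cancels exactly. This can be verified numerically with finite differences at an arbitrary sample point (the moduli values below are arbitrary):

```python
import math

# Numerical check (at an arbitrary sample point) that K = -2 ln V with the
# Swiss-Cheese volume is no-scale before the xi-correction is included:
# K^{a b-bar} K_a K_{b-bar} = 3, so the |W_0|^2 piece of V_F cancels.

def K(tau):  # tau = (tau_a, tau_b, tau_c), with T + T-bar = 2 tau
    ta, tb, tc = tau
    vol = (2 * ta)**1.5 - (2 * tb)**1.5 - (2 * tc)**1.5
    return -2.0 * math.log(vol)

def grad(f, x, h=1e-5):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hess(f, x, h=1e-4):
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vals = []
            for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                y = list(x)
                y[i] += si * h
                y[j] += sj * h
                vals.append(f(y))
            H[i][j] = (vals[0] - vals[1] - vals[2] + vals[3]) / (4 * h * h)
    return H

def solve3(H, b):  # Gauss-Jordan elimination for the 3x3 system H v = b
    A = [row[:] + [bi] for row, bi in zip(H, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                fac = A[r][col] / A[col][col]
                A[r] = [a - fac * c for a, c in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

tau = [10.0, 1.2, 0.9]  # arbitrary point inside the Kaehler cone
g = grad(K, tau)
# For K depending only on tau = Re T: K_a = g_a / 2 and K_{a b-bar} = H_ab / 4,
# hence K^{a b-bar} K_a K_{b-bar} = g . H^{-1} . g
s = sum(gi * vi for gi, vi in zip(g, solve3(hess(K, tau), g)))
assert abs(s - 3.0) < 1e-3  # no-scale: exactly 3 up to discretization error
```

The result is exactly $3$ for any volume that is homogeneous of degree $3/2$ in the $\tau_i$, independently of the Swiss-Cheese signs; it is the $\xi$-correction in eq.~\eqref{F-term potential-simplified} that breaks this cancellation.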
The superpotential in~\eqref{F-term potential-simplified} corresponds to a particular choice of the general linear combination in~\eqref{Wnonpert}. The model~\eqref{F-term potential-simplified} is known to include the solutions of the LVS~\cite{Balasubramanian:2005zx}, which stabilizes the moduli in a non-supersymmetric way in the presence of the leading $\alpha'$-correction~\cite{Becker:2002nn} and one non-perturbative term. The $\alpha'$-correction is given by $\xi \propto - \chi g_s^{-3/2}$, where $\chi$ is the Euler number of the Calabi-Yau manifold.\footnote{Recently, it has been argued that the leading correction in both $\alpha'$ and the string coupling comes with the Euler characteristic of the six-dimensional manifold for $SU(3)$ structure manifolds as well as for Calabi-Yau compactifications~\cite{Grana:2014vva}.} The D-term potential is generated by the magnetized D7-branes wrapping a divisor of the Calabi-Yau~\cite{Haack:2006cy}: \begin{equation} V_D = {1 \over \re (f_D)} \left(\sum_j c_{Dj} \hat{K}_j \varphi_j - \xi_D\right)^2\,, \label{D-term potential} \end{equation} where the gauge kinetic function is \begin{equation} \text{Re}\,(f_D) = \frac12\, \int_{D_D} J \wedge J - \frac{1}{2g_s}\int_{D_D} \mathcal{F}_D \wedge \mathcal{F}_D \,,\label{fD} \end{equation} $\varphi_j$ are matter fields with diagonal $U(1)$ charges $c_{Dj}$ under the D7-brane stack, and the FI-term $\xi_D$ is defined in~\eqref{FID}. Now we redefine the coordinates: \begin{equation} T_s \equiv {1\over 2 }\left(T_b + T_c \right), \quad Z \equiv {1\over 2} \left(T_b - T_c \right).
\end{equation} When the D7-branes wrapping the divisor $D_D$ are magnetized and the matter fields are stabilized either at $\langle \varphi_i \rangle = 0$ or such that $ \langle \sum_j c_{Dj} \hat{K}_j \varphi_j \rangle =0$, the D-term potential may become \begin{equation} V_D \propto {1\over \re(f_D)}{1 \over {\cal V}^2} \left(\sqrt{\tau_b} - \sqrt{\tau_c} \right)^2\,, \label{D-term potential in simple model} \end{equation} using $\xi_D \propto \sqrt{\tau_b} - \sqrt{\tau_c}$ as implied by the flux $\mathcal{F}_D$, see~\eqref{tauctaubprop}, where we use $c=1$ for simplicity. In the large volume limit, the F-term potential generically scales as ${\cal O} ({\cal V}^{-3})$ at the minima of the LVS model. Stabilizing the K\"ahler moduli at ${\cal O} ({\cal V}^{-3})$ then requires a vanishing D-term potential, i.e., $\tau_b = \tau_c$, corresponding to $z \equiv \re Z=0$. Thanks to the topological coupling to the two-cycle supporting the magnetic flux, the imaginary mode of the $Z$ modulus is eaten by the $U(1)$ gauge boson, which becomes massive through the St\"uckelberg mechanism. Since the gauge boson has a mass of the order of the string scale, ${\cal O} ({\cal V}^{-1/2})$, the degree of freedom $\im Z$ charged under the anomalous $U(1)$, as well as the gauge boson, is integrated out at a high scale. Hence, we are left with the stabilization of the remaining moduli fields by the F-term potential. \subsection{F-term uplift\label{sec:f-term-uplift}} Next we consider the stabilization by the F-term potential given in (\ref{F-term potential-simplified}). We are interested in LVS-like minima ${\cal V} \sim e^{\hat{a}_i \tau_i}$ realizing an exponentially large volume.
Then the leading potential of order ${\cal V}^{-3}$ is given by \begin{equation} \begin{split} V & \sim {3 W_0^2 \xi \over 4 {\cal V}^3} + {2 W_0 \over {\cal V}^2} \left( a_2 A_2 {\tau_b} e^{-a_2 \tau_b/2} + a_3 A_3 (\tau_b + \tau_c) e^{-a_3 (\tau_b + \tau_c)/2}\right) \\ & + {2 \over 3 {\cal V}} \left(a_2^2 A_2^2 \sqrt{\tau_b} e^{- a_2 \tau_b} + a_3^2 A_3^2 (\sqrt{\tau_b}+\sqrt{\tau_c}) e^{-a_3 (\tau_b + \tau_c)} + 2 a_2 a_3 A_2 A_3 \sqrt{\tau_b} e^{-a_2 \tau_b/2 - a_3 (\tau_b + \tau_c)/2} \right)\,, \end{split} \label{effective potential at larger volume 0} \end{equation} where the imaginary directions are stabilized at $\im T_i= 0$, while $\im T_a$ is stabilized by non-perturbative effects that are omitted in (\ref{F-term potential-simplified}); these induce a very small mass for $\im T_a$ and have negligible influence on the stabilization of the other moduli. Although the general minima of $\im T_i$ are given by $a_i \im T_i = m_i \pi$ with $m_i \in {\mathbb Z}$, the different solutions just change the sign of $A_i$, so we can simply take the potential to be of the above form. As the D-term stabilizes $\tau_c = \tau_b$, the resulting potential becomes \begin{equation} \begin{split} V &\sim {3 W_0^2 \xi \over 4 {\cal V}^3} + {4 W_0 \over {\cal V}^2} \left(a_2 A_2 \tau_s e^{-a_2 \tau_s} + 2 a_3 A_3 \tau_s e^{-2 a_3 \tau_s}\right)\\ &+ {2 \sqrt{2} \over 3 {\cal V}} \left({a_2^2 A_2^2 \sqrt{\tau_s}} e^{-2 a_2 \tau_s} + {4 a_3^2 A_3^2 \sqrt{\tau_s}} e^{-4 a_3 \tau_s} + 2 a_2 a_3 A_2 A_3 \sqrt{\tau_s} e^{-(a_2 + 2 a_3)\tau_s} \right)\,, \end{split} \label{effective potential at larger volume with ts} \end{equation} where we have defined $\tau_s = \re T_s$. This form of the potential is similar to a racetrack-type potential.
Although cross terms of $A_2, A_3$ appear due to the common $T_b$ dependence, the important point for the uplift mechanism demonstrated in this paper is that the cross terms between the $T_b$ dependence of the $A_2$ term and the $T_c$ dependence of the $A_3$ term appear at ${\cal O} ({\cal V}^{-4})$.\footnote{Note that this would be more obvious if we started from a toy setup with $W = W_0 + A_2 e^{-a_2 T_b} + A_3 e^{-a_3 T_c}$, although one might not obtain a D-term constraint like $\tau_c = c \tau_b$.} If the cross term appeared at ${\cal O}({\cal V}^{-3})$, it would disturb the uplift to $dS$. We further redefine the fields and parameters such that there are no redundant parameters affecting the stabilization: \begin{equation} \begin{split} &x_s = a_2 \tau_s, \quad {\cal V}_x = {\cal V} a_2^{3/2},\\ &c_i = {A_i \over W_0},\quad \xi_x = \xi a_2^{3/2}, \quad \beta = {2 a_3 \over a_2}. \end{split} \label{redefined parameters} \end{equation} Then the effective potential at order ${\cal O}({\cal V}^{-3})$ becomes \begin{equation} \begin{split} \hat{V} \equiv \left({a_2^{-3} W_0^{-2}}\right) V \sim {3\xi_x \over 4 {\cal V}_x^3} + {4 c_2 x_s \over {\cal V}_x^2}e^{-x_s} + {2\sqrt{2} c_2^2 x_s^{1/2}\over 3 {\cal V}_x} e^{-2 x_s } + {4 \beta c_3 x_s \over {\cal V}_x^2} e^{-\beta x_s}. \end{split} \label{redefined potential} \end{equation} We have neglected the term proportional to $c_3^2$ and the cross term between $c_2, c_3$ in the expression above. In fact, these terms are not important for the uplift mechanism of our interest, and we will justify this assumption a posteriori later. Since the uplift term comes with $e^{-\beta x_s}$, it contributes at ${\cal O}({\cal V}_x^{-3})$ when $\beta \sim {\cal O} (1)$. Hence, it contributes at the same order as the stabilizing F-term potential, and no suppression factor provided by warping or dilaton dependence is required. Before performing the uplift, we consider the LVS solution by setting $c_3 =0$.
We use the set of parameters: \begin{equation} c_2 = -0.01, \quad \xi_x = 5. \label{test input parameter} \end{equation} The extremal equations $\partial_I \hat{V} = 0$ at $c_3=0$ can be simplified to \begin{equation} \xi_x = {64 \sqrt{2} (x_s-1) x_s^{5/2} \over (4 x_s -1)^2}, \quad c_2 = - {6\sqrt{2} (x_s-1) x_s^{1/2} \over (4 x_s -1) {\cal V}_x} e^{x_s}. \end{equation} Solving the equations above, we obtain \begin{equation} {\cal V}_x \sim 467, \quad x_s \sim 1.50.\label{VxxsAds} \end{equation} We can easily check that this solution gives an $AdS$ vacuum. Note that when we have just two moduli fields ${\cal V}_x, x_s$ in the LVS, the positivity of $\xi_x$ automatically guarantees the stability of the minima, since the required condition $x_i > 1$ is satisfied (see e.g.~\cite{Rummel:2013yta}). Now we consider non-zero $c_3$ for the uplift. As $c_3$ increases, the vacuum energy of the potential minimum increases and eventually crosses the Minkowski point. In Figure~\ref{fig:uplift-illustration}, we illustrate the behavior of the minimum as the value of $c_3$ is changed. Interestingly, the volume increases as the vacuum energy increases, suggesting that the effective description of the theory becomes better justified toward $dS$ vacua. On the other hand, the minimal eigenvalue of the Hessian decreases. Destabilization occurs when the uplift term dominates the entire potential. As this happens at higher positive values of the cosmological constant, there certainly exists a range of parameters yielding stable $dS$ vacua within this setup. As a reference, we show numerical values of the parameters close to crossing the Minkowski point. When we use \begin{equation} \beta = {5\over 6}, \label{test beta value} \end{equation} the minimum reaches Minkowski at \begin{equation} c_3 \sim 4.28 \times 10^{-3}, \quad {\cal V}_x \sim 3240, \quad x_s \sim 3.07. \end{equation} So we see that the volume increases quite drastically from the $AdS$ vacuum~\eqref{VxxsAds}.
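These numbers can be reproduced with a few lines of code (a sketch: we bisect the $c_3 = 0$ extremal equations and then plug the quoted Minkowski point into the approximate potential of eq.~\eqref{redefined potential}):

```python
import math

# Sketch: cross-check of the c_3 = 0 LVS extremum and of the quoted Minkowski
# point for the approximate potential, with c_2 = -0.01, xi_x = 5, beta = 5/6.
c2, xi, beta = -0.01, 5.0, 5.0 / 6.0

def Vhat(V, x, c3):
    return (0.75 * xi / V**3
            + 4 * c2 * x * math.exp(-x) / V**2
            + (2 * math.sqrt(2) / 3) * c2**2 * math.sqrt(x) * math.exp(-2 * x) / V
            + 4 * beta * c3 * x * math.exp(-beta * x) / V**2)

# --- AdS LVS minimum at c3 = 0: bisect the extremal equations of the text ---
def xi_of_x(x):
    return 64 * math.sqrt(2) * (x - 1) * x**2.5 / (4 * x - 1)**2

lo, hi = 1.1, 3.0                       # xi_of_x is increasing on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if xi_of_x(mid) < xi:
        lo = mid
    else:
        hi = mid
x_ads = 0.5 * (lo + hi)
V_ads = -6 * math.sqrt(2) * (x_ads - 1) * math.sqrt(x_ads) \
        * math.exp(x_ads) / ((4 * x_ads - 1) * c2)
print(round(x_ads, 2), round(V_ads))    # -> 1.5 467
assert Vhat(V_ads, x_ads, 0.0) < 0      # the c3 = 0 vacuum is AdS

# --- quoted Minkowski point: potential and gradient vanish there ---
c3, V0, x0 = 4.28e-3, 3240.0, 3.07
scale = 0.75 * xi / V0**3               # typical size of the individual terms
assert abs(Vhat(V0, x0, c3)) < 0.05 * scale
h = 1e-6
dV = (Vhat(V0 * (1 + h), x0, c3) - Vhat(V0 * (1 - h), x0, c3)) / (2 * h)  # = V0 dVhat/dV
dx = (Vhat(V0, x0 + h, c3) - Vhat(V0, x0 - h, c3)) / (2 * h)
assert abs(dV) < 0.05 * scale and abs(dx) < 0.05 * scale
```

The tolerances are set by the three significant digits of the quoted solution; tightening them requires re-solving the extremal equations at non-zero $c_3$.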
Since $c_3$ remains small compared to the input value of $c_2$, we see that our approximation of neglecting the term proportional to $c_3^2$ is justified. \begin{figure}[t] \includegraphics[width=20.5em]{lambda-c3.pdf} \vspace{4mm} \includegraphics[width=20em]{lambda-ddv.pdf} \includegraphics[width=20.5em]{lambda-vol.pdf} \caption{\footnotesize Illustration of the D-term generated racetrack uplift mechanism. We plot the cosmological constant $\hat \Lambda$ vs $c_3$, $\text{min}(\partial^2 \hat V)$ and $\mathcal{V}_x$ at the minima of the potential, especially near the Minkowski point.} \label{fig:uplift-illustration} \end{figure} In fact, it is not difficult to see how these values change when the $c_3^2$ terms and the cross terms between $c_2,c_3$ in the potential (\ref{effective potential at larger volume with ts}) are taken into account. With the input parameters of (\ref{test input parameter}), the Minkowski vacuum is then obtained at \begin{equation} c_3 \sim 5.11 \times 10^{-3}, \quad {\cal V}_x \sim 2860, \quad x_s \sim 2.61. \label{test solution with c_3^2} \end{equation} Since the obtained values are not significantly different from the case where the $c_3^2$ terms and the cross terms between $c_2, c_3$ are neglected, we conclude a posteriori that the uplift term is dominated by the term linear in $c_3$. Let us comment on the stabilization of the axionic partner of each modulus field. As stated, the imaginary mode of $Z$ is eaten by the massive gauge boson and hence integrated out at a high scale. The axionic partner of the big divisor modulus $T_a$ is stabilized by non-perturbative effects, yielding a tiny mass. The remaining modulus $\im T_s$ is stabilized by the F-term potential, as the D-term potential does not depend on it.
In the approximated potential up to ${\cal O}(\mathcal{V}^{-3})$, the Hessian of $ y_s = a_2 \im T_s$ is \begin{equation} \partial_{y_s}^2 \hat{V}|_{\rm ext} \sim 5.14 \times 10^{-10}\,, \end{equation} where we have included the $c_3^2$ and $c_2, c_3$ cross terms, and used the solution (\ref{test solution with c_3^2}) and $\im T_{a} = \im T_i =0$. Thus all K\"ahler moduli are stabilized. \subsection{Analytical estimate} It is difficult to derive a generic analytical condition for the D-term generated racetrack uplift, since the formulas remain complicated even after several approximations. However, some of the expressions can be simplified under an additional reasonable approximation. In this subsection, we present some analytical estimates for a better understanding. Since we checked that the uplift mechanism works even at linear order in the uplift parameter $c_3$, we only keep terms up to linear order in $c_3$ and neglect the higher order terms, including the cross terms. The extremal condition $\partial_i \hat{V} = 0$ of the potential (\ref{redefined potential}) then simplifies to \begin{equation} \begin{split} &c_2 \sim -{6\sqrt{2x_s} (x_s-1) \over 4 x_s -1} {e^{x_s} \over {\cal V}_x} + c_3 {\beta (\beta x_s -1) \over x_s -1} e^{(1-\beta)x_s} ,\\ &\xi_x \sim {64\sqrt{2} x_s^{5/2} (x_s-1) \over (4x_s-1)^2} - c_3 {32 \beta x_s^2 \left(2(\beta+2)x_s + \beta-7 \right) \over 9 (x_s-1) (4 x_s-1)} e^{-\beta x_s} {\cal V}_x . \end{split} \label{extremal condition of effective potential} \end{equation} Although our interest is the uplift toward $dS$ vacua, we have to cross the Minkowski point along the way. Thus, the condition that the minimum structure is maintained when uplifted to a Minkowski vacuum is a necessary condition for the $dS$ uplift mechanism.
The condition for Minkowski at the extrema, $\hat{V}|_{\rm ext} = 0$, reads \begin{equation} \begin{split} c_3 \sim {18 \sqrt{2} (\beta-1) x_s^{3/2} \over \beta (4 (\beta-1) x_s - 3)^2} {e^{\beta x_s} \over {\cal V}_x}. \end{split} \label{minkowski condition} \end{equation} Next, we proceed to check the stability at the Minkowski point. Although we know the conditions required for stability, the Hessian is still too complicated for an analytical analysis. So we further focus on the region satisfying $x_s \gg 1$. This region is motivated since the $AdS$ minima, before adding the uplift term, are guaranteed to be stable: in LVS type stabilizations all eigenvalues of the Hessian are positive when $x_s > 1$ is satisfied (see e.g.~\cite{Rummel:2013yta}). Furthermore, higher instanton corrections can be safely neglected there. As shown in Figure~\ref{fig:uplift-illustration}, the minima can be uplifted while keeping the Hessian positive until reaching the destabilization point at a relatively high positive vacuum energy. Hence, $x_s \gg 1$ is a suitable regime in which to exhibit the basic features of the D-term generated racetrack uplift mechanism. Since there is no reason to take $\beta$ to be particularly small or large, we consider $\beta \sim {\cal O} (1)$.
When we use the approximation $x_s \gg 1$, a component of the Hessian and its determinant at the extrema (\ref{extremal condition of effective potential}) become \begin{equation} \begin{split} \partial_{{\cal V}_x}^2 \hat{V}|_{\rm ext} \sim& {6\sqrt{2} x_s^{3/2} \over {\cal V}_x^5} - c_3 {8 \beta x_s e^{-\beta x_s} \over {\cal V}_x^4} \sim {6\sqrt{2} x_s^{3/2} \over {\cal V}_x^5},\\ \det \left(\partial_i \partial_j \hat{V} \right)|_{\rm ext} \sim& {162 x_s^2 \over {\cal V}_x^8} + c_3 {24\sqrt{2} \beta(\beta^2 + \beta -2) x_s^{5/2} e^{-\beta x_s} \over {\cal V}_x^7} \sim {54 (1- \beta) x_s^2 \over {\cal V}_x^8}\,, \end{split} \label{analytical hessian at minkowski} \end{equation} where in the last step of both equations we have used the Minkowski condition (\ref{minkowski condition}). According to Sylvester's criterion, the positive definiteness of a matrix can be checked via the positivity of the determinants of all its leading principal submatrices. Thus it is enough to check the positivity of the quantities in (\ref{analytical hessian at minkowski}). Therefore we conclude that stability at the Minkowski point requires $\beta < 1$. This condition is clearly satisfied in the previous numerical example with (\ref{test beta value}), which supports the crude approximations we made in this subsection. Note that the Hessian of the imaginary mode is guaranteed to be positive under the above approximations: \begin{equation} \partial_{y_s}^2 \hat{V}|_{\rm ext} \sim {6\sqrt{2} x_s^{3/2} \over {\cal V}_x^3} - c_3 {4 \beta^2 (\beta +1) x_s e^{-\beta x_s} \over {\cal V}_x^2} \sim {6\sqrt{2} x_s^{3/2} \over {\cal V}_x^3}. \end{equation} Finally, let us check the extremal and Minkowski conditions in the limit $x_s \gg 1$. All conditions then simplify to \begin{equation} \begin{split} \xi_x \sim 4 \sqrt{2} x_s^{3/2}, \quad c_2 \sim -{3 \sqrt{x_s} \over \sqrt{2}} {e^{x_s} \over {\cal V}_x}, \quad c_3 \sim {9 \over 4 \sqrt{2x_s} \beta (1-\beta)} {e^{\beta x_s} \over {\cal V}_x}.
\end{split} \end{equation} We see that the minimum requires $\xi_x >0$ and $c_2 < 0$, in agreement with the requirements for a minimum of the two-moduli LVS at $AdS$. The stability condition $\beta < 1$ implies $c_3 > 0$. In fact, the extremal conditions for $\xi_x, c_2$ are simply the leading order approximations of the respective first terms in \eqref{extremal condition of effective potential}, as the $c_3$ contribution is sub-dominant. This confirms that the linear approximation in $c_3$ is compatible with $x_s \gg 1$. Hence, we can regard the last term in the potential (\ref{redefined potential}) as the uplift term. \section{On realization in models with more moduli\label{sec:real-more-moduli}} In this section, we show that the uplift mechanism also works well in the presence of additional K\"ahler moduli in Swiss-Cheese type Calabi-Yau compactifications. We consider a simple toy model with $h^{1,1}_+=4$, which captures the essential features of the D-term generated racetrack uplift mechanism, defined by \begin{equation} \begin{split} K=& - 2 \ln \left({\cal V} + {\xi \over 2}\right), \quad {\cal V} = (T_a + \bar{T}_a)^{3/2} - (T_b + \bar{T}_b)^{3/2} - (T_c + \bar{T}_c)^{3/2} - (T_e + \bar{T}_e)^{3/2},\\ W =& W_0 + A_2 e^{-a_2 T_b} + A_3 e^{-a_3 (T_b+T_c)} + A_4 e^{- a_4 T_e}. \end{split} \end{equation} Again we are interested in the case of a Swiss-Cheese volume for moduli stabilization of the LVS type. Note that we use the name $T_e$ to avoid confusion with $T_D$. Taking into account the D-term potential generated by the magnetized D7-branes wrapping the divisor $D_D$, we assume again that the $Z =\frac12 (T_b - T_c)$ modulus is stabilized at $Z=0$.
Setting $a_4 = a_2$ for simplicity, the effective potential at ${\cal O}({\cal V}^{-3})$ from the F-terms is given by \begin{equation} \begin{split} {\hat{V}} \equiv \left({a_2^{-3} W_0^{-2}}\right) V \sim& {3\xi_x \over 4 {\cal V}_x^3} + {4 c_2 x_s \over {\cal V}_x^2}e^{-x_s} + {2\sqrt{2} c_2^2 x_s^{1/2}\over 3 {\cal V}_x} e^{-2 x_s} + {4 c_4 x_e \over {\cal V}_x^2}e^{-x_e} + {2\sqrt{2} c_4^2 x_e^{1/2}\over 3 {\cal V}_x} e^{-2 x_e}\\ &+ {4 \beta c_3 x_s \over {\cal V}_x^2} e^{-\beta x_s} + {\sqrt{2} c_3^2 x_s^{1/2}\over 3 {\cal V}_x} e^{-2 \beta x_s} + {2\sqrt{2}\beta c_2 c_3 x_s^{1/2} \over 3 {\cal V}_x} e^{-(1+\beta)x_s}, \end{split} \end{equation} where we have further defined $x_e = a_4 \tau_e$ and $c_4 = A_4/ W_0$ in addition to~\eqref{redefined parameters}. Here we have included the term proportional to $c_3^2$ as well as the cross term $c_2 c_3$, even though they are potentially subleading. When we use the set of parameters \begin{equation} c_2 = -0.0114, \quad c_4 = -3.38 \times 10^{-4}, \quad \xi_x = 19, \label{input of 4 moduli model} \end{equation} the $AdS$ LVS minimum at $c_3=0$ is located at \begin{equation} \begin{split} {\cal V}_x \sim 2740, \quad x_s \sim 2.60, \quad x_e \sim 1.12. \end{split} \label{4modminval} \end{equation} The stability of multi-K\"ahler moduli models of the LVS type is ensured if the constraint $x_i>1$ is satisfied~\cite{Rummel:2013yta}. Hence, the extremal point~\eqref{4modminval} is stable. Now we turn on the uplift term with $c_3\neq 0$ and $\beta=5/6$. The minimum with the input parameters (\ref{input of 4 moduli model}) reaches Minkowski at \begin{equation} \begin{split} c_3 \sim 4.55 \times 10^{-3}, \quad {\cal V}_x \sim 5.64 \times 10^4, \quad x_s \sim 5.45, \quad x_e \sim 2.26. \end{split} \end{equation} Although the volume changes drastically during the uplift toward $dS$ vacua, we can check the stability of the minimum by plugging these values into the Hessian, as in the simple three-moduli model.
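As a cross-check (a sketch: at $c_3 = 0$ the single-modulus LVS extremal relations of the three-moduli analysis apply to each small modulus separately, with the $\xi_x$ contributions adding up), the quoted $AdS$ minimum satisfies these relations at the few-percent level set by the rounding of the quoted digits:

```python
import math

# Sketch: the quoted AdS minimum of the four-moduli model satisfies the
# single-modulus LVS extremal relations (each small modulus separately,
# the xi_x contributions adding up) to within a few percent, the accuracy
# set by the rounding of the quoted digits.

def c_of(x, V):   # c_i = -6 sqrt(2) (x-1) sqrt(x) e^x / ((4x-1) V)
    return -6 * math.sqrt(2) * (x - 1) * math.sqrt(x) * math.exp(x) / ((4 * x - 1) * V)

def xi_of(x):     # xi_x contribution 64 sqrt(2) (x-1) x^{5/2} / (4x-1)^2
    return 64 * math.sqrt(2) * (x - 1) * x**2.5 / (4 * x - 1)**2

V, x_s, x_e = 2740.0, 2.60, 1.12   # quoted c_3 = 0 minimum
assert abs(c_of(x_s, V) / (-0.0114) - 1) < 0.05
assert abs(c_of(x_e, V) / (-3.38e-4) - 1) < 0.05
assert abs((xi_of(x_s) + xi_of(x_e)) / 19.0 - 1) < 0.05
```

The additivity of the $\xi_x$ contributions reflects the fact that, at this order, each small modulus enters the potential through its own decoupled exponential terms.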
The cosmological constant can be increased further into the positive region while maintaining stability, until the minimum crosses the potential barrier and decompactification occurs. \section{Discussion} We have proposed an uplift mechanism using the structure of at least two small K\"ahler moduli $T_b$ and $T_c$ in Swiss-Cheese type compactifications. The uplift contribution arises as an F-term potential when using a D-term condition which fixes $\re T_b = c \re T_c$ at a higher scale, where $c$ is determined by magnetized fluxes on D7-branes. The uplift term becomes of the form $e^{-a_s \tau_s}/{\cal V}^2$ at large volumes, and hence it can naturally balance against the stabilizing potential in the Large Volume Scenario (LVS), without requiring suppressions in the coefficient, for instance by warping or a dilaton-dependent non-perturbative effect. In addition, we have shown that the D-term generated racetrack uplift works in the presence of additional K\"ahler moduli. Together with the fact that the constraints on the uplift parameters are rather relaxed, i.e., $\beta< 1$ and $c_3>0$, this makes us optimistic that there should be many manifolds admitting the proposed uplift mechanism. Since the proposed uplift mechanism requires certain conditions for a D-term constraint and two non-vanishing non-perturbative effects, it would be interesting to construct an explicit realization of this model in a particular compactification. Such an explicit construction requires matching all known consistency conditions, such as cancellation of Freed-Witten anomalies and cancellation of the D3, D5, and D7 tadpoles~\cite{Cicoli:2011qg,Cicoli:2012vw,Louis:2012nb,Cicoli:2013mpa,Cicoli:2013zha}. We hope to report on an explicit example in another paper. Furthermore, the phenomenological aspects of the proposed uplift mechanism are also of interest.
Even though the moduli are essentially stabilized as in the LVS, the resultant behavior of the mass spectrum and/or soft SUSY breaking terms may differ depending on which uplift mechanism is employed to realize the $dS$ vacuum. Finally, in this paper we concentrated on analyzing the structure of $dS$ minima. However, the structure of the potential is also changed by the uplift term in regions that might be important for inflationary dynamics. We leave the analysis of possible inflation scenarios, as well as a comparison of the phenomenological consequences with other uplift proposals, to future work. \section*{Acknowledgments} We would like to thank Joseph P. Conlon and Roberto Valandro for valuable discussions and important comments, and also the organizers of the workshop ``String Phenomenology 2014'' held at ICTP Trieste, Italy, where some of the results of this paper were presented. YS is grateful to the Rudolph Peierls Centre for Theoretical Physics, University of Oxford, where part of this work was done, for their hospitality and support. MR is supported by the ERC grant `Supersymmetry Breaking in String Theory'. This work is partially supported by the Grant-in-Aid for Scientific Research (No. 23244057) from the Japan Society for the Promotion of Science.
\section{Introduction} It is well known that the perturbative approach to finding a theory of quantum gravity faces the problem of non-renormalizability. This is partly solved in string theory, but at the expense of introducing extra dimensions. There exist, however, non-renormalizable field theory models for which the quantum theory is known to exist non-perturbatively. One example of this is the Gross-Neveu model in three dimensions, which is a theory with a four-fermion interaction; the interaction coupling constant has negative mass dimension, indicating power-counting non-renormalizability. The model exists non-perturbatively in the ultraviolet regime \cite{3dgross-neveau-sol}. Another is Einstein gravity in three dimensions \cite{witten3d,ashetal3d}. But this is a theory with no propagating degrees of freedom, and it is still not clear how to construct a consistent quantum theory of 3D gravity \cite{witten2}. In any case, we do not know how to proceed from lower to higher dimensional theories, and so this case too may be considered special. Similar comments apply to BF theory \cite{vh-topqm,bg}. There is so far only one example of a four-dimensional diffeomorphism-invariant theory that is not renormalizable, but which exists as a non-perturbative quantum theory \cite{hk,ashetal-hk}. The catch is that although the model has local degrees of freedom, its dynamics is trivial. It nevertheless shows that perturbative non-renormalizability is not necessarily a sound criterion for discarding a theory. The question of whether this could be the case for quantum gravity in four dimensions has been one motivation for seeking a non-perturbative canonical formulation. However, no such program has yet been completed due to the problem of dealing with the Hamiltonian constraint in the Dirac quantization approach, which leads to the Wheeler-DeWitt equation.
Nevertheless, there are recent indications that this can be circumvented by deparametrizing the system using matter fields to fix time and space gauges. One such approach uses a pressureless dust to fix only a time gauge, thereby eliminating the Hamiltonian constraint problem \cite{hp-dust} and replacing it by a true Hamiltonian with only spatial diffeomorphism symmetry. Motivated by such approaches, we present a new type of theory with a dynamical metric and a fixed one-form field\footnote{We note that there is a model with a dynamical scalar field \cite{fb} to which the methods of this paper may be applied; in a particular time gauge, this model has an interesting physical Hamiltonian with a diffeomorphism constraint.}. The theory is such that it has a built-in time that does not arise via a gauge fixing as in the aforementioned approaches. Its canonical decomposition reveals that there is a true Hamiltonian together with spatial diffeomorphism and Gauss constraints, which generate the only gauge symmetry. The theory can be coupled to matter in a natural way. Its quantization can be carried out using the methods of loop quantum gravity. It therefore provides an example of a non-renormalizable geometric theory whose quantum theory exists non-perturbatively. In the following we describe the theory and its canonical formulation, and then outline a non-perturbative quantization scheme using the background-independent techniques developed in the loop quantum gravity (LQG) program. \section{The model} The fields in the theory are an $su(2)$ gauge field $A_\mu^i$, a dreibein $e_\mu^i$, a scalar field $\phi$, and a fixed non-dynamical one-form field $\zeta_\alpha$ which gives the two-form $\omega=d\zeta$. ($i,j,k\cdots$ are $su(2)$ indices, and $\alpha,\beta \cdots $ are world indices.)
The dreibein fields $e_\mu^i$ define a degenerate $4$-metric, and give rise to the tensor density \begin{equation} \tilde{u}^\alpha = \frac{1}{3!} \tilde{\eta}^{\alpha\beta\mu\nu}e_\beta^i e_\mu^j e_\nu^k\ \epsilon_{ijk}, \end{equation} where $\tilde{\eta}^{\alpha\beta\mu\nu}$ is the Levi-Civita symbol (independent of $e_\alpha^i$ and $\zeta_\alpha$), and $\epsilon^{ijk}$ is the $su(2)$ structure constant. Using this we define a scalar density and vector field by \begin{equation} \tilde{u} = \tilde{u}^\alpha \zeta_\alpha, \ \ \ \ \ \ u^\alpha = \frac{\tilde{u}^\alpha} {\tilde{u}}, \label{u} \end{equation} and a co-triad by \begin{equation} e^{\alpha}_i = \frac{1}{2\tilde{u}} \ \tilde{\eta}^{\alpha\beta\mu\nu}\zeta_\beta e_\mu^j e_\nu^k\ \epsilon_{ijk}. \label{cotriad} \end{equation} The scalar density $\tilde{u}$ would vanish if $\zeta_\alpha$ were a linear combination of the $e_\alpha^i$, so we assume this is not the case. These definitions give the relations\footnote{With these relations we note that the 2-form $\omega$ is invertible (because $u\cdot \omega ={\cal L}_u \zeta \ne 0$, and $e^{\alpha}\cdot \omega \ne 0$), therefore it is a symplectic form. However, this fact is not needed in our subsequent development of the model.} \begin{equation} u^\alpha \zeta_\alpha = 1,\ \ \ \ \ \ u^\alpha e_\alpha^i = 0, \ \ \ \ \ \zeta_\alpha e^{\alpha}_i=0 \end{equation} \begin{equation} e^{\alpha}_i e_\alpha^j = \delta_i^j, \ \ \ \ \ e^{\alpha}_i e_\beta^i = \delta^\alpha_\beta - u^\alpha \zeta_\beta. \end{equation} We note finally that a non-degenerate Euclidean or Lorentzian signature $4$-metric may be defined by \begin{equation} g_{\alpha\beta} = \pm \zeta_\alpha \zeta_\beta + e_\alpha^i e_\beta^i. \label{4metric} \end{equation} We are now ready to define the action for the model which contains the field $\zeta_\alpha$ as a fixed ``background'' structure.
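Before turning to the action, the algebraic identities above can be spot-checked numerically. The sketch below is an illustration, not part of the paper: it builds $\tilde{u}^\alpha$, $u^\alpha$ and the co-triad from a random dreibein and one-form with NumPy and verifies $u^\alpha\zeta_\alpha=1$, $u^\alpha e_\alpha^i=0$ and $\zeta_\alpha e^{\alpha}_i=0$; depending on the orientation convention chosen for the Levi-Civita symbol, $e^{\alpha}_i e_\alpha^j$ comes out as $\pm\delta_i^j$.

```python
import itertools
import numpy as np

def levi_civita(n):
    """Totally antisymmetric symbol with n indices."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # parity via counting inversions
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if perm[a] > perm[b])
        eps[perm] = (-1) ** inv
    return eps

eta4 = levi_civita(4)   # \tilde{\eta}^{alpha beta mu nu}
eps3 = levi_civita(3)   # \epsilon_{ijk}

rng = np.random.default_rng(0)
e = rng.normal(size=(4, 3))      # dreibein e_alpha^i (generic, so u-tilde != 0)
zeta = rng.normal(size=4)        # fixed one-form zeta_alpha

# \tilde{u}^alpha = (1/3!) eta^{a b m n} e_b^i e_m^j e_n^k eps_{ijk}
ut = np.einsum('abmn,bi,mj,nk,ijk->a', eta4, e, e, e, eps3) / 6.0
u_dens = ut @ zeta               # scalar density \tilde{u}
u = ut / u_dens                  # vector field u^alpha

# co-triad e^alpha_i = (1/(2 \tilde{u})) eta^{a b m n} zeta_b e_m^j e_n^k eps_{ijk}
E = np.einsum('abmn,b,mj,nk,ijk->ai', eta4, zeta, e, e, eps3) / (2.0 * u_dens)

print(u @ zeta)      # -> 1
print(u @ e)         # -> (0, 0, 0)
print(zeta @ E)      # -> (0, 0, 0)
print(E.T @ e)       # -> +/- identity, sign depending on orientation convention
```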
The action is \begin{eqnarray} S&=& S_G + S_\Lambda + S_\phi \nonumber\\ &=& \frac{1}{l^2}\int_M \tilde{\eta}^{\alpha\beta\mu\nu} \epsilon^{ijk}e_\alpha^ie_\beta^j F_{\mu\nu}^k(A) + \Lambda \int_M \tilde{u} \\ && + \int_M \tilde{u}\ \left( -u^\mu u^\nu\partial_\mu\phi \partial_\nu\phi + e^{\mu}_ie^{\nu}_i \partial_\mu\phi \partial_\nu \phi\right). \end{eqnarray} The first term is the action of the model introduced in \cite{hk}, where $F(A)$ is the curvature of the gauge field $A$. Its canonical theory has an identically vanishing Hamiltonian constraint, so it is a theory with three local degrees of freedom and no dynamics. The fixed one-form field $\zeta_\alpha$ makes it possible to introduce the cosmological constant term and coupling to matter in the manner displayed. The coupling constant $l$ is a fundamental length scale obtained by assigning the usual canonical dimension to the connection, i.e. $A$ has mass dimension one, and $e$ is dimensionless. This assignment makes the theory power-counting non-renormalizable just as in Einstein gravity, since changing the gauge algebra from $so(3,1)$ to $su(2)$ does not affect this counting. \subsection{Hamiltonian theory} To construct the Hamiltonian theory, let us introduce the embedding variable $X^\alpha (t,x^a)$ which provides a smooth map \begin{equation} X: \mathbb{R}\times \Sigma \longrightarrow M \end{equation} where $\Sigma$ is a three-manifold. The inverse map gives the functions $x^a(X)$ and $t(X)$. The 3+1 split of the first term in the action is obtained \cite{hk} by substituting into the action the decompositions \begin{equation} \tilde{\eta}^{\alpha\beta\mu\nu} = \tilde{\eta}^{abc}\dot{X}^\alpha X^\beta_{,a}X^\mu_{,b}X^\nu_{,c}, \end{equation} where the time deformation vector field $\dot{X}^\alpha$ decomposes as \begin{equation} \dot{X}^\alpha = u^\alpha + N^\alpha = u^\alpha + X^\alpha_{,a}N^a.
\label{Xdot} \end{equation} We also use the spatial projections of the fields defined by \begin{equation} e_a^i = e_\alpha^i X^\alpha_{,a}, \ \ \ A_a^i = A_\alpha^i X^\alpha_{,a}, \ \ \ \tilde{e}^{ai} = \tilde{\eta}^{abc}e_b^je_c^k\epsilon^{ijk}, \ \ \ e^{ai}=\tilde{e}^{ai}/\tilde{e}, \end{equation} where $\tilde{e}= \tilde{\eta}^{abc}e_a^ie_b^je_c^k\epsilon^{ijk}$. These are the decompositions needed to arrive at the canonical form of the first part of the action, which is \begin{equation} S_G = \int_M d^3x dt\left[\tilde{e}^{ai} \dot{A}_a^i - N^a(\partial_{[a}A_{b]}^i\tilde{e}^{bi} -A_a^i\partial_b \tilde{e}^{bi}) - \Lambda^i D_a\tilde{e}^{ai} \right] \end{equation} where $N^a = e^{ai}(e_\beta^i \dot{X}^\beta)$ and $\Lambda^i = A_\alpha^iu^\alpha$. This identifies the fundamental Poisson brackets for the geometric variables: \begin{equation} \{A^i_a(x),\tilde{e}^b_j(x')\}=\tilde{\delta}^3(x-x')\delta^i_j \delta^b_a. \end{equation} To obtain the canonical decomposition of $S_\Lambda$ and $S_\phi$ we note first that \begin{eqnarray} \tilde{u} &=& \dot{X}^\alpha \zeta_\alpha \tilde{e} = (1 + X^\alpha_{,a}\zeta_\alpha N^a)\tilde{e}, \nonumber\\ e^{\alpha i} &=& X^\alpha_{,a}e^{ai} + \dot{X}^\alpha (t_{,\beta}e^{\beta i}). \end{eqnarray} Now the identity $e^{\alpha i} \zeta_\alpha =0$ applied to the last equation gives \begin{equation} 0= X^\alpha_{,a} \zeta_\alpha e^{ai} + \dot{X}^\alpha \zeta_\alpha (t_{,\beta}e^{\beta i}). \end{equation} Thus, if we choose the foliation $X^\alpha(t,x^a)$ such that $X^\alpha_{,a} \zeta_\alpha=0$ (i.e.\ adapted to the fixed field $\zeta_\alpha$) we have \begin{equation} \tilde{u} = \tilde{e}, \ \ \ \ \ \ \ e^{\alpha i} = X^\alpha_{,a}e^{ai}. \end{equation} Substituting these together with (\ref{Xdot}) into the action gives \begin{equation} S_\Lambda + S_\phi = \int d^3xdt\left[ \Lambda \tilde{e} + \frac{P^2_\phi}{2\tilde{e}} + \tilde{e}e^{ai}e^{bi}\partial_a\phi \partial_b\phi -N^a P_\phi \partial_a\phi \right].
\end{equation} The Hamiltonian decomposition of the full action then shows that the phase space variables are the canonical pairs $(e^{ai}, A_a^i)$ and $(\phi, P_\phi)$ with the Hamiltonian \begin{equation} H = \int d^3x \left[ \Lambda \tilde{e} + \frac{P^2_\phi}{4\tilde{e}} + \tilde{e}e^{ai}e^{bi}\partial_a\phi \partial_b\phi\right].\label{cham} \end{equation} The theory has two sets of first class constraints that generate SU(2) gauge transformations and spatial diffeomorphisms. Thus the theory has four local configuration degrees of freedom of which three are geometric and one is matter. The remarkable feature is that true dynamics is obtained by introducing a fixed one-form field which may be interpreted as providing a symplectic structure on the manifold. Thus the presence of this structure may be viewed as providing a time variable, while maintaining full general covariance of the action. The Hamiltonian equations of motion provide a view of the dynamics. Evolution is a combination of gauge (Gauss and spatial diffeomorphisms) and true motion via $H$. We note first that the three geometry does not evolve: \begin{equation} \dot{\tilde e}^{ai} = \{ \tilde{e}^{ai}, H\} = 0, \end{equation} but its conjugate connection does \begin{equation} \dot{A}_a^i = \{ A_a^i, H \} = e_a^i\left( \Lambda- \frac{P_\phi^2}{4 \tilde{e}^2} - e^{bj}e^{cj}\partial_b\phi \partial_c\phi \right)+2\partial_a\phi e^b_i\partial_b \phi. \end{equation} The scalar field equations are the usual ones for a field on a curved space-time given by the metric (\ref{4metric}). The geometrical phase space variables in our theory are identical to those of the Ashtekar-Barbero canonical formulation of general relativity. There the connection $A_a^i$ is a sum of the (spatial) metric connection and the extrinsic curvature. Thus the comparison allows us to interpret the canonical equations of motion of our model as evolving the extrinsic curvature, but not the spatial metric. 
With this in mind, the model may be viewed as evolving both the matter field and the four-geometry (through the connection $A_a^i$). \section{Quantization} Using the similarity of the geometrical part of the phase space with that of general relativity in the connection-triad variables, we turn to a discussion of the quantum theory of this model. Using an extension of the spin network Hilbert space used in loop quantum gravity to include scalar matter degrees of freedom \cite{thiemann-qsd5}, we will see that it is possible to set up a complete quantum theory. The starting point of the LQG approach is the set of phase space functions \begin{equation} U_\gamma[A] = P \exp\int_\gamma A_a^i\tau^i dx^a,\ \ \ \ \ \ \ F^i_S = \int_S \tilde{e}_a^i dS^a, \end{equation} where $\gamma$ is a loop and $S$ a surface in a spatial hypersurface $\Sigma$, and $\tau^i$ are generators of $SU(2)$. Gauge invariant versions of these were first used for quantization of BF theory \cite{vh-topqm} and in \cite{bg}. Their Poisson bracket forms the so-called holonomy-flux algebra \begin{equation} \{ U_\gamma[A], F^i_S \} = \int_\gamma ds \int_S\ d^2\sigma U_\gamma[A]\tau^i \delta^3(\gamma(s), S(\sigma)). \end{equation} The analogous observables for the scalar field are \begin{equation} V_k(\phi(x)) = \exp [ik\phi(x)], \ \ \ \ \ \ \ \ P_f = \int_\Sigma f P_\phi d^3x, \end{equation} where $k \in \mathbb{R}$ and $f(x)$ is a suitable function with rapid fall-off. These satisfy \begin{equation} \{ V_k (x), P_f \} = ik f(x)V_k(x). \end{equation} \subsection{Geometry Hilbert space} There is a well-defined path to quantization of the gravitational variables, discussed in detail in a number of reviews \cite{lqg-revs}. Therefore we restrict attention to describing the basic guidelines. A crucial first step is the choice of Hilbert space for a connection representation $\Psi[A]$.
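Before proceeding, the path-ordered exponential $U_\gamma[A]$ introduced above can be illustrated numerically by multiplying link-by-link $SU(2)$ factors along a discretised path. The Python sketch below is not part of the paper; it uses the closed form $e^{i\theta\,\hat n\cdot\vec\sigma}=\cos\theta\,\mathbb{1}+i\sin\theta\,\hat n\cdot\vec\sigma$ and checks that the result is unitary with unit determinant, i.e.\ an $SU(2)$ element.

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2_exp(a):
    """exp(i a . sigma) for a real 3-vector a, via the closed-form formula."""
    theta = np.linalg.norm(a)
    if theta < 1e-15:
        return np.eye(2, dtype=complex)
    n = a / theta
    return (np.cos(theta) * np.eye(2)
            + 1j * np.sin(theta) * np.einsum('i,ijk->jk', n, SIGMA))

def holonomy(A_samples, ds):
    """Path-ordered product of link exponentials for samples of A_a^i dx^a."""
    U = np.eye(2, dtype=complex)
    for a in A_samples:                 # later path points multiply on the left
        U = su2_exp(ds * np.asarray(a)) @ U
    return U

rng = np.random.default_rng(1)
samples = rng.normal(size=(50, 3))      # random connection sampled along a loop
U = holonomy(samples, ds=0.1)
```

Each link factor has unit determinant (since the Pauli matrices are traceless), so the full path-ordered product stays in $SU(2)$.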
One considers an oriented graph $\Gamma$ with ordered edges $e_1, e_2 \cdots e_N$, and vertices $n_1,n_2\cdots n_M$ embedded in the spatial surface $\Sigma$, and associates the holonomy function in the representation $j$ of SU(2), $U_{e}^j[A]$, with edge $e$. A spin network state is a function of such holonomies \begin{equation} f[A] = f(U_{e_1}^{j_1}, U_{e_2}^{j_2},\cdots U_{e_N}^{j_N}). \end{equation} These are essentially functions of $SU(2)$ group elements, so the natural inner product is the Haar measure on (tensor product copies of) this group; this defines the kinematical Hilbert space ${\cal H}_{Kin}$. A convenient orthonormal basis for this space of functions is the spin network basis; the wave function of a graph with a single edge $e$ is the matrix $(U_e^j)[A]_{m_1,m_2} \equiv \langle A| j;m_1,m_2\rangle$ in the representation $j$, where $m_1,m_2$ are its matrix indices. This generalizes readily to multi-edge graphs. The space ${\cal H}_{Kin}$ is not the physical Hilbert space of our model, since its elements are neither gauge nor spatial diffeomorphism invariant. The LQG path to achieving invariance under these transformations proceeds in two steps. The first step is the formulation of ${\cal H}_{Kin}^G$, the space of SU(2) invariant states. The usual formulation of this involves group averaging of states in ${\cal H}_{Kin}$. Intuitively this amounts to tracing over matrix indices using SU(2) invariant tensors (called intertwiners) at all vertices of the graph $\Gamma$, and ensuring there are no open ``dangling'' edges. This gives the Gauss invariant states. Such states may be represented as the kets \begin{equation} |\Gamma;{\bf j};{\bf I}\rangle:=|\Gamma; j_1, \cdots j_N; I_1, \cdots I_M\rangle, \end{equation} where a spin $j$ is associated with each edge and an intertwiner $I$ with each vertex of the graph $\Gamma$.
The inner product in ${\cal H}_{Kin}^G$ is the obvious one guided by this characterization of the basis: \begin{eqnarray} &&\langle \Gamma; j_1, \cdots j_N; I_1, \cdots I_M |\Gamma; j_1', \cdots j_N'; I_1', \cdots I_M'\rangle \nonumber\\ &&= \delta_{j_1,j_1'}\cdots \delta_{j_N,j_N'} \delta_{I_1,I_1'}\cdots \delta_{I_M, I_M'}, \end{eqnarray} if the graphs are the same, and zero otherwise. For spin networks with only trivalent vertices, the intertwiners are unique (up to a multiplicative constant). An explicit example of a gauge invariant trivalent spin network state with three edges is \begin{equation} \psi[A]_{1,\frac{1}{2},\frac{1}{2}} = \langle A| 1, {\textstyle\frac{1}{2}}, {\textstyle\frac{1}{2}}; \sigma, \sigma\rangle=[U^{\frac{1}{2}}_{e_1}]^A_{\ B} \ [U^{1}_{e_2}]^i_{\ j} \ [U^{\frac{1}{2}}_{e_3}]^C_{\ D}\ \sigma_{iAC}\ \sigma^{jBD}, \label{gspin} \end{equation} where $\sigma^i$ are the Pauli matrices. This example also illustrates why edges must be oriented; the matrix indices $(iAC)$ come together at one vertex and $(jBD)$ at the other. Having characterized ${\cal H}_{Kin}^G$ in this manner, the next step is to address the requirement of invariance under spatial diffeomorphisms. We note first that there is a natural action of diffeomorphisms on the gauge invariant spin network states such as (\ref{gspin}). This stems from the observation that such transformations ``drag the graph around'' but do not affect the combinatoric information in the spins and intertwiners \cite{ashetal-hk,lqg-revs}. Formally, for $\phi \in { Diff}(\Sigma)$, we have \begin{equation} {\cal {U}}_D[\phi] |\Gamma; j_1, \cdots j_N; I_1, \cdots I_M\rangle = |\phi^{-1}\Gamma; j_1, \cdots j_N; I_1, \cdots I_M\rangle. \end{equation} Thus for a fixed graph $\Gamma$ the diffeomorphism invariant information is just the set of spins and intertwiners (up to some subtleties \cite{lqg-revs}). 
We denote this Hilbert space by ${\cal H}_{geom}$, and in the following consider the case where the underlying graph is a cubic (abstract) lattice. Thus each node will be 6-valent, and we will assume that the associated non-zero spins and intertwiners form a finite set. This will aid in defining the physical Hamiltonian operator.\footnote{The choice of cubic graph represents a restriction of the quantum theory, since in principle all graphs should be included; this choice allows a systematic construction of the Hamiltonian operator. The solution of the diffeomorphism constraint to yield ${\cal H}_{geom}$ for a cubic lattice proceeds as in \cite{ashetal-hk}, with a finite set of excitations on the lattice.} \subsection{Matter Hilbert space} The geometry Hilbert space ${\cal H}_{geom}$ described above is the physical Hilbert space of the model without matter. Its extension to include matter is accomplished by associating an additional quantum number with the vertices of graphs. Given a graph $\Gamma$ with vertices $v_1\cdots v_M$, a basis for the matter Hilbert space ${\cal H}_{matter}$ is $|k_1, \cdots k_M\rangle$, where $k_i \in \mathbb{R}$ are the quantum numbers associated with matter. The inner product is \begin{equation} \langle k'_1, \cdots,k'_M |k_1, \cdots,k_M\rangle=\delta_{k'_1,k_1}\cdots \delta_{k'_M,k_M}. \end{equation} The classical scalar field variables $V_k(\phi(x)),P_f$ defined above have the quantum realizations \begin{eqnarray} \hat{V}_{k}(v_l)|k_1,\cdots,k_M\rangle &=&|k_1,\cdots,k_l+k,\cdots,k_M\rangle,\\ \hat{P}_f|k_1,\cdots,k_M\rangle &=& \sum_{i=1}^M k_i f(v_i)|k_1,\cdots,k_M\rangle, \end{eqnarray} where $v_i$ is a vertex. It is readily verified that these definitions provide a representation of the classical Poisson algebra.
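The action of $\hat{V}_k$ and $\hat{P}_f$ can be mimicked directly in code, and the representation property checked on the commutator $[\hat{P}_f,\hat{V}_k(v_l)] = k f(v_l)\,\hat{V}_k(v_l)$, the quantum counterpart of the classical bracket $\{V_k, P_f\} = ikf V_k$. The snippet below is an illustrative sketch (the vertex labels and smearing function are arbitrary choices, not from the paper).

```python
def V_hat(k, l):
    """Point holonomy V_k(v_l): shifts the matter label at vertex v_l by k."""
    def op(state):
        coeff, labels = state
        new = list(labels)
        new[l] += k
        return (coeff, tuple(new))
    return op

def P_hat(f):
    """Smeared momentum P_f: diagonal with eigenvalue sum_i k_i f(v_i)."""
    def op(state):
        coeff, labels = state
        eig = sum(ki * f(i) for i, ki in enumerate(labels))
        return (coeff * eig, labels)
    return op

f = lambda i: 0.5 * (i + 1)        # hypothetical smearing values f(v_i)
k, l = 2.0, 1                      # shift by k at vertex v_l (0-indexed)
state = (1.0, (0.5, -1.0, 3.0))    # basis ket |k_1, k_2, k_3> with unit coefficient

V, P = V_hat(k, l), P_hat(f)
lhs_c, lhs_s = P(V(state))         # P_f V_k |state>
rhs_c, rhs_s = V(P(state))         # V_k P_f |state>
comm_coeff = lhs_c - rhs_c         # coefficient of [P_f, V_k]|state>: k * f(v_l)
```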
\subsection{Physical Hilbert space and Hamiltonian} The physical Hilbert space of our model is the tensor product ${\cal H}_{geom} \otimes {\cal H}_{matter}$, with basis \begin{equation} |\Gamma;{\bf j};{\bf I};{\bf k}\rangle =|\Gamma; j_1, \cdots, j_N; I_1, \cdots, I_M; k_1, \cdots,k_M\rangle. \end{equation} As mentioned above, we assume that the geometric and matter excitations are on an infinite cubic graph. Its regularity provides a systematic way to construct the Hamiltonian operator, to which we now turn. The classical expression for the Hamiltonian (\ref{cham}) contains geometric terms that appear in the Hamiltonian constraint of LQG. The operator realizations of these are well studied in the literature \cite{thiemann-qsd5}. For example, the $\tilde{e}$ term in the Hamiltonian is realized using the LQG volume operator, and its inverse is realized as a commutator of the square root of the volume and holonomy operators, a construction well known in LQG. Turning to the matter operators, the $P_\phi^2$ factor is diagonal in the basis we are using. It can be localized by writing the integral for $P_f$ as a sum over vertices of the graph, taking $f$ to be unity, i.e. \begin{equation} \int d^3x \frac{P^2_\phi}{\tilde{e}} \longrightarrow \sum_{i} {\widehat{\frac{1}{\tilde{e}}}}_{v_i} P^2(v_i). \end{equation} The factors of $\partial_a\phi$ may be realized by using a ``finite difference'' approach. We first define a local field operator as \begin{equation} \Phi_k(v_i) := \frac{1}{2ik}\ \left( V_k(v_i) - V_{-k}(v_i)\right). \end{equation} Using this, one can define the operator corresponding to the matter gradient $e^a\partial_a\phi$ via a finite difference scheme.
The simplest such scheme is forward Euler, where for a single direction $z_k$ on the cubic lattice we have \begin{equation} e^z\partial_z \phi(v_i) \longrightarrow \hat{F}_{z_k}(\Phi_k(v_{i+z_k}) - \Phi_k(v_i)), \end{equation} where $\hat{F}_{z_k}$ is the flux operator associated with the edge $z_k$ that connects the adjacent vertices $v_{i+z_k}$ and $v_i$. It is evident that there are other ways to write this operator; our purpose is to point out that the Hamiltonian can be defined using the basic operators. \section{Summary} We have developed a new type of geometric theory defined on a symplectic manifold that is topologically $\mathbb{R}^4$. The theory has a ``built-in'' time that does not arise via a gauge fixing as in the aforementioned approaches. Its canonical decomposition reveals that there is a true Hamiltonian together with spatial diffeomorphism and Gauss constraints, which generate the only gauge symmetry. The theory can be coupled to matter in a natural way. The connection $A_a^i$ defines an extrinsic curvature via the Ashtekar-Barbero relation $A_a^i = \Gamma_a^i(e) + K_a^i$. From this we note the theory may be interpreted as giving a dynamical 4-geometry, even though the 3-geometry given by $e^a_i$ does not evolve. Quantization of the theory can be carried out using the methods of LQG. The model therefore provides an example of a perturbatively non-renormalizable geometric theory that exists non-perturbatively at the quantum level. \section{Acknowledgements} This work was supported by the Natural Sciences and Engineering Research Council of Canada. \section*{References}
\section{Introduction} \label{sec-intro} Ultraluminous X-ray sources (ULXs) are off-nuclear, extragalactic X-ray sources with isotropic luminosities that exceed the Eddington limit for a stellar-mass black hole (BH) ($M_{\rm BH}{{\lesssim}}20\,M_{\odot}$: see \citealt{2011NewAR..55..166F} and references therein, and also \citealt{2016AN....337..349B} and \citealt{2017ARA&A..55..303K} for up-to-date and comprehensive reviews). It was initially suggested that ULXs were rare instances of intermediate-mass BHs accreting at sub-Eddington rates \citep{1999ApJ...519...89C,2000ApJ...535..632M,2001MNRAS.321L..29K,2003ApJ...585L..37M}; essentially a scaled-up version of standard galactic BH X-ray binaries (BH-XRBs). However, it was quickly realised that a considerable fraction (if not all) of the ULX population can be powered by stellar-mass BHs accreting at super-Eddington rates \citep[e.g.][]{2003ApJ...596L.171G,2004NuPhS.132..369G,2004MNRAS.349.1193R,2007MNRAS.377.1187P,2009MNRAS.393L..41K}. Furthermore, the astounding discovery of a pulsating ULX (PULX: \citealt{2014Natur.514..202B}) demonstrated that ULXs can be powered by neutron stars (NSs). After the initial discovery by \citeauthor{2014Natur.514..202B}, two more PULXs have been detected \citep{2017Sci...355..817I,2016ApJ...831L..14F}. The discovery of NS-ULXs provided further support to the scenario of super-Eddington accretion onto a stellar-mass compact object as the power source of ULXs, but also posed the crucial question of the potential prevalence of NSs as the engines of ULXs. Indeed, it has been known for some time that most of the ULXs do not transition through the phenomenological states of standard BH-XRBs, that is,~spectral transitions between hard and soft states and the appearance and quenching of relativistic jets \citep[for a review of spectral and temporal properties of NS- and BH-XRBs see e.g.][]{2001AdSpR..28..307B,2006ARA&A..44...49R,2007A&ARv..15....1D,2010LNP...794...17G}. 
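For orientation, the Eddington limit quoted above can be evaluated from $L_{\rm Edd}=4\pi G M m_p c/\sigma_{\rm T}$. The snippet below (an illustrative calculation with hard-coded cgs constants, not from the paper) gives $L_{\rm Edd}\approx1.3\times10^{38}\,(M/M_\odot)$\,erg/s, so a $20\,M_\odot$ black hole is capped near $2.5\times10^{39}$\,erg/s and the brighter ULXs are super-Eddington for any stellar-mass BH.

```python
import math

# physical constants in cgs units (values rounded)
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
M_P = 1.6726e-24      # proton mass [g]
C = 2.998e10          # speed of light [cm s^-1]
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]

def l_edd(m_solar):
    """Eddington luminosity in erg/s for an accretor mass in solar units."""
    return 4.0 * math.pi * G * m_solar * M_SUN * M_P * C / SIGMA_T

# l_edd(1) ~ 1.3e38 erg/s; l_edd(20) ~ 2.5e39 erg/s,
# well below the ~1e40 erg/s reached by many ULXs.
```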
While some ULXs exhibit significant luminosity and spectral variations \citep[e.g.][]{2004ApJS..154..519S,2013MNRAS.435.1758S,2016ApJ...831..117F}, they do not transition between the markedly different and characteristic ``hard'' and ``soft'' spectral states of nominal, sub-Eddington BH-XRBs \cite[e.g.][]{2008ApJ...687..471B,2010ApJ...724L.148G,2013MNRAS.435.1758S}. Interestingly, two hyperluminous X-ray sources (HLXs) -- which have luminosities exceeding ${\sim}$\oergs{41} and are strong IMBH candidates -- seem to follow the transition pattern of stellar-mass BH-XRBs (ESO~243-49~HLX-1: \citealt{2009ApJ...705L.109G}; \citealt{2011ApJ...743....6S}; \citealt{2012Sci...337..554W}, or M82~X-1: \citealt{2010ApJ...712L.169F}). In this work we only consider ULXs with luminosities ${\lesssim}10^{41}$\,erg/s. In addition to the lack of state transitions, the spectra of numerous ULXs are significantly different from the typical spectra of BH-XRBs. More specifically, the majority of ULX spectra feature a notable spectral curvature above ${{\sim}}6\,$keV \cite[e.g.][]{2006MNRAS.368..397S,2009MNRAS.397.1836G,2013MNRAS.435.1758S}. It has been proposed that the unusual spectra of ULXs correspond to a novel state of super-critical accretion, dubbed the {\it ultraluminous state} \citep{2009MNRAS.397.1836G}. In subsequent works it was posited that ULX spectra can be empirically classified into three classes based on their spectral shape: the singly peaked {\it broadened disc} (BD) class, and the two-component hard ultraluminous (HUL) and soft ultraluminous (SUL) states \citep{2013MNRAS.435.1758S}. Recognising the physical mechanisms underlying the observed spectral characteristics of ULXs is a crucial step towards understanding accretion at super-Eddington states, but also towards determining the nature of their accretor.
In the months leading to this publication, an increasing number of compelling theoretical considerations -- pointing to NSs as plausible engines behind ULX emission -- have been put forward by numerous authors \citep[e.g.][]{2016MNRAS.458L..10K,2017MNRAS.468L..59K,2017MNRAS.tmp..143M}. Motivated by these findings and the apparent spectral and temporal similarities between ULXs and NS-XRBs, we decided to revisit the spectra of known ULXs, in search of indications that may favour this newly emerging trend. More specifically, we ask whether the curvature of ULX spectra is due to hot thermal emission, rather than a hard power law with a cutoff at an improbably low energy, and whether this can be physically interpreted in terms of emission from a super-critically accreting NS. To investigate this hypothesis, we have selected eighteen well-known ULXs that have been studied by multiple authors in the past and were also included in the samples used in the seminal works of \cite{2006MNRAS.368..397S} and \cite{2013MNRAS.435.1758S}, in which the spectral shape of ULXs was standardised and classified observationally. Below (Section~\ref{sec:origin}) we briefly present the different interpretations of the spectral curvature in ULXs, the latest theoretical arguments for the nature of their central engine, and the motivation behind our choice to revisit the spectra of known ULXs. In Section~\ref{sec-observations} we present the details of our data analysis, and in Section~\ref{discussion} we discuss our findings and their implications with regard to the nature of the accretor in our sample and in ULXs in general. \section{Origin of the curvature in the spectra of ULXs} \label{sec:origin} \subsection{Optically thick, ``warm'' corona} Nominal BH-XRBs exhibit a power-law-shaped tail towards high energies during all spectral states.
More specifically, during episodes of low-luminosity, advection-dominated accretion (also known as the {\it hard state}) the spectrum is dominated by a hard power law (spectral index of up to ${\sim}$1.5) with an exponential cutoff at ${\sim}$100-200\,keV, while in the high-luminosity {\it soft state}, the power-law component becomes less prominent and softer (spectral index exceeding ${\sim}$2), but without an observable spectral cutoff \citep[e.g.][]{2001AdSpR..28..307B,2003MNRAS.344...60G,2005Ap&SS.300..177N,2006ARA&A..44...49R, 2007A&ARv..15....1D,2010LNP...794...17G}. As discussed above, the spectra of many ULXs (including all sources studied in this work) feature a spectral curvature and an abrupt drop in spectral counts at considerably lower energies than standard BH-XRBs. The spectral roll-over in ULXs was detected in {\it XMM-Newton}\xspace data of numerous sources \citep[e.g.][]{2006MNRAS.368..397S,2013MNRAS.435.1758S} and was subsequently confirmed with the {\it NuSTAR}\xspace telescope \cite[e.g.][]{2013ApJ...778..163B,2014ApJ...793...21W,2015ApJ...806...65W,2017ApJ...834...77F}. It is often observed at energies as low as ${\sim}$3\,keV. In the past, many authors have modelled the unusual low-energy curvature of ULX spectra (including the ones revisited here) using a power-law spectrum with a low-energy cutoff and a relatively large e-folding energy \citep[e.g.][]{2006MNRAS.368..397S,2007Ap..SS.311..203R,2009MNRAS.397.1836G,2013MNRAS.435.1758S}. The uncommonly low cutoff energy is often linked to the presence of a corona of hot, thermal electrons with an unusually high Thomson scattering optical depth.
Namely, multiple authors have considered that the shape of the high-energy part of the spectrum is the result of thermal Comptonisation of soft ($h\nu<0.5$\,keV) photons by a corona of thermal electrons with an optical depth that often exceeds $\tau_{\rm T}\approx20$ \citep[e.g.][]{2006MNRAS.368..397S,2007Ap..SS.311..203R,2009MNRAS.397.1836G,2012MNRAS.420.1107P,2013MNRAS.435.1758S,2014MNRAS.439.3461P}. While this configuration successfully reproduces the observed spectral shapes, its feasibility under realistic circumstances in the vicinity of critically accreting X-ray binaries may be problematic. More specifically, the high scattering depth combined with the increased photon density will pose a significant burden on coronal thermalisation. This can be illustrated with the following simplified example, where we consider cooling due to inverse Compton (IC) scatterings, and Coulomb collisions as the sole thermalisation mechanism. The cooling rate of thermal electrons due to multiple IC scatterings by photons with $h\nu\ll kT_{e}$ depends strongly on the scattering optical depth of the corona; the cooling timescale for large optical depth is \citep[e.g.][]{1979rpa..book.....R} \begin{equation} t_{\rm cool}\approx \frac{m_{\rm e}c^{2}(R/c)}{4h\nu\,{\tau_{\rm T}}^{2}}, \label{eq:tcool} \end{equation} where $m_{\rm e}$ is the electron mass and $R/c$ the light-crossing time of the source, with the characteristic size $R$ inferred from variability considerations. It is obvious from Eq.~\ref{eq:tcool} alone that for the values of $\tau_{\rm T}$ reported in these works, Compton cooling will be very rapid, that is,~$t_{\rm cool}\ll{R/c}$. Below we illustrate that the cooling timescale will be too short to allow for electron thermalisation. The thermalisation will occur primarily through energy exchange between high-energy electrons and the thermal background of electrons and protons in the coronal plasma.
For a high scattering optical depth (i.e.~${\tau}_{\rm T}>5$), one can plausibly assume that the main mechanism for the energy exchange will be Coulomb interactions between electrons and electrons, and electrons and protons, ignoring, for example, collective interactions between particles \citep[e.g.][]{1988ApJ...332..872B}. If the relativistic electrons exchange energy more rapidly than they cool due to multiple scatterings, then the plasma will thermalise efficiently. The timescale of the Coulomb energy exchange will be \begin{equation} t_{\rm exch}{{\sim}}\frac{E}{\left |dE/dt \right |}, \label{eq:tcul} \end{equation} where $E$ is the electron energy and $dE/dt$ is the Coulomb cooling rate \citep{1975ApJ...196..689G,1979PhRvA..20.2120F,1990MNRAS.245..453C}. For relativistic electrons the Coulomb rate can be rewritten as \citep[see also][]{1999ASPC..161..375C} \begin{equation} \frac{dE}{dt}{{\sim}}-\tau_{\rm T}\ln{ \Lambda}{\left ( \frac{R}{c} \right )}^{-1} \label{eq:culrate} ,\end{equation} where $\ln{ \Lambda}$ is the usual Coulomb logarithm. From the above approximations, it becomes evident that although the Coulomb energy exchange becomes more rapid as the optical depth increases ($t_{\rm exch}{{\sim}}{\tau_{\rm T}}^{-1}$), the ${\tau_{\rm T}}^{-2}$ dependence of the Compton cooling timescale results in a corona in which energetic electrons cool down before they can thermalise. As ${\tau_{\rm T}}$ increases, a thermal corona becomes more and more difficult to sustain. The problem of coronal thermalisation is well known, and it can also become significant in the low $\tau_{\rm T}$ regime \citep[e.g.][and references therein]{1999ASPC..161..375C}, particularly for hot ($kT_{e}{{\gtrsim}}{m_{\rm e}}c^{2}$) coronas. As a result, the presence of hybrid thermal/non-thermal electron distributions \citep[e.g.][]{1999ASPC..161..375C,2008AA...491..617B,2009MNRAS.392..570M} is usually assumed.
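The competing scalings in Eqs.~\ref{eq:tcool}-\ref{eq:culrate} can be checked numerically. The following is a minimal sketch, in which the seed-photon energy, the Coulomb logarithm, and the absolute normalisation of the rates are illustrative assumptions (the order-of-magnitude expressions above drop prefactors); only the $\tau_{\rm T}$ scalings carry physical weight:

```python
# Sketch: the two timescales in units of the light-crossing time R/c.
# t_cool ~ m_e c^2 / (4 h_nu tau_T^2) * (R/c)          (Compton cooling)
# t_exch ~ (tau_T * lnLambda)^-1 * (R/c)  for E ~ m_e c^2  (Coulomb exchange)
# Absolute prefactors are order-of-magnitude only; the tau_T scalings matter.

ME_C2_KEV = 511.0  # electron rest energy [keV]

def t_cool_over_Rc(hnu_kev, tau):
    """Compton cooling timescale in units of R/c (tau_T^-2 scaling)."""
    return ME_C2_KEV / (4.0 * hnu_kev * tau**2)

def t_exch_over_Rc(tau, ln_lambda=20.0):
    """Coulomb energy-exchange timescale in units of R/c
    (relativistic electrons, E ~ m_e c^2; lnLambda ~ 20 assumed)."""
    return 1.0 / (tau * ln_lambda)

for tau in (5, 10, 20, 40):
    ratio = t_cool_over_Rc(0.5, tau) / t_exch_over_Rc(tau)
    print(f"tau_T = {tau:2d}: t_cool/t_exch (arb. normalisation) = {ratio:.3g}")
```

Since $t_{\rm cool}/t_{\rm exch}\propto{\tau_{\rm T}}^{-1}$, increasing the optical depth systematically pushes the corona towards the regime where electrons cool before they can thermalise, regardless of the dropped prefactors.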
However, the high scattering optical depths that were claimed in earlier ULX literature \citep[e.g.][]{2006MNRAS.368..397S,2007Ap..SS.311..203R,2009MNRAS.397.1836G,2013MNRAS.435.1758S} present major sustainability issues even when considering hybrid, ``warm'' coronas with lower electron temperatures. It is only under very tight restrictions that coronas with $\tau_{\rm T}>5$ can be sustained \citep[e.g.][]{2015A..A...580A..77R}. We must stress here that the issues concerning the physical plausibility of the optically thick corona model have been noted by the community since relatively early on \citep[e.g.][]{2011MNRAS.417..464M,2012MNRAS.422..990K}, with the majority of later works only using the power law with the exponential cutoff as an empirical description of the hard emission of ULXs, rather than as a physical interpretation. \subsection{Optically thick winds} The most widely accepted interpretation of the spectral curvature of ULXs invokes the presence of strong, optically thick winds. Namely, under the assumption that ULXs are accreting BHs in the super-Eddington regime, the spectral shape of the emission may be strongly influenced by the presence of massive, optically thick outflows. \cite{2003MNRAS.345..657K} and \cite{2007MNRAS.377.1187P} argued that the curvature of the spectra of (at least some) ULXs can be interpreted in terms of reprocessing of the primary emission in the optically thick wind. In this scenario the soft thermal emission is associated with the wind itself, while the hard emission is also thermal and originates in the hot, innermost parts of an accretion disk. \cite{2009MNRAS.398.1450K} followed up the predictions of \cite{2007MNRAS.377.1187P} and, by studying the spectra of eleven known ULXs, claimed that the luminosity of the soft thermal component decreased with increasing temperature (i.e.~$L_{\rm soft}{{\sim}}{T_{\rm in}}^{-3.5}$), in agreement with the prediction for emission from an optically thick wind.
However, subsequent studies \citep[e.g.][]{2013ApJ...776L..36M} found that the luminosity of the soft component correlates positively with temperature, approximately following the $L_{\rm soft}{{\sim}}{T_{\rm in}}^{4}$ relation expected for standard accretion disks. On the other hand, in a recent study of numerous long-term observations of Ho IX X-1, \cite{2016MNRAS.460.4417L} show that -- with increasing luminosity -- the source spectra evolve from a two-component spectrum to a (seemingly) single-component, thermal-like spectrum. The authors argue that the apparent heating of the soft-disk component may be model dependent, an artifact caused by neglecting to properly account for the evolving spectra. In addition to predicting the ULX spectral shape, the optically thick wind model also offers an interpretation for the short-timescale variability noted in many sources \citep[e.g.][]{2013MNRAS.435.1758S}. More specifically, \cite{2015MNRAS.447.3243M} combined the arguments of \cite{2003MNRAS.345..657K} and \cite{2007MNRAS.377.1187P} with predictions for wind inhomogeneity \citep[e.g.][]{2013PASJ...65...88T} and mass-accretion rate fluctuations \citep[e.g.][]{2013MNRAS.434.1476I} to account for the spectral and timing variability of nine ULXs. The authors made a well-founded case for super-Eddington accretion onto stellar-mass BHs as the driving force behind ULXs. In this scheme the fractional variability noted by previous authors is explained in terms of a ``clumpy'' wind partially obscuring the hard component, which therefore appears to fluctuate. In the same context the different empirical states indicated by \cite{2013MNRAS.435.1758S} are the result of different orientations between the observer and the disk/wind structure \citep[see Fig.~1 of][]{2015MNRAS.447.3243M}.
Building on these considerations, several authors \citep[e.g.][]{2014ApJ...793...21W,2015ApJ...806...65W,2016MNRAS.460.4417L} have modelled ULX spectra extracted from {\it XMM-Newton}\xspace and {\it NuSTAR}\xspace data using a dual thermal model, in which the soft thermal emission is attributed to the optically thick wind and the hotter component to emission from the inner parts of a hot accretion disk. The presence of strong outflows is also supported by the uniform optical counterparts of numerous ULXs, which indicate a hot wind origin \citep[e.g.][]{2002astro.ph..2488P,2015NatPh..11..551F}, and by the wind- or jet-blown radio ``bubbles'' around some ULXs \citep[e.g.][]{2003RMxAC..15..197P,2006MNRAS.368.1527S,2012ApJ...749...17C}. The strongest indication of an outflow lies in the discovery of soft X-ray spectral residuals near ${\sim}1$\,keV \citep[e.g.][]{2014MNRAS.438L..51M,2015MNRAS.454.3134M,2016Natur.533...64P,2017MNRAS.468.2865P}, which have been interpreted as a direct signature of its presence. While broad emission- and absorption-like residuals near the ${\sim}1$\,keV mark have been observed in the spectra of numerous NS- and BH-XRBs during different states and at different mass accretion rates (e.g. \citealt{2003A&A...407.1079B}; \citealt{2006A&A...445..179D}; \citealt{2010A&A...522A..96N}; \citealt{2014MNRAS.437..316K}), the absorption lines detected in the spectra of NGC~1313 X-1 and NGC~5408 X-1 \citep{2016Natur.533...64P}, and more recently in NGC~55 ULX \citep{2017MNRAS.468.2865P}, appear strongly blue-shifted, indicating outflows with velocities reaching 0.2\,c. \subsection{The case of accreting neutron stars} The recent discoveries of the three PULXs have established the fact that ULXs can be powered by accretion onto NSs.
This realisation is perhaps not surprising, considering that a mechanism that allows accretion at super-Eddington rates onto highly magnetised (B${{\gtrsim}}10^{12}$\,G) NSs has been put forward since the seventies \citep{1973A&A....25..233G,1975A&A....42..311B,1976MNRAS.175..395B}. However, these works were not aimed at discussing super-Eddington accretion in the context of ULXs. The authors were attempting to resolve the complications arising from the fact that, when material is accreted onto a very small area on the surface of the NS, the Eddington limit is considerably lower than the ${\sim}1.8\times10^{38}$\,erg/s that corresponds to isotropic accretion onto a NS. Therefore, persistent X-ray pulsars with luminosities exceeding a few $10^{37}$\,erg/s were in fact breaking the (local) Eddington limit. More specifically, in high-B accreting NSs, the accretion disk is interrupted by the magnetic field near the magnetospheric radius, at which point the accreted material is guided by the magnetic field lines onto a small area centred around the NS magnetic poles \citep[e.g.][]{1972A&A....21....1P,2012MNRAS.421...63R}. The resulting formation is known as an accretion column. Due to the high anisotropy of the photon--electron scattering cross-section in the presence of a strong magnetic field \citep{1971PhRvD...3.2303C,1974ApJ...190..141L}, the emission from the accretion column is concentrated in a narrow (``pencil'') beam, which is directed parallel to the magnetic field lines (and hence the magnetic field axis, \citealt{1975A&A....42..311B}). However, at high mass-accretion rates a radiation-dominated shock is formed a few km above the surface of the NS.
As the accretion rate exceeds a critical value \citep[corresponding to a critical luminosity of ${\sim}10^{37}$\,erg/s; e.g.][]{1976MNRAS.175..395B,2015MNRAS.447.1847M}, the accretion funnel is suffused with high-density plasma which is gradually sinking in the gravitational field of the NS \citep{1976MNRAS.175..395B,1981A&A....93..255W}. As a result, the accretion funnel, in the direction parallel to the magnetic field axis, becomes optically thick and the emerging X-ray photons mostly escape from its -- optically thin -- sides, in a ``fan-beam'' pattern \citep[see e.g. Fig.~1 of][]{2007A&A...472..353S}. In recent refinements of this mechanism, it has been demonstrated that, depending on the magnetic field strength and the pulsar spin, it can facilitate luminosities of the order of $10^{40}$\,erg/s \citep[][]{2015MNRAS.454.2539M}. Observations of multiple X-ray pulsars have yielded an empirical description of the primary emission of the accretion column as a very hard power law (spectral index ${\lesssim}1.8$) with a low-energy (${\lesssim}10$\,keV) cutoff \citep[e.g.][]{2012MmSAI..83..230C}. While a general, self-consistent description of the spectral shape of the accretion column emission has not yet been achieved, several authors have attempted to reproduce it \citep[e.g.][]{1981ApJ...251..288N,1985ApJ...299..138M,1991ApJ...367..575B,2004ApJ...614..881H,2007ApJ...654..435B}. More specifically, \cite{2007ApJ...654..435B} have reproduced the spectrum assuming bulk and thermal Comptonisation of seed Bremsstrahlung, black body, and cyclotron photons. While, in the high-field regime, super-Eddington accretion can be efficiently sustained, lowly magnetised NSs can only reach moderately super-Eddington luminosities, and only in the soft state of the so-called Z-sources \citep[e.g.][]{2002ApJ...568L..35M,2007A&ARv..15....1D,2009ApJ...696.1257L}. When the accretion rate reaches and exceeds the Eddington limit, strong outflows are expected to inhibit higher accretion rates.
Nevertheless, in a recent publication, \cite{2016MNRAS.458L..10K} argue that super-Eddington accretion onto lowly magnetised NSs can be maintained -- along with powerful outflows -- in a similar fashion to super-Eddington accretion onto BHs \citep{2001ApJ...552L.109K}. Therefore, a considerable fraction of (non-pulsating) ULXs may be the result of beamed emission from NSs with relatively weak magnetic fields (B$<10^{11}$\,G), accreting at high mass-transfer rates. Intriguingly, one of the first \citep{1984PASJ...36..741M} and most frequently observed spectral characteristics associated with the soft state of Z-sources is the presence of two thermal components \citep[e.g.][]{1988ApJ...324..363W,1989PASJ...41...97M,2001AdSpR..28..307B,2007ApJ...667.1073L,2013MNRAS.434.2355R}. The ``cool'' thermal component most likely originates in a thin \citeauthor{1973A&A....24..337S} disk, while the additional ``hot'' thermal component corresponds to emission produced in hot, optically thick plasma on the surface of the NS, known as a {\it boundary layer} \citep{1986SvAL...12..117S,2000AstL...26..699S}. Therefore, the presence of a dual thermal spectrum in XRBs strongly suggests emission from a solid surface, indicating a lowly magnetised NS. Nevertheless, in a new publication by \cite{2017MNRAS.tmp..143M} it is argued that, in accreting high-B NSs, the normally optically thin \citep[e.g.][]{1980A&A....87..330B,1992ApJ...396..147N,1996PASJ...48..425E} material trapped in the Alfv{\'e}n surface becomes optically thick as the luminosity exceeds ${{\sim}}5\times10^{39}$\,erg/s. The emission of the resulting structure will have a quasi-thermal spectrum at temperatures exceeding 1\,keV. Combined with the soft thermal emission from a truncated accretion disk, the spectra of highly magnetised NSs may also feature the same dual thermal shape as high-state Z-sources \citep[][see more details in Sections~\ref{sec-observations} and~\ref{discussion}]{2017MNRAS.tmp..143M}.
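The characteristic luminosities quoted in this subsection can be cross-checked against the standard expression $L_{\rm Edd}=4\pi G M m_{\rm p}c/\sigma_{\rm T}$ for fully ionised hydrogen; a short stand-alone sketch with hard-coded CGS constants:

```python
import math

# Isotropic Eddington luminosity, L_Edd = 4*pi*G*M*m_p*c / sigma_T,
# for fully ionised hydrogen (CGS units throughout).
G       = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
M_SUN   = 1.989e33    # solar mass [g]
M_P     = 1.673e-24   # proton mass [g]
C       = 2.998e10    # speed of light [cm s^-1]
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]

def l_edd(mass_msun):
    """Eddington luminosity in erg/s for a mass given in solar units."""
    return 4.0 * math.pi * G * mass_msun * M_SUN * M_P * C / SIGMA_T

print(f"L_Edd(1.4 Msun) = {l_edd(1.4):.2e} erg/s")  # ~1.8e38 erg/s
```

This recovers the ${\sim}1.8\times10^{38}$\,erg/s quoted above for isotropic accretion onto a $1.4\,M_{\odot}$ NS.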
Based on these considerations, it becomes apparent that a reanalysis of ULX spectra is warranted. To this end, we explore the relevance of models usually implemented in the modelling of emission from NS-XRBs, in the context of ULXs. More importantly, we investigate whether or not our best-fit models yield parameter values that are physically meaningful and in accordance with the predictions for the emission from ULXs. \section{Spectral extraction and data analysis} \label{sec-observations} We have analysed archival {\it XMM-Newton}\xspace observations of the eighteen ULXs presented in Table~\ref{tab:log}. We have selected sources that have been studied extensively in the past and are confirmed ULXs. Furthermore, specific datasets were selected to have a sufficient number of counts to allow robust discrimination between the different models used to describe their spectral continuum. Apart from these conditions, sources were chosen randomly; therefore, the ULX sample analysed in this work is not complete. Nevertheless, the purpose of this work is not to revisit the entire ULX catalogue, but to demonstrate that a significant fraction of ULXs follow a specific pattern (presented below). For this purpose, our source sample is sufficiently extensive. For six of these sources we also analysed archival {\it NuSTAR}\xspace data. \subsection{{\it XMM-Newton}\xspace spectral extraction} For the {\it XMM-Newton}\xspace data, we only considered the EPIC-pn detector, which has the largest effective area of the three EPIC detectors in the full 0.3-10\,keV bandpass and registered more than ${\sim}$1000 photons in each of the observations considered, thus providing sufficient statistics to robustly discriminate between different spectral models while ensuring simplicity and self-consistency in our analysis. Therefore, the following description of the data analysis refers only to this instrument.
The data were handled using the {\it XMM-Newton}\xspace data analysis software SAS version 15.0.0 and the calibration files released\footnote{XMM-Newton CCF Release Note: XMM-CCF-REL-332} on January 22, 2016. In accordance with the standard procedure, we filtered all observations for high background-flaring activity, by extracting high-energy light curves (10$<$E$<$12\,keV) with a 100\,s bin size. By placing appropriate threshold count rates for the high-energy photons, we filtered out time intervals that were affected by high particle background. During all observations pn was operated in Imaging Mode. In the majority of our sources, spectra were extracted from a circular region with a radius $>$35\arcsec\ centred at the core of the point spread function (psf) of each source. We thus ensured the maximum encircled energy fraction\footnote{See {\it XMM-Newton}\xspace Users Handbook \S3.2.1.1 \\ http://xmm-tools.cosmos.esa.int/external/xmm\_user\_support/ \\ documentation/uhb/onaxisxraypsf.html} within the extraction region. This was not possible in the cases of NGC~4861 ULX1, M81~X-6, and NGC~253 ULX2, where we used spectral extraction apertures of 18\arcsec, 13.8\arcsec, and 12.5\arcsec, respectively, in order to avoid contamination by adjacent sources or the proximity of the source to the edge of the chip\footnote{When part of the psf lies in a chip gap, effective exposure and encircled energy fraction may be affected}. The extraction and filtering process followed the standard instructions provided by the {\it XMM-Newton}\xspace Science Operations Centre (SOC\footnote{http://www.cosmos.esa.int/web/xmm-newton/sas-threads}). More specifically, spectral extraction was done with the SAS task \texttt{evselect} using the standard filtering flags (\texttt{\#XMMEA\_EP \&\& PATTERN<=4} for pn), and the SAS tasks \texttt{rmfgen} and \texttt{arfgen} were used to create the redistribution matrix and ancillary file, respectively.
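The background-flare screening described above reduces, in essence, to thresholding the 100\,s-binned 10-12\,keV light curve and retaining only the quiescent intervals. A schematic stand-alone sketch (the light curve and threshold below are synthetic illustrations; the actual screening was performed with the SAS tasks):

```python
# Schematic good-time-interval (GTI) selection: given a 10-12 keV light
# curve binned at 100 s, flag bins whose rate exceeds a threshold and
# keep the complement as good time. Synthetic data for illustration only.

BIN_SIZE = 100.0          # s, as used for the EPIC-pn screening

def good_time_intervals(rates, threshold):
    """Return (start, stop) times of contiguous runs of quiescent bins."""
    gtis, start = [], None
    for i, rate in enumerate(rates):
        quiet = rate <= threshold
        if quiet and start is None:
            start = i * BIN_SIZE
        elif not quiet and start is not None:
            gtis.append((start, i * BIN_SIZE))
            start = None
    if start is not None:
        gtis.append((start, len(rates) * BIN_SIZE))
    return gtis

# Toy light curve: quiescent ~0.2 ct/s with a flare in the middle.
lc = [0.2, 0.3, 0.25, 2.5, 3.1, 0.9, 0.2, 0.15]
print(good_time_intervals(lc, threshold=0.4))
# -> [(0.0, 300.0), (600.0, 800.0)]
```

The spectra are then accumulated only within the returned intervals, which is what the standard SAS screening expressions achieve.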
All spectra were regrouped to have at least 25 counts per bin and analysis was performed using the {\tt xspec} spectral fitting package, version 12.9.0 \citep{1996ASPC..101...17A}. \begin{table*}[!htbp] \caption{Observation Log.} \begin{center} \scalebox{0.8}{ \begin{tabular}{lcccccccc} \hline\hline\noalign{\smallskip} \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{Distance$^{a}$} & \multicolumn{1}{c}{ObsID} & \multicolumn{1}{c}{Date} & \multicolumn{1}{c}{Duration$^{b}$} & \multicolumn{1}{c}{Rate$^{c}$} & \multicolumn{1}{c}{Obs. Mode} & \multicolumn{1}{c}{Filter} & \multicolumn{1}{c}{Position}\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Mpc} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{s} & \multicolumn{1}{c}{${\rm 10^{-1}\,{ct}\,{s}^{-1}}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} {\it XMM-Newton}\xspace & & & & & & & & \\ \noalign{\smallskip}\hline\noalign{\smallskip} Ho II X-1 &3.27$\pm 0.60$ & 0200470101 & 2004-04-15 & 44130 & 6.97$\pm 0.04$ & FF & Medium & On-Axis \\ Ho IX X-1 &3.77$\pm 0.80$ & 0200980101 & 2004-09-26 & 75900 & 14.5$\pm 0.05$ & LW & Thin & On-Axis \\ IC~342 X-1 &2.73$\pm 0.70$ & 0206890201 & 2004-08-17 & 16970 & 3.96$\pm 0.05$ & EFF & Medium & On-Axis \\ M33 X-8 &0.91$\pm 0.50$ & 0102640101 & 2000-08-04 & 7144 & 55.1$\pm 0.28$ & FF & Medium & On-Axis \\ M81 X-6 &3.61$\pm 0.50$ & 0111800101 & 2001-04-22 & 79660 & 4.38$\pm 0.02$ & SW & Medium & On-Axis \\ M83 ULX &4.66$\pm 0.70$ & 0110910201 & 2003-01-27 & 19760 & 1.20$\pm 0.03$ & EFF & Thin & ${\sim} 6.5\arcmin$ \\ NGC~55 ULX &1.60$\pm 0.20$ & 0028740201 & 2001-11-14 & 34442 & 1.25$\pm 0.07$ & FF & Thin & On-Axis \\ NGC~253 ULX2 &3.56$\pm 0.80$ & 0152020101 & 2003-06-19 & 65010 & 2.57$\pm 0.02$ & FF & Thin & On-Axis \\ NGC~253 XMM2 &3.56$\pm 0.80$ & 0152020101 & 2003-06-19 & 64850 & 2.41$\pm 0.02$ & FF 
& Thin & ${\sim} 4.1\arcmin$ \\ NGC~1313 X-1 &4.25$\pm 0.80$ & 0106860101 & 2000-10-17 & 19880 & 6.89$\pm 0.01$ & FF & Medium & ${\sim} 5.4\arcmin$ \\ NGC~1313 X-2 &4.25$\pm 0.80$ & 0405090101 & 2006-10-15 & 80470 & 6.19$\pm 0.03$ & FF & Medium & ${\sim} 4.0\arcmin$ \\ NGC~4190 ULX1 &2.83$\pm 0.10$ & 0654650301 & 2010-11-25 & 11070 & 12.6$\pm 0.11$ & FF & Medium & On-Axis \\ NGC~4559 X-1 &7.31$\pm 0.20$ & 0152170501 & 2003-05-27 & 33950 & 2.72$\pm 0.03$ & FF & Medium & On-Axis \\ NGC~4736 ULX1 &4.59$\pm 0.80$ & 0404980101 & 2006-11-27 & 35040 & 1.89$\pm 0.02$ & FF & Thin & On-Axis \\ NGC~4861 ULX &7.00$\pm 1.00$& 0141150101 & 2003-06-14 & 13180 & 0.73$\pm 0.02$ & FF & Medium & On-Axis \\ NGC~5204 X-1 &4.76$\pm 0.90$ & 0405690101 & 2006-11-15 & 7820 & 9.67$\pm 0.11$ & FF & Medium & On-Axis \\ NGC~5907 ULX &17.1$\pm 0.90$& 0729561301 & 2014-07-09 & 37480 & 3.30$\pm 0.03$ & FF & Thin & On-Axis \\ NGC~7793 P13 &3.58$\pm 0.70$& 0748390901 & 2014-12-10 & 41970 & 4.85$\pm 0.04$ & FF & Thin & ${\sim} 4.0\arcmin$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} {\it NuSTAR} & & & & & & & & \\ \noalign{\smallskip}\hline\noalign{\smallskip} Ho II X-1 &3.27$\pm 0.60$ & 30001031005 & 2013-09-17 & 111104 & 0.38$\pm 0.01$ & -- & None & ${\sim} 3.1\arcmin$ \\ Ho IX X-1 &3.77$\pm 0.80$ & 30002033003 & 2012-10-26 & 88030 & 1.29$\pm 0.13$ & -- & None & ${\sim} 1.6\arcmin$ \\ IC~342 X-1 &2.73$\pm 0.70$ & 90201039002 & 2016-10-16 & 49173 & 1.26$\pm 0.02$ & -- & None & On-Axis \\ NGC~1313 X-1 &4.25$\pm 0.80$ & 30002035002 & 2012-12-16 & 100864 & 6.19$\pm 0.03$ & -- & None & ${\sim} 2.8\arcmin$ \\ NGC~1313 X-2 &4.25$\pm 0.80$ & 30002035002 & 2012-12-16 & 100864 & 6.19$\pm 0.03$ & -- & None & On-Axis \\ NGC~5907 ULX &17.1$\pm 0.90$& 80001042002 & 2014-07-09 & 57113 & 0.26$\pm 0.01$ & -- & None & On-Axis \\ \noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular} } \end{center} \tablefoot{ \tablefoottext{a}{All distance estimations are from \cite{2013AJ....146...86T}.} \tablefoottext{b}{Of the filtered pn observation.} \tablefoottext{c}{Corresponding to the pn spectrum of the source.}} \label{tab:log} \end{table*} \subsection{{{\it NuSTAR}\xspace} spectral extraction} The {\it NuSTAR}\xspace data were processed using version 1.6.0 of the {\it NuSTAR}\xspace data analysis system ({\it NuSTAR}\xspace DAS). We downloaded all public {\it NuSTAR}\xspace datasets using the ``heasarc\_pipeline'' scripts (Multimission Archive Team@OAC, in prep.); these have already been processed to obtain L1 products. We then ran {\tt nuproducts} using a 50\arcsec\ extraction region around the main source and a 50-80\arcsec\ extraction region for the background, placed on the same detector as the source whenever possible, in order to avoid contributions from the point-spread function (PSF) wings. We applied standard PSF, alignment, and vignetting corrections. Spectra were rebinned to have at least 30 counts per bin, to ensure the applicability of the $\chi^2$ statistic. All sources in our sample dominate the background up to 20-30\,keV. The models we use are relatively simple, and the physical interpretation does not change considerably for a 10-20\% change in the best-fit parameters; an extremely precise modelling of the background is therefore not required. \subsection{Spectral analysis} \subsubsection{{\it XMM-Newton}\xspace} The spectral continuum was modelled twice: first using a combination of a multicolour disk black body (MCD) and a black body component ({\tt diskbb+bbody}), with the black body ({\tt bbody}) acting as the hot thermal component, and second using two MCD components ({\tt diskbb+diskbb}).
The first model was used because it is the most widely used model for describing the spectra of NS-XRBs in the high-accretion, ``soft'' state. Our choice of the second model was based on the recent theoretical predictions by \cite{2017MNRAS.tmp..143M}, where it is argued that critically accreting NSs with a high magnetic field (i.e.~B${\gtrsim}10^{12}$\,G) can become engulfed in an optically thick toroidal envelope, which is the result of accreting matter moving along the magnetic field lines. Emission from the optically thick envelope is predicted to have a multicolour black body spectrum, with a temperature exceeding ${\sim}$1\,keV (more details in Section~\ref{discussion}). We model this hot thermal emission using the {\tt diskbb} model because it is the simplest and most reliable multi-temperature black body model in {\tt xspec}; however, we stress that we do not expect this emission to originate from a disk. Therefore, the inner disk radius corresponding to the hot {\tt diskbb} component has no physical meaning and is not tabulated. In both models the cool disk component is modelled as a geometrically thin, optically thick \citeauthor{1973A&A....24..337S} disk, which is expected to extend inwards until it reaches the surface of the NS, unless it is disrupted by strong outflows or a strong magnetic field (more details in Section~\ref{discussion}). We did not model intrinsic and/or host galaxy absorption separately from the Galactic absorption, but used one component for the total interstellar absorption, which was modelled using {\tt tbnew\_gas}, the latest improved version of the {\tt tbabs} X-ray absorption model \citep{2011_in_prep}. For the dual-MCD model, we assume that the disk becomes truncated at approximately the magnetospheric radius, at which point the material follows the magnetic field lines to form the hypothetical envelope.
Under this assumption, we also estimate the strength of the magnetic field (B), assuming that the inner radius of the ``cool'' {\tt diskbb} component coincides with the magnetospheric radius ($R_{\rm mag}$) and using the expression for $R_{\rm mag}$ given by \citeauthor{2014EPJWC..6401001L} (2014; see also Eq.~1 from \citealt{2017MNRAS.tmp..143M}). The complete {\tt xspec} model used in the spectral fits is {\tt tbnew\_gas(cflux*diskbb + cflux*(diskbb or bbody))}, where {\tt cflux} is a convolution model that is used to calculate the flux of the two thermal components. Some of the sources exhibited strong residuals in the 0.5-1.2\,keV region, commensurate with X-ray emission lines from hot, optically thin plasma. The emission features were modelled using the {\tt mekal} model, which describes the emission spectrum of a hot diffuse gas. The best-fit parameters of the continuum were not sensitive to the modelling of these features (e.g. using a Gaussian instead of {\tt mekal}); however, the features themselves are strongly required by the fit (${\Delta}{\chi}^2>15$ for two d.o.f. in all sources). More specifically, the plasma temperature was ${\sim}$0.92\,keV for Ho IX X-1, ${\sim}$1.09\,keV for M81 X-6, ${\sim}$0.95\,keV for M83 ULX, ${\sim}$0.40\,keV for NGC~4736 ULX, and ${\sim}$1.08\,keV for NGC~5204 X-1. While soft X-ray atomic features may be crucial to our understanding of the nature of ULXs \citep[namely, the presence of strong winds and the chemical composition of the accreted material, e.g.:][]{2015MNRAS.447.3243M, 2015MNRAS.454.3134M, 2016Natur.533...64P}, they are not the focus of this work; they are only briefly discussed in Section~\ref{discussion} and not studied further. Given the high (${{\gtrsim}}1$\,keV) temperatures of the hot thermal component in both double-thermal models, it is expected that electron scattering will have a significant effect on the resulting spectrum, as it becomes comparable to free-free absorption.
Therefore, the actual emission will be radiated as a ``diluted'' black body, which, when modelled using a prototypical thermal model like {\tt diskbb} or {\tt bbody}, will result in temperature and radius estimations that deviate from their ``true'' values. This issue is commonly addressed by considering a correction factor ($f_{\rm col}$\xspace) that approximately accounts for the spectral modification \citep{1986ApJ...306..170L,1986SvAL...12..383L,1995ApJ...445..780S}; this factor is often referred to as a {\it colour correction factor}, and detailed calculations, combined with multiple observations, have demonstrated that it depends weakly on the size\footnote{Or inner radius in the case of an accretion disk.} of the emitting region and the mass accretion rate \citep[e.g.][]{1995ApJ...445..780S}. Therefore, to a first approximation, it can be considered independent of these parameters, and its value is estimated between ${\sim}1.5$ and ${\sim}$2.1 \citep[e.g.][and references therein]{2005ApJ...618..832Z}. The colour correction factor affects both the temperature and the normalisation of the thermal models (i.e.~{\tt diskbb} and {\tt bbody}), with the corrected values given by \begin{equation} T_{\rm cor}=\frac{T}{f_{\rm col}} \label{eq:Tcor} ,\end{equation} and \begin{equation} R_{\rm in,cor}=R_{\rm in}\,{f_{\rm col}}^2 \label{eq:Rcor} ,\end{equation} where $T$ is the temperature of the MCD component and $R_{\rm in}$ is the inner radius. Although the spectral hardening effects are expected to be noticeable, particularly in the hot thermal component, we have decided not to include the colour correction in any of our calculations and to tabulate and plot the values of all quantities of interest as provided by our best fits. The reader is, however, advised to note that our results may vary by a factor of ${\sim}${$f_{\rm col}$\xspace}.
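The chain from fitted to physical quantities used in this work can be summarised in a short sketch: the inner radius follows from the {\tt diskbb} normalisation, $N=(R_{\rm in}/D_{10})^{2}\cos{i}$ (with $R_{\rm in}$ in km and $D_{10}$ the distance in units of 10\,kpc), Eq.~\ref{eq:Rcor} optionally applies the colour correction, and, identifying $R_{\rm in}$ with $R_{\rm mag}$, the dipole moment follows from inverting the Alfv\'en-type scaling $R_{\rm mag}=\xi\,\left(\mu^{4}/2GM\dot{M}^{2}\right)^{1/7}$ with $B{\simeq}2\mu/R_{\rm NS}^{3}$. All numerical inputs below are illustrative assumptions, not our tabulated best-fit values:

```python
import math

G, M_SUN = 6.674e-8, 1.989e33            # CGS constants

def r_in_km(norm, dist_mpc, incl_deg, f_col=1.0):
    """Inner radius from the diskbb normalisation, N = (R_in/D_10)^2 cos(i),
    with the optional colour correction R_cor = R_in * f_col^2."""
    d10 = dist_mpc * 100.0               # distance in units of 10 kpc
    r = d10 * math.sqrt(norm / math.cos(math.radians(incl_deg)))
    return r * f_col**2

def b_field_gauss(r_mag_cm, mass_msun, mdot_gs, xi=0.5, r_ns_cm=1e6):
    """Invert R_mag = xi * (mu^4 / (2 G M Mdot^2))^(1/7) for the dipole
    moment mu, then B ~ 2 mu / R_NS^3 (xi ~ 0.5 assumed)."""
    mu = (r_mag_cm / xi)**1.75 * (2.0 * G * mass_msun * M_SUN)**0.25 * mdot_gs**0.5
    return 2.0 * mu / r_ns_cm**3

# Illustrative numbers: N = 10, D = 4.25 Mpc, i = 60 deg, no colour correction.
r_in = r_in_km(10.0, 4.25, 60.0)
print(f"R_in ~ {r_in:.0f} km")
# Mdot corresponding to L ~ 1e40 erg/s accreted onto a 1.4 Msun, 10 km NS:
mdot = 1e40 * 1e6 / (G * 1.4 * M_SUN)
print(f"B ~ {b_field_gauss(r_in * 1e5, 1.4, mdot):.1e} G")
```

The sketch illustrates why a truncation radius of a few thousand km, combined with near-Eddington mass-transfer rates, maps onto a strong (high-B) dipole field under these assumptions.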
The value of the inferred inner disk radius also depends on the viewing angle ($i$) of the accretion disk (i.e.~$R_{\rm in}{{\sim}}1/\sqrt{\cos{i}}$). This dependence may become important in the estimation of ${\rm R_{in}}$ if the accretion disk is viewed at a large inclination angle (i.e.~edge-on view). Nevertheless, since we have no indications of a high viewing angle (e.g. dips\footnote{With the exception of NGC~55 ULX, which does show dips in its light curve \citep{2004MNRAS.351.1063S}. However, due to its likely supercritical accretion rates, the dips are not as constraining for its inclination as in typical XRBs.} in their light curves or spectral absorption features resulting from an edge-on view of the accretion disk atmosphere) in any of our sources, we have selected a value of $i=60\deg$ for all sources in our list. All best-fit values for the absorbed dual-MCD model, together with the estimations of the magnetic field and the classification of each source as proposed by \cite{2013MNRAS.435.1758S}, are presented in Table~\ref{tab:xmm_fit}. The values for the absorbed MCD/black body model are presented in Table~\ref{tab:bbody}. In Table~\ref{tab:bbody} we also provide estimations of the ``spherization'' radius \citep{1973A&A....24..337S} of each source. Lastly, we note that the $\chi^2$ values from the {\tt tbnew\_gas*(diskbb+bbody)} fits were similar to, albeit moderately higher than, those of the dual-MCD model, with a moderately lower temperature of the hot component ($kT_{\rm BB}$ between ${\sim}$0.9\,keV and ${\sim}$2.2\,keV). \subsubsection{{\it NuSTAR}\xspace} In the {\it NuSTAR}\xspace spectra we ignored all channels below 4\,keV, and thus did not require the addition of the cool thermal component. The primary spectral component used to model all spectra is again a multicolour disk black body ({\tt diskbb}), the ``hot'' thermal component from the {\it XMM-Newton}\xspace fits.
Furthermore, we also look for the presence of a potential hard, non-thermal tail, which is usually detected in most XRBs, even in the soft state. A simultaneous broadband fit of the combined {\it NuSTAR} plus {\it XMM-Newton}\xspace spectra is not explored in this paper. Recent, rigorous works have extensively studied the {\it XMM-Newton}\xspace (or {\it Swift}\xspace) + {\it NuSTAR}\xspace data that we revisit here (Ho~II~X-1: \citealt{2015ApJ...806...65W}; Ho~IX~X-1: \citealt{2016MNRAS.460.4417L}; IC~342~X-1: \citealt{2015ApJ...799..121R}; NGC~1313~X-1,X-2: \citealt{2013ApJ...778..163B}; NGC~5907~X-1: \citealt{2017ApJ...834...77F}) and have noted the presence of a spectral shape that can be modelled either as a hot thermal component or as a sharp cutoff with an additional, weak power-law tail. In this paper we do not seek to reproduce these analyses, but to discuss a possible novel interpretation of the spectral shape. The {\it NuSTAR} data are used to confirm (or dismiss) the presence of these components in a comprehensive and consistent study. To this end, a separate analysis is swift and effective. The {\it NuSTAR}\xspace data were modelled using a single {\tt diskbb} model and an additional power law. The hard ($>$10\,keV) power-law emission is faint, with less than 5\% of observed photons registered above 20\,keV, on average. Furthermore, the background contamination becomes predominant above 25\,keV. Therefore, the slope or even the exact shape (i.e.~the presence of an exponential cutoff) of the hard spectral tail cannot be constrained accurately. Both the thermal component and the power-law tail are required in order to achieve an acceptable fit. More specifically, fitting the {\it NuSTAR}\xspace data with only the {\tt diskbb} model results in pronounced residuals above ${\sim}15$\,keV (e.g. Figure~\ref{fig:res}) and a value for the reduced $\chi^2$ that exceeds ${\sim}$1.3 in all sources.
Similarly, fitting the {\it NuSTAR}\xspace spectra with just a power law results in residual structure characteristic of thermal emission (Figure~\ref{fig:po}) and reduced $\chi^2$ values exceeding ${\sim}$1.2. The temperature gradient of the accretion curtain will, most likely, differ from the $T{\sim}r^{-0.75}$ predicted by standard thin disk MCD models like {\tt diskbb}, and this deviation will be enhanced further by electron scatterings. To diagnose the impact of this effect on the registered spectra, we also fitted them with the {\tt xspec} model {\tt diskpbb}, in which the disk temperature is proportional to $r^{-p}$ and $p$ is a free parameter. We find that the value of $p$ is significantly smaller (on average $p{\lesssim}0.53$) than that of a standard thin accretion disk. More interestingly, we find that the {\tt diskpbb} fits did not require the addition of a power-law tail and yielded the same $\chi^2$ values as the {\tt diskbb + powerlaw} fits, albeit with the notable exception of NGC~5907, which is the only pulsating ULX in our {\it NuSTAR}\xspace sample. In principle, the {\tt diskpbb} model could also be used to model the cool thermal emission, detected in the {\it XMM-Newton}\xspace data, since the inner disk parts may also become inflated due to the high accretion rates (see discussion in Sect.~\ref{discussion}). Nevertheless, the addition of an extra free parameter in each thermal component will only add to the degeneracy between different models and will not provide any further insight into the physical parameters (i.e.~temperature and size) of the emitting regions. Therefore, the {\tt diskpbb} model is only used as a diagnostic for the geometry of the accretion curtain, and only for the {\it NuSTAR}\xspace data where its impact is much more significant; its implications are discussed further in Section~\ref{discussion}.
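The difference between the two radial temperature laws can be sketched as follows; this is an illustrative toy function, not the {\tt xspec} implementation of {\tt diskbb} or {\tt diskpbb}.

```python
# Toy sketch of the radial temperature profiles assumed by diskbb (p = 0.75)
# and diskpbb (free p), with T ~ r^-p. Names and example values are
# illustrative, not the xspec implementation.

def disk_temperature(r_over_rin, kT_in_keV, p=0.75):
    """Local disk temperature for a multicolour disk with T ~ r^-p."""
    return kT_in_keV * r_over_rin ** (-p)

# At r = 10 r_in, a flatter p ~ 0.53 profile stays considerably hotter
# than the standard thin disk:
kT_thin = disk_temperature(10.0, 3.0, p=0.75)
kT_flat = disk_temperature(10.0, 3.0, p=0.53)
```

A flatter gradient keeps more of the emitting area at high temperature, which is why the single {\tt diskpbb} component can reproduce the hard excess without an additional power-law tail.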
While we have analysed all available {\it NuSTAR} data for our sources, we only tabulate the results for those observations with the largest number of counts and with luminosities closest to those of the {\it XMM-Newton}\xspace observations. Nevertheless, all {\it NuSTAR} observations produced -- more or less -- similar results to the ones presented here (see the observation log in Table~\ref{tab:log} for the {\it NuSTAR} observations analysed in this work). Best fit values (including the value of $p$ and the temperature of the {\tt diskpbb} models) for the {\it NuSTAR} data are presented in Table~\ref{tab:nus_fit}. \begin{table*}[!htbp] \caption{Best fit parameters of the dual MCD model for the {\it XMM-Newton}\xspace observations. All errors are in the 1$\sigma$ confidence range.} \begin{center} \scalebox{0.8}{ \begin{tabular}{lccccccccccc} \hline\hline\noalign{\smallskip} \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{nH} & \multicolumn{1}{c}{k${\rm T_{disk}}$} & \multicolumn{1}{c}{${\rm R_{disk}}^{a}$} & \multicolumn{1}{c}{k${\rm T_{hot}}$} & \multicolumn{1}{c}{${\rm {L/L_{edd}}^{b}}$} & \multicolumn{1}{c}{${\rm L_{\rm hot}/L_{\rm disk}}$} & \multicolumn{1}{c}{$B$} & \multicolumn{1}{c}{Clas.$^{c}$} & \multicolumn{1}{r}{red.
$\chi^2/dof$ }\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{1}{c}{} & \multicolumn{1}{c}{[$\times10^{21}$\,cm$^2$]} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{[km]} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$10^{12}$G} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} Ho~II X-1 & 0.40$^{d}$ & 0.42$_{-0.01}^{+0.02}$ & 1421$_{-112 }^{+128 }$ & 1.65$_{-0.05}^{+0.06}$ & 69.8$_{-17.7}^{+21.3}$ & 0.74$\pm0.03$ & 22.7$_{-2.98}^{+3.27}$& SUL & 1.22/137 \\ Ho~IX X-1 & 0.64$_{-0.13}^{+0.15}$ & 0.48$_{-0.04}^{+0.05}$ & 557$_{-108 }^{+134 }$ & 3.15$_{-0.09}^{+0.10}$ & 78.5$_{-22.2}^{+27.5 }$ & 5.62$\pm0.32$& 4.64$_{-1.42}^{+2.09}$& HUL & 1.03/156 \\ IC~342 X-1 & 7.89$_{-0.53}^{+0.60}$ & 0.48$_{-0.05}^{+0.06}$ & 431$_{-97.8}^{+132 }$ & 2.84$_{-0.09}^{+0.10}$ & 21.8$_{-7.45}^{+9.70 }$ & 2.38$\pm0.17$& 1.56$_{-0.55}^{+0.98}$& HUL & 1.02/102 \\ M33 X-8 & 0.95$\pm0.01$ & 0.50$_{-0.07}^{+0.08}$ & 237$_{-47.8}^{+70.4}$ & 1.35$_{-0.05}^{+0.06}$ & 11.1$_{-7.73}^{+8.93}$ & 4.22$\pm1.11$ & 0.42$_{-0.13}^{+0.25}$& BD & 0.99/121 \\ M81 X-6 & 1.63$_{-0.15}^{+0.13}$ & 0.93$_{-0.30}^{+0.23}$ & 145$_{-19.2}^{+18.6}$ & 1.72$_{-0.27}^{+0.16}$ & 24.8$_{-5.74}^{+6.55}$ & 1.27$\pm0.81$ & 0.27$_{-0.06}^{+0.14}$& BD & 1.13/124 \\ M83 ULX & 0.11$_{-0.10}^{+0.93}$ & 0.37$_{-0.12}^{+0.11}$ & 411$_{-76.1}^{+62.3}$ & 1.23$_{-0.13}^{+0.16}$ & 6.25$_{-1.39}^{+1.61}$ & 3.25$\pm2.18$ & 0.80$_{-0.37}^{+0.36}$& -- & 0.92/44 \\ NGC~55 ULX & 2.40$_{-0.60}^{+0.59}$ & 0.28$\pm 0.02$ & 1163$_{-161}^{+108}$ & 0.82$\pm 0.02$ & 10.1$_{-3.22}^{+4.01}$ & 1.12$\pm0.10$ & 6.06$_{-2.81}^{+3.04}$& SUL & 1.08/89 \\ NGC~253 ULX2 & 3.59$_{-0.60}^{+0.59}$ & 0.24$_{-0.03}^{+0.05}$ & 1263$_{-594 }^{+661 }$ & 1.64$\pm 0.04$ & 18.5$_{-6.08}^{+7.56}$ & 8.99$\pm3.72$ & 9.89$_{-6.64}^{+8.54}$& BD & 0.99/120 \\ NGC~253 XMM2 & 1.17$_{-0.17}^{+0.19}$ & 0.53$_{-0.07}^{+0.08}$ & 
239$_{-45.8}^{+65.4}$ & 1.54$_{-0.10}^{+0.14}$ & 10.2$_{-3.51}^{+4.40}$ & 2.60$\pm0.64$ & 0.41$_{-0.13}^{+0.21}$& BD & 1.13/108 \\ NGC~1313 X-1 & 2.42$_{-0.23}^{+0.24}$ & 0.32$\pm 0.02$ & 1612$_{-236 }^{+282 }$ & 2.33$_{-0.10}^{+0.11}$ & 52.3$_{-13.3}^{+16.1}$ & 2.41$\pm0.17$ & 24.3$_{-5.91}^{+7.91}$& SUL & 0.88/113 \\ NGC~1313 X-2 & 1.83$_{-0.10}^{+0.11}$ & 0.65$_{-0.06}^{+0.07}$ & 313$_{-41.4}^{+52.3}$ & 2.25$_{-0.12}^{+0.16}$ & 50.2$_{-14.6}^{+17.5 }$& 3.23$\pm0.81$ & 1.44$_{-0.32}^{+0.45}$& BD & 0.95/144 \\ NGC~4190 ULX1 & 0.99$_{-0.47}^{+0.58}$ & 0.50$_{-0.12}^{+0.22}$ & 352$_{-162 }^{+299 }$ & 1.95$_{-0.10}^{+0.20}$ & 36.2$_{-2.16}^{+2.28}$ & 6.88$\pm3.02$ & 1.52$_{-0.96}^{+2.78}$& BD & 1.00/112 \\ NGC~4559 X-1 & 0.80$_{-0.31}^{+0.33}$ & 0.26$_{-0.02}^{+0.03}$ & 2609$_{-604 }^{+802 }$ & 1.41$_{-0.07}^{+0.08}$ & 42.5$_{-1.63}^{+1.59}$ & 1.99$\pm0.73$ & 49.1$_{-18.2}^{+29.3}$& SUL & 1.05/84 \\ NGC~4736 ULX1 & 3.54$_{-1.62}^{+1.03}$ & 0.40$_{-0.09}^{+0.08}$ & 513$_{-178 }^{+373 }$ & 1.18$_{-0.08}^{+0.13}$ & 19.5$_{-4.46}^{+5.30}$ & 2.34$\pm0.51$ & 1.96$_{-0.73}^{+1.23}$& BD & 1.09/68 \\ NGC~4861 ULX & 1.24$_{-0.90}^{+1.07}$ & 0.25$_{-0.05}^{+0.09}$ & 1639$_{-360 }^{+512 }$ & 1.13$_{-0.20}^{+0.44}$ & 12.6$_{-2.13}^{+2.51}$ & 1.52$\pm0.47$ & 11.4$_{-4.00}^{+6.96}$& -- & 1.09/22 \\ NGC~5204 X-1 & 0.18$_{-0.18}^{+0.41}$ & 0.36$_{-0.05}^{+0.04}$ & 1428$_{-322 }^{+567 }$ & 1.41$_{-0.12}^{+0.08}$ & 48.8$_{-12.5}^{+15.1}$ & 1.22$\pm0.16$ & 19.0$_{-6.85}^{+11.6}$& SUL & 1.09/64 \\ NGC~5907 ULX & 0.75$_{-0.09}^{+0.10}$ & 0.40$_{-0.06}^{+0.08}$ & 2497$_{-324 }^{+257 }$ & 2.53$_{-0.11}^{+0.13}$ & 483$_{-40.9}^{+39.7}$ & 5.51$\pm0.66$ & 184 $_{-12.9}^{+11.3}$& -- & 1.09/83 \\ NGC~7793 P13 & 1.25$_{-0.26}^{+0.27}$ & 0.33$_{-0.03}^{+0.04}$ & 671$_{-160 }^{+203 }$ & 3.27$_{-0.12}^{+0.13}$ & 34.6$_{-8.78}^{+10.6}$ & 11.0$\pm0.35$ & 4.17$_{-1.58}^{+2.46}$& -- & 1.08/141 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} 
\noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} } \end{center} \tablefoot{ \tablefoottext{a}{Radius inferred from the cool {\texttt {diskbb}} component by solving K=${\rm(R_{\rm disk}/{D_{10kpc}})^{2}}\,\cos{i}$ for $R_{\rm disk}$ (the inner radius of the disk in km). `K' is the normalisation of the cool \texttt{diskbb} model, ${\rm D_{10kpc}}$ is the distance in units of 10\,kpc and `i' is the inclination. }\\ \tablefoottext{b}{``Bolometric'' luminosity (L) in the 0.1--100\,keV range, extrapolated from the best-fit model. ${\rm L_{edd}}$ is the Eddington luminosity for an isotropically accreting NS with a mass of $1.4\,{\rm M_{\odot}}$.}\\ \tablefoottext{c}{From \cite{2013MNRAS.435.1758S}. BD: broadened disc state, HUL: hard ultraluminous state, SUL: soft ultraluminous state.} \\ \tablefoottext{d}{Parameter frozen at the total galactic H\,I column density \citep{1990ARA&A..28..215D}.}\\ } \label{tab:xmm_fit} \end{table*} \begin{table*}[!htbp] \caption{Best fit parameters for the {\it NuSTAR} observations. All errors are in the 1$\sigma$ confidence range.} \begin{center} \scalebox{0.8}{ \begin{tabular}{lcccccccccc} \hline\hline\noalign{\smallskip} \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{k${\rm T_{hot}}$} & \multicolumn{1}{c}{${\rm K_{hot}}^{a}$} & \multicolumn{1}{c}{${\rm \Gamma}$} & \multicolumn{1}{c}{${\rm K_{po}}^{b}$} & \multicolumn{1}{c}{${\rm {L/L_{edd}}^{c}}$} & \multicolumn{1}{r}{red. ${\chi^2/dof}$ } & \multicolumn{1}{c}{$p$} & \multicolumn{1}{c}{k${\rm T_{hot}}$}& \multicolumn{1}{r}{red.
${\chi^2/dof}$ }\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{1}{c}{} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{[$10^{-3}$]} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[$10^{-3}$]} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{{\tt diskbb+po}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{{\tt diskpbb}} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} Ho~II X-1 & 2.11$_{-0.11}^{+0.17}$ & 5.09$_{-1.91 }^{+2.21 }$& 2.41$_{-0.26}^{+0.21}$ & 0.98$_{-0.05}^{+0.07}$ & 52.1$_{-7.36}^{+8.85}$ & 0.98/227& $<$0.507 &3.31$_{-0.09}^{+0.10}$& 1.01/228 \\ Ho~IX X-1 & 3.19$_{-0.13}^{+0.14}$ & 2.74$_{-0.46 }^{+0.56 }$& 2.38$_{-0.11}^{+0.12}$ & 2.12$_{-22.2}^{+27.5 }$ & 86.9$_{-33.0}^{+40.8}$ & 1.01/440& 0.542$\pm0.009$ &4.33$_{-0.19}^{+0.21}$& 1.02/441 \\ IC~342 X-1 & 2.37$_{-0.11}^{+0.14}$ & 9.81$_{-2.51}^{+3.15 }$ & 2.58$_{-0.18}^{+0.16}$ & 3.58$_{-1.33}^{+1.60 }$ & 39.1$_{-13.3}^{+18.9}$ & 1.00/296& $<$0.508 &3.41$_{-0.10}^{+0.07}$& 1.04/297 \\ NGC~1313 X-1 & 2.64$_{-0.37}^{+0.44}$ & 2.41$_{-0.71}^{+1.61 }$ & 2.80$_{-0.23}^{+0.29}$ & 1.58$_{-0.23}^{+0.29 }$ & 36.0$_{-12.3}^{+14.8}$ & 1.02/122& $<$0.505 &3.52$_{-0.13}^{+0.14}$& 1.08/123 \\ NGC~1313 X-2 & 1.94$_{-0.25}^{+0.19}$ & 1.46$_{-0.43}^{+0.63 }$ & 4.26$_{-0.22}^{+0.49}$ & 1.18$_{-0.31}^{+0.84 }$ & 14.7$_{-5.00}^{+6.08}$ & 0.98/61 & $<$0.522 &1.77$_{-0.07}^{+0.08}$& 0.98/62\\ NGC~5907 ULX & 2.51$_{-0.12}^{+0.14}$ & 2.41$_{-0.66 }^{+0.56}$ & 1.68$_{-0.41}^{+0.57}$ & 0.05$_{-0.06}^{+0.04 }$ & 457$_{-46.5}^{+49.6}$ & 1.02/107& 0.579$_{-0.04}^{+0.06}$ &3.39$_{-0.07}^{+0.08}$& 1.25/108 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} } \end{center} \tablefoot{ \tablefoottext{a}{Where ${\rm K_{hot}}$ is the normalisation parameter for the {\texttt {diskbb}} component.
Namely ${\rm K_{hot}}$=${\rm(R_{\rm hot}/{D_{10kpc}})^{2}}\,\cos{i}$, where $R_{\rm hot}$ is the inner radius of the disk in km, ${\rm D_{10kpc}}$ is the distance in units of 10\,kpc and `i' is the inclination. }\\ \tablefoottext{b}{Power-law component normalisation constant: photons/keV/cm$^2$/s at 1\,keV.} \\ \tablefoottext{c}{Luminosity in the 3--78\,keV range, extrapolated from the best-fit model. ${\rm L_{edd}}$ is the Eddington luminosity for an isotropically accreting NS with a mass of $1.4\,{\rm M_{\odot}}$.}\\ } \label{tab:nus_fit} \end{table*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip,trim=0 0 0 0,width=0.8\textwidth]{offset_from_diskbb.eps}} \caption{Ho~IX X-1: unfolded spectrum and data-vs-model ratio plot for the {\tt diskbb} model only. There are clear residuals above 20\,keV. } \label{fig:res} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip,trim=0 0 0 0,width=0.8\textwidth]{offset_from_po.eps}} \caption{Ho~IX X-1: unfolded spectrum and data-vs-model ratio plot for the power-law model only. There is clear curvature in the spectrum that cannot be described by the power law. } \label{fig:po} \end{figure} \section{Discussion} \label{discussion} The use of a double thermal spectrum, with temperatures similar to those observed in the dual thermal spectra of soft-state NS-XRBs, successfully describes the spectra of the ULXs in our list, and the unusual high-energy roll-over of the ULX spectra can be re-interpreted as the Wien tail of a hot (multicolour) black body component. The similarities between the spectral morphology of ULXs and that of NS-XRBs in the soft state are illustrated in Figure~\ref{fig:softstate}. We have plotted the {\it XMM-Newton}\xspace spectra of two known NS-XRBs (4U 1916-05 and 4U 1705-44: see Appendix~\ref{appen}) along with the spectra of two (non-pulsating) ULXs from our sample.
Dotted lines correspond to the dual thermal model (in this example an absorbed MCD plus black body model), which -- in the case of the two NS-XRBs -- is used to model emission from the boundary layer and the thin accretion disk. The same configuration is used to model the spectra of the two ULXs (in this example NGC~4559 X-1 and M81~X-6). M81~X-6 is in the BD state and NGC~4559 X-1 in the SUL state. We stress that the unfolded spectra presented in Figure~\ref{fig:softstate} are model dependent. They are used here in order to illustrate the apparent similarities between the spectral shapes of ULXs and soft-state NS-XRBs and not to extract any quantitative information on the spectral parameters (see also a similar example plot in \citealt{2013MNRAS.435.1758S}). The suitability of a double thermal model for the spectra of ULXs had been noted by \cite{2006MNRAS.368..397S}, but the model was dismissed, as it was difficult to explain the presence of a secondary thermal component in terms of an accreting black hole. To probe beyond this superficial similarity, we explore the parameter space of the different spectral fits with respect to theoretical expectations, and discuss our findings and their implications below. \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth, angle=-0]{4plots_v4_new2.epsi} \caption{Example unfolded spectra of two ULXs from our sample and two well-known NS-XRBs in the soft state. (a) 4U 1705-44, during a soft state. (b) Double thermal spectrum from the NS-LMXB 4U 1916-05. (c) Apparent dual thermal emission from ULX M81~X-6, at temperatures similar (see Table~\ref{tab:xmm_fit}) to those of 4U 1705-44. (d) Similarly shaped spectrum from NGC~4559 X-1. } \label{fig:softstate} \end{figure} \begin{figure} \centering \includegraphics[trim=1.2cm 0cm 0cm 0cm, width=0.53\textwidth, angle=-0]{kolio_plot_V4.ps} \caption{Unabsorbed luminosity (in the 0.5--10\,keV range) vs.
the temperature (in keV) of the hot multicolour disk component for the {\it XMM-Newton}\xspace data (Table~\ref{tab:xmm_fit}). The (red) solid curves correspond to the internal temperature ($T_{in}$) of the accretion curtain versus total luminosity, as predicted by \cite{2017MNRAS.tmp..143M} (see their Fig.~3). Different curves correspond to different magnetic field strengths. From left to right: $10^{12}$, $2{\times}10^{12}$, $4{\times}10^{12}$, $8{\times}10^{12}$, $1.6{\times}10^{13}$ and $3.2{\times}10^{13}$\,G. } \label{fig:T_L} \end{figure} \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, width=0.5\textwidth, angle=-0]{B_vs_L.eps} \caption{Magnetic field strength vs.~luminosity for the dual MCD model. All values are taken from Table~\ref{tab:xmm_fit} (columns 6 and 8). } \label{fig:B_L} \end{figure} \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, width=0.5\textwidth, angle=-0]{L_kTdisk_noline.eps} \caption{Luminosity vs.~disk temperature for the dual MCD model. All values are taken from Table~\ref{tab:xmm_fit} (columns 3 and 6). } \label{fig:Tdisk_L} \end{figure} \begin{table*}[!htbp] \caption{Best fit parameters of the MCD plus black body model for the {\it XMM-Newton}\xspace observations. All errors are in the 1$\sigma$ confidence range.} \begin{center} \scalebox{0.8}{ \begin{tabular}{lccccccc} \hline\hline\noalign{\smallskip} \multicolumn{1}{c}{Source} & \multicolumn{1}{c}{nH} & \multicolumn{1}{c}{k${\rm T_{in}}$} & \multicolumn{1}{c}{${\rm R_{in}}$} & \multicolumn{1}{c}{k${\rm T_{bb}}$} & \multicolumn{1}{c}{${\rm R_{bb}}$} & \multicolumn{1}{c}{{${\rm R_{sph}}^{a}$}} & \multicolumn{1}{r}{red.
$\chi^2/dof$ }\\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{1}{c}{} & \multicolumn{1}{c}{[$\times10^{21}$\,cm$^2$]} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{[km]} & \multicolumn{1}{c}{keV} & \multicolumn{1}{c}{km} & \multicolumn{1}{c}{km} & \multicolumn{1}{c}{} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} Ho II X-1 & 0.40$^{b}$ & 0.57$_{-0.01}^{+0.02}$ & 724$_{-62.1}^{+76.5 }$ & 1.26$_{-0.18}^{+0.17}$ & 93.5$_{-22.7}^{+28.3}$& 974 $_{-247 }^{+297}$ & 1.25/137 \\ Ho IX X-1 & 0.33$_{-0.08}^{+0.10}$ & 0.91$_{-0.03}^{+0.04}$ & 198$_{-42.2 }^{+38.3}$ & 1.95$_{-0.25}^{+0.27}$ & 59.2$_{-17.7}^{+18.8}$& 1120$_{-310 }^{+384}$ & 1.06/156 \\ IC~342 X-1 & 7.42$_{-0.41}^{+0.44}$ & 0.61$\pm0.05$ & 302$_{-52.5 }^{+61.1}$ & 1.66$_{-0.10}^{+0.14}$ & 72.8$_{-19.8}^{+23.0}$& 304 $_{-104 }^{+135}$ & 1.05/101 \\ M33 X-8 & 0.82$_{-0.02}^{+0.03}$ & 0.70$_{-0.07}^{+0.06}$ & 202$_{-41.0 }^{+51.8 }$& 1.11$_{-0.18}^{+0.16}$ & 62.1$_{-44.5}^{+50.2}$& 155 $_{-108 }^{+125}$ & 1.01/121 \\ M81 X-6 & 1.61$_{-0.12}^{+0.16}$ & 1.03$_{-0.42}^{+0.33}$ & 155$_{-22.2 }^{+20.8 }$& 1.51$_{-0.35}^{+0.46}$ & 38.2$_{-6.78}^{+8.12}$& 346 $_{-80.1}^{+91.4}$ & 1.13/124 \\ M83 ULX & 0.39$^{b}$ & 0.38$_{-0.08}^{+0.10}$ & 482$_{-91.8 }^{+86.8 }$& 0.83$_{-0.11}^{+0.09}$ & 88.3$_{-16.3}^{+20.0}$& 87.2$_{-19.4}^{+22.5}$ & 0.95/44 \\ NGC~55 ULX & 2.16$_{-0.51}^{+0.58}$ & 0.33$_{-0.02}^{+0.03}$ & 986$_{-201 }^{+108 }$ & 0.65$_{-0.15}^{+0.16}$ & 128$_{-17.7}^{+18.3}$ & 153 $_{-29.8}^{+32.1}$ & 1.11/89 \\ NGC~253 ULX2 & 2.35$_{-0.51}^{+0.58}$ & 0.95$_{-0.12}^{+0.14}$ & 121$_{-257 }^{+302 }$ & 1.39$_{-0.15}^{+0.16}$ & 51.8$_{-15.7}^{+14.3}$& 258 $_{-84.8}^{+105}$ & 1.05/120 \\ NGC~253 XMM2 & 1.09$_{-0.21}^{+0.19}$ & 0.65$_{-0.10}^{+0.09}$ & 218$_{-47.6 }^{+64.3 }$& 1.17$_{-0.09}^{+0.11}$ & 54.2$_{-16.6}^{+22.0}$& 142 $_{-49.0}^{+61.4}$ & 1.14/108 \\ NGC~1313 X-1 & 1.83$_{-0.17}^{+0.18}$ & 0.44$_{-0.20}^{+0.21}$ & 778$_{-133 }^{+156 }$ &
1.39$_{-0.43}^{+0.54}$ & 52.8$_{-22.8}^{+22.0}$& 730 $_{-186 }^{+225}$ & 1.09/113 \\ NGC~1313 X-2 & 1.70$\pm0.11$ & 0.86$_{-0.08}^{+0.11}$ & 261$_{-35.6 }^{+43.2 }$& 1.66$_{-0.05}^{+0.06}$ & 54.6$_{-15.8}^{+19.1}$& 700 $_{-204 }^{+244}$ & 1.01/144 \\ NGC~4190 ULX1 & 0.56$_{-0.31}^{+0.32}$ & 0.87$_{-0.38}^{+0.42}$ & 256$_{-120 }^{+208 }$ & 1.51$_{-0.19}^{+0.17}$ & 85.0$_{-6.06}^{+10.2}$& 505 $_{-30.1}^{+31.8}$ & 1.06/112 \\ NGC~4559 X-1 & 0.15$^{b}$ & 0.43$_{-0.05}^{+0.04}$ & 1133$_{-412 }^{+528 }$ & 1.04$_{-0.10}^{+0.12}$ & 169$_ {-18.1}^{+21.5}$& 626 $_{-22.7}^{+22.2}$ & 1.07/84 \\ NGC~4736 ULX1 & 0.86$_{-0.31}^{+0.33}$ & 0.61$_{-0.18}^{+0.22}$ & 269$_{-122 }^{+209 }$ & 0.99$\pm0.06$ & 76.6$_{-10.7}^{+11.3}$& 272 $_{-62.2}^{+73.9}$ & 1.12/68 \\ NGC~4861 ULX & 1.15$_{-0.81}^{+0.92}$ & 0.33$_{-0.08}^{+0.11}$ & 2398$_{-687 }^{+728 }$ & 0.93$_{-0.25}^{+0.31}$ & 205$_{-35.5}^{+44.7}$& 176 $_{-29.7}^{+35.2}$ & 1.10/22 \\ NGC~5204 X-1 & 0.14$^{b}$ & 0.43$_{-0.12}^{+0.09}$ & 1262$_{-295 }^{+308 }$ & 0.99$_{-0.15}^{+0.18}$ & 152$_{-39.1}^{+41.0}$& 681 $_{-174 }^{+211}$ & 1.15/65 \\ NGC~5907 ULX & 6.04$_{-2.52}^{+3.01}$ & 0.74$\pm0.11$ & 608$_{-101}^{+133 }$ & 1.38$_{-0.09}^{+0.12}$ & 224$_{-57.7}^{+49.3}$& 6740$_{-571 }^{+554}$ & 1.09/83 \\ NGC~7793 P13 & 0.66$\pm0.14$ & 0.63$\pm0.04$ & 251$_{-115 }^{+108 }$ & 1.70$\pm0.05$ & 59.7$_{-19.1}^{+22.0}$& 483 $_{-122 }^{+148}$ & 1.10/141 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} } \end{center} \tablefoot{ \tablefoottext{a}{Spherisation radius \citep[e.g.][]{2016MNRAS.458L..10K}: ${\rm R_{sph}}=27/4{\dot m}{R_{g}}$, where ${\rm R_{g}}=GM/c^{2}$ and ${\dot m}={\dot M}/{\dot M_{Edd}}$ .}\\ \tablefoottext{b}{Parameter frozen at total galactic H\,I column density \citep{1990ARA&A..28..215D}.}\\ } \label{tab:bbody} \end{table*} More specifically, to investigate the case for 
(super-Eddington) accretion onto weakly magnetised NSs \citep[e.g.][]{2016MNRAS.458L..10K}, we applied the {\tt diskbb} plus {\tt bbody} model that is often used to model NS-XRBs in the soft state. Indeed, this model describes the spectra of the sources in our list well. However, the radius inferred from the hot black body fit is approximately an order of magnitude larger\footnote{The boundary layer is expected to be a few km in size (\citealt{2009ApJ...696.1257L} and Table~\ref{appen} in this work.) } than the size of the spreading layer on the surface of the NS (see Table~\ref{tab:bbody}). This is not surprising, since at such high accretion rates -- and for a low magnetic field NS (as considered in the \citealt{2016MNRAS.458L..10K} model) -- the accretion disk will extend well beyond the ``spherization'' radius \citep[${\rm R_{sph}}$:][]{1973A&A....24..337S}. The flow will be strongly super-Eddington and the material will, most likely, be ejected away from the surface of the NS \citep[e.g.][]{2017MNRAS.468L..59K}. In this case, the hot thermal component may be the result of emission from the inner disk layers, exposed by the strong outflows: the hot thermal component would correspond to the stripped, inner accretion disk and the soft thermal component to the optically thick outflow, as proposed in the optically thick wind scenario. Indeed, the best fit values for the size of the soft thermal component are in agreement with ${\rm R_{sph}}$ for most of the sources in our list, alluding to the exciting possibility of accreting NSs powering a large fraction of ULXs. However, it is surprising that the dual thermal spectrum would be almost indistinguishable between the BH-ULXs and the NS-ULXs, given that in this framework the maximum temperature of the accretion disk should exceed ${\sim}4\,$keV in the case of the NSs (perhaps even higher given the very high accretion rates of particular sources).
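The spherization radii quoted in Table~\ref{tab:bbody} follow from ${\rm R_{sph}}=(27/4)\,{\dot m}\,R_{g}$; a minimal numerical sketch (constants rounded, mass and accretion rate illustrative) is:

```python
# Minimal sketch of the spherization radius R_sph = (27/4) * mdot * R_g,
# with R_g = G M / c^2, for a 1.4 Msun NS; mdot is in Eddington units.
# Constants are rounded and the example mdot is illustrative.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m s^-1
M_SUN = 1.989e30   # kg

def r_sph_km(mdot_edd, mass_msun=1.4):
    """Spherization radius in km for an Eddington-scaled accretion rate."""
    r_g_km = G * mass_msun * M_SUN / C**2 / 1e3
    return 27.0 / 4.0 * mdot_edd * r_g_km

r_sph_50 = r_sph_km(50.0)   # mdot ~ 50 gives R_sph of roughly 700 km
```

Values of this magnitude, a few hundred km, are indeed what the soft-component radii in Table~\ref{tab:bbody} cluster around.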
On the other hand, there is still no strictly defined mechanism to account for the hot black body emission in the super-Eddington regime of accreting NSs, and a more precise treatment may be able to resolve this apparent discrepancy. The fact still remains that the homogeneous fit parameters hint at the possibility of most (if not all) sources in our sample belonging to one uniform population. It is certainly plausible that this is a population of NS-XRBs instead of BH-XRBs. This implication becomes even more intriguing when we consider the fact that two of our sources (the pulsators NGC~5907~ULX and NGC~7793~P13) are almost certainly powered by highly magnetised NSs, but -- unlike sub-Eddington NS-XRBs -- the spectra of pulsating and non-pulsating ULXs are remarkably similar. The possibility of highly magnetised NSs powering more ULXs in our list becomes even more relevant when we consider that the most reliable and thoroughly established mechanism for sustained super-Eddington accretion episodes is the funnelling of material onto the magnetic poles of high-B NSs. Indeed, in a recent publication, \cite{2017ApJ...836..113P} indicate that since the hard emission from many ULXs can be described by a combination of a hard power law and an exponential cutoff -- as expected for the emission of the accretion funnel -- this could be considered as an indication in favour of highly magnetised NSs powering a significant fraction of ULXs. However, this claim is problematic, since -- in the presence of a high magnetic field -- the photons are expected to be concentrated in a narrow beam, most likely following the fan-beam emission diagram. Therefore, as the NS rotates, the emission should be registered in the form of characteristic pulsations.
More importantly, the pulse profiles of such sources have complex shapes, comprising two or more characteristic sharp peaks \citep[e.g.][]{1981ApJ...251..288N, 1983ApJ...270..711W,1992hrfm.book.....M, 1996BASI...24..729P, 1997A&A...319..507P, 2004A&A...421..235R, 2013A&A...558A..74V,2014A&A...567A.129V,2016MNRAS.456.3535K}. This picture is further complicated by the fact that -- most likely -- a fraction of the fan-beam emission is scattered by fast electrons at the edge of the accretion column and subsequently beamed towards the surface of the NS \citep{1976SvA....20..436K,1988SvAL...14..390L,2013ApJ...777..115P}, off of which it is reflected, resulting in a secondary ``polar'' beam, which further complicates the pulse shape \citep[e.g.][]{2013ApJ...764...49T,kolio2017}. All but three known ULXs lack any evidence of pulsations, and the three pulsating sources (NGC~5907, M82 X-2 and NGC~7793 P13) have very smooth and simple sinusoidal pulse profiles. Therefore, very serious doubts are cast on the interpretation of the ULX spectra as being due to direct emission from the accretion column. This contradiction appears to be resolved in a recent publication by \cite{2017MNRAS.tmp..143M}. In this work, the authors demonstrate that highly magnetised NSs, accreting at high mass-accretion rates, can become engulfed in a closed and optically thick envelope (see their Fig. 1). As the primary, beamed emission of the accretion funnel is reprocessed by the optically thick material, the original pulsation information is lost. However, if the latitudinal gradient is sufficiently pronounced -- and depending on the viewing angle and inclination of the accretion curtain -- the emission may be registered as smooth sinusoidal pulses (this could be the case for the three PULXs). More interestingly, the reprocessed emission is expected to have a multicolour black body (MCB) spectrum with a high temperature (${\gtrsim}1.0$\,keV).
The hot MCB component will be accompanied by a cooler (${\lesssim}0.5$\,keV) thermal component, which originates in a truncated accretion disk. More specifically, the accretion disk is expected to extend uninterrupted until it reaches ${\sim}R_{\rm mag}$, where the material follows the magnetic field lines to form the optically thick curtain. In this description the characteristic double thermal spectra of NS-XRBs can coexist with high-B super-Eddington accretion, thus setting NSs as excellent candidates for powering ULXs. In light of these findings, we remodelled the {\it XMM-Newton}\xspace spectra of the 18 ULXs using two MCD components. Indeed, the dual MCD model yields marginally better fits than the MCD/black-body fit in all sources. The best fit values for the temperature and inner radius of the cool MCD component indicate the presence of a strong magnetic field in all the ULXs in our sample. Namely, their values are consistent with a truncated accretion disk. If we assume that the disk is truncated close to the magnetospheric radius (i.e.~to a first approximation ${\rm R_{in}}$=${\rm R_{mag}}$), we estimate that the intensity of the magnetic field exceeds $10^{12}$\,G in most sources in our list. More importantly, the temperatures of the hot MCD component and the fit-derived luminosities (see Table~\ref{tab:xmm_fit}) occupy the same parameter space as predicted by \citeauthor{2017MNRAS.tmp..143M} (\citeyear{2017MNRAS.tmp..143M}: their Fig. 3, and also Fig.~\ref{fig:T_L} in this work). Namely, the best-fit values for k${\rm T_{hot}}$ appear to follow the theoretical curves predicted in \cite{2017MNRAS.tmp..143M} and, as a general trend, sources with stronger magnetic fields are more luminous (see Fig.~\ref{fig:B_L}) and have a hotter accretion curtain.
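The order of magnitude of these estimates can be reproduced with a back-of-the-envelope inversion of the standard dipole magnetospheric radius, $R_{\rm mag}={\xi}\,\left(\mu^{4}/2GM\dot{M}^{2}\right)^{1/7}$. The sketch below (with ${\xi}{\approx}0.5$, $\dot{M}$ from $L=GM\dot{M}/R_{\rm NS}$, and $B{\approx}\mu/R_{\rm NS}^{3}$) is a simplified assumption, not the exact calculation behind Table~\ref{tab:xmm_fit}.

```python
# Hedged back-of-the-envelope sketch: invert the dipole magnetospheric
# radius R_mag = xi * (mu^4 / (2 G M Mdot^2))^(1/7) to get the field
# implied by R_in = R_mag. cgs units; xi, the NS mass and radius, and the
# example inputs are illustrative assumptions, not the paper's pipeline.
G = 6.674e-8       # cm^3 g^-1 s^-2
M_SUN = 1.989e33   # g

def b_field_gauss(r_in_cm, lum_erg_s, xi=0.5, mass_msun=1.4, r_ns_cm=1.0e6):
    """Surface dipole field (mu / R_NS^3) implied by a truncation radius."""
    gm = G * mass_msun * M_SUN
    mdot = lum_erg_s * r_ns_cm / gm                 # from L = G M Mdot / R_NS
    mu = ((r_in_cm / xi) ** 7 * 2.0 * gm * mdot**2) ** 0.25
    return mu / r_ns_cm**3

# e.g. R_in ~ 1600 km and L ~ 5e39 erg/s give B of a few 1e13 G,
# comparable to the values tabulated for the SUL sources:
b_est = b_field_gauss(1.6e8, 5.0e39)
```

As noted in the text, prescriptions that differ by factors of order unity in $\xi$ or in the disk solution shift these values by up to a factor of a few, so they should be read as order-of-magnitude indicators only.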
The observed correlation between the magnetic field strength and the source luminosity is in agreement with the predictions of \cite{2015MNRAS.454.2539M}, where the accretion luminosity of magnetised neutron stars in the super-critical regime is discussed. Following this scheme, we also place NGC~5907 ULX in the magnetar regime (B${\sim}1.8{\times}10^{14}$\,G), which is in agreement with the findings of \cite{2017Sci...355..817I}. As \citeauthor{2017Sci...355..817I} also point out, such a high value of the magnetic field is puzzling\footnote{In \cite{2017Sci...355..817I} a multi-pole component is proposed as a resolution.}, since the source should be repeatedly entering the propeller regime \citep{1975A&A....39..185I,1986ApJ...308..669S}. However, we must underline the fact that the magnetic field values presented in this work are estimated based on the crude assumption that $R_{\rm mag}$ is equal to the truncation radius of a thin Shakura-Sunyaev disk accreting onto a bipolar magnetic field. As such, the derived values should be treated as indications of a strongly magnetised accretor, but not considered at face value. A more realistic treatment of specific sources could yield B-field values up to a factor of ${\sim}5{-}6$ lower. For instance, if we re-estimate the magnetic field values using the latest considerations of \cite{2017MNRAS.470.2799C} -- where it is shown that in the radiation-pressure-dominated regime the size of the magnetosphere is independent of the mass accretion rate (see their Eqs. 39, 41 and 61) -- we end up with a value of B${\sim}3.5{\times}10^{13}$\,G for NGC~5907 ULX and magnetic field values lower by ${\sim}30{-}470\%$ for the other sources in our list. Nevertheless, the main outcome of our analysis remains. The best-fit parameters are consistent with our underlying assumption of a high magnetic field, which reinforces the plausibility of this scenario.
A similar scenario -- in which (non-pulsating) ULXs are interpreted as high-B NSs in a supercritical propeller stage -- is also proposed by \cite{2017arXiv170804502E} in a study that was submitted for publication in MNRAS during the refereeing process of this work. An additional implication of the high magnetic fields is the requirement that these sources are very young. Depending on the initial value of the magnetic field (which in this scenario should be at magnetar levels), the initial spin period, and the mass accretion rate, these sources are most likely younger than ${\sim}5{\times}10^{6}$\,yr, if we assume that they currently have a magnetic field of the order of $10^{12}$\,G \citep[e.g.][]{1979ApJ...234..296G,2006MNRAS.366..137Z,2016MNRAS.461....2P}. Indications in favour of a relatively young age (of the order of ${\sim}$10\,Myr) can be maintained for sources HoII~X-1, HoIX~X-1, IC342~X-1, M81~X-6, NGC~1313~X-2, NGC~253~XMM2, NGC~253~ULX2, NGC~4559, NGC~4736 and NGC~5204, which are associated with young stellar environments and star forming regions \citep{2005MNRAS.356...12S,2007ApJ...661..165L,2008A&A...486..151G,2008ApJ...687..471B,2010ApJ...708..354B,2011ApJ...734...23G,2013ApJ...776..100B}. Furthermore, sources HoII~X-1, HoIX~X-1, IC342~X-1, M81~X-6, NGC~1313~X-2 and NGC~5204 have optical counterparts indicating that they are very young objects \citep{2004ApJ...603..523Z, 2004MNRAS.351L..83K,2006ApJ...641..241R}, while the nebulae of IC~342~X-1 and HoIX~X-1 indicate activity of less than ${\sim}$1\,Myr \citep{2002astro.ph..2488P, 2004MNRAS.351L..83K, 2007AstBu..62...36A, 2008ApJ...675.1067F,2012ApJ...749...17C}. However, estimation of the stellar companion's age based on the optical counterpart can be hindered by the fact that its emission may originate in the (irradiated) outer accretion disk, rather than the photosphere of the donor star \citep[e.g.][]{2012ApJ...745..123G,2012ApJ...758...85T,2012ApJ...750..110T}.
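The ${\sim}5{\times}10^{6}$\,yr figure quoted above can be illustrated with a simple exponential decay law, $B(t)=B_{0}\,e^{-t/\tau}$; both the law and the ${\sim}1$\,Myr timescale used below are illustrative assumptions (realistic field evolution is more complex).

```python
import math

# Illustrative sketch only: with exponential field decay B(t) = B0*exp(-t/tau),
# decaying from magnetar-level ~1e14 G to ~1e12 G takes ln(100) ~ 4.6 decay
# timescales. The decay law and tau here are assumptions for illustration.

def decay_age_yr(b_initial_g, b_now_g, tau_yr=1.0e6):
    """Age implied by exponential decay from b_initial_g to b_now_g."""
    return tau_yr * math.log(b_initial_g / b_now_g)

age_yr = decay_age_yr(1.0e14, 1.0e12)   # ~4.6 Myr for tau = 1 Myr
```

Under these assumptions, an age of a few Myr is comfortably compatible with the young stellar environments listed above.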
Furthermore, the magnetic field values inferred from the spectral fitting of some of our sources would require even younger ages than those derived from observations -- that is, for B${\gtrsim}10^{13}$\,G and assuming standard magnetic field decay \citep[e.g.][]{2002apa..book.....F,2006MNRAS.366..137Z}. Therefore, further indications of the presence of a magnetic field in ULXs, and more accurate estimates of its strength, are required to explore this intriguing scenario. If the ``cool'' MCD component indeed originates in a truncated accretion disk, we would expect a positive correlation between the disk temperature (${\rm T_{disk}}$) and the bolometric luminosity (L), as argued by \cite{2013ApJ...776L..36M}. In Figure~\ref{fig:Tdisk_L} we plot ${\rm R_{disk}}$ versus ${\rm T_{disk}}$; however, since the accretion disk is expected to become truncated at different values of ${\rm R_{disk}}$ (which in our scheme correspond to different B-field strengths and mass accretion rates for different sources), there is a large scatter in the derived values and an accurate estimation of the $L{\sim}T$ relation cannot be attempted. Regarding our choice to model the cool thermal emission using a thin disk model, we must note that while in all sources analysed in this work $R_{\rm mag}$ is larger than ${\rm R_{sph}}$ (for any plausible value of ${f_{\rm col}}$) -- and therefore the accretion disk could be assumed to remain thin -- the $R_{\rm mag}$ values are only nominally larger than ${\rm R_{sph}}$ and, more importantly, the fit-inferred luminosities of the disk component are super-Eddington, suggesting that the disk will most likely be geometrically thick. In this case we would expect advection to perturb the thin MCD spectrum which we have used to model the cool thermal component. Nevertheless, this will not introduce any significant qualitative difference in our results (see e.g.
\citealt{2013A&A...553A..61S} and also discussion in \citealt{2017MNRAS.tmp..143M}) and therefore -- as with the hot thermal component -- the {\tt diskbb} model is sufficient for the purposes of this work. Another potential issue with the geometrically thick disk is the expected emergence of outflows due to radiation pressure, which may call the stability of this mechanism into question. However, in the presence of strong magnetic fields, the accreting, optically thick material will remain bound, even for luminosities of the order of ${\sim}10^{40}$\,erg/s \citep{2017MNRAS.tmp..143M,2017MNRAS.470.2799C}. We also note that the numerically estimated curves in Figure~3 of \citeauthor{2017MNRAS.tmp..143M} refer to the internal temperature (${\rm T_{in}}$, the temperature of the inner boundary of the emission curtain, which faces the NS) of the geometrically thick accretion envelope. In the optically thick regime, ${\rm T_{in}}$ is related to ${\rm T_{out}}$ (which corresponds to the observed ${\rm T_{hot}}$) as \begin{equation} T_{\rm in}{\approx}{T_{\rm out}}{\tau}^{\frac{1}{4}} \label{eq:tinout} .\end{equation} Therefore, the values of ${\rm T_{hot}}$ presented in Figure~\ref{fig:T_L} should be multiplied by a factor of ${\approx}1.8-2.1$ (corresponding to an optically thick corona, i.e.~$\tau{\approx}10-20$) in order to represent the internal temperature (${\rm T_{in}}$). However, as stated in Section~\ref{sec-observations}, in our temperature estimations we have ignored the spectral hardening, which would have produced colour-corrected temperatures given by ${\rm T_{cor}}={\rm T_{hot}}$/{$f_{\rm col}$\xspace}, with $f_{\rm col}$\xspace ranging between 1.5 and 2.1.
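As a quick numerical check of the factor quoted above (a minimal sketch using only Eq.~\eqref{eq:tinout}):

```python
# T_in / T_out = tau^(1/4): for an optically thick corona with
# tau ~ 10-20 this gives a multiplicative factor of ~1.8-2.1,
# matching the quoted range of the colour-correction factor f_col
factors = {tau: tau ** 0.25 for tau in (10, 20)}
low, high = factors[10], factors[20]   # ~1.78 and ~2.11
```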
This notable consistency between the colour correction factor and the relation between internal and external temperature in an optically thick accretion envelope further reinforces its plausibility, and with it, our confidence in the observational verification of this new scheme proposed by \cite{2017MNRAS.tmp..143M}. These intriguing findings are also supported by the {\it NuSTAR}\xspace observations. Indeed, analysis of the {\it NuSTAR} data confirms the presence of the $<$10\,keV roll over, observed in the {\it XMM-Newton}\xspace data (see e.g. Figure~\ref{fig:po}). More importantly, when the spectral curvature is modelled as hot MCB emission, the temperatures of the {\tt diskbb} component in the {\it NuSTAR} data are in agreement with the {\it XMM-Newton}\xspace observations, particularly in those sources that were observed at similar luminosity. The presence of a thermal-like component in the {\it NuSTAR}\xspace spectra of ULXs has also been noted for sources Circinus ULX5 \citep{2013ApJ...779..148W} and NGC~5204~X-1 \citep{2015ApJ...808...64M}, further supporting the case for emission from hot, optically thick material. Indications for the presence of optically thick material at the boundary of the magnetosphere can also be found in sources that lie below the Eddington limit. Several X-ray pulsars (in the sub-Eddington regime) exhibit a characteristic spectral ``soft excess'', which is well described by a black body distribution at a temperature of ${{\sim}}$0.1-0.2\,keV \citep[e.g.][and references therein]{2004ApJ...614..881H}. This emission has been attributed to reprocessing of hard X-rays by optically thick material in the vicinity of the magnetosphere. The size of the reprocessing region is considerably larger than the inner edge of a standard accretion disk and it is argued that it may partially cover the primary hard emission from the accretion column \citep[e.g.][]{2006A&A...448..261Z,2006A&A...455..283L,2009A&A...494.1073R,2015MNRAS.449.3710S}. 
It is plausible that the predictions of \cite{2017MNRAS.tmp..143M} are -- in essence -- an expansion of these arguments to the super-Eddington regime, where the optically thick material engulfs the entire magnetosphere, obscuring most (or all) of the primary hard emission. The resulting accretion envelope has a temperature that is an order of magnitude higher than that of the soft excess in the sub-Eddington sources. The {\it NuSTAR}\xspace data also reveal the presence of a weak power law above ${\sim}15$\,keV. The hard emission, which is often present in the spectra of NS-XRBs in the soft state \citep[e.g.][and references therein]{2001AdSpR..28..307B,2007A&ARv..15....1D,2009ApJ...696.1257L}, has also been noted by \cite{2014ApJ...793...21W}, \cite{2015ApJ...806...65W}, and \cite{2017ApJ...834...77F} in {\it NuSTAR}\xspace data of Ho~IX X-1, Ho~II X-1, and NGC~5907 ULX, respectively. The thermal emission of the accretion curtain may be modified by IC scattering from a photoionised atmosphere, analogous to an accretion disk corona \citep[e.g.][]{1980A&A....86..121S,1993ApJ...413..507H}. The presence of this highly ionised plasma is also supported by the detection of emission-like features in some of the observations in our list (see Sect.~\ref{sec-observations}). We note that similar, broad-emission-like features centred at ${\sim}1\,$keV have also been detected in the spectra of ``nominal'' X-ray pulsars at lower accretion rates \citep[e.g.][]{2002MNRAS.337.1185R,2015MNRAS.449.3710S,2016MNRAS.458L..74L}, which also exhibit the soft excess. We must also highlight the possibility that the apparent power-law tail may in fact be an artifact, resulting from modelling the MCB emission of the quasi-spherical accretion curtain with an MCD model.
In Section~\ref{sec-observations}, it was noted that when we model the hot thermal component with an MCB model where ${\rm T_{hot}}$ is proportional to $r^{-p}$ and $p$ is left to vary freely, the {\it NuSTAR}\xspace spectra can be successfully fitted without the requirement for the hard tail. More specifically, in all cases the value of $p$ is less than ${\sim}$0.58, which -- in the context of accretion disks -- points to an ``inflated'', advective, slim disk \citep[e.g.][and references therein]{2004ApJ...601..428K}. In this case, it is fairly plausible that the radically different geometry of the accretion curtain will cause a significant deviation from the temperature gradient of $T{\sim}r^{-0.75}$ assumed by the standard MCD model used in our fits, resulting in an underestimation of the hard emission, which appears as an excess above ${\sim}15\,$keV. The hypothesis of an ``obscured'', highly magnetised NS as the central engine in ULXs may also resolve the contradiction regarding the only known (to date) detection of a relativistic jet in a ULX, in Ho~II~X-1 \citep{2015MNRAS.452...24C}. While collimated jets are often detected in BH-XRBs and AGN, they are only present during the low-accretion, non-thermal {\it hard state}. In the case of Ho~II~X-1, though, the collimated jet is detected in a high accretion state, during which the spectrum is dominated by thermal emission -- in stark contrast to the BH-XRB/jet paradigm. This contradiction is resolved when we consider a high-B NS as the accretor, which -- for sufficiently high values of the magnetic field -- can power collimated relativistic jets at high accretion rates \citep{2017MNRAS.469.3656P,2016ApJ...822...33P}. However, in this framework the presence of the jet is also contingent on the NS spin period. Only a limited set of parameters would yield a powerful jet in the high accretion-rate regime, which may explain the lack of a jet in most known ULXs.
It is also plausible that the non-detection of radio jets is simply the result of a lack of sensitivity, since the expected flux would most likely lie in the few $\mu$Jy range or less. Given the above discussion and the bulk of theoretical expectations, strong outflows should be expected for any type of accretor (i.e. a BH, or a NS with a high or low magnetic field). The more pertinent question, then, is whether the soft thermal component originates in a hot, optically thick wind close to the accretor, or in the truncated accretion disk. Therefore, given the recent considerations regarding the different candidates for the ULX accretors, it is important that the evolution of the $L_{soft}{\sim}{T_{in}}$ relation for specific sources is revisited. The ``universal'', power-law-shaped luminosity function of ULXs and HMXBs \citep[e.g.][]{2004NuPhS.132..369G,2004ApJS..154..519S,2012MNRAS.419.2095M} may also be interpreted as favouring NS-powered ULXs. More specifically, the smooth shape of the HMXB luminosity function up to ${\log L}{\sim}40.5$ strongly implies that ULXs are composed of ordinary HMXBs with stellar-mass accretors. Since most HMXBs are powered by NSs \citep[e.g.][]{2006A&A...455.1165L,2009ApJ...707..870B} and most ULXs are found in star-forming regions \citep[e.g.][and references therein]{2011NewAR..55..166F} that favour the evolution of NS-HMXBs, it is reasonable to postulate that most ULXs are indeed NS-XRBs. It is also of great interest to investigate whether there are fundamentally different characteristics between ULXs and sources that lie above the ${\sim}10^{40}$\,erg/s break in the luminosity function \citep{2012MNRAS.419.2095M}. Indeed, the two brightest HLXs -- M82~X-1 and ESO~243-49~HLX-1 -- do not feature the spectral cutoff of ULXs \citep[e.g.][]{2006ApJ...637L..21D,2009Natur.460...73F} and also appear to transition between the empirical BH-XRB accretion states (e.g.
\citealt{2009ApJ...705L.109G}; \citealt{2010ApJ...712L.169F}, although \citealt{2016ApJ...829...28B} recently indicated that M82~X-1, during episodes of high accretion, can be modelled as a stellar-mass BH accreting at super-Eddington rates). These differing aspects of sources above and below the luminosity break indicate that HLXs and ULXs harbour different types of accretor. Within the scheme discussed in this work, this could mean that while most ULXs have NS accretors, HLXs harbour either supercritically accreting stellar-mass BHs or sub-Eddington accreting IMBHs. Nevertheless, given the very small sample of HLX sources, such hypotheses remain strictly in the realm of speculation. \section{Conclusion} \label{conclusion} We have presented an alternative interpretation of the X-ray spectra of eighteen well-known ULXs, which provides physically meaningful spectral parameters. More specifically, from the analysis of the {\it XMM-Newton}\xspace and {\it NuSTAR}\xspace spectra, we note that the curvature above ${{\sim}}$5\,keV -- found in the spectra of most ULXs -- is consistent with the Wien tail of thermal emission in the $>$1\,keV range. Furthermore, the high-quality {\it XMM-Newton}\xspace spectra confirm the presence of a secondary, cooler thermal component. These findings are in agreement with the analysis presented in previous works \citep[e.g.][]{2014ApJ...793...21W,2015ApJ...806...65W,2016MNRAS.460.4417L}. However, in contrast to the currently accepted paradigm, we propose that the dual thermal spectrum may be the result of accretion onto a highly magnetised NS, as predicted in recent theoretical models \citep{2017MNRAS.tmp..143M} in which the hot thermal component originates in an optically thick envelope that engulfs the entire NS at the boundary of the magnetosphere, and the soft thermal component originates in an accretion disk that becomes truncated at approximately the magnetospheric radius.
We claim that this finding offers an additional and compelling argument in favour of neutron stars as more suitable candidates for powering ULXs, as has been recently suggested \citep{2016MNRAS.458L..10K,2017MNRAS.468L..59K}. In light of this interpretation, the ultraluminous state classification put forward by \cite{2013MNRAS.435.1758S} can be re-interpreted in terms of the different temperatures and relative flux contributions of the two thermal components, which result in the different spectral morphologies. Nevertheless, we stress that there is considerable degeneracy between different models that can fit the spectra equally well, and so far there are no observational features, such as cyclotron lines or transitions to the propeller regime \citep[e.g.][]{1972A&A....21....1P,1973ApJ...184..271L,1975A&A....39..185I}, that would conclusively favour this hypothesis over other comprehensive and equally plausible interpretations (i.e.~optically thick outflows from critically accreting black holes). Furthermore, the presence of strong outflows is also expected in the case of accreting high-field NSs, which may account for the soft thermal emission in the NS scenario as well. Given the encouraging results of this work, further examination of this scenario is warranted. To this end, the fractional variability of ULXs (which is addressed by the BH super-Eddington wind model) should be reviewed in the context of the highly magnetised NS model, and the possibility of aperiodic flux variation due to the rotating accretion curtain must be explored further. Moreover, deeper broadband observations that will also allow precise phase-resolved spectroscopy of the pulsating sources, as well as long-term monitoring -- in sources for which this is feasible -- are necessary in order to further probe this newly emerging paradigm. \section{Acknowledgements} The authors would like to thank the anonymous referee whose contribution significantly improved our manuscript.
Also, F.~K., O.~G., N.~W. and D.~B. acknowledge support from the CNES. F.~K. warmly thanks Apostolos Mastichiadis and Maria Petropoulou for comments and stimulating discussion. \begin{appendix} \section{Spectral extraction and analysis of the NS-XRBs} {\it XMM-Newton}\xspace spectra for NS-XRBs 4U 1705-44 and 4U 1916-05 were extracted and analysed using the same procedures as described in Section~\ref{sec-observations}. Both sources were fitted with an absorbed MCD plus black body model ({\tt xspec} model {\tt tbnew\_gas(diskbb+bbody)}), where the MCD model was used for the ``cool'' thermal emission of the truncated accretion disk and the black body for the ``hot'' thermal emission, expected to originate in the boundary layer formed on the surface of the NS. Best-fit values are presented in Table~\ref{appen}. The inner disk radius inferred from the normalisation parameter of the {\tt diskbb} component was estimated using the expression given in Table~\ref{tab:xmm_fit}. We assumed an inclination of $i=60\deg$ for 4U 1705-44 and $i=80\deg$ for 4U 1916-05 (an edge-on source; \citealt{2004A&A...418.1061B}). The size of the black body emitting region (column 6) was estimated using the expression given in Table~\ref{tab:bbody}. Distances of 8\,kpc and 9\,kpc were assumed for 4U 1705-44 and 4U 1916-05, respectively.
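As an illustration of how such a size estimate works, the sketch below applies the Stefan--Boltzmann law, $L = 4\pi R^{2}\sigma T^{4}$, for a spherical emitter; the luminosity used here is an assumed fiducial input for a bright NS-LMXB, not a fitted value from Table~\ref{appen}:

```python
import math

SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
KEV_TO_K = 1.1605e7   # 1 keV expressed in Kelvin

def blackbody_radius_km(L_erg_s, kT_keV):
    """Radius [km] of a sphere radiating L at temperature kT,
    via L = 4 pi R^2 sigma T^4 (spherical emitter assumed)."""
    T = kT_keV * KEV_TO_K
    R_cm = math.sqrt(L_erg_s / (4.0 * math.pi * SIGMA_SB * T ** 4))
    return R_cm / 1.0e5

# e.g. kT = 1.76 keV with an assumed L ~ 3.5e37 erg/s yields a few km,
# comparable to boundary-layer sizes on a NS surface
R = blackbody_radius_km(3.5e37, 1.76)
```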
\begin{table*}[!htbp] \begin{center} \begin{threeparttable} \caption{Best-fit values for {\it XMM-Newton}\xspace observations 0085290301 and 0551270201 of sources 4U 1916-05 and 4U 1705-44, respectively.} \begin{tabular}{lccccc} \toprule Source & nH & k${\rm T_{in}}$ &${\rm R_{in}}$ &${\rm T_{BB}}$ &${\rm R_{BB}}$ \\ & $\times10^{21}$\,cm$^{-2}$ & keV & km & keV & km \\ \midrule 4U 1705-44 & 22.5$\pm0.13$ & 1.14$\pm0.03$ & 11.8$_{-0.38}^{+0.37}$ & 1.76$\pm0.01$ & 5.36$_{-0.19}^{+0.22}$ \\ 4U 1916-05 & 2.15$\pm0.03$ & 0.66$\pm0.01$ & 13.2$_{-0.12}^{+0.31}$ & 1.75$\pm0.01$ & 1.23$_{-0.01}^{+0.02}$ \\ \bottomrule \end{tabular} \label{appen} \end{threeparttable} \end{center} \end{table*} \end{appendix}
\section{Introduction} \subsection{Stein's spherical maximal function and its arithmetic analogue} In \cite{Stein_spherical}, Stein introduced the spherical maximal function and proved that it is bounded on $L^p(\R^\dimension)$ for $p>\frac{\dimension}{\dimension-1}$ and $\dimension \geq 3$. This was later extended to $p>2$ when $\dimension=2$ by Bourgain in \cite{Bourgain_circular}. Recently, discrete analogues of Stein's spherical maximal function have been considered. The discrete sphere of radius $\radius \geq 0$ in $\Z^\dimension$ is $\arithmeticsphere := \{x\in \Z^\dimension : |x|^2=r^2 \}$ which contains $N_{\dimension}(\radius) = \# \arithmeticsphere$ lattice points. For dimensions $\dimension \geq 4$, the set $\arithmeticsphere$ is non-empty precisely when $\radius^2 \in \N$. Let $\acceptableradii$ denote the set of radii $\radius$ such that $\arithmeticsphere \not= \emptyset$, then $\acceptableradii$ is precisely $\setof{\radius \in \R_{\geq 0} : \radius^2 \in \N}$ when $\dimension \geq 4$. For $\radius \in \acceptableradii$, we introduce the discrete spherical averages: \begin{equation} \arithmeticsphericalaveragefxn(x) = \inverse{N_{\dimension}(\radius)} \sum_{y \in \arithmeticsphere} \fxn(x-y) = \fxn \convolvedwith \arithmeticspheremeasure{\radius} (x) \end{equation} where $\arithmeticspheremeasure{\radius} := \frac{1}{N_\dimension(\radius)} {\bf 1}_{ \{x\in \Z^\dimension : |x|^\degreetwo=r^\degreetwo\} }$ is the uniform probability measure on $\arithmeticsphere$. The associated (full) maximal function is \begin{equation} \arithmeticsphericalmaxfxn = \sup_{\radius \in \acceptableradii} \absolutevalueof{\arithmeticsphericalaveragefxn} . 
\end{equation} Motivated by Stein's theorem, it is natural to ask: \emph{when is $\maxop$ bounded on $\ell^p(\Z^\dimension)$?} Testing the maximal operator on the delta function and using the asymptotics for the number of lattice points on spheres, $N_\dimension(\radius) \eqsim \radius^{\dimension-2}$ when $\dimension \geq 5$, we expect that the maximal operator is bounded on $\ell^p$ for $p>\frac{\dimension}{\dimension-2}$ when $\dimension \geq 5$. In fact, building on the work of \cite{Magyar_dyadic}, this was proven in \cite{MSW_spherical}, with a subsequent restricted weak-type bound at the endpoint $p=\frac{\dimension}{\dimension-2}$ proven in \cite{Ionescu_spherical}. In particular, $\maxop$ is a bounded operator from $\ell^{p,1}(\Z^\dimension)$ to $\ell^{p,\infty}(\Z^\dimension)$ for $p=\frac{\dimension}{\dimension-2}$; that is, $\maxop$ is restricted weak-type $(\frac{\dimension}{\dimension-2},\frac{\dimension}{\dimension-2})$. This result is sharp. For generalizations to higher degree varieties, where the sharp ranges of $\ell^p(\Z^\dimension)$-boundedness are unknown, we refer the reader to \cite{Magyar_ergodic} and \cite{Hughes_Vinogradov}. \subsection{The lacunary spherical maximal function and its arithmetic analogue} Shortly after Stein's work on the spherical maximal function \cite{Stein_spherical}, it was observed by Calder\'on and Coifman--Weiss that lacunary versions of Stein's spherical maximal function are bounded on a larger range of $L^p(\R^\dimension)$-spaces than the full spherical maximal function -- see \cite{Calderón_lacunary_spherical} and \cite{Coifman_Weiss_lacunary_spherical} respectively. In particular, they proved: \begin{lacunarythm} The lacunary (continuous) spherical maximal function is bounded on $L^p(\R^\dimension)$ for $\dimension \geq 2$ and $1 < p \leq \infty$.
\end{lacunarythm} Similarly, we define the lacunary discrete spherical maximal function when $\dimension \geq 5$ by restricting the set of radii to lie in a lacunary sequence $\radii := \{ \radius_{j} \}_{j \in \N} \subset \acceptableradii$. Recall that a sequence is lacunary if $\radius_{j+1} > c \, \radius_j$ for some $c > 1$. More generally, for any \(\radii \subseteq \acceptableradii\), the \emph{discrete spherical maximal function over $\radii$} is defined in the natural way as \begin{equation} \sequentialarithmeticsphericalmaxfxn := \sup_{\radius_j \in \radii} \; \absolutevalueof{\lacunaryavgfxn{\radius_j}} . \end{equation} By the Magyar--Stein--Wainger discrete spherical maximal theorem of \cite{MSW_spherical}, any discrete spherical maximal function over a subsequence of radii in $\acceptableradii$ is bounded on $\ell^p(\Z^\dimension)$ for $\dimension \geq 5$ and $p > \frac{\dimension}{\dimension-2}$; in particular, this holds for any lacunary subsequence in 5 or more dimensions. It is conjectured that the continuous lacunary spherical maximal function is bounded from $L^1(\R^\dimension)$ to $L^{1,\infty}(\R^\dimension)$ for $\dimension \geq 2$. See \cite{STW_endpoint_spherical} and \cite{STW_pointwise_lacunary} for recent work in this direction. Analogously, it is a folklore conjecture that the arithmetic lacunary spherical maximal function is bounded on $\ell^p(\Z^\dimension)$ for $p>1$. The following conjecture is our motivation for this paper. \begin{conjecture}\label{conjecture:lacunary} For $\dimension \geq 5$, if $\radii$ is a lacunary subsequence of $\acceptableradii$, then $\sequentialarithmeticsphericalmaxop: \ell^1(\Z^\dimension) \to \ell^{1,\infty}(\Z^\dimension)$. \end{conjecture} \subsection{It's a trap!}\label{subsection:Ackbar} Surprisingly, J.
Zienkiewicz has shown that Conjecture~\ref{conjecture:lacunary} is false in general.\footnote{This counterexample was communicated to the author by Zienkiewicz after an initial draft of this paper was completed.} More precisely, Zienkiewicz proved that there exist infinite, yet arbitrarily thin subsets \(\radii \subset \acceptableradii\) such that \(\sequentialarithmeticsphericalmaxfxn\) is unbounded on \(\ell^p(\Z^\dimension)\) for \(1 \leq p < \frac{\dimension}{\dimension-1}\) and \(\dimension \geq 5\). Zienkiewicz's counterexamples proceed by a probabilistic argument that incorporates information about the discrete spherical averages when one reduces mod \(Q\) for \(Q \in \N\); in this way he constructs sequences of radii that violate \eqref{good_primes_estimate} of {\bf G} below for infinitely many primes. In Section~\ref{section:conclusion} we revise Conjecture~\ref{conjecture:lacunary} to account for these counterexamples. \subsection{Results of this paper} Our main theorem is the following improvement to the range of boundedness for maximal functions over lacunary sequences of radii that possess an interesting arithmetic dichotomy. \begin{thm}\label{thm:variants_of_Euclid} Let \(\radii := \setof{\radius_j} \subset \acceptableradii\) be a lacunary subsequence of \(\R^+\).
Assume that \(\radii\) decomposes the primes \(\theprimes \subset \N\) into two (not necessarily disjoint) sets: the good primes \(\goodprimes\) and the bad primes \(\badprimes\) such that \begin{itemize} \item[{\bf G}:]\label{item:good_primes_estimate} for each $\oddprime \in \goodprimes$, \begin{equation}\label{good_primes_are_units} \setof{\radius_j^2 \mod \oddprime} \subset \unitsmod{\oddprime} , \end{equation} and for all $\epsilon>0$, \begin{equation}\label{good_primes_estimate} \# \setof{\radius_j^2 \mod \oddprime} \lesssim_{\epsilon} \oddprime^\epsilon \end{equation} where the implicit constants may depend on \(\epsilon\), but not on \(\oddprime \in \goodprimes\), \item[{\bf B}:]\label{item:bad_primes_estimate} and the bad primes satisfy \begin{equation}\label{bad_primes_estimate} \sum_{\oddprime \in \badprimes} \oddprime^{-s} < \infty \text{ for some $s \in (0,1]$. } \end{equation} \end{itemize} If \( p \geq \frac{\dimension}{\dimension-(1+s)}\) and \(p > \frac{\dimension-1}{\dimension-2} \), then \(\sequentialarithmeticsphericalmaxop\) is a bounded operator on \(\ell^p(\Z^\dimension)\) for \(\dimension \geq 5\). If additionally \(2 \in \goodprimes\), then \(\sequentialarithmeticsphericalmaxop\) is bounded on \(\ell^p(\Z^\dimension)\) for the same range of \(p\) and \(\dimension \geq 4\). \end{thm} Theorem~\ref{thm:variants_of_Euclid} reduces our problem to finding sequences of natural numbers satisfying certain arithmetic properties, and it would be superfluous if we could not find a sequence of radii satisfying the {\bf G} and {\bf B} dichotomy. Our next theorem gives a family of sequences satisfying these conditions. This family is well known in number theory, as it includes \emph{primorials}, also known as \emph{Euclidean primes}, whose definition is motivated by Euclid's proof of the infinitude of primes. For these sequences, \eqref{good_primes_estimate} is simple to verify.
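The mechanism behind this verification is easy to see concretely: for radii with $\radius_j^2 = 1 + \prod_{i \leq k_j} \oddprime_i$, once a prime divides the primorial, every later $\radius_j^2$ is congruent to $1$ modulo that prime, so the residue set in \eqref{good_primes_estimate} stays finite. A small stdlib-only sketch (the cutoffs below are arbitrary illustrative choices):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(100)

# r_j^2 = 1 + (product of the first j+1 primes), j = 0, 1, 2, ...
squares, prod = [], 1
for p in primes:
    prod *= p
    squares.append(1 + prod)

# For the k-th prime q, every square from index k onwards is 1 mod q,
# so #{r_j^2 mod q} is bounded independently of how many radii we take.
residue_sets = {q: {s % q for s in squares} for q in primes[:10]}
```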
However, \eqref{bad_primes_estimate} is difficult to verify, and we only have a very poor bound for it in this article. In turn, for our family of sparse sequences, we are presently only able to show that the associated discrete spherical maximal function is strong-type at the Magyar--Stein--Wainger endpoint for the full discrete spherical maximal function, as opposed to the restricted weak-type bound of \cite{Ionescu_spherical}. \begin{thm}\label{thm:ENFANT} Let $\sequencegrowth>1$. For any fixed \(m \in \N\), the sequence of radii $\radii = \setof{\radius_j \in \R^+ : \radius_j^2 = m + \prod_{j_0 \leq i \leq 2^{j^\sequencegrowth}} \oddprime_i}$ satisfies \eqref{good_primes_are_units} and \eqref{good_primes_estimate} of {\bf G} for all primes and \eqref{bad_primes_estimate} of {\bf B} for \(s=1\). \end{thm} Theorem~\ref{thm:variants_of_Euclid} and Theorem~\ref{thm:ENFANT} immediately combine to yield the following endpoint Magyar--Stein--Wainger theorem applied to such sequences. \begin{cor}\label{cor:Euclid} Let \(\dimension \geq 4\), \(\sequencegrowth > 1\) and \(\radii = \setof{\radius_j \in \R^+ : \radius_j^2 = 1 + \prod_{i \leq 2^{j^\sequencegrowth}} \oddprime_i}\) where \(\oddprime_i\) is the \(i^{th}\) prime. Let \(\sequentialarithmeticsphericalmaxop\) denote the spherical maximal function associated to $\radii$. Then \(\sequentialarithmeticsphericalmaxop\) is a bounded operator on \(\ell^p(\Z^\dimension)\) for \(p \geq \frac{\dimension}{\dimension-2}\) and \(\dimension \geq 4\). \end{cor} \begin{rem} By the Prime Number Theorem, $\prod_{i \leq T} \oddprime_i \eqsim e^{T}$ as $T \to \infty$. We see that our sequence grows much faster than lacunary, since \(\prod_{i \leq 2^{j^\sequencegrowth}} \oddprime_i \eqsim e^{2^{j^\sequencegrowth}}\) for any $\sequencegrowth > 0$ as $j \to \infty$. The existence of thicker sequences with property \eqref{good_primes_estimate} would be interesting.
On the other hand, our main difficulty in this paper is to establish \eqref{bad_primes_estimate}. We only succeed in doing so for $s=1$; hence the limitation to \(p \geq \frac{\dimension}{\dimension-2}\) in Corollary~\ref{cor:Euclid}. \end{rem} An intriguing aspect of Corollary~\ref{cor:Euclid} is that \(\sequentialarithmeticsphericalmaxop\) is bounded on $\ell^2(\Z^4)$. This is surprising since the full discrete spherical maximal function, $\arithmeticsphericalmaxop$, fails to be bounded on $\ell^2(\Z^4)$. Worse yet, for dimensions $\dimension \leq 4$, the full maximal function is only bounded on $\ell^\infty(\Z^\dimension)$. Theorem~\ref{thm:variants_of_Euclid} and Corollary~\ref{cor:Euclid} mark the first results in 4 dimensions for boundedness of the arithmetic spherical maximal function over infinite sequences. Let us examine the four dimensional situation further. In $\Z^4$, there are precisely 24 lattice points on a sphere of radius $2^j$ for all $j \in \N$, i.e. $N_{4}(2^j) = 24$. Applying the discrete spherical maximal function to the delta function demonstrates that the naive definition of our maximal function in 4 dimensions is wrong. However, further considerations suggest that there could be a version of the Magyar--Stein--Wainger theorem in 4 dimensions. To make this precise, we must account for some arithmetic phenomena. From the work of Hardy--Littlewood on the circle method, we have the asymptotic formula \begin{equation}\label{eq:HL_asymptotic} N_\dimension(\radius) = \singularseries{\radius^2} \frac{\pi^{\dimension/2}}{\Gamma(\dimension/2)} \radius^{\dimension-2} +O_{\epsilon}(\radius^{\dimension/2+\epsilon}) \end{equation} where $\singularseries{\radius^2}$ is the singular series, which satisfies $\singularseries{\radius^2} \eqsim 1$ when $\dimension \geq 5$. Lagrange's theorem and Jacobi's four square theorem demonstrate that the 4 dimensional case (i.e. \(S^3(\radius) \subset \Z^4\)) is different.
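The constancy $N_4(2^j) = 24$, a consequence of Jacobi's formula $r_4(n) = 8\sum_{d \mid n,\, 4 \nmid d} d$, is easy to confirm by brute force for small $j$; a sketch:

```python
from itertools import product

def lattice_points(n, dim=4):
    """Count x in Z^dim with |x|^2 = n by brute force."""
    bound = int(n ** 0.5)
    return sum(
        1
        for x in product(range(-bound, bound + 1), repeat=dim)
        if sum(t * t for t in x) == n
    )

# N_4(2^j) for j = 1, 2, 3, i.e. squared radii 4, 16, 64:
# each count is 24, matching Jacobi's formula for powers of 2
counts = [lattice_points(4 ** j) for j in (1, 2, 3)]
```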
In four dimensions, the bound for the error term in \eqref{eq:HL_asymptotic} dominates the main term, and therefore \eqref{eq:HL_asymptotic} is not useful as an asymptotic. However, Kloosterman was able to refine their method by exploiting oscillation between Gauss sums to improve \eqref{eq:HL_asymptotic} to \begin{equation}\label{equation:Kloosterman} N_\dimension(\radius) = \singularseries{\radius^2} \frac{\pi^{\dimension/2}}{\Gamma(\dimension/2)} \radius^{\dimension-2} +O_{\epsilon}(\radius^{\frac{\dimension}{2}-\frac{1}{9}+\epsilon}) \end{equation} for all $\epsilon > 0$ and $\dimension \geq 4$. The cost here is that the singular series in the asymptotic formula is not uniform, and in fact can be very small. One can predict this from Jacobi's theorem since there are precisely 24 lattice points when $\radius = 2^j$ for all $j \in \N$; in this case, one sees that $\singularseries{4^j} \lesssim 4^{-j}$. To avoid this 2-adic obstruction in the singular series when $\dimension = 4$, we make the additional assumption that \emph{$\radius_j^2 \not\equiv 0 \pmod{4}$ for each $j \in \N$} or that \eqref{good_primes_are_units} holds for the prime 2. In either case $N_4(\radius) \eqsim \radius^2$, so that there are many lattice points on $S^3(\radius)$. Modifying the discrete spherical maximal function in 4 dimensions in this way, it is natural to conjecture that it is bounded on $\ell^p(\Z^4)$ for $2 < p \leq \infty$ -- see \cite{Hughes_thesis} for a precise statement of this conjecture and a related result. \subsection{Notations}\label{subsection:notations} Our notation is a mix of notations from analytic number theory and harmonic analysis. Most of our notation is standard, but there are a few choices based on aesthetics. \begin{itemize} \item The torus $\T^\dimension$ may be identified with any box in $\euclideanspace$ of sidelengths 1, for instance $[0,1]^\dimension$ or $[-1/2,1/2]^\dimension$.
\item We identify $\Zmod{\modulus}$ with the set $\inbraces{1, \dots, \modulus}$ and $\unitsmod{\modulus}$ is the group of units in $\Zmod{\modulus}$, also considered as a subset of $\inbraces{1, \dots, \modulus}$. \item $\eof{t}$ will denote the character $e^{2 \pi i t}$ for $t \in \R, \Zmod{\modulus}$ or $\T$. \item We abuse notation by writing ${b}^2$ to mean $\sum_{i=1}^{\dimension} b_i^2$ for $b \in (\Zmod{\modulus})^{\dimension}$ and the dot product notation $b \cdot m$ to mean $\sum_{i=1}^{\dimension} b_i m_i$ for $b, m \in (\Zmod{\modulus})^\dimension$ or \(\Z^\dimension\). \item For any $\modulus \in \N$, $\totientfunction{\modulus}$ will denote Euler's totient function, the size of $\unitsmod{\modulus}$. \item For two functions $f, g$, $f \lesssim g$ if $\absolutevalueof{f(x)} \leq C \absolutevalueof{g(x)}$ for some constant $C>0$. $f$ and $g$ are comparable $f \eqsim g$ if $f \lesssim g$ and $g \lesssim f$. All constants throughout the paper may depend on dimension $\dimension$. \item If $f: \R^\dimension \to \C$, then we define its Fourier transform by $\contFT{\fxn}(\xi) := \int_{\R^\dimension} f(x) e(x \cdot \xi) dx$ for $\xi \in \R^\dimension$. If $f: \T^\dimension \to \C$, then we define its Fourier transform by $\torusFT{\fxn}(\latticepoint) := \int_{\T^\dimension} f(x) e(-\latticepoint \cdot x) dx$ for $\latticepoint \in \lattice$. If $f:\lattice \to \C$, then we define its inverse Fourier transform by $\latticeFT{\fxn}(\xi) := \sum_{\latticepoint \in \lattice} f(\latticepoint) e(n \cdot \xi)$ for $\xi \in \T^\dimension$. \item $\ellpoperatornorm{T}$ will denote the $\ell^p(\Z^\dimension)$ to $\ell^p(\Z^\dimension)$ operator norm of the operator $T$. \end{itemize} \subsection{Layout of the paper} By interpolation with the usual $\ell^\infty(\Z^\dimension)$ bound, we restrict our attention to the range $1 \leq p \leq 2$. 
From \cite{MSW_spherical}, we understand that each average decomposes into a main term (resembling the singular series and singular integral of the circle method) and an error term. We recall this machinery in Section~\ref{section:MSW_machinery}. The main term and error term will be bounded on ranges of $\ell^p(\Z^\dimension)$-spaces by distinct arguments. In Section~\ref{section:main_term}, our bounds for the main term exploit the Weil bounds for Kloosterman sums via the transference principle of Magyar--Stein--Wainger. The main result here is Lemma~\ref{lemma:main_term}, and we introduce a more precise decomposition of the multipliers in order to use the Kloosterman method. In Section~\ref{section:error_term}, the error term is handled by a square function argument using the lacunary condition. The main lemma here is Lemma~\ref{lemma:error_term_lemma}. Our novelty here is that we exploit (well-known) cancellation for averages of Ramanujan sums to improve the straightforward $\ell^1(\Z^\dimension)$ bound. Theorem~\ref{thm:variants_of_Euclid} follows immediately by combining Lemma~\ref{lemma:main_term} with Lemma~\ref{lemma:error_term_lemma}. In Section~\ref{section:ENFANT_example}, we prove Theorem~\ref{thm:ENFANT}. The properties of our sequences are well known to analytic number theorists, but we could not find them in the literature. Section~\ref{section:conclusion} concludes our paper with some questions and remarks. \section{MSW machinery and the Kloosterman refinement}\label{section:MSW_machinery} Before turning to the proof of Theorem~\ref{thm:variants_of_Euclid}, we review the Kloosterman refinement as in (1.9) of Lemma~1 from \cite{Magyar_discrepancy}, some machinery from \cite{MSW_spherical}, and bounds for exponential sums.
Let $\arithmeticspheremeasure{\radius} := \frac{1}{N_\dimension(\radius)} \charfxn{\setof{\latticepoint \in \lattice : \absolutevalueof{\latticepoint} = \radius}}$ denote the normalized surface measure on the sphere of radius $\radius$ centered at the origin for some $\radius \in \radii$. The circle method of Hardy--Littlewood and of Kloosterman yields $N_\dimension(\radius) \eqsim \radius^{\dimension-2}$ for $\radius \in \acceptableradii$ when $\dimension \geq 5$ and for $\radius^2 \not\equiv 0 \pmod{4}$ when $\dimension=4$, so we renormalize our spherical measure to \begin{equation} \arithmeticspheremeasure{\radius} := \radius^{2-\dimension} \cdot \charfxn{\setof{\latticepoint \in \lattice : \absolutevalueof{\latticepoint} = \radius}} . \end{equation} Note that our subsequences of radii $\radii$ exclude the case $\radius^2 \equiv 0 \pmod{4}$ when $\dimension=4$, so that we may renormalize in this case when 2 is a good prime; that is, 2 satisfies \eqref{good_primes_are_units} of {\bf G}. Furthermore, we renormalize our averages and maximal function accordingly. Using Heath-Brown's version of the Kloosterman refinement of the Hardy--Littlewood--Ramanujan circle method from \cite{HB_cubic_forms}, Magyar gave an approximation formula generalizing \eqref{equation:Kloosterman} for $\arithmeticspheremeasure{\radius}$ in \cite{Magyar_discrepancy}.
We recall this now: \begin{approximationlemma} If $\dimension \geq 4$, then for each $\radius \in \acceptableradii$, \begin{equation}\label{eq:Kloosterman_approximation} \arithmeticFT{\arithmeticspheremeasure{\radius}}(\toruspoint) = \sum_{\modulus=1}^{\radius} \sum_{\latticepoint \in \lattice} K(\modulus,\radius^2;\latticepoint) \Psi(\modulus \toruspoint - \latticepoint) \contFT{d\spheremeasure_\radius}(\toruspoint - \latticepoint/\modulus) + \arithmeticFT{\error}(\toruspoint) \end{equation} with an error term $\error$, the convolution operator given by the multiplier $\arithmeticFT{\error}$, satisfying \begin{equation}\label{eq:Kloosterman_error_bound} \norm{\error \fxn}_{\ell^2(\lattice)} \lesssim_\epsilon \radius^{2-\frac{\dimension+1}{2}+\epsilon} \norm{\fxn}_{\ell^2(\lattice)} \end{equation} for any $\epsilon > 0$. \end{approximationlemma} Here and throughout, for $\modulus, N \in \N$ and $\latticepoint \in \Z^\dimension$, \begin{equation} K(\modulus, N; \latticepoint) := \modulus^{-\dimension} \sum_{\unit \in \unitsmod{\modulus}} \eof{ -\frac{\unit N}{\modulus} } \sum_{b \in \inparentheses{\Zmod{\modulus}}^\dimension} \eof{\frac{\unit b^\degreetwo + b \cdot \latticepoint}{\modulus}} \end{equation} are \emph{Kloosterman sums}, and $\Psi$ is a smooth function supported in $[-1/4, 1/4]^\dimension$ and equal to 1 on $[-1/8, 1/8]^\dimension$. Our Kloosterman sums arise naturally in Waring's problem as a weighted sum of the Gauss sums \begin{equation} \twistedGausssum{\latticepoint} := \modulus^{-\dimension} \sum_{b \in \inparentheses{\Zmod{\modulus}}^\dimension} \eof{\frac{\unit b^\degreetwo + b \cdot \latticepoint}{\modulus}} \end{equation} ($\modulus \in \N$, $\unit \in \unitsmod{\modulus}$ and $\latticepoint \in \Z^\dimension$) so that $K(\modulus, N; \latticepoint) = \sum_{\unit \in \unitsmod{\modulus}} \eof{ -\frac{\unit N}{\modulus} } \twistedGausssum{\latticepoint}$.
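As a quick consistency check on the definitions (this observation is immediate and included only for orientation): when $\modulus = 1$, every character $\eof{\cdot}$ above is evaluated at an integer, so
\[
K(1, N; \latticepoint) = 1 \quad \text{for all } N \in \N, \ \latticepoint \in \Z^\dimension ,
\]
and the $\modulus = 1$ term of \eqref{eq:Kloosterman_approximation} is
\[
\sum_{\latticepoint \in \lattice} \Psi(\toruspoint - \latticepoint) \contFT{d\spheremeasure_\radius}(\toruspoint - \latticepoint) ,
\]
the periodization of the major-arc approximation from the classical Hardy--Littlewood circle method.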
\(d\spheremeasure_\radius\) denotes the induced Lebesgue measure on the sphere of radius \(\radius\) in \(\R^\dimension\) normalized so that the total surface measure is $\pi^{\dimension/2} / \Gamma(\dimension/2)$ for each $\radius > 0$. Note that this spherical measure is also the restriction of the Gelfand--Leray form to the sphere of radius \(\radius\), or of the Dirac delta measure, both with the appropriate normalization. One may take $\toruspoint=0$ to check that \eqref{eq:Kloosterman_approximation} is compatible with \eqref{eq:HL_asymptotic} (keep in mind our renormalization). \begin{rem} The bound for the error term in \eqref{eq:Kloosterman_error_bound} was obtained with a weaker exponent of $2-\frac{\dimension}{2}-\frac{1}{9}+\epsilon$ in place of $2-\frac{\dimension}{2}-\frac{1}{2}+\epsilon$ for the dyadic maximal function version in \cite{Hughes_thesis} by extending Kloosterman's original method in \cite{Kloosterman}, while Magyar achieved the (presumably optimal) savings of \eqref{eq:Kloosterman_error_bound} using Heath-Brown's method in \cite{HB_cubic_forms}. Alternatively, Heath-Brown's method in \cite{HB_quadratic_forms} achieves \eqref{eq:Kloosterman_error_bound}. \end{rem} With The Approximation Formula in mind, it is necessary to understand the relationship between multipliers defined on $\torus$ and $\euclideanspace$. Suppose that $\mu$ is a multiplier supported in $[-1/2,1/2]^\dimension$; then we can think of $\mu$ as a multiplier on $\euclideanspace$ or $\torus$; denote these as $\mu_{\euclideanspace}$ and $\mu_{\torus}$ respectively, where $\mu_{\torus}(\toruspoint):= \sum_{\latticepoint \in \lattice} \mu(\toruspoint-\latticepoint)$ is the periodization of $\mu_{\euclideanspace}$. These have convolution operators $T_{\euclideanspace}$ and $T_{\lattice}$ on their respective spaces.
Explicitly, for $F:\euclideanspace \to \C$, \[ T_{\euclideanspace}F(\spacepoint) := \int_{\euclideanspace} \mu_{\euclideanspace}(\freqpoint) \contFT{F}(\freqpoint) \eof{-\spacepoint \cdot \freqpoint} \, d\freqpoint \] and for $f:\lattice \to \C$, \[ T_{\lattice} f(\latticepoint) := \int_{\torus} \mu_{\torus}(\toruspoint) \arithmeticFT{f}(\toruspoint) \eof{-\latticepoint \cdot \toruspoint} \, d\toruspoint . \] We will need to apply these to maximal functions, so we extend these notions to Banach spaces. Let $B_1, B_2$ be two finite dimensional Banach spaces with norms $\norm{\cdot}_1, \norm{\cdot}_2$, and let $\mathcal{L}(B_1,B_2)$ be the space of bounded linear transformations from $B_1$ to $B_2$. Let $\ell^p_{B_i}$ be the space of functions $f:\lattice \to B_i$ such that $\sum_{\latticepoint \in \lattice} \norm{f(\latticepoint)}_{i}^p < \infty$ and $L^p_{B_i}$ be the space of functions $F:\euclideanspace \to B_i$ such that $\int_{\euclideanspace} \norm{F(\spacepoint)}_{i}^p \, d\spacepoint < \infty$. For a fixed modulus $\modulus \in \N$, suppose that $\mu: [-1/2\modulus,1/2\modulus]^\dimension \to \mathcal{L}(B_1,B_2)$ is a multiplier with convolution operators $T_{\euclideanspace}$ on $\euclideanspace$ and $T_{\lattice}$ on $\lattice$. Extend $\mu$ periodically to the torus to define $\mu^{\modulus}_{\torus}(\freqpoint) := \sum_{\latticepoint \in \lattice} \mu(\freqpoint-\latticepoint/\modulus)$ with convolution operator $T^{\modulus}_{\lattice}$ on $\lattice$ defined by $\arithmeticFT{T^{\modulus}_{\lattice}f}(\freqpoint) = \mu^{\modulus}_{\torus}(\freqpoint) \cdot \arithmeticFT{f}(\freqpoint)$. Magyar--Stein--Wainger proved a transference principle which relates the boundedness of $T_{\R^\dimension}$ to that of $T_{\Z^\dimension}^{\modulus}$ for any finite dimensional Banach space.
The following transference principle is Proposition~2.1 in \cite{MSW_spherical}: \begin{MSW_transference} For $1 \leq p \leq \infty$, \begin{equation}\label{tranference_principle} \norm{T^{\modulus}_{\lattice}}_{\ell^p_{B_1} \to \ell^p_{B_2}} \lesssim \norm{T_{\euclideanspace}}_{L^p_{B_1} \to L^p_{B_2}} . \end{equation} The implicit constant is independent of $B_1,B_2,p$ and $\modulus$. \end{MSW_transference} We will apply this lemma with $B_1 = B_2 = \ell^\infty(\N)$ in order to compare averages over the discrete spherical maximal function with known bounds for averages over the continuous lacunary spherical maximal function. Technically, we should truncate the maximal function and apply the lemma with $B_1 = B_2 = \ell^\infty(\setof{1, \dots, N})$ for arbitrarily large $N \in \N$ with bounds independent of \(N\). However, this is a standard technique that we will not emphasize. The Magyar--Stein--Wainger transference lemma allows us to utilize our understanding of the continuous theory for spherical averages and reduces our problem to understanding the arithmetic aspects of the multipliers $\sum_{\latticepoint \in \lattice} K(\modulus,\radius^2;\latticepoint) \Psi(\modulus \toruspoint - \latticepoint) \contFT{d\spheremeasure}(\radius(\toruspoint - \latticepoint/\modulus))$ for each $\modulus$. To handle these we recall Proposition~2.2 in \cite{MSW_spherical}: \begin{lemma}[Magyar--Stein--Wainger]\label{lemma:MSW_periodic_inequality} Suppose that $\mu(\toruspoint) = \sum_{\latticepoint \in \lattice} g({\latticepoint}) \bumpfxn(\toruspoint - \latticepoint/\modulus)$ is a multiplier on $\torus$ where $\bumpfxn$ is smooth and supported in $[-1/2\modulus,1/2\modulus]^\dimension$ with convolution operator $T$ on $\lattice$. Furthermore, assume that $g({\latticepoint})$ is $\modulus$-periodic ($g(\latticepoint_1) = g(\latticepoint_2)$ if $\latticepoint_1 \equiv \latticepoint_2 \mod{\modulus}$). 
For a $\modulus$-periodic sequence $g$, define the \((\Zmod{\modulus})^\dimension\)-Fourier transform $\arithmeticFT{g}(\latticepoint) := \sum_{b \in \inparentheses{\Zmod{\modulus}}^\dimension} g(b) e\left( \frac{\latticepoint \cdot b}{\modulus} \right)$. Then for $1\leq p \leq 2$, \begin{equation}\label{eq:MSW_periodic_inequality} \norm{T}_{\ell^p(\lattice) \to \ell^p(\lattice)} \lesssim \left( \sup_{m \in \inparentheses{\Zmod{\modulus}}^\dimension} \lvert g({m}) \rvert \right)^{2-2/p} \left( \sup_{n \in \inparentheses{\Zmod{\modulus}}^\dimension} \lvert \arithmeticFT{g}({n}) \rvert \right)^{2/p-1} \end{equation} with implicit constants depending on $\bumpfxn$ and $p$, but independent of $g$. \end{lemma} We will apply Lemma~\ref{lemma:MSW_periodic_inequality} with the sequence $g(\latticepoint)$ taken to be the Kloosterman sums $K(\modulus, \radius^2;\latticepoint)$. We have the following estimates for the Kloosterman and Gauss sums. \begin{Gauss}[(12.5) on p. 151 of \cite{Grosswald}] For all $\latticepoint \in \lattice$, \begin{equation} \absolutevalueof{\twistedGausssum{\latticepoint}} \leq 2^{\dimension/2}\modulus^{-\dimension/2} . \end{equation} \end{Gauss} Applying the triangle inequality, we immediately obtain the Gauss bound for Kloosterman sums: \begin{equation} \absolutevalueof{K(\modulus,N;\latticepoint)} \leq 2^{\dimension/2} \modulus^{1-\dimension/2} . \end{equation} Kloosterman improved on the Gauss bound for Kloosterman sums by making use of the oscillation between the Gauss sums within the Kloosterman sums, and consequently, he extended the Hardy--Littlewood circle method for representations by diagonal quadratic forms in 5 or more variables down to 4 variables. Similarly here, the Gauss bound is insufficient for our purposes and we need to make use of cancellation between the Gauss sums. The first type of bound to appear for this is due to Kloosterman in \cite{Kloosterman}.
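For the record (the arithmetic here is elementary), the triangle-inequality step above is
\[
\absolutevalueof{K(\modulus, N; \latticepoint)} \leq \sum_{\unit \in \unitsmod{\modulus}} \absolutevalueof{\twistedGausssum{\latticepoint}} \leq \totientfunction{\modulus} \cdot 2^{\dimension/2} \modulus^{-\dimension/2} \leq 2^{\dimension/2} \modulus^{1-\dimension/2} ,
\]
using the Gauss bound for each summand and $\totientfunction{\modulus} \leq \modulus$.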
The best possible estimate of this sort is \emph{Weil's bound}, which essentially obtains square-root cancellation in the average over $\unit \in \unitsmod{\modulus}$. \begin{Weil}[(1.13) of \cite{Magyar_discrepancy}] For each modulus $\modulus \in \N$, write $\modulus = \modulus_{odd} \cdot \modulus_{even}$ where $\modulus_{odd}$ is odd while $\modulus_{even}$ is the precise power of 2 that divides $\modulus$. For all $\epsilon>0$, we have \begin{equation}\label{eq:Weil_bound} \absolutevalueof{\twistedKloostermansum{\latticepoint}} \lesssim_{\epsilon} \modulus^{-\frac{\dimension-1}{2}+\epsilon} \gcdof{\modulus_{odd}}{\largenum}^{1/2} \modulus_{even}^{1/2} \end{equation} where the implicit constants are independent of $\modulus$ and uniform in $\latticepoint \in \Z^\dimension$. \end{Weil} \begin{rem} We note that our Kloosterman sums $K(\modulus, N; \latticepoint)$ enjoy the following important multiplicativity property: if $\gcdof{\modulus_1}{\modulus_2}=1$, then for any $N \in \N$ and $\latticepoint \in \Z^\dimension$, \begin{equation}\label{eq:Kloosterman_multiplicativity} K(\modulus_1 \modulus_2, N; \latticepoint) = K(\modulus_1, N; \latticepoint) K(\modulus_2, N; \latticepoint) . \end{equation} For a proof see Lemma 5.1 in \cite{Davenport}. \end{rem} \section{The main term}\label{section:main_term} Our starting point is The Approximation Formula; we have \[ \latticeFT{\arithmeticspheremeasure{\radius}}(\toruspoint) = \sum_{\modulus=1}^{\radius} \latticeFT{\HLop^{\modulus}}(\toruspoint) + \arithmeticFT{\error}(\toruspoint) \] where $\latticeFT{\HLop^{\modulus}}(\toruspoint)$ is the multiplier \[ \sum_{\latticepoint \in \lattice} K(\modulus,\radius^2;\latticepoint) \Psi(\modulus \toruspoint - \latticepoint) \contFT{d\spheremeasure_\radius}(\toruspoint - \latticepoint/\modulus) . \] Let $\HLop^{\modulus}$ be the convolution operator with multiplier $\latticeFT{\HLop^{\modulus}}$.
Then letting $\HLop := \sum_{1 \leq \modulus \leq \radius} \HLop^{\modulus}$, we have $\arithmeticsphericalavgop{\radius} = \HLop + \error$ for each $\radius \in \acceptableradii$. The main goal of this section is to prove the following lemma regarding the main terms $\HLop$. We will discuss the error terms $\error$ in Section~\ref{section:error_term}. \begin{lemma}\label{lemma:main_term} If \(\radii \subset \acceptableradii\) is a lacunary subsequence of radii satisfying \eqref{good_primes_are_units}, \eqref{good_primes_estimate} of {\bf G} and \eqref{bad_primes_estimate} of {\bf B} for some \(s \in [0,1]\), then for $\dimension \geq 5$, \begin{equation}\label{eq:main_term_bound} \lpnorm{p}{\sup_{\radius \in \radii} \absolutevalueof{\HLop \fxn}} \lesssim \lpnorm{p}{\fxn} \end{equation} if \( \frac{\dimension}{\dimension-(1+s)} \leq p \leq 2 \) and simultaneously \( \frac{\dimension-1}{\dimension-2} < p \leq 2 . \) Furthermore, if $\dimension = 4$ and 2 is a good prime (\(2 \in \goodprimes\)), then \eqref{eq:main_term_bound} is true for the same range of \(p\). \end{lemma} Before proving Lemma~\ref{lemma:main_term} we orient ourselves with a few propositions. All implicit constants are allowed to depend on the dimension $\dimension$ and $p$. To start, we have the triangle inequality for any subsequence $\radii \subseteq \acceptableradii$, \begin{equation}\label{eq:singular_series_bound} \ellpoperatornorm{\sup_{\radius \in \radii} \absolutevalueof{\HLop}} \leq \sum_{\modulus=1}^\infty \ellpoperatornorm{\sup_{\radius \in \radii} \absolutevalueof{\HLop^{\modulus}}} . \end{equation} We restrict our attention to an individual summand for the time being. We have the following bound from \cite{MSW_spherical}. 
\begin{prop}[Proposition~3.1 (a) in \cite{MSW_spherical}]\label{prop:Stein+Gauss_bound_for_multipliers} If $\frac{\dimension}{\dimension-1} < p \leq 2$, then \begin{equation*} \ellpoperatornorm{\sup_{\radius \in \acceptableradii} \absolutevalueof{\HLop^{\modulus}}} \lesssim \modulus^{1-\frac{\dimension}{p'}} . \end{equation*} \end{prop} \noindent This bound applies to the full sequence of radii and hence to any subsequence, which we will choose to be $\radii$ in a moment. We briefly record that the range of $\ell^p(\Z^\dimension)$-spaces improves if one replaces Stein's theorem (for the spherical maximal function) with the Calder\'on and Coifman--Weiss theorem (for the lacunary spherical maximal function) for any lacunary subsequence of $\acceptableradii$ in the proof of Proposition~\ref{prop:Stein+Gauss_bound_for_multipliers}. See Proposition~3.1 (a) in \cite{MSW_spherical} for more details. \begin{prop}\label{prop:Gauss_bound_for_multipliers} If $\radii$ is a lacunary subsequence of $\acceptableradii$ and $1 < p \leq 2$, then \begin{equation}\label{eq:CCW+Gauss_bound_for_multipliers} \ellpoperatornorm{\sup_{\radius \in \radii} \absolutevalueof{\HLop^{\modulus}}} \lesssim \; \modulus^{1-\frac{\dimension}{p'}} . \end{equation} \end{prop} In \cite{MSW_spherical} we learned that we can factor $\HLop^{\modulus} = S^{\modulus}_\radius \circ T^{\modulus}_\radius = T^{\modulus}_\radius \circ S^{\modulus}_\radius$ into two commuting multipliers $S^{\modulus}_\radius$ and $T^{\modulus}_\radius$, effectively separating the arithmetic and analytic aspects of $\HLop^{\modulus}$, by using a smooth function $\Psi'$ such that $\charfxn{\setof{[-1/4,1/4]^\dimension}} \leq \Psi' \leq \charfxn{\setof{[-1/2,1/2]^\dimension}}$ on $\T^\dimension$, so that $\Psi \cdot \Psi' = \Psi$.
For $\radius \in \acceptableradii$ and $\modulus \in \N$, we have the \emph{Kloosterman multipliers} \begin{equation} \latticeFT{S^{\modulus}_\radius}(\toruspoint) := \sum_{\latticepoint \in \lattice} K(\modulus,\radius^2;\latticepoint) \Psi(\modulus \toruspoint - \latticepoint) \end{equation} and the localized spherical averaging multipliers \begin{equation} \latticeFT{T^{\modulus}_\radius}(\toruspoint) := \sum_{\latticepoint \in \lattice} \Psi'(\modulus \toruspoint - \latticepoint) \contFT{d\spheremeasure_{\radius}}(\toruspoint - \latticepoint/\modulus) . \end{equation} In order to improve on the Magyar--Stein--Wainger range of $\ell^p(\Z^\dimension)$-spaces for $\sup_{\radius \in \radii} \absolutevalueof{\HLop}$,\footnote{The sharp range of $\ell^p(\Z^\dimension)$-spaces is $p > \frac{\dimension}{\dimension-2}$ when $\radii$ is $\acceptableradii$, which results from summing \eqref{eq:CCW+Gauss_bound_for_multipliers} over $\modulus \in \N$ in \eqref{eq:singular_series_bound}.} we need to beat the exponent ${1-\frac{\dimension}{p'}}$ of the modulus $\modulus$ in \eqref{eq:CCW+Gauss_bound_for_multipliers}. Using the Weil bound, we do so for an individual convolution operator $\HLop^\modulus$. \begin{prop}[Weil bound for Kloosterman multipliers]\label{prop:Weil_bound_for_a_multiplier} If $1 \leq p \leq 2$ and $\modulus$ is odd, then for each $\radius \in \acceptableradii$ and for all $\epsilon>0$, \begin{equation}\label{eq:Weil_bound_for_a_multiplier} \ellpoperatornorm{\HLop^{\modulus}} \lesssim_\epsilon \totientfunction{\modulus}^{\frac{2}{p}-1} \cdot \modulus^{-\frac{\dimension-1}{p'} + \epsilon} \gcdof{\modulus}{\radius^\degreetwo}^{\frac{1}{p'}} . \end{equation} \end{prop} \begin{proof} On $\ell^2(\Z^\dimension)$, we apply the Weil bound \eqref{eq:Weil_bound} to the Kloosterman sums in $\latticeFT{S^\modulus_\radius}$.
Meanwhile on $\ell^1(\Z^\dimension)$, if $K^{\modulus}_\radius$ denotes the kernel of the multiplier $\latticeFT{S^\modulus_\radius}$, then $K^{\modulus}_\radius(b) = \sum_{\unit \in \unitsmod{\modulus}} \eof{\frac{\unit [\radius^2-b^2]}{\modulus}}$ for $b \in \inparentheses{\Zmod{\modulus}}^\dimension$, for which we have the trivial bound $\totientfunction{\modulus}$. Then \eqref{eq:MSW_periodic_inequality} of Lemma~\ref{lemma:MSW_periodic_inequality} yields the bound. \end{proof} However, for a fixed modulus $\modulus = \modulus_1 \modulus_2 \in \N$ with $\gcdof{\modulus_1}{\modulus_2}=1$, we can factor $S_\radius^\modulus$ into two pieces. Writing $\Psi_{\modulus}(\toruspoint) := \Psi(\modulus \toruspoint)$ and $\Psi_{\modulus}'(\toruspoint) := \Psi'(\modulus \toruspoint)$, the Chinese Remainder Theorem and the multiplicativity of Kloosterman sums \eqref{eq:Kloosterman_multiplicativity} yield \begin{equation}\label{eq:decomposition_of_HLop} \HLop^{\modulus} = T^{\modulus}_\radius \circ U^{1, \modulus}_\radius \circ U^{2, \modulus}_\radius \end{equation} where the operators $U^{1, \modulus}_\radius$ and $U^{2, \modulus}_\radius$ are defined by the multipliers \begin{align*} & \latticeFT{U^{1, \modulus}_\radius}(\toruspoint) := \sum_{\latticepoint \in \lattice} K(\modulus_1,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}'\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} \\ & \latticeFT{U^{2, \modulus}_\radius}(\toruspoint) := \sum_{\latticepoint \in \lattice} K(\modulus_2,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} \end{align*} since \begin{align*} \latticeFT{S^{\modulus}_\radius}(\toruspoint) & = \sum_{\latticepoint \in \lattice} K(\modulus_1 \modulus_2,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} \\ & = \sum_{\latticepoint \in \lattice} K(\modulus_1,\radius^2;\latticepoint) K(\modulus_2,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} \Psi_{\modulus_1 \modulus_2}'\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} \\ & = \inparentheses{\sum_{\latticepoint \in \lattice} K(\modulus_1,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}'\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} } \inparentheses{\sum_{\latticepoint \in \lattice} K(\modulus_2,\radius^2;\latticepoint) \Psi_{\modulus_1 \modulus_2}\inparentheses{\toruspoint - \frac{\latticepoint}{\modulus_1 \modulus_2}} } . \end{align*} Note that $U^{1, \modulus}_{\radius}$ is $\modulus_1$-periodic in $\radius^\degreetwo$ and $U^{2, \modulus}_{\radius}$ is $\modulus_2$-periodic in $\radius^\degreetwo$, while both $K(\modulus_1,\radius^2;\latticepoint)$ and $K(\modulus_2,\radius^2;\latticepoint)$ are $\modulus_1 \modulus_2$-periodic in $\latticepoint \in \lattice$. Using our refined decomposition \eqref{eq:decomposition_of_HLop}, we now come to the main proposition that enables us to prove Lemma~\ref{lemma:main_term}. \begin{prop}\label{prop:counting_bound_for_maximal_multipliers} Fix $\modulus \in \N$ such that $\modulus = \modulus_1 \modulus_2$ with $\gcdof{\modulus_1}{\modulus_2}=1$ and $\radii$ a lacunary subsequence of $\acceptableradii$. Let $\radii_{i (\modulus_1)}$ denote the set of radii $\setof{\radius \in \radii : \radius^\degreetwo \equiv i \mod \modulus_1}$. If $1 < p \leq 2$, then \begin{equation}\label{eq:counting_bound_for_maximal_multipliers} \ellpoperatornorm{\sup_{\radius \in \radii} |\HLop^{\modulus}|} \lesssim \modulus_2^{1-\dimension/p'} \cdot \# \setof{i \in \Zmod{\modulus_1} : \radii_{i (\modulus_1)} \not= \emptyset} \cdot \sup_{i \in \Zmod{\modulus_1}} \inbraces{\ellpoperatornorm{U^{1, \modulus}_{\radius_i}}} \end{equation} where $\radius_i$ is a chosen representative of $\radii_{i(\modulus_1)}$ for each $i \in \Zmod{\modulus_1}$.
\end{prop} It will be important in our proof of Lemma~\ref{lemma:main_term} that $\# \setof{i \in \Zmod{\modulus} : \radii_{i (\modulus)} \not= \emptyset}$ is small for \emph{most} moduli $\modulus$ and that we can apply Proposition~\ref{prop:Weil_bound_for_a_multiplier}, the Weil bound for Kloosterman multipliers, to the operators $U^{1, \modulus}_{\radius_i}$. \begin{proof}[Proof of Proposition~\ref{prop:counting_bound_for_maximal_multipliers}] Let $\modulus = \modulus_1 \modulus_2$ and let $\radii \subset \acceptableradii$ be a lacunary subsequence. The union bound applied to $\radii = \cup_{i \in \Zmod{\modulus_1}} \radii_{i (\modulus_1)}$ implies \begin{equation}\label{eq:good_prime_projected_bound} \ellpoperatornorm{\sup_{\radius \in \radii} \absolutevalueof{\HLop^{\modulus}}} \leq \sum_{i=1}^{\modulus_1} \ellpoperatornorm{\sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{\HLop^{\modulus}}} , \end{equation} with the understanding that if $\radii_{i (\modulus_1)}$ is empty, then $\ellpoperatornorm{\sup_{\radius \in \radii_{i (\modulus_1)}}{\absolutevalueof{\HLop^{\modulus}}}}$ is 0. Therefore, \eqref{eq:counting_bound_for_maximal_multipliers} will follow from proving \begin{equation} \ellpoperatornorm{\sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{\HLop^{\modulus}}} \lesssim \modulus_2^{1-\dimension/p'} \ellpoperatornorm{ {U^{1, \modulus}_{\radius_i}}} . \end{equation} Our decomposition \eqref{eq:decomposition_of_HLop} implies that \[ \sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{\HLop^{\modulus}\fxn} = \sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{T^\modulus_\radius S^\modulus_{\radius} \fxn} = \sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{T^\modulus_\radius U^{2,\modulus}_{\radius} U^{1,\modulus}_{\radius} \fxn} . \] If $\radius_1, \radius_2 \in \radii_{i (\modulus_1)}$, then $U^{1,\modulus}_{\radius_1} = U^{1,\modulus}_{\radius_2}$.
Therefore, if $\radius_i$ is a chosen representative radius in $\radii_{i (\modulus_1)}$, then \[ \sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{\HLop^{\modulus}\fxn} = \sup_{\radius \in \radii_{i (\modulus_1)}} \absolutevalueof{ {T^\modulus_\radius U^{2,\modulus}_{\radius}} \inparentheses{U^{1,\modulus}_{\radius_i} \fxn} } . \] The operator $T^\modulus_\radius U^{2,\modulus}_{\radius}$ is very similar to $\HLop^{\modulus_2}$, and in fact \eqref{eq:CCW+Gauss_bound_for_multipliers} holds with $\HLop^{\modulus_2}$ replaced by $T^\modulus_\radius U^{2,\modulus}_{\radius}$ since $U^{2,\modulus}_{\radius}$ is $\modulus_2$-periodic in $\radius^2$ and $K(\modulus_2,\radius^2;\latticepoint)$ is $\modulus_1 \modulus_2$-periodic in $\latticepoint \in \lattice$. (Likewise, $U^{1,\modulus}_{\radius_i}$ is very similar to $\HLop^{\modulus_1}$ and the Weil bound \eqref{eq:Weil_bound_for_a_multiplier} applies with $\HLop^{\modulus_1}$ replaced by $U^{1,\modulus}_{\radius_i}$.) The Magyar--Stein--Wainger transference principle combined with the Calder\'on and Coifman--Weiss theorem and \eqref{eq:CCW+Gauss_bound_for_multipliers} imply \eqref{eq:counting_bound_for_maximal_multipliers} since $\radii_{i (\modulus_1)}$ is also a (possibly finite) lacunary sequence. \end{proof} \subsection{Proof of Lemma~\ref{lemma:main_term}} Recall that $\theprimes$ denotes the set of primes in $\N$. In this section we fix our collection of radii to be a lacunary sequence \(\radii \subset \acceptableradii\) so that the set of primes \(\theprimes = \badprimes \cup \goodprimes \) splits into a set of bad primes \(\badprimes\) satisfying \eqref{bad_primes_estimate} and a set of good primes \(\goodprimes\) satisfying \eqref{good_primes_are_units} and \eqref{good_primes_estimate}.
If $\oddprime$ is a good prime, then lifting \eqref{good_primes_estimate} to $\Zmod{\oddprime^\primepower}$ for $\primepower \in \N$ (each residue class modulo $\oddprime$ has exactly $\oddprime^{\primepower-1}$ lifts to $\Zmod{\oddprime^\primepower}$) implies \[ \# \setof{i \in \Zmod{\oddprime^\primepower} : \radii_{i (\oddprime^\primepower)} \not= \emptyset} \lesssim_{\epsilon} \oddprime^{\primepower-1+\epsilon} \] for any $\epsilon>0$. Using the Chinese Remainder Theorem, we extend this to moduli $\modulus$ composed only of good primes; that is, if $\oddprime | \modulus$, then $\oddprime \in \goodprimes$. Let $\orderofprime{\modulus}$ denote the precise power of the prime $\oddprime$ dividing $\modulus$. If $\modulus$ is composed only of good primes, then \begin{equation}\label{eq:Euclidean_radii_good_prime_set_size} \# \setof{i \in \Zmod{\modulus} : \radii_{i (\modulus)} \not= \emptyset} \leq \prod_{\oddprime | \modulus} \# \setof{\radius^2 \mod \oddprime^{\orderofprime{\modulus}} : \radius \in \radii} \lesssim_\epsilon \prod_{\oddprime | \modulus} \oddprime^{\orderofprime{\modulus}-1+\epsilon} \lesssim_{\epsilon} \modulus^{\epsilon} \cdot \totientfunction{\modulus} \end{equation} for any $\epsilon>0$. For a modulus $\modulus$, write $\modulus = \goodmodulus \cdot \badmodulus$ where $\goodmodulus$ is composed only of good primes while $\badmodulus$ is composed only of bad primes (that is, $\oddprime | \badmodulus$ implies $\oddprime \in \badprimes$). In the case that a prime is both good and bad, we regard it as a bad prime in the following estimate.
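The last inequality in \eqref{eq:Euclidean_radii_good_prime_set_size} uses only the factorization of Euler's totient function; we record the elementary computation for convenience:
\[
\prod_{\oddprime | \modulus} \oddprime^{\orderofprime{\modulus}-1+\epsilon} \leq \modulus^{\epsilon} \prod_{\oddprime | \modulus} \oddprime^{\orderofprime{\modulus}-1} \leq \modulus^{\epsilon} \prod_{\oddprime | \modulus} \oddprime^{\orderofprime{\modulus}-1} (\oddprime - 1) = \modulus^{\epsilon} \cdot \totientfunction{\modulus} .
\]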
Now \eqref{eq:counting_bound_for_maximal_multipliers} of Proposition~\ref{prop:counting_bound_for_maximal_multipliers}, the Weil bound for Kloosterman multipliers \eqref{eq:Weil_bound_for_a_multiplier} and \eqref{eq:Euclidean_radii_good_prime_set_size} imply that \begin{align*} \ellpoperatornorm{\sup_{\radius \in \radii} \absolutevalueof{\HLop}} & \lesssim \sum_{\modulus \in \N} \badmodulus^{1-\dimension/p'} \cdot \# \setof{i \in \Zmod{\modulus} : \radii_{i (\modulus)} \not= \emptyset} \cdot \sup_{i \in \Zmod{\modulus}} \inbraces{\ellpoperatornorm{U^{1, \modulus}_{\radius_i}}} \\ & \lesssim \sum_{\modulus \in \N} \inparentheses{ \totientfunction{\goodmodulus}^{{2}/{p}} \cdot {\goodmodulus}^{\epsilon-\frac{\dimension-1}{p'}} \cdot \badmodulus^{1-\frac{\dimension}{p'}} } \\ & = \prod_{\oddprime \in \goodprimes} \inparentheses{1 + \oddprime^{1-\frac{2}{p}+\epsilon} \sum_{\primepower \in \N} \oddprime^{\primepower (\frac{2}{p}-1-\frac{\dimension-1}{p'})}} \cdot \prod_{\oddprime \in \badprimes} \inparentheses{1 + \sum_{k \in \N} \oddprime^{k(1-\frac{\dimension}{p'})}} \\ & \lesssim_p \inbrackets{\prod_{\oddprime \in \goodprimes} \inparentheses{1 + \oddprime^{\epsilon-\frac{\dimension-1}{p'}}} } \cdot \inbrackets{\prod_{\oddprime \in \badprimes} \inparentheses{1 + \oddprime^{1-\frac{\dimension}{p'}}} } \\ & < \zeta \inparentheses{\frac{\dimension-1}{p'}-\epsilon} \cdot \prod_{\oddprime \in \badprimes} \inparentheses{1 + \oddprime^{1-\frac{\dimension}{p'}}} \end{align*} for all sufficiently small $\epsilon>0$. The third inequality is true provided that $\frac{2}{p}-1-\frac{\dimension-1}{p'} < 0$ and $1-\frac{\dimension-1}{p'} < 0$; this is equivalent to $p > \frac{\dimension-1}{\dimension-2}$. The zeta function converges for small enough $\epsilon>0$ if and only if $\frac{\dimension-1}{p'} > 1$. Unravelling this condition yields that we again require $p > \frac{\dimension-1}{\dimension-2}$.
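Explicitly, the unravelling is the elementary chain of equivalences
\[
\frac{\dimension-1}{p'} > 1 \iff \frac{1}{p'} > \frac{1}{\dimension-1} \iff \frac{1}{p} < 1 - \frac{1}{\dimension-1} = \frac{\dimension-2}{\dimension-1} \iff p > \frac{\dimension-1}{\dimension-2} .
\]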
The second factor in the final inequality is bounded precisely when \( \sum_{\oddprime \in \badprimes} \oddprime^{1-\frac{\dimension}{p'}} < \infty . \) By assumption \eqref{bad_primes_estimate}, \( \sum_{\oddprime \in \badprimes} \oddprime^{-s} < \infty \) for some \( s \in (0,1] \). Taking \( 1-\dimension/p' \leq -s \), we require \( p \geq \frac{\dimension}{\dimension-(1+s)} . \) \section{The error term}\label{section:error_term} In this section we handle the error term. In particular, we show that over an arbitrary lacunary subsequence, we can bound the error term on $\ell^p$ for $\frac{\dimension-1}{\dimension-2} < p \leq 2$. Before doing so, we prove a weaker bound that does not make use of cancellation in averages of Ramanujan sums, but is simpler, and suffices for Corollary~\ref{cor:Euclid}. \subsection{Preliminary bound for the error term} In this subsection, we bound the error term using the improved bound \eqref{eq:Kloosterman_error_bound} in the Kloosterman refinement of our operators and a simple bound on $\ell^1(\Z^\dimension)$ for the operators $C^\modulus_\radius$. \begin{lemma}\label{lemma:weak_error_term_bound} Let $\dimension \geq 4$. Suppose that $\radii$ is a lacunary sequence. Then for $\frac{\dimension+1}{\dimension-1} < p \leq 2$, \begin{equation}\label{eq:weak_error_term_bound} \lpnorm{p}{\sup_{\radius \in \radii} \absolutevalueof{\error \fxn}} \lesssim_p \lpnorm{p}{\fxn} . \end{equation} \end{lemma} The proof for $\ell^2$ is standard: bound the sup by a square function and apply the Kloosterman refinement bound \eqref{eq:Kloosterman_error_bound}. To obtain our range of $p$, we will need a suitable bound on $\ell^1(\Z^\dimension)$. For this we have the following proposition.
\begin{prop}\label{prop:trivial_ell1_bound_for_error_term} For any modulus $\modulus \in \N$, \begin{equation}\label{eq:trivial_ell1_bound} \lpnorm{1,\infty}{\Kloostermanoperator \fxn} \lesssim \frac{\radius \cdot \varphi(\modulus)}{\modulus} \lpnorm{1}{\fxn} . \end{equation} \end{prop} Here and throughout, $\varphi$ denotes Euler's totient function. With this bound, we can prove Lemma~\ref{lemma:weak_error_term_bound}. \begin{proof}[Proof of Lemma~\ref{lemma:weak_error_term_bound}] For $\ell^2(\Z^\dimension)$ we have $\lpnorm{\infty}{\arithmeticFT{\error}(\xi)} \lesssim \radius^{-\delta}$ by \eqref{eq:Kloosterman_error_bound} for all $\delta < \frac{\dimension-3}{2}$. On $\ell^1(\Z^\dimension)$ Proposition~\ref{prop:trivial_ell1_bound_for_error_term} implies that $\lpnorm{1,\infty}{C^\modulus_\radius \fxn} \lesssim \radius \lpnorm{1}{\fxn}$ so that \begin{equation} \lpnorm{1, \infty}{\sum_{\modulus=1}^\radius C^\modulus_\radius \fxn} \lesssim \radius^2 \lpnorm{1}{\fxn} \end{equation} while $\lpnorm{1}{\avgop} \lesssim 1$ for each $\radius$ so that $\lpnorm{1,\infty}{\error \fxn} \lesssim \radius^2 \lpnorm{1}{\fxn}$. By interpolation, for $1 < p \leq 2$, \[ \lpnorm{p}{\error \fxn} \lesssim \radius^{2(\frac{2}{p}-1)} \cdot \radius^{-\delta(2-\frac{2}{p})} = \radius^{\frac{2}{p}(2+\delta)-2(1+\delta)} \lpnorm{p}{\fxn} . \] The exponent $\frac{2}{p}(2+\delta)-2(1+\delta)$ is negative if and only if $p > \frac{2+\delta}{1+\delta}$. Taking $\delta$ arbitrarily close to $\frac{\dimension-3}{2}$ gives the range $p > \frac{\dimension+1}{\dimension-1}$. Summing over a lacunary set in this range of $p$ yields \eqref{eq:weak_error_term_bound}. \end{proof} \begin{rem} The best known range for $\delta$ is $\delta < \frac{\dimension-3}{2}$, by Magyar's version of Heath-Brown's Kloosterman refinement in \cite{Magyar_discrepancy}. Due to the existence of cusp forms, this is the best one can expect. \end{rem} We are left to prove Proposition~\ref{prop:trivial_ell1_bound_for_error_term}.
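As an aside, the exponent arithmetic in the interpolation step above can be checked mechanically. A small illustrative script (not part of the argument), working with exact rationals:

```python
from fractions import Fraction as F

def exponent(p, delta):
    # interpolation exponent  2(2/p - 1) - delta(2 - 2/p)  =  (2/p)(2 + delta) - 2(1 + delta)
    return 2*(F(2)/p - 1) - delta*(2 - F(2)/p)

def critical_p(delta):
    # claimed critical index p = (2 + delta)/(1 + delta)
    return (2 + delta)/(1 + delta)

for delta in (F(1, 2), F(3, 2), F(5, 2)):
    p0 = critical_p(delta)
    # the exponent vanishes at the critical index and is negative above it
    assert exponent(p0, delta) == 0
    assert exponent(p0 + F(1, 10), delta) < 0
    assert exponent(F(2), delta) < 0

# letting delta -> (d - 3)/2 turns the threshold into (d + 1)/(d - 1)
for d in (4, 5, 6, 7):
    assert critical_p(F(d - 3, 2)) == F(d + 1, d - 1)
print("interpolation exponent checks pass")
```

The computation confirms both closed forms of the exponent and the threshold $\frac{\dimension+1}{\dimension-1}$ obtained from $\delta \to \frac{\dimension-3}{2}$.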
We use the structure of the kernel to prove a weak-type bound. \begin{proof}[Proof of Proposition~\ref{prop:trivial_ell1_bound_for_error_term}] For $t>0$, let $\dilateby{t}$ be the operator $\dilateby{t}\fxn(x) = \fxn(tx)$. Since \[ \latticeFT{C^\modulus_\radius}(\xi) = \sum_{\latticepoint \in \lattice} K(\modulus, \radius^2, \latticepoint) \dilateby{\modulus} \Psi(\xi - \latticepoint/\modulus) \contFT{d\spheremeasure_\radius}(\modulus \xi - \latticepoint) , \] one can calculate the kernel $\Kloostermankernel$ of the multiplier $\latticeFT{C^\modulus_\radius}$: for $x \in \R^\dimension$, \begin{equation}\label{eq:kernel_formula} \Kloostermankernel(x) = \sum_{\unit \in \unitsmod{\modulus}} \eof{\frac{\unit (\radius^\degreetwo-\absolutevalueof{x}^{\degreetwo})}{\modulus}} \cdot \radius^{-\dimension} \contFT{\dilateby{\modulus/\radius} \Psi} \convolvedwith {d\spheremeasure}(x/\radius) . \end{equation} A standard argument -- see \cite{Ionescu_spherical} -- shows \[ \contFT{\dilateby{\modulus/\radius} \Psi} \convolvedwith {d\spheremeasure}(x) \lesssim (\modulus/\radius)^{-1} (1+\absolutevalueof{x})^{-2 \dimension} . \] Then \[ \absolutevalueof{\Kloostermankernel(x)} \lesssim \radius^{1-\dimension} \inparentheses{1+\absolutevalueof{x/\radius}}^{-2 \dimension} . \] Since $\radius^{-\dimension} \inparentheses{1+\absolutevalueof{x/\radius}}^{-2 \dimension}$ is an approximation to the identity, \eqref{eq:trivial_ell1_bound} follows by the Magyar--Stein--Wainger transference principle. \end{proof} \subsection{The Ramanujan bound for the error term} In this section we improve the bound \eqref{eq:trivial_ell1_bound} for the error term $\error$. The following lemma concludes the proof of Theorem~\ref{thm:variants_of_Euclid}. \begin{lemma}\label{lemma:error_term_lemma} Let \(\dimension \geq 4\).
If \(\radii\) forms a lacunary sequence, then for \(\frac{\dimension-1}{\dimension-2} < p \leq 2\), \begin{equation} \lpnorm{p}{\sup_{\radius \in \radii} \absolutevalueof{\error \fxn}} \lesssim_p \lpnorm{p}{\fxn} . \end{equation} \end{lemma} The strategy is the same as in Lemma~\ref{lemma:weak_error_term_bound}, but we improve the bound on $\ell^1(\Z^\dimension)$ to the following. \begin{prop}\label{prop:improved_ell1_bound_for_maximal_operators} For $\radius \in \acceptableradii$ and all \(\epsilon>0\), we have \begin{equation}\label{eq:improved_ell1_bound} \lpnorm{1,\infty}{\sum_{\modulus = 1}^{\radius} \Kloostermanoperator \fxn} \lesssim_{\epsilon} \radius^{1+\epsilon} \lpnorm{1}{\fxn} . \end{equation} \end{prop} The sums \begin{equation} c_\modulus(N) := \sum_{\unit \in \unitsmod{\modulus}} \eof{\frac{\unit N}{\modulus}} \end{equation} are known as \emph{Ramanujan sums} and clearly satisfy the bound $\absolutevalueof{c_\modulus(N)} \leq \varphi(\modulus)$ for all $N$. However, there is an improved bound on average -- see (3.44) on page 126 of \cite{Bourgain_lattice_restriction_1}: \begin{equation}\label{eq:average_bound_for_Ramanujan_sums} \sum_{Q \leq \modulus < 2Q} \absolutevalueof{c_\modulus(N)} = \sum_{Q \leq \modulus < 2Q} \absolutevalueof{\sum_{\unit \in \unitsmod{\modulus}} \eof{\frac{\unit N}{\modulus}}} \lesssim Q \cdot d(N, Q) \end{equation} where $d(N, Q)$ is the number of divisors of $N$ up to $Q$. Therefore the average of Ramanujan sums in \eqref{eq:average_bound_for_Ramanujan_sums} is $\lesssim_\epsilon Q \cdot N^{\epsilon}$ for all $\epsilon > 0$; that is, there is only a ``log-loss''. Using the improved average bound for Ramanujan sums, we improve \eqref{eq:trivial_ell1_bound} to \eqref{eq:improved_ell1_bound}. \begin{proof}[Proof of Proposition~\ref{prop:improved_ell1_bound_for_maximal_operators}] Again, for $t>0$, let $\dilateby{t}$ be the operator $\dilateby{t}\fxn(x) = \fxn(tx)$.
We rewrite \eqref{eq:kernel_formula} as \begin{equation} \Kloostermankernel(x) = c_\modulus(\radius^\degreetwo-\absolutevalueof{x}^{\degreetwo}) \cdot \radius^{-\dimension} \contFT{\dilateby{\modulus/\radius} \Psi} \convolvedwith {d\spheremeasure}(x/\radius) . \end{equation} By the Magyar--Stein--Wainger transference principle, \eqref{eq:improved_ell1_bound} will follow from proving the pointwise bound for all $x \in \R^\dimension$ and any $\epsilon > 0$, \begin{equation}\label{eq:pointwise_bound_for_kernels} \absolutevalueof{\sum_{\modulus = 1}^{\radius} \Kloostermankernel(x)} \lesssim_{\epsilon} \radius^{1+\epsilon-\dimension} \inparentheses{1+\absolutevalueof{x/\radius}}^{-2 \dimension} . \end{equation} From \begin{equation} \Kloostermankernel(x) = c_\modulus(\radius^\degreetwo-\absolutevalueof{x}^{\degreetwo}) \cdot \contFT{\dilateby{\modulus} \Psi} \convolvedwith {d\spheremeasure_\radius}(x) , \end{equation} we easily see \begin{equation} \sum_{\modulus = 1}^{\radius} \Kloostermankernel(x) = \inbrackets{\sum_{\modulus = 1}^{\radius} c_\modulus(\radius^\degreetwo-\absolutevalueof{x}^{\degreetwo}) \cdot \contFT{\dilateby{\modulus} \Psi}} \convolvedwith {d\spheremeasure_\radius}(x) . \end{equation} Note that $\dilateby{\modulus} \Psi$ is supported in $[-\frac{1}{4\modulus}, \frac{1}{4\modulus}]^\dimension$ for each $1 \leq \modulus \leq \radius$. Using an appropriate partition of unity for each $\dilateby{\modulus} \Psi$, we are able to sum over $\modulus \leq \radius$ and use \eqref{eq:average_bound_for_Ramanujan_sums} to obtain \eqref{eq:pointwise_bound_for_kernels}. \end{proof} \section{Proof of Theorem~\ref{thm:ENFANT}}\label{section:ENFANT_example} Fix a real number $\sequencegrowth > 1$. In this section we prove Theorem~\ref{thm:ENFANT} for the collection of radii \[ \radii := \setof{\radius_j \in \R^+ : \radius_j^2 = 1 + \prod_{i \leq h(j)} \oddprime_i} \] where \(h(j) := 2^{j^\sequencegrowth}\).
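Before proceeding, the arithmetic structure of these radii can be observed numerically: every prime factor of $\radius_j^2$ is a ``late'' prime (its index exceeds $h(j)$), while modulo every small enough prime the sequence occupies only the residue class $1$. A minimal illustrative sketch, taking $h(j)=2^{j}$ for tractability (the theorem requires the faster growth $h(j)=2^{j^{\sequencegrowth}}$ with $\sequencegrowth>1$, needed only for the summability estimate):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True]*(n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False]*len(sieve[i*i::i])
    return [i for i, b in enumerate(sieve) if b]

PRIMES = primes_up_to(10**6)

def h(j):
    return 2**j  # illustrative growth; the theorem uses 2**(j**gamma), gamma > 1

def r_squared(j):
    # r_j^2 = 1 + product of the first h(j) primes
    prod = 1
    for q in PRIMES[:h(j)]:
        prod *= q
    return 1 + prod

def small_prime_factors(n):
    out = []
    for q in PRIMES:
        while n % q == 0:
            out.append(q)
            n //= q
    return out

# 1) every prime factor of r_j^2 found below 10^6 exceeds the h(j)-th prime
for j in (1, 2, 3):
    assert all(q > PRIMES[h(j) - 1] for q in small_prime_factors(r_squared(j)))

# 2) if a prime q has index <= h(j), then r_j^2 = 1 (mod q): modulo each small
#    prime, the sequence of radii-squared meets only the residue class 1
for j in (1, 2, 3, 4):
    for q in PRIMES[:h(j)]:
        assert r_squared(j) % q == 1

print("r_1^2, r_2^2, r_3^2 =", r_squared(1), r_squared(2), r_squared(3))
```

The first values are the Euclid numbers $7$, $211$, $9699691$; the second loop is the congruence $\radius_j^2 \equiv 1 \bmod \oddprime$ exploited in the good-prime analysis below.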
The proof for the remaining sequences in Theorem~\ref{thm:ENFANT} is similar, but notationally cumbersome. Let $\theprimes$ denote the set of primes in $\N$. We split the primes into bad primes and good primes as follows. Let the \emph{bad primes} $\badprimes$ be the set of primes dividing $\radius_j^2$ for some radius $\radius_j \in \radii$, together with the prime 2. Let the \emph{good primes} $\goodprimes := \theprimes \setminus \badprimes$ be the remaining primes. We enumerate the primes so that \( \oddprime_n \) denotes the \(n^{th}\) prime. The Prime Number Theorem says that \(\oddprime_n \sim n \log n\) as \(n \to \infty\). If $\oddprime$ is a prime, then choose $J$ such that $\oddprime_{h(J)} \leq \oddprime < \oddprime_{h(J+1)}$. If $j > J$, then $\radius_j^2 \equiv 1 \mod \oddprime$, and \[ \# \setof{i \in \Zmod{\oddprime} : \radii_{i (\oddprime)} \not= \emptyset} \leq J+1 \lesssim J^\sequencegrowth \lesssim \log \oddprime \] where the last inequality follows from the Prime Number Theorem, with an implicit constant that is independent of the prime $\oddprime$: by the Prime Number Theorem, $\oddprime_{2^{J^\sequencegrowth}} \eqsim 2^{J^\sequencegrowth} \cdot \log {2^{J^\sequencegrowth}} \eqsim 2^{J^\sequencegrowth} \cdot {J^\sequencegrowth}$, which implies that $\log \oddprime_{2^{J^\sequencegrowth}} \eqsim {J^\sequencegrowth} + \sequencegrowth \log J \eqsim J^\sequencegrowth$, so that $J < J^\sequencegrowth \lesssim \log \oddprime$. These estimates hold for every prime; in particular, they hold for the good primes. An essential point is that there are few bad primes for our sequence; this is quantified by the following bound: \begin{equation}\label{eq:bad_primes_bound} \sum_{\oddprime \in \badprimes} \oddprime^{-1} \leq \sum_{j=1}^\infty h(j) \cdot \oddprime_{h(j)}^{-1} \lesssim \sum_{j=1}^\infty h(j) \inbrackets{h(j) \log h(j)}^{-1} = \sum_{j=1}^\infty \inbrackets{\log h(j)}^{-1} .
\end{equation} The first inequality holds since each prime dividing $\radius_j^2$ is at least of size $\oddprime_{h(j)}$ and there are at most $h(j)$ prime divisors, while the second inequality follows from the Prime Number Theorem, which says $\oddprime_n \eqsim n \log n$. Since $h(j) = 2^{j^\sequencegrowth}$ with $\sequencegrowth > 1$, the right-hand side equals $(\log 2)^{-1} \sum_{j=1}^\infty j^{-\sequencegrowth} < \infty$, so \eqref{eq:bad_primes_bound} converges. \section{Concluding remarks and open questions}\label{section:conclusion} \begin{question} Our estimate \eqref{eq:bad_primes_bound} for the Dirichlet series \( \sum_{\oddprime \in \badprimes} \oddprime^{-s} \) is rather crude. Improving this estimate would improve our range of $\ell^p(\Z^\dimension)$-spaces, potentially to $p > \frac{\dimension-1}{\dimension-2}$. The author is unaware of any investigations of our Dirichlet series in the literature. \emph{ Does \( \sum_{\oddprime \in \badprimes} \oddprime^{-s} \) converge for some \(s \in (0,1)\) for sequences related to Theorem~\ref{thm:ENFANT}? } \end{question} \begin{question} \emph{ Can we prove \eqref{eq:bad_primes_bound} when \(h(j)\) grows more slowly, such as \(h(j):=j\)? } \end{question} \begin{rem} In section~\ref{subsection:Ackbar}, we mentioned that J. Zienkiewicz showed that Conjecture~\ref{conjecture:lacunary} fails in general. More generally, one can show that if \eqref{good_primes_estimate} is violated for infinitely many primes, then \(\sequentialarithmeticsphericalmaxfxn\) is unbounded on \(\ell^p(\Z^\dimension)\) for \(p\) close to 1 and \(\dimension \geq 5\). We revise Conjecture~\ref{conjecture:lacunary} to take into account this obstruction. \begin{conjecture}\label{conjecture:adelic_lacunary} For $\dimension \geq 5$, if $\radii$ is a lacunary subsequence of $\acceptableradii$ such that \eqref{good_primes_estimate} holds for all but finitely many primes \(\oddprime\), then $\sequentialarithmeticsphericalmaxop: \ell^1(\Z^\dimension) \to \ell^{1,\infty}(\Z^\dimension)$. The same is true if \(\dimension = 4\) and 2 is a good prime.
\end{conjecture} \end{rem} \begin{question} There is an elegant characterization of the \(L^p(\R^\dimension)\)-boundedness of the continuous spherical maximal function over subsequences of \(\R^+\) in \cite{SWW_spherical}. \emph{Is there such a characterization for the discrete spherical averages?} Zienkiewicz's result shows that any such characterization must also account for arithmetic phenomena. \end{question} \bigskip \section*{Acknowledgements} The author would like to thank Lillian Pierce for discussions on the arithmetic lacunary spherical maximal function and for pointing out a critical mistake in a previous version of this paper, and his advisor, Elias Stein, for introducing him to the problem. The author would also like to thank Roger Heath-Brown for a discussion on the limitations of the circle method, Peter Sarnak for discussions regarding Kloosterman sums and Jim Wright for explaining aspects of the continuous lacunary spherical maximal function. Special thanks also go to Lutz Helfrich, James Maynard and Kaisa Matom\"aki, at the Hausdorff Center for Mathematics' ENFANT and ELEFANT conferences in July 2014, for pointing out the family of sequences used in Theorem~\ref{thm:ENFANT}. \bigskip \bibliographystyle{amsalpha}
\section{Introduction}\label{intro} Roughly speaking, the foundations of Quantum Field Theory dictate that quantum fields are the result of engaging well defined one particle states \cite{Wigner1} in interactions regulated, so to speak, by Lorentz covariance \cite{weinbergfeynman}. Besides, fundamental rules such as respect for the cluster decomposition principle \cite{wic} and (micro)causality, in the sense of Weinberg, form the theoretical scope upon which a quantum field arises \cite{weinberg1}. Concerning spin one half particles, and corresponding fields, the aforementioned formulation has gained the status of a (no-go) theorem. Even though it was never enunciated as such, the formulation is exhaustive enough for such an epithet. It asserts that a local and Lorentz invariant quantum field whose action on a vacuum state leads to spin one half particles is a usual Dirac field. There are two utterly relevant points in the whole formulation whose appreciation has proved quite pertinent: the fermionic dual structure and the role of the parity operator. The (necessary) theory for the definition of a given adjoint structure in fermionic theory can be found in Refs. \cite{aaca,mdobook}, and its departure from the usual Dirac case for mass dimension one fermions was explored to circumvent the Weinberg no-go theorem \cite{nogo}, while for the very same field the role of parity was further explored, evincing the one particle states of such a field as belonging to a specific non-standard Wigner class \cite{elkostates}. In this work we shall further explore the relation between quantum fermionic fields and the use of parity symmetry in their construction. More specifically, we relate the fermions satisfying the Dirac equation, those entering the no-go theorem, to a subset of a particular class of spinors appearing in a classical classification, according to its bilinear covariants, due to Lounesto \cite{lounestolivro}.
It is also shown that spinors not belonging to such a subset do not satisfy Dirac dynamics and lead to a non-local quantum field (at least when the field adjoint is the usual one). Besides, their quantum states are unusual in a precise sense that we make clear along the text. The Lounesto classification makes use of the Fierz-Pauli-Kofink identities and the inversion theorem \cite{tak} (allowing for expressing spinors in terms of their bilinear covariants) to categorize spinors into six disjoint classes (see Ref. \cite{out} for a short review). Three of these classes, with non-vanishing scalar ($\bar{\psi}\psi$) and/or pseudo-scalar ($\bar{\psi}\gamma_5\psi$) bilinears, are called regular spinors. We shall deal in this paper with these regular spinors. The other sector, composed of singular spinors, always has a null orthonormality relation, and the standard physical interpretation may be obscured. We show that, using a general spinor playing the role of expansion coefficient function of a fermionic quantum field, the imposition of Dirac dynamics (at the classical level) leads to a local quantum field within a theory respecting Lorentz symmetries, and the coefficients are automatically restricted to a subclass, say $L_2$, of Type-2 spinors, according to Lounesto. For regular spinors of Type-2 other than the spinors belonging to the $L_2$ subclass, and for spinors belonging to other types, locality is not directly ensured and Dirac dynamics is not fulfilled. This approach may be seen as a link between the Weinberg no-go theorem and the Lounesto classification, evincing that, even with a plethora of possibilities coming from the classical analysis, Dirac dynamics (or full Lorentz invariance) associated with the demand for locality restricts the possibilities of fermionic quantum fields (for which the dual is the Dirac one).
With respect to this fact, we comment on the new possibilities coming from the fermionic dual theory \cite{mdobook} and speculate on the classification of its possible one particle states as well. All the results of this work may be found in the next section, where we connect Type-2 spinors with a well behaved fermionic quantum field, interpreting and discussing the results. In a subsection we develop an explicit example of a non-local field and end by addressing some comments on the possibilities which can arise in the scope of the fermionic dual theory. In the final section we conclude. \section{Types and Phases} We start by defining the eigenstates of the helicity operator, $\vec{\sigma}\cdot\hat{p}$, which read \begin{equation}\label{operadorhelicidade} \vec{\sigma}\cdot\hat{p}\; \phi^{\pm}(k^{\mu}) = \pm \phi^{\pm}(k^{\mu}), \end{equation} where $\sigma$ stands for the Pauli matrices and the momentum unit vector reads $\hat{p}=(\sin(\theta)\cos(\phi), \sin(\theta)\sin(\phi), \cos(\theta))$. We shall use these states as the spinors composing the bispinors representation. In the rest-frame referential the positive and negative helicity components read \begin{eqnarray}\label{components} \phi^{+}(k^{\mu}) = \sqrt{m}\left(\begin{array}{c} \cos(\theta/2)e^{-i\phi/2} \\ \sin(\theta/2)e^{i\phi/2} \end{array}\right), \;\; \phi^{-}(k^{\mu}) = \sqrt{m}\left(\begin{array}{c} \sin(\theta/2)e^{-i\phi/2} \\ -\cos(\theta/2)e^{i\phi/2} \end{array}\right). \end{eqnarray} A careful remark presented in Refs. \cite{aaca,mdobook} elucidates the importance of additional relative phase factors in \eqref{components}. Such factors indeed play an important role in the study of the spinors' behavior under discrete symmetries and may also be used to ensure locality for the quantum fields defined upon those expansion coefficients.
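As a quick numerical sanity check (purely illustrative, not part of the argument), the eigenvalue relation \eqref{operadorhelicidade} for the components \eqref{components} can be verified directly:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def helicity_op(theta, phi):
    """sigma . p_hat with unit vector p_hat = (sin t cos f, sin t sin f, cos t)."""
    n = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    return n[0]*sx + n[1]*sy + n[2]*sz

def phi_plus(theta, phi, m=1.0):
    # positive-helicity component of Eq. (components)
    return np.sqrt(m)*np.array([np.cos(theta/2)*np.exp(-1j*phi/2),
                                np.sin(theta/2)*np.exp( 1j*phi/2)])

def phi_minus(theta, phi, m=1.0):
    # negative-helicity component of Eq. (components)
    return np.sqrt(m)*np.array([ np.sin(theta/2)*np.exp(-1j*phi/2),
                                -np.cos(theta/2)*np.exp( 1j*phi/2)])

rng = np.random.default_rng(0)
for _ in range(5):
    t, f = rng.uniform(0, np.pi), rng.uniform(0, 2*np.pi)
    H = helicity_op(t, f)
    assert np.allclose(H @ phi_plus(t, f),  +phi_plus(t, f))
    assert np.allclose(H @ phi_minus(t, f), -phi_minus(t, f))
print("helicity eigenstates verified")
```

The check holds for arbitrary angles, confirming that the phases $e^{\mp i\phi/2}$ in \eqref{components} are consistent with the helicity operator.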
Here, as we shall see in brief, we introduce, as a strategy, (disguised) relative trial phase factors in order to relate the behavior of spinors under parity (and to explore locality issues) with their position, so to speak, in the Lounesto classification. In this regard, we shall not fix the phases \emph{a priori}, but instead determine them by relating the appropriate behaviors with the Lounesto classes in the first and second quantization contexts. We opted for introducing two trial phases for the sake of exposition and interpretation. Let $\psi$ be a single-helicity spinor in the Weyl representation \begin{eqnarray} \psi_{(+,+)}(k^{\mu}) = \left(\begin{array}{c} e^{i\alpha}\phi^{+}(k^{\mu})\\ e^{i\beta}\phi^{+}(k^{\mu}) \end{array} \right), \;\; \psi_{(-,-)}(k^{\mu}) = \left(\begin{array}{c} e^{i\alpha}\phi^{-}(k^{\mu})\\ e^{i\beta}\phi^{-}(k^{\mu}) \end{array} \right), \end{eqnarray} in which $\alpha\in{\rm I\!R}$ and $\beta\in{\rm I\!R}$ stand for the alluded phases. To define the spinors in an arbitrary momentum referential, we set $\psi(p^{\mu})=\kappa\psi(k^{\mu})$, where the Lorentz boost operator is given by \begin{eqnarray} \kappa = \sqrt{\frac{E+m}{2m}}\left(\begin{array}{cc} \mathbb I+ \frac{\vec{\sigma}\cdot\vec{p}}{E+m} & 0 \\ 0 & \mathbb I- \frac{\vec{\sigma}\cdot\vec{p}}{E+m} \end{array} \right).
\end{eqnarray} Calling $\mathfrak{B}_{\pm} = \sqrt{\frac{E+m}{2m}}\big(1\pm\frac{p}{E+m}\big)$ the appropriate Lorentz boost factors, we may define a set of single-helicity spinors in the following fashion \begin{equation}\label{psisingleparticula} \psi^{\mathtt{P}}_{(+,+)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} e^{i\alpha}\mathfrak{B}_{+}\cos(\theta/2)e^{-i\phi/2} \\ e^{i\alpha}\mathfrak{B}_{+}\sin(\theta/2)e^{i\phi/2} \\ e^{i\beta}\mathfrak{B}_{-}\cos(\theta/2)e^{-i\phi/2} \\ e^{i\beta}\mathfrak{B}_{-}\sin(\theta/2)e^{i\phi/2} \end{array}\right),\; \psi^{\mathtt{P}}_{(-,-)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} -e^{i\alpha}\mathfrak{B}_{-}\sin(\theta/2)e^{-i\phi/2} \\ e^{i\alpha}\mathfrak{B}_{-}\cos(\theta/2)e^{i\phi/2} \\ -e^{i\beta}\mathfrak{B}_{+}\sin(\theta/2)e^{-i\phi/2} \\ e^{i\beta}\mathfrak{B}_{+}\cos(\theta/2)e^{i\phi/2} \end{array}\right). \end{equation} Moreover, taking into account the judicious constraint $\phi_{R}(k^{\mu})=-\phi_{L}(k^{\mu})$ (see a complete discussion in Refs. \cite{ahluwalia1ryder,gaioliryder}), we may also define the following spinors related to anti-particles \begin{equation}\label{psisingleantiparticula} \psi^{\mathtt{A}}_{(+,+)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} -e^{i\alpha}\mathfrak{B}_{+}\cos(\theta/2)e^{-i\phi/2} \\ -e^{i\alpha}\mathfrak{B}_{+}\sin(\theta/2)e^{i\phi/2} \\ e^{i\beta}\mathfrak{B}_{-}\cos(\theta/2)e^{-i\phi/2} \\ e^{i\beta}\mathfrak{B}_{-}\sin(\theta/2)e^{i\phi/2} \end{array}\right),\; \psi^{\mathtt{A}}_{(-,-)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} e^{i\alpha}\mathfrak{B}_{-}\sin(\theta/2)e^{-i\phi/2} \\ -e^{i\alpha}\mathfrak{B}_{-}\cos(\theta/2)e^{i\phi/2} \\ -e^{i\beta}\mathfrak{B}_{+}\sin(\theta/2)e^{-i\phi/2} \\ e^{i\beta}\mathfrak{B}_{+}\cos(\theta/2)e^{i\phi/2} \end{array}\right). \end{equation} Of course, the indexes $\mathtt{P}$ and $\mathtt{A}$ stand for particle and anti-particle, respectively.
It is important to remark that, since we have fixed the basis, the solutions with $\alpha$ and $\beta$ are not a unitary transformation of the standard Dirac spinors. It is a good point to recall the set-up underlying the Lounesto spinor classification \cite{lounestolivro}. For sections of $P_{SL(2,\mathbb{C})}\times_\rho \mathbb{C}^4$, where $\rho=(1/2,0)$, $(0,1/2)$, or $(1/2,0)\oplus(0,1/2)$, which is indeed the case for the spinors just outlined, Lounesto showed the existence of six dense and disjoint classes of spinors based on the behavior of their bilinear covariants. The basic strategy for the classification is the use of the so-called inversion theorem \cite{tak}, by means of which it is possible to express the spinor in terms of its own bilinear covariants and constrain the several possibilities via the Fierz-Pauli-Kofink identities \cite{out}. Among these classes, three have in common the fact that the scalar and/or pseudo-scalar bilinears are non-null. These are the regular classes of spinors and they are organized as follows: Type-1 spinors have both scalar and pseudo-scalar bilinear covariants non-vanishing. Type-2 spinors have the scalar non-null and vanishing pseudo-scalar, while the opposite is true for Type-3 spinors. If these two bilinear covariants are null, the classification gives the other classes, all of them organized as singular spinors. Returning to the case at hand, we remark that without a judicious inspection of $\alpha$ and $\beta$ it is impossible to ascertain the class each spinor above belongs to. The analysis of the proper Lounesto classification of the general single-helicity spinors in terms of their phases leads to important cases and subcases \cite{rodolfoconstraints}, summarized in Table I. We emphasize that any other relation is a subsidiary condition from one of the constraints displayed in Table I.
\begin{table} \centering \begin{tabular}{c|c|c|c} \hline \;\;\;\;\;\;\;\;\;\;$\alpha=\mathfrak{n}\pi$\;\;\;\;\;\;\;\;\;\; & \;\;\;\;\;\;\;\;\;\;$\beta=\mathfrak{m}\pi$\;\;\;\;\;\;\;\;\;\; & \;\;\;\;\;\;\;\;\;\;Type\;\;\;\;\;\;\;\;\;\; & Constraints \\ \hline \hline $\mathfrak{n}$ non-integer & $\mathfrak{m}$ non-integer & 1 & $\mathfrak{n}\neq k\mathfrak{m}$, with $k$ an integer. \\ $\mathfrak{n}$ integer & $\mathfrak{m}$ non-integer & 1 & $\mathfrak{m}\neq \mathfrak{n}/k$, with $k>2$. \\ $\mathfrak{n}$ non-integer & $\mathfrak{m}$ non-integer & 1 & $\mathfrak{m}= 1/\mathfrak{n}$. \\ $\mathfrak{n}$ integer & $\mathfrak{m}$ integer & 2 & $\mathfrak{n}=\mathfrak{m}$. \\ $\mathfrak{n}$ integer & $\mathfrak{m}$ integer & 2 & $\mathfrak{n}\neq \mathfrak{m}$.\\ $\mathfrak{n}$ non-integer & $\mathfrak{m}$ non-integer & 2 & $\mathfrak{n}= k\mathfrak{m}$, with $k$ an integer. \\ $\mathfrak{n}$ non-integer & $\mathfrak{m}$ integer & 3 & $\mathfrak{n}=k/2$, with $k$ odd. \\ \hline \hline \end{tabular}\label{tabela1} \caption{The main phase constraints used to classify regular spinors via their relative phases. The type is also preserved by interchanging the conditions on $\mathfrak{n}$ and $\mathfrak{m}$ \cite{rodolfoconstraints}.} \end{table} As one can see, the trial phases engender a mapping between regular spinors and the plane $(\alpha,\beta)\simeq \mathbb{R}^2$. Besides, as will be clear in the following, the subclass $L_2$ of Type-2 spinors for which $\alpha=\beta$ (or $\mathfrak{n}=\mathfrak{m}$) is special. For this subclass, and only for this subclass (correspondingly, for the straight line $\mathbb{R}^2|_{\alpha=\beta}\simeq \mathbb{R}\subset\mathbb{R}^2$), the Dirac dynamics is ensured for the classical spinors and locality is attained for the associated quantum field. In order to approach the aforementioned results, we start computing the spin sums.
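As an aside, the phase dependence recorded in Table I can be probed numerically. For the spinor $\psi^{\mathtt{P}}_{(+,+)}$ of \eqref{psisingleparticula}, with an assumed (standard) set of Weyl-representation matrices $\gamma^{0}$ and $\gamma^{5}$, the scalar and pseudo-scalar bilinears depend only on the difference $\alpha-\beta$, and representative phase choices land in Types 2, 3 and 1 as the table indicates. A minimal illustrative sketch:

```python
import numpy as np

# Assumed Weyl-representation conventions: right-handed block on top,
# g0 off-diagonal, g5 = diag(1, 1, -1, -1). Only alpha - beta matters below.
g0 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.eye(2), np.zeros((2, 2))]]).astype(complex)
g5 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

def psi_pp(m, p, theta, phi, alpha, beta):
    """psi^P_(+,+) of Eq. (psisingleparticula)."""
    E = np.sqrt(m**2 + p**2)
    pre = np.sqrt((E + m)/(2*m))
    Bp, Bm = pre*(1 + p/(E + m)), pre*(1 - p/(E + m))
    chi = np.array([np.cos(theta/2)*np.exp(-1j*phi/2),
                    np.sin(theta/2)*np.exp( 1j*phi/2)])
    return np.sqrt(m)*np.concatenate([np.exp(1j*alpha)*Bp*chi,
                                      np.exp(1j*beta)*Bm*chi])

def bilinears(psi):
    bar = psi.conj() @ g0           # Dirac dual psi-bar = psi^dagger g0
    return bar @ psi, bar @ (g5 @ psi)   # scalar, pseudo-scalar

m, p, th, ph = 1.0, 2.0, 0.7, 1.3
s2, w2 = bilinears(psi_pp(m, p, th, ph, 0.3, 0.3))        # alpha = beta
s3, w3 = bilinears(psi_pp(m, p, th, ph, np.pi/2, 0.0))    # alpha - beta = pi/2
s1, w1 = bilinears(psi_pp(m, p, th, ph, 0.9, 0.2))        # generic difference

assert abs(s2) > 1e-8 and abs(w2) < 1e-8   # Type-2-like: scalar only
assert abs(s3) < 1e-8 and abs(w3) > 1e-8   # Type-3-like: pseudo-scalar only
assert abs(s1) > 1e-8 and abs(w1) > 1e-8   # Type-1-like: both non-null
print("bilinear type checks pass")
```

With these conventions one finds $\bar{\psi}\psi = 2m\cos(\alpha-\beta)$ and a pseudo-scalar proportional to $\sin(\alpha-\beta)$, consistent with the integer/half-odd-integer rows of Table I; the remaining rows involve further constraints from \cite{rodolfoconstraints} not probed here.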
As a central aspect of any quantized field, the spin sums shall enter the analysis of quantum correlators and, as expected, give an important clue to the field dynamics. Hence, the spin sums associated with (\ref{psisingleparticula}) and (\ref{psisingleantiparticula}) read, respectively, \begin{eqnarray}\label{spinsumparticula} \sum_{h}\psi^{\mathtt{P}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{P}}_{h}(p^{\mu})= \left(\begin{array}{cccc} m e^{i(\alpha-\beta)} & 0 & E+p\cos\theta & p\sin\theta e^{-i\phi} \\ 0 & m e^{i(\alpha-\beta)} & p\sin\theta e^{i\phi} & E-p\cos\theta \\ E-p\cos\theta & -p\sin\theta e^{-i\phi} & m e^{-i(\alpha-\beta)} & 0 \\ -p\sin\theta e^{i\phi} & E+p\cos\theta & 0 & m e^{-i(\alpha-\beta)} \end{array} \right), \end{eqnarray} and \begin{eqnarray}\label{spinsumanti} \sum_{h}\psi^{\mathtt{A}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{A}}_{h}(p^{\mu})=\left(\begin{array}{cccc} -m e^{i(\alpha-\beta)} & 0 & E+p\cos\theta & p\sin\theta e^{-i\phi} \\ 0 & -m e^{i(\alpha-\beta)} & p\sin\theta e^{i\phi} & E-p\cos\theta \\ E-p\cos\theta & -p\sin\theta e^{-i\phi} & -m e^{-i(\alpha-\beta)} & 0 \\ -p\sin\theta e^{i\phi} & E+p\cos\theta & 0 & -m e^{-i(\alpha-\beta)} \end{array} \right).
\end{eqnarray} It is possible to recast the result more appropriately, in a Dirac-like fashion, as \begin{eqnarray} &&\sum_{h}\psi^{\mathtt{P}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{P}}_{h}(p^{\mu})= \gamma_{\mu}p^{\mu} + m \mathbb I_{(\alpha,\beta)}, \\ &&\sum_{h}\psi^{\mathtt{A}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{A}}_{h}(p^{\mu})=\gamma_{\mu}p^{\mu} - m \mathbb I_{(\alpha,\beta)}, \end{eqnarray} where the $\gamma^\mu$ matrices are in the Weyl representation and the $\mathbb I_{(\alpha,\beta)}$ matrix stands for \begin{eqnarray} \mathbb I_{(\alpha,\beta)} = \left(\begin{array}{cccc} e^{i(\alpha-\beta)} & 0 & 0 & 0 \\ 0 & e^{i(\alpha-\beta)} & 0 & 0 \\ 0 & 0 & e^{-i(\alpha-\beta)} & 0 \\ 0 & 0 & 0 & e^{-i(\alpha-\beta)} \end{array} \right).\label{tra} \end{eqnarray} Despite the formal similarity with the usual Dirac equation, spinors which do not exactly fulfill Dirac dynamics $(\alpha=\beta)$ experience locality differently. We also call attention to the following fact: $\det(\gamma_{\mu}p^{\mu}\pm m\mathbb I)=0$ and $\det(\gamma_{\mu}p^{\mu}\pm m\mathbb I_{(\alpha,\beta)})=0$ both yield the usual dispersion relation $E=\pm\sqrt{m^2+p^2}$. It is then trivial to see that the spin sums are covariant under Lorentz symmetries and that they indeed satisfy the completeness relation \begin{eqnarray} \frac{1}{2m}\sum_{h}\big[\psi^{\mathtt{P}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{P}}_{h}(p^{\mu})-\psi^{\mathtt{A}}_{h}(p^{\mu})\bar{\psi}^{\mathtt{A}}_{h}(p^{\mu})\big] =\mathbb I_{(\alpha,\beta)}, \end{eqnarray} as expected. Moreover, for the special case performed by the subclass of Type-2 spinors for which $\alpha=\beta$, the spin sums are nothing but the usual textbook ones. The $\alpha=\beta$ imposition may also be appreciated from the action of the Dirac operator on the spinors at hand. We illustrate this claim by picking the spinor $\psi^{\mathtt{P}}_{(+,+)}(p^{\mu})$, for example.
A straightforward calculation leads to \begin{equation}\label{diracnotype1} (\gamma_{\mu}p^{\mu}-m)\psi^{\mathtt{P}}_{(+,+)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} m [e^{i\beta}-e^{i(2\alpha-\beta)}]\mathfrak{B}_{+}\cos(\theta/2)e^{-i\phi/2} \\ m [e^{i\beta}-e^{i(2\alpha-\beta)}]\mathfrak{B}_{+}\sin(\theta/2)e^{i\phi/2} \\ m [e^{i\alpha}-e^{i(2\beta-\alpha)}]\mathfrak{B}_{-}\cos(\theta/2)e^{-i\phi/2} \\ m [e^{i\alpha}-e^{i(2\beta-\alpha)}]\mathfrak{B}_{-}\sin(\theta/2)e^{i\phi/2} \end{array}\right). \end{equation} Therefore, the spinor is annihilated by the Dirac operator at the classical level if, and only if, $\alpha=\beta$, and we are led to a subclass of Type-2 spinors. This is the alluded $L_2$ subclass. It may be verified that, for this particular subclass, indeed $(\gamma_\mu p^\mu\mp m)\psi_h^{\mathtt{P/A}}(p^\mu)=0$. It was demonstrated in Ref. \cite{speranca} that, at the classical level, Dirac dynamics is related to the parity operator, i.e., $P\psi(p^\mu)=m^{-1}\gamma^\mu p_\mu \psi(p^\mu)$ $\forall$ $\psi(p^\mu)\in P_{SL(2,\mathbb{C})}\times\mathbb{C}^4$. Therefore the requirement that a given spinor be an eigenspinor of $P$ implies Dirac dynamics. Within this perspective, it is clear that different phases inserted in the Weyl spinors ($\alpha\neq\beta$) lead to a violation of the Dirac dynamics: the different sides of the representation space (embracing left- or right-hand spinors) are not being equally treated. Hence parity is being violated at the classical level and, consequently, the Dirac operator does not annihilate the resulting bi-spinor. We also notice that the matrix in (\ref{tra}) is related to $\gamma_5$ in a simple fashion, i.e., $\mathbb I_{(\alpha,\beta)}=\exp[i \gamma^5 (\alpha-\beta)]$. The above observations may be followed by some additional characterization of the mapping between the $\mathbb{R}^2$ plane $(\alpha, \beta)$ and the regular spinors.
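Before turning to that characterization, we note that the algebraic facts above admit a direct numerical verification: the spin sum \eqref{spinsumparticula} and the annihilation of the spinors precisely at $\alpha=\beta$. A minimal sketch (the $\gamma$-matrix blocks below are an assumed convention chosen to match the block structure of \eqref{spinsumparticula}; illustrative only):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.eye(2), np.zeros((2, 2))]]).astype(complex)

def kinematics(m, p, theta, phi):
    # slash(p) with blocks [[0, E + sigma.p], [E - sigma.p, 0]], as in (spinsumparticula)
    E = np.sqrt(m**2 + p**2)
    sp = p*(np.sin(theta)*np.cos(phi)*sx + np.sin(theta)*np.sin(phi)*sy + np.cos(theta)*sz)
    slash = np.block([[np.zeros((2, 2)), E*np.eye(2) + sp],
                      [E*np.eye(2) - sp, np.zeros((2, 2))]])
    return E, slash

def particle_spinors(m, p, theta, phi, alpha, beta):
    """psi^P_(+,+) and psi^P_(-,-) of Eq. (psisingleparticula)."""
    E = np.sqrt(m**2 + p**2)
    pre = np.sqrt((E + m)/(2*m))
    Bp, Bm = pre*(1 + p/(E + m)), pre*(1 - p/(E + m))
    cp = np.array([np.cos(theta/2)*np.exp(-1j*phi/2), np.sin(theta/2)*np.exp(1j*phi/2)])
    cm = np.array([np.sin(theta/2)*np.exp(-1j*phi/2), -np.cos(theta/2)*np.exp(1j*phi/2)])
    ea, eb = np.exp(1j*alpha), np.exp(1j*beta)
    psi_pp = np.sqrt(m)*np.concatenate([ea*Bp*cp, eb*Bm*cp])
    psi_mm = -np.sqrt(m)*np.concatenate([ea*Bm*cm, eb*Bp*cm])
    return [psi_pp, psi_mm]

m, p, th, ph, a, b = 1.0, 2.0, 0.8, 2.1, 0.7, 0.2
E, slash = kinematics(m, p, th, ph)

# spin sum: sum_h psi_h psibar_h = slash(p) + m I_(alpha,beta)
S = sum(np.outer(u, u.conj() @ g0) for u in particle_spinors(m, p, th, ph, a, b))
Iab = np.diag([np.exp(1j*(a - b))]*2 + [np.exp(-1j*(a - b))]*2)
assert np.allclose(S, slash + m*Iab)

# the Dirac operator annihilates the spinors precisely when alpha = beta
for u in particle_spinors(m, p, th, ph, 0.5, 0.5):
    assert np.allclose((slash - m*np.eye(4)) @ u, 0)
for u in particle_spinors(m, p, th, ph, 0.5, 1.7):
    assert not np.allclose((slash - m*np.eye(4)) @ u, 0)
print("spin sum and alpha = beta annihilation verified")
```

For $\alpha=\beta$ the matrix $\mathbb I_{(\alpha,\beta)}$ collapses to the identity and the check reduces to the textbook relations, in agreement with the discussion above.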
The matrix (\ref{tra}) is a relevant output of the introduced trial phases. This matrix is characterized by the mapping \begin{eqnarray} \mathbb I_{(\alpha,\beta)}:\mathbb{R}^2\rightarrow M_{4\times 4}(\mathbb{C}) \nonumber \hspace{4.3cm} &&\\ (\alpha,\beta)\mapsto \left(\begin{array}{cc} e^{i(\alpha-\beta)} & 0 \\ 0& e^{-i(\alpha-\beta)} \\ \end{array} \right)\otimes \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right). \end{eqnarray} Notice that the sector of $\mathbb{R}^2$ corresponding to $L_2$ spinors already pointed out (for which $\mathbb I_{(\alpha,\beta)}\simeq \mathbb I$), namely $\mathbb{R}^2|_{\alpha=\beta}\simeq \mathbb{R}$, is topologically trivial. However, its complement in $\mathbb{R}^2$, that is, the portion of the plane corresponding to Type-1, Type-3, and Type-2 spinors for which $\alpha\neq \beta$, is clearly given by $\mathbb{R}^2\backslash (\mathbb{R}^2|_{\alpha=\beta}\simeq \mathbb{R})$, which is not connected\footnote{Besides, by means of an artificial, but ludic, analogy we may extract more differences between the situation performed by spinors belonging to $L_2$ and their counterpart sector in the $(\alpha,\beta)$ plane. Bearing in mind the range of $\mathbb I_{(\alpha,\beta)}$, one is able to study its behavior under partial derivatives. It is fairly simple to see that $\partial_\alpha\mathbb I_{(\alpha,\beta)}=-\partial_\beta\mathbb I_{(\alpha,\beta)}$. In this regard, it may be useful to think of $\mathbb I_{(\alpha,\beta)}$ as components of a special ``vector'' $\mathcal{V}=\mathbb I_{(\alpha,\beta)}\hat{\alpha}+\mathbb I_{(\alpha,\beta)}\hat{\beta}$ which, under the action of $\nabla:=\hat{\alpha}\partial_\alpha +\hat{\beta}\partial_\beta$, shows itself everywhere divergence-free in the plane $(\alpha,\beta)$ \begin{equation} \nabla\cdot\mathcal{V}=(\partial_\alpha+\partial_\beta)\mathbb I_{(\alpha,\beta)}=0.
\nonumber \end{equation} This last equation is always satisfied, but it is trivialized for $\mathbb{R}^2|_{\alpha=\beta}\simeq \mathbb{R}$. In this region, and only there, the field $\mathcal{V}$ is also (pseudo)irrotational and conservative, with scalar potential given by the remaining phase multiplying the identity matrix. We use the qualifier ``pseudo'' here to account for the dimensionality at hand. Without stretching the analogy, one can formalize the curl of $\mathcal{V}$ along an artificial $(\hat{\alpha}\times\hat{\beta})$-direction. For every pair $(\alpha,\beta)$ there is a corresponding regular spinor belonging to a given type, according to Lounesto. The special cases for which the ``vector'' field composed by $\mathbb I_{(\alpha,\beta)}$ is conservative are given by spinors belonging to $L_2$, respecting Dirac dynamics. }, since $\pi_0(\mathbb{R}^2\backslash \mathbb{R})\neq 0$. Finally, returning to the physical aspects, it is possible to define quantum fields based on expansion coefficients given by single-helicity spinors belonging to $L_2$, endowed with $\alpha=\beta$ ($=0$, for simplicity), \begin{eqnarray}\label{campoquantico} \mathfrak{F}(x) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2mE(\textbf{p})}}\sum_{h} \bigg[c_{h}(\textbf{p})\psi^{\mathtt{P}}_{h}(p^{\mu})e^{-ip_{\mu}x^{\mu}} + d^{\dagger}_{h}(\textbf{p})\psi^{\mathtt{A}}_{h}(p^{\mu})e^{ip_{\mu}x^{\mu}}\bigg], \end{eqnarray} and the associated dual given by \begin{eqnarray}\label{campoquanticodual} \bar{\mathfrak{F}}(x) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2mE(\textbf{p})}}\sum_{h} \bigg[c^{\dag}_{h}(\textbf{p})\bar{\psi}^{\mathtt{P}}_{h}(p^{\mu})e^{ip_{\mu}x^{\mu}} + d_{h}(\textbf{p})\bar{\psi}^{\mathtt{A}}_{h}(p^{\mu})e^{-ip_{\mu}x^{\mu}}\bigg]. 
\end{eqnarray} The creation and annihilation operators shall obey the usual fermionic relations \begin{eqnarray} \lbrace c_{h}(\textbf{p}),c^{\dag}_{h^{\prime}}(\textbf{p}^{\prime}) \rbrace = (2\pi)^{3} \delta^3(\textbf{p}-\textbf{p}^{\prime})\delta_{hh^{\prime}}, \\ \lbrace d_{h}(\textbf{p}),d^{\dag}_{h^{\prime}}(\textbf{p}^{\prime}) \rbrace = (2\pi)^{3} \delta^3(\textbf{p}-\textbf{p}^{\prime})\delta_{hh^{\prime}}, \\ \lbrace c_{h}(\textbf{p}),d_{h^{\prime}}(\textbf{p}^{\prime})\rbrace = \lbrace c^{\dag}_{h}(\textbf{p}),d^{\dag}_{h^{\prime}}(\textbf{p}^{\prime}) \rbrace=0. \end{eqnarray} Taking advantage of the previous clue concerning Dirac dynamics for this subclass of spinors, it is possible to find the usual conjugate momentum field as $\pi(x) = i\mathfrak{F}^{\dag}$, and the equal-time field-momentum quantum correlator reads \begin{eqnarray}\label{correlatorlocalL2} \{\mathfrak{F}(\vec{x},t), i\mathfrak{F}^{\dag}(\vec{x}\;^{\prime},t)\} = i\delta^3(\vec{x}-\vec{x}\;^{\prime})\mathbb I, \end{eqnarray} evincing a local field. A clear characteristic of the Weinberg formulation is that the Dirac dynamics satisfied by the expansion coefficients of the quantum fermionic field may be understood as a (full) Lorentz-invariant record of bi-spinors belonging to the representation space $(1/2,0)\oplus (0,1/2)$ \cite{weinberg1}. As previously mentioned, there is a direct relation between the parity operator acting upon spinors at the classical level and Dirac dynamics \cite{speranca}, i.e., $P=m^{-1}\gamma^\mu p_{\mu}$. In its turn, the eigenspinor relation with respect to parity indeed holds if the one-particle state resulting from the quantum field (acting upon the vacuum state) has no degeneracy beyond the one coming from the spin \cite{weinberg1, estwig}. We refer to these quantum states as standard\footnote{An unusual case, in the sense just described, is discussed in Ref. \cite{elkostates}.}. 
As we have seen, a fermionic quantum theory respecting full Lorentz symmetries and ensuring locality is only possible by means of spinors belonging to $L_2$ as expansion coefficients. For a spin-$1/2$ quantum field theory describing particles without further degeneracy, the usual physical requirements may thus be replaced by the demand of expansion coefficients belonging to $L_2$. \subsection{An explicit example for non-locality} Let us analyse the resulting physical situation when the spinors at hand do not belong to the subclass $L_2$, that is, for which $\alpha\neq\beta$. In order to provide a concrete case and fix ideas, we shall develop the case $e^{i\alpha}=ia$ and $e^{i\beta}=b$ (with $a$ and $b$ real non-null constants), leading to a Lounesto Type-3 case. The spinors read \begin{eqnarray} \psi^{\mathtt{P}}_{(+,+)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} ia\mathfrak{B}_{+}\cos(\theta/2)e^{-i\phi/2} \\ ia\mathfrak{B}_{+}\sin(\theta/2)e^{i\phi/2} \\ b\mathfrak{B}_{-}\cos(\theta/2)e^{-i\phi/2} \\ b\mathfrak{B}_{-}\sin(\theta/2)e^{i\phi/2} \end{array}\right), \; \psi^{\mathtt{P}}_{(-,-)}(p^{\mu}) = \sqrt{m}\left(\begin{array}{c} -ia\mathfrak{B}_{-}\sin(\theta/2)e^{-i\phi/2} \\ ia\mathfrak{B}_{-}\cos(\theta/2)e^{i\phi/2} \\ -b\mathfrak{B}_{+}\sin(\theta/2)e^{-i\phi/2} \\ b\mathfrak{B}_{+}\cos(\theta/2)e^{i\phi/2} \end{array}\right). 
\end{eqnarray} Under the action of the Dirac operator these spinors furnish \begin{eqnarray}\label{dirac1tipo3} (\gamma_{\mu}p^{\mu}-m)\psi^{\mathtt{P}}_{(+,+)}(p^{\mu})= m\sqrt{m}\left(\begin{array}{c} ia(-1+b/ia)\mathfrak{B}_{+}\cos(\theta/2)e^{-i\phi/2} \\ ia(-1+b/ia)\mathfrak{B}_{+}\sin(\theta/2)e^{i\phi/2} \\ b(-1+ia/b)\mathfrak{B}_{-}\cos(\theta/2)e^{-i\phi/2} \\ b(-1+ia/b)\mathfrak{B}_{-}\sin(\theta/2)e^{i\phi/2} \end{array}\right), \end{eqnarray} and \begin{eqnarray}\label{dirac2tipo3} (\gamma_{\mu}p^{\mu}-m)\psi^{\mathtt{P}}_{(-,-)}(p^{\mu})= m\sqrt{m}\left(\begin{array}{c} -ia(-1+b/ia)\mathfrak{B}_{-}\sin(\theta/2)e^{-i\phi/2} \\ ia(-1+b/ia)\mathfrak{B}_{-}\cos(\theta/2)e^{i\phi/2} \\ -b(-1+ia/b)\mathfrak{B}_{+}\sin(\theta/2)e^{-i\phi/2} \\ b(-1+ia/b)\mathfrak{B}_{+}\cos(\theta/2)e^{i\phi/2} \end{array}\right). \end{eqnarray} The condition necessary for Dirac dynamics is therefore never attained. The spin sums may be straightforwardly adapted from (\ref{spinsumparticula}) and (\ref{spinsumanti}), and may also be recast in a Lorentz-invariant form. According to our previous results, a quantum field may then be defined in a fashion similar to (\ref{campoquantico}) and (\ref{campoquanticodual}), this time with Type-3 spinors as expansion coefficients. The locality inspection, however, brings a peculiar element. Since the spinors do not obey Dirac dynamics at the classical level, one is not able to write a Dirac-like Lagrangian and extract the conjugate momentum from it. As may be readily verified, Klein-Gordon dynamics is, of course, obeyed. Therefore, the only route left is to start from a Klein-Gordon spinorial Lagrangian. 
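Such a Lagrangian may be sketched, assuming the standard Klein-Gordon form and up to overall normalization (the precise normalization does not affect the locality analysis), as \begin{equation}\nonumber \mathcal{L}=\partial_{\mu}\bar{\mathfrak{F}}\,\partial^{\mu}\mathfrak{F}-m^{2}\bar{\mathfrak{F}}\mathfrak{F}, \end{equation} from which the momentum conjugate to $\mathfrak{F}$ is read off as $\partial\mathcal{L}/\partial(\partial_{t}\mathfrak{F})=\partial_{t}\bar{\mathfrak{F}}$. 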
The conjugate momentum, thus, is given by \begin{eqnarray} \pi(x) = \frac{\partial \bar{\mathfrak{F}}(x)}{\partial t}, \end{eqnarray} and we arrive at the following equal-time quantum correlator \begin{eqnarray}\label{correlator13-1} \Bigg\{\mathfrak{F}(\vec{x},t), \frac{\partial \bar{\mathfrak{F}}(\vec{x}\;^{\prime},t)}{\partial t}\Bigg\} = i\int \frac{d^3p}{(2\pi)^3}e^{i\vec{p}\cdot(\vec{x}-\vec{x}\;^{\prime})}\mathbb I_{(\alpha,\beta)}+i\int \frac{d^3p}{(2\pi)^3}(\vec{\gamma}\cdot\vec{p})\;e^{i\vec{p}\cdot(\vec{x}-\vec{x}\;^{\prime})}. \end{eqnarray} The first term on the right-hand side of Eq.~\eqref{correlator13-1} stands for the delta distribution and hence \begin{eqnarray}\label{correlator13-2} \Bigg\{\mathfrak{F}(\vec{x},t), \frac{\partial \bar{\mathfrak{F}}(\vec{x}\;^{\prime},t)}{\partial t}\Bigg\} =i\delta^3(\vec{x}-\vec{x}\;^{\prime})\mathbb I_{(\alpha,\beta)} + i\int \frac{d^3p}{(2\pi)^3}(\vec{\gamma}\cdot\vec{p})\;e^{i\vec{p}\cdot(\vec{x}-\vec{x}\;^{\prime})}, \end{eqnarray} and the non-local aspect of this field is then evident. Note that this result comes from considering the appropriate Lagrangian for the case at hand. Any connection with the usual case ($\alpha=\beta$) must be performed beforehand, leading then to the Dirac Lagrangian and, consequently, to the same result expressed in Eq.~\eqref{correlatorlocalL2}. We are not advocating that these fields have no physical relevance, but only pointing out that the usual physical requirements for a fermionic quantum field cannot be fulfilled for regular spinors not belonging to $L_2$, with a possible exception discussed in the next paragraph. As with the field defined upon Dirac spinors belonging to $L_2$, the fields constructed with Type-1 and Type-3 spinors are of spin one-half, both satisfying the Klein-Gordon equation. 
The expansion coefficients of the former case stand for a complete set of eigenspinors of the parity operator, while for the latter case the expansion coefficients are not related to any discrete symmetry. Perhaps we could end this section by calling attention to the fact that, even for quantum fields constructed upon expansion coefficients not belonging to $L_2$, there are possibilities to be explored in order to achieve a well-behaved physical scenario. The point is that everything we assert about locality depends explicitly on the spinorial dual used. Possibilities coming from the Clifford algebra, however, have opened the door to different duals \cite{rog1,rog2}. This possibility was first raised and gained the status of a theory (the dual theory) in Refs. \cite{aaca,mdobook}. The use of this loophole to circumvent the no-go theorem was performed in Ref. \cite{nogo}. By the same token, new fermionic fields with well-behaved quantum counterparts have recently been found making use of different duals \cite{novonovo}. The aim of the dual theory is to find a well-defined fermionic dual rendering the field local and the fermionic theory covariant under Lorentz symmetries. Whether this procedure is possible for the cases at hand is unclear. We would like, however, to finalize this section by remarking that, if such a dual is possible here, then one can assert that the fermionic one-particle states, whatever they describe, must present degeneracy beyond the spin. The reason is the following: as is well known, one-particle states, say $\Psi$, are defined as eigenstates of the momentum, spin projection and Hamiltonian operators. The introduction of parity\footnote{The parity operator is here denoted in boldface to distinguish it from the one acting upon classical spinors.}, ${\bf P}$, in the quantum Poincar\`e algebra preserves the eigenvalues of the state ${\bf P}\Psi$ with respect to the same previous operators. 
Therefore, in the absence of additional degeneracy one would have ${\bf P}\Psi\simeq \Psi$. But this clearly contradicts the fact that the expansion coefficients in the present case are not annihilated by the Dirac operator. Thus the only possibility left is that the one-particle states show degeneracy beyond spin. \section{Concluding Remarks and Outlooks}\label{remarks} We have shown that, among several possibilities, a local fermionic quantum field endowed with Lorentz symmetries and respecting Dirac dynamics is only possible (for the usual fermionic dual) for a subclass, here called $L_2$, of Type-2 spinors according to Lounesto. We have also explicitly shown an example of a non-local fermionic theory by constructing a quantum field with expansion coefficients not belonging to $L_2$. In precise consistency with Weinberg's formalism \cite{weinberg1}, the Dirac field describes fermions under two basic requirements: ensuring parity covariance and imposing locality in a quantum field theoretic framework. Within this context, only spinors belonging to $L_2$ should bear the epithet of Dirac spinors. These results may be seen as a link between the quantum fermionic field no-go theorem and the Lounesto classification of classical spinor fields. This link points out which classical spinors lead to a non-pathological quantum theory. For the other cases, it is our current understanding that if there is a chance for a well-behaved quantum theory even using spinors not belonging to $L_2$, then this chance must make use of different duals. Otherwise one would have, eventually, to embrace a non-local theory. In this case, it would be interesting to wonder about the mass generation analogues of the Higgs mechanism. We remark in passing that additional trial phases may be used in the bi-spinor composition, leading essentially to the same results. 
In the light of our remarks along the text, theories coming from a sector whose spinors are not in $L_2$, but endowed with a new dual leading to a well-behaved final form, shall describe one-particle states with degeneracy beyond the spin. In a Universe whose known constituents comprise only about four percent of its content, these new possibilities may certainly be regarded as intriguing. Obviously, when using duals different from the standard one, the Lounesto classification must be revisited \cite{beyondlounesto}. This is a relevant point to explore further: which subclass of spinors (if any) in the new classification would lead to a well-behaved fermionic quantum field. \section{Acknowledgements} The authors express their gratitude to Cheng-Yang Lee for insightful questions and ensuing discussions during the manuscript writing stage. JMHS thanks CNPq, grant No. 303561/2018-1, for partial support.
\section{Introduction} All graphs considered in this paper are finite, connected and simple (they are undirected and do not have loops or multiple edges). A \emph{$2$-arc} of a graph is a triple $(u,v,w)$ of pairwise distinct vertices such that $v$ is adjacent to both $u$ and $w$. We say that a graph is \emph{$2$-arc-transitive} if its automorphism group acts transitively on its $2$-arcs. The class of 2-arc-transitive graphs has attracted a lot of interest. Although many partial classification results have been obtained, a full classification might be out of reach. A nice survey of some of the main results in this area can be found in~\cite{seress}. Recently, Conder, Li and Poto\v{c}nik have proved the following. \begin{theorem*}[{\cite[Theorem 1]{ConderLiPot}}]\label{ConderLiPotTheo} Let $k$ be a positive integer. \begin{enumerate} \item If $d\geqslant 3$, then there exist only finitely many $d$-valent $2$-arc-transitive graphs of order $kp$ with $p$ a prime. \item If $d\geqslant 4$, then there exist only finitely many $d$-valent $2$-arc-transitive graphs of order $kp^2$ with $p$ a prime. \end{enumerate} \end{theorem*} Inspired by this result, we are naturally led to ask whether an analogous statement holds for graphs of order $kp^3$, $kp^4$, etc. This is exactly the content of our main theorem: \begin{theorem}\label{main} There exist functions $c : \mathbb N \rightarrow \mathbb N$ and $g: \mathbb N \rightarrow \mathbb N$ such that, if $k$, $n$ and $d$ are positive integers with $d> g(n)$ and there exists a $d$-valent $2$-arc-transitive graph of order $kp^n$ with $p$ a prime, then $p\leqslant kc(d)$. \end{theorem} In other words, if $k$, $n$ and $d$ are fixed with $d$ large enough, then there are only finitely many $d$-valent $2$-arc-transitive graphs of order $kp^n$ with $p$ a prime. In some sense, this shows that, for classifying $2$-arc-transitive graphs along these lines, the most interesting case is when the valency is small. 
Indeed, there has been much activity in classifying such graphs, especially with $n$ and $k$ small. (For example, an overview of the case $(n,d)=(1,3)$ can be found in~\cite[Section 6]{ConderLiPot}.) The proof of Theorem~\ref{main}, which can be found in Section~\ref{sec:proof}, divides naturally into the affine and non-affine cases. Preparatory work for these cases is done in Sections~\ref{sec:nonaffine} and \ref{sec:affine}, culminating in Theorems~\ref{theo:nonaffine} and \ref{theo:affine}. (In fact, Theorem~\ref{theo:nonaffine} is stronger than required.) To complete the proof of Theorem~\ref{main}, we also require a result of Trofimov and Weiss that depends upon the Classification of the Finite Simple Groups (CFSG)~\cite{GLS}. On the other hand, all of the results in Section~\ref{sec:pre} are CFSG-free. \section{Preliminaries} \label{sec:pre} We begin with some preliminaries that set the stage for the proof of Theorem~\ref{main}. We denote the cyclic group of order $n$ by $\mathrm{C}_n$ and, for a prime power $d$, the elementary abelian group of order $d$ by $\mathrm{E}_d$. The \emph{soluble radical} of a group is its largest normal soluble subgroup. We write $H \lesssim G$ if $H$ is isomorphic to a subgroup of $G$. We will say that a group $H$ is \emph{involved} in a group $G$ if there are subgroups $K$ and $N$ of $G$ such that $N$ is a normal subgroup of $K$ and $K/N \cong H$. A permutation group is called \emph{quasiprimitive} if each of its non-trivial normal subgroups is transitive. A transitive permutation group is called \emph{$2$-transitive} if a point-stabiliser is transitive on the remaining points. It is easy to see that a $2$-transitive group is quasiprimitive. A graph is $G$-vertex-transitive if $G$ is a group of automorphisms of the graph acting transitively on its vertices. Let $\Gamma$ be a $G$-vertex-transitive graph and let $v$ be a vertex of $\Gamma$. We denote the set of neighbours of $v$ in $\Gamma$ by $\Gamma(v)$. 
We write $G_v^{\Gamma(v)}$ for the permutation group induced by the action of $G_v$ on $\Gamma(v)$ and $G_v^{[1]}$ for the kernel of this action. Given a permutation group $L$, the pair $(\Gamma,G)$ is said to be \emph{locally-$L$} if $G_v^{\Gamma(v)}$ is permutation isomorphic to $L$. Note that a graph $\Gamma$ is 2-arc-transitive if and only if the pair $(\Gamma,\mathrm{Aut}(\Gamma))$ is locally-$L$ with $L$ a $2$-transitive group. \subsection{Graph-restrictive groups and a key lemma} Following \cite{junior}, we say that a transitive group $L$ is \emph{graph-restrictive} if there exists a constant $c(L)$ such that, for every locally-$L$ pair $(\Gamma,G)$ and $v$ a vertex of $\Gamma$, the inequality $|G_v|\leqslant c(L)$ holds. The following lemma, which is inspired by~\cite[Theorem~2]{ConderLiPot}, is the crucial first step in our proof of Theorem~\ref{main}. \begin{lemma} \label{mainlemma} Let $L$ be a quasiprimitive graph-restrictive permutation group with corresponding constant $c(L)$, let $k$ and $n$ be positive integers and let $p$ be a prime with $p> kc(L)$. If $(\Gamma,G)$ is a locally-$L$ pair such that $\Gamma$ has order $kp^n$ and $v$ is a vertex of $\Gamma$, then the following hold: \begin{enumerate} \item $|G_v|$ is coprime to $p$; \item $G_v$ is isomorphic to a subgroup of ${\rm GL}(n,p)$. \end{enumerate} \end{lemma} \begin{proof} Let $P$ be a Sylow $p$-subgroup of $G$. By vertex-transitivity we have $|G| = kp^n |G_v|$. Since $L$ is graph-restrictive, we have $k|G_v| \leqslant kc(L) < p$ hence $|P|=p^n$, $|G_v|$ is coprime to $p$ and $P_v=1$. Moreover, since $|G:P| < p$, it follows from Sylow's Theorem that $P$ is normal in $G$. Let $C$ be the centraliser of $P$ in $G$ and let $\mathrm{Z}(P)$ be the centre of $P$. Note that $C$ is normal in $G$ and $\mathrm{Z}(P) = P \cap C$. 
Since $P$ is normal in $G$, $\mathrm{Z}(P)$ is a Sylow $p$-subgroup of $C$ and the Schur-Zassenhaus Theorem \cite[6.2.1]{gorenstein} yields $C=\mathrm{Z}(P)\times J$ for some characteristic subgroup $J$ of $C$. Since $P_v=1$, it follows that $C_v=J_v$. Suppose that $J_v\neq 1$. In particular, since $G_v^{\Gamma(v)}$ is quasiprimitive, $J$ has at most two orbits on the vertices of $\Gamma$. (See for example~\cite[Lemma~4]{ConderLiPot}.) Since $J$ is characteristic in $C$, it is normal in $G$, and thus these orbits have the same size. Since $p>2$, it follows that $p$ divides the size of these orbits, contradicting the fact that $|J|$ is coprime to $p$. It follows that $C_v=J_v=1$, and thus $G_v$ is isomorphic to a subgroup $X$ of ${\rm Aut}(P)$. By \cite[5.3.5]{gorenstein}, we have that $X$ acts faithfully on the Frattini quotient $P/\Phi(P)$, which is elementary abelian of rank at most $n$. Thus $X$ is isomorphic to a subgroup of $\mathrm{GL}(n,p)$. \end{proof} In view of Lemma~\ref{mainlemma}, we are led to consider the following definition. For a finite group $X$, let $$ \lambda(X) := \mathrm{min} \{n \mid X \text{ is involved in a finite subgroup of } {\rm GL}(n, \mathbb{F}),~\mathbb{F} \text{ a field with } \mathrm{char}(\mathbb{F}) \nmid |X| \}. $$ Note that, if $Y$ is involved in $X$, then $\lambda(Y)\leqslant\lambda(X)$. The next lemma shows that, when considering $\lambda(X)$, it suffices to work over the field of complex numbers. \begin{lemma}\label{lemma:new} If $X$ is a finite group, then there exists a finite subgroup $G$ of ${\rm GL}(\lambda(X),\mathbb C)$ such that $X$ is involved in $G$ and every prime divisor of $|G|$ divides $|X|$. \end{lemma} \begin{proof} By definition, there is a field $\mathbb{F}$ with $\mathrm{char}(\mathbb{F}) \nmid |X|$ and a finite subgroup $G$ of ${\rm GL}(\lambda(X),\mathbb{F})$ such that, for some normal subgroup $K$ of $G$, we have $G/K \cong X$. Choose $G$ such that $|G|$ is minimal. 
Without loss of generality, we may assume that $\mathbb{F}$ is algebraically closed. We claim that $K$ is nilpotent. (The argument used to prove this claim is taken from the proof of \cite[Lemma 5.5A(ii)]{dixon-mortimer}). Let $p$ be a prime and let $P$ be a Sylow $p$-subgroup of $K$. The Frattini Argument yields $G=\mathrm N_G(P) K$ and hence $\mathrm N_G(P)/ \mathrm N_K(P) \cong N_G(P)K/K \cong G/K \cong X$. The minimality of $|G|$ implies that $G=\mathrm N_G(P)$. In particular, $P$ is normal in $K$ and hence $K$ is nilpotent. We now show that every prime divisor of $|K|$ divides $|X|$. Let $p$ be a prime dividing $|K|$ and let $P$ be a Sylow $p$-subgroup of $K$. Since $K$ is nilpotent, we have $K=P \times Q$ for some characteristic subgroup $Q$ of $K$. If $p$ does not divide $|X|$, then the Schur-Zassenhaus Theorem \cite[6.2.1]{gorenstein} yields $G/Q \cong P \rtimes X$, and thus $G$ has a proper subgroup involving $X$, contradicting the minimality of $|G|$. In particular, every prime divisor of $|G|$ divides $|X|$ and thus $\mathrm{char}(\mathbb{F})$ does not divide $|G|$. Since $\mathbb{F}$ is algebraically closed, this implies that the degrees of representations of $G$ over $\mathbb{F}$ are the same as the degrees of representations of $G$ over $\mathbb C$, see for instance \cite[Chapter 15]{Isaacs} and, in particular, \cite[Theorem 15.13]{Isaacs}. We thus obtain a representation of $G$ over $\mathbb C$ of dimension $\lambda(X)$, completing the proof. \end{proof} \subsection{Locally non-affine pairs}\label{sec:nonaffine} The finite quasiprimitive groups are classified (see \cite{praegerquasip}). If $G$ is such a group, then its socle has the form $T^\ell$ for some finite simple group $T$. If $T$ is abelian, then $G$ is called \emph{affine}; otherwise, we say that $G$ is \emph{non-affine}. In this section, we consider the locally non-affine case of Theorem~\ref{main} (and, in fact, we prove a stronger result). 
For $n\geqslant 1$, let \begin{equation} \label{j defn} J(n)=(n!)\cdot12^{n(\pi(n+1)+1)} \end{equation} where $\pi(k)$ denotes the number of primes less than or equal to $k$. \begin{lemma}\label{simple} If $X$ is a finite group with trivial soluble radical, then $|X| \leqslant J(\lambda(X))$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:new}, there exists a finite subgroup $G$ of ${\rm GL}(\lambda(X),\mathbb C)$ such that $G$ has a normal subgroup $K$ with $G/K\cong X$. By a theorem of Jordan \cite[Theorem 14.12]{Isaacs}, there exists an abelian normal subgroup $A$ of $G$ such that $|G:A| \leqslant J(\lambda(X))$. Since $G/K$ has trivial soluble radical, we have $A\leqslant K$ and hence $|X|=|G:K|\leqslant |G:A| \leqslant J(\lambda(X))$, as desired. \end{proof} \begin{theorem}\label{theo:nonaffine} Let $k$, $n$ and $d$ be positive integers with $d> J(n)$. Let $(\Gamma,G)$ be a locally-$L$ pair such that $\Gamma$ has order $kp^n$ for some prime $p$, $L$ has degree $d$ and is graph-restrictive and quasiprimitive. If $L$ is non-affine, then $p\leqslant kc(L)$. \end{theorem} \begin{proof} We assume for a contradiction that $p> kc(L)$. Let $v$ be a vertex of $\Gamma$. By Lemma~\ref{mainlemma}, $|G_v|$ is coprime to $p$ and $G_v$ is isomorphic to a subgroup of ${\rm GL}(n,p)$. In particular, $\lambda(L)\leqslant\lambda(G_v)\leqslant n$. Let $S$ be the socle of $L$. Since $L$ is quasiprimitive, $S$ is transitive. Since $L$ is non-affine, $S$ is a direct product of non-abelian simple groups and thus has trivial soluble radical. Lemma~\ref{simple} then implies that $d \leqslant |S|\leqslant J(\lambda(S))\leqslant J(\lambda(L))\leqslant J(n)$, contradicting the fact that $d> J(n)$. \end{proof} \begin{remark} It was conjectured by Praeger \cite{praegerconjecture} that finite quasiprimitive groups are graph-restrictive. The validity of this conjecture would render the graph-restrictive assumption in the hypothesis of Theorem~\ref{theo:nonaffine} superfluous. 
The conjecture remains open but has been shown to hold in certain cases \cite{PSVRestrictive,pablo,trofweiss1,trofweiss2}. \end{remark} \subsection{Locally affine pairs}\label{sec:affine} In this section we consider locally-$L$ pairs where $L$ is a $2$-transitive affine group. We first consider the case when $L$ is soluble. The finite soluble $2$-transitive groups were classified by Huppert~\cite{Huppert}. A consequence of this classification is that, up to finitely many exceptions, all such groups are subgroups of the one-dimensional affine semilinear group, which we now define. Let $d=r^f$ be a power of a prime $r$. We denote the field of order $d$ by $\mathbb{F}_d$ and the Galois group of the field extension $\mathbb{F}_{r^f}/\mathbb{F}_r$ by $\mathrm{Gal}(\mathbb{F}_{r^f}/\mathbb{F}_r)$. The group ${\rm A\Gamma L}(1,d)$ is $\langle T_{u,\alpha,\sigma} \mid u \in \mathbb{F}_d$, $\alpha \in \mathbb{F}_d^\#, \sigma \in \mathrm{Gal}(\mathbb{F}_{r^f}/\mathbb{F}_r) \rangle$ where $T_{u,\alpha,\sigma}$ is the permutation of $\mathbb{F}_d$ defined by \begin{equation}\nonumber T_{u,\alpha,\sigma} : x \mapsto \alpha(x^\sigma) + u, \hspace{0.5cm} x\in \mathbb{F}_d. 
\end{equation} The permutation group ${\rm A\Gamma L}(1,d)$ is $2$-transitive with a regular normal subgroup $$\mathrm{V}_d=\langle T_{u,1,1} \mid u \in \mathbb{F}_d\rangle \cong \mathrm{E}_d$$ and point-stabiliser conjugate to $${\rm \Gamma L}(1,d):=\langle T_{0,\alpha,\sigma} \mid \alpha \in \mathbb{F}_d^\#,\sigma \in \mathrm{Gal}(\mathbb{F}_{r^f}/\mathbb{F}_r) \rangle\cong \mathrm{C}_{d-1}\rtimes\mathrm{C}_f.$$ The point-stabiliser ${\rm \Gamma L}(1,d)$ contains the normal subgroup $${\rm GL}(1,d):=\langle T_{0,\alpha,1} \mid \alpha \in \mathbb{F}_d^\#\rangle\cong \mathrm{C}_{d-1},$$ while $${\rm AGL}(1,d):= \langle T_{u,\alpha,1} \mid u \in \mathbb{F}_d, \alpha \in \mathbb{F}_d^\# \rangle = \mathrm{V}_d \rtimes{\rm GL}(1,d)\cong \mathrm{E}_d\rtimes \mathrm{C}_{d-1},$$ is a $2$-transitive normal subgroup of ${\rm A\Gamma L}(1,d)$. In the following omnibus proposition, we collect a few results concerning $2$-transitive subgroups of ${\rm A\Gamma L}(1,d)$. \begin{proposition}\label{BigProp} Let $d=r^f$ be a power of a prime $r$, let $L$ be a $2$-transitive subgroup of ${\rm A\Gamma L}(1,d)$ and let $X=L\cap {\rm AGL}(1,d)$. The following hold: \begin{enumerate} \item $X=\mathrm{V}_d\rtimes X_0$; \label{newnew3} \item $X_0$ is a subgroup of ${\rm GL}(1,d)$ of index at most $f$; \label{newnew} \item every element of ${\rm \Gamma L}(1,d)\setminus{\rm GL}(1,d)$ has order at most $\frac{d-1}{f}$ unless $r=2$ and $2\leq f\leq 6$; \label{newLemma} \item every element of $L_0$ has order at most $|X_0|$; \label{newnew2} \item if $\mathbb{F}$ is a field with $\mathrm{char}(\mathbb{F}) \nmid |X|$ and $n$ is a positive integer such that $X \lesssim {\rm GL}(n,\mathbb{F})$ then $n\geqslant |X_0|$. \label{AGL} \end{enumerate} Moreover, if $(\Gamma,G)$ is a locally $L$-pair, $r$ and $f$ are coprime and $v$ is a vertex of $\Gamma$ then \begin{enumerate} \setcounter{enumi}{5} \item $G_v$ contains a subgroup isomorphic to $X$. 
\label{agl subgroup} \end{enumerate} \end{proposition} \noindent \textit{Proof.} We prove each claim in order. \begin{enumerate} \item The claim is clearly true if $d=4$ and thus we assume that $d\neq 4$. Since $L$ is soluble and $2$-transitive, its socle ${\rm soc}(L)$ is elementary abelian and transitive, and thus regular with order $r^f$. We first show that ${\rm soc}(L)=\mathrm{V}_d$. Suppose for a contradiction that ${\rm soc}(L) \neq \mathrm{V}_d$. Note that ${\rm soc}(L)/({\rm soc}(L) \cap V_d)\cong {\rm soc}(L) \mathrm{V}_d / \mathrm{V}_d\leq {\rm A\Gamma L}(1,d)/\mathrm{V}_d\cong {\rm \Gamma L}(1,d)$. Since the Sylow $r$-subgroups of ${\rm \Gamma L}(1,d)$ are cyclic, we have $|{\rm soc}(L) \cap V_d | = r^{f-1}$. On the other hand, there exists $vt \in {\rm soc}(L)$ with $v \in \mathrm{V}_d$ and $1\neq t\in {\rm \Gamma L}(1,d)$. Since $vt$ has order $r$, $t$ must have order $r$ and therefore a conjugate $t'$ of $t$ in ${\rm \Gamma L}(1,d)$ lies in $\mathrm{Gal}(\mathbb{F}_{r^f}/\mathbb{F}_r)$. Now both $\mathrm{V}_d$ and ${\rm soc}(L)$ are abelian hence ${\rm soc}(L) \cap \mathrm{V}_d$ is a subgroup of the centraliser $\mathbf{C}_{\mathrm{V}_d}(t)$ and we have $|\mathbf{C}_{\mathrm{V}_d}(t)|=|\mathbf{C}_{\mathrm{V}_d}(t')|$. By the Galois correspondence, $\mathbf{C}_{\mathrm{V}_d}(t')$ is the subfield $\mathbb{F}_{r^{f/r}}$ of $\mathbb{F}_{r^f}$. This yields $r^{f-1} \leqslant r^{f/r}$, which is a contradiction since $d\neq 4$. We have shown that ${\rm soc}(L)=\mathrm{V}_d$. This implies that $\mathrm{V}_d\leq X$ and the result follows. \item By~(\ref{newnew3}), $\mathrm{V}_d\leq L$ and thus $L_0\leq{\rm \Gamma L}(1,d)$ and $X_0=L_0\cap {\rm GL}(1,d)$. Since $|{\rm \Gamma L}(1,d):{\rm GL}(1,d)|=f$, we have $|L_0:X_0|\leq f$. Moreover, $L$ is $2$-transitive hence $|L_0|\geq d-1=|{\rm GL}(1,d)|$ and thus $|{\rm GL}(1,d):X_0|\leq f$. 
\item Let $x$ be a generator of ${\rm GL}(1,d)$, let $\sigma$ be a generator of $\mathrm{Gal}(\mathbb{F}_d/\mathbb{F}_r)$ and let $y=T_{0,1,\sigma}$. Now ${\rm \Gamma L}(1,d)=\langle x\rangle\rtimes\langle y\rangle$ where the action of $\langle y \rangle$ on $\langle x \rangle$ is the action of the Galois group of the field extension $\mathbb{F}_{r^f}/\mathbb{F}_r$ on $\mathbb{F}_d^\#$. We will show that any element of ${\rm \Gamma L}(1,d)\setminus\langle x\rangle$ has order at most $\frac{d-1}{f}$. Let $z$ be such an element and write $z=x'y'$ with $x' \in \langle x \rangle$ and $y' \in \langle y \rangle$. Note that $\langle z \rangle \cap \langle x \rangle$ is centralised by $z$ and $x'$ and thus by $y'$. Let $e=|y'|$ and $k=\frac{f}{e}$. By the Galois correspondence, the elements of $\mathbb F_d$ that are fixed by $y'$ are precisely those in the subfield $\mathbb F_{r^k}$. It follows that $|\langle z \rangle \cap \langle x \rangle|$ divides $r^k-1$ and hence $$ |z | = |\langle z \rangle : \langle z \rangle \cap \langle x \rangle|| \langle z \rangle \cap \langle x \rangle| = |y'| | \langle z \rangle \cap \langle x \rangle| \leq e(r^{k}-1).$$ Since $f=ek$ and $e\geqslant 2$, it is an easy exercise to show that $e(r^{k}-1)\leq \frac{r^f-1}{f}=\frac{d-1}{f}$ unless $r=2$ and $f\leq 6$. \item By~(\ref{newnew}), $\frac{d-1}{f}\leq |X_0|$ and thus the claim follows by~(\ref{newLemma}) unless $r=2$ and $2\leq f\leq 6$. In the latter case, the claim can be checked by computer (for example, with the help of {\sc Magma}~\cite{Magma}). \item Let $U$ be the natural ${\rm GL}(n,\mathbb{F})$-module considered as an $X$-module. Note that $X$ is a Frobenius group with kernel $\mathrm{V}_d$ and complement $X_0$. Since the characteristic of $\mathbb{F}$ does not divide $|X|$, Maschke's Theorem~\cite[Theorem 1.9]{Isaacs} implies that $U$ is a completely reducible $X$-module. 
Moreover, since $\mathrm{V}_d$ acts non-trivially on $U$, we have $U=\mathbf{C}_U(\mathrm{V}_d)\oplus W$, where $\mathbf{C}_U(\mathrm{V}_d)$ is the submodule of $U$ fixed by every element of $\mathrm{V}_d$ and $W$ is a non-zero submodule of $U$. Now $\mathbf{C}_W(\mathrm{V}_d)=0$ and hence we may apply \cite[Theorem 15.16]{Isaacs}, which shows that the dimension of $W$ is divisible by $|X_0|$. The result follows. \item Let $u$ be a neighbour of $v$ in $\Gamma$ and let $G_{uv}^{[1]}=G_u^{[1]}\cap G_v^{[1]}$. Note that $L\cong G_v/G_v^{[1]}$. In particular, if $G_v^{[1]}=1$, then the result is immediate. We therefore assume that $G_v^{[1]} \neq 1$ and thus $d\geq 3$. If $d=3$ then ${\rm A\Gamma L}(1,d)={\rm AGL}(1,d)\cong{\rm Sym}(3)$ and thus $L=X\cong{\rm Sym}(3)$ and the result follows from \cite{djokmiller}. Since $r$ and $f$ are coprime, we may thus assume that $d\geqslant 5$. By \cite[Theorem (ii)]{weissp}, we have $G_{uv}^{[1]}= 1$. In particular, $G_v^{[1]}$ is isomorphic to a subgroup of $G_{uv}/G_u^{[1]}$, and the latter group is itself isomorphic to a subgroup of ${\rm \Gamma L}(1,d)\cong \mathrm{C}_{d-1}\rtimes\mathrm{C}_f$. Now $f$ and $d-1$ are coprime to $r$, hence $|G_v^{[1]}|$ is coprime to $r$. Let $R$ be a Sylow $r$-subgroup of $G_v$. Since the order of $G_v^{[1]}$ is coprime to $r$, we see that $R G_v^{[1]}/G_v^{[1]}$ is a Sylow $r$-subgroup of $G_v/G_v^{[1]}$. Thus $R G_v^{[1]}$ is a normal subgroup of $G_v$. We claim that $R$ is normal in $G_v$. Since $G_u^{[1]}$ and $G_v^{[1]}$ are normal subgroups of $G_{uv}$, it follows that $[G_v^{[1]},G_u^{[1]}] \leqslant G_{uv}^{[1]}=1$. Let $T$ be the normal closure in $G_v$ of $G_u^{[1]}$ and observe that $[T, G_v^{[1]} ] = 1$. Since $T$ is normal in $G_v$ and $T\nleqslant G_v^{[1]}$ (for otherwise $G_u^{[1]} \leqslant G_v^{[1]}$ and this yields $G_v^{[1]}=1$), by the quasiprimitivity of $L$, we have $R G_v^{[1]} \leqslant T G_v^{[1]}$. 
Since $|TG_v^{[1]}:T|$ divides $|G_v^{[1]}|$, which is coprime to $r$, $T$ contains a Sylow $r$-subgroup of $TG_v^{[1]}$. The normality of $T$ in $G_v$ implies that $T$ contains every Sylow $r$-subgroup of $TG_v^{[1]}$. It follows that $R \leqslant T$ and thus $R$ centralises $G_v^{[1]}$. Now $R G_v^{[1]} = R\times G_v^{[1]}$, thus $R$ is characteristic in $R G_v^{[1]}$, and therefore $R$ is normal in $G_v$. Moreover $G_v = R \rtimes G_{uv}$ since $|G_{uv}|=|G_{uv}:G_v^{[1]}||G_v^{[1]}|$ is coprime to $r$. Since $G_{uv}^{[1]}=1$ we see that $G_{uv}$ is isomorphic to a subgroup of $G_{uv}/G_v^{[1]} \times G_{uv}/G_u^{[1]}$ where $$G_{uv}/G_v^{[1]} \cong G_{uv}/G_u^{[1]}\cong L_0.$$ Note that $G_{uv}$ projects onto $L_{0}$ in both coordinates of the direct product. Let $\pi: G_{uv} \to L_0$ be the projection onto the first coordinate and let $g$ be an element of $G_{uv}$ of minimal order such that $\pi(g)$ generates $X_0$. Write $g=(x,g_2)$ with $g_2\in L_0$. By~(\ref{newnew2}), $g_2$ has order at most $|X_0|$ and thus $g$ has order $|X_0|$. It follows that $\langle R,g\rangle=R \rtimes\langle g\rangle\cong \mathrm{V}_d\rtimes X_0=X$. \hfill \qed \end{enumerate} \vspace{0.2cm} To complete the case when $L$ is soluble, we will also need the following. \begin{lemma} \label{el ab sections of glnp} If $r^f$ is a power of a prime $r$, then $\lambda(\mathrm{E}_{r^f})\geqslant \frac{2f}{3}$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:new}, $\mathrm{E}_{r^f}$ is involved in a finite $r$-subgroup $R$ of ${\rm GL}(n,\mathbb C)$, where $n=\lambda(\mathrm{E}_{r^f})$. In particular, there is an integer $f'$ such that $f\leqslant f'$ and $R/\Phi(R) \cong \mathrm{E}_{r^{f'}}$. It then follows by \cite[Theorem~A]{isaacsrank} that $f \leqslant f' \leqslant \frac{3n}{2}$, as required. \end{proof} For the insoluble case, we prove the following result.
\begin{lemma}\label{lem:E(H)} There exists an increasing function $I : \mathbb{N} \rightarrow \mathbb{N}$ such that, if $H$ is a finite insoluble affine $2$-transitive group then $|H| \leqslant I(\lambda(H))$. \end{lemma} \begin{proof} Let $n \in \mathbb N$. We will show that there is an upper bound on $|H|$ as $H$ runs over the finite insoluble affine $2$-transitive groups with $\lambda (H) \leqslant n$. This will allow us to define $$I(n) = \max\{|H| : H \textrm{ finite insoluble affine $2$-transitive, } \lambda(H) \leqslant n \}$$ with the required properties. Let $H$ be such a group, let $R(H)$ be the soluble radical of $H$ and let $T(H)$ be the socle of $H/{R(H)}$. By \cite[Theorem 6.1]{Hering}, $T(H)$ is a non-abelian simple group. We have $\lambda(T(H))\leq\lambda(H)\leq n$, and it follows by Lemma~\ref{simple} that $|T(H)|\leqslant J(n)$. By~\cite[Corollary 6.3]{Hering}, for a given finite non-abelian simple group $T$, there are only finitely many finite $2$-transitive groups $H$ with $T(H) \cong T$. This concludes the proof. \end{proof} For a positive integer $n$, let \begin{equation}\label{h defn} h(n)=\max\{I(n), 23^2,(3n/2)^{3n/2}\}. \end{equation} It was shown in~\cite{weissp} that affine $2$-transitive groups are graph-restrictive. Hence, in the hypothesis of the following theorem, $c(L)$ is well-defined. \begin{theorem}\label{theo:affine} Let $k$, $n$ and $d$ be positive integers with $d> h(n)$. Let $(\Gamma,G)$ be a locally-$L$ pair such that $\Gamma$ has order $kp^n$ for some prime $p$, $L$ has degree $d$ and is $2$-transitive. If $L$ is affine, then $p\leqslant kc(L)$. \end{theorem} \begin{proof} Since $L$ is affine, $d$ is a prime power, say $d=r^f$ for some prime $r$. We assume for a contradiction that $p> kc(L)$. Let $v$ be a vertex of $\Gamma$. By Lemma~\ref{mainlemma}, $|G_v|$ is coprime to $p$ and $G_v$ is isomorphic to a subgroup of ${\rm GL}(n,p)$. In particular, $\lambda(L)\leqslant\lambda(G_v)\leqslant n$. 
We first assume that $L$ is soluble. Since $d>23^2$, it follows by \cite[XII, 7.3]{huppertiii} that $ L\leqslant {\rm A\Gamma L}(1,d)$. Let $X= L\cap {\rm AGL}(1,d)$. If $r>f$ then, by Proposition~\ref{BigProp}(\ref{agl subgroup}), $G_v$ contains a subgroup isomorphic to $X$ and thus so does ${\rm GL}(n,p)$. By Proposition~\ref{BigProp}(\ref{AGL}), this implies $n\geqslant |X_0|$. Finally, Proposition~\ref{BigProp}(\ref{newnew}) yields $|X_0|\geqslant\frac{d-1}{f}$ and thus $n\geqslant \frac{d-1}{f}\geq\frac{d-1}{\log_2(d)}$, contradicting the fact that $d>\max\{23^2,(3n/2)^{3n/2}\}$. We may thus assume that $r\leqslant f$. Since the group $\mathrm{E}_d=\mathrm{E}_{r^f}$ is involved in $L$, it is involved in ${\rm GL}(n,p)$ and hence Lemma~\ref{el ab sections of glnp} gives $f\leqslant 3n/2$ and thus $d=r^f\leqslant f^f\leqslant (3n/2)^{3n/2}$, contradicting the fact that $d> h(n)$. We may thus assume that $L$ is insoluble. Lemma~\ref{lem:E(H)} implies that $d \leqslant |L|\leqslant I(\lambda(L)) \leqslant I(n)$, contradicting the fact that $d > h(n) \geqslant I(n)$. This final contradiction yields that $p\leqslant kc(L)$. \end{proof} \section{Proof of Theorem~\ref{main}}\label{sec:proof} By \cite[Theorem 1.4]{trofweiss1} $2$-transitive groups are graph-restrictive. Hence we may define $c(d)$ to be the maximum of $c(L)$ as $L$ runs over the $2$-transitive groups of degree $d$. Let $J$ and $h$ be as in (\ref{j defn}) and (\ref{h defn}), respectively, and, for a positive integer $n$, let $g(n)=\max\{J(n),h(n)\}$. The proof now follows by applying Theorems~\ref{theo:nonaffine} and~\ref{theo:affine}. \hfill \qed \begin{remark} By consulting the references, it is possible to explicitly compute the functions $c$ and $g$ defined above. Although one can find better bounds than the ones given, we choose not to attempt to optimise these functions, being satisfied merely with their existence. 
\end{remark} \noindent\textsc{Acknowledgements.} We are grateful to Michael Giudici for pointing out a mistake in an earlier version of this paper.
\section{INTRODUCTION} The stellar disks of spiral galaxies can extend to many scale lengths (e.g. Davidge 2006; Pohlen \& Trujillo 2006), and the stars that populate the peripheral regions of disks likely have a range of origins. Some of the stars at large radii probably formed {\it in situ}. Ultraviolet light concentrations that trace young stellar regions are seen at large radii in some nearby spiral galaxies (e.g. Gil de Paz et al. 2008; Zaritsky \& Christlein 2007). While the density of interstellar material at large radii tends to be too low to trigger large-scale star formation, localized density enhancements may occur as a result of compression from spiral density waves (e.g. Bush et al. 2008). The presence of a dark baryonic component in the disk plane could also enable star formation in areas where the gas density may otherwise appear to be too low (Revaz et al. 2009). Some fraction of the stars in the outer disk are probably migrants from smaller radii, as recent studies have shown that secular processes (e.g. Roskar et al. 2008) and radial mixing induced by interactions (e.g. Quillen et al. 2009) can contribute significantly to the stellar contents of the outer regions of disks. The distribution of stars in the nearest spiral galaxies provides clues about the processes that populate the outermost regions of disks. Star-forming activity at large radii produces distinct signatures in galactic light and color profiles (e.g. Sanchez-Blazquez et al. 2009). As for secular effects, the processes that re-distribute stars throughout disks act in a cumulative manner on stellar orbits, with the result that stars that have moved the furthest from their places of birth will tend to be the oldest -- stars with progressively older ages may thus be found at progressively larger galactocentric distances (e.g. Roskar et al. 2008), in contradiction to what might naively be expected due to inside-out disk formation.
The present letter is part of a larger study of young and intermediate age stars throughout the disk of M33 (Davidge et al. 2011, in preparation). The entire dataset consists of five MegaCam pointings, and here we examine the distribution of stars in an area that includes two of the most remote star-forming complexes in M33. A distance modulus of 24.93 (Bonanos et al. 2006) is adopted. Recent distance modulus estimates for M33 show a spread of a few tenths of a magnitude, and the Bonanos et al. value was selected because it is based on eclipsing binaries, which are a primary distance indicator. \section{OBSERVATIONS, REDUCTIONS, \& PHOTOMETRIC MEASUREMENTS} The data were recorded as part of the 2009B MegaCam (Boulade et al. 2003) observing queue on the Canada-France-Hawaii Telescope (CFHT). The MegaCam detector is a mosaic of 36 $2048 \times 4612$ E2V CCDs that are deployed in a $4 \times 9$ format. A $0.96 \times 0.94$ degree$^2$ area is imaged with 0.18 arcsec pixel$^{-1}$ sampling. Five 150 sec exposures were recorded in $g'$, and ten 440 sec exposures were obtained in $u'$. The initial processing of the data, which included bias subtraction and flat-fielding, was done with the ELIXER pipeline at the CFHT. The ELIXER-processed images were aligned, stacked, and then trimmed to the area of common exposure time. This paper deals with objects in a single $18 \times 13.5$ arcmin$^2$ field that is centered at 01:34:43 Right Ascension and 31:22:00 Declination (J2000), and samples the disk of M33 at a galactocentric distance of 8.3 kpc, or 4 disk scale lengths. The southern half of this area, which includes the northern spiral arm and two young stellar complexes, is shown in Figure 1. Stars in the final images have FWHM $\sim 0.9$ arcsec. The brightnesses of individual stars were measured with the PSF-fitting routine ALLSTAR (Stetson \& Harris 1988).
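The adopted distance modulus fixes the conversion between angular and projected physical scales used throughout this letter; the following is a minimal sketch of the arithmetic (plain Python; the 206265 arcsec-per-radian constant is standard, not taken from the paper):

```python
mu = 24.93                         # adopted distance modulus (Bonanos et al. 2006)
d_pc = 10 ** (mu / 5.0 + 1.0)      # mu = 5*log10(d/pc) - 5  ->  d ~ 0.97 Mpc

# projected scale, small-angle approximation (206265 arcsec per radian)
pc_per_arcsec = d_pc / 206265.0    # ~4.7 pc per arcsec

print(round(d_pc / 1.0e3), "kpc")  # distance to M33
```

At this distance the 18 arcmin field width corresponds to roughly 5 kpc in projection, consistent with the 8.3 kpc galactocentric distance quoted for the field centre.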
The photometric calibration was defined using zeropoints and transformation coefficients computed from standard star observations that were recorded during 2009B. Sources that depart from the trend between magnitude and the photometric error computed by ALLSTAR, which tend to be non-stellar in appearance (e.g. Davidge 2010), were removed from the photometric catalogue. \section{RESULTS} The $(u', g'-u')$ CMDs of stars in the `Spiral Arm' and `Outer Disk' areas indicated in Figure 1 are shown in Figure 2. The `Spiral Arm' region covers the diffuse distribution of stars in the northern spiral arm, while the `Outer Disk' area covers the remainder of the field. There is a large number of bright main sequence stars in the Spiral Arm CMD, and a comparison with Z = 0.004 isochrones from Girardi et al. (2004) indicates that stars with ages from $\leq 10$ Myr to $\geq 100$ Myr are detected. This metallicity was selected based on the oxygen abundance seen throughout much of the M33 disk (e.g. Magrini et al. 2010), although the predicted locus of the upper main sequence is not sensitive to the adopted metallicity. The main sequence in the Outer Disk CMD is much less pronounced than in the Spiral Arm CMD; still, that modest numbers of young and intermediate age main sequence stars are present indicates that a diffusely distributed, young stellar component occurs outside of the main body of the spiral arm. The locations on the sky of stars in three areas of the $(u', g'-u')$ CMD, marked in Figure 2, are shown in Figure 1. There are obvious age-related differences, in the sense that the stars in the 10 Myr sample tend to group together more than those in the 40 and 100 Myr samples. Even though the stellar distribution becomes more diffuse towards older ages, the northern spiral arm can still be identified in the 100 Myr sample. While not shown here, the overall distribution of older samples (e.g. 
those with $u'$ between 24.5 and 25.5, and $u'-g'$ between 0 and 1, which have an age $\sim 200$ Myr), is even more diffuse. The extent of clustering \footnote[2]{In this paper, clustering refers to the grouping of stars over a range of spatial scales. This includes, but is not restricted to, objects that are in `star clusters', which typically subtend only a few parsecs, and so are not resolved with these data.} can be quantified by examining the angular separations between star--star pairs, and we refer to the histogram distribution of all possible pairings as the star--star separation function (S3F). The S3F is based on a simple observable -- an angular measurement on the sky -- and yields information about the large scale distribution of objects that can be difficult to quantify by eye. The gaps between the CCDs do not significantly affect the S3Fs, as these amount to only $\sim 4\%$ of the field covered. We first consider the S3F of sources in the northern half of the field. This area is well offset from the disk, and contains a mix of halo stars and unresolved galaxies, but few -- if any -- young or intermediate age stars belonging to M33. Thus, this area serves as a control for investigating the distribution of objects at smaller radii. The S3F of objects in the northern half of the field that fall within the 100 Myr region of the CMD is shown in the bottom panel of Figure 3. The gradual drop-off in the separation frequency at separations $>450$--$500$ arcsec is due to the finite size of the area sampled. The S3F is not symmetric because the field is not square. The inflexion point of the S3F of uniformly distributed objects will occur at a scale that is roughly one half the length of the shortest axis of the area examined, which is 425 arcsec (2 kpc), and this matches approximately the inflexion point in the bottom panel of Figure 3.
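The S3F itself is straightforward to compute. The sketch below (plain Python with NumPy; the function name and the toy star positions are illustrative rather than taken from the data) histograms all pairwise angular separations, counting each pair once, and contrasts a compact stellar complex with a uniform field:

```python
import numpy as np

def s3f(x_arcsec, y_arcsec, bin_width=20.0, max_sep=800.0):
    """Star-star separation function: histogram of all pairwise angular
    separations (flat-sky approximation), with each pair counted once."""
    x = np.asarray(x_arcsec, float)
    y = np.asarray(y_arcsec, float)
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    sep = np.hypot(dx, dy)[np.triu_indices(len(x), k=1)]
    return np.histogram(sep, bins=np.arange(0.0, max_sep, bin_width))

rng = np.random.default_rng(1)
# a compact 'complex' (sigma ~ 30 arcsec) versus a uniform field
cx, cy = rng.normal(0.0, 30.0, 200), rng.normal(0.0, 30.0, 200)
fx, fy = rng.uniform(0.0, 800.0, 200), rng.uniform(0.0, 800.0, 200)
clump_counts, _ = s3f(cx, cy)
field_counts, _ = s3f(fx, fy)
# the clump piles up pairs at small separations, while the field S3F
# ramps up towards roughly half the field size, as in the control region
```

At the adopted distance of M33, separations in arcsec convert to projected distances at roughly 4.7 pc per arcsec, so 150 arcsec corresponds to $\sim 0.7$ kpc.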
The S3Fs of main sequence stars in the southern half of the MegaCam sub-panel are shown in the top three panels of Figure 3. These S3Fs clearly differ from the S3F of sources in the northern half of the MegaCam data, due to stellar grouping in M33. The S3F of the 10 Myr sample contains substantial signal at separations $r < 250$ arcsec ($< 1.1$ kpc), and the width of the peak at small separations indicates that the youngest stars are grouped on scales $< 150$ arcsec ($< 700$ pc). This is comparable to the dimensions of star-forming complexes in nearby galaxies (e.g. Efremov 1995), as well as the scale of coherent star-formation in M33 (Sanchez et al. 2010). There is a second peak in the 10 Myr S3F near $r \sim 450$ arcsec (2.1 kpc), and this occurs because the two young stellar concentrations in this field beat against each other in the separation measurements; the separation between the two clumps in the upper right hand panel agrees with that between the two most prominent clumps in Figure 1. When compared with the 10 Myr S3F, clustering signatures are broader and have a smaller amplitude in the S3F of the 40 Myr sample. The majority of stars in the 40 Myr sample are separated by distances up to at least 300 arcsec (1.4 kpc). Even at this comparatively young age, stars have moved distances that are large enough to significantly blur clustering signatures in the S3F. There is a more-or-less uniform signal in the 100 Myr S3F between 150 and 400 arcsec (0.7 and 1.9 kpc), with no evidence of clustering at separations $< 150$ arcsec ($< 0.7$ kpc) -- large stellar complexes evidently dissipate over $\sim 100$ Myr timescales in this part of M33. In fact, the width of the ramp-up in the 100 Myr S3F suggests that the minimum star-star separation is typically $\sim 50$ arcsec, or $\sim 0.25$ kpc, for stars of this age, while the onset of the plateau in the S3F suggests that the typical star-star separation is at least 150 arcsec, or $\sim 0.7$ kpc.
\section{DISCUSSION \& SUMMARY} The star-star separation function (S3F) has been used to investigate the projected distribution of main sequence stars in the northern disk of M33. Two young stellar complexes produce significant signal in the S3F of stars with ages $\sim 10$ Myr at separations $r < 150$ arcsec ($d < 0.7$ kpc). However, signatures of clustering are greatly diminished among stars with ages $\sim 40$ Myr, and the smooth S3F of stars with ages $\sim 100$ Myr suggests that there is little if any large-scale clustering among these stars. Thus, large scale stellar structures in this part of M33 evidently dissipate over time scales $\leq 100$ Myr. Stellar complexes in the outer regions of disks may be subjected to disruption mechanisms that differ from those in the main body of the disk. There is evidence for heating by halo structures in the outer regions of nearby galaxies (e.g. Martin \& Kennicutt 2001), and dynamical measurements suggest that halo bombardment becomes a significant source of heating at 4 disk scale lengths in nearby spirals (Herrmann et al. 2009), and this is the part of the M33 disk that we examine here. The broad, evenly distributed signal in the 100 Myr S3F between 150 and 450 arcsec (0.7 -- 2.1 kpc) results from random motions on the order of $\sim 20$ km sec$^{-1}$, and this is comparable to the outer disk extraplanar motions measured by Herrmann et al. (2009). Putman et al. (2009) find that HI in the outer regions of M33 has a velocity dispersion of 18.5 km sec$^{-1}$, and suggest that this may be a relic of an interaction within the past few Gyr between M31 and M33. Newly formed stellar systems will be more prone to disruption if they have a low star formation efficiency (SFE), as feedback will remove gas early-on, thereby reducing -- perhaps catastrophically -- the gravitational field of the nascent system (Lada \& Lada 2003). A general trend for the SFE to diminish towards larger radii is seen in nearby galaxies (Leroy et al. 
2008). This result is based on measurements made over kpc spatial scales, which are comparable to the sizes of the large structures probed here. Star clusters are sub-structures within the large-scale complexes that are investigated here. The largest disk star clusters in M33 subtend $\leq 2$ arcsec (San Roman et al. 2010), and so fall in the smallest bin in the S3F. The signal in the S3F of the 100 Myr sample in the 0--20 arcsec bin is markedly smaller than in the 10 Myr sample, and if this trend extends to sub-arcsec sizes then this will be consistent with stellar clusters dissipating over $\sim 0.1$ Gyr timescales. In fact, the spatial distribution of star clusters with ages $< 0.1$--$0.3$ Gyr in M33 is more compact than that of stars with the same age (Sarajedini \& Mancone 2007; San Roman et al. 2010), suggesting that young star clusters in M33 dissipate over time spans that are less than a few tenths of a Gyr. The disruption timescale of star clusters depends on a number of factors, including the rate at which remnant gas is removed from the cluster, the local environment, and two-body relaxation (e.g. summary by Elmegreen \& Hunter 2010). Gratier et al. (2010) find that the masses of molecular clouds decrease with increasing radius in M33, and this should result in lower star cluster masses, which in turn may lead to a comparatively rapid disruption timescale for clusters in the outer regions of M33. Lamers et al. (2005a) estimate that a $10^4$ M$_{\odot}$ cluster in the main body of M33 typically disrupts after $\sim 1$ Gyr. Assuming no radial changes in the sources of dynamical heating, the ambient mass mixture that dominates the gravitational field, and the mean SFE within M33, then if the cluster disruption timescale $\propto$ mass$^\gamma$, where $\gamma =$ 0.62 (Baumgardt \& Makino 2003; Lamers et al.
2005b), the majority of stars in the outer disk of M33 formed in clusters with masses $\leq 50$--$250$ M$_{\odot}$ if they are disrupted on timescales of $\sim 100$ Myr. This characteristic cluster mass is roughly two orders of magnitude lower than the peak of the solar neighborhood initial cluster mass function predicted by Parmentier et al. (2008) and Kroupa \& Boily (2002). In fact, this is an upper limit to the initial cluster mass, in the sense that the pace with which clusters dissolve depends on factors such as the local mass density and the initial cluster mass, and a 10$^4$ M$_{\odot}$ cluster in the peripheral regions of the M33 disk would be even longer-lived than predicted by Lamers et al. (2005a). Thus, if star clusters are disrupted over $\sim 0.1$ Gyr timescales in the outer regions of M33, then we predict that the star clusters found there will have (1) young ages, and (2) lower masses than those at smaller radii. We close by noting that the orderly large scale motions induced by secular processes are probably not significant among stars of the age considered here, given that the rotation period of the M33 disk is 200--300 Myr (Corbelli \& Salucci 2000). Rather, stars with ages $\sim 100$ Myr in this part of M33 appear to have obtained random stellar motions that allow them to populate regions up to $\sim 2$ kpc from where they formed. This effectively pushes out the observational boundary of the young disk. \parindent=0.0cm
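The characteristic cluster mass quoted above follows from the adopted scaling alone; a minimal sketch of the arithmetic (plain Python, anchored to the Lamers et al. 2005a timescale, with an illustrative function name):

```python
gamma = 0.62                        # disruption exponent (Baumgardt & Makino 2003)
t_ref_gyr, m_ref_msun = 1.0, 1.0e4  # ~1 Gyr for a 1e4 Msun cluster (Lamers et al. 2005a)

def disruption_mass(t_gyr):
    """Invert t/t_ref = (M/m_ref)**gamma for the cluster mass in Msun."""
    return m_ref_msun * (t_gyr / t_ref_gyr) ** (1.0 / gamma)

# clusters that dissolve within ~100 Myr:
print(round(disruption_mass(0.1)))  # ~240 Msun, at the top of the 50-250 range
```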
\section{Introduction} \label{sec:intro} Quantum mechanics is a statistical theory that defines the probability of measurement outcomes without referring to a fundamental set of possible realities. The original formulation of the theory was based on analogies between the algebra of operators and the algebra of numbers. However, this analogy is somewhat misleading, since individual measurement outcomes are not described by the operators but by their eigenvalues. As a consequence, there is no quantum mechanical equivalent to a phase space point $(x,p)$ because position and momentum do not have common eigenstates. Nevertheless classical mechanics should emerge as a valid approximation of quantum statistics, so it would seem natural to ask how the notion of phase space points can emerge from a theory that does not assign any joint reality to $x$ and $p$. Early attempts to describe the relation between classical phase space statistics and quantum statistics focussed on formal relations that apply specifically to continuous variables and the Fourier transform relation between the eigenstates of position and momentum. Specifically, Wigner showed that the classical phase space distribution could be approximated by a Fourier transform along the anti-diagonal of the spatial density matrix, resulting in a quasi probability expression for the density operator that is now widely known as the Wigner function \cite{Wig32}. Almost immediately after this historic result, Kirkwood pointed out that a similar analogy with classical phase space distributions could be obtained by a more simple Fourier transform applied to only one side of the density matrix \cite{Kir33}. This quasi probability is necessarily complex, but it converges on the same classical limit and also produces the correct marginal distributions for both position and momentum. 
The early history of quasi probabilities thus illustrates the problem of finding a unique definition of joint probabilities in the absence of actual joint measurements. Recent developments in quantum information have seen a more general discussion of quantum mechanics as a statistical theory \cite{Har01,Fuc13}. In the spirit of these discussions, it may be worthwhile to reconsider the concept of joint probability based on the general operator algebra of quantum statistics. Specifically, it may be possible to derive a definition of joint probability from a set of reasonable conditions or axioms that characterize the relation between the joint probabilities and the actual measurement results. In the following, I propose a set of axioms that results in a definition of joint probability which is consistent with the quasi probability introduced by Kirkwood and therefore provides an objective reason for excluding the Wigner function. The essential criterion that eliminates alternative definitions of joint probabilities concerns the relation between physical properties with joint eigenstates: to ensure that the probabilities of outcomes associated with the same joint eigenstate of the two properties are the same in both measurements, the joint probabilities must be defined by a product of projectors that eliminates all states orthogonal to either of the two eigenstates. For all other definitions of joint probability, there will be non-zero joint probabilities for properties that directly contradict the known properties of the input state. It is therefore possible to argue that the product of projection operators is the only valid representation of a logical AND in the quantum formalism, resulting in the definition of a complex valued joint probability that is unique except for the ordering dependent sign of its imaginary part. 
\section{The operator algebra of joint probabilities} \label{sec:jprob} The motivation for a definition of joint probabilities of non-commuting observables can be explained in terms of the calculation of probabilities in the Hilbert space formalism. In Hilbert space, a state is represented by a $d$-dimensional complex vector, where the squared absolute values of the vector components represent the probabilities of measurement outcomes. However, the outcomes of other measurements will depend on the differences between the complex phases of the $d$ components. In the density matrix, these complex phases appear in the off-diagonal elements. In general, the probability of a measurement outcome $m$ is therefore given by a sum over all matrix elements of the density operator $\hat{\rho}$ and the measurement operator $\hat{\Pi}(m)$, as given by the product trace \begin{eqnarray} \label{eq:Pm1} P(m) &=& \mbox{Tr}\left( \hat{\Pi}(m) \hat{\rho} \right) \nonumber \\ &=& \sum_{a,a^\prime} \langle a \mid \hat{\Pi}(m) \mid a^\prime \rangle \langle a^\prime \mid \hat{\rho} \mid a \rangle. \end{eqnarray} If $a$ and $a^\prime$ referred to different properties, one could identify the matrix elements of $\hat{\rho}$ with joint probabilities and the matrix elements of $\hat{\Pi}(m)$ with conditional probabilities, and this analogy is probably behind the somewhat irritating claim that superposition assigns simultaneous reality to different and distinct values of the same property (the particle is ``simultaneously'' here and there, or the cat is ``both'' dead and alive). However, the off-diagonal elements do not appear in the measurement statistics of $a$ at all -- they are only relevant for measurements of a different property $b$.
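The product trace in Eq.~(\ref{eq:Pm1}) is easy to verify numerically. The following sketch (plain Python with NumPy; the qubit state and projector are arbitrary illustrations) confirms that the trace equals the double sum over matrix elements, and that the off-diagonal elements of $\hat{\rho}$ drop out for a measurement in the $a$-basis:

```python
import numpy as np

# a qubit in an equal superposition of the a-basis states
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())            # density operator, off-diagonals = 1/2

# projective measurement outcome m = |0><0| in the a-basis
Pi_m = np.diag([1.0, 0.0]).astype(complex)

P_trace = np.trace(Pi_m @ rho).real
P_sum = sum(Pi_m[a, ap] * rho[ap, a]       # sum over <a|Pi|a'><a'|rho|a>
            for a in range(2) for ap in range(2)).real
# both give 1/2; since Pi_m is diagonal in the a-basis, only the diagonal
# elements of rho contribute to this probability
```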
It would therefore seem natural to express the density operator in terms of a joint probability of $a$ and $b$, so that general measurement probabilities could be expressed in closer analogy to classical statistics as \begin{equation} \label{eq:Pm2} P(m) = \sum_{a,b} P(m|a,b) \rho(a,b). \end{equation} Note that the number of matrix elements and the number of joint probabilities are both given by the square of the Hilbert space dimension, $d^2$. Thus, the algebra of Hilbert space matrices is very similar to the algebra of joint and conditional probabilities. All it takes to make the connection is a transformation of the matrix representation into a joint probability representation. In general, this transformation can be represented by an operator $\hat{\Pi}(a,b)$ that assigns a joint probability $\rho(a,b)$ to the density operator $\hat{\rho}$ through the product trace, \begin{equation} \rho(a,b) = \mbox{Tr}\left( \hat{\Pi}(a,b) \hat{\rho} \right). \end{equation} The construction of the operator $\hat{\Pi}(a,b)$ defines the joint probabilities $\rho(a,b)$. However, a meaningful definition of joint probabilities must satisfy a number of criteria that motivate the specific choice of $\hat{\Pi}(a,b)$ in terms of reasonable assumptions about the relation between the projective measurements of $a$ and $b$. In the following, I will formulate such a set of reasonable assumptions and show that they narrow down the mathematical possibilities for a definition of $\hat{\Pi}(a,b)$ to products of the projection operators. \section{Reasonable requirements} \label{sec:cond} The first obvious requirement of joint probabilities is that they should correctly describe the individual probabilities of $a$ and of $b$ observed in separate measurements of the two observables. Since the measurement operators of these measurements are given by the projectors of $a$ and of $b$, this condition can be applied directly to the operator algebra of $\hat{\Pi}(a,b)$.
\newtheorem{cond}{Condition} \begin{cond} The marginals of the joint probabilities correspond to the probabilities of separate measurements of $a$ and $b$, \begin{eqnarray} \sum_{b} \hat{\Pi}(a,b) &=& \mid a \rangle \langle a \mid, \nonumber \\ \sum_{a} \hat{\Pi}(a,b) &=& \mid b \rangle \langle b \mid. \end{eqnarray} \end{cond} Next, it is useful to consider a situation where we have some confidence about the correct joint probability -- specifically, the case where the input state $\hat{\rho}$ is an eigenstate of one of the observables with an eigenvalue of $a_\psi$ or $b_\psi$. In that case, it is reasonable to assume that the joint probabilities are zero for all other values of $a$ or $b$, so that the joint probability is given by the marginal probabilities $|\langle a \mid b \rangle|^2$. \begin{cond} Joint probabilities for input states with a precisely known value of $a$ or $b$ are zero for any other value of that observable, \begin{eqnarray} \langle a_\psi \mid \hat{\Pi}(a,b) \mid a_\psi \rangle &=& \delta_{a,a_\psi}|\langle a \mid b \rangle|^2, \nonumber \\ \langle b_\psi \mid \hat{\Pi}(a,b) \mid b_\psi \rangle &=& \delta_{b,b_\psi}|\langle a \mid b \rangle|^2. \end{eqnarray} \end{cond} It may seem that this requirement is rather trivial, but it does eliminate all contributions to $\hat{\Pi}(a,b)$ that never show up in the marginal probabilities of $a$ or of $b$ because the sums over either $a$ or $b$ are all zero. It is rather easy to construct such artifacts, e.g. by adding and subtracting an arbitrary operator to each $\hat{\Pi}(a,b)$, so that there are equal numbers of additions and subtractions in each row or column defined by constant $a$ or $b$. Effectively, these constructions will introduce correlations into the joint probabilities even when one of the properties does not have any fluctuations that could be correlated to the other property. Thus, condition 2 could be summarized as ``no correlation without fluctuation''.
Importantly, the second condition refers only to the specific sets of outcomes $\{a\}$ and $\{b\}$ that define the complete probability distribution. It is possible to formulate a more general condition that actually includes the second condition as a specific case by considering possible superpositions of a finite subset of $a$ ($b$). In this case, the input state $\mid m \rangle$ can be distinguished from the eigenstates of $a$ ($b$) by a projective measurement on a different property that has both $\mid a \rangle$ ($\mid b \rangle$) and $\mid m \rangle$ as eigenstates. We can therefore conclude that knowledge of $m$ excludes the possibility of $a$ ($b$) in the same way that the knowledge of $a_\psi$ excluded the possibilities of other values of $a$. \begin{cond} If the input state is characterized by the eigenvalue $m$ of a property that has a joint measurement outcome $m(a)$ ($m(b)$) with $a$ ($b$) which distinguishes $a$ ($b$) from the input $m$, then the joint probabilities for this measurement outcome $a$ ($b$) must all be zero. \begin{eqnarray} \label{eq:c3} \langle m \mid \hat{\Pi}(a,b) \mid m \rangle = 0 &\; \mbox{\rm if} \;& |\langle a \mid m \rangle|^2=0, \nonumber \\ \langle m \mid \hat{\Pi}(a,b) \mid m \rangle = 0 &\;\mbox{\rm if}\;& |\langle b \mid m \rangle|^2=0. \end{eqnarray} \end{cond} This condition eliminates the possibility that positive and negative joint probabilities for a specific outcome average to zero in the sums that determine the marginal probabilities. Whenever a marginal probability of zero is observed, the joint probabilities for this marginal must all be zero. Note that the reason for this condition relies on the observation that orthogonality of states implies that the states represent different outcomes of the same measurement. If the marginal probability of $a$ is zero, there is a direct experimentally observable contradiction between $a$ and the initial condition $m$, so that $m(a) \neq m$.
Significantly, the third condition is violated by the Wigner function, since the Wigner function associates coherences between $x$ and $x^\prime$ with the average position $(x+x^\prime)/2$, which can have a marginal probability of zero. For example, the Wigner function of a particle passing through a double slit has non-zero values at the position between the two slits, where there is not even an opening for the particle to pass through the screen. Thus, despite its usefulness in the evaluation of measurement statistics, the value of the Wigner function for a specific combination of $x$ and $p$ does not originate from the possibility of finding the position $x$ or the momentum $p$ in independent measurements. In general, the third condition is necessary in order to satisfy the expectation that the joint probability of $a$ and $b$ establishes a relation between measurement results that can actually be observed in separate measurements of $a$ and of $b$. Although it is mathematically possible to define joint functions of the quantities $a$ and $b$ that do not satisfy this condition, such functions do not express any relation between the individual outcomes $a$ and $b$ and should therefore not be considered joint probabilities. Since the values of the Wigner function at $x$ can be traced to a quantitative average of pairs of outcomes other than $x$, it does not actually qualify as a joint probability of the single outcome $x$ and the single outcome $p$. We can now apply the requirements and find the specific definition of $\hat{\Pi}(a,b)$ that satisfies all of them. In particular, the third requirement greatly reduces the number of possibilities. Since Eq.(\ref{eq:c3}) applies to all possible states $\mid m \rangle$, the operator $\hat{\Pi}(a,b)$ must assign a value of zero to any state that is orthogonal to either $\mid a \rangle$ or $\mid b \rangle$.
Since such an assignment of zero is only possible by multiplication with the corresponding projection operator, the third condition can only be satisfied if the operator $\hat{\Pi}(a,b)$ is given by a product of the two projection operators. According to condition 1, there can be no additional factors either. Only the choice of the operator ordering is arbitrary. In general, it is possible to choose any linear combination of the two orderings, but the choice of a specific ordering greatly simplifies the mathematical properties of the expression. If the projection on $a$ is applied first, the operator defining the joint probabilities reads \begin{equation} \label{eq:piform} \hat{\Pi}(a,b) = \mid b \rangle \langle b \mid a \rangle \langle a \mid. \end{equation} Since the eigenvalues of the projection operators represent the truth values of the statements associated with their state vectors, the product of two projectors corresponds to the classical definition of a logical AND as the product of two truth values. The definition of joint probabilities using the product of the projection operators is therefore consistent with the original idea that numbers should be replaced by operators. However, the replacement of truth values with projection operators has non-trivial consequences, since the non-commutativity of the two projection operators results in a non-hermitian operator that cannot be interpreted as a projector onto a joint reality of $a$ and $b$. Instead, the quantum mechanical relation between the separate realities of $a$ and $b$ is expressed by a complex valued joint probability obtained from the expectation values of the non-hermitian operator $\hat{\Pi}(a,b)$. In the following, I will point out that complex probabilities of this kind have a long history in quantum physics, perhaps culminating in the realization that they can be obtained experimentally in weak measurements.
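As a concrete numerical illustration of Eq. (\ref{eq:piform}), the following sketch evaluates the complex joint probabilities for a single qubit and checks conditions 1 and 2 directly. The choice of bases ($\sigma_z$ and $\sigma_x$ eigenstates), the input state, and all variable names are illustrative assumptions, not part of the original derivation.

```python
import numpy as np

# Illustrative qubit example: {|a>} = sigma_z eigenbasis,
# {|b>} = sigma_x eigenbasis (this choice of bases and state is an assumption).
a_states = [np.array([1, 0], complex), np.array([0, 1], complex)]
b_states = [np.array([1, 1], complex) / np.sqrt(2),
            np.array([1, -1], complex) / np.sqrt(2)]

psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.7j)])  # input state
rho = np.outer(psi, psi.conj())

def joint_prob(a, b, rho):
    """Complex joint probability Tr[ Pi(a,b) rho ] with
    Pi(a,b) = |b><b|a><a|, as in Eq. (piform)."""
    return np.vdot(b, a) * (a.conj() @ rho @ b)

joint = np.array([[joint_prob(a, b, rho) for b in b_states]
                  for a in a_states])

# Condition 1: the marginals reproduce the Born probabilities
# of separate projective measurements of a and b.
for i, a in enumerate(a_states):
    assert np.isclose(joint[i, :].sum(), np.vdot(a, rho @ a))
for j, b in enumerate(b_states):
    assert np.isclose(joint[:, j].sum(), np.vdot(b, rho @ b))

# Condition 2: an eigenstate input |a_0> gives zero joint
# probability for the other value of a.
rho_a0 = np.outer(a_states[0], a_states[0].conj())
assert np.allclose([joint_prob(a_states[1], b, rho_a0) for b in b_states], 0)

print(joint)  # the entries are genuinely complex in general
```

For non-commuting bases the off-diagonal entries of `joint` acquire imaginary parts, while the marginal sums remain real, which is exactly the behavior the two conditions encode.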
It is then possible to explain the physics expressed by the operator ordering and to consider wider implications for the foundations of quantum physics. \section{Joint probabilities in quantum physics} \label{sec:phys} The discussion above is based entirely on the structure of the Hilbert space formalism and on conditions derived from projective measurements of operator eigenvalues. In particular, it was not based on methods of quantum state reconstruction by tomographically complete sets of measurements, which have often been used as a motivation for the introduction of joint probabilities. It is interesting to note that an expression for joint probabilities can be derived without any reference to joint measurements, only by considering the structure of the operator formalism and its application to separate projective measurements of $a$ and $b$. Since the result given in Eq.(\ref{eq:piform}) is a simple multiplication of projection operators, it appears in the equations of the operator algebra whenever two operators with eigenstates $\{\mid a \rangle\}$ and $\{\mid b \rangle\}$ are multiplied. It is therefore not surprising that the joint probability defined by Eq.(\ref{eq:piform}) has already been studied in other contexts. As mentioned above, its application to position and momentum results in the distribution introduced by Kirkwood in 1933 \cite{Kir33}. The general form for arbitrary pairs of observables was introduced by Dirac in 1945 \cite{Dir45}. These early works have recently attracted renewed attention, since it was discovered that the complex joint probabilities of Kirkwood and Dirac actually describe the results of weak measurements of a projection operator $\mid a \rangle\langle a \mid$ followed by a final measurement of $\mid b \rangle$ \cite{Joh07,Hof12,Lun12,Wu13,Bam14}. Complex joint probabilities therefore have a well-defined operational meaning that directly relates them to sequential measurements of the two non-commuting observables.
It is also significant that the complex joint probabilities completely characterize quantum states and processes. They can therefore be used as a starting point for a fundamental reformulation of quantum physics based on empirical principles \cite{Hof14}. In the present context, it is interesting to note that the relation with weak measurement also explains the dependence of $\hat{\Pi}(a,b)$ on operator ordering: the imaginary part of the weak value actually represents the response of the system to the dynamics generated by the observable \cite{Hof11,Dre12}. Upon time reversal, the direction of the force is inverted and the response changes its sign. It is therefore possible to identify the particular ordering with a temporal sequence and the sign of the imaginary part as the direction of the dynamics generated by the observables. In the formal sense, a specific operator ordering is desirable because it is mathematically convenient. As Kirkwood already noticed in 1933, the joint probability defined by $\hat{\Pi}(a,b)$ simply corresponds to the application of different basis sets to the right and the left side of the density matrix, \begin{equation} \rho(a,b) = \langle b \mid a \rangle \langle a \mid \hat{\rho} \mid b \rangle. \end{equation} The relation between $\rho(a,b)$ and a measurement probability $P(m)$ is then naturally expressed in the form given by Eq.(\ref{eq:Pm2}), where \begin{equation} P(m|a,b) = \frac{\langle b \mid \hat{\Pi}(m) \mid a \rangle}{\langle b \mid a \rangle}. \end{equation} This complex conditional probability happens to be the weak value of the measurement operator $\hat{\Pi}(m)$ for an input state $\mid a \rangle$ and a post-selected state $\mid b \rangle$. It is therefore possible to obtain its value experimentally by a weak measurement of the fundamental relation between the physical properties $a$, $b$, and $m$.
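The identity expressed by these two equations, namely that averaging the complex conditional probabilities $P(m|a,b)$ over the Kirkwood distribution $\rho(a,b)$ recovers the Born-rule probability $P(m)$, can be checked in a few lines. The qubit, the projector $\hat{\Pi}(m)$, and the state below are illustrative assumptions.

```python
import numpy as np

# Bases {|a>}, {|b>} and a projector Pi_m = |m><m| for a third outcome
# (illustrative qubit choice: sigma_z, sigma_x and an oblique state).
a_states = [np.array([1, 0], complex), np.array([0, 1], complex)]
b_states = [np.array([1, 1], complex) / np.sqrt(2),
            np.array([1, -1], complex) / np.sqrt(2)]
m = np.array([np.cos(0.4), np.sin(0.4) * np.exp(0.2j)])
Pi_m = np.outer(m, m.conj())

psi = np.array([0.6, 0.8j])              # input state
rho = np.outer(psi, psi.conj())

def P_m_given_ab(a, b):
    """Complex conditional probability = weak value of Pi_m
    for input |a> and post-selection |b>."""
    return np.vdot(b, Pi_m @ a) / np.vdot(b, a)

def rho_ab(a, b):
    """Kirkwood joint probability <b|a><a|rho|b>."""
    return np.vdot(b, a) * (a.conj() @ rho @ b)

# Averaging the complex conditional probabilities over the complex
# joint distribution recovers the ordinary measurement probability.
P_m = sum(P_m_given_ab(a, b) * rho_ab(a, b)
          for a in a_states for b in b_states)

assert np.isclose(P_m, np.trace(Pi_m @ rho))   # Born rule recovered
```

The imaginary parts of the individual terms cancel exactly in the sum, illustrating how real, observable probabilities emerge from the complex-valued intermediate quantities.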
Since this relation can be applied to any quantum state $\hat{\rho}$, it actually describes the deterministic relation between the properties $(a,b)$ and $m$ \cite{Hof12}. Thus, complex valued conditional probabilities take the place of analytical functions that relate the values of physical properties to each other. Complex conditional probabilities actually represent the most fundamental formulation of the laws of physics, universally valid in both the quantum and the classical regime. It is therefore no accident that the quantum formalism results in a very specific definition of joint probabilities: what seemed to be ambiguities in the physics described by the operator algebra are actually well defined differences between the unjustified expectation of joint realities and the correct relations between different potential realities that are observed in sufficiently precise experiments \cite{Hof14}. \section{Conclusions} \label{sec:concl} The analysis above has shown that a relatively small set of reasonable assumptions can narrow down the possible definitions of joint probabilities for two non-commuting observables to the complex joint probabilities obtained from products of the two projection operators. Any other definition of joint probabilities would introduce non-zero probabilities for events that are never observed under the conditions described by the quantum state in question. It seems to be significant that no other quasi-probabilities can satisfy these simple requirements. The conclusion appears to be that the standard formalism of quantum mechanics is much more specific regarding the precise relations between non-commuting properties than the conventional textbook discussions of uncertainty and superpositions suggest.
Ultimately, the complex joint probabilities obtained by simply multiplying the projection operators and taking the product trace with the density matrix provide an explanation of quantum effects that avoids many of the ambiguities associated with the Hilbert space formulation and may therefore help to clarify the origin of quantum paradoxes and other failures of classical explanations in quantum physics. \section*{Acknowledgment} This work was supported by JSPS KAKENHI Grant Number 24540427.
\section{introduction} As is well known, the Heisenberg model is a simple but realistic and extensively studied solid-state system \cite{3hammar,3eggert}. According to the sign of the interaction intensity $J$, the model can be classified as the ferromagnetic type and the antiferromagnetic one. Based on the interaction intensity along the different space directions, the model can be labelled as the $XXX$, $XXZ$, or $XYZ$ one. Recently, it has been found that the Heisenberg interaction is not limited to spin systems, as it can also be realized in quantum dots \cite{3loss,3burkard}, nuclear spins \cite{3kane}, and cavity QED \cite{3imamoglu,3zhengsb}. Thus, the study of this basic model is of wide interest and application in physics. In the investigation of these models, it is a basic task to get the exact solutions \cite{jin1,zhang1,pan1,pan2,birman1}, i.e., to diagonalize the Hamiltonian. So far, only some special Heisenberg models can be exactly solved, such as the $XXX$ antiferromagnetic model \cite{bethe}. In general, the linear spin-wave \cite{3andersonpw,3kubo,callaway} approximation is widely applied in the study of the Heisenberg models. It is known that the $XXZ$ model in the linear spin-wave frame can be diagonalized by coherent state operators \cite{3xbh,3zwm} of the $su(1,1)$ algebra. However, for the $XYZ$ model in this frame, the method of the coherent states does not work. Therefore, it is necessary for us to develop another algebraic diagonalization method to obtain the energy spectrum by using the algebraic structure of the model. In this Letter, we review the $su(1,1)$ coherent states of the $XXZ$ antiferromagnetic model in the linear spin-wave frame. Then, the $XYZ$ antiferromagnetic model in the linear spin-wave frame is written in terms of the generators of the $su(1,2)$ algebra. Finally, the energy eigenvalues are obtained by the algebraic diagonalization method, and some numerical solutions are given and discussed.
\section{$XXZ$ antiferromagnetic model and $su(1,1)$ coherent states} The Hamiltonian of the $XXZ$ antiferromagnetic model reads: \begin{equation} \label{hxxz} H_{XXZ}=-J\sum_{<i,j>}(S_i^xS_j^x+S_i^yS_j^y+\eta S_i^zS_j^z)\;\;\; (J<0), \end{equation} where the notation $<i,j>$ denotes the nearest neighbor bonds. Starting from the two-sublattice model and the Holstein-Primakoff transformation \cite{3holstein}: \begin{eqnarray} \label{h-pt} S_a^z&=&-s+a^{\dag}a,\;\;\;\;\;\;\;\;S_b^z=s-b^{\dag}b,\nonumber\\ S_a^{\dag}&=&(2s)^{\frac{1}{2}}(1-a^{\dag}a/2s)^{\frac{1}{2}}a,\;\;S_a^{-}=(S_a^{\dag})^{\dag},\nonumber\\ S_b^{\dag}&=&(2s)^{\frac{1}{2}}b^{\dag}(1-b^{\dag}b/2s)^{\frac{1}{2}},\;\;S_b^{-}=(S_b^{\dag})^{\dag}, \end{eqnarray} where $a^+$ and $a$ ($b^+$ and $b$) can be regarded as the creation and annihilation operators of bosons on sublattice A (sublattice B), respectively, but the particle numbers $n_a=a^+a$, $n_b=b^+b$ cannot exceed $2s$. Because, under low-temperature and low-excitation conditions, $<a^{\dag}a>,<b^{\dag}b>\ll s$, the non-linear interactions in Eq. (\ref{h-pt}) can reasonably be ignored \cite{kittel}. On this basis, transforming the operators into momentum space, we obtain \begin{eqnarray} \label{hsw xxz} H_{XXZ}&=&-2ZsJ(Ns\eta-\sum_{\bf k}H_{\bf k}),\\ H_{\bf k} &=& \eta (a _{\bf k}^{\dag}a_{\bf k} +b_{\bf k}^{\dag}b_{\bf k})+\gamma_{\bf k}(a_{\bf k}b_{\bf k}+a^{\dag}_{\bf k}b^{\dag}_{\bf k}). \end{eqnarray} Here \begin{equation} \gamma_{\bf k}=Z^{-1}\sum_{\bf R} e^{i{\bf k}\cdot \bf R} =\gamma_{-\bf k}, \end{equation} in which $\bf R$ is a vector connecting an atom with its nearest neighbor, and the sum runs over the $Z$ nearest neighbors. $2N$ is the total number of lattice sites.
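The structure factor $\gamma_{\bf k}$ is straightforward to evaluate numerically. The sketch below assumes a simple cubic lattice with unit lattice constant as an illustrative choice; the symmetry $\gamma_{\bf k}=\gamma_{-\bf k}$ follows because the nearest-neighbor vectors come in $\pm{\bf R}$ pairs.

```python
import numpy as np

def gamma_k(k, neighbors):
    """gamma_k = Z^{-1} sum_R exp(i k . R) over the Z nearest neighbors."""
    R = np.asarray(neighbors, float)
    return np.mean(np.exp(1j * R @ np.asarray(k, float)))

# Nearest-neighbor vectors of a simple cubic lattice (lattice constant 1);
# this is an illustrative choice of bipartite lattice.
sc = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
      (0, -1, 0), (0, 0, 1), (0, 0, -1)]

# gamma_0 = 1 and, because R comes in +/- pairs, gamma_k is real and
# reduces to an average of cosines, so gamma_k = gamma_{-k}.
assert np.isclose(gamma_k((0, 0, 0), sc), 1.0)
k = (0.3, 1.1, 2.0)
assert np.isclose(gamma_k(k, sc),
                  (np.cos(0.3) + np.cos(1.1) + np.cos(2.0)) / 3)
assert np.isclose(gamma_k(k, sc), gamma_k((-0.3, -1.1, -2.0), sc))
```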
$H_{\bf k}$ can be expressed as a linear combination of the $su(1,1)$ algebra generators in the form \begin{eqnarray} \label{hk11} H_{\bf{k}}=2\eta E_{z}^{\bf k}+\gamma_{\bf k}(E_{+}^{\bf k}+E_{-}^{\bf k}), \end{eqnarray} with \begin{equation} \label{generators11} E_{+}^{\bf k} =a^+_{\bf k}b^+_{\bf k},\;\; E_{-}^{\bf k} =a_{\bf k}b_{\bf k},\;\; E_{z}^{\bf k} =\frac{1}{2}(n^a_{\bf k}+n^b_{\bf k}+1), \end{equation} which obey the commutation relations of the $su(1,1)$ Lie algebra: \begin{equation} \label{cr11} [E^{\bf k}_{+} , E^{\bf k}_{-}]=-2E^{\bf k}_z, \;\; [E^{\bf k}_{z},E^{\bf k}_{\pm}]=\pm E^{\bf k}_{\pm}. \end{equation} By introducing an $su(1,1)$ displacement operator \begin{equation} \label{wxxz11} W(\xi_{\bf k})=\exp(\xi _{\bf k}E_{+}^{\bf k}-\xi _{\bf k}^*E_{-}^{\bf k}) \end{equation} with the coherent parameter $\xi_{\bf k}=r e^{i\theta}$, we have \begin{eqnarray} \label{whwxxz} W^{-1}(\xi_{\bf k})H_{\bf k}W(\xi_{\bf k}) = \alpha E^{\bf k}_{z}+(\beta E^{\bf k}_{+} +\beta ^* E^{\bf k}_{-}), \end{eqnarray} where \begin{equation} \label{alpha} \alpha=2\eta \cosh 2r+\gamma_{\bf k}(e^{i\theta}+e^{-i\theta})\sinh2r, \end{equation} \begin{equation} \label{beta} \beta=\eta e^{i\theta}\sinh2r+\gamma_{\bf k}(\cosh^2r+e^{2i\theta}\sinh^2r). \end{equation} In order to diagonalize $H_{\bf{k}}$, the coefficient $\beta$ of the non-Cartan generators $E^{\bf k}_{+}$ and $E^{\bf k}_{-}$ of the Lie algebra should be chosen to be zero (here we set $\theta=0$ for simplicity), and this leads to \begin{equation} \tanh2r=-\frac{\gamma_{\bf k}}{\eta},\;\;\;\; \alpha=2\sqrt{\eta ^2 -\gamma^2_{\bf k}}. \end{equation} So if we denote \begin{equation} |\xi_{\bf k}>=W(\xi_{\bf k})|vac\rangle, \end{equation} then one has \begin{equation} \label{enhxxz} H_{\bf k}|\xi_{\bf k}>=(n_a+n_b+1)\epsilon_{\bf k}|\xi_{\bf k}>, \end{equation} \begin{equation} \epsilon_{\bf k}=2ZJs\sqrt{\eta^2-\gamma_{\bf k}^2}, \end{equation} where $|\eta|\geq|\gamma_{\bf k}|$ is required.
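The square root $\sqrt{\eta^2-\gamma_{\bf k}^2}$ produced by the coherent-state diagonalization can be cross-checked by brute force: diagonalizing the two-mode form $\eta(n^a+n^b)+\gamma_{\bf k}(ab+a^\dag b^\dag)$ in a truncated Fock space, the spacing of the low-lying levels reproduces that square root. The cutoff and coupling values below are illustrative assumptions.

```python
import numpy as np

def lower(dim):
    """Truncated boson annihilation operator on a dim-level Fock space."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 30                     # Fock cutoff; ample for the lowest levels
a, I = lower(dim), np.eye(dim)
A, B = np.kron(a, I), np.kron(I, a)          # modes a_k and b_k

eta, gam = 1.0, 0.5          # illustrative couplings with |eta| > |gamma_k|
H = eta * (A.T @ A + B.T @ B) + gam * (A @ B + A.T @ B.T)

evals = np.sort(np.linalg.eigvalsh(H))

# Up to a constant offset, the level spacing of this two-mode
# Hamiltonian is sqrt(eta^2 - gamma_k^2), the square root entering
# the dispersion epsilon_k.
spacing = evals[1] - evals[0]
assert np.isclose(spacing, np.sqrt(eta**2 - gam**2), atol=1e-6)
assert np.isclose(evals[2], evals[1], atol=1e-6)  # two degenerate branches
```

The degeneracy of the first two excited levels mirrors the two degenerate spin-wave branches discussed below Eq. (\ref{enhxxz}).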
The $XXZ$ antiferromagnetic model in the linear spin-wave frame is thus diagonalized by a direct product of $su(1,1)$ coherent states $\otimes_{\bf k} |\xi_{\bf k} \rangle$. One can see that $\epsilon_{\bf k}$ is the energy quantum of the antiferromagnetic spin wave, i.e., it gives the dispersion relation. From Eq. (\ref{enhxxz}), it is known that for any {\bf k}, there exist two degenerate branches of antiferromagnetic spin waves, whose quasi-particle numbers are given by $n_a$ and $n_b$, respectively. \section{$XYZ$ antiferromagnetic model in linear spin-wave frame with the $su(1,2)$ algebraic structure} Owing to the different interaction intensities along the different space directions, in general, the Hamiltonian of the $XYZ$ antiferromagnetic model is described by \begin{eqnarray} \label{hxyz} H_{XYZ}&=&-J\sum_{<i,j>}(\eta_{x} S_i^xS_j^x+\eta_{y} S_i^yS_j^y+ S_i^zS_j^z)\nonumber\\ &&(J<0,\;\;\eta_{x},\eta_{y}>0), \end{eqnarray} where we have set $\eta_{z}=1$. Similar to the former case of the $XXZ$ antiferromagnetic model, the $XYZ$ antiferromagnetic model in the linear spin-wave frame is given by the Hamiltonian: \begin{eqnarray} H_{XYZ}&=&2ZsJ[Ns-(\sum_{k}{\cal H}_{\bf{k}}-1)],\\ \label{hkxyz} {\cal H}_{\bf{k}}&=&a _{\bf k}^{\dag}a_{\bf k} +b_{\bf k}^{\dag}b_{\bf k}\nonumber\\ &&+ \upsilon_{\bf k}(a_{\bf k}b_{\bf {-k}}^{\dag}+ a_{\bf k}^{\dag}b_{\bf{-k}})+{\rho}_{\bf k}(a_{\bf k}b_{\bf k}+a^{\dag}_{\bf k}b^{\dag}_{\bf k}),\nonumber\\ \end{eqnarray} with \begin{equation} \upsilon_{\bf k}=\frac {\eta_{x}-\eta_{y}}{2}\gamma_{\bf k},\;\; {\rho}_{\bf k}= \frac {\eta_{x}+\eta_{y}}{2}\gamma_{\bf k}.
\end{equation} If we choose \begin{eqnarray} \label{esu12} & & I_{+}^{\bf k} =a^+_{\bf k}b_{-\bf k},\;\;I_{-}^{\bf k} =a_{\bf k}b^{+}_{-\bf k},\;\;U_{+}^{\bf k} =a_{\bf k}b_{\bf k}, \nonumber \\ & & V_{+}^{\bf k} =b^+_{\bf k}b^+_{-\bf k},\;\;V_{-}^{\bf k} =b_{\bf k}b_{-\bf k},\;\;U_{-}^{\bf k} =a^+_{\bf k}b^+_{\bf k}, \nonumber \\ & & I_{3}^{\bf k} =\frac{1}{2}(n^a_{\bf k}-n^b_{-\bf k}),\nonumber\\ &&I_{8}^{\bf k} =-\frac{1}{3} (n^a_{\bf k}+n^b_{-\bf k}+2n^b_{\bf k}+2), \end{eqnarray} then one can see that they obey the commutation relations of the $su(1,2)$ Lie algebra (here we omit the momentum sign ${\bf k}$): \begin{eqnarray} \label{re12} &&[I_{3},I_{\pm}]=\pm {I_\pm},\;[I_{+},I_{-}]=2I_{3}, [I_{8},I_{\alpha}]=0,(\alpha=\pm,3)\nonumber\\ &&[I_{3},U_{\pm}]=\mp\frac{1}{2}U_{\pm},[I_{8},U_{\pm}]=\pm U_{\pm},[U_{+},U_{-}]=I_{3}-\frac{3}{2}I_{8},\nonumber\\ &&[I_{3},V_{\pm}]=\mp\frac{1}{2}V_{\pm}, [I_{8},V_{\pm}]=\mp V _{\pm},[V_{+},V_{-}]=I_{3}+\frac{3}{2}I_{8},\nonumber\\ &&[I_{\pm},U_{\pm}]=\mp V_{\mp},[U_{\pm},V_{\pm}]=\pm I_{\mp},[I_{\pm},V_{\pm}]=\pm U_{\mp},\nonumber\\ &&[I_{\pm},U_{\mp}]=[I_{\pm},V_{\mp}]=[U_{\pm},V_{\mp}]=0. \end{eqnarray} From Eqs. (\ref{hkxyz}) and (\ref{esu12}), ${\cal H}_{\bf{k}}$ can be expressed as a linear combination of six generators of the Lie algebra $su(1,2)$ and possesses an $su(1,2)$ algebraic structure, i.e., \begin{eqnarray} \label{hsu12} {\cal H}_{\bf{k}}=I_{3}^{\bf k}-\frac{3}{2}I_{8}^{\bf k}+{\rho}_{\bf k}(I_{+}^{\bf k}+I_{-}^{\bf k})+ \upsilon_{\bf k}(U_{+}^ {\bf{k}}+U_{-}^{\bf k}). \end{eqnarray} \section{the diagonalization and the eigenvalues} If we write the general linear combination of the $su(1,2)$ generators as \begin{eqnarray} \label{h0xyz} H_0&=&aI_{+}+bI_{-}+cU_{+}+dU_{-}\nonumber\\ &&+eV_{+}+fV_{-}+gI_{3}+hI_{8}, \end{eqnarray} for ${\cal H}_{\bf{k}}$ (\ref{hsu12}), the coefficients in Eq.
(\ref{h0xyz}) are: \begin{eqnarray} \label{abcdxyz} &&a=b={\rho}_{\bf k},\;\;\;c=d=\upsilon_{\bf k},\;\;\;e=f=0,\nonumber\\ &&g=1,\;\;\;\;\;h=-\frac{3}{2}. \end{eqnarray} Until now, a coherent state operator analogous to Eq. (\ref{wxxz11}) has not been found for the $XYZ$ antiferromagnetic model in the linear spin-wave frame. But following the standard Lie algebraic theory \cite{3zwm,3wsj6,3gilmore3,3chengjq3,3humphreys3}, if $H_0$ is a linear function of the generators of a compact semi-simple Lie group, it can be transformed into a linear combination of the Cartan operators of the corresponding Lie algebra by \begin{eqnarray} \label{h1xyz} {\cal H}_1={\cal W}{\cal H}_0{\cal W}^{-}. \end{eqnarray} Here ${\cal W}=\prod_{i=1}^Nexp(x_iA_i)$ is an element of the group and ${\cal W}^-$ denotes the inverse of ${\cal W}$, in which {$A_i$} ($i=1,...,N$) is a basis set in Cartan standard form of the semi-simple Lie algebra, and $x_i$ can be set to zero if the corresponding $A_i$ is a Cartan operator. By choosing \begin{eqnarray} \label{wxyz} {\cal W}&=&exp(x_{31}V_+)exp(x_{21}I_-)exp(x_{32}U_-)\nonumber\\ &&exp(x_{12}I_+)exp(x_{23}U_+)exp(x_{13}V_-), \end{eqnarray} and letting the coefficients of the non-Cartan operators vanish, while substituting Eqs. (\ref{h0xyz}), (\ref{abcdxyz}), and (\ref{wxyz}) into the right-hand side of Eq. (\ref{h1xyz}), we get a complete set of algebraic equations for $x_{ij}$ after lengthy computation: \begin{equation} \label{ab3} \left\{ \begin{array}{l} -(h+\frac{1}{2}g)x_{13}+(a+dx_{13})x_{23}=0\\ -c+bx_{13}+(\frac{1}{2}g-h)x_{23}+dx^2_{23}=0\\ a+dx_{13}-(g+dx_{23})x_{12}-bx^2_{12}=0, \end{array} \right. \end{equation} and the Hamiltonian after the transformation of ${\cal W}$ becomes diagonal: \begin{eqnarray} \label{whw12} {\cal H}_1&=&{\cal W}{\cal H}_{0}{\cal W}^{-}\nonumber\\ &=&(g+dx_{23}+2bx_{12})I_{3}+(h-\frac{3}{2}dx_{23})I_{8}.
\end{eqnarray} One can see that although the operator ${\cal W}$ is not unitary, the similarity transformation (\ref{h1xyz}) guarantees that the eigenvalues of ${\cal H}_0$ equal those of ${\cal H}_1$. This is acceptable, since we are only concerned with the eigenvalues. For the total particle number ${\cal N}=n^a_{\bf k}+n^b_{-\bf k}+n^b_{\bf k}$ ($n^a_{\bf k}=a_{\bf k}^{\dag}a_{\bf k},\ n^b_{-\bf k}=b_{-\bf k}^{\dag}b_{-\bf k},\ n^b_{\bf k}=b_{\bf k}^{\dag}b_{\bf k}$), Eq. (\ref{re12}) shows that $[\Gamma,{\cal N}]=0\ \ ({\Gamma}=I_\pm,V_\pm,U_\pm,I_3,I_8)$ holds. Hence, the common eigenstates of the Cartan generators $I_3$ and $I_8$ of the Lie algebra $su(1,2)$ can be taken as the Fock states $\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>$, i.e., for the commutative set $\{I_3,I_8,{\cal N}\}$ we have: \begin{eqnarray} \label{fock12} &&I_3\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}> =\frac{1}{2}(n^a_{\bf k}-n^b_{-\bf k})\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>,\nonumber\\ &&I_8\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>\nonumber\\ &&=-\frac{1}{3}(n^a_{\bf k}+n^b_{-\bf k}+2n^b_{\bf k}+2)\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>,\nonumber\\ &&{\cal N}\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>\nonumber\\ &&=(n^a_{\bf k}+n^b_{-\bf k}+n^b_{\bf k})\mid n^a_{\bf k},n^b_{-\bf k},n^b_{\bf k}>. \end{eqnarray} From Eqs. (\ref{whw12}) and (\ref{fock12}), the eigenvalues of the Hamiltonian (\ref{hsu12}) follow: \begin{eqnarray} E&=&\omega^{a}_{\bf k}n^a_{\bf k}+\omega^{b}_{\bf k}n^b_{\bf k}+\omega^{b}_{-\bf k}n^b_{-\bf k}+\omega^{E}_{\bf k},\nonumber\\ \omega^{a}_{\bf k}&=&(\frac{1}{2}g-\frac{1}{3}h+bx_{12}+dx_{23}),\nonumber\\ \omega^{b}_{\bf k}&=&(-\frac{2}{3}h+dx_{23}),\nonumber\\ \omega^{b}_{-\bf k}&=&(\frac{1}{2}g-\frac{1}{3}h-bx_{12}),\nonumber\\ \omega^{E}_{\bf k}&=&-\frac{2}{3}h+dx_{23}, \end{eqnarray} where the coefficients $b,d,g,h$ are given in Eq. (\ref{abcdxyz}) and $x_{ij}$ can be obtained by solving Eq.
(\ref{ab3}), and $\omega^{a}_{\bf k},\omega^{b}_{\bf k},\omega^{b}_{-\bf k}$ are the energies of the three different magnons, respectively. In fact, the order of the operators in ${\cal W}$ can be chosen arbitrarily, but the coefficients $x_i$ are strongly dependent on the order. Although any specified order has a solution, a properly chosen order can simplify the procedure to get the $x_i$. In general, for a Hamiltonian with $su(n)$ (whose Cartan operators are $A_{ii}=b_i^+b_i$) or the isomorphic algebra $su(p,q)\;\;(p+q=n)$ structure, the transformation operator ${\cal W}$ can be chosen as \begin{eqnarray} \label{wusual} {\cal W}&=&exp(x_{N1}A_{N1})exp(x_{N2}A_{N2})...\nonumber\\ &&exp(x_{2N}A_{2N})exp(x_{1N}A_{1N}), \end{eqnarray} where the order of the operators $exp(x_{ij}A_{ij})\;(i\neq j)$ is arranged according to the roots of $A_{ij}$ in a decreasing way. For example, the root of $A_{N1}$ is highest, and that of $A_{1N}$ is lowest. As a rule of our choice, the rightmost or leftmost $A_{ij}$ in Eq. (\ref{wusual}) is the one that is missing in the Hamiltonian ${\cal H}_0$; the middle operator sequence forms a circular root diagram. With this specification, in our experience the coefficients $x_{ij}$ are relatively easy to work out. We have also chosen the form of ${\cal W}$ in Eq. (\ref{wxyz}) as $exp(x_{13}V_-)exp(x_{12}I_+)exp(x_{23}U_+) exp(x_{21}I_-)exp(x_{32}U_-)exp(x_{31}V_+)$, and it can be proved that both orderings lead to the same eigenvalues. \section{numerical solutions} Eq. (\ref{ab3}) may have several sets of solutions, which together constitute the complete set. But the different sets of solutions cannot correspond to eigenstates of the Hamiltonian (\ref{hkxyz}) or (\ref{hsu12}) simultaneously. Only those that possess physical meaning are the solutions we need. In order to illustrate this, we consider a concrete example of the Hamiltonian (\ref{hkxyz}) or (\ref{hsu12}) with $\eta_{x}=0.8,\eta_{y}=0.5,\gamma_{\bf k}=1$. Then $v_{\bf k}=0.15,\rho_{\bf k}=0.65$.
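Both the algebraic structure and this numerical example can be cross-checked with a short script. The first part verifies two of the $su(1,2)$ commutation relations of Eq. (\ref{re12}) using truncated boson operators, applied to a low-occupation Fock state so that the truncation plays no role; the second plugs the physically accepted solution of Eq. (\ref{ab3}) quoted in the text into the eigenvalue formulas. The Fock cutoff and variable names are my own choices.

```python
import numpy as np

# --- (i) check two su(1,2) relations of Eq. (re12) on truncated bosons ---
def lower(dim):
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 5
a, I = lower(dim), np.eye(dim)
A  = np.kron(np.kron(a, I), I)     # a_k
Bm = np.kron(np.kron(I, a), I)     # b_{-k}
B  = np.kron(np.kron(I, I), a)     # b_k

Ip, Im = A.T @ Bm, A @ Bm.T        # I_+, I_-  (real matrices, so .T = dagger)
Up, Um = A @ B, A.T @ B.T          # U_+, U_-
nA, nBm, nB = A.T @ A, Bm.T @ Bm, B.T @ B
I3 = (nA - nBm) / 2
I8 = -(nA + nBm + 2 * nB + 2 * np.eye(dim**3)) / 3

v = np.zeros(dim**3); v[dim**2 + dim + 1] = 1.0   # Fock state |1,1,1>
comm = lambda X, Y: X @ Y - Y @ X
assert np.allclose(comm(Ip, Im) @ v, 2 * I3 @ v)
assert np.allclose(comm(Up, Um) @ v, (I3 - 1.5 * I8) @ v)

# --- (ii) reproduce the quoted magnon energies of the example ---
b_, d_, g_, h_ = 0.65, 0.15, 1.0, -1.5     # rho_k, v_k, g, h
x12, x23 = 0.4827143955, 0.09389946768     # accepted solution (from the text)
w_a  = 0.5 * g_ - h_ / 3 + b_ * x12 + d_ * x23
w_b  = -2 * h_ / 3 + d_ * x23
w_bm = 0.5 * g_ - h_ / 3 - b_ * x12
assert np.isclose(w_a,  1.327849277)
assert np.isclose(w_b,  1.014084920)
assert np.isclose(w_bm, 0.686235643)
```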
Using Maple, one can show that there are six sets of solutions of Eq. (\ref{ab3}), of which only one set, $x_{13}=-0.06018692595$, $x_{23}=0.09389946768$, $x_{12}=0.4827143955$, leads to positive magnon energies $\omega^{a}_{\bf k}=1.327849277$, $\omega^{b}_{\bf k}=1.014084920$, $\omega^{b}_{-\bf k}=0.6862356430$. It is clear that only this solution is physically acceptable; the other five sets of solutions (with negative energies) are non-physical. This procedure is easily carried out when solving Eqs. (\ref{ab3}) and (\ref{hsu12}). \section{Conclusion and remarks} In conclusion, the eigenvalue problem of the $XYZ$ antiferromagnetic model in the linear spin-wave frame is solved from an algebraic point of view. To use this algebraic diagonalization method, one first needs to find the algebraic structure of the Hamiltonian and write the Hamiltonian as a linear combination of the algebraic generators, as in Eq. (\ref{hsu12}). Second, according to the particular structure of the Lie algebra, one looks for the transformation operator. The key is that the transformation lets the coefficients of the non-Cartan operators vanish and yields solvable equations for the parameters. Some numerical solutions verify our diagonalization method, whose advantage is that the eigenvalues of ${\cal H}_0$ equal those of ${\cal H}_1$ although the form of the Hamiltonian changes. For the Heisenberg model, the different interaction intensities along the different space directions complicate the physical model and enlarge the algebraic structure; for example, passing from the $XXZ$ case to the $XYZ$ case corresponds to passing from the $su(1,1)$ algebra to the $su(1,2)$ one. Of course, the change of the algebraic structure requires a different method for diagonalizing the Hamiltonian. It is reasonable to believe that further useful physical applications of the algebraic diagonalization method will be found.
It may be possible to extend this case to higher-rank Lie algebras. \begin{acknowledgments} This work is in part supported by the National Science Foundation of China under Grant No. 10447103, Education Department of Beijing Province and Beihang University. \end{acknowledgments}
\section{Introduction} Functional ultrasound (fUS) is a neuroimaging technique that indirectly measures brain activity by detecting changes in cerebral blood flow (CBF) and volume (CBV) \citep{fusrbc}. The fUS signal is related to brain activity through a process known as neurovascular coupling (NVC). When a brain region becomes active, it calls for an additional supply of oxygen-rich blood, which creates a hemodynamic response (HR), i.e., an increase of blood flow to that region. NVC describes this interaction between local neural activity and blood flow \citep{b2}. Functional ultrasound is able to measure the HR because of its sensitivity to fluctuations in blood flow and volume \citep{b1}. In the past decade, fUS has been successfully applied in a variety of animal and clinical studies, showing the technique's potential for detection of sensory stimuli, as well as complex brain states and behavior \citep{fus_npixels}. These include studies on small rodents \citep{param1,setup,cube1}, birds \citep{rau} and humans \citep{sadaf,humanfus,humanfus2}. Understanding the HR has been an important challenge not only for fUS \citep{b3}, but also for several other established functional neuroimaging modalities, including functional magnetic resonance imaging (fMRI) \citep{b4} and functional near-infrared spectroscopy (fNIRS) \citep{b5}. The HR can be characterized by a function representing the impulse response of the neurovascular system, known as the hemodynamic response function (HRF) \citep{b6}. To form a model for the HR, the HRF is convolved with an input signal representing the experimental paradigm (EP), which is expressed as a binary vector that shows the on- and off-times of a given stimulus. However, not all brain activity can be explained via such predefined and external stimuli \citep{b13}.
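The convolutional HR model described above can be sketched in a few lines. The double-gamma HRF shape below is a common illustrative choice; its parameter values, the sampling rate, and the stimulus timing are placeholder assumptions, not values from any particular fUS study.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative double-gamma HRF (positive lobe plus late undershoot);
# the parameters are placeholders, not the canonical-HRF parameters.
dt = 0.1                                   # 10 Hz sampling (seconds)
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()                           # unit-area kernel

# Binary EP vector: stimulus "on" during 5-10 s and 20-25 s.
n = 400
ep = np.zeros(n)
ep[50:100] = 1.0
ep[200:250] = 1.0

# Linear model of the hemodynamic response: HR = HRF * EP.
hr = np.convolve(ep, hrf)[:n]

assert hr[:50].max() < 1e-12    # no response before stimulus onset
assert np.argmax(hr) > 50       # response peaks after the onset
```

The two asserts capture the causality of the model: the response is a delayed, smoothed copy of the binary paradigm.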
Indeed, even when no stimulus is presented, there can still be spontaneous, non-random activity in the brain, reported to be as large as the activity evoked by intentional stimulation \citep{Gilbert}. Therefore, the input signals that trigger brain activity should be generalized beyond merely the EP. This issue has been addressed by \citep{b13,actinduc2,actinduc}, where the authors have defined the term \emph{activity-inducing} signal, which, as the name suggests, comprises any input signal that induces hemodynamic activity. We will refer to activity-inducing signals as \emph{source signals} in the rest of this paper, which steers the reader to broader terminology not only used in biomedical signal processing, but also in acoustics and telecommunications \citep{sources}, and emphasizes that recorded output data are \emph{sourced} by such signals. An accurate estimation of the HRF is crucial to correctly interpret both the hemodynamic activity itself and the underlying source signals. Furthermore, the HRF has shown potential as a biomarker for pathological brain functioning, examples of which include obsessive-compulsive disorder \citep{hrfocd}, mild traumatic brain injury \citep{hrfinjury}, Alzheimer's disease \citep{hrfdementia}, epilepsy \citep{eegfmri} and severe psychosocial stress \citep{hrfstress}. While HRFs can also be defined in nonlinear and dynamic frameworks with the help of Volterra kernels \citep{volterra}, linear models have particularly gained popularity due to the combination of their remarkable performance and simplicity. Several approaches have been proposed in the literature which employ linear modelling for estimating the HRF. The strictest approach assumes a constant a priori shape of the HRF, i.e. a mathematical function with fixed parameters, and is only concerned with finding its scaling (the activation level). The shape used in this approach is usually given by the canonical HRF model \citep{b7}.
As such, this approach does not incorporate HRF variability, yet the HRF is known to change significantly across subjects, brain regions and triggering events \citep{b8, hrfchange1, hrfchange2}. A second approach is to estimate the parameters of the chosen shape function, which provides a more flexible and unbiased solution \citep{b3}. Alternatively, HRF estimation can be reformulated as a regression problem by expressing the HRF as a linear combination of several basis functions (which are often chosen to be the canonical HRF and its derivatives). This approach is known as the general linear model (GLM) \citep{b9}. Finally, it is also possible to apply no shape constraints on the HRF, and predict the value of the HRF distinctly at each time point. This approach suffers from high computational complexity and can result in arbitrary or physiologically meaningless forms \citep{b10}. Note that the majority of studies which tackle HRF estimation presume that the source signal is known and equal to the EP, leaving only one unknown in the convolution: the HRF \citep{neuralknown}. However, as mentioned earlier, a functional brain response can be triggered by more sources than the EP alone. These sources can be extrinsic, i.e., related to environmental events, such as unintended background stimulation or noise artefacts. They might also be intrinsic sources, which can emerge spontaneously during rest \citep{rest}. Under such complex and multi-causal circumstances, recovering the rather 'hidden' source signal(s) can be of interest. Moreover, even the EP itself can be much more complex than what a simple binary pattern allows for. Indeed, the hemodynamic response to, for instance, a visual stimulus, can vary greatly depending on its parameters, such as its contrast \citep{param1} or frequency \citep{param2}. 
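The GLM approach mentioned above can be sketched as a linear regression: each basis function is convolved with the known EP to form a design matrix, and least squares recovers the expansion weights of the HRF. The gamma-bump basis, noise level, and event timing below are illustrative assumptions; in practice the canonical HRF and its derivatives are a common basis choice.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
dt, n = 0.1, 600
t = np.arange(0, 25, dt)

# Illustrative basis set: three gamma bumps with different peak times.
basis = np.stack([gamma.pdf(t, a) for a in (4, 6, 8)])

# Known binary EP (events every 10 s) and a ground-truth HRF
# built from the basis, plus measurement noise.
ep = np.zeros(n)
ep[::100] = 1.0
w_true = np.array([0.2, 1.0, -0.3])
y = np.convolve(ep, w_true @ basis)[:n] + 0.002 * rng.standard_normal(n)

# GLM: regress the measurement on the EP convolved with each basis
# function; the fitted weights recover the HRF expansion.
X = np.stack([np.convolve(ep, bf)[:n] for bf in basis], axis=1)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(w_hat, w_true, atol=0.1)
```

Because convolution is linear, the model is exactly linear in the weights, which is what makes the GLM both fast and statistically well understood.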
In contrast to the aforementioned methods, where the goal was to estimate HRFs from a known source signal, there have also been attempts to predict the sources by assuming a known and fixed HRF \citep{b12,b13}. However, these methods fall short of capturing HRF variability. To sum up, neither the sources nor the HRF are straightforward to model, and as such, when either is assumed to be fixed, it can easily lead to misspecification of the other. Therefore, we consider the problem of jointly estimating the source signals and HRFs from multivariate fUS time-series. This problem has been addressed by \citep{b14}, \citep{b15} and \citep{b16}. In \citep{b14}, it is assumed that the source signal (here considered as the neural activity) lies in a high frequency band compared to the HRF, and can thus be recovered using homomorphic filtering. On the other hand, \citep{b15} first estimates a spike-like source signal by thresholding the fMRI data and selecting the time points where the response begins, and subsequently fits a GLM using the estimated source signal to determine the HRF. Both of these techniques share the limitation of being univariate methods: although they analyze multiple regions and/or subjects, the analysis is performed separately on each time series, thereby ignoring any mutual information shared amongst biologically relevant ROIs. Recently, a multivariate deconvolution of fMRI time series has been proposed in \citep{b16}. The authors introduced an fMRI signal model in which neural activation is represented as a low-rank matrix, constructed from a small number of temporal activation patterns and corresponding spatial maps encoding functional networks, and is linked to the observed fMRI signals via region-specific HRFs. The main advantage of this approach is that it allows whole-brain estimation of the HRF and neural activation.
However, all HRFs are defined via the dilation of a presumed shape, which may not be enough to capture all possible variations of the HRF, as the width and peak latency of the HRF are coupled into a single parameter. Moreover, the estimated HRFs are region-specific, but not activation-specific. Therefore, the model cannot account for variations in the HRF due to varying stimulus properties. Yet, the length and intensity of stimuli appear to have a significant effect on HRF shape even within the same region, as observed in recent fast fMRI studies \citep{stimhrf}. In order to account for the possible variations of the HRF for both different sources and regions, we model the fUS signal in the framework of convolutive mixtures, where multiple input signals (sources) are related to multiple observations (measurements from a brain region) via convolutive mixing filters. In the context of fUS, the convolutive mixing filters stand for the HRFs, which are unique for each possible combination of sources and regions, allowing variability across different brain areas and triggering events. In order to improve identifiability, we make certain assumptions, namely that the shape of the HRFs can be parametrized and that the source signals are uncorrelated. Considering the flexibility of tensor-based formulations for the purpose of representing such structures and constraints that exist in different modes or factors of data \citep{b19}, we solve the deconvolution by applying block-term decomposition (BTD) on the tensor of lagged measurement autocorrelation matrices. While in our previous work \citep{b20} we had considered a similar BTD-based deconvolution, this paper presents several novel contributions. First, we improve the robustness of the algorithm via additional constraints and a more sophisticated selection procedure for the final solution from multiple optimization runs. We also present a more detailed simulation study considering a large range of possible HRF shapes. 
Finally, instead of applying deconvolution on a few single pixel time-series, we now focus on fUS responses of entire ROIs, as determined by spatial independent component analysis (ICA). The selected ROIs represent three crucial anatomical structures within the mouse brain's colliculo-cortical, image-forming visual pathway: the lateral geniculate nucleus (LGN), the superior colliculus (SC) and the primary visual cortex (V1). These vision-involved anatomical structures are of particular importance \citep{huberman, seabrook}, can be captured together well in a minimal number of coronal and sagittal slices \citep{bregma}, and have proven to consistently yield clear responses using fUS imaging \citep{param1,mace_visual}. The vast majority of information about visual stimuli is conveyed via the retinal ganglion cells (RGCs) to the downstream subcortical targets LGN and SC, before being relayed to V1. The LGN and SC are known to receive both similar and distinct visual input information from RGCs \citep{sclgn}. The asymmetry in information projected by the mouse retina to these two downstream targets is reflected in the output of these areas \citep{Ellis}. Our goal is to compare the hemodynamic activity in these regions by deconvolving the CBF/CBV changes recorded with fUS in response to visual stimulation. The rest of this paper is organized as follows. First, we describe our data model and the proposed tensor-based solution for deconvolution. Next, we describe the experimental setup and data acquisition steps used for fUS imaging of a mouse subject. This is followed by the deconvolution results, which are presented in two parts: \emph{(i)} numerical simulations, and \emph{(ii)} results on real fUS data. Next, in the discussion, we review the highlights of our modelling and results, and elaborate on the neuroscientific relevance of our findings. Finally, we state several future extensions and conclude our paper.
\section{Signal Model} Naturally, fUS images contain far more pixels than the number of anatomical or functional regions. We therefore expect certain groups of pixels to show similar signal fluctuations, and we consider the fUS images as parcellated in space into several regions. Consequently, we represent the overall fUS data as an $M \times N$ matrix, where each of the $M$ rows contains the average pixel time-series within a region-of-interest (ROI), and $N$ is the number of time samples. Assuming a single source signal, an individual ROI time-series $y(t)$ can be written as the convolution between the HRF $h(t)$ and the input source signal $s(t)$ as: \begin{equation} \label{eq:singleconv} y(t) = \sum_{l=0}^L h(l)s(t-l) \end{equation} where $L+1$ gives the HRF filter length. However, a single ROI time-series may be affected by a number ($R$) of different source signals. Each source signal $s_r(t)$ may elicit a different HRF, $h_r(t)$. Therefore, the observed time-series is the summation of the effect of all underlying sources: \begin{equation} \label{eq:ins_ica} y(t) = \sum_{r=1}^R \sum_{l=0}^L h_r(l)s_r(t-l). \end{equation} Finally, extending our model to multiple ROIs, where each ROI may have a different HRF, we arrive at the following multivariate convolutive mixture formulation: \begin{equation} \label{eq:convolutive} y_m(t) = \sum_{r=1}^R \sum_{l=0}^L h_{mr}(l)s_r(t-l) \end{equation} where $h_{mr}(l)$ is the convolutive mixing filter belonging to ROI $m$ and source $r$ \citep{b21}. In the context of fUS, the sources that lead to the time-series can be task-related ($T$), such as the EP, or artifact-related ($A$). The task-related sources are convolved with an HRF, whereas the artifact-related sources are directly additive on the measured time-series \citep{b22}. Yet, the strength of the effect that an artifact source exerts on a region should still depend on the artifact type and the brain region. To incorporate this in Eq.
\ref{eq:convolutive}, each $h_{mr}(l)$ with $r \in A$ should correspond to a scaled (by $a_{mr}$) unit impulse function. Thus, we rewrite Eq. \ref{eq:convolutive} as: \begin{align} \label{eq:convolutive2} y_m(t) &= \sum_{r\in T} \sum_{l=0}^L h_{mr}(l)s_r(t-l)+\sum_{r\in A} \sum_{l=0}^{L} a_{mr} \delta(l)s_r(t-l) \nonumber \\ &= \sum_{r\in T} \sum_{l=0}^L h_{mr}(l)s_r(t-l)+\sum_{r\in A} a_{mr} s_r(t). \end{align} We aim to solve this deconvolution problem to recover the sources ($s_r,r\in T$) and HRFs ($h_{mr}, r\in T$) of interest separately at each ROI $m$. \section{Proposed Method} In this section, we will present the steps of the proposed tensor-based deconvolution method. We will first introduce how deconvolution of the observations modeled as in Eq. \ref{eq:convolutive2} can be expressed as a BTD. Since this problem is highly non-convex, we will subsequently explain our approach to identifying a final solution for the decomposition. Finally, we will describe source signal estimation using the HRFs predicted by BTD. \subsection{Formulating the Block-Term Decomposition} We start by expressing the convolutive mixtures formulation in Eq. \ref{eq:convolutive} in matrix form as $\mathbf{Y}=\mathbf{H}\mathbf{S}$. The columns of $\mathbf{Y}$ and $\mathbf{S}$ are given by $\mathbf{y}(n)$, $n=1,\dots,N-L'$ and $\mathbf{s}(n)$, $n=1,\dots,N-(L+L')$, respectively. These column vectors are constructed as follows \citep{b23}: \begin{align} \begin{aligned} \label{eq:matrix_y_s} \mathbf{y}(n) &= [y_1(n),...,y_1(n-L'+1), \\ & ...,y_M(n),...,y_M(n-L'+1)]^T\; \; \text{and}\\ \mathbf{s}(n) &= [s_1(n),...,s_1(n-(L+L')+1), \\ & ...,s_R(n),...,s_R(n-(L+L')+1)]^T \end{aligned} \end{align} \noindent where $L'$ is chosen such that $ML'\geq R(L+L')$. Notice that $M$ has to be greater than $R$, and both matrices $\mathbf{Y}$ and $\mathbf{S}$ consist of Hankel blocks.
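As an illustration, the stacking of lagged ROI time-series into the block-structured matrix $\mathbf{Y}$ of Eq. \ref{eq:matrix_y_s} can be sketched as follows (a minimal Python sketch; the function name and zero-based indexing conventions are our own):

```python
import numpy as np

def block_hankel(Y, Lp):
    # Y  : (M, N) array with one ROI time-series per row
    # Lp : number of stacked lags L'
    # Column j holds [y_m(n), ..., y_m(n - L' + 1)] for each ROI m,
    # with n = L' - 1 + j (zero-based), as in Eq. (eq:matrix_y_s)
    M, N = Y.shape
    cols = N - Lp + 1
    out = np.empty((M * Lp, cols))
    for m in range(M):
        for l in range(Lp):
            # this row holds y_m delayed by l samples
            out[m * Lp + l, :] = Y[m, Lp - 1 - l : N - l]
    return out
```

The same routine, applied with window length $L+L'$, yields the corresponding source matrix $\mathbf{S}$.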
The mixing matrix $\mathbf{H}$ is equal to \begin{equation} \label{eq:H} \mathbf{H}=[\mathbf{H}_1 \quad \dots \quad \mathbf{H}_R]= \begin{bmatrix} \mathbf{H_{11}} & \dots & \mathbf{H_{1R}}\\ \vdots & \ddots & \vdots \\ \mathbf{H_{M1}} & \dots & \mathbf{H_{MR}} \end{bmatrix} \end{equation} \noindent in which each block-entry $\mathbf{H}_{mr}$ is the Toeplitz matrix of $h_{mr}(l)$: \begin{equation} \label{eq:H_ij} \mathbf{H}_{mr}= \begin{bmatrix} h_{mr}(0) & \dots & h_{mr}(L) & \dots & 0\\ & \ddots & \ddots & \ddots & \\ 0 & \dots & h_{mr}(0) & \dots & h_{mr}(L) \end{bmatrix} . \end{equation} Next, the autocorrelation $\mathbf{R}_{\mathbf{y}}(\tau)$ for a time lag $\tau$ is expressed as: \begin{align} \label{eq:cov} \mathbf{R}_{\mathbf{y}}(\tau)&= \mathrm{E}\{\mathbf{y}(n)\mathbf{y}(n+\tau)^T\} = \mathrm{E}\{\mathbf{H}\mathbf{s}(n)\mathbf{s}(n+\tau)^T\mathbf{H}^T\} \nonumber \\ &=\mathbf{H} \mathbf{R}_{\mathbf{s}}(\tau)\mathbf{H}^T, \; \; \; \; \forall\tau. \end{align} Assuming that the sources are uncorrelated, the matrices $\mathbf{R}_\mathbf{s}(\tau)$ are block-diagonal, i.e. the non-block-diagonal terms representing the correlations between different sources are $0$. Therefore, the output autocorrelation matrix $\mathbf{R}_\mathbf{y}(\tau)$ is written as the block-diagonal matrix $\mathbf{R}_\mathbf{s}(\tau)$ multiplied by the mixing matrix $\mathbf{H}$ from the left and by $\mathbf{H}^\text{T}$ from the right. Then, stacking the set of output autocorrelation matrices $\mathbf{R}_\mathbf{y}(\tau)$ for various $\tau$ values gives rise to a tensor $\boldsymbol{\mathcal{T}}$ that admits a so-called block-term decomposition (BTD). More specifically, $\boldsymbol{\mathcal{T}}$ can be written as a sum of low-multilinear-rank tensors, in this specific case of multilinear rank $(L+L',L+L',\cdot)$ \citep{b24}. Due to the Hankel-block structure of $\mathbf{Y}$ and $\mathbf{S}$, $\mathbf{R}_{\mathbf{y}}(\tau)$ and $ \mathbf{R}_{\mathbf{s}}(\tau)$ are Toeplitz-block matrices.
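In practice, the tensor $\boldsymbol{\mathcal{T}}$ of stacked lagged autocorrelations can be estimated from the block-structured observations by sample averaging (a minimal Python sketch under our own naming, replacing the expectation by a simple sample estimate over zero-mean data):

```python
import numpy as np

def autocorr_tensor(Yh, n_lags):
    # Yh     : (ML', K) matrix whose columns are the stacked, lagged
    #          observation vectors y(n), assumed zero-mean
    # n_lags : number of lags tau = 0, ..., n_lags - 1 (third mode)
    # Frontal slice tau is a sample estimate of E{y(n) y(n + tau)^T}
    D, K = Yh.shape
    T = np.empty((D, D, n_lags))
    for tau in range(n_lags):
        T[:, :, tau] = Yh[:, :K - tau] @ Yh[:, tau:].T / (K - tau)
    return T
```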
Note that the number of time-lags to be included is a hyperparameter of the algorithm, and we take it as equal to the filter length in this work. The decomposition for $R=2$ is illustrated in Fig. \ref{fig:btd_im}. Considering our signal model, where we have defined two types of sources, we can rewrite the block-columns of $\mathbf{H}=[\mathbf{H}_1 \; \mathbf{H}_2]$ (Eq. \ref{eq:H}) simply as $\mathbf{H}=[\mathbf{H}_T \; \mathbf{H}_A]$ instead. Here, $\mathbf{H}_T$ relates to the task-source, i.e. it includes the region-specific HRFs, whereas $\mathbf{H}_A$ includes the region-specific scalings of the artifact source. \begin{figure}[H] \centering \includegraphics[width=.7\textwidth]{btd.pdf} \caption{A demonstration of BTD for $R=2$. The tensor $\boldsymbol{\mathcal{T}}$ of stacked measurement autocorrelations $\mathbf{R}_\mathbf{y}(\tau)$, $\forall \tau$ is first expressed in terms of the convolutive mixing matrix $\mathbf{H}$ and a core tensor $\boldsymbol{\mathcal{C}}$ which contains the stacked source autocorrelations $\mathbf{R}_\mathbf{s}(\tau)$, $\forall \tau$. Each $\mathbf{R}_\mathbf{s}(\tau)$ corresponds to a frontal slice of $\boldsymbol{\mathcal{C}}$ and exhibits a block-diagonal structure with inner Toeplitz blocks. Note that each slice is a lagged version of the preceding slice. $\boldsymbol{\mathcal{T}}$ is decomposed into $R=2$ terms, each of which contains a core tensor ($\boldsymbol{\mathcal{C}}_T$ or $\boldsymbol{\mathcal{C}}_A$, representing the autocorrelation of the corresponding source) and a block column of $\mathbf{H}$ ($\mathbf{H}_T$ or $\mathbf{H}_A$).} \label{fig:btd_im} \end{figure} In addition, we impose a shape constraint on the HRFs such that they are physiologically interpretable.
For this purpose, we employed the model described in \citep{b3}, an fUS-based adaptation of the well-known canonical model used predominantly in fMRI studies \citep{b7} for depicting CBF or CBV changes. In this adaptation, the second gamma function of the canonical model, which gives rise to the undershoot response, is removed, resulting in a reduced number of parameters. This model expresses the HRF in terms of a single gamma function defined on a parameter set $\boldsymbol{\theta}$ as below: \begin{equation} \label{eq:gamma} f(t,\boldsymbol{\theta}) = \theta_1(\Gamma(\theta_2)^{-1} \theta_3^{\theta_2}t^{\theta_2-1}\mathrm{e}^{-\theta_3t}) \end{equation} \noindent where $\theta_1$ is the scaling parameter to account for the strength of an HRF and the rest of the parameters define the shape of the HRF. Finally, the BTD is computed by minimizing the cost function: \begin{align} \label{eq:cost} J(\boldsymbol{\mathcal{C}},\boldsymbol{\theta},\mathbf{a}) = \lVert \boldsymbol{\mathcal{T}} &- \sum_{r \in T} \boldsymbol{\mathcal{C}}_r \times_1 \mathbf{H}_r(\boldsymbol{\theta}_r) \times_2 \mathbf{H}_r(\boldsymbol{\theta}_r) \nonumber \\ &-\sum_{r \in A} \boldsymbol{\mathcal{C}}_r \times_1 \mathbf{H}_r(\mathbf{a}_r) \times_2 \mathbf{H}_r(\mathbf{a}_r) \rVert^2_F \end{align} \noindent while all $\mathbf{H}_r$'s and $\boldsymbol{\mathcal{C}}_r$'s are structured to have Toeplitz blocks. The operator $\lVert\cdot\rVert_F$ is the Frobenius norm. The BTD is implemented using the structured data fusion (SDF) framework, more specifically using the quasi-Newton algorithm \texttt{sdf\_minf}, offered by Tensorlab \citep{b25}. \subsection{Identifying a Stable Solution for BTD} For many matrix and tensor-based factorizations, such as the BTD described above, the objective functions are non-convex. As such, the algorithm selected for solving the non-convex optimization may converge to local optima of the problem \citep{tdunique}.
In order to identify a stable solution, it is common practice to run the optimization multiple times, with a different initialization at each run. Finally, a choice needs to be made amongst the different repetitions of the decomposition. For our problem, each BTD repetition produces $M$ HRFs, characterized by their parameters $\boldsymbol{\theta}_m, m=1,2,\dots,M$. We follow a similar approach as described in \citep{simonstable}, i.e. we cluster the solutions using the peak latencies of the estimated HRFs as features, and aim at finding the most coherent cluster. The steps of our clustering approach are as follows: \begin{enumerate} \item Run BTD $20$ times with random initializations, and from each run, store the following: \begin{itemize} \item Final value of the cost (i.e., objective) function \item $M$ HRFs \end{itemize} \item Eliminate the $P$ outlier BTD repetitions having significantly higher cost values (we use Matlab's \texttt{imbinarize} for the elimination, which chooses an optimal threshold value based on Otsu's method \citep{otsu}, as we expect the best solution to be amongst the low-cost solutions) \item Form a matrix with $M$ columns (standing for the peak latencies of $M$ HRFs, these are the features) and $20-P$ rows (standing for the retained BTD repetitions, these are the observations) \item Apply agglomerative hierarchical clustering to the rows of the matrix formed in Step 3 \item Compute the following intracluster distance metric for each cluster as: \begin{equation} \label{dist_cluster} d_\text{C} = \frac{\max_{c_1,c_2 \in \text{C}} d(c_1,c_2)}{n_\text{C}} \end{equation} where the numerator gives the Euclidean distance between the two most remote observations inside the cluster $\text{C}$ (known as the complete diameter distance \citep{diameterdist}), and the denominator, $n_\text{C}$, is the number of observations included in $\text{C}$ \item Determine the most stable cluster as the one having the minimum intracluster distance \item Calculate the
mean of the estimated HRFs belonging to the cluster identified in Step 6 \end{enumerate} To sum up, the clustering approach described above assumes that the best possible solution will be low-cost (Step 2), have low intracluster distance (numerator of Eq. \ref{dist_cluster}) and occur frequently (denominator of Eq. \ref{dist_cluster}). After we have the final HRF predictions, the last step is to estimate the sources. \subsection{Estimation of the Source Signals} The final HRF estimates are reorganized in a Toeplitz-block matrix as shown in Equations \ref{eq:H} and \ref{eq:H_ij}. This gives rise to $\hat{\mathbf{H}}_T$, i.e., the block columns of $\mathbf{H}$ that are of interest. Going back to our initial formulation $\mathbf{Y}=\mathbf{H}\mathbf{S}$, we can estimate the task-related source signals $\mathbf{S}_T$ by: \begin{equation} \hat{\mathbf{S}}_T=\hat{\mathbf{H}}_T^\dagger \mathbf{Y} \label{s_ls} \end{equation} where $(\cdot)^\dagger$ denotes the Moore-Penrose pseudo-inverse. In order to obtain the pseudo-inverse of $\hat{\mathbf{H}}_T$, we used truncated singular value decomposition (SVD). Truncated SVD is a method for calculating the pseudo-inverse of a rank-deficient matrix, as encountered in many signal processing applications on real data, such as the extraction of signals from noisy environments \citep{b26}. Following our simulation study, we heuristically truncated the lowest $90\%$ of the singular values of $\hat{\mathbf{H}}_T$. \section{Experimental Setup and Data Acquisition} During our fUS experiment, we displayed visual stimuli to a mouse ($7$-months old, male, C57BL/6J; The Jackson laboratory) while recording the fUS-based HR of its brain via the setup depicted in Fig. \ref{fig:fussetup}. The mouse was housed with food and water \textit{ad libitum}, and was maintained under standard conditions (12/12 h light-darkness cycle, 22°C). Preparation of the mouse involved surgical pedestal placement and craniotomy.
First, an in-house developed titanium pedestal (8 mm in width) was placed on the exposed skull using an initial layer of bonding agent (OptiBond™) and dental cement (Charisma\textregistered). Subsequently, a surgical craniotomy was performed to expose the cortex from Bregma -1 mm to -7 mm. After skull bone removal and subsequent habituation, the surgically prepared, awake mouse was head-fixed and placed on a movable wheel in front of two stimulation screens (Dell 23.8'' S2417DG, 1280 x 720 pixels, 60 Hz) in landscape orientation, positioned at a 45° angle with respect to the antero-posterior axis of the mouse, as well as 20 cm away from the mouse’s eye, similar to \citep{setup}. All experimental procedures were approved \textit{a priori} by an independent animal ethical committee (DEC-Consult, Soest, the Netherlands), and were performed in accordance with the ethical guidelines as required by Dutch law and legislation on animal experimentation, as well as the relevant institutional regulations of Erasmus University Medical Center. The visual stimulus consisted of a rectangular patch of randomly generated, high-contrast images - white ``speckles'' against a black background - presented at $25$ frames per second, inspired by \citep{param2,param1,speckles}. The rectangular patch spanned across both stimulation screens such that it was centralized in front of the mouse, whereas the screens were kept entirely black during the rest (i.e., non-stimulus) periods. The visual stimulus was presented to the mouse in $20$ blocks, each $4$ seconds in duration. Each repetition of the stimulus was followed by a random rest period of between $10$ and $15$ seconds.
Before experimental acquisition, a high-resolution anatomical registration scan was made of the exposed brain's microvasculature so as to locate the ideal imaging location for capturing the ROIs, aided by the Allen Mouse Brain Atlas \citep{allen}, and to ensure optimal relative alignment of data across separately performed experiments. Ultimately, during the experiment, functional scans were performed on two slices of the mouse brain; one coronal at Bregma $-3.80$ mm, and one sagittal at Bregma $-2.15$ mm \citep{bregma}. For data acquisition, $14$ tilted plane waves were transmitted from an ultrasonic transducer (Vermon L$22-14$v, $15$ MHz), which was coupled to the mouse's cranial window with ultrasound transmission gel (Aquasonic). A compound image was obtained by Fourier-domain beamforming and angular compounding, and non-overlapping ensembles were formed by concatenating $200$ consecutive compound images. We applied SVD-based clutter filtering to separate the blood signal from stationary and slow-changing ultrasound signals arising from other brain tissue \citep{svdfilter}. SVD-filtering was performed on each ensemble by setting the first (i.e., largest) $30$\% of the singular values to $0$ and reconstructing the vascular signal of interest from the remaining singular components \citep{fus_setup}. Images were upsampled in the spatial frequency domain to an isotropic resolution of $25\,\mu$m. Finally, a Power-Doppler Image (PDI) was obtained by computing the power of the SVD-filtered signal for each pixel over the ensemble dimension. Hence, the time-series of a pixel (Eq. \ref{eq:convolutive2}) corresponds to the variation of its power across the PDI stream. A total of $3$ ROIs (SC, LGN and V1) were selected from the captured slices. For this purpose, the data was first parcellated using spatial ICA with $10$ components in both slices \citep{spatial_ica}.
A spatial mask was defined based on the spatial signature of the component corresponding to the SC from the coronal; and LGN and V1 from the sagittal slice. To obtain a representative time-series for each ROI, we averaged the time-series of pixels which are captured within the boundaries of each mask. Finally, the ROI time-series were normalized to zero-mean and unit-variance before proceeding with the BTD. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{fussetup.jpg} \caption{The setup and flowchart for fUS imaging of the ROIs. In A, the experimental setup is shown, with the awake, head-fixed mouse walking on a movable wheel. During an experiment, either a rectangular patch of speckles (stimulus) or an entirely black screen (rest) is displayed across both monitors. In B, the process of forming a PDI is demonstrated for a coronal brain slice. First, back-scattered ultrasonic waves obtained at different imaging angles are beamformed, resulting in compound images. Next, the compound images are progressed to SVD-based clutter filtering in batches in order to remove the tissue motion from the vascular signal. From each filtered batch, a PDI is constructed by computing the power per-pixel. In C, the ROIs that we will focus on in the rest of this paper are shown. The pointed arrows represent the signal flow for processing of visual information.} \label{fig:fussetup} \end{figure} \section*{Data and Code Availability Statement} The data and MATLAB scripts that support the findings of this study are publicly available in \href{https://github.com/ayerol/btd_deconv}{https://github.com/ayerol/btd\_deconv}. \section{Results} To demonstrate the power of our BTD-based deconvolution approach, the following sections discuss a simulation study and the results of the \textit{in vivo} mouse experiment respectively. In both cases, we consider an EP with repeated stimulation. 
While we consider a task-related source (expected to be similar to the EP) to affect multiple brain regions through unique HRFs, we also take into account the influence on the regional HRs of artifacts and of possible hemodynamic changes that are unrelated to the EP. Note that we will use a single additive component (the second term in Eq. \ref{eq:convolutive2}) to describe the sources of no interest. \subsection{Numerical Simulations} \label{simulation_sec} We simulated three ROI time-series, each with a unique HRF characterized by Eq. \ref{eq:gamma} with a different parameter set $\boldsymbol{\theta}$. We assumed that there are two underlying common sources that make up the ROI time-series. The first source signal is a binary vector representing the EP. The EP involves $20$ repetitions of a $4$-second stimulus (where the vector takes the value $1$) interleaved with $10-15$ seconds of random non-stimulus intervals (where the vector takes the value $0$). This is the same paradigm that will be used later for deconvolution of \textit{in vivo}, mouse-based fUS data (Section \ref{deconv_results}). The EP is assumed to drive the hemodynamic activity in all ROIs, but the measured fUS signals are linked to the EP through possibly different HRFs. The second source signal stands for the artifact component and is generated as a Gaussian process with changing mean, in accordance with the system noise and artifacts modeled in \citep{noisesource}. Each ROI time-series is obtained by convolving the corresponding HRF with the common EP, and subsequently adding the noise source. Note that the variance of the noise source is dependent on the region. In addition, the noise variance values are adjusted in order to assess the performance of the proposed method under various signal-to-noise ratios (SNRs). The data generation steps are illustrated in Fig. \ref{fig:sim}. We normalized the time-series to zero-mean and unit-variance before proceeding with the BTD.
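The data generation steps can be sketched as follows (a minimal Python illustration; the $4$ Hz sampling rate and the example $\boldsymbol{\theta}$ values are our own assumptions, and the artifact term is simplified here to stationary Gaussian noise at a chosen SNR, rather than a changing-mean process):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
fs = 4.0                                  # assumed sampling rate (Hz)

# EP: 20 repetitions of a 4-s stimulus with 10-15 s random rest periods
ep = []
for _ in range(20):
    ep += [0.0] * int(rng.uniform(10, 15) * fs)   # rest interval
    ep += [1.0] * int(4 * fs)                     # stimulus interval
ep = np.array(ep)

# One single-gamma HRF (Eq. eq:gamma) per ROI; theta values are examples
t = np.arange(0, 8, 1 / fs)               # 8-s filter support
thetas = [(1.0, 3.0, 2.5), (1.0, 4.0, 2.0), (1.0, 5.0, 1.8)]
hrfs = [t1 * gamma.pdf(t, a=t2, scale=1 / t3) for t1, t2, t3 in thetas]

# ROI time-series: EP convolved with its HRF plus region-dependent noise
snr_db = 0.0
Y = []
for h in hrfs:
    clean = np.convolve(ep, h)[:ep.size]
    sigma = clean.std() / 10 ** (snr_db / 20)     # noise level from SNR
    Y.append(clean + rng.normal(0.0, sigma, ep.size))
Y = np.vstack(Y)                          # (M, N) = (3, N) data matrix
```

Note that `scipy.stats.gamma.pdf(t, a=th2, scale=1/th3)` evaluates exactly the normalized gamma kernel of Eq. \ref{eq:gamma}.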
Since the true source, in this case the EP, was generated as a binary vector, we also binarized the source signal estimated after BTD to allow for a fair comparison. More specifically, we binarized the estimated source signal by applying a global threshold, and evaluated the performance of our source estimation by comparing the true onsets and duration of the EP with the predicted ones. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{simulations3.pdf} \caption{Illustration of the simulator. Both of the simulated sources are shown in the left section, one being task-related (the EP) and one being artifact-related. In the middle section, the convolutive mixing filters are depicted. The filters convolved with the EP are the HRFs, whereas the filters convolved with the artifact source differ only by their scaling and are modeled as impulses, such that their convolution with the artifact source leads to a direct summation on the measured time-series. In the last section, the convolved results are added together to deliver the time-series at each ROI.} \label{fig:sim} \end{figure} We performed a Monte Carlo simulation of $100$ iterations for different SNR values. In each iteration, the HRF parameters were generated randomly in such a way that the peak latency (PL) and width (measured as full-width at half-maximum; FWHM) of the simulated HRFs varied over $[0.25,4.5]$ and $[0.5,4.5]$ seconds, respectively. These ranges generously cover the CBV/CBF-based HRF peak latencies (reported as $2.1 \pm 0.3$ s in \citep{fus_npixels}, and between $0.9$ and $2$ seconds in \citep{b3,hrf_rng1,hrf_rng2}) and FWHMs (reported as $2.9 \pm 0.6$ s in \citep{fus_npixels}) observed in previous mouse studies.
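One way to draw HRFs whose PL and FWHM fall within these ranges is rejection sampling over the shape and rate parameters of Eq. \ref{eq:gamma} (an illustrative sketch; the proposal ranges for $\theta_2$ and $\theta_3$ and the evaluation grid are our own choices):

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
t = np.arange(0.0, 12.0, 0.05)      # fine grid for measuring the shape

def pl_fwhm(th2, th3):
    # Peak latency and full-width at half-maximum of the single-gamma
    # HRF with shape th2 and rate th3 (Eq. eq:gamma), measured on grid t
    h = gamma.pdf(t, a=th2, scale=1 / th3)
    peak = np.argmax(h)
    above = np.flatnonzero(h >= h[peak] / 2)      # samples above half max
    return t[peak], t[above[-1]] - t[above[0]]

def sample_hrf_params():
    # Draw (th2, th3) until PL lies in [0.25, 4.5] s and FWHM in
    # [0.5, 4.5] s; the proposal ranges below are our own choice
    while True:
        th2, th3 = rng.uniform(1.5, 10.0), rng.uniform(0.5, 8.0)
        pl, fwhm = pl_fwhm(th2, th3)
        if 0.25 <= pl <= 4.5 and 0.5 <= fwhm <= 4.5:
            return th2, th3
```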
We defined the following metrics at each Monte Carlo iteration to validate the performance of the algorithm: \begin{itemize} \item For quantifying the match between the estimated and true EP, we calculated the Intersection-over-Union (IoU) between them at each repetition of the EP. For example, if the true EP takes place between $[3,7]$ seconds but this is estimated as $[3.4,7.5]$ seconds, the IoU value will be: $\sfrac{(7-3.4)}{(7.5-3)}=0.8.$ For an easier interpretation, we converted the unit of the IoU ratio to seconds as follows: Since the ideal estimation should give an exact match of $4$ seconds (which corresponds to an IoU of $1$), we multiplied the IoU ratio by $4$. The IoU of $0.8$ in the example above corresponds to a match of $3.2$ seconds. Finally, we averaged the IoU values of $20$ repetitions of the EP to get one final value. \item We computed the absolute PL difference (in terms of seconds) between the true and estimated HRFs, averaged for $M=3$ ROIs. \end{itemize} Simulation results are provided in Fig. \ref{simresults}. Under $0$ dB SNR, the estimated HRFs have an error of $0.3 \pm 0.4$ (median $\pm$ standard deviation) seconds in the peak latencies across the Monte-Carlo iterations. In order to emphasize the importance of incorporating HRF variability in the signal model, we also compared the EP estimation results when a fixed HRF is assumed (the canonical HRF). The results (Fig. \ref{simresults}(d)) show that using a fixed HRF causes a significant decrease in EP estimation performance. In the context of real neuroimaging data, this difference could cause a misinterpretation of the underlying source signals and neurovascular dynamics. \begin{figure}[H] \centering \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=.87\textwidth]{hrf_range.pdf} \caption{Range of simulated HRFs. 
The first HRF (blue) has a peak latency and width of $0.25$ and $0.5$ seconds, whereas the second HRF (orange) has a peak latency and width of $4.5$ seconds each. The peak latency and width of the second HRF are also displayed on its plot.} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{.47\textwidth} \centering \includegraphics[width=\textwidth]{example_hrf_est.pdf} \caption{Visualization of the simulated HRFs and their corresponding estimates under $0$ dB SNR (from one Monte-Carlo iteration).} \end{subfigure} \par\medskip \centering \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth]{example_source_est.pdf} \caption{Visualization of the estimated source signal versus the true EP under $0$ dB SNR (from one Monte-Carlo iteration). For a more precise comparison, we further binarize the estimated source signal by thresholding it as shown.} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth]{ep_iou_plot.pdf} \captionsetup{width=.91\linewidth} \caption{EP estimation performance with respect to SNR. The markers and errorbars denote the median and standard deviation, respectively, of the IoU of EP estimation across the Monte-Carlo iterations. When a fixed HRF is assumed, we see that the EP estimation is much less accurate.} \end{subfigure} \caption{Simulation results.} \label{simresults} \end{figure} \subsection{Experimental Data} \label{deconv_results} The selected ROIs are displayed in Fig. \ref{hrf_exp}(a) and Fig. \ref{hrf_exp}(b), showing SC in the former; LGN and V1 in the latter plot. The raw, normalized fUS time-series belonging to each region are displayed in Fig. \ref{hrf_exp}(c). By deconvolving this multivariate time-series data, we estimated the region-specific HRFs and the underlying source signal of interest. In Fig. \ref{hrf_exp}(d), the estimated HRFs are provided.
Our results point to a peak latency of $1$ s in SC, $1.75$ s in LGN and $2$ s in V1. Similarly, the FWHMs are found to be $1.25$ s in SC, $1.75$ s in LGN and $1.75$ s in V1. These results reveal that SC gives the fastest reaction to the visual stimulus amongst the ROIs, followed by the LGN. In addition, the HRF in SC is observed to be steeper than in LGN and V1. Fig. \ref{hrf_exp}(e) demonstrates the estimated source signal of interest. Unlike the simulations, we see that the source signal exhibits a substantial variation in amplitude. In order to interpret this behavior of the estimated source signal, we further investigated the raw fUS signals shown in Fig. \ref{hrf_exp}(c). When the responses given to consecutive repetitions of the stimulus are compared within each region, it can be observed that SC reacts most consistently to the stimulus, while the reproducibility of the evoked responses in LGN and V1 (particularly in V1) is much lower, especially in the second half of the repetitions. To better quantify and compare the region-specific differences in response-variability, we computed the Fano factor (FF) as the ratio of the variance to mean peak amplitude of each region's post-stimulus response \citep{ffmaxamp}, defined in a window $[0,10]$ seconds after a stimulus has been shown. We found an FF value of $0.23, 0.42$ and $0.8$ respectively for SC, LGN and V1. These findings indicate that the consistency of the HR strength is halved from SC to LGN, and again from LGN to V1. We can even see cases where there is almost no reaction (as detected by fUS) to the stimulus in V1, such as in repetitions $10, 12, 15, 16$ and $20$. These repetitions coincide with the points in Fig. \ref{hrf_exp}(e) wherein the most considerable drops in the estimated source signal were observed. As such, the variability of responses can explain the unexpected amplitude shifts of the estimated source signal.
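The Fano factor computation described above is simple enough to sketch in a few lines. The snippet below is an illustrative reimplementation, not the code used for this study; the sampling rate, window length and signal layout are assumptions made only for the example.

```python
import numpy as np

def fano_factor(signal, onsets, fs, window=10.0):
    """Fano factor of the post-stimulus response strength: variance
    over mean of the peak amplitude in a [0, window]-second window
    after each stimulus onset."""
    n = int(window * fs)
    peaks = np.array([signal[int(t0 * fs):int(t0 * fs) + n].max()
                      for t0 in onsets])
    return peaks.var() / peaks.mean()

# toy check: perfectly reproducible responses give a Fano factor of 0
fs = 4.0                              # assumed sampling rate (Hz)
t = np.arange(0, 400, 1 / fs)         # 400 s recording
onsets = np.arange(0, 400, 20.0)      # 20 stimulus repetitions
resp = np.zeros_like(t)
for t0 in onsets:
    resp[int(t0 * fs):int(t0 * fs) + 8] = 1.0   # identical response each time
print(fano_factor(resp, onsets, fs))  # -> 0.0
```

A region whose peak responses fluctuate from trial to trial, as observed in V1, yields a correspondingly larger ratio.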
Due to its changing amplitude, comparing the estimated source signal to the EP becomes a more challenging task than in simulations, as binarization using a single global threshold would not work well (Fig. \ref{hrf_exp}(e)). However, it is still possible to observe local peaks of the estimated source signal occurring around the times that the stimulus was shown. While applying a global threshold can uncover $13$ out of $20$ repetitions, with a detection of local peaks, this number increases to $19$ out of $20$ repetitions. After detecting the peaks, we located the first time points at which a significant rise (before the peak) and a significant drop (after the peak) were observed, yielding the starting and ending times of the estimated repetitions. Hence, we obtained an estimation of the EP by constructing a binary vector of all $0$'s except during the time periods between the predicted starting and ending points. In Fig. \ref{hrf_exp}(f), we compared our EP estimation (averaged across repetitions) with the true EP. We can appreciate that our EP estimation is a slightly shifted ($<0.5$ seconds) version of the true EP. Here, we also displayed the responses in SC, LGN and V1 (averaged across repetitions), from which it can be observed that the estimated HRFs follow the same order as the responses during the \textit{in vivo} experiment, as expected. Note that the observed trial-by-trial variability in temporal profile across the measured HRs underlines the importance of estimating the source signal. The conventional definition of the EP strictly assumes that the input of the convolution leading to the neuroimaging data (Eq. \ref{eq:singleconv}) is the same ($=1$) at each repetition of the stimulus. This would mean that the exact same input, shown at different times, outputs different responses, which would evidence a dynamic system \citep{balloon,dcm}.
However, estimating the source signal allows for a non-binary and flexible characterization of the input, and thus LTI modelling \emph{can} remain plausible. Although extensive analysis of the repetition-dependent behavior of the vascular signal is beyond the scope of this work, we will discuss its possible foundations in the next section. \begin{figure}[H] \centering \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth,trim={2cm 2cm 2cm 1cm},clip]{sc.pdf} \caption{ICA spatial map showing SC (blue).} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth,trim={2cm 2cm 2cm 1cm},clip]{v1_lgn.pdf} \caption{ICA spatial maps showing LGN (orange) and V1 (yellow).} \end{subfigure} \par\medskip \centering \begin{subfigure}[t]{.485\textwidth} \centering \includegraphics[width=\textwidth]{raw_data.pdf} \caption{The normalized fUS responses in SC, LGN and V1 (the experimental paradigm is displayed in the background of the plots).} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{.495\textwidth} \centering \includegraphics[width=\textwidth]{hrfs_real.pdf} \caption{Estimated HRFs.} \end{subfigure} \par\medskip \centering \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth]{ep_real.pdf} \caption{Estimated source signal.} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width=\textwidth]{ep_real_mean_ep.pdf} \captionsetup{width = .9\textwidth} \caption{True EP, estimated EP and normalized responses in SC, LGN and V1, all averaged across stimulus repetitions.} \label{reps_regions} \end{subfigure} \par\medskip \caption{Deconvolution results on fUS data. Figures (a) and (b) show the ROIs determined with ICA. The regional fUS responses are displayed in (c). 
HRF and source signal estimation results are given in (d) and (e-f) respectively.} \label{hrf_exp} \end{figure} \newpage \section{Discussion} \label{sec_disc} In this study, we considered the problem of deconvolving multivariate fUS time-series by assuming that the HRFs are parametric and source signals are uncorrelated. We formulated this problem as a block-term decomposition, which delivers estimates for the source signals and region-specific HRFs. We investigated the fUS response in three ROIs of a mouse subject, namely the SC, LGN and V1, which together compose significant pathways between the eye and the brain. The proposed method for deconvolution of the hemodynamic response has the advantage of not requiring the source signal(s) to be specified. As such, it can potentially take into account numerous sources besides the EP, that are unrelated to the intended task and/or outside of the experimenters' control. As mentioned in the Introduction, imaged responses might encompass both stimulus-driven and stimulus-independent signals \citep{Xi2020DiverseCN,stringer}. The experimental subject may experience a state of so-called ``quiet wakefulness'' \citep{quitewakeful}, or ``wakeful rest'' \citep{wakefulrest}; a default brain state, during which the unstimulated and resting brain exhibits spontaneous and non-random activity \citep{Gilbert}. This exemplifies how a large fraction of the brain's response is not only triggered by the EP, but also highly influenced by the brain's top-down modulation. This assumption is further supported by recent fUS-based research on functional connectivity \citep{Osmanski} and the default mode network in mice \citep{Dizeux}. Other types of unintentional triggers could be spontaneous epileptic discharges \citep{bori_ica_ep}. Although the proposed solution holds the promise of identifying multiple sources, the number of sources is limited by the selected number of ROIs.
As we chose to focus on three ROIs, we were bound to assume fewer, i.e. two, underlying sources. Accordingly, we accounted for one of the sources to be task-related (related to the visual paradigm), whereas all other noise and artifact terms were combined to represent the second source. Our signal model assumes that the task-related source gets convolved with region-specific HRFs, whereas the artifact-related source is additive on the measured fUS data. As such, the signal model intrinsically assumes that the HRF in each studied region is driven by one common source signal. In fact, incorporating more ROIs and thus more sources can achieve a more realistic approximation of the vascular signal due to the aforementioned reasons. However, it should be noted that the addition of more sources would introduce additional uncertainties: what should be the number of sources, and how do we match a source with a certain activity? For instance, several sources can represent the spontaneous (or resting state) brain activity, several sources can represent the external stimuli (depending on the number of different types of stimuli used during the experiment), and the remaining sources can represent the noise and artifacts. To model such cases, the simulations should be extended to include more sources. In addition, thorough studies are needed to explore accurate matching of estimated sources to the activity they symbolize. The assignment of sources can indeed require a priori knowledge of the activities, such as expecting a certain activity to be prominent over the others \citep{cpd_sources}, or defining frequency bands for dividing the signal subspace \citep{reswater}. When we applied our method to \textit{in vivo} mouse-based fUS data, we observed unforeseen amplitude variations in the estimated source signal. To examine this further, we investigated the hemodynamic responses in the selected ROIs across repetitions.
We noticed that the response variability in the visual system increases from the subcortical to the cortical level. Consistent with our findings, electrophysiological studies such as \citep{catlgn} report an increase in trial-by-trial variability from subcortex to cortex, doubling from retina to LGN and again from LGN to visual cortex. Variability in responses could be related to external stimulation other than the EP, such as unintended auditory stimulation from experimental surroundings \citep{ito}. In addition, literature points to eye movements as a source of high response variability in V1, a behavior which can be found in head-fixated, but awake mice following attempted head rotation \citep{mouseeye}, which can extraordinarily alter stimulus-evoked responses \citep{eyemov}. We noted that the SC has the fastest reaction to stimuli, followed respectively by the LGN and V1. As V1 does not receive direct input from the retina, but via LGN and SC, its delayed HRF is consistent with the underlying subcortical-cortical connections of the visual processing pathway, as has also been reported by \citep{brunner,rats,lewis}. Moreover, the SC's particular aptness to swiftly respond to the visual stimulus aligns with its biological function to indicate potential threats (such as flashing, moving or looming spots \citep{Gale, Wang, Inayat}). Compared to our previous BTD-based deconvolution, we have made several improvements in this work. To start with, the current method exploits all the structures in the decomposition scheme. For example, previously the core tensor representing the lagged source autocorrelations was structured to have Toeplitz slices; however, these slices were not constrained to be shifted versions of each other. Incorporating such theoretically-supported structures significantly reduced the computation time of BTD by lowering the number of parameters to be estimated.
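To make the exploited structure concrete, the following sketch (with illustrative sizes and notation; not the paper's implementation) builds a lagged-autocorrelation core tensor from a single autocorrelation sequence, so that every frontal slice is Toeplitz and consecutive slices are shifted copies of one another:

```python
import numpy as np

def shifted_toeplitz_core(r, L, K):
    """Core tensor T of shape (L, L, K) with T[i, j, k] = r(i - j + k):
    every frontal slice T[:, :, k] is Toeplitz, and consecutive slices
    are shifted copies of each other, so the whole tensor is described
    by the single 1-D autocorrelation sequence r."""
    i, j, k = np.ogrid[:L, :L, :K]
    return r(i - j + k)

# toy autocorrelation of an AR(1)-like source: r(lag) = 0.7**|lag|
r = lambda lag: 0.7 ** np.abs(lag)
T = shifted_toeplitz_core(r, L=4, K=3)

# each slice is Toeplitz: constant along every diagonal
assert np.allclose(np.diag(T[:, :, 0], 1), T[0, 1, 0])
# slice k+1 equals slice k shifted by one row
assert np.allclose(T[:-1, :, 1], T[1:, :, 0])
```

Parametrizing the whole tensor by one 1-D sequence, rather than by independent Toeplitz slices, is what reduces the number of unknowns in the decomposition.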
In addition, we increased the robustness of our algorithm by applying a clustering-based selection of the final HRF estimates amongst multiple randomly-initialized BTD runs. Nevertheless, the formulated optimization problem is highly non-convex with many local minima, and the simulations show that there is still room for improvement. For instance, the selection of hyperparameters - namely the HRF filter length, number of time lags in the autocorrelation tensor, and the number of BTD repetitions - affects the performance of the algorithm. In addition, the selection of the ``best'' solution amongst several repetitions of such a non-convex factorization can be made in various ways, such as with different clustering criteria \citep{icasso}. Although many methods have been proposed to estimate the HRF so far, it is challenging to completely rely on one. First and foremost, we do not know the ground truth HRFs within the brain. As such, it is a difficult research problem on its own to assess the accuracy of a real HRF estimate. Furthermore, all methods make different assumptions on the data to perform deconvolution, such as uncorrelatedness of source signals (this study), the spectral characteristics of neural activity \citep{b14}, and Gaussian-distributed noise terms \citep{b16}. While making certain assumptions about the data model might be inevitable, it is important to keep an open mind about which assumption would remain valid in practice, and under which conditions. Hence, further experiments can be performed in the future to explore the limits of our assumptions. \section{Conclusion} In this paper, we deconvolved the fUS-based hemodynamic response in several regions of interest along the mouse visual pathway. We started with a multivariate model of fUS time-series using convolutive mixtures, which allowed us to define region-specific HRFs and multiple underlying source signals.
By assuming that the source signals are uncorrelated, we formulated the blind deconvolution problem into a block-term decomposition of the lagged autocorrelation tensor of fUS measurements. The HRFs estimated in SC, LGN and V1 are consistent with the literature and align with the commonly accepted neuroanatomical, biological and neuroscientific functions and interconnections of said areas, whereas the predicted source signal matches well with the experimental paradigm. Overall, our results show that convolutive mixtures with the accompanying tensor-based solution provide a flexible framework for deconvolution, while revealing a detailed and reliable characterization of hemodynamic responses in functional neuroimaging data. \section*{Acknowledgements} This study was funded by the Synergy Grant of the Department of Microelectronics of Delft University of Technology and the Delft Technology Fellowship. \newpage
\section{Introduction} It is well-known that on a compact Riemannian manifold $(X,g)$, any solution $u(t,z)$ of the wave equation $(\partial_t^2+\Delta_g)u(t,z)=0$ expands as a sum of oscillating terms of the form $e^{i\lambda_jt}a_j(z)$ where $\lambda_j^2$ are the eigenvalues of the Laplacian $\Delta_g$ and $a_j$ some associated eigenvectors. The eigenvalues then give the frequencies of oscillation in time. For non-compact manifolds, the situation is much more complicated and no general theory describes the behaviour of waves as time goes to infinity, at least in terms of spectral data. A first satisfactory description has been given by Lax-Phillips \cite{LP} and Vainberg \cite{V} for the Laplacian $\Delta_{X}$ with Dirichlet condition on $X:=\mathbb{R}^n\setminus\mathcal{O}$ where $\mathcal{O}$ is a compact obstacle and $n$ odd; indeed if $u(t)$ is the solution of $(-\partial_t^2-\Delta_X)u(t,z)=0$ with compactly supported smooth initial data in $X$ and under a \emph{non-trapping} condition, they show an expansion as $t\to+\infty$ of the form \[u(t,z)=\sum_{\substack{\lambda_j\in\mathcal{R}\\ \Im(\lambda_j)<N}}\sum_{k=1}^{m(\lambda_j)}e^{i\lambda_jt}t^{k-1}u_{j,k}(z)+\mathcal{O}(e^{-(N-\epsilon)t}), \quad \forall N>0, \forall \epsilon>0\] uniformly on compacts, where $\mathcal{R}\subset \{\lambda\in\mathbb{C},\Im(\lambda)\geq 0\}$ is a discrete set of complex numbers called \emph{resonances} associated with a multiplicity function $m:\mathcal{R}\to \mathbb{N}$, and $u_{j,k}$ are smooth functions. The real part of $\lambda_j$ is a frequency of oscillation while the imaginary part is an exponential decay rate of the solution. Resonances can in general be defined as poles of the meromorphic continuation of the Schwartz kernel of the resolvent of $\Delta_X$ through the continuous spectrum. 
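To illustrate how such an expansion behaves, the following toy computation (with invented resonance values, attached to no actual manifold) shows that the real parts set the oscillation frequencies while the smallest imaginary part dictates the long-time decay:

```python
import numpy as np

# Invented resonances lambda_j = Re + i*Im, in the upper half-plane
# convention of the expansion above: Re(lambda_j) is an oscillation
# frequency, Im(lambda_j) the decay rate of the term exp(i*lambda_j*t).
lams = np.array([2.0 + 0.1j, 5.0 + 0.4j, 9.0 + 1.3j])
amps = np.array([1.0, 0.5, 0.3])

def u(t):
    return np.sum(amps * np.exp(1j * lams * t))

# at large t only the slowest-decaying resonance (Im = 0.1) survives
t = 60.0
leading = amps[0] * np.exp(1j * lams[0] * t)
assert abs(u(t) - leading) < 1e-8   # other terms decay at least like e^{-0.4 t}
```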
In \cite{TZ}, Tang and Zworski extended this result to \emph{non-trapping} black-box perturbations of $\mathbb{R}^n$ and considered also a strongly trapped setting, namely when there exist resonances $\lambda_j$ such that\footnote{This is typically the case when $P$ has elliptic trapped orbits as shown in \cite{P}} $\Im(\lambda_j)<(1+|\lambda_j|)^{-N}$ for all $N>0$, satisfying in addition some separation and multiplicity conditions. The expansion of wave solutions then involves these resonances and the error is $\mathcal{O}(t^{-N})$ for all $N>0$. This last result has also been generalized by Burq-Zworski \cite{BZ} for semi-classical problems. It is important to notice that such results are almost certainly not optimal when the trapping is hyperbolic since, at least for all known examples, resonances do not seem to approach the real line faster than polynomially. Christiansen and Zworski \cite{CZ} studied two examples in hyperbolic geometry, the modular surface and the infinite volume cylinder; they showed a full expansion of waves in terms of resonances with exponentially decaying error terms. The proof is based on a separation of variables computation in the cylinder case (here the trapping geometry is that of a single closed hyperbolic orbit) while it relies on well-known number theoretic estimates for the Eisenstein series in the modular case. The case of de Sitter-Schwarzschild metrics has recently been studied by Bony-H\"afner \cite{BH} using also separation of variables and the rotational symmetry of the space. This is another example of hyperbolic trapping. Clearly, the general hyperbolic trapping situation is an issue and the above results are always based on very explicit computations or the arithmetic nature of the manifold.
It is therefore of interest to consider more general cases of hyperbolic trapping geometries, the most basic examples being the convex co-compact quotients of the hyperbolic space $\mathbb{H}^{n+1}$ that can be considered as the simplest non-trivial models of open quantum chaotic systems. \\ Hyperbolic quotients $\Gamma\backslash\mathbb{H}^{n+1}$ by a discrete group of isometries with only hyperbolic elements (those that do not fix points in $\mathbb{H}^{n+1}$ but fix two points on the sphere at infinity $S^n=\partial\mathbb{H}^{n+1}$) and admitting a finite-sided fundamental domain are called \emph{convex co-compact}. The Laplacian on such a quotient $X$ has continuous and essential spectrum equal to the half-line $[n^2/4,\infty)$, the natural wave equation is \begin{equation}\label{waveeq} (\partial_t^2+\Delta_X-n^2/4)u(t,z)=0, \quad u(0,z)=f_0(z), \quad \partial_tu(0,z)=f_1(z), \end{equation} and its solution is \begin{equation}\label{utf} u(t)=\cos\Big(t\sqrt{\Delta_X-\frac{n^2}{4}}\Big)f_0+ \frac{\sin \Big(t\sqrt{\Delta_X-\frac{n^2}{4}}\Big)}{\sqrt{\Delta_X-\frac{n^2}{4}}}f_1.\end{equation} For a convex co-compact quotient $X=\Gamma\backslash \mathbb{H}^{n+1}$, the group $\Gamma$ acts on $\mathbb{H}^{n+1}$ as isometries but also on the sphere at infinity $S^n=\partial\mathbb{H}^{n+1}$ as conformal transformations. The limit set $\Lambda(\Gamma)$ of the group is the set of accumulation points on $S^n$ of the orbit $\Gamma.m$ for the Euclidean topology of the ball $\{z\in\mathbb{R}^{n+1};|z|\leq 1\}$ for any picked $m\in\mathbb{H}^{n+1}$; it is well known that $\Lambda(\Gamma)$ does not depend on the choice of $m$.
We denote by $\delta\in(0,n)$ the Hausdorff dimension of $\Lambda(\Gamma)$, \[\delta:=\dim_{H}(\Lambda(\Gamma)).\] It is proved by Patterson \cite{Pat} and Sullivan \cite{SU} that $\delta$ is also the exponent of convergence of Poincar\'e series \begin{equation}\label{poincareseries} P_{\lambda}(m,m'):=\sum_{\gamma\in\Gamma}e^{-\lambda d_{h}(m,\gamma m')}, \quad m,m'\in\mathbb{H}^{n+1}, \end{equation} where $d_h$ is the hyperbolic distance. Standard coordinates on the unit sphere bundle $SX=\{(z,\xi)\in TX; |\xi|=1\}$ show that $2\delta+1$ is the Hausdorff dimension of the trapped set of the geodesic flow on $SX$. We denote by $\Omega:=S^n\setminus \Lambda(\Gamma)$ the domain of discontinuity of $\Gamma$, this is the largest open subset of $S^n$ on which $\Gamma$ acts properly discontinuously. The quotient $\Gamma\backslash\Omega$ is a compact manifold and $X$ can be compactified into a smooth manifold with boundary $\bar{X}=X\cup \partial\bar{X}$ with $\partial\bar{X}= \Gamma\backslash\Omega$. It turns out that $\partial\bar{X}$ inherits from the hyperbolic metric $g$ on $X$ a conformal class of metrics $[h_0]$, namely the conformal class of $h_0=x^2g|_{T\partial\bar{X}}$ where $x$ is any smooth boundary defining function of $\partial\bar{X}$ in $\bar{X}$.\\ In this paper, we focus on the case when $\delta<n/2$ since if $\delta>n/2$, the Laplacian $\Delta_X$ has pure point spectrum in $(0,n^2/4)$ that gives the leading asymptotic behaviour of $u(t)$ by usual spectral theory. We prove the following result. \begin{theo}\label{mainth0} Let $X$ be an $(n+1)$-dimensional convex co-compact hyperbolic manifold such that $\delta<n/2$, and let $f_0,f_1,\chi\in C_0^\infty(X)$. 
With $u(t)$ defined by \eqref{utf}, as $t\to +\infty$, we have the asymptotic \begin{equation}\label{chiut} \chi u(t)=\frac{A_X}{\Gamma(\delta-\frac{n}{2}+1)}e^{-t(\frac{n}{2}-\delta)}\langle u_\delta,(\delta-\frac{n}{2})f_0+f_1\rangle \chi u_\delta +\mathcal{O}_{L^2}(e^{(\delta-\frac{n}{2})t}t^{-\infty})\end{equation} where $u_\delta$ is the Patterson generalized eigenfunction defined in \eqref{udelta}, $\langle,\rangle$ is the distributional pairing, and $A_X\in\mathbb{C}\setminus\{0\}$ is a constant depending on $X$. \end{theo} \textsl{Remark 1}: when $\delta\notin n/2-\mathbb{N}$, this shows that the ``dynamical dimension'' $\delta$ controls the exponential decay rate of waves, or \emph{quantum decay rate}\footnote{This kind of result was predicted in \cite{N2}.}. It seems to be the first rather general example of hyperbolic trapping for which we have an explicit asymptotic for the waves, in terms of geometric data. However, we point out that the recent work of Petkov-Stoyanov \cite{PeS} should in principle imply an expansion in terms of a finite number of resonances for the exterior problem with strictly convex obstacles. We also believe that a result similar to Theorem \ref{mainth} holds for general negatively curved asymptotically hyperbolic manifolds; this will be studied in a subsequent work.\\ \textsl{Remark 2}: In the special case $\delta\in n/2-\mathbb{N}$ (note that this can happen only for $n\geq 3$, i.e. for four and higher dimensional manifolds) the leading term vanishes in view of the Euler $\Gamma$ function in \eqref{chiut}. Waves in this special case turn out to decay faster.
We explain this fact in the last section of the paper, and it is somehow related to the conformal theory of $\partial\bar{X}$: what happens is that when $\delta\notin n/2-\mathbb{N}$, $\lambda=\delta$ is always the closest pole to the continuous spectrum of the meromorphic extension of the resolvent $R(\lambda):=(\Delta_X-\lambda(n-\lambda))^{-1}$ and $u_\delta$ is an associated non-$L^2$ eigenstate $(\Delta_X-\delta(n-\delta))u_\delta=0$, while when $\delta= n/2-k$ with $k\in\mathbb{N}$, the extended resolvent $R(\lambda)$ is holomorphic at $\lambda=\delta$ and $u_\delta$ has asymptotic behaviour near $\partial\bar{X}$ \[u_\delta(z)= x(z)^{\delta}f_\delta +\mathcal{O}(x(z)^{\delta+1})\] where $f_\delta\in C^\infty(\partial\bar{X})$ is an element of $\ker(P_k)$, $P_k$ being the $k$-th GJMS conformal Laplacian \cite{GJMS} of the conformal boundary $(\partial\bar{X},[h_0])$; more precisely $P_j>0$ for all $j=1,\dots,k-1$ while $\ker P_k=\textrm{Span}(f_\delta)$. The manifold has a special conformal geometry at infinity that makes the resonance $\delta$ disappear and turn into a $0$-eigenvalue of the conformal Laplacian $P_k$.\\ The proof uses methods of Tang-Zworski \cite{TZ} together with information on the closest resonance to the critical line, that is $\delta$ when $\delta\notin n/2-\mathbb{N}$ (the physical sheet for the resolvent $R(\lambda):=(\Delta_X-\lambda(n-\lambda))^{-1}$ is $\{\Re(\lambda)>n/2\}$); this last fact has been proved by Patterson \cite{Pa} using Poincar\'e series and the Patterson-Sullivan measure. The powerful dynamical theory of Dolgopyat \cite{Do} has been used by the second author \cite{N} (for surfaces) and Stoyanov \cite{S} (in higher dimension) to prove the existence of a strip with no zero on the left of the first zero $\lambda=\delta$ for the Selberg zeta function. Using results of Patterson-Perry \cite{PP}, this implies a strip $\{\delta-\epsilon<\Re(\lambda)<\delta\}$ with no resonance.
Then we can view $u(t)$ as a contour integral of the resolvent $R(\lambda)$, move the contour up to $\delta$ and apply the residue theorem. This involves obtaining rather sharp estimates on the truncated (on compact sets) resolvent near the line $\{\Re(\lambda)=\delta\}$. This is achieved by combining the non-vanishing result with an a priori bound that results from a precise parametrix of the truncated resolvent.\\ A second result of this article is the proof of the existence of an explicit strip with infinitely many resonances. \begin{theo} Let $X=\Gamma\backslash \mathbb{H}^{n+1}$ be a convex co-compact hyperbolic manifold and let $\delta\in(0,n)$ be the Hausdorff dimension of its limit set. Then for all $\varepsilon>0$, there exist infinitely many resonances in the strip $\{ -n\delta-\varepsilon< \Re(s)< \delta \}$. If moreover $\Gamma$ is a Schottky group, then there exist infinitely many resonances in the strip $\{ -\delta^2-\varepsilon< \Re(s)< \delta \}$. \end{theo} Note that the existence of infinitely many resonances in some strips was proved by Guillop\'e-Zworski \cite{GZw} in dimension $2$ and Perry \cite{Pe} in higher dimension, but in both cases, they did not provide any geometric information on the width of these strips. Our proof is based on a Selberg-like trace formula and uses all previously known counting estimates for resonances. An interesting consequence is an explicit Omega lower bound for the remainder in \eqref{chiut} for generic compactly supported initial data. \begin{cor} For any compact set $K\subset X$, there exists a generic set $\Omega\subset C^\infty(K)$ such that for all $f_1\in\Omega,f_0=0$ and all $\epsilon>0$, the remainder in \eqref{chiut} is not a $\mathcal{O}_{L^2}(e^{-(\frac{n}{2} +n\delta+\epsilon)t})$ as $t\to \infty$. If $X$ is Schottky, $\mathcal{O}_{L^2}(e^{-(\frac{n}{2} +n\delta+\epsilon)t})$ can be improved to $\mathcal{O}_{L^2}(e^{-(\frac{n}{2}+\delta^2+\epsilon)t})$.
\end{cor} The meaning of ``generic'' above is in the Baire category sense, i.e. it is a $G_\delta$-dense subset. We point out that when $n=1$, all convex co-compact surfaces are Schottky, i.e. are obtained as $\Gamma\backslash \mathbb{H}^2$, where $\Gamma$ is a Schottky group. For a definition of Schottky groups in our setting we refer for example to the introduction of \cite{GLZ}. In higher dimensions, not all convex co-compact manifolds are obtained via Schottky groups. For more details and references around these questions we refer to \cite{GN}. The rest of the paper is organized as follows. In $\S 2$, we review and prove some necessary bounds on the resolvent in the continuation domain. In $\S 3$ we prove the estimate on the strip with finitely many resonances. In $\S4$, we derive the asymptotics by using contour deformation and the key bounds of $\S 2$. We also show how to relate $\S 3$ to an Omega lower bound of the remainder. Finally, $\S 5$ is devoted to the analysis of the special cases $\delta \in \frac{n}{2} -\mathbb{N}$ in terms of the conformal theory of infinity. \bigskip \noindent \textbf{Acknowledgement}. Both authors are supported by ANR grant JC05-52556. C.G acknowledges support of NSF grant DMS0500788, ANR grant JC0546063 and thanks the Math department of ANU (Canberra) where part of this work was done. \section{Resolvent} We start in this section by analyzing the resolvent of the Laplacian for convex co-compact quotients of $\mathbb{H}^{n+1}$ and we give some estimates of its norms.
\subsection{Geometric setting} We let $\Gamma$ be a convex co-compact group of isometries of $\mathbb{H}^{n+1}$ with Hausdorff dimension of its limit set satisfying $0<\delta<n/2$, we set $X=\Gamma\backslash \mathbb{H}^{n+1}$ for its quotient equipped with the induced hyperbolic metric, and we denote the natural projections by \begin{equation}\label{pigamma} \pi_\Gamma: \mathbb{H}^{n+1}\to X=\Gamma\backslash \mathbb{H}^{n+1}, \quad \bar{\pi}_\Gamma: \Omega\to \partial\bar{X}=\Gamma\backslash\Omega. \end{equation} By assumption on the group $\Gamma$, for any element $\gamma\in \Gamma$ there exists $\alpha \in {\rm Isom}(\mathbb{H}^{n+1})$ such that for all $(x,y)\in \mathbb{H}^{n+1}=\mathbb{R}^n\times \mathbb{R}_+$, $$\alpha^{-1}\circ \gamma \circ \alpha(x,y)=e^{l(\gamma)}(O_\gamma(x),y),$$ where $O_\gamma \in SO_n(\mathbb{R}), l(\gamma)>0$. We will denote by $\alpha_1(\gamma),\ldots,\alpha_n(\gamma)$ the eigenvalues of $O_\gamma$, and we set \begin{equation}\label{Ggamma} G_\gamma(k)=\det \left(I-e^{-kl(\gamma)}O_\gamma^k \right)= \prod_{i=1}^n \left(1-e^{-kl(\gamma)}\alpha_i(\gamma)^k \right). \end{equation} The Selberg zeta function of the group is defined by \[Z(\lambda)=\exp\left(-\sum_{\gamma}\sum_{m=1}^{\infty}\frac{1}{m}\frac{e^{-\lambda ml(\gamma)}}{G_\gamma(m)}\right),\] the sum converges for $\Re(\lambda)>\delta$ and admits a meromorphic extension to $\lambda\in\mathbb{C}$ by results of Fried \cite{Fr} and Patterson-Perry \cite{PP}. \subsection{Extension of resolvent, resonances and zeros of Zeta} The spectrum of the Laplacian $\Delta_X$ on $X$ is a half line of absolutely continuous spectrum $[n^2/4,\infty)$, and if we take for the resolvent of the Laplacian the spectral parameter $\lambda(n-\lambda)$ \[R(\lambda):=(\Delta_X-\lambda(n-\lambda))^{-1},\] this is a bounded operator on $L^2(X)$ if $\Re(\lambda)>n/2$.
It is shown by Mazzeo-Melrose \cite{MM} and Guillop\'e-Zworski \cite{GZ2} that $R(\lambda)$ extends meromorphically in $\mathbb{C}$ as continuous operators $R(\lambda):L^2_{\rm comp}(X)\to L^2_{\rm loc}(X)$, with poles of finite multiplicity, i.e. the rank of the polar part in the Laurent expansion of $R(\lambda)$ at a pole is finite. The poles are called \emph{resonances} of $\Delta_X$, they form the discrete set $\mathcal{R}$ included in $\{\Re(\lambda)<n/2\}$, where each resonance $s\in\mathcal{R}$ is repeated with the multiplicity \[m_s:=\textrm{rank} (\textrm{Res}_{\lambda=s}R(\lambda)).\] A corollary of the analysis of divisors of $Z(\lambda)$ by Patterson-Perry \cite{PP} and Bunke-Olbrich \cite{BO} is the \begin{prop}[{\bf Patterson-Perry, Bunke-Olbrich}]\label{patper} Let $s\in \mathbb{C}\setminus (-\mathbb{N}_0\cup(n/2-\mathbb{N}))$, then $Z(\lambda)$ is holomorphic at $s$, and $s$ is a zero of $Z(\lambda)$ if and only if $s$ is a resonance of $\Delta_X$. Moreover its order as a zero of $Z(\lambda)$ coincides with the multiplicity $m_s$ of $s$ as a resonance. \end{prop} \subsection{Estimates on the resolvent $R(\lambda)$ in the non-physical sheet} The series $P_\lambda(m,m')$ defined in (\ref{poincareseries}) converges absolutely in $\Re(\lambda)>\delta$, is a holomorphic function of $\lambda$ there, with local uniform bounds in $m,m'$, which clearly gives \[\forall \epsilon>0, \exists C_{\epsilon,m,m'}>0, \forall \lambda \textrm{ with }\Re(\lambda)\in[\delta+\epsilon,n], \quad |P_{\lambda}(m,m')|\leq C_{\epsilon,m,m'}\] and $C_{\epsilon,m,m'}$ is locally uniform in $m,m'$.
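For intuition, this convergence can be checked by hand on the simplest example: for the elementary cyclic group generated by the dilation $z\mapsto e^{l}z$ of $\mathbb{H}^2$ one has $d_h(i,\gamma^n i)=|n|l$ and $\delta=0$, so the Poincar\'e series at $m=m'=i$ is an explicit geometric series. The toy numerical check below (an illustration only, not part of the argument) verifies the closed form:

```python
import numpy as np

# Elementary cyclic group <z -> exp(l)*z> acting on H^2:
# d_h(i, gamma^n i) = |n|*l, so P_lambda(i, i) = sum_{n in Z} exp(-lam*|n|*l),
# a geometric series converging exactly on Re(lam) > 0 = delta.
l, lam = 1.5, 0.3
partial = sum(np.exp(-lam * abs(n) * l) for n in range(-200, 201))
closed_form = (1 + np.exp(-lam * l)) / (1 - np.exp(-lam * l))
# the truncated sum matches the closed form up to an e^{-200*lam*l} tail
assert abs(partial - closed_form) < 1e-12
```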
We show the \begin{prop}\label{borneres} With previous assumptions, there exist $\epsilon>0$ and a holomorphic family in $\{\Re(\lambda)>\delta-\epsilon\}$ of continuous operators $K(\lambda):L_{{\rm comp}}^2(X)\to L_{{\rm loc}}^2(X)$ such that the resolvent satisfies in $\Re(\lambda)>\delta$ \[R(\lambda)=\frac{(2\pi)^{-\frac{n}{2}}\Gamma(\lambda)}{\Gamma(\lambda-\frac{n}{2})}P(\lambda)+K(\lambda)\] where $P(\lambda)$ is the operator with Schwartz kernel $P_\lambda(m,m')$. Moreover there exists $M>0$ such that for any $\chi_1,\chi_2\in C_0^\infty(X)$, there is a $C>0$ such that \[||\chi_1K(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq C(|\lambda|+1)^M, \quad \Re(\lambda)>\delta-\epsilon\] \end{prop} \textsl{Proof}: We choose a fundamental domain $\mathcal{F}$ for $\Gamma$ with a finite number of sides paired by elements of $\Gamma$. By standard arguments of automorphic functions, the resolvent kernel $R(\lambda;m,m')$ for $m,m'\in\mathcal{F}$ is the average \[R(\lambda;m,m')=\sum_{\gamma\in\Gamma}G(\lambda;m,\gamma m')=\sum_{\gamma\in \Gamma}\sigma(d_h(m,\gamma m'))^{\lambda}k_\lambda(\sigma(d_h(m,\gamma m')))\] \[\sigma(d):=(\cosh d)^{-1}=2e^{-d} (1+e^{-2d})^{-1}\] where $G(\lambda;m,m')$ is the Green kernel of the Laplacian on $\mathbb{H}^{n+1}$ and $k_\lambda\in C^\infty([0,1))$ is the hypergeometric function defined for $\Re(\lambda)>\frac{n-1}{2}$ by \[k_\lambda(\sigma):=\frac{2^{\frac{3-n}{2}}\pi^{\frac{n+1}{2}}\Gamma(\lambda)}{\Gamma(\lambda-\frac{n+1}{2}+1)} \int_0^1(2t(1-t))^{\lambda-\frac{n+1}{2}}(1+\sigma(1-2t))^{-\lambda}dt\] which extends meromorphically to $\mathbb{C}$ and whose Taylor expansion at order $2N$ can be written \[k_\lambda(\sigma)=2^{-\lambda-1}\sum_{j=0}^N\alpha_j(\lambda)\Big(\frac{\sigma}{2}\Big)^{2j}+k_\lambda^N(\sigma), \quad \alpha_j(\lambda):=\frac{\pi^{-\frac{n}{2}}\Gamma(\lambda+2j)}{\Gamma(\lambda-\frac{n}{2}+1)\Gamma(j+1)}\] with $k^N_\lambda\in C^{\infty}([0,1))$ and the estimate for any $\epsilon_0>0$
\begin{equation}\label{estimklan} |k_\lambda^N(\sigma)|\leq \sigma^{2N+2}C^N(|\lambda|+1)^{CN}, \quad \sigma\in[0,1-\epsilon_0), \quad \Re(\lambda)>\frac{n}{2}-N \end{equation} for some $C>0$ depending only on $\epsilon_0$, see for instance \cite[Lem. B.1]{GTh}. Extracting the first term with $\alpha_0$ in $k_\lambda$, we can then decompose \[R(\lambda;m,m')=\frac{\pi^{\frac{n}{2}}\Gamma(\lambda)}{2\Gamma(\lambda-\frac{n}{2}+1)}\Big( \sum_{\gamma\in\Gamma}e^{-\lambda d_h}+\sum_{\gamma\in\Gamma}e^{-(\lambda+1) d_h} f_\lambda(e^{-d_h})\Big)+\sum_{\gamma\in\Gamma}\sigma(d_h)^{\lambda}k^0_\lambda(\sigma(d_h)), \] \[f_\lambda(x):=\frac{(1+x^2)^{-\lambda}-1}{x},\] where $d_h$ means $d_h(m,\gamma m')$ here. Thus to prove the Proposition, we have to analyze the term $K(\lambda):=2^{-1}\alpha_0(\lambda)K_1(\lambda)+K_2(\lambda)$ with \[K_1(\lambda):= \sum_{\gamma\in\Gamma}e^{-(\lambda+1) d_h} f_\lambda(e^{-d_h}), \quad K_2(\lambda):=\sum_{\gamma\in\Gamma}\sigma(d_h)^{\lambda}k^0_\lambda(\sigma(d_h)).\] The first term $K_1$ is easy to deal with: since $|f_\lambda(x)|\leq C(|\lambda|+1)$ for $x\in[0,1]$, we can use the fact that $P_{\lambda+1}(m,m')$ converges absolutely in $\Re(\lambda)>\delta-1$, is holomorphic there, and is locally uniformly bounded in $(m,m')$, thus \[|\alpha_0(\lambda)\chi_1(m)\chi_2(m')K_1(\lambda;m,m')|\leq C(|\lambda|+1)^{\frac{n}{2}+1}\] and the same bound holds for the operator in $\mathcal{L}(L^2(X))$ with Schwartz kernel $\chi_1(m)\chi_2(m')K_1(\lambda;m,m')$.
Note that $\alpha_0(\lambda)$ has no pole in $\Re(\lambda)>0$, thus no pole in $\Re(\lambda)>\delta/2>0$.\\ For $K_2(\lambda)$ we can decompose it as follows: for $m\in\textrm{Supp}(\chi_1)$, $m'\in\textrm{Supp}(\chi_2)$ (which are compact in $\mathcal{F}$) and for $\epsilon_0>0$ fixed, there is a finite set $\Gamma_0=\{\gamma_0,\dots,\gamma_L\}\subset\Gamma$ such that $d_h(m,\gamma m')>\epsilon_0$ for any $\gamma\notin\Gamma_0$ and any $m,m'\in\mathcal{F}$; this is because the group acts properly discontinuously on $\mathbb{H}^{n+1}$. Thus we split the sum in $K_2(\lambda)$ into \begin{equation}\label{k2} K_2(\lambda)=\sum_{\gamma\in\Gamma_0}\sigma(d_h)^{\lambda}k^0_\lambda(\sigma(d_h))+\sum_{\gamma\notin \Gamma_0} \sigma(d_h)^{\lambda}k^0_\lambda(\sigma(d_h)).\end{equation} We first observe that the second term is a convergent series, holomorphic in $\lambda$, for $\Re(\lambda)>\delta-1$ and locally uniformly bounded in $(m,m')$. Indeed it is easily seen to be bounded by \begin{equation}\label{k22} CN(|\lambda|+1)\sum_{j=1}^N|\alpha_j(\lambda)|P_{\Re(\lambda)+2j}(m,m') +C^N(|\lambda|+1)^{CN}P_{\Re(\lambda)+2N+1}(m,m')\end{equation} by assumption on $\Gamma_0$ and using (\ref{estimklan}), $C$ depending on $\epsilon_0$ only. Moreover since $\alpha_j(\lambda)$ is polynomially bounded by $C(|\lambda|+1)^{2j}$, we have a polynomial bound for (\ref{k22}) of degree depending on $N$. The first term in (\ref{k2}) is a finite sum, thus it suffices to estimate each term; however, because of the usual conormal singularity of the resolvent at the diagonal, each kernel blows up as $d_h(m,m')\to 0$. We want to use Schur's lemma for instance, so we have to bound \[\sup_{m\in\mathcal{F}}\int_{\mathcal{F}}|\chi_1(m)\chi_2(m')K_2(\lambda;m,m')|dm'_{\mathbb{H}^{n+1}},\quad \sup_{m'\in\mathcal{F}}\int_{\mathcal{F}}|\chi_1(m)\chi_2(m')K_2(\lambda;m,m')|dm_{\mathbb{H}^{n+1}}.
\] First we recall that $\mathbb{H}^{n+1}=(0,\infty)_x\times\mathbb{R}^n_y$ has a Lie group structure with product \[(x,y).(x',y')=(xx',y+xy'), \quad (x,y)^{-1}=(\frac{1}{x},-\frac{y}{x})\] and neutral element $e:=(1,0)$. Then if $(u,v):=(x',y')^{-1}.(x,y)=(x/x',(y-y')/x')$ we get \begin{equation}\label{coshd} (\cosh(d_h(x,y;x',y')))^{-1}=\frac{2xx'}{x^2+{x'}^2+|y-y'|^2}=\frac{2u}{1+u^2+|v|^2}=(\cosh(d_h(u,v;1,0)))^{-1}. \end{equation} Moreover the diffeomorphism $(u,v)\to m'=m.(u,v)^{-1}$ on $\mathbb{H}^{n+1}$ pulls the hyperbolic measure $dm'_{\mathbb{H}^{n+1}}={x'}^{-n-1}dx'dy'$ back to the right invariant measure $u^{-1}dudv$ for the group action. This is to say that we have to bound \begin{equation}\label{sup1} \sup_{m\in\mathcal{F}}\int_{\mathcal{F}^{-1}.m}|\chi_1(m)\chi_2(m.(u,v)^{-1})K_2(\lambda;m,m.(u,v)^{-1})|\frac{dudv}{u}\end{equation} where $\mathcal{F}^{-1}.m:=\{{m'}^{-1}.m; m'\in\mathcal{F}\}$ and similarly \begin{equation}\label{sup2} \sup_{m'\in\mathcal{F}}\int_{\mathcal{F}^{-1}.m'}|\chi_1(m'.(u,v)^{-1})\chi_2(m')K_2(\lambda;m'.(u,v)^{-1},m')|\frac{dudv}{u}. \end{equation} Because $m,m'$ are in compact sets, the estimate (\ref{k22}) with $N=n$ gives a polynomial bound in $\lambda$ in $\{\Re(\lambda)>\delta-\epsilon\}$ for the terms coming from $\gamma\notin\Gamma_0$.
To deal with the term of (\ref{k2}) containing elements $\gamma\in\Gamma_0$, we use Lemma B.1 of \cite{GTh}, which proves that for any compact $K$ of $\mathbb{H}^{n+1}$, there exists a constant $C_K$ such that \begin{equation}\label{borneresl1} \int_{K}|G(\lambda;(u,v),e)|\frac{dudv}{u}\leq \frac{C_K^N(|\lambda|+1)^{n-1}}{\textrm{dist}(\lambda,-\mathbb{N}_0)}, \quad \Re(\lambda)>\frac{n}{2}-N.\end{equation} Now, to bound (\ref{sup1}) with $K_2(\lambda,\bullet,\bullet)$ replaced by $\sigma(d_h(\bullet,\gamma\bullet))^\lambda k^0_\lambda(\sigma(d_h(\bullet,\gamma\bullet)))$, we note that, before performing the change of variable leading to (\ref{sup1}), we can make the change of variable $m'\to\gamma^{-1}m'$, which amounts to bounding \[\sup_{m\in\mathcal{F}}\int_{(\gamma\mathcal{F})^{-1}.m}\Big|\chi_1(m)\chi_2(\gamma^{-1}m.(u,v)^{-1})\Big(G(\lambda;(u,v),e)- 2^{-\lambda-1}\alpha_0(\lambda)\sigma^\lambda(d_h((u,v),e))\Big)\Big|\frac{dudv}{u}\] where we used (\ref{coshd}). But again, since $\chi_1,\chi_2$ have compact support, we get a polynomial bound in $\lambda$ using (\ref{borneresl1}) and a trivial polynomial bound for $k_\lambda(0)$. The term (\ref{sup2}) can be dealt with similarly and we finally deduce that for some $M$, \[||\chi_1 K_2(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq C(|\lambda|+1)^M, \quad \Re(\lambda)>\delta-\epsilon,\] and the Proposition is proved. \hfill$\square$\\ This clearly shows that the resolvent extends analytically to $\{\Re(\lambda)>\delta\}$. Actually, Patterson \cite{Pa} (see also \cite[Prop 1.1]{P}) showed the following.
\begin{prop}[{\bf Patterson}]\label{pa} The family of operators $\Gamma(\lambda-n/2+1)R(\lambda)$ is holomorphic in $\{\Re(\lambda)>\delta\}$, has no pole on $\{\Re(\lambda)=\delta, \lambda\not=\delta\}$ and has a pole of order $1$ at $\lambda=\delta$ with rank $1$ residue given by \[{\rm Res}_{\lambda=\delta} \Gamma(\lambda-n/2+1)R(\lambda)=A_X u_\delta\otimes u_\delta\] where $A_X\not=0$ is some constant depending on $\Gamma$ and $u_\delta$ is the Patterson generalized eigenfunction defined by \begin{equation}\label{udelta} \pi_\Gamma^*u_\delta(m)=\int_{\partial_\infty\mathbb{H}^{n+1}}\Big(\mathcal{P}(m,y)\Big)^\delta d\mu_{\Gamma}(y) \end{equation} $\mathcal{P}$ being the Poisson kernel of $\mathbb{H}^{n+1}$ and $d\mu_{\Gamma}$ the Patterson-Sullivan measure associated to $\Gamma$ on the sphere $\partial_{\infty}\mathbb{H}^{n+1}=\mathbb{R}^n\cup\{\infty\} = S^n$. \end{prop} Let us note that $\delta\in n/2-\mathbb{N}$ is a special case, since the resolvent then becomes holomorphic at $\lambda=\delta$. We postpone the analysis of this phenomenon to \S 5.\\ A rough exponential estimate in the non-physical sheet also holds, using determinant methods (used for instance in \cite{GZ}). \begin{lem}\label{exponentres} For $\chi_1,\chi_2\in C_0^\infty(X)$, $j\in\mathbb{N}_0$, and $\eta>0$ there is $C>0$ such that for $|\lambda|\leq N/16$ and $\textrm{dist}(\lambda,\mathcal{R})>\eta$, \[\quad ||\partial_\lambda^j\chi_1R(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq e^{C(N+1)^{n+3}}.\] \end{lem} \textsl{Proof}: we apply the idea of \cite[Lem. 3.6]{GZ} with the parametrix construction of $R(\lambda)$ written in \cite{GZ2}. Let $x$ be a boundary defining function of $\partial\bar{X}$ in $\bar{X}$, which can be considered as a weight to define Hilbert spaces $x^\alpha L^2(X)$, for any $\alpha\in\mathbb{R}$.
For any large $N>0$, that we suppose in $2\mathbb{N}$ for convenience, Guillop\'e and Zworski \cite{GZ2} construct operators \[P_N(\lambda,\lambda_0):x^NL^2(X)\to x^{-N}L^2(X), \quad K_N(\lambda,\lambda_0):x^NL^2(X)\to x^NL^2(X),\] meromorphic with finite multiplicity in $O_N:=\{\Re(\lambda)>(n-N)/2\}$, whose poles are situated at $-\mathbb{N}_0$, and such that \[(\Delta_X-\lambda(n-\lambda))P_N(\lambda,\lambda_0)=1+K_N(\lambda,\lambda_0)\] with $\lambda_0$ large depending on $N$, take for instance $\lambda_0=n/2+N/8$. Moreover $K_N(\lambda,\lambda_0)$ is compact with characteristic values satisfying in $O_{N,\eta}:=O_N\cap\{\textrm{dist}(\lambda,-\mathbb{N}_0)>\eta\}$ \begin{equation}\label{muj} \mu_j(K_N(\lambda,\lambda_0))\leq C(1+|\lambda-\lambda_0|)j^{-\frac{1}{n}}+\left\{\begin{array}{ll} e^{CN} & \textrm{ if } j\leq CN^{n+1}\\ e^{-N/C}j^2 & \textrm{ if } j\geq CN^{n+1} \end{array}\right. \end{equation} for some $0<\eta<1/4$ and $C>0$ independent of $\lambda,N$. They also have $||K_N(\lambda_0,\lambda_0)||\leq 1/2$ in $\mathcal{L}(x^NL^2(X))$, thus by the analytic Fredholm theorem \[R(\lambda)=P_N(\lambda,\lambda_0)(1+K_N(\lambda,\lambda_0))^{-1}: x^NL^2(X)\to x^{-N}L^2(X)\] is meromorphic with poles of finite multiplicity in $O_N$. By a standard method as in \cite[Lem. 3.6]{GZ}, we define \[d_N(\lambda):=\det (1+K_N(\lambda,\lambda_0)^{n+2})\] which exists in view of (\ref{muj}), and we have the rough bound \begin{equation}\label{estimekn} ||(1+K_N(\lambda,\lambda_0))^{-1}||_{\mathcal{L}(x^NL^2(X))}\leq \frac{\det(1+|K_N(\lambda,\lambda_0)|^{n+2})}{|d_N(\lambda)|} \end{equation} in $O_{N,\eta}$, where $|A|:=(A^*A)^{\frac{1}{2}}$ for $A$ compact. The term in the numerator is easily shown to be bounded by $\exp(C(N+1)^{n+2})$ in $O_{N,\eta}$ from (\ref{muj}); actually this is written in \cite[Lem. 5.2]{GZ2}. It remains to obtain a lower bound on $|d_N(\lambda)|$.
In Lemma 3.6 of \cite{GZ}, the authors use the minimum modulus theorem to obtain a lower bound on a function from an upper bound, but this requires the function to be analytic in $\mathbb{C}$. Here there is a substitute, namely Cartan's estimate \cite[Th. I.11]{Le}. We first need to multiply $d_N(\lambda)$ by a holomorphic function $J_N(\lambda)$ with zeros of sufficient multiplicity at $\{-k;k=0,\dots,N/2\}$ to make $J_N(\lambda)d_N(\lambda)$ holomorphic in $O_N$; for instance the polynomial \[J_N(\lambda):=\prod_{k=0}^{N/2}(\lambda-k)^{CN^{n+2}}\] for some large integer $C>0$ suffices, in view of the order ($\leq C{N^{n+2}}$) of each $-k$ as a pole of $d_N(\lambda)$ proved in \cite[Lem. A.1]{GZ2}. Then clearly $f_N(\lambda):=J_N(\lambda+\lambda_0)d_N(\lambda+\lambda_0)/(J_N(\lambda_0)d_N(\lambda_0))$ is holomorphic in $\{|\lambda|\leq N/4\}$ and satisfies in this disk \[|f_N(\lambda)|\leq e^{C(N+1)^{n+3}}, \quad f_N(0)=1,\] where we used the maximum principle in disks around each $-k$ to estimate the norm there. Thus we may apply Cartan's estimate to this function in $|\lambda|<N/4$: for all $\alpha>0$ small enough there exists $C_\alpha>0$ such that \[\log|f_N(\lambda)|>-C_\alpha\log \Big(\sup_{|\lambda|\leq N/4}|f_N(\lambda)|\Big)\] for all $|\lambda|\leq N/4$ outside a family of disks the sum of whose radii is bounded by $\alpha N$. Fixing $\alpha$ sufficiently small, there exists $\beta_N\in(3/4,1)$ so that \[|d_N(\lambda)|>e^{-C(N+1)^{n+3}} \textrm{ for }|\lambda-\lambda_0|=\beta_N \frac{N}{4}.\] Note that we can also choose $\beta_N$ so that $\textrm{dist}(\beta_NN/4,\mathbb{N})>\eta$ for some small $\eta$ uniform with respect to $N$. Thus the same bound holds for $||(1+K_N(\lambda,\lambda_0))^{-1}||_{\mathcal{L}(x^NL^2(X))}$ using (\ref{estimekn}).
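For the reader's convenience, we recall the form of Cartan's minimum modulus estimate used above (see \cite[Th. I.11]{Le}; we do not keep track of the precise constants): if $f$ is holomorphic in $\{|z|\leq 2eR\}$ with $f(0)=1$, then for any small $\eta>0$
\[\log|f(z)|>-H(\eta)\log\Big(\sup_{|z|\leq 2eR}|f|\Big)\]
holds for all $|z|\leq R$ outside a family of excluded disks the sum of whose radii is at most $\eta R$, where $H(\eta)>0$ depends only on $\eta$. Applied to $f_N$ on a disk of radius comparable to $N$, and combined with the upper bound $|f_N(\lambda)|\leq e^{C(N+1)^{n+3}}$, this gives the lower bound on $|d_N(\lambda)|$ stated above.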
Now we need a bound for $P_N(\lambda,\lambda_0)$ and it suffices to go back to its definition in the proof of Proposition 3.2 of \cite{GZ2}: it involves operators of the form $\iota^*\varphi R_{\mathbb{H}^{n+1}}(\lambda)\psi\iota_*$ for some cut-off functions $\psi,\varphi \in C^\infty (\mathbb{H}^{n+1})$ and isometry \[\iota: U\subset X\to \{(x,y)\in (0,\infty)\times \mathbb{R}^n; x^2+|y|^2<1\}\subset \mathbb{H}^{n+1},\] and operators whose norm is explicitly bounded in \cite[Sect. 4]{GZ2} by $e^{C(N+1)}$ in $O_{N,\eta}$. Appendix B of \cite{GTh} gives an estimate of the same form for $||\varphi R_{\mathbb{H}^{n+1}}(\lambda)\psi||$ as an operator in $\mathcal{L}(x^NL^2(X), x^{-N}L^2(X))$ for $\lambda\in O_{N,\eta}$ (this is actually a direct consequence of (\ref{borneresl1}) and (\ref{estimklan})), thus we have the bound \[||R(\lambda)||_{\mathcal{L}(x^NL^2(X), x^{-N}L^2(X))}\leq e^{C(N+1)^{n+3}}\] in $\{|\lambda-\lambda_0|=\beta_NN/4\}$. Let $\mathcal{R}_N$ be the set of poles of $R(\lambda)$ in $O_{N}$, each pole being repeated according to its order; $\mathcal{R}_N$ has at most $CN^{n+2}$ elements, so we may multiply $R(\lambda)$ by \[F_N(\lambda):=\prod_{s\in \mathcal{R}_N}E(\lambda/s,n+2)\] where $E(z,p):=(1-z)\exp(z+\dots+p^{-1}z^p)$ is the Weierstrass elementary function. It is rather easy to check that for all $\epsilon>0$ small, we have the bounds \begin{equation}\label{estimateF_N} e^{C_\epsilon (N+1)^{n+3}}\geq |F_N(\lambda)|\geq e^{-C_\epsilon(N+1)^{n+3}} \end{equation} for some $C_\epsilon$ and for all $\lambda\in O_{N}$ such that $\textrm{dist}(\lambda,\mathcal{R})>\epsilon$. Thus $R(\lambda)F_N(\lambda)$ is holomorphic in $\{|\lambda-\lambda_0|\leq \beta_NN/4\}$ and we can use the maximum principle, which gives an upper bound $||F_N(\lambda)R(\lambda)||_{\mathcal{L}(x^NL^2, x^{-N}L^2)} \leq \exp(C_\epsilon(N+1)^{n+3})$ in $\{|\lambda-\lambda_0|\leq \beta_NN/4\}$.
We get our conclusion using \eqref{estimateF_N}, the fact that $\chi_i$ is bounded by $e^{CN}$ as an operator from $L^2$ to $x^NL^2$, and the Cauchy formula for the case $j>0$ (estimates of the derivatives with respect to $\lambda$). \hfill$\square$\\ \textsl{Remark}: Notice that similar estimates have been obtained independently by Borthwick \cite{Bor}.\\ In the case of surfaces, the second author \cite{N} used the powerful estimates developed by Dolgopyat \cite{Do} to prove that the Selberg zeta function $Z(\lambda)$ is analytic and non-vanishing in $\{\Re(\lambda)>\delta-\epsilon, \lambda\not=\delta\}$ for some $\epsilon>0$. In higher dimensions, the same result holds, as was shown recently by Stoyanov \cite{S}. \begin{theo}[\bf{Naud, Stoyanov}]\label{naudstoy} There exists $\epsilon>0$ such that the Selberg zeta function $Z(\lambda)$ is holomorphic and non-vanishing in $\{\lambda\in\mathbb{C}; \Re(\lambda)>\delta-\epsilon, \lambda\not=\delta\}$. \end{theo} Using Proposition \ref{patper}, this implies that the resolvent $R(\lambda)$ is holomorphic in a similar set (possibly after taking $\epsilon>0$ smaller). Then an easy consequence of the maximum principle as in \cite{TZ,BP}, together with a rough exponential bound for the resolvent, allows one to get a polynomial bound for $||\chi_1R(\lambda)\chi_2||$ on the line $\{\Re(\lambda)=\delta, \lambda\not=\delta\}$. \begin{cor}\label{extres} There is $\epsilon>0$ such that the resolvent $R(\lambda)$ is meromorphic in $\Re(\lambda)>\delta-\epsilon$ with only possible pole the simple pole $\lambda=\delta$, the residue of which is given by \[\textrm{Res}_{\lambda=\delta}R(\lambda)=\frac{A_X}{\Gamma(\frac{n}{2}-\delta+1)}u_\delta\otimes u_\delta\] where $u_\delta$ is the Patterson generalized eigenfunction of (\ref{udelta}) and $A_X\not=0$ a constant.
Moreover for all $j\in\mathbb{N}_0$, $\chi_1,\chi_2\in C_0^\infty(X)$, there exist $L\in\mathbb{N}, C>0$ such that for $|\lambda-\delta|>1$ \[||\partial^j_\lambda\chi_1R(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq C(|\lambda|+1)^{L} \textrm{ in }\{\Re(\lambda)\geq \delta\}.\] \end{cor} \textsl{Proof}: This is a consequence of Proposition \ref{borneres}, Proposition \ref{pa}, Theorem \ref{naudstoy} and the maximum principle as in \cite[Prop. 1]{BP}. First we remark from Proposition \ref{borneres} and Proposition \ref{pa} that $P(\lambda)$ has a first-order pole with rank-one residue at $\lambda=\delta$ and, since $|P_\lambda(m,m')|\leq |P_{\Re(\lambda)}(m,m')|$, we have the estimate \[||\chi_1R(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq |\Re(\lambda)-\delta|^{-1}C(|\lambda|+1)^{M}\] for $\Re(\lambda)\in (\delta,n/2)$. This implies by the Cauchy formula that \[||\partial_\lambda^j\chi_1R(\lambda)\chi_2||_{\mathcal{L}(L^2(X))}\leq |\Re(\lambda)-\delta|^{-1-j}C(|\lambda|+1)^{M}.\] Let $A>0$ and $\varphi,\psi\in L^2(X)$; we can apply the maximum principle to the function \[f(\lambda)=e^{iA(-i(\lambda-\delta))^{n+4}} \langle \partial^j_\lambda\chi_1R(\lambda)\chi_2\varphi,\psi\rangle\] which is holomorphic in the domain $\Lambda$ bounded by the curves \[\Lambda_+:=\{\delta+u^{-n-3}+iu; u>1\}, \quad \Lambda_-:=\{\delta-\epsilon+iu;u>1\}, \quad \Lambda_0:=\{i+u; \delta-\epsilon<u<\delta+1\}.\] Then it is easy to check as in \cite[Prop. 1]{BP} that, by choosing $A>0$ large enough, \[ |f(\lambda)|<C(|\lambda|+1)^L||\varphi||_{L^2}||\psi||_{L^2}\] in $\Lambda$ for some $L$ depending only on $M$. In particular, applying the same method in the symmetric domain $\bar{\Lambda}:=\{\bar{\lambda};\lambda\in\Lambda\}$, we obtain the polynomial bound $||\partial_\lambda^j\chi_1R(\lambda)\chi_2||\leq C(|\lambda|+1)^L$ on $\{\Re(\lambda)=\delta,|\Im(\lambda)|>1\}$.
\hfill$\square$ \section{Width of the strip with finitely many resonances} As stated in Theorem \ref{naudstoy}, we know that there exists a strip $\{\delta-\epsilon<\Re(\lambda)<\delta\}$ with no resonance of $\Delta_X$, or equivalently no zero of the Selberg zeta function. However the proof of this result does not provide any effective estimate on the width of this strip (i.e. on $\epsilon$ above). More generally it is of interest to know the quantity \[\rho_\Gamma:=\inf \Big\{ s\in\mathbb{R}; Z(\lambda) \textrm{ has at most finitely many zeros in }\{\Re(\lambda)>s\}\Big\}\] or equivalently \[\rho_\Gamma = \inf \Big\{ s\in\mathbb{R}; R(\lambda) \textrm{ has at most finitely many poles in }\{\Re(\lambda)>s\}\Big\}.\] In this work, we give a lower bound for $\rho_\Gamma$: \begin{theo}\label{naudstriptease} Let $X=\Gamma\backslash \mathbb{H}^{n+1}$ be a convex co-compact hyperbolic manifold and let $\delta\in(0,n)$ be the Hausdorff dimension of its limit set. Then for all $\varepsilon>0$, there exist infinitely many resonances in the strip $\{ -n\delta-\varepsilon< \Re(s)< \delta \}$. If moreover $\Gamma$ is a Schottky group, then there exist infinitely many resonances in the strip $\{ -\delta^2-\varepsilon< \Re(s)< \delta \}$. \end{theo} \noindent\textsl{Remark}: In particular, we have $\rho_\Gamma\geq-\delta n$ in general and $\rho_{\Gamma}\geq -\delta^2$ for Schottky manifolds. The limit case $\delta\rightarrow 0$ may be viewed as that of a cyclic elementary group $\Gamma_0$; the resonances of the Laplace operator on $\Gamma_0\backslash \H$ are given explicitly in \cite[Appendix]{GZ1}, and they form a lattice $\{-k+i\alpha \ell; k\in\mathbb{N}_0,\ell\in\mathbb{Z}\}$ for some $\alpha\in\mathbb{R}$; in particular there are infinitely many resonances on the vertical line $\{ \Re(s)=0 \}$.
This heuristic consideration suggests that for small values of $\delta$, our result is rather sharp.\\ \textsl{Proof}: The proof is based on the trace formula of \cite{GN} and on estimates on the distribution of resonances due to Patterson-Perry \cite{PP} and Guillop\'e-Lin-Zworski \cite{GLZ} (see also Zworski \cite{Z} for dimension $2$). To make some computations (Fourier transforms) clearer, we will use the spectral parameter $z$ with $\lambda=\frac{n}{2}+iz$, so that $\Im z >0$ corresponds to the non-physical half-plane. We set $\beta:=\delta$ if $X$ is Schottky or $n+1=2$, while $\beta:=n$ if $n+1>2$ and $X$ is not Schottky. We proceed by contradiction and assume that, with $\rho=n/2+\beta\delta+\varepsilon$ for some $\varepsilon>0$, there are at most finitely many resonances in $\{\Im(z)<\rho\}$. Let us first recall the trace formula of \cite{GN}: as distributions of $t\in\mathbb{R}\setminus\{0\}$, we have the identity \begin{equation}\label{trace} \begin{gathered} \frac{1}{2}\Big(\sum_{\frac{n}{2}+iz\in\mathcal{R}}e^{iz|t|}+\sum_{k\in\mathbb{N}}d_ke^{-k|t|}\Big) =\sum_{\gamma\in\mathcal{P}}\sum_{m=1}^{\infty}\frac{\ell(\gamma)e^{-\frac{n}{2} m\ell(\gamma)} }{2G_\gamma(m)}\delta(|t|-m\ell(\gamma))+\frac{\chi(\bar{X})\cosh\frac{t}{2}} {(2\sinh\frac{|t|}{2})^{n+1}}, \end{gathered} \end{equation} where ${\mathcal P}$ denotes the set of primitive closed geodesics on $X=\Gamma\backslash \mathbb{H}^{n+1}$, $\ell(\gamma)$ stands for the length of $\gamma\in\mathcal{P}$, $G_\gamma(m)$ is defined in \eqref{Ggamma}, $d_k:=\dim\ker P_k$ where $P_k$ is the $k$-th GJMS conformal Laplacian on the conformal boundary $\partial\bar{X}$, $\mathcal{R}$ is the set of resonances of $\Delta_X$ counted with multiplicity and $\chi(\bar{X})$ denotes the Euler characteristic of $\bar{X}$. \noindent Next we choose a positive weight $\varphi_0\in C_0^\infty(\mathbb{R})$ supported in $[-1,1]$ with $\varphi_0(0)=1$ and $0\leq \varphi_0 \leq 1$.
We set $$\varphi_{\alpha,d}(t)=\varphi_0\left(\frac{t-d}{\alpha}\right),$$ where $d$ will be a large positive number and $\alpha>0$ will be small when compared to $d$ (typically $\alpha=e^{-\mu d}$). Plugging it into the trace formula \eqref{trace} and assuming that $d$ coincides with a (large) length of a closed geodesic, we get that for $d$ large enough, $$\sum_{\gamma,m}\frac{\ell(\gamma)e^{-\frac{n}{2} m \ell(\gamma)}}{2G_{\gamma}(m)}\varphi_{\alpha,d}(m\ell(\gamma)) \geq Ce^{-\frac{n}{2} d},$$ with a constant $C>0$, whereas the other term can be estimated by \[\alpha\chi(\bar{X})\int_{-1}^1\varphi_0(t)\frac{\cosh((d+t\alpha)/2)} {(2\sinh(|d+t\alpha|/2))^{n+1}}dt=\mathcal{O}(\alpha)e^{-\frac{n}{2} d}.\] The key part of the proof is to estimate carefully the spectral side of the formula, i.e. we must examine $$\sum_{\frac{n}{2}+iz \in {\mathcal R} } \widehat{\varphi}_{\alpha,d}(-z)+\sum_{\substack{z=ik\\ k\in\mathbb{N}}} d_k\widehat{\varphi}_{\alpha,d}(-z),$$ where $\widehat{\varphi}$ is the usual Fourier transform. Standard estimates for the Fourier transform on the Schwartz space show that for every integer $M>0$, there exists a constant $C_M>0$ such that \begin{equation} \label{est1} \left|\widehat{\varphi}_{\alpha,d}(-z) \right| \leq \alpha C_M \frac{e^{-d\Im(z)+\alpha|\Im(z)|}}{(1+\alpha|z|)^M}. \end{equation} To simplify, we denote by $\widetilde{{\mathcal R}}$ the set $\{z \in \mathbb{C};\frac{n}{2}+iz \in {\mathcal R}\cup (\frac{n}{2}-\mathbb{N})\}$ where each element $z$ is repeated with the multiplicity \[\left\{\begin{array}{l} m_{n/2+iz} \textrm{ if }z\notin i\mathbb{N}\\ m_{n/2-k}+d_k \textrm{ if }z=ik \textrm{ with }k\in\mathbb{N} \end{array}\right..\] Our assumption now is that $$\{ 0\leq \Im(z) \leq \rho\}\cap \widetilde{{\mathcal R}}$$ is finite for $\rho=\frac{n}{2}+\beta\delta+\varepsilon$. We fix some $\overline{\rho}>\rho\geq 0$.
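Let us note that the estimate (\ref{est1}) follows from a simple scaling together with the classical Paley-Wiener bound for smooth compactly supported functions. With the convention $\widehat{\varphi}(\xi)=\int_{\mathbb{R}}\varphi(t)e^{-it\xi}dt$, the substitution $t=d+\alpha s$ gives
\[\widehat{\varphi}_{\alpha,d}(-z)=\int_{\mathbb{R}}\varphi_0\Big(\frac{t-d}{\alpha}\Big)e^{izt}dt=\alpha e^{izd}\,\widehat{\varphi_0}(-\alpha z),\]
and since $\varphi_0\in C_0^\infty([-1,1])$, for every integer $M>0$ there is $C_M>0$ such that $|\widehat{\varphi_0}(\zeta)|\leq C_M e^{|\Im(\zeta)|}(1+|\zeta|)^{-M}$; combining the two identities with $|e^{izd}|=e^{-d\Im(z)}$ yields (\ref{est1}).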
The idea is to split the sum over resonances as $$\sum_{z \in \widetilde{{\mathcal R}}} \widehat{\varphi}_{\alpha,d}(-z) =\sum_{\frac{n}{2}-\delta\leq \Im(z) \leq \rho} \widehat{\varphi}_{\alpha,d}(-z) +\sum_{\rho\leq \Im(z) \leq \overline{\rho}} \widehat{\varphi}_{\alpha,d}(-z) +\sum_{\overline{\rho}\leq \Im(z)} \widehat{\varphi}_{\alpha,d}(-z),$$ and to estimate their contributions using dimensional and fractal upper bounds. Using (\ref{est1}) we can bound the last term (for $d$ large) by $$\left|\sum_{\overline{\rho}\leq \Im(z)} \widehat{\varphi}_{\alpha,d}(-z) \right| \leq C_M \alpha e^{-\overline{\rho}(d-\alpha)} \int_{\overline{\rho}}^{+\infty} \frac{d{\mathcal N}(r)}{(1+\alpha r)^M},$$ where ${\mathcal N}(r)=\#\{z \in \widetilde{{\mathcal R}}; |z|\leq r\}$. By \cite[Th. 1.10]{PP} (see also \cite[Lemma 2.3]{GN} for a discussion of the $d_k$ terms), we know that ${\mathcal N}(r)=\mathcal{O}(r^{n+1})$, thus we can choose $M=n+2$ and obtain, after a Stieltjes integration by parts, the upper bound $$\left|\sum_{\overline{\rho}\leq \Im(z)} \widehat{\varphi}_{\alpha,d}(-z) \right|=\mathcal{O}(\alpha^{-n}e^{-\overline{\rho}d} ).$$ Similarly, we have the estimate (for $d$ large and $\alpha$ small) $$\left|\sum_{\rho\leq \Im(z) \leq \overline{\rho}} \widehat{\varphi}_{\alpha,d}(-z) \right|\leq C_M \alpha e^{-\rho(d-\alpha)} \int_{\rho}^{+\infty} \frac{d\widetilde{\mathcal N}(r)}{(1+\alpha r)^M},$$ where $\widetilde{\mathcal N}(r)= \#\{z \in \widetilde{{\mathcal R}} \ :\ \rho\leq \Im(z) \leq \overline{\rho},\ |z|\leq r\}$. This counting function is known to enjoy the ``fractal'' upper bound $\widetilde{\mathcal N}(r)=\mathcal{O}(r^{1+\delta})$ when $X$ is Schottky \cite{GLZ} (see also \cite{Z} when $n=1$), thus we can write $\widetilde{\mathcal N}(r)=\mathcal{O}(r^{1+\beta})$ where $\beta$ is defined above.
In other words, one obtains by choosing $M=n+2$, $$\left|\sum_{\rho\leq \Im(z) \leq \overline{\rho}} \widehat{\varphi}_{\alpha,d}(-z) \right|=\mathcal{O}(\alpha^{-\beta}e^{-\rho d} ).$$ Since we have assumed that $\{ 0\leq \Im(z) \leq \rho\}\cap \widetilde{{\mathcal R}}$ is finite, and using the fact that resonances (in the $z$ plane) all have imaginary part greater than $\frac{n}{2}-\delta$, we also get $$\left|\sum_{\frac{n}{2}-\delta\leq \Im(z) \leq \rho} \widehat{\varphi}_{\alpha,d}(-z)\right| =\mathcal{O}(\alpha e^{(\delta-\frac{n}{2})d} ).$$ Gathering all estimates, we have obtained as $d \rightarrow +\infty$, $$e^{-\frac{n}{2} d}(C+\mathcal{O}(\alpha))=\mathcal{O}(\alpha e^{(\delta-\frac{n}{2})d} )+\mathcal{O}(\alpha^{-\beta}e^{-\rho d} ) +\mathcal{O}(\alpha^{-n}e^{-\overline{\rho}d} ),$$ where all the implied constants do not depend on $d$ and $\alpha$. If we now set $\alpha=e^{-\mu d}$, we get a {\it contradiction} as $d\rightarrow +\infty$, provided that $$\left \{ \begin{array}{ccc} n\mu-\overline{\rho}&<&-\frac{n}{2}\\ \delta&<& \mu\\ \rho-\beta \mu&>& \frac{n}{2}. \end{array} \right.$$ Setting $\mu:=\delta+\varepsilon$ and $\rho=\beta \mu+\frac{n}{2} +\varepsilon=\beta\delta+\frac{n}{2} +\varepsilon(1+\beta)$, we can then choose $\overline{\rho}:=n\mu+\frac{n}{2}+2\varepsilon$, which is larger than $\rho$ since $\beta\leq n$; indeed, $n\mu-\overline{\rho}=-\frac{n}{2}-2\varepsilon<-\frac{n}{2}$ and $\rho-\beta\mu=\frac{n}{2}+\varepsilon>\frac{n}{2}$, so the three conditions above hold and we have our contradiction for all $\varepsilon>0$.\hfill$\square$\\ The proof reveals that any precise knowledge of the asymptotic distribution of resonances in strips has a direct impact on resonances with small imaginary part. \section{Wave asymptotics} \subsection{The leading term} Let $f,\chi\in C_0^\infty(X)$; it is sufficient to describe the large time asymptotic of the function \[u(t):=\chi \frac{\sin(t\sqrt{\Delta_X-\frac{n^2}{4}})}{\sqrt{\Delta_X-\frac{n^2}{4}}}f\] and $\partial_tu(t)$. We proceed using the same ideas as in \cite{CZ}.
We first recall that by Stone's formula the spectral measure is \[d\Pi(v^2)=\frac{i}{2\pi}\Big(R(\frac{n}{2}+iv)-R(\frac{n}{2}-iv)\Big)dv\] in the sense that for $h\in C^\infty([0,\infty))$ we have \[h\Big(\Delta_X-\frac{n^2}{4}\Big)=\int_{0}^\infty h(v^2)\, 2v\, d\Pi(v^2).\] Since $\sin$ is odd, it is clear that $u(t)$ can be expressed by the integral \begin{equation}\label{ut} u(t)=\frac{1}{2\pi}\int_{-\infty}^\infty e^{itv}\Big(\chi R(\frac{n}{2}+iv)f-\chi R(\frac{n}{2}-iv)f\Big)dv \end{equation} which is actually convergent since $f\in C_0^\infty$ (this is shown below). We want to move the contour of integration into the non-physical sheet $\{\Im(v)>0\}$ (which corresponds, with $\lambda=n/2+iv$, to $\{\Re(\lambda)<n/2\}$) for the part with $e^{itv}$ and into the physical sheet $\{\Im(v)<0\}$ for the part with $e^{-itv}$. After setting \[L(v):=\Big(\chi R(\frac{n}{2}+iv)f-\chi R(\frac{n}{2}-iv)f\Big)\] and $\eta>0$ small, we study the following integrals for $\beta:=n/2-\delta$ \[I_1(R,\eta,t):=\int_{\substack{\Im(v)=\beta\\ \eta<|\Re(v)|<R}}e^{itv}L(v)dv, \quad I_2(R,t):=\int_{\substack{|\Re(v)|=R\\ 0<\Im(v)<\beta}}e^{itv}L(v)dv. \] In particular, let us first show the \begin{lem}\label{i2} If $|L(v)|\leq C(|v|+1)^{M}$ in $\{|\Im (v)|\leq\beta\}$ for some $C,M>0$, then \[\lim_{R\to\infty}I_2(R,t)=\lim_{R\to\infty} \partial_tI_2(R,t)=0.\] \end{lem} \textsl{Proof}: it suffices to prove inverse polynomial bounds for $L(v)$ as $|\Re(v)|\to \infty$.
Actually we can rewrite $L(v)$ using Green's formula \cite{G1,P} \begin{equation}\label{ltv} L(v;m)=-2iv\int_X\int_{\partial\bar{X}}\chi(m)E\Big(\frac{n}{2}+iv;m,y\Big)E\Big(\frac{n}{2}-iv;m',y\Big)f(m')dy_{\partial\bar{X}}dm'_X \end{equation} where $E(\lambda;m,y)$ denotes the Eisenstein function, or equivalently the Schwartz kernel of the Poisson operator (see \cite{JSB}), which satisfies for all $y\in\partial\bar{X}$ \[\Big(\Delta_X-\frac{n^2}{4}-v^2\Big)E\Big(\frac{n}{2}+iv;\bullet,y\Big)=0.\] Using this equation, integrating by parts $N$ times in $m'$ in (\ref{ltv}) and using the assumed polynomial bound on $|L(v)|$ in $|\Im(v)|\leq \beta$, we get for all $N>0$ (recall $f\in C_0^\infty(X)$) \begin{equation}\label{borneltv} |L(v)|\leq C_N(|\Re(v)|+1)^{M-N}\end{equation} for some constant $C_N$. Then it suffices to take $N$ large enough compared to $M$ and the Lemma is proved. \hfill$\square$\\ Now we get estimates in $t$ for $I_1(R,\eta,t)$. \begin{lem}\label{i1} If $|L(v)|\leq C(|v|+1)^{M}$ in $|\Im v|\leq\beta$ for some $C,M>0$, then $I_1(R,\eta,t)$ and $\partial_tI_1(R,\eta,t)$ have a limit as $R\to\infty,\eta\to 0$ and \[\lim_{\eta\to 0}\lim_{R\to \infty}I_1(R,\eta,t)=\pi ie^{-\beta t}{\rm Res }_{v=i\beta}(L(v))+\mathcal{O}(e^{-\beta t}t^{-\infty}), \quad t\to \infty,\] \[\lim_{\eta\to 0}\lim_{R\to \infty}\partial_tI_1(R,\eta,t)=-\pi\beta ie^{-\beta t}{\rm Res }_{v=i\beta}(L(v))+\mathcal{O}(e^{-\beta t}t^{-\infty}), \quad t\to \infty.\] \end{lem} \textsl{Proof}: Let us first consider $I_1(R,\eta,t)$; it can clearly be written as \[e^{-t\beta}\int_{\eta<|u|<R}e^{itu}L(u+i\beta)du.\] Since $L(u+i\beta)$ has a pole at $u=0$, we can write \[L(u+i\beta)=\frac{a}{u}+h(u)\] for some residue $a\in\mathbb{C}$ and $h(u)$ analytic on $\mathbb{R}$.
Set $\psi \in C_0^\infty((-1,1))$ even and equal to $1$ near $0$, then by (\ref{borneltv}) and properties of the Fourier transform the integral \[\int_{\eta<|u|<R}e^{itu}\Big((1-\psi(u))L(u+i\beta)+ \psi(u)h(u)\Big)du \] converges as $R\to \infty,\eta\to 0$ to a function which is $\mathcal{O}(t^{-\infty})$ as $t\to \infty$. Now it remains to consider \[a\int_{\eta<|u|<R}e^{itu}\psi(u)u^{-1}du=2ia\int_{\eta}^R\frac{\sin(ut)}{u}\psi(u)du\] which clearly has a limit as $R\to\infty,\eta\to 0$; we denote this limit by $s(t)$. Then since $s(0)=0$ and $\psi(-u)=\psi(u)$, we have \[\partial_t s(t)=2ia\int_{0}^\infty \psi(u)\cos(tu)du=ia\hat{\psi}(t), \quad s(t)=ia\int_0^t \hat{\psi}(\xi)d\xi=\frac{1}{2} ia \int_{-t}^t\hat{\psi}(\xi)d\xi\] and it is clear that \[s(t)=\lim_{t\to \infty}s(t)+\mathcal{O}(t^{-\infty})=\pi ia+\mathcal{O}(t^{-\infty}).\] The same arguments show that \[\partial_ts(t)=\mathcal{O}(t^{-\infty})\] and this proves the result. \hfill$\square$\\ Now we can conclude. \begin{theo}\label{mainth} Let $\chi\in C_0^\infty(X)$, then the solution $u(t)$ of the wave equation \eqref{waveeq} satisfies the asymptotic \[\chi u(t)=\frac{A_X}{\Gamma(\frac{n}{2}-\delta+1)}e^{-t(\frac{n}{2}-\delta)}\langle u_\delta,(\delta-n/2)f_0+f_1\rangle \chi u_\delta + \mathcal{O}_{L^2}(e^{-t(\frac{n}{2}-\delta)}t^{-\infty})\] as $t\to +\infty$, where $u_\delta$ is the Patterson generalized eigenfunction. \end{theo} \textsl{Proof}: we apply the residue theorem after changing the contour in (\ref{ut}) as explained above. This gives, for instance for $f=(0,f_1)$, \[\int_{-R}^Re^{itv}L(v)dv=I_1(R,\eta,t)+I_2(R,t)+\int_{\substack{v=i\beta+\eta \exp(i\theta)\\ -\pi<\theta<0}}e^{itv}L(v)dv.\] The limit of the last integral as $\eta\to 0$ is given by $\pi ie^{-\beta t}\textrm{Res}_{v=i\beta} L(v)$. It suffices to conclude by taking the limits $R\to\infty,\eta\to 0$ and using Lemmas \ref{i2} and \ref{i1}.
Then the case $f=(f_0,0)$ is dealt with similarly by differentiating in $t$ the equation above and using Lemmas \ref{i1}, \ref{i2}. \hfill$\square$\\ We now show a lower bound in $t$ for the remainder in $u(t)$, using Theorem \ref{naudstriptease}. \begin{prop}\label{rem} Let $K\subset X$ be a compact set, then there exists a generic set $\Omega\subset C^\infty(K)$ (i.e. a countable intersection of open dense sets) such that for all $f_1\in\Omega$ and all $\varepsilon>0$, we have $r(t)\not=\mathcal{O}_{L^2}(e^{-(\frac{n}{2}+n\delta+\varepsilon)t})$ where \[r(t):=\chi u(t)-\frac{A_X}{\Gamma(\frac{n}{2}-\delta+1)}e^{-t(\frac{n}{2}-\delta)}\langle u_\delta,f\rangle \chi u_\delta\] is the remainder in the expansion of the solution $u(t)$ of the wave equation \eqref{waveeq} with initial data $(0,f_1)$. The lower bound can be improved to $r(t)\not=\mathcal{O}_{L^2}(e^{-(\frac{n}{2}+\delta^2+\varepsilon)t})$ if $X$ is Schottky. \end{prop} \textsl{Proof}: Let us define $\Omega$. If $\lambda_0$ is a resonance, we denote by $\Pi_{\lambda_0}$ the polar part in the Laurent expansion of $R(\lambda)$ at $\lambda_0$. It is a finite rank operator of the form \[\Pi_{\lambda_0}= \sum_{j=1}^k(\lambda-\lambda_0)^{-j}\sum_{m=1}^{m_j(\lambda_0)}\varphi_{jm}\otimes \psi_{jm}\] where $m_j(\lambda_0),k\in\mathbb{N}$ and $\psi_{jm},\varphi_{jm}\in C^\infty(X)$, see for instance Lemma 3.1 of \cite{G}. Therefore it is a continuous operator from $C^\infty(K)$ to $C^{\infty}(X)$, and the kernel of $\chi \Pi_{\lambda_0}|_{C^\infty(K)}$ is a closed nowhere dense subset of $C^\infty(K)$; we thus define $\Omega=\cap_{s\in\mathcal{R}}(C^\infty(K)\setminus\ker \chi\Pi_{s}|_{C^\infty(K)})$, which is a generic set of $C^\infty(K)$ (recall that $C^\infty(K)$ is a Fr\'echet space by compactness of $K$).
The idea now is to use the existence of a resonance, say $\lambda_0$, in the strip $\{\delta>\Re(\lambda)>-n\delta-\varepsilon\}$ proved in Theorem \ref{naudstriptease} and the formula (for $\Re(\lambda)>\delta$) \[\chi R(\lambda)f=\int_{0}^\infty e^{t(\frac{n}{2}-\lambda)}\chi u(t)dt.\] Indeed, if $r(t)=\mathcal{O}(e^{-t(\frac{n}{2}+n\delta+\varepsilon)})$, the integral $\int_{0}^\infty e^{t(\frac{n}{2}-\lambda)}r(t)dt$ converges for $\Re(\lambda)>-n\delta-\varepsilon$, and so it provides a holomorphic continuation of $\chi R(\lambda)f$ in $\lambda$ there. Now a straightforward computation combined with Corollary \ref{extres} shows that for $\Re(\lambda)>\delta$ \[\int_{0}^\infty e^{(\frac{n}{2}-\lambda)t}r(t)dt=\chi R(\lambda)f-(\lambda-\delta)^{-1}\chi\textrm{Res}_{\lambda=\delta}R(\lambda)f.\] This leads to a contradiction when $f_1\in \Omega$, since $\ker \chi\Pi_{\lambda_0}|_{C^\infty(K)}\cap \Omega=\emptyset$ and so $\chi R(\lambda)f$ has a singularity at $\lambda=\lambda_0$. We thus obtain our conclusion. The same method applies when $X$ is Schottky and the finer estimates are valid. \hfill$\square$ \section{Conformal resonances} In this section, we explain the special cases $\delta\in n/2-\mathbb{N}$ in terms of the conformal theory of the conformal infinity. As emphasized before, a convex co-compact hyperbolic manifold $(X,g)$ compactifies into a smooth compact manifold with boundary $\bar{X}=X\cup \partial\bar{X}$, where $\partial\bar{X}=\Gamma\backslash\Omega$ if $\Omega$ is the domain of discontinuity of the group $\Gamma$ defined in the introduction. If $x$ is a smooth boundary defining function of $\partial\bar{X}$, then $x^2g$ extends smoothly to $\bar{X}$ as a metric, and the restriction \[h_0=x^2g|_{T\partial\bar{X}}\] is a metric on $\partial\bar{X}$ inherited from $g$ but depending on the choice of $x$; however, its conformal class $[h_0]$ is clearly independent of $x$, and it is called the \emph{conformal infinity} of $X$.
By Graham-Lee \cite{GRL,GR}, there is an identification between a particular class of boundary defining functions and elements of the class $[h_0]$: indeed, for any $h_0\in[h_0]$, there exists near $\partial\bar{X}$ a unique boundary defining function $x$ such that $|dx|_{x^2g}=1$ and $x^2g|_{T\partial\bar{X}}=h_0$; this function will be called a \emph{geodesic boundary defining function}.\\ We now recall the definition of the scattering operator $S(\lambda)$ as in \cite{GRZ,JSB}. Let $\lambda\in\mathbb{C}$ with $\Re(\lambda)\notin n/2+\mathbb{Z}$ and let $x$ be a geodesic boundary defining function, then for all $f\in C^{\infty}(\partial\bar{X})$ there exists a unique function $F(\lambda,f)\in C^\infty(X)$ which satisfies the boundary value problem \[ \left\{\begin{array}{l} (\Delta_X-\lambda(n-\lambda))F(\lambda,f)=0,\\ \exists F_1(\lambda,f),F_2(\lambda,f)\in C^\infty(\bar{X}) \textrm{ such that }\\ F(\lambda,f)=x^{n-\lambda}F_1(\lambda,f)+x^\lambda F_2(\lambda,f) \textrm{ and } F_1(\lambda,f)|_{\partial\bar{X}}=f. \end{array}\right.\] Then the operator $S(\lambda):C^\infty(\partial\bar{X})\to C^{\infty}(\partial\bar{X})$ is defined by \[S(\lambda)f =F_2(\lambda,f)|_{\partial\bar{X}}.\] It is clear that $S(\lambda)$ depends on the choice of $x$, but it is conformally covariant under change of boundary defining function: if $\hat{x}:=xe^{\omega}$ is another such function, then the related scattering operator is \[\hat{S}(\lambda)=e^{-\lambda\omega_0}S(\lambda)e^{(n-\lambda)\omega_0}, \quad \omega_0:=\omega|_{\partial\bar{X}}.\] It is proved in \cite{GRZ} that $S(\lambda)$ has simple poles at $\lambda=n/2+k$ for all $k\in\mathbb{N}$, and after renormalizing $S(\lambda)$ into \[\mathcal{S}(\lambda):=2^{2\lambda-n}\frac{\Gamma(\lambda-\frac{n}{2})}{\Gamma(\frac{n}{2}-\lambda)}S(\lambda)\] we obtain by the main result of \cite{GRZ} that $\mathcal{S}(n/2+k)=P_k$ is the $k$-th GJMS conformal Laplacian on $(\partial\bar{X},h_0)$ defined previously in \cite{GJMS}.
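The conformal covariance law for $S(\lambda)$ stated above can be verified directly from the defining boundary value problem; a minimal check:

```latex
% With \hat{x} = x e^{\omega}, i.e. x = \hat{x} e^{-\omega}, substitute into
% F(\lambda,f) = x^{n-\lambda}F_1(\lambda,f) + x^{\lambda}F_2(\lambda,f):
F(\lambda,f) = \hat{x}^{\,n-\lambda}\big(e^{-(n-\lambda)\omega}F_1\big)
             + \hat{x}^{\,\lambda}\big(e^{-\lambda\omega}F_2\big).
% Reading off the boundary data in the hatted expansion gives
\hat{f} = e^{-(n-\lambda)\omega_0}f,
\qquad
\hat{S}(\lambda)\hat{f} = e^{-\lambda\omega_0}S(\lambda)f
 = e^{-\lambda\omega_0}S(\lambda)e^{(n-\lambda)\omega_0}\hat{f},
% which is exactly the covariance law stated above.
```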
In general $\mathcal{S}(\lambda)$ is a pseudodifferential operator of order $2\lambda-n$ with principal symbol $|\xi|_{h_0}^{2\lambda-n}$, but for $\lambda=n/2+k$ it becomes differential. \begin{prop}\label{confresonance} If $\delta=n/2-k$ with $k\in\mathbb{N}$, then the $j$-th GJMS conformal Laplacian $P_j>0$ for $j<k$, while $P_k$ has a kernel of dimension $1$ with eigenvector given by $f_{n/2-k}$, defined below in \eqref{fndemik} in terms of the Patterson-Sullivan measure. \end{prop} \textsl{Proof}: Let us fix $\delta\in(0,n/2)$, not necessarily in $n/2-\mathbb{N}$ for the moment. In \cite{G}, the first author studied the relation between poles of the resolvent and poles of the scattering operator. If $\lambda\in\mathbb{C}$, we define its resonance multiplicity by \[m(\lambda):=\textrm{rank} \Big(\textrm{Res}_{s=\lambda}((2s-n)R(s))\Big)\] while its scattering pole multiplicity is defined by \[\nu(\lambda):=-\textrm{Tr} \Big(\textrm{Res}_{s=\lambda}(\partial_s\mathcal{S}(s)\mathcal{S}^{-1}(s))\Big).\] We proved in \cite{G} (see also \cite{GN} for points in the pure point spectrum) that for $\Re(\lambda)<n/2$ \[\nu(\lambda)=m(\lambda)-m(n-\lambda)+\operatorname{1\negthinspace l}_{\frac{n}{2}-\mathbb{N}}(\lambda)\dim\ker \mathcal{S}(n-\lambda),\] which in our case reduces to \begin{equation}\label{reduce} \nu(\lambda)=m(\lambda)+\operatorname{1\negthinspace l}_{\frac{n}{2}-\mathbb{N}}(\lambda)\dim\ker \mathcal{S}(n-\lambda) \end{equation} by the holomorphy of $R(\lambda)$ in $\{\Re(\lambda)\geq n/2\}$, stated in Proposition \ref{pa}.
We know from \cite{JSB,GRZ} that the Schwartz kernel of $\mathcal{S}(\lambda)$ is related to that of $R(\lambda)$ by \begin{equation}\label{noyaus} \mathcal{S}(\lambda;y,y')=2^{2\lambda-n+1}\frac{\Gamma(\lambda-\frac{n}{2}+1)}{\Gamma(\frac{n}{2}-\lambda)}[x^{-\lambda}{x'}^{-\lambda}R(\lambda;x,y,x',y')]|_{x=x'=0} \end{equation} where $(x,y)\in [0,\epsilon)\times\partial\bar{X}$ are coordinates in a collar neighbourhood of $\partial\bar{X}$, $x$ being the geodesic boundary defining function used to define $\mathcal{S}(\lambda)$. Together with Proposition \ref{pa}, this implies that $\mathcal{S}(\lambda)$ is analytic in $\{\Re(\lambda)>\delta\}$ and has a simple pole at $\delta$ with residue \[\textrm{Res}_{\lambda=\delta}\mathcal{S}(\lambda)=A_X \frac{2^{-2k+1}}{(k-1)!}f_\delta\otimes f_\delta, \quad f_\delta:=(x^{-\delta}u_\delta)|_{x=0}.\] Note that Perry \cite{P2} proved that $f_\delta$ is well defined and in $C^\infty(\partial\bar{X})$. The functional equation $\mathcal{S}(\lambda)\mathcal{S}(n-\lambda)=\mathrm{Id}$ (see for instance Section 3 of \cite{GRZ}) and the fact that $\mathcal{S}(\lambda)$ is analytic in $\{\Re(\lambda)>\delta\}$ clearly imply that $\ker\mathcal{S}(\lambda)=0$ for $\Re(\lambda)\in(\delta,n-\delta)$, thus in particular $\ker P_j=0$ for any $j\in \mathbb{N}$ with $j<n/2-\delta$. Moreover, using \cite[Lemma 4.16]{PP} and the fact that $m(n/2)=0$ since $R(\lambda)$ is holomorphic in $\{\Re(\lambda)>\delta\}$, one obtains $S(n/2)=\textrm{Id}$, and thus $\mathcal{S}(\lambda)>0$ for all $\lambda\in (\delta,n-\delta)$ by continuity of $\mathcal{S}(\lambda)$ with respect to $\lambda$.
We also deduce from the functional equation and the holomorphy of $\mathcal{S}(s)$ at $n-\delta$ that \[\mathcal{S}(n-\delta)f_{\delta}=0.\] We thus see from this discussion and Proposition \ref{pa} that, in \eqref{reduce}, the relation $m(\delta)=\nu(\delta)=1$ holds when $\delta\notin n/2-\mathbb{N}$, while $\nu(\delta)=\dim\ker P_k$ when $\delta=n/2-k$ with $k\in\mathbb{N}$, since $m(\delta)=0$ in that case by holomorphy of $R(\lambda)$ at $\delta=n/2-k$. To compute $\dim\ker P_k$ when $\delta=n/2-k$, one can use for instance Selberg's zeta function. Indeed, by Proposition 2.1 of \cite{P2}, $Z(\lambda)$ has a simple zero at $\delta$, and it follows from Theorems 1.5-1.6 of Patterson-Perry \cite{PP} that $Z(\lambda)$ has a zero at $\lambda=n/2-k$ of order $\nu(n/2-k)$ if $k\in\mathbb{N}, k<n/2$; therefore $\nu(n/2-k)=1$ and thus \[\dim\ker P_k=1.\] We can now describe the function $f_\delta$ a bit more precisely. The Poisson kernel of Proposition \ref{pa} in the half-space model $\mathbb{R}_y^{n}\times\mathbb{R}_{y_{n+1}}^+$ of $\mathbb{H}^{n+1}$ is \[\mathcal{P}(\lambda; y,y_{n+1},y')=\Big(\frac{y_{n+1}}{y_{n+1}^2+|y-y'|^2}\Big)^{\lambda},\] thus if $x$ is the boundary defining function used to define $\mathcal{S}(\lambda)$ and if $(\pi_\Gamma^*x/y_{n+1}) |_{y_{n+1}=0}=k(y)$ for some $k(y)\in C^\infty(\mathbb{R}^n)$ (recall that $\pi_\Gamma,\bar{\pi}_\Gamma$ are the projections of (\ref{pigamma})), we can describe $f_\delta$ rather explicitly: we have \begin{equation}\label{fndemik} \bar{\pi}_\Gamma^*f_{\delta}(y)=k(y)^{-\delta}\int_{\mathbb{R}^n}|y-y'|^{-2\delta}d\mu_\Gamma(y'), \quad y\in \Omega.
\end{equation} \hfill$\square$\\ To summarize the discussion: if $\delta<n/2$, the Patterson function $u_\delta$ is an eigenfunction of $\Delta_X$ with eigenvalue $\delta(n-\delta)$; it is not an $L^2$ eigenfunction, though, and it has leading asymptotic behaviour $u_\delta\sim x^{\delta}f_\delta$ as $x\to 0$, where $f_\delta\in C^\infty(\partial\bar{X})$ is in the kernel of the boundary operator $\mathcal{S}(n-\delta)$. When $\delta\notin n/2-\mathbb{N}$, this is a resonant state for $\Delta_X$ with associated resonance $\delta$, while when $\delta\in n/2-\mathbb{N}$ it is still a generalized eigenfunction of $\Delta_X$ but no longer a resonant state, and $\delta$ is not a resonance in that case: the resonance disappears when $\delta$ reaches $n/2-k$, and instead the $k$-th GJMS operator at $\partial\bar{X}$ gains an element in its kernel, given by the leading coefficient of $u_{n/2-k}$ in its asymptotic expansion at the boundary.\\ \textsl{Remark}: Notice that the positivity of $P_j$ for $j<n/2-\delta$ was proved by Qing-Raske \cite{QR} under the assumption of positivity of the Yamabe invariant of the boundary. Our proof allows us to remove the assumption on the Yamabe invariant, which, as we showed, is automatically satisfied if $\delta<n/2$.
\section{Introduction} Mixed ruthenates with perovskite based crystal structures have been receiving considerable attention of late \cite{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,27a}, both because of their interesting magnetic properties and because of the recent discovery of superconductivity in the layered ruthenate, Sr$_2$RuO$_4$ \cite{16}. Despite the rarity of 4$d$ based magnetic materials, SrRuO$_3$ is a robustly ferromagnetic metal (Curie temperature, $T_C\approx 165$ K; magnetization, $m\approx 1.6$ $\mu _B$/Ru) occurring in a distorted cubic perovskite structure \cite{28,29,30,31,32}, and $T_C$ can be increased even further by doping with Pb \cite{13}. However, magnetism is easily suppressed by doping with Ca, although Ca/Sr states are far removed from the Fermi level and accordingly may not be expected to influence the electronic properties of SrRuO$_3$ very drastically. Furthermore, Sr$_2$YRuO$_6$, which has essentially the same crystal structure as SrRuO$_3$ but with every second Ru substituted by Y, is antiferromagnetic, with estimates of the saturation magnetization even higher than in the parent compound ($M\approx 3\mu _B$), although the critical temperature, $T_N$, is reduced to 26 K. The variety of magnetic and electronic properties observed in these superficially similar compounds already poses an interesting theoretical challenge (cf., for instance, non-superconducting cuprates, which despite their large variety always show strong antiferromagnetism in the Cu-O planes). Besides, there are a number of interesting observations that deserve attention. These include the fact that SrRuO$_3$ is the only known ferromagnetic metal among the 4$d$ oxides. As such, interesting differences from the much more abundant 3$d$ oxide magnets are expected. For example, much stronger spin-orbit effects compared to the 3$d$ systems may be anticipated, and these may manifest themselves in the magneto-crystalline and magneto-optical properties.
In fact, SrRuO$_3$ does show an abnormally high magneto-crystalline anisotropy for a pseudo-cubic material \cite{32}, to the extent that it is difficult to measure its saturation magnetization using standard measurements of hysteresis loops; this has resulted in some confusion in the older experimental literature. More recently, Klein and co-workers \cite{5} have measured strong magneto-optic properties in SrRuO$_3$ epitaxial films. 4$d$ ions generally have more extended $d$ orbitals than the corresponding 3$d$ ions, and as a result 4$d$ oxides tend to have greater overlap and hybridization between the transition metal and O 2$p$ orbitals. Besides a tendency towards greater itinerancy, this can lead to more interplay between structural degrees of freedom and the magnetic and electronic properties. As mentioned, additional interest in these ruthenates comes from their apparent proximity to superconductivity, and possible new insights into the problem of high-temperature superconductivity that may emerge from their study. Although the layered perovskite Sr$_2$RuO$_4$ has a modest $T_c$ of 1 K (there have been very recent, unconfirmed reports of signatures of superconductivity at up to 60 K in the double perovskite Sr$_2$YRuO$_6$ with Cu doping \cite{36}), it was suggested that this material may be an unconventional superconductor. This is based largely on several similarities with the cuprates: Sr$_2$RuO$_4$ is iso-structural with the first discovered high-$T_c$ superconductor, shows highly two-dimensional electronic properties, and of course is close to magnetic phases, particularly SrRuO$_3$ and Sr$_2$YRuO$_6$. However, the evidence for strong electron correlations in ruthenates is not nearly as compelling as the body of evidence that has been accumulated for the cuprates and many other 3$d$ oxides, and the question of whether these ruthenates can be treated within the framework of conventional band theory, or require a strong-correlation based theory, remains open.
Several photoelectron spectroscopy experiments have been reported for Sr$_2$RuO$_4$, which because of its layered crystal structure is more amenable to such studies than nearly cubic SrRuO$_3$. Yokoya et al. \cite{24} and Lu et al. \cite{25}, using angle resolved photoemission (ARPES), both report observation of Fermi surface sections and extended van Hove features somewhat like those in the local density electronic structure calculations, although the positioning of the van Hove singularity relative to the Fermi energy differs, and the dispersion is generally somewhat weaker than in the calculations, probably due to correlations, but possibly because of strong electron-phonon and -magnon interactions. Similarly, Schmidt et al. \cite{21} observed the valence bands of Sr$_2$RuO$_4$ using ARPES, and found uppermost occupied bands with a width reduced by a factor of 2 compared with band structure calculations. Unfortunately, ARPES is highly sensitive to the quality of samples and particularly sample surfaces. Interestingly, polycrystalline but otherwise apparently high quality Sr$_2$RuO$_4$ samples can be non-metallic \cite{38}. Angle integrated photoemission is a more robust technique; using it, Yokoya et al. \cite{23} found good agreement between experiment and density functional calculations, but observed a correlation satellite to the $d$-band (using resonant photoemission). Based on these measurements they estimated an effective Hubbard $U$ of 1.5 eV, which is at least three times smaller than similar estimates for the cuprate superconductors, casting some doubt on the suggestion that Sr$_2$RuO$_4$ and related ruthenates are very strongly correlated. One of the most decisive arguments in favor of the importance of strong correlations in high-$T_c$ cuprates is the failure of conventional local-density-approximation band structure calculations to describe even qualitatively the antiferromagnetism in the undoped parent compounds.
Similarly, the key question for these ruthenates may be which approximation, strongly correlated or band structure based, is best suited to explaining the variety of magnetic properties. One of the main purposes of this work is to determine whether a failure of the conventional, mean-field type band calculations, similar to that in the cuprates, is present in these ruthenates. Within a strong-correlation scenario, the ferromagnetism in metallic SrRuO$_3$ results from the double exchange mechanism, while the antiferromagnetism in insulating Sr$_2$YRuO$_6$ is due to superexchange via two oxygen ions (rather than one, as in the 3$d$ oxides and Cu perovskites). This is appealing because in the Mott-Hubbard picture the main factor controlling the magnetic properties is the carrier concentration, which is indeed different in these two materials: in (Sr,Ca)RuO$_3$ ruthenium is four-valent, that is, its $d$-band is populated by 4 electrons, while in Sr$_2$YRuO$_6$ the nominal valency of Ru is 5, and the number of $d$ electrons is 3. On the other hand, integer occupancy does not favor the double exchange scenario, and, besides, it is unclear how the Mott-Hubbard model provides a mechanism for suppressing magnetism in CaRuO$_3$. Finally, as we discuss in detail below, conventional band theory in all the cases we test does yield the correct magnetic ground state, in contrast to the cuprates and similar correlated 3$d$ oxides. Thus, contrary to some recently suggested superconductivity scenarios based on strong correlations \cite{26,37}, it seems likely that if strong correlations play some role, it is more of a quantitative than a qualitative nature. On the other hand, we note that even if Sr$_2$RuO$_4$ and other ruthenates are not strongly correlated, the superconductivity could still be unconventional, for instance arising from a magnetic mechanism.
In this regard, a number of measurements indicate anomalously large scattering of electrons by spin fluctuations \cite{15}, complicated by a strong magnetoelastic coupling \cite{2}. Cyclotron masses \cite{22}, the specific heat, and the paramagnetic susceptibility \cite{16} are all strongly renormalized. While there is always the possibility of ascribing this renormalization to strong correlations, the simplest explanation may be strong electron-phonon-magnon interactions. Further, an abnormally large transport coupling constant, $\lambda _{tr}$, is required to rationalize the temperature dependence of the resistivity with the calculated Drude plasma energies \cite{18}, although this value is consistent with the specific heat enhancement. Unusual temperature dependencies of the Hall effect \cite{20} were found in CaRuO$_3$ and in SrRuO$_3$. We shall return to the transport properties later in the paper; it is plausible that they can be reconciled with the conventional one-electron mechanism, despite the unusual $T$-dependences. The main purpose of the present paper is to study the magnetic phases and the relative importance of correlation and band structure effects for obtaining the magnetic properties. We focus on the double perovskite Sr$_2$YRuO$_6$ and the ferromagnetic-paramagnetic transition in (Sr,Ca)RuO$_3$ with increasing Ca content, and we shall show that conventional band theory is fully able to describe the variegated magnetic properties in this family of materials. \section{First Principles Calculations} \subsection{Structure, Magnetism and Ionic Considerations} As mentioned, SrRuO$_3$ occurs in an orthorhombic, Pbnm, GdFeO$_3$ structure, which has four formula units per cell. It is interesting to note that this is the same generic structure as LaMnO$_3$ and related manganites that have received considerable recent attention because of the discovery of colossal magnetoresistance effects in some of these.
Further, SrRuO$_3$ has the same nominal $d$ electron count as LaMnO$_3$, although unlike LaMnO$_3$ it is a ferromagnetic metal even without doping. In LaMnO$_3$ the distortion from the ideal cubic perovskite crystal structure consists of both rotations of the O octahedra and Jahn-Teller distortions of them, yielding Mn-O bond length variations of more than 10\%. This is understood in ionic terms as a result of the fact that the high spin Mn ion with this electron count has a half-full majority-spin $e_g$ orbital favoring a Jahn-Teller distortion. In contrast, SrRuO$_3$ occurs with a reduced magnetic moment, and its distortion consists of almost rigid rotations of the O octahedra with practically no accompanying variations in the Ru-O bond lengths. CaRuO$_3$ occurs in the same crystal structure and symmetry as SrRuO$_3$, also with no evident Jahn-Teller distortion of the O octahedra, but with approximately twice as large rotations. Such rotations are common in perovskite based materials and are usually understandable in terms of ionic size mismatches between the A and B site cations. Such an explanation is consistent with the trend observed in (Sr,Ca)RuO$_3$, since the Ca$^{2+}$ ionic radius is approximately 0.15 \AA\ smaller than that of Sr$^{2+}$. Although CaRuO$_3$ is paramagnetic, it is believed to be rather close to a magnetic instability. Sr$_2$YRuO$_6$ is an antiferromagnetic insulator that occurs in a distorted but well ordered double perovskite structure. This is derived from the perovskite SrRuO$_3$ by replacing every second Ru by Y, such that the remaining Ru ions form an fcc lattice. The structural units are thus Ru-O and Y-O octahedra, with the Sr ions in the A site positions providing charge balance. Each Ru-O octahedron shares a single O atom with each neighboring Y-O octahedron, and vice versa, but there are no common O ions shared between different Ru-O octahedra.
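The ionic-size argument can be made semi-quantitative with the Goldschmidt tolerance factor $t=(r_A+r_O)/(\sqrt{2}\,(r_B+r_O))$. A small illustrative sketch; the radii below are assumed representative values chosen to reproduce the $\sim$0.15 \AA\ Sr/Ca difference quoted above, not data from this work:

```python
import math

# Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O)).
# Ionic radii in Angstrom: assumed illustrative values; only the ~0.15 A
# Sr/Ca difference is taken from the text.
R_O = 1.40    # O^2-
R_RU4 = 0.62  # Ru^4+
R_SR = 1.44   # Sr^2+
R_CA = 1.29   # Ca^2+ (chosen 0.15 A smaller than Sr^2+)

def tolerance(r_a, r_b, r_o=R_O):
    return (r_a + r_o) / (math.sqrt(2.0) * (r_b + r_o))

t_sr = tolerance(R_SR, R_RU4)  # close to the ideal t = 1: small rotations
t_ca = tolerance(R_CA, R_RU4)  # farther from 1: larger rotations
```

Both factors fall below 1, with the Ca value farther from the ideal $t=1$, consistent with the roughly twice larger octahedral rotations in CaRuO$_3$.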
The primary distortions from the ideal perovskite derived structure consist of (1) a substantial breathing of the octahedra that increases the Y-O distance to 2.2 \AA\ at the expense of the Ru-O distances, which become 1.95 \AA, and (2) rotations of the octahedra that reduce the closest Sr-O distances, consistent with the ionic sizes. These distortions reduce the symmetry to monoclinic ($P2_1/n$). A related view of the crystal structure is based on the fact that Y, like Sr, is fully ionized in such oxides, and accordingly is a spectator ion providing space filling and charge to the active Ru-O system but playing no direct role in the electronic or magnetic properties. From this point of view, Sr$_2$YRuO$_6$ consists of independent rigid, but tilted, (RuO$_6$)$^{7-}$ octahedral clusters arranged on a slightly distorted fcc lattice. Hopping then proceeds between Ru ions in neighboring RuO$_6$ clusters via two intervening O ions. Since Y is tri-valent, the Ru is formally 5-valent ($4d^3$) in this compound instead of formally tetra-valent as in perovskite SrRuO$_3$. In the octahedral crystal field, the Ru $t_{2g}$ orbitals lie below the $e_g$ orbitals, so that in the high spin state the majority spin Ru $t_{2g}$ manifold would be fully occupied and all other Ru 4$d$ orbitals unoccupied. This Jahn-Teller stable configuration is consistent with the experimental observation that the bond angles and bond lengths within the Ru-O octahedra are almost perfectly equal, but the Ru moment of 1.85 $\mu _B$/Ru measured using neutron diffraction is considerably smaller than the 3 $\mu _B$/Ru that would be expected in the high-spin configuration. First principles studies of SrRuO$_3$ have shown that its electronic structure involves rather strong Ru-O covalency, and that O $p$-derived states participate substantially in the magnetism and the electronic structure near the Fermi energy, which is important for understanding the transport properties.
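The ionic electron counting used above (Ru$^{5+}$: $4d^3$, $t_{2g}^3$) can be spelled out with a toy filling routine. This is for illustration only; as the measured 1.85 $\mu_B$/Ru shows, covalency substantially reduces the ionic value:

```python
# Ionic spin-only moment for a d^n ion in an octahedral crystal field,
# with t2g (3 orbitals) below eg (2 orbitals). A toy electron-counting
# sketch; real moments (e.g. 1.85 mu_B/Ru in Sr2YRuO6) are reduced by
# Ru-O covalency, which this ionic picture ignores.
def spin_moment(n_d, high_spin=True):
    if high_spin:
        # fill all 5 majority-spin orbitals first, then minority spin
        up = min(n_d, 5)
        down = n_d - up
    else:
        # low spin: fill t2g (up then down) before touching eg
        up = min(n_d, 3)
        down = min(max(n_d - 3, 0), 3)
        rest = n_d - up - down          # overflow into eg
        up += min(rest, 2)
        down += max(rest - 2, 0)
    return up - down                     # spin-only moment in mu_B (g = 2)

print(spin_moment(3))                    # Ru^5+ in Sr2YRuO6: t2g^3 -> 3
print(spin_moment(4, high_spin=False))   # Ru^4+ in SrRuO3: t2g^4 -> 2
```

The same routine gives the ionic low-spin value of 2 $\mu_B$ for $t_{2g}^4$ Ru$^{4+}$, to be compared with the reduced calculated moment of about 1.6 $\mu_B$ per formula unit in SrRuO$_3$.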
As will be discussed below, a similar covalency is present in CaRuO$_3$, and the differences in the magnetic ground states of CaRuO$_3$ and SrRuO$_3$ are due to band structure effects related to the modulation of the Ru-O hybridization by the structural distortion. In this regard, it should be noted that Ru$^{5+}-$O hybridization may be even stronger in Sr$_2$YRuO$_6$, based on the expectation that the O $2p$ manifold would be even higher in energy with respect to the Ru $d$ states. The similar Ru-O distances in SrRuO$_3$ and Sr$_2$YRuO$_6$ (those in SrRuO$_3$ being less than 0.03 \AA\ longer) and the fact that 5 is not a common oxidation state for Ru also suggest strong covalency in the double perovskite. Here we report density functional calculations of the electronic and magnetic properties of Sr$_2$YRuO$_6$. These confirm the strongly hybridized view of these materials and provide an explanation for the electronic and magnetic properties. \subsection{SrRuO$_3$} The electronic structure of SrRuO$_3$ has been described elsewhere \cite{6,27}. Here we repeat, for completeness, the main results, and also discuss some quantitative differences between the published calculations. There have been two recent band structure calculations for SrRuO$_3$ \cite{6,27}. In both works the calculations were performed for both an idealized cubic perovskite structure and the experimental crystal structure. Allen {\it et al.} \cite{27} interpreted their experimental measurements in terms of the band structure calculated within the local spin density approximation (LSDA) using the linear muffin-tin orbitals (LMTO) method. Singh \cite{6} used the general potential linearized augmented plane-waves (LAPW) method to calculate electronic and magnetic properties. The two studies yielded reasonably similar results for the electronic structures near the Fermi energy, although some noticeable differences are present.
Important for interpreting experimental results are the differences in the density of states and in the Fermi velocities. The latter were found in Ref. \onlinecite{6} to be almost isotropic, while in Ref. \onlinecite{27} a strong anisotropy of the Fermi velocity (about 30\% in each channel) was reported. The ratio $N_{\uparrow }/N_{\downarrow }$ found in Ref. \onlinecite{27} is 50\% larger than that in Ref. \onlinecite{6}. Most importantly, the overall shape of the density of states within a $\pm 0.2$ Ry window at the Fermi level is rather different. It is known that the accuracy of atomic sphere approximation calculations can be difficult to control for materials with open crystal structures and low site symmetries, due to sensitivity to the computational parameters (e.g., basis set, inclusion of empty spheres in lattice voids, linearization parameters, etc.). Since we wanted to use the LMTO-ASA technique in analyzing the calculated band structure, we have repeated the LAPW calculations reported in Ref. \onlinecite{6} using a standard LMTO-ASA package, {\it Stuttgart-4.7}. We found it necessary to include 10 empty spheres per formula unit to achieve adequate space filling in the distorted structure (in the cubic perovskite structure this was not needed). The results appeared to be much closer to the LAPW results of Ref. \onlinecite{6} than to the LMTO ones of Ref. \onlinecite{27}; Ref. \onlinecite{27} does not mention the use of any empty spheres, in which case insufficient space filling could have influenced the calculation. The results given here are from LAPW calculations, except where specifically noted otherwise. Calculations for SrRuO$_3$ in the ideal perovskite structure yielded a spin moment of 1.17 $\mu _B$ per formula unit, while calculations including the experimentally observed rotations yielded a larger moment of 1.59 $\mu _B$, in accord with recent experimental results.
Only a portion of the total moment resides on the Ru sites (64\% in the LAPW MT sphere, and 67\% in the LMTO atomic sphere). The electronic density of states has a gap in the spin majority channel which is only 20 mRy above the Fermi level. The fact that SrRuO$_3$ is so close to a half-metal is important for understanding its transport properties, and the fact that they are so sensitive to magnetic ordering (and, correspondingly, to temperature). \subsection{CaRuO$_3$} Experimentally, CaRuO$_3$ is a paramagnetic metal. This fact suggests that the rotation of the RuO$_6$ octahedra is antagonistic to magnetism (since larger rotations constitute the main structural difference between CaRuO$_3$ and SrRuO$_3$). However, this conjecture is apparently at odds with the calculated result that the equilibrium magnetization in SrRuO$_3$ is smaller in an ideal cubic perovskite structure than in the actual distorted one. As a first step to understanding this, we have extended our calculations to CaRuO$_3$ in its experimental structure. Details of the method are as in Ref. \onlinecite{6}. The resulting density of states is shown in Fig.~\ref{CR3-DOS}. We find that indeed the magnetism is suppressed in this case, though in a very borderline fashion. Fixed spin moment calculations of the total energy as a function of spin magnetization for CaRuO$_3$ show a very extended flat region, extending to near 1.5 $\mu _B$ per formula unit. This is reminiscent of fcc Pd, which also shows such a flat region. This borderline state implies a high spin susceptibility and explains the fact that low doping can induce a ferromagnetic state. Further, paramagnon-like spin excitations should be very soft in this material, and magnetic impurities may be expected to induce giant local moments. There are already some reports that this is the case in CaRuO$_3$ \cite{crow}.
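The flat fixed-spin-moment energy curve described above can be caricatured by a Landau expansion $E(M)=aM^2+bM^4+cM^6$ with a small positive $a$ and a negative $b$; the coefficients below are invented purely for illustration and are not fitted to the LSDA results:

```python
import numpy as np

# Toy Landau expansion E(M) = a M^2 + b M^4 + c M^6 mimicking a
# borderline magnet: with small positive a and negative b the curve is
# nearly flat out to M ~ 1.5, as found for CaRuO3 (coefficients are
# invented for illustration, in arbitrary energy units).
a, b, c = 0.010, -0.009, 0.002

def E(M):
    return a * M**2 + b * M**4 + c * M**6

M = np.linspace(0.0, 1.5, 301)
spread = E(M).max() - E(M).min()   # total energy variation over the range
```

With these numbers the total variation of $E(M)$ over $0\le M\le 1.5$ is an order of magnitude smaller than the individual terms, mimicking the near-degeneracy of paramagnetic and magnetized states and the resulting high spin susceptibility.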
\begin{figure}[tbp] \centerline{\epsfig{file=fig1d.epsi,height=0.95\linewidth,angle=-90}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{LAPW density of states of CaRuO$_3$ in its actual crystal structure. The total density of states is shown by the solid line. Only the Ru(d) partial density of states is shown separately (dashed line), because the O(p) density is approximately the difference between the total and Ru(d) densities. Here and in the other figures all densities of states are per spin and formula unit. } \label{CR3-DOS} \end{figure} Having shown that the ferromagnetism in SrRuO$_3$ and its suppression in CaRuO$_3$ can be described using band structure methods, we turn to the question of why these two perovskites have different magnetic properties. To determine whether the key difference between the materials is structural, we have performed calculations for CaRuO$_3$ using the crystal structure of SrRuO$_3$. These calculations yield a spin magnetization of 1.68 $\mu _B$ per formula unit and a magnetic energy of 0.06 eV/Ru, very similar to SrRuO$_3$. Calculations for the intermediate structure formed by a linear average of the experimental CaRuO$_3$ and SrRuO$_3$ structures yield a similar spin moment of 1.53 $\mu _B$ per formula unit but a magnetic energy of only 0.029 eV/Ru (note the similarity of the magnetizations and the large variation of the magnetic energy). To within the accuracy of our calculations, this paramagnetic--ferromagnetic energy difference becomes zero just at the experimental CaRuO$_3$ structure. Since ferromagnetism in (Sr,Ca)RuO$_3$ is apparently strongly coupled to the rotation of the octahedra, alloying on the A site cation is expected to be an effective means for tuning the magnetic properties. Alloying CaRuO$_3$ with larger divalent cations should generally induce ferromagnetism, while alloying SrRuO$_3$ with smaller cations should suppress ferromagnetism.
BaRuO$_3$, while a known compound, occurs in a different crystal structure and is not magnetic. However, Pb can be partially substituted on the Sr site, and it is known that introduction of this slightly larger divalent cation does increase $T_C$ in SrRuO$_3$. Later in the paper we shall analyze the transformation of the band structure of (Sr,Ca)RuO$_3$ upon increase of the tilting in more detail and will show that the nonmonotonic dependence of the equilibrium magnetization on tilting is a straightforward consequence of a natural evolution of the band structure near $E_F$ with the structural distortion. \subsection{Sr$_2$YRuO$_6$} The electronic and magnetic structure of Sr$_2$YRuO$_6$ was calculated using the full experimental crystal structure of Battle and Macklin \cite{SYR-str} except that the very small (0.23\%) lattice strain was neglected. Additional calculations were performed for idealized structures neglecting the tilting of the octahedra to help understand the role of this distortion, which changes the angles and distances along the Ru-O-O-Ru hopping paths. These local density approximation calculations were performed using the general potential LAPW method \cite{LAPW} including local orbital extensions \cite{LAPW1} to accurately treat the O $2s$ states and upper core states of Sr and Y as well as to relax any residual linearization errors associated with the Ru $d$ states. A well converged basis consisting of approximately 2700 LAPW basis functions in addition to the local orbitals was used with O sphere radii of 1.58 a.u. and cation radii of 2.10 a.u. This self-consistent approach has a flexible representation of the wavefunctions in both the interstitial and sphere regions and makes no shape approximations to either the potential or charge density. As such it is well suited to materials with open structures and low site symmetries like Sr$_2$YRuO$_6$. 
In addition, we used the LMTO method in the atomic sphere approximation and tight-binding representation\cite{LMTO} (Stuttgart code, version 4.7) to get better insight into the calculated electronic structure. The LMTO-ASA method is less accurate than the full-potential LAPW, but it provides more flexibility in how the results are represented and analyzed in tight-binding language. \begin{figure}[tbp] \centerline{\epsfig{file=fig2d.epsi,height=0.95\linewidth,angle=-90}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{LAPW density of states of antiferromagnetic Sr$_2$YRuO$_6$. Partial densities of states of Ru(d) and O(p) orbitals are shown by the dashed and dotted lines, respectively.} \label{DOS-SYR-AF} \end{figure} Calculations were performed at the experimental structure for ferromagnetic (F) and the observed antiferromagnetic (AF) orderings. The AF ordering is 0.095 eV/Ru lower in energy than the F ordering and has an insulating gap in the band structure, consistent with the experimental ground state. The insulating gap of 0.08 eV is between majority and minority spin states and may yield only a weak optical signature. The Ru moment, as measured by the magnetization within the Ru LAPW sphere, is 1.70 $\mu _B$ for the AF state and 1.80 $\mu _B$ with the F ordering, in reasonable accord with the neutron scattering results. The similarity of the moments for the two spin configurations suggests that a local moment picture of the magnetism is appropriate for Sr$_2$YRuO$_6$. This is in contrast to perovskite SrRuO$_3$. As in SrRuO$_3$, there are substantial moments within the O LAPW spheres as well as the Ru spheres, amounting to approximately 0.10 $\mu _B$/O (AF ordered) and 0.12 $\mu _B$/O (F ordered).
These cannot be understood as tails of Ru 4d orbitals extending beyond the LAPW sphere radii, since such an explanation is inconsistent with the radial dependence of these orbitals; rather, they arise from polarization of the O ions due to hybridization, which is evidently strong both from this point of view and from the calculated electronic structure, discussed below. The total local moment per formula unit is of mixed Ru and O character and amounts to 3 $\mu _B$/f.u., which is approximately 60\% Ru derived and 40\% O derived (the interstitial polarization of 0.5 - 0.7 $\mu _B$/cluster derives from both Ru and O, but is assigned as mostly O in character based on the extended 2p orbitals of negative O ions, the small O sphere radius, and the results of LMTO-ASA calculations, which do not have any interstitial volume). The calculated exchange splittings of the O 1s core levels are 80 to 95 meV, depending on the particular O site. The O polarizations may be observable in neutron experiments if O form factors are included with Ru in the refinement. Such an experiment is strongly suggested by the present results. \begin{figure}[tbp] \centerline{\epsfig{file=fig3d.epsi,height=0.95\linewidth,angle=-90}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{LAPW density of states of ferromagnetic Sr$_2$YRuO$_6$. Partial density of states of Ru(d) orbitals and the total density of states are shown by the dashed and solid lines, respectively.} \label{DOS-SYR-F} \end{figure} Projections of the electronic density of states (DOS) of antiferromagnetic Sr$_2$YRuO$_6$ onto the LAPW spheres are shown in Fig. \ref{DOS-SYR-AF}, where majority and minority spin projections onto a Ru ion and the six O ions in its cluster are shown. The DOS in the two spin channels are similar in shape, apart from an exchange splitting throughout the valence energy region, and show evidence of a strongly hybridized electronic structure.
The details of this structure are deferred to the tight-binding analysis below; here we only mention the exchange splitting of the essentially pure O 2p states between -4 and -6 eV relative to the Fermi energy ($E_F$), and the fact that there are substantial Ru 4d contributions to the minority spin channel between -4 and -6 eV, as well as O contributions above $E_F$, implying that the average Ru 4d occupancy is considerably higher than d$^3$. Although assigning charge in a crystal to various atoms is an ambiguous procedure, integration of the d-like DOS implies an average near d$^5$, similar to perovskite SrRuO$_3$. The magnetic moments derive from polarization of three bands near $E_F$ by an exchange splitting of 1 eV. The F ordered DOS (Fig. \ref{DOS-SYR-F}) is very similar to that in the AF state, but the exchange splitting is somewhat smaller and the bandwidth somewhat larger, resulting in a slight semimetallic overlap of majority and minority spin bands at $E_F$, which reduces the spin moment from 3.0 to 2.97 $\mu _B$/f.u. Parallel calculations were performed using a structure in which the tilting of the RuO$_6$ clusters is suppressed. As with the actual experimental structure, the AF ordering is lower in energy than the F ordering. However, in this case the band structures are metallic for both orderings, showing that the tilting is crucial for the insulating state. As will be discussed below, there is a substantial coupling between the magnetic order and this structural degree of freedom. \section{Tight-binding interpretation and physical properties} \subsection{Sr$_2$YRuO$_6$} \subsubsection{Single RuO$_6$ cluster\label{1cluster}} Somewhat unexpectedly, the easiest compound to understand is the Sr$_2$YRuO$_6$ double perovskite. Sr and Y, as is common in perovskites, are fully ionic, so that the states around the Fermi level barely have any Sr or Y character.
Thus, as mentioned, this compound can be viewed as consisting of rigid RuO$_6$ octahedra, arranged on an fcc lattice and loosely connected to each other. We will show below that this intuitive picture provides a very good qualitative and quantitative interpretation of the full-scale band structure calculation. In contrast with Sr$_2$RuO$_4$ or Sr$_x$Ca$_{1-x}$RuO$_3,$ no octahedra share oxygens. The octahedra are slightly tilted, which we shall neglect for the moment (the effect of tilting is in a certain sense important and will be discussed later). Accordingly, we begin by discussing a single cluster. The electronic structure of a single RuO$_6$ cluster is governed by the relative position of the Ru $d$ and O $p$ levels, and by the corresponding hopping amplitudes. The Ru $d$ states are split by the crystal field into two manifolds consisting of 3 $t_{2g}$ and 2 $e_g$ levels, respectively, and these are separated by $\approx 1$ eV. The O $p$ levels are subject to a crystal field splitting at least three times smaller, and yield 9 $p_\pi $ states, which form $pd\pi $ bonds with Ru, plus three $p_\sigma $ states, which participate in the $pd\sigma $ bonding. After including $pd$ hopping, the system of levels becomes, for each spin channel: 13 nonbonding: 4$\times E_0(p_\sigma )+9\times E_0(p_\pi ),$ 5 bonding: 2$\times E_{-}(E_g)+3\times E_{-}(T_{2g}),$ and 5 antibonding: 2$\times E_{+}(E_g)+3\times E_{+}(T_{2g}),$ where $E_0$ are pure ionic levels, and $E_{\pm }(E_g)=0.5\{E_0(p_\sigma )+E_0(e_g)\pm \sqrt{[E_0(p_\sigma )-E_0(e_g)]^2+16t_\sigma ^2}\},$ $E_{\pm }(T_{2g})=0.5\{E_0(p_\pi )+E_0(t_{2g})\pm \sqrt{[E_0(p_\pi )-E_0(t_{2g})]^2+16t_\pi ^2}\}.$ The actual ordering of levels in RuO$_6,$ as shown in Fig. \ref{levels}, is $E_{-}(T_{2g})\approx E_{-}(E_g)<E_0(p_\sigma )<E_0(p_\pi )<E_{+}(T_{2g})<<E_{+}(E_g)$.
The last inequality leads to a substantial gap ($>$ 2 eV) between the antibonding $T_{2g}$ and the \begin{figure}[tbp] \centerline{\epsfig{file=level.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{Calculated LAPW density of states for cubic Sr$_2$YRuO$_6$ (antiferromagnetic) and the level scheme for an individual RuO$_6$ cluster. Notations for the density of states are the same as in Fig. \protect\ref{DOS-SYR-AF}. $T_{2g}$ levels and their parent states are shown by solid lines, the $E_{g}$ levels and states by dashed lines. Compared with the formulas in Section \ref{1cluster}, additional small O-O hoppings $\tau _\pi $ and $\tau _\sigma $ are taken into account; these split off from the non-bonding manifold mixed oxygen states at $E_0(p_\sigma )-4\tau _\sigma $ and $E_0(p_\pi )-2\tau _\pi $.} \label{levels} \end{figure} antibonding $E_g$ bands in the solid. This large gap is only partially due to the crystal field, and arises largely from the stronger (relative to $pd\pi )$ $pd\sigma $ bonding. The exchange splitting is, naturally, weaker than this enhanced crystal field splitting, and Hund's rule does not apply to the high-lying antibonding $E_g$ states, which remain empty in both spin channels. Neglecting those states, there are 21 levels to be occupied by 39 valence electrons. Here Hund's rule does apply and tells us to populate all 21 spin-majority states, and all but the three antibonding $T_{2g}$ levels in the spin-minority channel. Thus for the electronic properties of the crystal these six spin-up and spin-down $T_{2g}$ molecular orbitals are of primary relevance; the symmetry of these orbitals is the same as for the $d(t_{2g})$ states of a transition metal ion. We now use this information to analyze the electronic structure of crystalline Sr$_2$YRuO$_6$.
\subsubsection{Intercluster hopping and exchange.} When a solid is built out of the clusters, the molecular levels broaden into bands, which however remain quite narrow in this material. Although the main intercluster hopping occurs via $dd\sigma $ matrix elements (here and below by $d$ we mean the Ru-O molecular orbitals with the effective $d$ symmetry), the intercluster distance is large and the effective hopping amplitude is small. Thus one may conjecture that the valence band formed out of the majority spin molecular $T_{2g}$ orbitals and the corresponding minority spin band do not overlap, and that the crystal, in either the ferro- or antiferromagnetic state, remains insulating. A more detailed analysis, discussed later in the paper, reveals a difference between the ferro- and antiferromagnetic orderings, namely that the bandwidth is slightly larger, and the exchange splitting slightly smaller, in the former case. In fact, our LDA calculations, described above, yield an insulating antiferromagnetic ground state, with a small gap of about 0.07 eV; they also give a metastable semimetallic ferromagnetic state, with a band overlap of a few meV. Let us now analyze this band structure in tight-binding terms. A nearest neighbor model should be a good starting approximation. Let us begin with the ferromagnetic case, and consider the undistorted crystal structure (no tilting of the oxygen octahedra). The main parameter is now the $xy$-$xy$ hopping amplitude, $\tau _\sigma =0.75t_{dd\sigma }.$ In the nearest neighbor approximation, the three $T_{2g}$ bands do not hybridize with each other. Each of them, however, disperses according to $E_k=E_{+}(T_{2g})+4\tau _\sigma \cos (k_xa/2)\cos (k_ya/2),$ and the corresponding permutations of $x,y,z.$ Including $dd\pi $ hopping, the bands hybridize among themselves, resulting in a further increase in the bandwidth.
The calculated LDA bands have widths of approximately 1.1 eV, corresponding to $\tau _\sigma \approx 0.14$ eV. $dd\pi $ hopping effects are responsible for deviations from the above dispersion of about 0.1 eV. Importantly, there is no repulsion between the valence bands and the conduction bands, because they are fully spin polarized with opposite spins. This situation changes, however, in the antiferromagnetic case. The observed magnetic ordering corresponds to ferromagnetic (001) planes stacked antiferromagnetically. Each RuO$_6$ cluster thus has 4 neighbors with the same and 8 neighbors with the opposite spin. Correspondingly, of the three $T_{2g}$ derived bands one ($xy$) remains essentially the same as in the ferromagnet, and the two others lose their dispersion to first order in $\tau _\sigma ,$ since the relevant neighboring clusters have only states of the opposite spin at this energy. Instead, for those bands there is a hybridization between the valence and the conduction bands, because now the orbitals with the same spin on the neighboring clusters belong to these different bands. The hybridization matrix element is $t_\sigma ({\bf k})=4\tau _\sigma \cos (k_{x,y}a/2)\cos (k_za/2),$ and produces an additional bonding energy $2J\approx \langle t_\sigma ^2({\bf k})\rangle /\Delta $ per cluster ($\Delta \approx 1$ eV is the exchange splitting). This yields about 0.08 eV, close to the calculated LDA energy difference (0.12 eV) between the AFM and FM configurations in the ideal undistorted structure. In the actual crystal structure the oxygen octahedra are tilted by about 12$^{\circ }$, so that $\tau $ is reduced by about 15\% (neglecting $\tau _\pi $ etc.), yielding $2J\approx 0.06$ eV. Our first-principles LAPW calculations give a bandwidth of approximately 0.9 eV, that is, $\tau _\sigma \approx 0.11$ eV, and $2J\approx 0.05$ eV.
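These numbers follow from elementary Brillouin-zone averages, and it is easy to check them. The BZ average of $\cos ^2(k_{x,y}a/2)\cos ^2(k_za/2)$ is $1/4$, so $2J=4\tau _\sigma ^2/\Delta $, and the bandwidth of the cosine dispersion above is $W=8\tau _\sigma $. The following short script (ours, purely an arithmetic sanity check of the quoted values) reproduces the estimates:

```python
# Sanity check of the tight-binding superexchange estimate 2J ~ <t_sigma^2(k)>/Delta.
# The BZ average of cos^2(k1 a/2)*cos^2(k2 a/2) is (1/2)*(1/2) = 1/4,
# so 2J = (4*tau)^2 * (1/4) / Delta = 4*tau^2/Delta; the bandwidth is W = 8*tau.
def two_J(tau, delta=1.0):
    """AFM stabilization energy per cluster (eV); delta is the exchange splitting."""
    return 4.0 * tau**2 / delta

# Undistorted structure: bandwidth ~1.1 eV -> tau = W/8 ~ 0.14 eV
tau_cubic = 1.1 / 8.0
print(two_J(tau_cubic))          # ~0.076 eV, i.e. "about 0.08 eV"

# A 12-degree tilting reduces tau by ~15%
print(two_J(0.85 * tau_cubic))   # ~0.055 eV, i.e. "2J ~ 0.06 eV"

# LAPW bandwidth 0.9 eV -> tau ~ 0.11 eV
print(two_J(0.9 / 8.0))          # ~0.051 eV, i.e. "2J ~ 0.05 eV"
```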
The calculated LAPW energy difference is 0.095 eV, the same reduction from the undistorted case as given by the simple tight-binding estimate above. It is worth noting that while this mechanism gives an effective antiferromagnetic exchange interaction $J\propto \tau ^2/\Delta ,$ the underlying physics is very similar to, but not identical with, the usual superexchange interaction in 3$d$ oxides, $J\propto t^2/U $. The differences are that instead of metal-oxygen-metal hopping the relevant hopping here is direct cluster-cluster hopping, and the energy denominator is the band gap, due mainly to intracluster exchange, rather than Coulomb correlations described by a Hubbard $U$. One can also estimate the Neel temperature, using the above value for $J.$ To do that, let us begin by noting that this system represents a very good approximation to the antiferromagnetic nearest neighbor fcc model. The strong magnetoelastic coupling discussed below does not favor non-collinear spin configurations, so the direction of the cluster magnetic moments is fixed. The magnetic coupling $J_2$ with the next nearest neighbors can be safely neglected. Indeed, it is governed by the $dd\sigma $ hopping. Although $\tau _\sigma $ is larger than $\tau _\pi ,$ usually by a factor of the order of 2, the larger distance, with the canonical scaling $d^{-(l+l^{\prime }+1)},$ gives a factor of $2^{-2.5}=0.18,$ and the energy denominator in the equation for $J$ is about 10 times larger. Taken together, one expects $J_2$ to be at least two orders of magnitude smaller than $J.$ The antiferromagnetic fcc Ising model is well studied\cite{leib}. Despite magnetic frustration, it has a Neel temperature of approximately 1.76$J$ for spin 1/2 and approximately 1.33$J$ for spin 1, which in our case corresponds to 700--900 K. The measured $T_N$ is 26 K, in apparent severe disagreement with our estimate.
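The conversion behind the 700--900 K figure is a one-liner; the following sketch (ours; identifying the exchange scale entering the prefactors with the calculated AFM-FM energy difference of 0.05--0.06 eV per cluster is our reading of the estimate above) makes the unit arithmetic explicit:

```python
# Convert the fcc Ising Neel-temperature prefactors to kelvin.
# The prefactors 1.76 (spin 1/2) and 1.33 (spin 1) are those quoted in the
# text; taking J = 0.05-0.06 eV (the calculated AFM-FM energy scale) is our
# reading of the estimate.
K_PER_EV = 11604.5  # 1 eV in units of k_B

for J in (0.05, 0.06):
    print(J, 1.33 * J * K_PER_EV, 1.76 * J * K_PER_EV)
# The spin-1 prefactor gives roughly 770-930 K, on the quoted 700-900 K scale
# and some thirty times the measured T_N = 26 K.
```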
It is tempting to ascribe this to intracluster Hubbard-like correlation effects, which can increase the gap and reduce $J.$ Moreover, since the $t_{2g}$ band width is only 1 eV, even a moderate Hubbard repulsion could affect $J$. One can get a very rough upper estimate of this effect as follows: the energy of the Coulomb repulsion of two electrons placed in two $t_{2g}$ orbitals on the same cluster is (assuming about equal population on Ru and O) $U\approx 0.25U_{O-O}+0.25U_{Ru-Ru}+0.5U_{Ru-O},$ where $U_{O-O}$ is the Coulomb repulsion of two electrons localized on two neighboring oxygens, etc. $U_{O-O}\approx 1/d_{O-O}=4.4$ eV; $U_{Ru-Ru}$ is believed to be about 1.5 eV\cite{24}, and for $U_{Ru-O}$ we use 3 eV, keeping in mind that the charge-transfer metal-oxygen energy for the 3d oxides is about 4.5 eV and the metal-oxygen distance is 50\% smaller there. Then we arrive at $U<3$ eV. It is unclear to what extent this $U$ will be reduced by screening by the surrounding clusters and by intracluster charge redistribution, but this effect would definitely be substantial. In any case, using 3 eV as a very safe upper bound, we get for the lower bound on 2$J$ approximately 0.03 eV, which corresponds to a $T_c$ of at least 300 K. Thus, strong correlations alone cannot explain the anomalously low Neel temperature of this compound. Another possibility for reducing the transition temperature is magnetoelastic coupling, which is the subject of the next section. \subsubsection{Magnon-phonon coupling} The fact that magnetic excitations and phonons are coupled in ruthenates is known\cite{2}, but not well understood from a microscopic point of view. In the case of Sr$_2$YRuO$_6$ it is, however, reasonably clear: with increasing tilting angle the $\tau _\sigma $ hopping must decrease, and with it the antiferromagnetic stabilization energy and the effective exchange constant $J.$ This is confirmed by our first-principles results.
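The weighted Coulomb sum in the upper bound above is simple arithmetic, checked here (our sketch; the three input values are those quoted in the text):

```python
# Rough upper bound on the intracluster Coulomb repulsion U, with the
# weights 1/4, 1/4, 1/2 and the input estimates quoted in the text.
U_OO, U_RuRu, U_RuO = 4.4, 1.5, 3.0          # eV
U = 0.25 * U_OO + 0.25 * U_RuRu + 0.5 * U_RuO
print(U)   # 2.975 eV, i.e. U < 3 eV
```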
In other words, magnetic excitations flipping the spin of a RuO$_6$ cluster are coupled with the phonon mode changing the tilting angle (which is the soft mode for the transition from the cubic structure into the tilted one). A dimensionless coupling constant may be defined as $\lambda =d\ln J/dQ,$ where $Q=u_O\sqrt{2M_O\omega /\hbar }$ is the phonon coordinate. Here $u_O$ is the displacement of the oxygens from their equilibrium positions, and $\omega $ is the frequency of the phonon. Very roughly, $M_O\omega ^2=8\Delta E/d^2,$ where $\Delta E$ is the energy difference between the cubic and the distorted structure, taken per one oxygen, and $d$ is the equilibrium oxygen displacement. From our calculations, $\Delta E=90$ meV. Experimentally, $d\approx 0.4$ \AA . Thus, $\omega \approx 270$ cm$^{-1}$. Now, using $2J\propto \tau _\sigma ^2\propto \cos ^22\theta ,$ where $\theta $ is the tilting angle, we can estimate $d\ln J/du_O\approx 8\theta _0^2/d\approx 0.8$ \AA $^{-1}.$ In fact, linear interpolation of $J$ between the cubic and the equilibrium structure gives the same number for $d\ln J/du_O.$ Thus $\lambda $ is about 0.17 for this phonon mode, which means that the characteristic (e.g., zero-point motion) amplitude of the librations of the octahedra around their equilibrium position will produce sizable changes in the effective exchange constant. The thermodynamics of such a system is interesting and unusual, but its discussion goes beyond the scope of this paper. It is important to note, however, that the long-range order in the nearest-neighbor antiferromagnetic fcc Ising model appears exclusively because of the finite-temperature entropy contribution to the free energy\cite{mac}. While at $T=0$ there is an infinite number of degenerate states, ordered in two dimensions and disordered in the third, at $T>0$ this degeneracy is lifted because of the different spectra of low-energy spin-flip excitations in the different ground states.
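The two intermediate numbers in this estimate, the soft-mode frequency and the logarithmic derivative of $J$, can be checked directly (our sketch; the physical constants are standard and the inputs are those quoted above):

```python
import math

# Numerical check of the soft-mode frequency, M_O * omega^2 = 8*DeltaE/d^2,
# and of d(ln J)/du_O ~ 8*theta0^2/d, with the inputs quoted in the text.
u = 1.66053906660e-27      # atomic mass unit, kg
eV = 1.602176634e-19       # J
M_O = 16.0 * u             # oxygen mass
dE = 0.090 * eV            # energy gain per oxygen, cubic -> tilted
d = 0.4e-10                # equilibrium oxygen displacement, m

omega = math.sqrt(8.0 * dE / (M_O * d**2))          # rad/s
omega_cm = omega / (2.0 * math.pi * 2.99792458e10)  # wavenumber, cm^-1
print(omega_cm)            # ~277 cm^-1, matching "omega ~ 270 cm^-1"

theta0 = math.radians(12.0)
dlnJ_du = 8.0 * theta0**2 / 0.4                     # per angstrom
print(dlnJ_du)             # ~0.88 A^-1, matching the quoted ~0.8 A^-1
```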
As long as such spin-flip excitations are coupled to the phonons, the standard treatment of the AFM fcc Ising model does not apply, and the transition does not necessarily occur at $T\agt J$. However, long-range two-dimensional AFM correlations should be present up to $T\approx J$, and could in principle be seen in some experiments. \subsubsection{Extended Stoner model for Sr$_2$YRuO$_6$} The above discussion of the magnetic properties of Sr$_2$YRuO$_6$ was based on the molecular (cluster) picture, and we observed that oxygen plays a crucial role in the formation of the magnetic state. The same conclusion can be reached starting from the extended band picture. The standard approach to magnetism in band theory goes back to Stoner and Slater\cite{ss}. They considered non-interacting electrons in the paramagnetic state, and added their exchange interaction in an average form, $H_{mag}=In_{\uparrow }n_{\downarrow }=const-Im^2/4,$ where $m$ is the total magnetization and $I$ is independent of $m$. The magnetic susceptibility of such a system can be written as \begin{equation} \chi ^{-1}\equiv \partial ^2E/\partial n_{\uparrow }\partial n_{\downarrow }=\chi _0^{-1}-I, \label{stoner} \end{equation} where $\chi _0$ is the Pauli susceptibility. If the magnetization is measured in Bohr magnetons, then $\chi _0=N(0),$ the density of states per spin at the Fermi level. The instability occurs when $\chi $ diverges, that is, when $IN(0)$ becomes larger than 1. Eq. \ref{stoner} can of course be viewed as an approximation in the framework of general linear response theory. However, such an approximation is highly uncontrolled, and even the splitting of the right-hand side of Eq. \ref{stoner} into two terms cannot be derived in a systematic way. More instructive is the application of the Stoner method within density functional theory.
In DFT, the total energy change is written exactly as the sum of the change in the one-electron energy, which for small $m$ is $N(0)^{-1}m^2/4,$ and the change in the interaction energy, which is $(\partial h/\partial m)m^2/4$. Here $h=\langle V_{\uparrow }-V_{\downarrow }\rangle $ is the effective Kohn-Sham magnetic field averaged over the sample (because Stoner theory assumes a uniform internal ferromagnetic field), and $I\equiv -(\partial h/\partial m).$ The utility of the Stoner approach in DFT is due to the fact that usually there are very few orbitals whose occupancy substantially influences $h,$ and therefore $I$ is easy to calculate in a quasiatomic manner, using, for instance, the quasiatomic loop in standard LMTO codes. In practice, in quasiatomic calculations one changes the occupation of a given orbital, transferring some charge from the spin-up to the spin-down quasiatomic level, recalculates the LSDA potential, and determines how large the induced splitting of the quasiatomic levels is. If different kinds of atoms in a solid contribute to the density of states at the Fermi level, one has to take into account the magnetization energy for each of them. This means that the total Stoner $I$ for such a solid is the average of the individual (quasiatomic) $I$'s weighted with the squared partial densities of states. Indeed, suppose the states at the Fermi level are a superposition of orbitals from several atoms, so that $N(0)=\sum_iN_i=N(0)\sum_i\nu _i$ (where $i$ labels the atoms). Applying a uniform magnetic field creates a magnetization $m=\sum_im_i,$ where $m_i\equiv \nu _im$ is the magnetization of the $i$-th atom. By definition, the intraatomic energy change is $-\sum_iI_im_i^2/4=-\sum_iI_i\nu _i^2m^2/4.$ Thus, the total $I=\sum_iI_i\nu _i^2.$ So formulated, Stoner theory applies to infinitesimally small changes in magnetization and essentially determines whether or not the paramagnetic state is stable against ferromagnetism.
It is, however, a reasonable assumption that this theory holds, approximately, for finite magnetizations as well. One then has to modify the one-electron energy term $N(0)^{-1}m^2/4$ to account for the energy dependence of the density of states, within the rigid-band approximation. Then, the spin splitting producing a given magnetization $m$ can be defined as $\Delta =m/\bar{N}(m),$ where $\bar{N}(m)$ is the density of states averaged between the Fermi levels of the spin-up and spin-down subbands. For the one-electron energy one obtains $\partial E_1/\partial m=m/2\bar{N},$ because one has to move $m/2$ electrons up by $\Delta .$ Integrating this expression, one arrives at the so-called extended Stoner theory\cite{estoner}, which uses the following expression for the total magnetization energy: \begin{equation} E(m)=\frac 12\int_0^m\frac{m^{\prime }dm^{\prime }}{\bar{N}(m^{\prime })}-\frac{Im^2}4. \label{StonerE} \end{equation} Minimization of this energy leads to the extended Stoner criterion, which states that stable (or metastable) values of the magnetic moment are those for which $\bar{N}(m)I=1$ and $d\bar{N}(m)/dm<0.$ The paramagnetic state is (meta)stable when $\bar{N}(0)\equiv N(0)<1/I.$ Stoner theory is, in principle, formulated for a ferromagnetic instability. However, unless the Fermi surface topology specifically favors (or disfavors) the antiferromagnetic instability with a given vector {\bf Q}, one can assume that $\chi _0({\bf Q})\approx \chi _0(0).$ Indeed, in many cases, if a material comes out magnetic in the calculations, the energy difference between ferro- and antiferromagnetic ordering is small compared with the magnetic stabilization energy. As we shall see, this is the case in Sr$_2$YRuO$_6,$ but not in SrRuO$_3,$ and the reason is that in the latter the Stoner factor $I$ is very different for ferro- and antiferromagnetic arrangements.
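The extended Stoner construction of Eq. \ref{StonerE} is easy to illustrate numerically with a toy rigid-band model (our sketch: the Lorentzian DOS and all parameter values here are invented for illustration and are not taken from the calculations):

```python
import numpy as np

# Toy illustration of the extended Stoner criterion: a model Lorentzian DOS
# peaked at E_F = 0, a rigid-band splitting, and minimization of
# E(m) = (1/2) int_0^m m' dm' / Nbar(m')  -  I*m^2/4.
A, w = 3.0, 0.3          # peak DOS (states/spin/eV) and width (eV) -- hypothetical
I = 0.5                  # Stoner parameter (eV); N(0)*I = 1.5 > 1, so m = 0 is unstable

E = np.linspace(-3.0, 3.0, 60001)
N = A / (1.0 + (E / w) ** 2)                 # model DOS per spin
cum = np.concatenate(([0.0], np.cumsum(0.5 * (N[1:] + N[:-1]) * np.diff(E))))
n0 = np.interp(0.0, E, cum)                  # electrons below E_F

m = np.linspace(1e-3, 2.0, 400)
E_up = np.interp(n0 + m / 2, cum, E)         # spin-up Fermi level
E_dn = np.interp(n0 - m / 2, cum, E)         # spin-down Fermi level
Nbar = m / (E_up - E_dn)                     # averaged DOS of the criterion

dm = m[1] - m[0]
E_band = 0.5 * np.cumsum(m / Nbar) * dm      # one-electron energy cost
E_tot = E_band - I * m ** 2 / 4              # total magnetization energy

i = np.argmin(E_tot)
print(m[i], Nbar[i] * I)   # equilibrium moment; Nbar(m*)*I = 1 at the minimum
```

Because the model DOS decreases away from $E_F$, $\bar{N}(m)$ falls with $m$ and the minimum sits exactly where $\bar{N}(m)I=1$, as the criterion states.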
Now let us consider how one can describe the magnetism in Sr$_2$YRuO$_6$ from the Stoner point of view. Calculation of the Stoner parameters $I$ is straightforward in the LMTO method\cite{LMTO}, which divides space into atomic spheres. In the popular Stuttgart LMTO-TB package it is possible to change the occupancy of any atomic orbital and to calculate the resulting change in the atomic parameters, in particular the shift of the corresponding band center $C_{li}.$ With the spin-up and spin-down occupancies split by $\pm m/2,$ the Stoner parameter is $(C_{\uparrow }-C_{\downarrow })/m.$ We obtain an $I_{Ru}$ of about 0.7 eV, and, importantly, find that the O $p$ states in ruthenates also have a substantial Stoner parameter, $I_O\approx 1.6$ eV. The density of the Ru $d$ states is approximately twice that of the three O $p$ states combined. Thus, the total Stoner parameter for Sr$_2$YRuO$_6$ is $I=I_{Ru}\nu _{Ru}^2+3I_O\nu _O^2\approx 0.38$ eV. Correspondingly, the paramagnetic state is unstable unless $N(0)<2.6$ states/spin/eV/formula. The paramagnetic LMTO density of states of cubic Sr$_2$YRuO$_6$ (that is, with breathing, but with no tilting distortion) near the Fermi level is shown in Fig. \ref{SYR-DOS-NM}. \begin{figure}[tbp] \centerline{\epsfig{file=dos-I.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{LMTO density of states of the $T_{2g}$ band of nonmagnetic Sr$_2$YRuO$_6$, and the inverse Stoner parameter $1/I$.} \label{SYR-DOS-NM} \end{figure} \begin{figure}[tbp] \centerline{\epsfig{file=ston-I.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{Extended Stoner plot for the density of states shown in Fig. \protect\ref{SYR-DOS-NM}.} \label{SRYC-ST} \end{figure} It has the narrow $T_{2g}$ band half filled, and $N(0)$ is close to 4.5 states/spin/eV/formula. This is much larger than $1/I$, so the paramagnetic state is very unstable.
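The composite Stoner factor quoted above follows directly from the stated weights; a quick check (our sketch: reading "twice that of the three O $p$ states combined" as $\nu _{Ru}=2/3$ and $\nu _O=1/9$ per oxygen is our interpretation):

```python
# Composite Stoner factor I = I_Ru*nu_Ru^2 + 3*I_O*nu_O^2 for Sr2YRuO6.
# nu_Ru = 2/3 and nu_O = 1/9 encode the approximate 2:1 split between the
# Ru d and the three O p partial DOS at E_F (our reading of the text).
I_Ru, I_O = 0.7, 1.6               # quasiatomic Stoner parameters, eV
nu_Ru, nu_O = 2.0 / 3.0, 1.0 / 9.0

I_total = I_Ru * nu_Ru**2 + 3.0 * I_O * nu_O**2
print(I_total, 1.0 / I_total)      # ~0.37 eV and ~2.7 states/spin/eV,
                                   # close to the quoted 0.38 eV and 2.6
```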
On the other hand, the average density of states in the $T_{2g}$ band is not that large, $\tilde{N}\sim 3/W\approx 3/(1.5$ eV$)=2$ states/spin$\cdot $eV, so in the cubic structure this band will not be fully polarized. Integrating the density of states shown in Fig. \ref{SYR-DOS-NM}, we obtain the extended Stoner plot for Sr$_2$YRuO$_6$ (Fig. \ref{SRYC-ST}), and observe that the equilibrium magnetization is slightly smaller than 2 $\mu _B$ and the ground state is semimetallic, in agreement with the self-consistent spin-polarized LMTO calculations, as well as with the more accurate LAPW calculations. Another fact that one can observe from Fig. \ref{SRYC-ST} is that if oxygen did not contribute to the total Stoner factor, that is, if the total $I$ were only $I_{Ru}\nu _{Ru}^2\approx 0.31$ eV, the equilibrium magnetization would be very small, approximately 0.4 $\mu _B$, and of course with a much smaller gain in energy. As we shall see below, this is the case in SrRuO$_3$, where in the antiferromagnetic structure the oxygen ions cannot polarize by symmetry. In Sr$_2$YRuO$_6$, however, oxygen contributes fully to the magnetic stabilization energy in both the ferro- and the antiferromagnetic structures, so a next-order mechanism decides which magnetic order is realized. Such an additional mechanism, discussed in the previous section, is the hybridization repulsion between the filled and the empty $T_{2g}$ bands, which stabilizes the antiferromagnetic structure. \subsection{SrRuO$_3$ and CaRuO$_3$} \subsubsection{Tight binding bands and relation to Sr$_2$YRuO$_6$.} The main structural difference from the double perovskite Sr$_2$YRuO$_6$ is that now all oxygen ions are shared between two rutheniums, so one cannot make use of the single RuO$_6$ cluster concept. As with Sr$_2$YRuO$_6,$ we shall start by analyzing the band structure with non-tilted octahedra, that is, in the cubic perovskite structure.
Per cubic cell we have, in each spin channel, two Ru $e_g$ states, strongly hybridized with 3 O $p_\sigma $ orbitals, and three Ru $t_{2g}$ states, hybridized with 6 O $p_\pi $ orbitals. In the nearest neighbor approximation, the $pd\sigma $ bands do not mix with the $pd\pi $ bands, and the $pd\pi $ bands, in turn, consist of three sets of mutually non-interacting $xy,$ $yz$, and $zx$-like bands. The nearest neighbor TB Hamiltonians have the form \end{multicols} \rule[10pt]{0.45\columnwidth}{.1pt} \[ H(e_g)=\left( \begin{array}{ccccc} \ E_0(e_g) & 0 & 2t_\sigma s_x/\sqrt{3} & 2t_\sigma s_y/\sqrt{3} & 4t_\sigma s_z/\sqrt{3} \\ 0 & E_0(e_g) & 2t_\sigma s_x & -2t_\sigma s_y & 0 \\ 2t_\sigma s_x/\sqrt{3} & 2t_\sigma s_x & E_0(p_\sigma )\ & 0 & 0 \\ 2t_\sigma s_y/\sqrt{3} & -2t_\sigma s_y & 0 & E_0(p_\sigma )\ & 0 \\ 4t_\sigma s_z/\sqrt{3} & 0 & 0 & 0 & E_0(p_\sigma )\ \end{array} \right) \] and \[ H(xy)=\left( \begin{array}{ccc} E_0(t_{2g}) & 2t_\pi s_x & 2t_\pi s_y \\ 2t_\pi s_x & E_0(p_\pi ) & -4t_\pi ^{\prime }s_xs_y \\ 2t_\pi s_y & -4t_\pi ^{\prime }s_xs_y & E_0(p_\pi ) \end{array} \right) , \] \begin{flushright}\rule{0.45\columnwidth}{.1pt} \end{flushright} \begin{multicols}{2} where $s_x=\sin (k_xa/2)$ etc. For each $t_{2g}$ manifold three bands appear: one non-bonding at $E_0(p_\pi ),$ and one bonding-antibonding pair at $E_{\pm }(xy)=0.5\{E_0(p_\pi )+E_0(t_{2g})\pm \sqrt{[E_0(p_\pi )-E_0(t_{2g})]^2+16t_\pi ^2(s_x^2+s_y^2)}\}.$ Analysis of the calculated band structure shows that $E_0(t_{2g})\approx E_0(p_\pi ),$ so, neglecting the oxygen-oxygen hopping $t^{\prime },$ the dispersion is approximately $E_0(t_{2g})\pm 2t_\pi \sqrt{s_x^2+s_y^2},$ where $t_\pi \approx 1.4$ eV. The Ru $e_g$ orbitals are split off from the $t_{2g}$ orbitals by about 3 eV. As in Sr$_2$YRuO$_6,$ the crystal field effect on the oxygen states is weaker: the O $p_\sigma $ states are less than 2 eV below the O $p_\pi $ states.
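The $t_{2g}$ block is small enough to diagonalize directly; a minimal sketch (ours, at an arbitrary illustrative $k$ point) confirms that for $t^{\prime }=0$ and $E_0(t_{2g})=E_0(p_\pi )$ the three eigenvalues are the nonbonding level and the bonding-antibonding pair quoted above:

```python
import numpy as np

# Numerical check of the t2g (xy) block: with t' = 0 and E0(t2g) = E0(p_pi) = E0,
# the eigenvalues of H(xy) should be E0 (nonbonding) and E0 +/- 2*t_pi*sqrt(sx^2+sy^2).
# The k point and t_pi = 1.4 eV are illustrative.
def H_xy(kx, ky, E_d=0.0, E_p=0.0, t_pi=1.4, t_p=0.0, a=1.0):
    sx, sy = np.sin(kx * a / 2), np.sin(ky * a / 2)
    return np.array([[E_d,        2*t_pi*sx,    2*t_pi*sy],
                     [2*t_pi*sx,  E_p,          -4*t_p*sx*sy],
                     [2*t_pi*sy,  -4*t_p*sx*sy, E_p]])

kx, ky, t_pi = 0.7, 1.9, 1.4
ev = np.sort(np.linalg.eigvalsh(H_xy(kx, ky)))
sx, sy = np.sin(kx / 2), np.sin(ky / 2)
r = 2 * t_pi * np.hypot(sx, sy)
print(ev)   # the spectrum is [-r, 0, +r] about E0 = 0
```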
The energy distance between the Ru $e_g$ and O $p_\sigma $ levels is nearly 5 eV, so a good approximation is $\Delta E=E_0(e_g)-E_0(p_\sigma )\gg t_\sigma .$ Applying L\"{o}wdin perturbation theory to fold down the oxygen states, we get for the (antibonding) $E_g$ bands the effective Hamiltonian \end{multicols} \rule[10pt]{0.45\columnwidth}{.1pt} \[ H(e_g)=\left( \begin{array}{cc} \ E_0(e_g)+4t_\sigma ^2(s_x^2+s_y^2+4s_z^2)/3\Delta E & 4t_\sigma ^2(s_x^2-s_y^2)/\sqrt{3}\Delta E \\ 4t_\sigma ^2(s_x^2-s_y^2)/\sqrt{3}\Delta E & E_0(e_g)+4t_\sigma ^2(s_x^2+s_y^2)/3\Delta E \end{array} \right) , \] \begin{flushright}\rule{0.45\columnwidth}{.1pt} \end{flushright} \begin{multicols}{2} which yields two bands with dispersion $\epsilon _k=E_0(e_g)+8t^2(s_x^2+s_y^2+s_z^2\pm \sqrt{s_x^4+s_y^4+s_z^4-s_x^2s_y^2-s_z^2s_x^2-s_y^2s_z^2})/\Delta E.$ The formal valency of Ru in Sr$_x$Ca$_{1-x}$RuO$_3$ is 4. The total number of electrons populating the Ru-O valence bands is 22. This means that the bonding (mostly oxygen) $E_g$ bands are filled, as well as the bonding and nonbonding $T_{2g}$ bands. The conduction band is the antibonding $T_{2g}$ band, with its 6 states filled by 4 electrons. This band has a strong (logarithmic) van Hove singularity at half filling. However, the direct oxygen-oxygen hopping $t^{\prime }\approx 0.3$ eV, which we have initially neglected, moves this singularity upward to a position corresponding to approximately 63\% filling (3.8 electrons) and makes the singularity sharper. This is the pronounced peak at $E_F$ in our first-principles paramagnetic DOS\cite{6}. Such a situation, where the Fermi level almost exactly hits a logarithmic peak in the density of states, is energetically unfavorable and leads to an instability, which can be either magnetic, or a sufficiently strong lattice distortion, or both.
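The van Hove statement can be verified for the nearest-neighbor band itself (our sketch, with $t^{\prime }=0$; the shift to 63\% filling involves the O-O hopping and is not reproduced here): the band $E({\bf k})=2t_\pi \sqrt{s_x^2+s_y^2}$ has its saddle points at $E=2t_\pi $, and exactly half of the states lie below that energy.

```python
import numpy as np

# Check that the nearest-neighbor antibonding band E(k) = 2*t*sqrt(sx^2+sy^2)
# (t' = 0) has its saddle at E = 2t and that half of the states lie below it,
# i.e. the logarithmic van Hove singularity sits "at half filling".
t = 1.4                                        # eV, as quoted in the text
k = np.linspace(0.0, 2.0 * np.pi, 801)[:-1]    # k*a over the zone
kx, ky = np.meshgrid(k, k)
Ek = 2 * t * np.hypot(np.sin(kx / 2), np.sin(ky / 2))

frac_below_saddle = np.mean(Ek < 2 * t)
print(frac_below_saddle)                       # ~0.5

hist, edges = np.histogram(Ek, bins=80)
E_peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(E_peak / (2 * t))                        # DOS peak at the saddle energy, ~1.0
```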
\subsubsection{Cubic perovskite: magnetic instability\label{SecSR3-mag}} The calculated partial densities of Ru (d) and of the three O (p) states at the Fermi level in Sr$_x$Ca$_{1-x}$RuO$_3$ are approximately 70\% and 30\%, respectively. Correspondingly, $I=I_{Ru}\nu _{Ru}^2+3I_O\nu _O^2\approx 0.41$ eV. Without the oxygen Stoner parameter, $I\approx 0.35$ eV. As mentioned above, our LAPW calculations yield for SrRuO$_3$ in the cubic structure a relatively small magnetization of 1.17 $\mu_B$. The reason is that the density of states is piled up near the Fermi level and drops quickly away from it. Fig. \ref{SR3+CR3-st} shows how this is reflected in the effective density of states $\tilde{N}(m)$: it decreases rapidly with magnetization and becomes equal to $1/I$ at $m\approx 1.2$ $\mu_B$. For a moderate tilting, corresponding to the actual SrRuO$_3$ structure, $\tilde{N}(0)$ is smaller than in the cubic structure, but it decreases rather slowly with $m$ and remains larger than $1/I$ out to much larger $m$. Two questions arise in this connection: why is the ground state ferromagnetic rather than antiferromagnetic, and why is CaRuO$_3$ in its actual crystal structure not magnetic at all? The first question is particularly easy to answer. In an antiferromagnetic structure, oxygen ions occur between opposite spin Ru ions, and thus by symmetry have zero net polarization. Correspondingly, the total Stoner parameter $I$ is smaller, and so are the magnetic stabilization energy and the equilibrium magnetization on Ru. As we shall see below, tilting has a substantial effect on the effective density of states, and for large tiltings the ground state becomes paramagnetic. It follows from the above discussion, however, that the ground state is always either ferro- or paramagnetic.
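The extended Stoner construction used here is easy to reproduce on a toy model. The sketch below is our illustration, not the LAPW data: the peaked model DOS and all parameter values are assumptions. It defines $\tilde{N}(m)$ as the moment divided by the rigid-band exchange splitting needed to sustain it; for a DOS piled up at $E_F$, $\tilde{N}(m)$ falls off with $m$, and the equilibrium moment sits where $\tilde{N}(m)=1/I$.

```python
import numpy as np

# Toy DOS (states/eV per spin): constant background plus a peak at the
# paramagnetic Fermi level, mimicking the pile-up discussed in the text.
eps = np.linspace(-3.0, 3.0, 20001)
dos = 0.5 + 2.0 / (1.0 + (eps / 0.1) ** 2)
# Cumulative number of states C(eps), trapezoidal rule
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (dos[1:] + dos[:-1]) * np.diff(eps))))

def fermi_level(n_spin):
    """Invert C(eps) = n_spin for the spin-channel Fermi level."""
    return np.interp(n_spin, cum, eps)

n_spin0 = np.interp(0.0, eps, cum)   # paramagnetic filling per spin (E_F = 0)

def n_eff(m):
    """Extended-Stoner effective DOS: moment / rigid-band exchange splitting."""
    e_up = fermi_level(n_spin0 + m / 2.0)
    e_dn = fermi_level(n_spin0 - m / 2.0)
    return m / (e_up - e_dn)

moments = np.linspace(0.05, 1.5, 30)
n_tilde = np.array([n_eff(m) for m in moments])
```

With a flatter model DOS the same construction produces a plateau in $\tilde{N}(m)$ instead of a rapid decrease, which is the situation described below for CaRuO$_3$.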
\begin{figure}[tbp] \centerline{\epsfig{file=SR3+CR3.epsi,height=0.95\linewidth,angle=-90}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{LAPW densities of states in the $T_{2g}$ band for SrRuO$_3$ in the cubic and in its actual structure, and for CaRuO$_3$ in its actual structure, with the experimental lattice parameters (4\% smaller for CaRuO$_3$).} \label{SR3+CR3} \end{figure} \begin{figure}[tbp] \centerline{\epsfig{file=stoner.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{Extended Stoner plot for SrRuO$_3$ and CaRuO$_3$ in various structures, produced with the densities of states shown in Fig. \protect\ref{SR3+CR3}. Inverse Stoner factors are calculated in the LMTO atomic spheres as described in the text.} \label{SR3+CR3-st} \end{figure} This situation is in sharp contrast with classical localized magnetic materials, like NiO or FeO, where even when ferromagnetism is imposed, oxygen is polarized only very weakly, and the magnetization of the metal ion is even smaller than for the antiferromagnetic state. It is also in contrast with the ferromagnetic colossal magnetoresistance Mn oxides, where the antiferromagnetic ground state is destroyed by the double exchange interaction, competing with superexchange. These ruthenates have integer occupancy of the valence band, and thus double exchange is not operative. However, covalency effects, which are strong because of the large $pd$ hopping and the near-degeneracy of the Ru $(t_{2g})$ and O $(p_\pi )$ states, are operative. This strong covalency is what requires part of the magnetic moment to reside on the oxygen, since exchange splitting of the Ru $d$ states alone would require disrupting the covalent bonding with O.
In the crystal structures where it is possible to maintain O moments without ferromagnetic ordering, an antiferromagnetic state is likely to form (as in Sr$_2$YRuO$_6),$ but where it is not possible, like SrRuO$_3,$ a ferromagnetic ground state occurs instead. It is also worth noting that besides the double perovskite Sr$_2$YRuO$_6,$ where oxygen ions can polarize both in the ferro- and antiferromagnetic structure, and single perovskites Sr$_x$Ca$_{1-x}$RuO$_3$, there exist intermediate layered structures, which consist of perovskite (Sr,Ca)O$_2$ layers. Using the same arguments, we conjecture that if such compounds are magnetic, the effect of oxygen will cause ferromagnetic ordering inside the layers, while the interlayer coupling is strongly ferromagnetic if the layers share apical oxygens, but may be antiferromagnetic if they are connected by intermediate rocksalt layers (as in Sr$_2$RuO$_4$). \subsubsection{Role of the orthorhombic distortion} The observed crystal structure of both SrRuO$_3$ and CaRuO$_3$ is characterized by a substantial tilting of the RuO$_6$ octahedra. In SrRuO$_3$ the octahedra are rotated by 8$^{\circ },$ and in CaRuO$_3$ the distortion is about twice as large. In Fig.~\ref{SR3+CR3} we show the density of states in the $T_{2g}$ band for these three different structures. There are two interesting effects on the electronic structure associated with tilting. One is that hybridization between the $T_{2g}$ and $E_g$ bands becomes possible. This broadens the logarithmic singularity in the density of states. At the same time the bands become narrower and the gap between the antibonding $T_{2g}$ and $E_g$ bands grows. On the other hand, the unit cell is quadrupled, so new Bragg reflections appear. These yield pseudogaps at the new Brillouin zone boundaries, occurring at energies close to half filling (e.g., along the ${\bf \Gamma X}$ and ${\bf \Gamma M}$ directions) as well as at two-thirds filling (e.g., along the ${\bf \Gamma R}$ direction).
This second pseudogap thus appears to be near the Fermi level. One factor, band narrowing, tends to increase the equilibrium magnetization, but another one, the second pseudogap at the Fermi level, works against it. The actual trend looks like this: at small distortions the equilibrium magnetization grows. At some critical distortion magnitude, which is not far from the observed equilibrium distortion for SrRuO$_3$, the magnetization reaches a maximum and starts to decline. The first principles calculations show little difference between SrRuO$_3$ and CaRuO$_3,$ provided the same crystal structure is used, so the main difference in the observed behavior is indeed due to the different distortion magnitudes. To understand the changes caused by the tilting distortion it is instructive to look at the extended Stoner plots for different distortions. Fig. \ref{SR3+CR3-st} shows such plots for SrRuO$_3$ in the experimental structure, in the cubic (ideal perovskite) structure, and for CaRuO$_3$, as well as for CaRuO$_3$ in the SrRuO$_3$ structure. One may immediately note the extreme instability of the cubic structure, due to the peak at the Fermi level discussed above. However, because the density of states is piled up near the Fermi level, the resulting exchange splitting is small compared with the band width. For moderate tilting, like that in the experimentally observed SrRuO$_3$ structure, the peak broadens and it takes larger exchange fields to fully split it into occupied and unoccupied peaks. Finally, at even larger tiltings, corresponding to CaRuO$_3$, the peak is suppressed. In the effective density of states, as shown in Fig.~\ref{SR3+CR3-st}, this results in a nearly flat plateau, extending from $m=0$ to $m\approx 1$ $\mu _B.$ Coincidentally, this plateau matches $1/I,$ calculated as described in the previous Section, nearly exactly.
In other words, the total energy of CaRuO$_3$ is nearly independent of magnetization up to $m\approx 1$ $\mu _B$! The total energy as a function of magnetization is shown in Fig.~\ref{fixedM}, where the results of the fixed-spin-moment LAPW calculations are compared with the same energy differences in the Stoner theory\cite{notef}. We conclude that although CaRuO$_3$ is nonmagnetic in its ground state, long-wave paramagnons should be extremely soft in this compound. This should affect the transport, magnetic, and electronic properties. \begin{figure}[tbp] \centerline{\epsfig{file=fixedM.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{Ferromagnetic stabilization energy for CaRuO$_3$ in its actual crystal structure and in the SrRuO$_3$ crystal structure. First-principles LAPW fixed-moment calculations (squares; the dashed lines are guides to the eye) are shown together with the approximate Stoner formula (Eq.\protect\ref{StonerE}), based on the data shown in Fig. \protect\ref{SR3+CR3-st}.} \label{fixedM} \end{figure} \subsubsection{Transport properties} Unusual transport properties of SrRuO$_3$ are due to the following three peculiarities: (1) while the DOS in both spin subsystems are nearly the same, the partial plasma frequency in the majority spin channel is 3 times larger than in the minority spin channel, a manifestation of the proximity to the half-metallic regime, which would occur if the magnetization were 2 $\mu _B$ instead of 1.59 $\mu _B;$ (2) there is strong coupling between electrons, phonons, and magnons, which probably produces substantial spin-flip scattering of electrons; and (3) in both spin channels the Fermi surfaces consist of several sheets of complicated topology: hole-like, electron-like, and open, so that hole-like and electron-like parts compensate each other. Let us start with the electrical resistivity, and assume for simplicity that two bands are present, spin-up and spin-down.
Let us further assume that the sources of the resistivity are scattering of electrons by phonons, with the coupling constant $\lambda _{ph\uparrow \uparrow }=\lambda _{ph\downarrow \downarrow }=\lambda _{ph},$ and by magnons, with the coupling constant $\lambda _m.$ Since the DOS are approximately equal, $N_{\uparrow }=N_{\downarrow }=N\approx 23$ states/Ry, also $\lambda _{\uparrow \downarrow }=\lambda _{\downarrow \uparrow }=\lambda _m.$ The specific heat renormalization in each band is now $(1+\lambda _{ph}+\lambda _m),$ which would need to be $\approx 4.0$ to agree with experiment\cite{27}. In the lowest-order variational solution of the Boltzmann equation, given by Allen\cite{pinski} (see also Ref. \onlinecite{eilat}), the resistivity of such a system at sufficiently high temperature is $\rho =8\pi ^2kT(\lambda _{ph}+\lambda _m)/\omega _p^2,$ where the so-called ``scattering-in'' term, which is usually small in cubic crystals, is neglected, and $\omega _p^2=\omega _{p\uparrow }^2+\omega _{p\downarrow }^2$ is the total plasma frequency squared (one can find in the literature\cite{fert} a so-called ``two-current formula'' which gives the same result when the ``scattering-in'' term is neglected; there are some differences between the formulas of Refs. \onlinecite{pinski} and \onlinecite{fert}, which we discuss in the Appendix). From our first principles calculations, $\omega _{p\uparrow }=3.3$ eV and $\omega _{p\downarrow }=1.5$ eV. In the nonmagnetic phase $\omega _p=6.2$ eV, the same as the total plasma frequency in the ferromagnetic phase. At $T\agt 30$ K and up to the Curie temperature the resistivity is reported to be linear\cite{27}. The linear coefficient ($\sim 1$ $\mu \Omega \cdot $cm/K) corresponds to $(\lambda _{ph}+\lambda _m)\approx 2.9,$ very close to the number extracted from the electronic specific heat. Above $T_C$ the resistivity changes slope, remaining linear up to at least several hundred Kelvin.
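As a rough sanity check (our own back-of-the-envelope estimate, not taken from the text; the CODATA constants and the Gaussian-to-SI resistivity conversion are the only inputs), one can evaluate the slope $d\rho /dT=8\pi ^2k_B\lambda /\hbar \omega _p^2$ implied by the calculated plasma frequencies and invert it for the measured $\sim 1$ $\mu \Omega \cdot $cm/K:

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34     # J s
k_B  = 1.380649e-23        # J / K
eV   = 1.602176634e-19     # J
GAUSS_S_TO_OHM_CM = 8.9875517874e11   # 1 Gaussian second of resistivity in Ohm*cm

# Partial plasma frequencies from the first-principles calculation (text above)
w_up, w_dn = 3.3, 1.5      # eV
w_p2 = (w_up * eV / hbar) ** 2 + (w_dn * eV / hbar) ** 2   # (rad/s)^2

# Slope per unit coupling: rho = 8 pi^2 k_B T lambda / (hbar w_p^2)
slope_per_lambda = 8 * np.pi ** 2 * k_B / (hbar * w_p2) * GAUSS_S_TO_OHM_CM
slope_per_lambda_uohm = slope_per_lambda * 1e6             # micro-Ohm cm / K

# Coupling implied by the measured slope ~1 micro-Ohm cm / K below T_c
lam = 1.0 / slope_per_lambda_uohm
```

This gives about $0.31\lambda$ $\mu \Omega \cdot $cm/K, i.e. $\lambda \approx 3$, consistent to within rounding with the $(\lambda _{ph}+\lambda _m)\approx 2.9$ quoted above.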
The slope is, however, smaller than below $T_C,$ and corresponds to $\lambda =\lambda _{ph}+\lambda _{pm}\approx 1.5,$ where $\lambda _{pm}$ is the electron-{\it para}magnon coupling constant. This value differs from that quoted in Ref.\onlinecite{27}, because of considerable differences in the calculated band structure. Thus, we conclude that the high-temperature resistivity of SrRuO$_3$ indicates rather strong electron-paramagnon, and even stronger electron-magnon coupling, with the reservation that probably in this system one cannot really separate electron-phonon and electron-magnon scattering completely, because the corresponding degrees of freedom are coupled. The problem noted in Ref.\onlinecite{27}, namely that at high temperatures the mean free path is comparable to the lattice parameter, yet no saturation is seen in the resistivity, remains. The resistivity of CaRuO$_3$ has also been studied. In the studies reported in the literature \cite{1,13,bouch} the high-temperature resistivity shows the same slope as in SrRuO$_3,$ in full agreement with our observation that the electronic structure of both compounds is very similar. At low temperatures, however, the resistivity behaves very differently, namely it increases nearly linearly at small $T$ with a large slope. The slope decreases eventually, and at room temperature the behavior becomes similar to that of SrRuO$_3.$ This low-temperature linearity indicates that the excitations responsible for the resistivity (apparently, paramagnons) soften as $T\rightarrow 0,$ indicating a magnetic instability at $T=0$ or at slightly negative $T$ (in the Curie-Weiss sense). This is in agreement with our result that CaRuO$_3$ is on the borderline of a ferromagnetic state. Again, paramagnons are strongly coupled with phonons, and this leads to the large coupling strength. The low-temperature resistivity has also attracted attention. Experimentally, the resistivity initially increases rather quickly.
Allen {\it et al.}\cite{27} observed a low-temperature power law $\rho (T)-\rho (0)\propto T^x$ with $x$ between 1 and 2. Klein {\it et al.}\cite{15} found that below 10 K the resistivity can be reasonably well fit with a quadratic law, but an even better (linear) fit was found, up to 30 K, for the dependence of the resistivity on magnetization. We interpret this observation as follows: an increase stronger than $T^5$ indicates that the excitations responsible for the low-temperature scattering have a sublinear dispersion. Conventional magnons, with $\omega \propto k^2$ dispersion, produce $\rho \propto T^2,$ in good agreement with the experiment. In fact, the experimental exponent is even below 2, which is easily accounted for by Fermi surface effects: part of the temperature dependence comes from the term $({\bf v}_{{\bf k}}-{\bf v}_{{\bf k}^{\prime }})^2,$ if it is proportional to $({\bf k}-{\bf k}^{\prime })^2;$ this is not the case in SrRuO$_3,$ where one of the two Fermi surfaces (majority spin) is a small sheet with heavy electrons (similarly, magnetic alloys where momentum conservation does not hold show $\rho \propto T^{3/2};$ see Ref. \onlinecite{fert32}). A possible problem with this interpretation of the low-temperature resistivity is that, as was already noted\cite{15}, in elemental ferromagnets the magnon-limited resistivity is almost three orders of magnitude smaller than what would be needed to explain the low-temperature resistivity of SrRuO$_3$ (where $\rho \rightarrow \rho _0+aT^2,$ $a\approx 0.02$ $\mu \Omega \cdot $cm/K$^2).$ This can be resolved if we invoke the anomalously large magnon-phonon coupling, which, as discussed above, originates from the crucial role played by oxygen in the magnetic properties of the ruthenates.
The strong electron-magnon coupling at low temperature in SrRuO$_3$ is closely related to the large electron-paramagnon coupling at high temperatures and in CaRuO$_3.$ One can make a rough estimate of the characteristic frequency of the magnons responsible for the resistivity: the Schindler-Rice formula\cite{rice}, derived for $s$-$d$ paramagnon scattering, should be qualitatively applicable here, because we also have light electrons which carry current and are scattered by magnons into a heavy, transport-inert band. This formula reads \begin{eqnarray*} \rho (T) &\approx &\alpha (T/T_m)^2[J_2(T_m/T)-(T/T_m)^3J_5(T_m/T)] \\ J_n(x) &=&\int_0^x\frac{4z^n\,dz}{\sinh ^2(z/2)}, \end{eqnarray*} and has asymptotic behavior at $T\rightarrow 0$ as $\alpha (T/T_m)^2\pi ^2/3$ and at $T\gg T_m$ as $\approx 0.8\alpha (T/T_m),$ where $kT_m$ is the characteristic energy of the magnons. Using the experimental number $(\rho -\rho _0)/T^2\approx 0.02$ $\mu \Omega \cdot $cm/K$^2$ and assuming that the magnon-limited part of the high-temperature resistivity is $\sim 0.5$ $\mu \Omega \cdot $cm/K, we arrive at $T_m\sim 70$ K, which is a low but not impossible number. The Hall coefficient in SrRuO$_3$ and CaRuO$_3$\cite{12} has attracted considerable attention. In both compounds the Hall constant $R$ shows an unusual temperature dependence, changing sign at $T\sim 50$ K. At this point, however, the similarity ends. For each given temperature the Hall resistivity $\rho _{xy}$ in CaRuO$_3$ is nearly perfectly proportional to the field, as it should be for ordinary Hall processes. In SrRuO$_3,$ by contrast, $d\rho _{xy}/dH$ decreases substantially with temperature, and only well above $T_c$ does $\rho _{xy}$ become a linear function of $H.$ This closely resembles the so-called extraordinary Hall effect in ferromagnets. The physics of the extraordinary Hall effect is as follows: below $T_c,$ the internal magnetic field is much larger than that applied in a typical Hall experiment.
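The crossover encoded in this formula is easy to check numerically. The sketch below is our illustration only: $\alpha =1$ and the integration grid are arbitrary choices, and the numerical prefactors of the asymptotes depend on the normalization of $J_n$, so only the $T^2$ scaling at $T\ll T_m$ and the linear scaling at $T\gg T_m$ are verified.

```python
import numpy as np

def J(n, x):
    """J_n(x) = int_0^x 4 z^n dz / sinh^2(z/2), by trapezoidal quadrature."""
    z = np.linspace(1e-8, x, 200_001)
    f = 4.0 * z ** n / np.sinh(z / 2.0) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def rho(t, alpha=1.0):
    """Schindler-Rice resistivity, with t = T / T_m."""
    return alpha * t ** 2 * (J(2, 1.0 / t) - t ** 3 * J(5, 1.0 / t))
```

Doubling $T$ deep in the low-temperature regime quadruples $\rho$, while doubling it far above $T_m$ only doubles $\rho$, reproducing the quoted asymptotics up to overall prefactors.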
However, the Hall currents induced in different magnetic domains mutually cancel. The applied field acts by aligning domains and lifting this cancellation. This process defines the large slope of $d\rho _{xy}/dH$ at low fields. At a field close to the saturation magnetization $4\pi M_s$ all domains are aligned, and further change of the Hall current is due to the applied field itself (the ordinary Hall effect). It is tempting to associate the nonlinear field dependence of the Hall resistivity in SrRuO$_3$ with this effect. However, this hypothesis has been discounted by the authors of Ref. \onlinecite{12} for the following reason: in standard extraordinary Hall effect theory the intersection of the linear low-field and high-field asymptotes occurs at $4\pi M_s.$ In SrRuO$_3$ the position of the intersection is roughly the same for all temperatures below $T_c,$ and falls between 3 and 4 T. Magnetometer data show that $4\pi M_s$ is about 0.12 T at $T=5$ K, and, naturally, drops to zero at $T=T_c.$ Furthermore, a closer look at the data reveals that the slope of the Hall coefficient changes in a smooth manner, unlike in conventional ferromagnets, where it changes rather sharply near $H=4\pi M_s.$ Studies of the bulk magnetization in polycrystalline\cite{7} and single crystal\cite{32} samples of SrRuO$_3$ show that the magnetization is not saturated even at applied fields of several T. This has been ascribed to the strong magnetocrystalline anisotropy, as measured by Kanbayashi\cite{32}, and expected for a 4d magnet. Although hysteresis measurements for the thin film samples on which Hall measurements were taken apparently showed saturation near 1.5 T, we speculate that the domains may not yet be fully aligned at this field, yielding a continuing non-linear field dependence of the Hall resistivity. The sign reversal of the Hall conductivity has received even more attention.
In the literature two explanations can be found: one\cite{27} assumes different temperature dependences for the electron and hole scattering rates, because of the different scattering mechanisms (phonon vs. magnon), which may then yield a strong temperature dependence of the Hall resistivity, and a sign change. It has been argued\cite{12} that this hypothesis should not work, since CaRuO$_3$ is nonmagnetic but still shows a sign-changing Hall effect. Instead, the authors of Ref. \onlinecite{12} suggested that the sign may change because the number of electrons and holes in the energy window $\sim kT$ around the Fermi energy may change with $T.$ However, the sign reversal in CaRuO$_3$ and SrRuO$_3$ could be due to different physical reasons. This possibility is also suggested by the very different field dependence of the Hall resistivity in the two cases. On the other hand, it follows from our calculations, and is also indicated by various experiments, that CaRuO$_3$ is on the verge of a magnetic instability, and the interplay between the phonons and paramagnons may play much the same role as the interplay between the phonons and magnons in SrRuO$_3.$ Furthermore, besides the temperature dependence of the relaxation rates and the temperature broadening of the Fermi level, there is yet another effect which may cause the sign change in SrRuO$_3$. The exchange splitting must be very temperature dependent in SrRuO$_3.$ Unlike in common ferromagnets like Fe, where the Curie temperature corresponds to disordering of local moments, here the magnetization {\it disappears} at $T_c,$ including the {\it local} magnetization. Thus, the spin splitting changes with temperature, essentially disappearing around $T_c.$ This is in contrast with most ferromagnets, where an effective local spin splitting exists well above $T_c,$ without any macroscopic magnetization. Thus, the band structure itself is strongly temperature dependent.
This effect can be operative in SrRuO$_3$ in addition to the two other possible mechanisms. \begin{figure}[tbp] \centerline{\epsfig{file=hall.eps,width=0.95\linewidth}} \vspace{0.1in} \setlength{\columnwidth}{3.2in} \nopagebreak \caption{Calculated inverse Hall number for SrRuO$_3$ (ferromagnetic and nonmagnetic) and for CaRuO$_3$. Note the different signs for the two spin subbands in SrRuO$_3$ and the strong dependence on the position of the Fermi level.} \label{hallSR3} \end{figure} The prerequisite for any of these mechanisms to be relevant is that there is strong compensation between the hole-like and the electron-like contributions from the different bands. To check this, we have calculated the Hall conductivity $\sigma _H$ and the Hall coefficient $R_H=\sigma _H/\sigma _0^2$ for all the individual bands in SrRuO$_3$ and CaRuO$_3,$ following the procedure described in Ref.\onlinecite{allenHall}. The results are shown in Fig. \ref{hallSR3}. It was observed by Schultz {\it et al.}\cite{allenHall} that quantitative calculations of the Hall coefficient are extremely sensitive to the sampling of the Brillouin zone; it is impractical for these 20-atom-per-unit-cell structures to calculate the first-principles band structure on a ${\bf k}$-mesh comparable with the ultradense meshes used in Ref. \onlinecite{allenHall} for elemental metals, and instead we have relied on interpolation between first-principles band energies calculated at 100 points in the irreducible wedge of the zone. Thus, our calculations shown in Fig. \ref{hallSR3} cannot be taken quantitatively, but rather illustrate the qualitative fact that the Hall conductivity has different signs in different bands and spin channels. The net Hall conductivity is determined by a strong cancellation of hole- and electron-like contributions from different bands, which in turn is very sensitive to the relative positions of the bands.
Evidently, this balance can be easily violated by such temperature-dependent factors as lattice distortion, magnetization, and relaxation times. The mechanism suggested by Klein {\it et al.}\cite{12} is also possible, since the net Hall conductivity does change sign within a few hundred K around $E_F$. Finally, very recent measurements\cite{guer} of the Hall coefficient in mixed Sr$_x$Ca$_{1-x}$RuO$_3$ samples showed that for intermediate concentrations it does not change sign with temperature, suggesting that the sign reversals in the pure compounds are accidental and unrelated. \section{Conclusions} At this time there is already a fairly substantial body of experimental literature on these ruthenates, including magnetic measurements, spectroscopic studies, specific heat data, and determinations of electronic transport and superconducting properties. These measurements demonstrate unusual and perhaps unexpected properties, and many of these have been ascribed to correlation effects. For example, the specific heats in the metallic compounds show substantial enhancements over the bare band structure values, superconductivity occurs in a layered material in apparent proximity to magnetic phases, quasiparticle bands measured by ARPES show weaker dispersion than band structure calculations, satellites are observed in angle-integrated photoemission spectra, and the transport properties of the metallic phases are unusual, showing, e.g., sign reversals in the Hall coefficient. Since this evidence clearly suggests something unusual about the perovskite-derived ruthenates, it is tempting to ascribe it to strong correlation effects, particularly since these effects are all either qualitatively in the direction expected for a correlated system or can conceivably arise from the additional complexity introduced by correlation effects.
On the other hand, chemical trends lead to the expectation that, all things being equal, 4d Ru oxides should be less prone to strongly correlated behavior than the corresponding 3d oxides, and much less prone to such effects than cuprates. This is because of the much more extended 4d orbitals of the Ru ions, which should lead to stronger hybridization, better screening, and a lower effective Hubbard $U$. Furthermore, although much of the data are at first sight qualitatively in accord with general expectations for a correlated system, they have not been quantitatively explained in these terms, and there are some data that are rather difficult to understand purely in terms of a correlated scenario, most notably the disappearance of magnetism upon doping Ca into the SrRuO$_3$ system, and the ferromagnetic ground state of the integer-occupancy compound SrRuO$_3$ (which is thus not a double-exchange system). We have performed first principles, band structure based calculations within the LSDA for SrRuO$_3$, CaRuO$_3$ and Sr$_2$YRuO$_6.$ Although this approach fails miserably in systems that are truly strongly correlated, it does yield the correct magnetic and electronic states in these materials, including quantitative agreement with the known magnetic properties in all cases in these ruthenates. Moreover, the different magnetic behaviors can be fully understood in terms of simple and straightforward one-electron tight binding models and Stoner theory. Although the interpretation of the transport properties in terms of a conventional one-electron picture and Bloch-Boltzmann theory is not as straightforward, we show that such an approach is not inconsistent with the existing body of experimental evidence. A key notion for understanding the transport in these systems is strong electron-phonon-(para)magnon coupling, which in turn can be understood in the framework of the band theory.
In agreement with expectations based on chemical trends, rather strong hybridization is found between the Ru 4d and O 2p states in these materials. While antagonistic to a strong correlation scenario, this hybridization is in large part responsible for the unusual properties in our band picture, including the very fact that magnetism occurs at all in a 4d metallic oxide. This strong hybridization leads to a ferromagnetic direct exchange interaction between Ru and O, and the cooperation between the Ru and O contributions to the Stoner parameter leads to the magnetic ground states. As a result, the O ions in these ruthenates make substantial contributions to the magnetization density, which may be observable in neutron scattering experiments with O form factors included in the refinements. The importance of $p-d$ hybridization also leads to a strong coupling of magnetic and structural degrees of freedom, resulting in, for example, the destabilization of the ferromagnetic state due to octahedral tilting in CaRuO$_3$. One consequence of our scenario is that when Ru ions are bonded to the same O, as neighboring Ru ions are in the perovskite structure, the interaction between them will be strongly ferromagnetic. This means that magnetic fluctuations in layered ruthenates like Sr$_2$RuO$_4$ and the associated Ruddlesden-Popper (RP) series of compounds are predicted to have predominantly ferromagnetic in-plane character, although alternating layers, or perovskite blocks in the RP series, may be coupled antiferromagnetically to each other via superexchange through the rocksalt blocks. Such ferromagnetic fluctuations would be pair-breaking for singlet ($s$- or $d$-wave) superconductivity, but not for triplet superconductivity, as suggested for instance by Rice and Sigrist\cite{26} for Sr$_2$RuO$_4$. In fact, for triplet pairing both magnetic fluctuations and phonons provide Cooper attraction.
Finally, when Ru ions are not connected at all via common O ions, as in Sr$_2$YRuO$_6,$ the Ru-Ru coupling is via two intervening O ions, both of which are strongly hybridized with and ferromagnetically coupled to the nearest Ru, but couple to each other via an antiferromagnetic superexchange interaction. This results in an antiferromagnetic state. The strength and importance of covalent transition metal--oxygen interactions, combined with magnetism and metallicity, is perhaps unique to these ruthenates. Already a number of interesting physical properties have been found among these compounds, and no doubt more interesting physics remains to be found in this family. \acknowledgements We acknowledge enlightening discussions with J.S. Dodge, R.P. Guertin, Lior Klein, Mark Lee, and W.E. Pickett. Work at the Naval Research Laboratory is supported by the Office of Naval Research. Computations were performed using the DoD HPCMO computing centers at NAVO and ASC.
\section{Introduction}\label{sec:intro} We are interested in the well-posedness and numerical approximation of the following $d$-dimensional SDE: \begin{align}\label{eq:SDE} X_t = X_0 + \int_0^t b(X_s) \, d s + B_t , \quad t\in [0,1], \end{align} where $X_0 \in \mathbb{R}^d$, $b$ is a distribution in some nonhomogeneous Besov space $\mathcal{B}_p^\gamma$ and $B$ is an $\mathbb{R}^d$-fractional Brownian motion (fBm) with Hurst parameter $H$. When $B$ is a standard Brownian motion ($H=1/2$), this equation has received a lot of attention in the case of an irregular drift, see for instance \cite{Zvonkin,Veretennikov} for bounded measurable drift or \cite{KrylovRockner} under some integrability condition. Strong well-posedness was obtained in those cases, which contrasts with the non-uniqueness, and sometimes non-existence, that can happen for the corresponding equations without noise. In the case where $B$ is a fractional Brownian motion, the results are more recent and we refer to \citet{NualartOuknine} for H\"older continuous drifts, then to \citet{Banos}, \citet{CatellierGubinelli}, \citet{GHM}, \citet{anzeletti2021regularisation} and \citet{GaleatiGerencser} for distributional drifts when the Hurst parameter is smaller than $1/2$. The simplest approximation scheme for \eqref{eq:SDE} is the Euler scheme with time-step~$h$ \begin{align*} X_t^{h} = X_0+ \int_0^t b(X^{h}_{r_h}) \, d r + B_t , \quad t\in [0,1], \end{align*} where $r_h = h \lfloor \frac{r}{h} \rfloor$. For the numerical analysis of Brownian SDEs with smooth coefficients, including the previous scheme and higher-order approximations, we point to a few classical works by Pardoux, Talay and Tubaro~\cite{pardoux1985discretization,talay1990expansion}, see also \cite{kloeden1992stochastic}. The strong error $ \|X_{t} - X_{t}^h\|_{L^m(\Omega)}$ is known to be of order $h$ (and $h^{1/2}$ when the noise is multiplicative).
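For readers who want to experiment, the scheme above is straightforward to implement. The sketch below is our illustration, not code from the paper: the exact Cholesky simulation of fBm from the fractional Gaussian noise covariance and the choice of a smooth drift are standard but our own assumptions.

```python
import numpy as np

def fbm_increments(n, hurst, rng):
    """Increments of an fBm on a grid of n steps of size 1/n (Cholesky method)."""
    h2 = 2 * hurst
    k = np.arange(n)
    # Covariance of fractional Gaussian noise at unit step, rescaled to step 1/n
    gamma = 0.5 * (np.abs(k - 1) ** h2 + (k + 1) ** h2 - 2 * k ** h2)
    cov = gamma[np.abs(k[:, None] - k[None, :])] * n ** (-h2)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def euler(b, x0, n, hurst, rng):
    """Euler scheme for dX = b(X) dt + dB^H on [0, 1] with step h = 1/n."""
    db = fbm_increments(n, hurst, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + b(x[k]) / n + db[k]
    return x
```

For small $H$ and fine grids the Cholesky factorization becomes expensive ($O(n^3)$); circulant-embedding methods scale better but are longer to write down.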
When the coefficients are irregular, \citet{dareiotis2021quantifying} recently obtained a strong error bound with the optimal rate of order $1/2$ for merely bounded measurable drifts, even if the noise is multiplicative. This was extended to integrable drifts with a Krylov-R\"ockner condition by \citet{le2021taming}. We also refer to the review \cite{szolgyenyi2021stochastic} and references therein for discontinuous coefficients, and to the recent weak error analysis of \citet{jourdain2021convergence} for integrable drifts. Besides, we mention that when the drift is a distribution in a Bessel potential space with negative regularity, \citet{de2019numerical} have obtained a rate of convergence for the so-called virtual solutions of a (Brownian) SDE, using a $2$-step mollification procedure of the drift. Let us now recall briefly what is known when $B$ is a fractional Brownian motion. First, \citet{NN} considered one-dimensional equations with $H>1/2$, smooth coefficients and multiplicative noise, i.e. the more general case with $B$ replaced by a symmetric Russo-Vallois~\cite{RussoVallois} integral $\int_{0}^t \sigma(X_{s}) \, d^oB_{s}$ in \eqref{eq:SDE}. They proved that the rate of convergence for the strong error is exactly of order $2H-1$. Then \citet{HuLiuNualart} introduced a modified Euler scheme to obtain an improved convergence rate of order $2H-1/2$, still in the multiplicative case. They also derived an interesting weak error rate of convergence. Recently, \citet{butkovsky2021approximation} considered \eqref{eq:SDE} with any Hurst parameter $H\in (0,1)$ and H\"older continuous drifts in $\mathcal{C}^\alpha$, for $\alpha \in [0,1]$. They obtained the strong error convergence rate $h^{(1/2+\alpha H) \wedge 1 - \varepsilon}$, which holds whenever $\alpha\geq 0$ and $\alpha>1-1/(2H)$. The latter condition is optimal in the sense that it corresponds to the existence and uniqueness result for \eqref{eq:SDE} established in \cite{CatellierGubinelli}.
Our main contribution in this paper is an extension of their result to distributional drifts, i.e. to negative values of $\alpha$, including the threshold $\alpha=1-1/(2H)$. ~ First, we show that if $b$ is in the Besov space $\mathcal{B}^\gamma_{p}$ and $\gamma-d/p > 1/2 - 1/(2H)$, then there exists a weak solution $(X,B)$ to \eqref{eq:SDE} which has some H\"older regularity. This result is a direct extension of \cite[Theorem 2.8]{anzeletti2021regularisation} to any dimension $d\geq1$ and was also recently extended to time-dependent drifts in \cite{GaleatiGerencser}. The condition $\gamma-d/p > 1/2 - 1/(2H)$ allows negative values of $\gamma$, so that $b$ can be a genuine distribution. Solutions to \eqref{eq:SDE} are then understood as processes of the form $X_{t} = X_{0} + K_{t} + B_{t}$, where $K_{t}$ is the limit of $\int_{0}^t b^n(X_{s})\, ds$ for any approximating sequence $(b^n)_{n\in \mathbb{N}}$. We see in particular that this approach is well suited to numerical approximation. Hence we propose a numerical scheme to approximate \eqref{eq:SDE}. To that end, for a time-step $h$ and a sequence $(b^n)_{n\in \mathbb{N}}$ that converges to $b$ in a Besov sense, we consider the following tamed Euler scheme, defined on the same probability space and with the same fBm $B$ as $X$: \begin{align}\label{def:EulerSDE} X_t^{h,n} = X_0+ \int_0^t b^n(X^{h,n}_{r_h}) \, d r + B_t , \end{align} where $r_h = h \lfloor \frac{r}{h} \rfloor$. Choosing $b^n = g_{1/n}\ast b$ as a convolution of $b$ with the Gaussian density $g_{1/n}$ of variance $1/n$, and for a careful choice of $n$ as a function of $h$, we prove under the stronger condition $\gamma-d/p > 1-1/(2H)$ that the following rate of convergence holds: \begin{align*} \forall h\in (0,1), \quad \sup _{t \in [0,1] }\big\|X_{t}-X_{t}^{h,n}\big\|_{L^{m}(\Omega)} & \leq C h^{\frac{1}{2(1-\gamma+\frac{d}{p})}-\varepsilon} .
\end{align*} A more general version of this result is presented in Theorem \ref{thm:main-SDE} and discussed thereafter, in particular concerning the value of the rate. We also obtain a non-explicit rate of convergence in the limit case of the strong regime, that is when $\gamma-d/p = 1-1/(2H)$ and $\gamma > 1-1/(2H)$. This extends the result of \citet{butkovsky2021approximation} to negative values of $\alpha \equiv \gamma-d/p$, and matches the $1/2-\varepsilon$ rate of convergence obtained in the limit case where $\gamma-d/p=0$ and $b$ is a bounded measurable function. Unlike previous works, our method does not rely on the Girsanov transform and avoids computing exponential moments of functionals of the noise or its discrete-time approximation. As a byproduct, we deduce that under the condition $\gamma-d/p \ge 1-1/(2H)$ and $\gamma > 1-1/(2H)$, $X$ is in fact a strong solution and it is pathwise unique in a class of H\"older continuous processes. Note that in the sub-critical case $\gamma-d/p>1-1/(2H)$, a notion of uniqueness (path-by-path) was already proven in \cite{CatellierGubinelli}, and strong existence was established in \cite{GHM}, for solutions in the sense of nonlinear Young differential equations. We compare these results to ours in Remark~\ref{rk:comparisonNotions}. ~ Our proof relies on several new regularisation properties of the $d$-dimensional fBm and of the discrete-time fBm, which can be seen as extensions of Davie's lemma~\cite[Prop. 2.1]{Davie}. Namely, for functions $f$ in Besov spaces of negative regularity (resp. bounded $f$ for the discrete-time fBm), we obtain upper bounds on the moments of quantities such as $\int_{s}^t f(x+B_{r})\, dr$ in terms of $x$ and $(t-s)$, see Propositions~\ref{prop:regfBm},~\ref{prop:bound-E1-SDE} and \ref{prop:newbound-E2}. These upper bounds are sharper than if $B$ were replaced by any smooth function, which is why we speak of regularisation properties of the fBm.
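As a purely illustrative aside, the mollification $b^n = g_{1/n}\ast b$ and the coupling of the mollification level to the time-step (the choice $n_h = \lfloor h^{-1/(1-\gamma+d/p)}\rfloor$ made in Corollary \ref{cor:bn-choice}) can be sketched numerically in one dimension; the function names and the grid discretisation of the convolution below are ours and serve only to fix ideas.

```python
import numpy as np

def mollify(b_vals, x_grid, var):
    """Heat-kernel mollification b^n = g_{var} * b of a drift sampled on a
    uniform grid, approximating the convolution integral by a Riemann sum."""
    dx = x_grid[1] - x_grid[0]
    diff = x_grid[:, None] - x_grid[None, :]
    g = np.exp(-diff**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return (g * b_vals[None, :]).sum(axis=1) * dx

def taming_level(h, gamma, d, p):
    """n_h = floor(h^{-1/(1 - gamma + d/p)}): the coupling of the
    mollification level n to the time-step h."""
    return int(np.floor(h ** (-1.0 / (1.0 - gamma + d / p))))
```

For instance, with $h=0.01$, $\gamma=-1/2$, $d=1$ and $p=\infty$ the coupling gives $n_h = \lfloor 0.01^{-2/3}\rfloor = 21$.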
The main tool to prove these results is the stochastic sewing lemma developed by \citet{le2020stochastic}. The critical case with $\gamma-d/p = 1-1/(2H)$ and $\gamma > 1-1/(2H)$ requires a version of the stochastic sewing lemma with critical exponents that induces a logarithmic factor in the result (see \cite[Theorem 4.5]{athreya2020well} and \cite[Lemma 4.10]{FHL}). We use this lemma in Proposition~\ref{prop:bound-E1-SDE-critic} to prove an upper bound on the moments of $\int_s^t f(X_r)-f(X^{h,n}_r) \, dr$. This leads to the following bound for $\mathcal{E}^{h,n} = X - X^{h,n}$, \begin{align*} \| \mathcal{E}^{h,n}_{t} - \mathcal{E}^{h,n}_{s} \|_{L^m} &\leq C \, \left(\|\mathcal{E}^{h,n}\|_{L^\infty_{[s,t]}L^m} + \epsilon(h,n) \right) \, (t-s)^{\frac{1}{2}} \\ &\quad + C \, \Big( \|\mathcal{E}^{h,n}\|_{L^\infty_{[s,t]}L^m}+\epsilon(h,n) \Big) \, \left| \log \big( \|\mathcal{E}^{h,n}\|_{L^\infty_{[s,t]}L^m} + \epsilon(h,n) \big) \right| \, (t-s), \end{align*} for some $\epsilon(h,n) = o(1)$. We then prove a Gr\"onwall-type lemma with logarithmic factors (Lemma \ref{lem:rate-critical}) which yields a control of $\|\mathcal{E}^{h,n}\|_{L^\infty_{[0,1]}L^m}$ by a power of $\epsilon(h,n)$. \paragraph{Organisation of the paper.} We start with definitions and notations in Subsection \ref{sec:notations-SDE}, then state our main results in Subsection \ref{sec:main}. In Section \ref{sec:proofWeakEx}, we first recall some Besov estimates in Subsection \ref{sec:besov}, then state a first regularisation property of the fBm in Subsection \ref{sec:reg}. We prove tightness and stability results in Subsection \ref{sec:tightness}. We conclude the proof of weak existence and use the convergence of the tamed Euler scheme to establish strong existence and uniqueness in Subsection \ref{sec:proofEx}. The strong convergence of the numerical scheme \eqref{def:EulerSDE} to the solution of \eqref{eq:SDE} is established in Section \ref{sec:overview-SDE}. 
This proof relies strongly on the regularisation lemmas for fBm and discrete-time fBm which are stated and proven in Section \ref{sec:stochastic-sewing}. In Section \ref{sec:simulations}, we provide examples of SDEs that can be approximated with our result. We also run simulations of the scheme \eqref{def:EulerSDE} to study its empirical rate of convergence and compare it with the theoretical result. Finally we gather some technical proofs based on the stochastic sewing lemma in Appendix \ref{app:reg-fBm} and complete the proof of uniqueness in a larger class of processes in Appendix \ref{app:extend-uniqueness}. \section{Framework and results}\label{sec:numerical-analysis-SDE} \subsection{Notations and definitions}\label{sec:notations-SDE} In this section, we define notations that are used throughout the paper. \begin{itemize}[nosep,leftmargin=1em,labelwidth=*,align=left] \item On a probability space $(\Omega,\mathcal{F},\mathbb{P})$, we denote by $\mathbb{F} = (\mathcal{F}_{t})_{t\in [0,1]}$ a filtration that satisfies the usual conditions. \item The conditional expectation given $\mathcal{F}_{t}$ is denoted by $\mathbb{E}^{t}$ when there is no risk of confusion on the underlying filtration. \item An $\mathbb{R}^d$-valued stochastic process $(X_t)_{t \in[0,1]}$ is said to be adapted if for all $t \in[0,1], \ X_t$ is $\mathcal{F}_{t}$-measurable. \item For $\alpha \in[0,1]$, $I$ a subset of $[0,1]$ and $E$ a Banach space, we denote by $\mathcal{C}^\alpha_{I} E$ the space of $E$-valued mappings that are $\alpha$-H\"older continuous on $I$. The corresponding semi-norm for a function $f: [0,1] \rightarrow E$ reads \begin{align*} [f]_{\mathcal{C}^\alpha_{I} E} := \sup_{\substack{s,t \in I \\ t \neq s }} \frac{\| f_t-f_s \|_{E}}{|t-s|^{\alpha}} . \end{align*} \item The $L^m(\Omega)$ norm, $m \in [1,\infty]$, of a random variable $X$ is denoted by $\|X\|_{L^m}$ and the space $L^m(\Omega)$ is simply denoted by $L^m$. 
In that case we denote by $\mathcal{C}^\alpha_{I} L^{m}$ the space of $L^m(\Omega)$-valued mappings that are $\alpha$-H\"older continuous on $I$. For an $\mathbb{R}^d$-valued process $Z$, the corresponding semi-norm is then denoted by $[Z]_{\mathcal{C}^\alpha_{I} L^{m}}$. \item When $E$ is $\mathbb{R}$ or $\mathbb{R}^d$, we simply denote by $\mathcal{C}^\alpha_{I}$ the corresponding space and when $\alpha=0$, we use the notation $\mathcal{C}_{I}$. \item We write $L^\infty_{I}$ for the space of bounded measurable functions on a subset $I$ of $[0,1]$ and $L^\infty_I L^m := L^\infty(I, L^m(\Omega))$. For a Borel-measurable function $f:\mathbb{R}^d\to \mathbb{R}^d$, denote the classical $L^\infty$ and $\mathcal{C}^1$ norms of $f$ by $\|f \|_\infty = \sup_{x \in \mathbb{R}^d} |f(x)|$ and $\|f\|_{\mathcal{C}^1} = \| f \|_\infty + \sup_{x \neq y} \frac{|f(x)-f(y)|}{|x-y|}$. The corresponding norm for a process $Z: [0,1] \times \Omega \rightarrow \mathbb{R}^d$ is \begin{align*} \|Z\|_{L^\infty_I L^{m}} := \sup_{\substack{s \in I }} \| Z_s\|_{L^{m}} . \end{align*} \item For all $S,T \in [0,1]$ with $S \leq T$, define the simplex $\Delta_{S,T}$ by \begin{align*} \Delta_{S,T} = \{ (s,t) \in [S,T]^2 : s < t \}. \end{align*} \item For a process $Z: \Delta_{0,1} \times \Omega \rightarrow \mathbb{R}^d$, we still write \begin{equation*} [Z]_{\mathcal{C}^\alpha_{I} L^{m}} = \sup_{\substack{s,t \in I \\ s < t }} \frac{\| Z_{s,t} \|_{L^{m}}}{|t-s|^{\alpha}} ~~\mbox{and}~~ \|Z\|_{L^\infty_I L^{m}} = \sup_{\substack{s \in I }} \| Z_{0,s}\|_{L^{m}}. \end{equation*} \item In applications of the stochastic sewing lemma, we will need to consider increments of $Z$, which are given for any triplet of times $(s, u, t)$ such that $ s \leq u \leq t $ by $$ \delta Z_{s, u, t}:=Z_{s, t}-Z_{s, u}-Z_{u, t} .
$$ \item Finally, given a process $Z: [0,1] \times \Omega \rightarrow \mathbb{R}^d$, $\alpha \in (0,1]$, $m \in [1,\infty)$ and $q \in [1,\infty]$, we consider the following seminorm: for any $0 \leq s \leq t \leq 1$, \begin{align}\label{eq:defbracket} [Z]_{\mathcal{C}_{[s,t]}^{\alpha}L^{m,q}}:= \sup_{(u,v) \in \Delta_{s,t}}\frac{\|\mathbb{E}^u[|Z_v-Z_u|^m]^{\frac{1}{m}}\|_{L^q}}{(v-u)^\alpha}, \end{align} where the conditional expectation is taken with respect to the filtration the space is equipped with. By the tower property and Jensen's inequality for conditional expectation, we know that \begin{align} \label{eq:boundSeminorms} [Z]_{\mathcal{C}_{[s,t]}^{\alpha} L^m}= [Z]_{\mathcal{C}^{\alpha}_{[s,t]} L^{m,m}} \leq [Z]_{\mathcal{C}_{[s,t]}^{\alpha} L^{m,\infty}}. \end{align} \end{itemize} \paragraph{Heat kernel.} For any $t>0$, denote by $g_{t}$ the Gaussian kernel on $\mathbb{R}^d$ with variance $t$: \begin{align*} g_{t}(x)=\frac{1}{(2 \pi \, t)^{d/2}} \exp \left(-\frac{|x|^{2}}{2 t}\right), \end{align*} and by $G_{t}$ the associated Gaussian semigroup on $\mathbb{R}^d$: for $f:\mathbb{R}^d\to \mathbb{R}^d$, \begin{align}\label{eq:semi-group-gaussian} G_t f(x) = \int_{\mathbb{R}^d} g_t(x-y) \, f(y) \, d y . \end{align} \paragraph{Besov spaces.}We use the same definition of nonhomogeneous Besov spaces as in \cite{bahouri2011fourier}, which we write here for any dimension $d$. Let $\chi,\varphi:\mathbb{R}^d\to \mathbb{R}$ be the smooth radial functions which are given by \cite[Proposition 2.10]{bahouri2011fourier}, with $\chi$ supported on a ball while $\varphi$ is supported on an annulus. Let $v_{-1}$ and $v$ respectively be the inverse Fourier transform of $\chi$ and $\varphi$. Denote by $\mathcal{F}$ the Fourier transform and $\mathcal{F}^{-1}$ its inverse. 
The nonhomogeneous dyadic blocks $\Delta_j, j\in \mathbb{N}\cup\{-1\}$ are defined for any $\mathbb{R}^d$-valued tempered distribution $u$ by \begin{align*} \Delta_{-1} u = \mathcal{F}^{-1} \left(\chi \mathcal{F}u \right) ~~\text{ and }~~ \Delta_{j}u = \mathcal{F}^{-1} \left(\varphi(2^{-j}\cdot) \mathcal{F}u \right) ~\text{ for } j \ge 0. \end{align*} Let $\gamma \in \mathbb{R}$ and $p \in [1, \infty]$. We denote by $\mathcal{B}_p^\gamma$ the nonhomogeneous Besov space $\mathcal{B}_{p,\infty}^\gamma(\mathbb{R}^d, \mathbb{R}^d)$ of $\mathbb{R}^d$-valued tempered distributions $f$ such that \begin{align*} \| f \|_{\mathcal{B}_p^\gamma} = \sup_{j \ge -1} 2^{j \gamma} \| \Delta_j f \|_{L^p(\mathbb{R}^d)} < \infty . \end{align*} Let $1\leq p_1 \leq p_2 \leq \infty$. The space $\mathcal{B}_{p_1}^\gamma$ continuously embeds into $\mathcal{B}^{\gamma-d(1/p_1-1/p_2)}_{p_2}$, which we write as ${\mathcal{B}_{p_1}^\gamma \hookrightarrow \mathcal{B}^{\gamma-d(1/p_1-1/p_2)}_{p_2}}$, see e.g. \cite[Prop.~2.71]{bahouri2011fourier}. ~ Finally, we denote by $C$ a constant that can change from line to line and that does not depend on any parameter other than those specified in the associated lemma, proposition or theorem. When we want to make the dependence of $C$ on some parameter $a$ explicit, we will write $C(a)$. ~ To give a meaning to equation \eqref{eq:SDE} with a distributional drift, we first need to make precise in which sense such drifts are approximated. \begin{definition}\label{def:conv-gamma-} Let $\gamma \in \mathbb{R}$ and $p \in [1,\infty]$. We say that a sequence of smooth bounded functions $(b^n)_{n \in \mathbb{N}}$ converges to $b$ in $\mathcal{B}_p^{\gamma-}$ as $n$ goes to infinity if \begin{equation}\label{eq:conv-in-gamma-} \begin{cases} \displaystyle \sup_{n \in \mathbb{N}} \|b^n\|_{\mathcal{B}_p^\gamma} \leq \|b\|_{\mathcal{B}_p^\gamma} < \infty, \\ \displaystyle \lim_{n \rightarrow \infty} \|b^n - b\|_{\mathcal{B}_p^{\gamma'}} = 0, \quad \forall \gamma' < \gamma.
\end{cases} \end{equation} \end{definition} Following \cite{NualartOuknine}, in dimension $d=1$, we recall a notion of $\mathbb{F}$-fBm which extends the classical definition of $\mathbb{F}$-Brownian motion. There exists a one-to-one operator $\mathcal{A}_{H}$ (which can be written explicitly in terms of fractional derivatives and integrals, see \cite[Definition 2.3]{anzeletti2021regularisation}) such that for $B$ an fBm, the process $W:=\mathcal{A}_{H}B$ is a Brownian motion. Then we say that $B$ is an $\mathbb{F}$-fBm if $W$ is an $\mathbb{F}$-Brownian motion. In any dimension $d \ge 1$, we say that $B$ is an $\mathbb{R}^d$-valued $\mathbb{F}$-fBm, if each component is an $\mathbb{F}$-fBm. We are now ready to introduce the notions of solution to \eqref{eq:SDE}. \begin{definition}\label{def:sol-SDE} Let $\gamma \in \mathbb{R}$, $p \in [1,\infty]$, $b \in \mathcal{B}_p^\gamma$, $T>0$ and $X_0 \in \mathbb{R}^d$. As in \cite{anzeletti2021regularisation}, we define the following notions. \begin{itemize} \item \emph{Weak solution:} a couple $((X_t)_{t \in [0,1]},(B_t)_{t \in [0,1]})$ defined on some filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ is a weak solution to \eqref{eq:SDE} on $[0,1]$, with initial condition $X_0$, if \begin{itemize}[nosep,leftmargin=1em,labelwidth=*,align=left] \item $B$ is an $\mathbb{R}^d$-valued $\mathbb{F}$-fBm; \item $X$ is adapted to $\mathbb{F}$; \item there exists an $\mathbb{R}^d$-valued process $(K_t)_{t \in [0,1]}$ such that, a.s., \begin{equation}\label{solution1} X_t=X_0+K_t+B_t\text{ for all } t \in [0,1] ; \end{equation} \item for every sequence $(b^n)_{n\in \mathbb{N}}$ of smooth bounded functions converging to $b$ in $\mathcal{B}^{\gamma-}_p$, we have that \begin{equation}\label{approximation2} \sup_{t\in [0,1]}\left|\int_0^t b^n(X_r) dr -K_t\right| \underset{n\rightarrow \infty}{\longrightarrow} 0 \text{ in probability}. 
\end{equation} \end{itemize} If the couple is clear from the context, we simply say that $(X_t)_{t \in [0,1]}$ is a weak solution. \item \emph{Pathwise uniqueness:} As in the classical literature on SDEs, we say that pathwise uniqueness holds if for any two solutions $(X,B)$ and $(Y,B)$ defined on the same filtered probability space with the same fBm $B$ and same initial condition $X_0 \in \mathbb{R}^d$, $X$ and $Y$ are indistinguishable. \item \emph{Strong solution:} A weak solution $(X,B)$ such that $X$ is $\mathbb{F}^B$-adapted is called a strong solution, where $\mathbb{F}^B$ denotes the filtration generated by $B$. \end{itemize} \end{definition} \subsection{Main results}\label{sec:main} Our first result is decomposed into two parts: first, it gives a condition for existence of a weak solution to \eqref{eq:SDE} and therefore extends \cite[Theorem 2.8]{anzeletti2021regularisation} to the multidimensional setting. The proof is presented in Section~\ref{sec:proofWeakEx}. The second part gives existence and uniqueness of a strong solution under stronger assumptions, and will be a consequence of the convergence of the tamed Euler scheme in Theorem \ref{thm:main-SDE}. Thus it provides a multidimensional extension of \cite[Theorem 2.9]{anzeletti2021regularisation} through a completely different proof. \begin{theorem}\label{th:WP} Let $\gamma \in \mathbb{R}$, $p \in [1,\infty]$ and $b \in \mathcal{B}_p^\gamma$. \begin{enumerate}[label=(\alph*)] \item\label{th:weakEx} Assume that \begin{align} \label{eq:assumptionweak} 0 > \gamma-\frac{d}{p}> \frac{1}{2} -\frac{1}{2H}. \tag{H1} \end{align} Then there exists a weak solution $X$ to \eqref{eq:SDE} such that ${[X-B]_{\mathcal{C}^\kappa_{[0,1]}L^{m,\infty}}<\infty}$ for any $\kappa \in (0,1+H(\gamma-d/p)\wedge 0]\setminus \{1\}$ and $m\geq 2$. 
\item\label{th:strongEx} Assume that \begin{align}\label{eq:cond-gamma-p-H} H < \frac{1}{2} \text{, }~ 0 > \gamma - \frac{d}{p}\geq 1-\frac{1}{2H} ~ \text{ and } ~ \gamma > 1-\frac{1}{2H}. \tag{H2} \end{align} Then there exists a strong solution $X$ to \eqref{eq:SDE} such that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]}L^{m,\infty}}<\infty$ for any $m\geq 2$. Besides, pathwise uniqueness holds in the class of all solutions $X$ such that ${[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}}<\infty}$. Finally, if $\gamma-d/p>1-1/(2H)$, pathwise uniqueness holds in the class of all solutions $X$ such that ${[X-B]_{\mathcal{C}^{H(1-\gamma+d/p)+\eta}_{[0,1]} L^{2,\infty}}<\infty}$, for any $\eta \in (0,1)$. \end{enumerate} \end{theorem} The proof of Theorem \ref{th:WP}$(a)$ is given in Section \ref{sec:proofEx} and the proof of Theorem \ref{th:WP}$(b)$ in Section~\ref{subsec:StrongEx}. The latter follows from the convergence of the tamed Euler scheme stated in Corollary \ref{cor:bn-choice}: since the scheme is adapted to $\mathbb{F}^B$ and converges to any weak solution $X$ such that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]}L^{2, \infty}}<\infty$, we deduce uniqueness and also that the weak solution is adapted to $\mathbb{F}^B$ and is therefore a strong one. For $\eta \in (0,1)$, we extend the uniqueness result to solutions $X$ that satisfy $[X-B]_{\mathcal{C}^{H(1-\gamma+d/p)+\eta}_{[0,1]}L^{2, \infty}}<\infty$ in Appendix \ref{app:extend-uniqueness}. \begin{remark}\label{rk:comparisonNotions} ~ \begin{itemize} \item In \cite{CatellierGubinelli} and more recently \cite{GHM,GaleatiGerencser}, \eqref{eq:SDE} was solved in the sense of nonlinear Young differential equations in the sub-critical case of the strong regime: for $b$ in the H\"older space $\mathcal{C}^\alpha$ ($= \mathcal{B}^\alpha_{\infty}$ when $\alpha\notin \mathbb{N}$), strong existence and path-by-path uniqueness hold for $\alpha>1-1/(2H)$, unconditionally. 
Nevertheless the notions of solution might not be equivalent: we know from Theorem~2.14(a) in \cite{anzeletti2021regularisation} that a nonlinear Young solution with some H\"older regularity (which is proven to hold in \cite{GHM}) is a solution in the sense of Definition~\ref{def:sol-SDE}; however a strong solution in the sense of Definition~\ref{def:sol-SDE} must also have some H\"older regularity to be a nonlinear Young solution (Theorem~2.14(b) in \cite{anzeletti2021regularisation}) and we do not know if any strong solution has such regularity. \item In the weak regime $\gamma-d/p> 1/2 -1/(2H)$, weak existence was proven in \cite{anzeletti2021regularisation} in dimension $1$, then extended in higher dimension with time-dependence in \cite{GaleatiGerencser}. In \cite{GaleatiGerencser}, the notion of solution is similar to Definition~\ref{def:sol-SDE}, with $X$ that must satisfy \eqref{solution1} and \eqref{approximation2} for any approximating sequence $(b_{n})$ that converges to $b$ in $\mathcal{B}^\gamma_{p}$, which is slightly more restrictive than what is asked here (convergence in $\mathcal{B}^{\gamma-}_{p}$). Hence for the sake of completeness, we provide a proof of Theorem~\ref{th:WP}(a). \item Although the uniqueness result holds only in a class of regular enough processes, we will see in the next theorem that the Euler scheme chooses exactly the unique solution in this class. \end{itemize} \end{remark} ~ Let $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth functions that converges to $b$ in $\mathcal{B}_p^{\gamma-}$. Consider the tamed Euler scheme \eqref{def:EulerSDE} associated to \eqref{eq:SDE} with a time-step $h \in (0,1)$. The main result of this paper is the following theorem. It describes the convergence of the tamed Euler scheme to a weak solution $X$ such that $X-B \in \mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}$. 
Choosing appropriately $h$ and $n$, we also deduce that the regularity of the tamed Euler scheme is the same, that is, $X^{h,n} -B \in \mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}$. \begin{theorem}\label{thm:main-SDE} Let $H < 1/2$, $\gamma \in \mathbb{R}$, $p \in [1,\infty]$ satisfying \eqref{eq:cond-gamma-p-H} and let $m \in [2, \infty)$. Let $b \in \mathcal{B}_p^\gamma$ and $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth functions that converges to $b$ in $\mathcal{B}_p^{\gamma-}$. Let $X_0$ be an $\mathcal{F}_0$-measurable random variable, $(X,B)$ be a weak solution to \eqref{eq:SDE} and $(X^{h,n})_{h \in (0,1), n \in \mathbb{N}}$ be the tamed Euler scheme defined in \eqref{def:EulerSDE}, on the same probability space and with the same fBm $B$ as $X$. \begin{enumerate}[label=(\alph*)] \item \underline{Regularity of the tamed Euler scheme}: Let $\eta \in (0,H)$, $\mathcal{D}$ a sub-domain of $(0,1) \times \mathbb{N}$ and assume that \begin{align}\label{eq:assump-bn-bounded} \sup_{(h,n) \in \mathcal{D}} \| b^n \|_{\infty} h^{\frac{1}{2}-H} < \infty \ \ \text{ and }~ \sup_{(h,n) \in \mathcal{D} } \| b^n \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\eta} < \infty. \tag{H3} \end{align} Then $\displaystyle \sup_{(h,n) \in \mathcal{D}} [X^{h,n}-B]_{\mathcal{C}^{\frac{1}{2}+H }_{[0,1]} L^{m, \infty}} < \infty$. \end{enumerate} Assume that $X-B \in \mathcal{C}^{1/2+H}_{[0,1]}L^{m,\infty}$ and let $\varepsilon \in (0,1/2)$. \begin{enumerate} \item[(b)] \underline{The sub-critical case}: Assume $\gamma-d/p \in (1-1/(2H), 0)$. Then there exists $C>0$ that depends on $m, p, \gamma, d, \varepsilon, \|b\|_{\mathcal{B}_p^{\gamma}}$ such that for all $h \in (0,1)$ and $n \in \mathbb{N} $, the following bound holds: \begin{align}\label{eq:main-result-SDE} [X - X^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} & \leq C \left( \| b^n-b \|_{\mathcal{B}_p^{\gamma-1}} + \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} + \|b^n\|_\infty \|b^n\|_{\mathcal{C}^1} h^{1-\varepsilon} \right). 
\end{align} \item[(c)] \underline{The critical case}: Assume $\gamma-d/p=1-1/(2H)$ and $\gamma > 1-1/(2H)$. Let $\zeta \in (0,1/2)$, $\mathbf{M}$ be the constant given by Proposition \ref{prop:bound-E1-SDE-critic}, and $\delta \in (0, e^{-\mathbf{M}})$. If \eqref{eq:assump-bn-bounded} holds, then there exists $C>0$ that depends on $m, p, \gamma,d, \varepsilon, \zeta, \delta, \|b\|_{\mathcal{B}_p^{\gamma}}$ such that for all $(h,n) \in \mathcal{D}$, the following bound holds: \begin{equation}\label{eq:main-result-SDE-critic} \begin{split} [X - X^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^{m}} & \leq C \Big( \|b-b^n\|_{\mathcal{B}_p^{\gamma-1}} (1 + |\log(\| b - b^n \|_{\mathcal{B}_p^{\gamma-1}})|) + \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} \\ & \quad + \|b^n\|_{\mathcal{C}^1} \|b^n\|_\infty h^{1-\varepsilon} \Big) ^{(e^{-\mathbf{M}}-\delta)} . \end{split} \end{equation} \end{enumerate} \end{theorem} Obviously, the previous error bounds also hold for the strong error in uniform norm, since we have \begin{equation}\label{eq:boundsup} \sup _{t \in [0,1] }\big\|X_{t}-X_{t}^{h,n}\big\|_{L^{m}} \leq [X - X^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^{m}} . \end{equation} ~ Although the Hurst parameter does not appear in the upper bounds \eqref{eq:main-result-SDE}-\eqref{eq:main-result-SDE-critic}, the first term $\| b^n-b \|_{\mathcal{B}_p^{\gamma-1}}$ does depend implicitly on $H$ through \eqref{eq:cond-gamma-p-H}. Observe also that the second term, $\|b^n\|_\infty h^{1/2-\varepsilon}$, corresponds to the optimal rate of convergence found in \cite{butkovsky2021approximation}. ~ In the upper bounds \eqref{eq:main-result-SDE}-\eqref{eq:main-result-SDE-critic}, it is important to choose carefully the sequence $(b^n)_{n \in \mathbb{N}}$ to obtain a good rate of convergence of the numerical scheme. 
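On the practical side, the empirical order of convergence reported in the simulations section can be estimated by a log-log regression of observed strong errors against the step size; the following helper (our own, purely illustrative) performs this fit.

```python
import numpy as np

def empirical_order(h_list, err_list):
    """Least-squares slope of log(error) against log(h), i.e. the
    empirical rate of convergence observed over a range of step sizes."""
    slope, _intercept = np.polyfit(np.log(h_list), np.log(err_list), 1)
    return slope
```

For errors behaving like $C\, h^{r}$, the fitted slope recovers $r$ up to Monte Carlo noise in the error estimates.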
Choosing $b^n=G_{1/n} b$ for $n \in \mathbb{N}^*$, we have thanks to Lemma \ref{lem:reg-S} that for $\gamma-d/p<0$, \begin{align} &\| b^n - b \|_{\mathcal{B}_p^{\gamma-1}} \leq C\, \|b\|_{\mathcal{B}_p^\gamma} \ n^{-\frac{1}{2}}, \label{eq:bn-b} \\ &\|b^n\|_\infty \leq C\, \|b\|_{\mathcal{B}_p^\gamma} \ n^{-\frac{1}{2}(\gamma-\frac{d}{p} )}, \label{eq:bn-inf} \\ &\| b^n \|_{\mathcal{C}^1} \leq C\, \|b\|_{\mathcal{B}_p^\gamma} \, n^{\frac{1}{2}} \ n^{-\frac{1}{2}(\gamma-\frac{d}{p})} \label{eq:bn-C1} . \end{align} Using these results in \eqref{eq:main-result-SDE} and optimising over $n$ and $h$, we deduce the following corollary. \begin{corollary}\label{cor:bn-choice} Let the same assumptions as in Theorem \ref{thm:main-SDE} hold. For $h \in (0, 1/2)$, define \begin{equation*} n_h = \left\lfloor h^{-\frac{1}{1-\gamma+\frac{d}{p}}}\right\rfloor ~~\mbox{and}~~ b^{n_h}=G_{\frac{1}{n_h}} b. \end{equation*} Then we have \begin{align} &\sup_{\substack{h \in (0,\frac{1}{2})}} [X^{h,n_h}-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]} L^{m, \infty}} < \infty \label{eq:unifscheme} . \end{align} Let $\varepsilon \in (0,1/2)$. \begin{enumerate}[label=(\alph*)] \item \underline{The sub-critical case}: Assume $\gamma-d/p \in (1-1/(2H) ,0)$. Then there exists $C>0$ that depends on $ m, p, \gamma, \varepsilon, \|b\|_{\mathcal{B}_p^{\gamma}}$ such that the following bound holds: \begin{align} \forall h\in \left( 0,\frac{1}{2} \right),\quad [X - X^{h,n_h}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} \leq C \, h^{\frac{1}{2(1-\gamma+\frac{d}{p})}-\varepsilon}. \label{eq:rate1} \end{align} \item \underline{The critical case}: Assume $\gamma-d/p=1-1/(2H)$ and $\gamma > 1-1/(2H)$. Let $\zeta \in (0,1/2)$, $\mathbf{M}$ be the constant given by Proposition \ref{prop:bound-E1-SDE-critic}, and $\delta \in (0, e^{-\mathbf{M}})$.
Then there exists $C>0$ that depends on $m, p, \gamma, \varepsilon, \zeta, \delta, \|b\|_{\mathcal{B}_p^{\gamma}}$ such that the following bound holds: \begin{align} \forall h\in \left( 0,\frac{1}{2} \right),\quad [X - X^{h,n_h}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^{m}} \leq C \, h^{H (e^{-\mathbf{M}}-\delta)}. \label{eq:rate1-critic} \end{align} \end{enumerate} \end{corollary} \begin{remark} We construct the tamed Euler scheme on a particular probability space that is given in an abstract way by Theorem~\ref{th:WP}. From Corollary~\ref{cor:bn-choice}, we deduce (see Subsection~\ref{subsec:StrongEx}) that $X$ is in fact a strong solution. It is then possible to construct the tamed Euler scheme on any probability space (rich enough to contain an $\mathbb{F}$-fBm), which is of practical importance for simulations. \end{remark} \begin{remark} For instance, if each component of $b$ is a signed measure, then $b \in \mathcal{B}_1^{0} := \mathcal{B}_1^0(\mathbb{R}^d, \mathbb{R}^d)$ (see \cite[Proposition 2.39]{bahouri2011fourier}). Hence the previous result (Corollary~\ref{cor:bn-choice}$(a)$) yields a rate $\frac{1}{2(1+d)}-\varepsilon$, which holds for $H < \frac{1}{2(1+d)}$. In the critical case, when $H=\frac{1}{2(1+d)}$, the rate becomes $H e^{-\mathbf{M}}-\varepsilon$. \end{remark} For $\gamma- d/p > 0$, $\mathcal{B}_p^\gamma$ is continuously embedded in the H\"older space $\mathcal{C}^{\gamma-d/p}$. In \cite{butkovsky2021approximation}, it was proved that the Euler scheme achieves a rate $1/2+H(\gamma-d/p)-\varepsilon$. Moreover, if $b$ is a bounded measurable function, the rate is $1/2-\varepsilon$. To close the gap between the present results and \cite{butkovsky2021approximation}, we handle the case $\gamma-d/p=0$. Recall that $\mathcal{B}_p^{d/p}$ is continuously embedded into $\mathcal{B}_\infty^0$, so it is equivalent to work with $\gamma=0$ and $p=+\infty$. Note that $\mathcal{B}_\infty^0$ contains strictly $L^\infty(\mathbb{R}^d)$ (see e.g. 
\cite[Section 2.2.2, eq (8) and Section 2.2.4, eq (4)]{runst2011sobolev}) which was the space considered in \cite{butkovsky2021approximation}. Let $b \in \mathcal{B}_\infty^0$. By the definition of Besov spaces, we know that $b \in \mathcal{B}_\infty^{-\eta}$ for all $\eta>0$. Choosing $\eta$ small enough so that $- \eta > 1-1/(2H)$, we can apply Theorem \ref{th:WP} and Theorem \ref{thm:main-SDE}, and obtain a rate of convergence as in Corollary \ref{cor:bn-choice} when $b^n=G_{\frac{1}{n}} b$. This is summarized in the following Corollary. \begin{corollary}\label{cor:gama=d/p} Let the assumptions of Theorem \ref{thm:main-SDE} hold. Let $B$ be an $\mathbb{F}$-fBm with $H < 1/2$, $b \in \mathcal{B}_\infty^0$ and $m \ge 2$. There exists a strong solution $X$ to \eqref{eq:SDE} such that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]}L^{m,\infty}}<\infty$. Besides, for any $\eta>0$, pathwise uniqueness holds in the class of solutions $X$ such that $[X-B]_{\mathcal{C}^{H+\eta}_{[0,1]} L^{2,\infty}}<\infty$. Let $\varepsilon \in (0,1/2)$. Then Theorem \ref{thm:main-SDE}$(a)$ holds and there exists a constant $C$ that depends only on $m, \varepsilon, \|b\|_{\mathcal{B}_\infty^{0}}$ such that for any $h \in (0,1/2)$ and $n\in \mathbb{N}$, the following bound holds: \begin{align*}% [X - X^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} & \leq C \left( \| b^n-b \|_{\mathcal{B}_{\infty}^{-1}} + \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} + \|b^n\|_\infty \|b^n\|_{\mathcal{C}^1} h^{1-\varepsilon} \right). \end{align*} Moreover, for $n_h=\lfloor h^{-1} \rfloor$ and $b^{n_h}=G_{\frac{1}{n_h}} b$, we have \begin{align*} \forall h\in \left( 0,\tfrac{1}{2}\right), \quad [X - X^{h,n_h}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} & \leq C h^{\frac{1}{2}-\varepsilon}, \\ \sup_{\substack{h \in (0,\frac{1}{2})}} [X^{h,n_h}-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]} L^{m, \infty}} & < \infty . 
\end{align*} \end{corollary} ~ Theorem \ref{thm:main-SDE}, Corollary \ref{cor:bn-choice} and Corollary \ref{cor:gama=d/p} are proven in Section \ref{sec:overview-SDE}. \\ \subsection{Discussion on the approach and results} The main novelty of the paper is that we treat fractional SDEs with distributional drifts, including the critical case $\gamma-d/p=1-1/(2H)$ and $\gamma>1-1/(2H)$. For bounded drifts, one can use Girsanov's theorem (see e.g. \cite[Lemma 4.2]{butkovsky2021approximation}), which requires upper bounds on exponential moments of functionals of the fBm. In the literature, this usually leads to an exponential dependence on $\|b^n\|_\infty$, which we chose to avoid. We note however that \citet{le2021taming} managed to develop a Girsanov argument for unbounded drifts for the Brownian motion, using the $L^q_{t} L^p(\mathbb{R}^d)$ norm of $b^n$ on small time intervals (see \cite[Lemma 5.14]{le2021taming}). For the fBm, the computations for a Girsanov argument do not seem to work with the stochastic sewing, since the functionals that appear include the fractional kernel (see for example \cite[(B.1)]{butkovsky2021approximation}). As a novel approach, we use the stochastic sewing lemma to regularise directly integrals of functions of the discrete noise $\{ X^{h,n}_{t_h}\}_{t \ge 0}$ (see Proposition~\ref{prop:newbound-E2}) to avoid a Girsanov argument and any exponential dependence on norms of $b^n$. The price to pay is that the $\mathcal{C}^1$ norm of $b^n$ appears, which can be compensated by powers of $h$. \smallskip Let us make a few comments on the rate of convergence obtained in Corollaries~\ref{cor:bn-choice} and \ref{cor:gama=d/p}: \begin{itemize} \item For fixed $\gamma, p$ and $d$, one can choose $H$ close to $\frac{1}{2(1-\gamma+\frac{d}{p})}$ from below, and get an order of convergence that will be $\frac{1}{2(1-\gamma+\frac{d}{p})}-\varepsilon \approx H-\varepsilon$.
\item For a fixed $H$, one can take $b \in \mathcal{B}_\infty^{1-\frac{1}{2H}+\varepsilon}$ for any $\varepsilon>0$, and get an order of convergence that will be close to $H$.
\item The order of convergence is $1/2-\varepsilon$ when $\gamma-d/p = 0$, for any $H < 1/2$.
\item The order of convergence is $H e^{-\mathbf{M}}-\varepsilon$ for some constant $\mathbf{M}$, if $\gamma-d/p=1-1/(2H)$ and $\gamma > 1-1/(2H)$. Hence the rates of convergence we obtained change abruptly when $\gamma-d/p$ reaches the threshold $1-1/(2H)$.
\end{itemize}
In view of \cite{butkovsky2021approximation}, one could have guessed that the order of convergence for $\gamma-d/p \leq 0$ would still be $1/2+ H(\gamma-d/p)$. In particular, we point out that the two orders, $1/2+ H(\gamma-d/p)$ and $\frac{1}{2(1-\gamma+d/p)}$, coincide when $\gamma - d/p = 1-1/(2H)$. However \eqref{eq:cond-gamma-p-H} only implies the following inequality:
\begin{align*}
\frac{1}{2}+ H\left(\gamma-\frac{d}{p}\right) \ge \frac{1}{2(1-\gamma+\frac{d}{p})} .
\end{align*}
Indeed, for $\gamma-d/p\leq 0$ the left-hand side is nonincreasing in $H$, \eqref{eq:cond-gamma-p-H} ensures that $H \leq \frac{1}{2(1-\gamma+d/p)}$, and both sides coincide at $H=\frac{1}{2(1-\gamma+d/p)}$, i.e. exactly at the critical threshold $\gamma-d/p=1-1/(2H)$.
Finally, the orders of convergence obtained here and in \cite{butkovsky2021approximation} are summarized in Table \ref{tab:true-summarySDE}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textit{Drift} & $\begin{array}{ll} \gamma - \frac{d}{p} = 1-\frac{1}{2H} \\ \text{and } \gamma > 1-\frac{1}{2H} \end{array}$ & $\gamma - \frac{d}{p} \in (1-\frac{1}{2H},0)$ & $\gamma - \frac{d}{p} = 0$ & $\gamma-\frac{d}{p}>0$\\
\hline
\textit{Rate} & $H e^{-\mathbf{M}}-\varepsilon$ & $\frac{1}{2(1-\gamma+\frac{d}{p})}- \varepsilon $ & $\frac{1}{2}- \varepsilon $ & $\Big( \frac{1}{2}+H\big(\gamma-\frac{d}{p}\big) \Big) \wedge 1 -\varepsilon$\\
\hline
\end{tabular}
\caption{Rate of convergence of the tamed Euler scheme depending on the Besov regularity of the drift.}
\label{tab:true-summarySDE}
\end{table}

\section{Existence and uniqueness of solutions}\label{sec:proofWeakEx}

In this section, we prove Theorem \ref{th:WP}.
The proof of Theorem \ref{th:WP}$(a)$ follows the same lines as the proof of Theorem 2.8 of \cite{anzeletti2021regularisation}, but requires extensions of some technical lemmas concerning the regularising effects of the fractional Brownian motion in dimension $d$. Subsection~\ref{subsec:StrongEx} is dedicated to the proof of Theorem \ref{th:WP}$(b)$.

\subsection{Besov estimates}\label{sec:besov}

The first of these extensions concerns estimates of shifts of distributions in Besov spaces. It is a generalisation to $\mathbb{R}^d$ of Lemma A.2 in \cite{athreya2020well} and follows its proof exactly, so we omit it.

\begin{lemma}\label{lem:besov-spaces}
Let $f$ be a tempered distribution on $\mathbb{R}^d$ and let $\beta \in \mathbb{R}$, $p\in [1,\infty]$. Then for any $a, a_1,a_2,a_3 \in \mathbb{R}^d$ and $\alpha, \alpha_1, \alpha_2 \in [0,1]$, one has
\begin{itemize}
\item[(i)] $\| f(a + \cdot ) \|_{\mathcal{B}_p^\beta} \leq \| f \|_{\mathcal{B}_p^\beta}$ .
\item[(ii)] $\| f(a_1 + \cdot) - f(a_2 + \cdot) \|_{\mathcal{B}_p^\beta} \leq C |a_1 - a_2 |^{\alpha} \| f \|_{\mathcal{B}_p^{\beta+\alpha}}$ .
\item[(iii)] $\| f(a_1 + \cdot) - f(a_2 + \cdot) -f(a_3 + \cdot) + f(a_3+a_2-a_1+\cdot) \|_{\mathcal{B}_p^\beta} \leq C |a_1 - a_2 |^{\alpha_1} |a_1 - a_3 |^{\alpha_2} \| f \|_{\mathcal{B}_p^{\beta+\alpha_1 + \alpha_2}} .$
\end{itemize}
\end{lemma}

\smallskip

Then we have the following estimates for the Gaussian semigroup in Besov spaces. They are either borrowed or adapted from \cite{bahouri2011fourier,athreya2020well}.

\begin{lemma}\label{lem:reg-S}
Let $\beta\in \mathbb{R}$, $p\in [1,\infty]$ and $f \in \mathcal{B}_p^\beta$. Then
\begin{enumerate}[label=(\roman*)]
\item If $\beta<0$, $\| G_t f \|_{L^p(\mathbb{R}^d)} \leq C\, \|f \|_{\mathcal{B}_p^\beta}\, t^{\frac{\beta}{2}}$, for all $t > 0$.
\item If $\beta-\frac{d}{p}<0$, $\| G_t f \|_{\infty} \leq C\, \|f \|_{\mathcal{B}_p^\beta}\, t^{\frac{1}{2}(\beta - \frac{d}{p})}$, for all $t > 0$.
\item $\|G_t f - f\|_{\mathcal{B}_p^{\beta-\varepsilon}} \leq C\, t^{\frac{\varepsilon}{2}}\, \| f \|_{\mathcal{B}_p^\beta}$ for all $\varepsilon\in (0,1]$ and $t>0$. In particular, it follows that $\lim_{t \rightarrow 0} \|G_t f -f\|_{\mathcal{B}_p^{\tilde{\beta}}}=0$ for every $\tilde{\beta}< \beta$.
\item $\sup_{t>0} \|G_t f \|_{\mathcal{B}_p^\beta} \leq \| f \|_{\mathcal{B}_p^\beta}$.
\item If $\beta-\frac{d}{p}<0$, $\|G_t f \|_{\mathcal{C}^1} \leq C\, \| f \|_{\mathcal{B}_p^\beta} \, t^{\frac{1}{2}(\beta- \frac{d}{p}-1)}$ for all $t>0$.
\end{enumerate}
\end{lemma}

\begin{proof}
\begin{enumerate}[label={\it(\roman*)}]
\item The proof of Lemma A.3$(i)$ in \cite{athreya2020well} extends right away to dimension $d\geq1$.
\item Using $(i)$ for $\beta-\frac{d}{p}$ instead of $\beta$ and the embedding $\mathcal{B}^\beta_{p} \hookrightarrow \mathcal{B}^{\beta-\frac{d}{p}}_{\infty}$, we obtain
\begin{equation*}
\| G_{t} f\|_{L^\infty(\mathbb{R}^d)} \leq C \, \|f\|_{\mathcal{B}^{\beta-d/p}_{\infty}} \, t^{\frac{1}{2}(\beta-\frac{d}{p})} \leq C \, \|f\|_{\mathcal{B}^{\beta}_{p}} \, t^{\frac{1}{2}(\beta-\frac{d}{p})} .
\end{equation*}
\item This is an adaptation of Lemma A.3$(ii)$ in \cite{athreya2020well} to dimension $d\geq 1$ that we detail briefly. From \cite[Lemma 4]{MourratWeber}, we have that for $g$ such that the support of $\mathcal{F}g$ is in a ball of radius $\lambda\geq 1$ and for all $t\geq 0$,
\begin{equation*}
\|G_{t}g - g\|_{L^p(\mathbb{R}^d)} \leq C\, (t\lambda^2\wedge 1) \|g\|_{L^p(\mathbb{R}^d)}.
\end{equation*}
For any $j\geq -1$, the support of $\mathcal{F}(\Delta_{j}f)$ is included in a ball of radius $2^j$.
Hence, \begin{align*} 2^{j(\beta-\varepsilon)}\|G_{t}(\Delta_{j}f) - \Delta_{j}f\|_{L^p(\mathbb{R}^d)} &\leq C\, 2^{j(\beta-\varepsilon)} (t 2^{2j}\wedge 1) \|\Delta_{j}f\|_{L^p(\mathbb{R}^d)}\\ &\leq C\, 2^{-j\varepsilon} (t 2^{2j}\wedge 1)^{\frac{\varepsilon}{2}}\, 2^{j\beta} \|\Delta_{j}f\|_{L^p(\mathbb{R}^d)}\\ &\leq C\, t^{\frac{\varepsilon}{2}}\, 2^{j\beta} \|\Delta_{j}f\|_{L^p(\mathbb{R}^d)}. \end{align*} The result follows. \item--~{\it(v)} The proof is the same as in the one-dimensional case, see \cite[Lemma A.3$(iii)$]{athreya2020well} and \cite[Lemma A.3$(iv)$]{athreya2020well}. \end{enumerate} \end{proof} The next lemma describes some time regularity estimates of random functions of the fractional Brownian motion in Besov norms. \begin{lemma}\label{lem:reg-B} Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered probability space and $B$ be an $\mathbb{F}$-fBm. Let $\beta<0$, $p \in [1,\infty]$ and $e\in \mathbb{N}^*$. Then there exists a constant $C>0$ such that for any $(s,t)\in \Delta_{0,1}$, any bounded measurable function $f:\mathbb{R}^d\times \mathbb{R}^e\to \mathbb{R}^d$ and any $\mathcal{F}_{s}$-measurable $\mathbb{R}^e$-valued random variable $\Xi$ satisfying $\|f(\cdot,\Xi)\|_{\mathcal{C}^1}<\infty$ almost surely, there is \begin{enumerate}[label=(\roman*)] \item $\mathbb{E}^{s}[f(B_{t},\Xi)]=G_{\sigma_{{s},{t}}^2}f(\mathbb{E}^{s}[B_{t}],\Xi)$, where $G$ is the Gaussian semigroup introduced in \eqref{eq:semi-group-gaussian} and $\sigma_{{s},{t}}^2:=\var{(B^{(i)}_{t}-\mathbb{E}^{s}[B^{(i)}_{t}])}$, for any component $B^{(i)}$ of the fBm; \item $| \mathbb{E}^s f( B_t,\Xi) | \leq C \|f(\cdot,\Xi)\|_{\mathcal{B}_p^\beta} (t-s)^{H(\beta-\frac{d}{p})}$; \item $\| f(B_t,\Xi) - \mathbb{E}^s f(B_t,\Xi) \|_{L^1} \leq C \big\| \| f(\cdot,\Xi) \|_{\mathcal{C}^1}\big\|_{L^2} (t-s)^H$ . 
\item Furthermore, for any $u$ in the interval $(s,t)$ and $m \in[1, p]$ there exists a constant $C>0$ such that $\left\|\mathbb{E}^{u}\left[f\left(B_{t}, \Xi\right)\right]\right\|_{L^{m}} \leq C \left\| \| f(\cdot, \Xi)\|_{\mathcal{B}_{p}^{\beta}}\right\|_{L^{m}} \left(t-u\right)^{H \beta}\left(u-s\right)^{-\frac{d}{2 p}}\left(t-s\right)^{d\frac{1-2 H}{2 p}}$. \end{enumerate} \end{lemma} \begin{proof} The proofs of $(i)$, $(ii)$, $(iii)$ are similar to $(a)$, $(b)$, $(c)$ in \cite[Lemma 5.1]{anzeletti2021regularisation} but they rely now on Lemma~\ref{lem:besov-spaces} and Lemma~\ref{lem:reg-S}. We only reproduce the proof of $(iv)$ which is similar to $(d)$ in \cite[Lemma 5.1]{anzeletti2021regularisation}, to emphasize where the dimension $d$ appears. For $u \in (s,t)$, we have from $(i)$ that \begin{align*} \mathbb{E}^{s} | \mathbb{E}^{u} [ f(B_{t}, \Xi ) ] |^m & = \mathbb{E}^s | G_{\sigma^2_{u,t}} f(\mathbb{E}^u B_t , \Xi) |^m \\ & = \mathbb{E}^{s} | G_{\sigma^2_{u,t}} f(\mathbb{E}^u B_t - \mathbb{E}^{s} B_t + \mathbb{E}^s B_t , \Xi) |^m . \end{align*} Notice that $\mathbb{E}^s B_t$ is independent of $\mathbb{E}^u B_t - \mathbb{E}^s B_t$, which is a Gaussian variable with mean zero and covariance $\sigma^2_{s,u,t} I_{d}$ where \begin{align*} \sigma^2_{s,u,t} := \textrm{Var}(\mathbb{E}^u B_t^{(i)} - \mathbb{E}^s B_t^{(i)}) , \end{align*} for any component $B^{(i)}$ of the fBm. It follows that \begin{align*} \mathbb{E}^{s} | \mathbb{E}^{u} [ f(B_{t}, \Xi ) ] |^m & = \int_{\mathbb{R}^d} g_{ \sigma^{2}_{s,u,t} } (y) | G_{\sigma^2_{u,t}} f( \mathbb{E}^s B_t + y, \Xi) |^m \, d y . \end{align*} Let $q=\frac{p}{m}$ and $q'=\frac{q}{q-1}$. 
Using H\"older's inequality, we get \begin{align*} \mathbb{E}^{s} | \mathbb{E}^{u} [ f(B_{t}, \Xi ) ] |^m &\leq \| g_{ \sigma^{2}_{s,u,t} } \|_{L^{q'}(\mathbb{R}^d)} \|G_{\sigma^2_{u,t}} f(\cdot, \Xi) \|_{L^p(\mathbb{R}^d)}^m \\ &= \| G_{ \sigma^{2}_{s,u,t} } \delta_0 \|_{L^{q'}(\mathbb{R}^d)} \|G_{\sigma^2_{u,t}} f(\cdot, \Xi) \|_{L^p(\mathbb{R}^d)}^m . \end{align*} By Besov embedding, $\delta_0 \in \mathcal{B}_1^0 \hookrightarrow \mathcal{B}_{q'}^{-d+d/q'} = \mathcal{B}_{q'}^{-dm/p}$. Hence by Lemma \ref{lem:reg-S}$(i)$, \begin{align*} \mathbb{E}^{s} | \mathbb{E}^{u} [ f(B_{t}, \Xi ) ] |^m & \leq C \| \delta_0 \|_{\mathcal{B}_{q'}^{-dm/p}} \ \sigma_{s,u,t}^{-dm/p} \ \| f(\cdot, \Xi) \|^m_{\mathcal{B}_p^\beta} \ \sigma_{u,t}^{\beta m} . \end{align*} The fBm has the following local nondeterminism properties (see e.g. (C.3) and (C.5) in \cite{anzeletti2021regularisation}): there exists $C_{1}, C_{2}>0$ such that \begin{align}\label{eq:LND} \sigma^2_{u,t} = C_{1} (t-u)^{2H} ~~ \mbox{and}~~ \sigma^2_{s,u,t}\ge C_{2} (u-s) (t-s)^{-1+2H} . \end{align} It follows that \begin{align*} \mathbb{E}^{s} | \mathbb{E}^{u} [ f(B_{t}, \Xi ) ] |^m & \leq C \| \delta_0 \|_{\mathcal{B}_{q'}^{-dm/p}} \ \| f(\cdot, \Xi) \|^m_{\mathcal{B}_p^\beta} \ (u-s)^{-\frac{dm}{2p}} (t-s)^{(1-2H)\frac{dm}{2p}} \ (t-u)^{ H \beta m} . \end{align*} We conclude by taking the expectation in the above inequality and raising both sides to the power $1/m$. \end{proof} \subsection{Regularisation effect of the $d$-dimensional fBm}\label{sec:reg} We use the stochastic sewing lemma of \cite{le2020stochastic} (recalled in Lemma~\ref{lem:SSL}) to establish the key regularisation result (Proposition~\ref{prop:regfBm}) that will be used to prove existence of weak solutions. Note that the results in this subsection and the next one are similar to the one-dimensional framework developed in \cite{anzeletti2021regularisation}. 
First we have the following lemma, which extends \cite[Lemma D.2]{anzeletti2021regularisation} to dimension $d\geq 1$. Its proof, which is also close to the proof of \cite[Lemma D.2]{anzeletti2021regularisation}, is postponed to the Appendix~\ref{app:1streg}. \begin{lemma} \label{lem:1streg} Let $\beta \in (-1/(2H),0)$ such that $\beta-d/p \in (-1/H,0)$. Let $m \in [2, \infty]$, $q \in [m, \infty]$ and assume that $p\in [q,+\infty]$. Then there exists a constant $C>0$ such that for any $0\leq S\leq T$, any $\mathcal{F}_{S}$-measurable random variable $\Xi$ in $\mathbb{R}^e$ and any bounded measurable function $f:\mathbb{R}^d\times\mathbb{R}^e \rightarrow \mathbb{R}^d$ fulfilling \begin{enumerate}[label=(\roman*)] \item $\mathbb{E}\left[ \|f(\cdot,\Xi)\|_{\mathcal{C}^1}^2\right]<\infty$; \item $\mathbb{E}\left[ \|f(\cdot,\Xi)\|_{\mathcal{B}_p^{\beta}}^q\right]<\infty$, \end{enumerate} we have for any $(s,t) \in \Delta_{S,T}$ that \begin{equation}\label{eq:regulINT} \Big\| \Big( \mathbb{E}^S \Big| \int_s^t f(B_r,\Xi) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^q} \leq C \, \| \|f(\cdot,\Xi)\|_{\mathcal{B}_p^{\beta}}\|_{L^q}\, (t-s)^{1+H(\beta-\frac{d}{p})} . \end{equation} \end{lemma} As a consequence of Lemma \ref{lem:SSL} and Lemma \ref{lem:1streg}, we get the following property of regularisation of the $d$-dimensional fBm. It can be compared to \cite[Lemma 7.1]{anzeletti2021regularisation}, which is stated for one-dimensional processes in the sub-critical case only. The proof is postponed to Appendix \ref{app:regfBm}. \begin{proposition}\label{prop:regfBm} Let $m\in[2,\infty)$, $q \in [m, +\infty]$ and $p \in [q,+\infty]$. \begin{enumerate}[label=(\alph*)] \item\label{item:3.5(a)} \underline{The sub-critical case}: let $\beta\in (-1/(2H),0)$ such that $\beta-d/p > -1/(2H)$. Let $\tau \in (0,1)$ such that $H(\beta-d/p-1)+\tau>0$. 
There exists a constant $C>0$ such that for any $f\in \mathcal{C}^\infty_b(\mathbb{R}^d, \mathbb{R}^d)\cap \mathcal{B}_p^\beta$, any $\mathbb{R}^d$-valued stochastic process $(\psi_t)_{t\in[0,1]}$ adapted to $\mathbb{F}$, any $(S,T) \in \Delta_{0,1}$ and $(s,t) \in \Delta_{S,T}$ we have \begin{equation} \label{eq:3.5a} \begin{split} \Big\| \Big( \mathbb{E}^S \Big| \int_s^t f(B_r+\psi_r) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^q} \leq &\, C\, \|f\|_{\mathcal{B}_p^\beta}(t-s)^{1+H(\beta-\frac{d}{p})} \\ &+C \|f\|_{\mathcal{B}_p^\beta} [\psi]_{\mathcal{C}^\tau_{[S,T]}L^{m,q}} \, (t-s)^{1+H(\beta-\frac{d}{p} -1)+\tau}. \end{split} \end{equation} \item\label{item:3.5(b)} \underline{The critical case}: let $\beta-d/p=-1/(2H)$ and assume that $\beta > 1-1/(2H)$. There exists a constant $C>0$ such that for any $f\in \mathcal{C}^\infty_b(\mathbb{R}^d, \mathbb{R}^d) \cap \mathcal{B}_p^{\beta+1}$, any $\mathbb{R}^d$-valued stochastic process $(\psi_t)_{t\in[0,1]}$ adapted to $\mathbb{F}$, any $(S,T) \in \Delta_{0,1}$ and any $(s,t) \in \Delta_{S,T}$, we have \begin{equation} \label{eq:3.5b} \begin{split} \Big\| \int_s^t f(B_r+\psi_r) \, dr \Big\|_{L^m} \leq &\, C\, \|f\|_{\mathcal{B}_p^{\beta}} \, \left(1+ \left| \log\frac{\|f\|_{\mathcal{B}_p^{\beta}}}{\|f\|_{\mathcal{B}_p^{\beta+1}}} \right| \right) \, \left( 1+[\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} \right) (t-s)^{\frac{1}{2}}. \end{split} \end{equation} \end{enumerate} \end{proposition} \subsection{Tightness and stability}\label{sec:tightness} The proof of existence of a weak solution is based on a classical argument: first, we construct a tight approximating sequence of processes (Proposition \ref{prop:tightness}), then we prove the stability, i.e. that any converging subsequence is a solution of the SDE \eqref{eq:SDE} (Proposition \ref{prop:stability}). First we need two \emph{a priori} estimates, which are direct consequences of Proposition \ref{prop:regfBm}$(a)$: Lemma~\ref{lem:apriori1} (resp. 
Lemma~\ref{lem:apriori2}), which extends \cite[Lemma 7.3]{anzeletti2021regularisation} (resp. \cite[Lemma 7.4]{anzeletti2021regularisation}) to dimension $d\geq 1$.

\begin{lemma} \label{lem:apriori1}
Assume that \eqref{eq:assumptionweak} holds and let $m \in [2,\infty)$. There exists $C > 0$ such that, for any ${b\in \mathcal{C}_b^\infty(\mathbb{R}^d, \mathbb{R}^d) \cap \mathcal{B}_p^\gamma}$,
\begin{equation} \label{eq:regularity3.2}
[X-B]_{\mathcal{C}_{[0,1]}^{1+H(\gamma-d/p)}L^{m,\infty}}\leq C\,( 1 + \|b\|^2_{\mathcal{B}_p^\gamma}),
\end{equation}
where $X$ is the strong solution to \eqref{eq:SDE} with drift $b$.
\end{lemma}

\begin{proof}
Without loss of generality, assume that $X_{0}=0$ and denote $K=X-B$. Then $[K]_{\mathcal{C}^{\tau}_{[0,1]}L^{m,\infty}}$ is finite for any $\tau \in (0,1]$ as $|K_t-K_s|=|\int_s^t b(B_r+K_r)\, dr | \leq \|b\|_{\infty}|t-s|$. We aim to apply Proposition~\ref{prop:regfBm}$(a)$ with $m \in [2,\infty)$, $q=\infty$, $p=\infty$, $\beta=\gamma-\frac{d}{p}$ and $\tau=1+H(\gamma-d/p)$.
The assumptions of Proposition~\ref{prop:regfBm}$(a)$ are satisfied since $\tau - H > 1/2 - H/2 > 0$, thus $\tau \in (0,1)$. In addition, by \eqref{eq:assumptionweak}, we have $H(\gamma-d/p)>H/2-1/2>-1/2$ and $H(\gamma-d/p-1)+\tau>0$. Then we get
\begin{align} \label{eq:beforeiteration}
\big\| \big( \mathbb{E}^s |K_t-K_s|^m \big)^{\frac{1}{m}} \big\|_{L^\infty} &\leq C\, \|b\|_{\mathcal{B}_\infty^{\gamma-d/p}} \left((t-s)^{1+H(\gamma-\frac{d}{p})}+[K]_{\mathcal{C}^{\tau}_{[s,t]}L^{m,\infty}} (t-s)^{1+H(\gamma-\frac{d}{p})+\tau-H}\right) \nonumber\\
&\leq C \|b\|_{\mathcal{B}_p^\gamma}(t-s)^{1+H(\gamma-\frac{d}{p})}\left(1+[K]_{\mathcal{C}^{\tau}_{[s,t]}L^{m,\infty}}(t-s)^{\tau-H}\right).
\end{align}
Choose $\ell = (4C\|b\|_{\mathcal{B}^\gamma_p})^{1/(H-\tau)}$ so that $C \|b\|_{\mathcal{B}^\gamma_p} \ell^{\tau-H}<1/2$. Let $u \in [0,1]$.
Divide both sides in \eqref{eq:beforeiteration} by $(t-s)^{1+H(\gamma-d/p)}$ and take the supremum over $(s,t) \in \Delta_{u,(u+\ell )\wedge 1}$ to get
\begin{align*}
[K]_{\mathcal{C}^{1+H(\gamma-d/p)}_{[u,(u+\ell)\wedge 1]} L^{m,\infty}} \leq C \|b\|_{\mathcal{B}_p^\gamma}+\frac{1}{2} [K]_{\mathcal{C}^{\tau}_{[u,(u+\ell)\wedge 1]}L^{m,\infty}} ,
\end{align*}
and therefore
\begin{align*}
[K]_{\mathcal{C}^{1+H(\gamma-d/p)}_{[u,(u+\ell) \wedge 1]} L^{m,\infty}} \leq 2 C \|b\|_{\mathcal{B}_p^\gamma}.
\end{align*}
The end of the proof consists in iterating the previous inequality in order to control the H\"older norm on the whole interval $[0,1]$, and is completely identical to the proof of \cite[Lemma 7.3]{anzeletti2021regularisation}.
\end{proof}

\begin{lemma} \label{lem:apriori2}
Assume that \eqref{eq:assumptionweak} holds and let $b,h \in \mathcal{C}_b^\infty(\mathbb{R}^d, \mathbb{R}^d) \cap \mathcal{B}_p^\gamma$. Let $X$ be the strong solution to \eqref{eq:SDE} with drift $b$. Let $\delta \in (0,1+H(\gamma-d/p))$. Then there exists a constant $C>0$ which does not depend on $X_0$, $b$ or $h$, and a nonnegative random variable $Z$ which satisfies $\mathbb{E}[Z]\leq C \|h\|_{\mathcal{B}_p^\gamma}(1+\|b\|^2_{\mathcal{B}_p^\gamma})$ such that for any $(s,t)\in \Delta_{0,1}$,
\begin{equation}\label{eq:averagingX}
\Big|\int_s^t h(X_r) \, dr\Big|\leq Z\, |t-s|^\delta.
\end{equation}
\end{lemma}

The proof relies on Proposition~\ref{prop:regfBm}$(a)$, Lemma~\ref{lem:apriori1} and Kolmogorov's continuity criterion. We omit it, as it is identical to the proof of \cite[Lemma 7.4]{anzeletti2021regularisation}.

We now obtain tightness of the sequence that approximates $X$.

\begin{proposition} \label{prop:tightness}
Assume that \eqref{eq:assumptionweak} holds and let $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth bounded functions converging to $b$ in $\mathcal{B}_p^{\gamma-}$.
For $n \in \mathbb{N}$, let $X^{n}$ be the strong solution to \eqref{eq:SDE} with initial condition $X_0$ and drift $b^n$. Then there exists a subsequence $(n_k)_{k\in \mathbb{N}}$ such that $(X^{n_k},B)_{k \in \mathbb{N}}$ converges weakly in the space $[\mathcal{C}_{[0,1]}(\mathbb{R}^d)]^2$.
\end{proposition}

\begin{proof}
This short proof is close to the proof of \cite[Proposition 7.5]{anzeletti2021regularisation}. We reproduce it here for the reader's convenience. Let $K^{n}_t:=\int_0^t b^n(X^{n}_r)\, dr$. For $M>0$ and some $\delta \in (0,1+H(\gamma-d/p))$, let
\begin{align*}
A_M:=\{f \in \mathcal{C}_{[0,1]}: f(0)=0,\ |f(t)-f(s)|\leq M (t-s)^{\delta},\ \forall (s,t) \in \Delta_{0,1}\}.
\end{align*}
By the Arzel\`a--Ascoli theorem, $A_M$ is compact in $\mathcal{C}_{[0,1]}$. Applying Lemma~\ref{lem:apriori2} with $h=b^n$ gives a nonnegative random variable $Z^n$ such that $\mathbb{E}[Z^n]\leq C \|b^n\|_{\mathcal{B}_p^\gamma}(1+\|b^n\|^2_{\mathcal{B}_p^\gamma})$ and such that \eqref{eq:averagingX} is satisfied. Thus by Markov's inequality we get
\begin{align*}
\mathbb{P}(K^{n} \notin A_M) &\leq \mathbb{P}(\exists (s,t) \in \Delta_{0,1}:|K^{n}_{s,t}|> M (t-s)^\delta)\\
&\leq \mathbb{P}(Z^n> M)\\
&\leq C\, \sup_{n \in \mathbb{N}} \|b^n\|_{\mathcal{B}_p^\gamma} \, (1+\sup_{n \in \mathbb{N}}\|b^n\|^2_{\mathcal{B}_p^\gamma}) \, M^{-1}.
\end{align*}
Hence, the sequence $(K^{n})_{n \in \mathbb{N}}$ is tight in $\mathcal{C}_{[0,1]}$. So $(K^{n},B)_{n \in \mathbb{N}}$ is tight in $(\mathcal{C}_{[0,1]})^2$. Thus by Prokhorov's Theorem, there exists a subsequence $(n_k)_{k \in \mathbb{N}}$ such that $(K^{{n_k}},B)_{k \in \mathbb{N}}$ converges weakly in the space $(\mathcal{C}_{[0,1]})^2$, and so does $(X^{{n_k}},B)_{k \in \mathbb{N}}$.
\end{proof}

Finally, the stability is expressed in the following proposition, which extends \cite[Proposition 7.7]{anzeletti2021regularisation} to dimension $d\geq 1$.
\begin{proposition} \label{prop:stability}
Assume that \eqref{eq:assumptionweak} holds and let $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth bounded functions converging to $b$ in $\mathcal{B}_p^{\gamma-}$. Let $\hat{B}^n$ have the same law as $B$ and let $\hat{X}^n$ be the strong solution to \eqref{eq:SDE} with $\hat{B}^n$ in place of $B$, initial condition $X_0$ and drift $b^n$. Assume that there exist stochastic processes $\hat{X},\hat{B}: [0,1] \rightarrow \mathbb{R}^d$ such that $(\hat{X}^n,\hat{B}^n)_{n \in \mathbb{N}}$ converges to $(\hat{X},\hat{B})$ in $[\mathcal{C}_{[0,1]}(\mathbb{R}^d)]^2$ in probability. Then $\hat{X}$ fulfills \eqref{solution1} and \eqref{approximation2} from Definition~\ref{def:sol-SDE} and for any $m\in [2,\infty)$, there exists $C>0$ such that
\begin{align} \label{eq:regularity}
[\hat{X}-\hat{B}]_{\mathcal{C}_{[0,1]}^{1+H(\gamma-d/p)} L^{m,\infty} }\leq C \, (1 + \sup_{n \in \mathbb{N}} \|b^n\|^2_{\mathcal{B}_p^\gamma}) <\infty.
\end{align}
\end{proposition}

The proof is postponed to Appendix~\ref{app:stability}.

\subsection{Proof of Theorem \ref{th:WP}\textit{\ref{th:weakEx}}}\label{sec:proofEx}

Let $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth bounded functions converging to $b$ in $\mathcal{B}_p^{\gamma-}$. By Proposition~\ref{prop:tightness}, there exists a subsequence $(n_k)_{k \in \mathbb{N}}$ such that $(X^{n_k}, B)_{k \in \mathbb{N}}$ converges weakly in $(\mathcal{C}_{[0,1]}(\mathbb{R}^d))^2$. Without loss of generality, we assume that $(X^{n}, B)_{n \in \mathbb{N}}$ converges weakly. By the Skorokhod representation Theorem, there exists a sequence of random variables $(\hat{X}^{n},\hat{B}^n)_{n \in \mathbb{N}}$ defined on a common probability space $(\hat{\Omega},\hat{\mathcal{F}},\hat{\mathbb{P}})$, such that
\begin{align} \label{samelaw}
\text{Law}(\hat{X}^{n},\hat{B}^n)=\text{Law}(X^{n}, B), \ \forall n \in \mathbb{N},
\end{align}
and $(\hat{X}^{n},\hat{B}^n)$ converges a.s.
to some $(\hat{X},\hat{B})$ in $(\mathcal{C}_{[0,1]}(\mathbb{R}^d))^2$. As $X^{n}$ solves \eqref{eq:SDE} with drift $b^n$, we know by \eqref{samelaw} that $\hat{X}^{n}$ also solves \eqref{eq:SDE} with drift $b^n$ and $\hat{B}^n$ instead of $B$. As $X^{n}$ is a strong solution, we have that $X^{n}$ is adapted to $\mathbb{F}^B$. Hence by \eqref{samelaw}, we know that $\hat{X}^{n}$ is adapted to $\mathbb{F}^{\hat{B}^n}$ as the conditional laws of $\hat{X}^{n}$ and $X^{n}$ agree, and therefore it is a strong solution to \eqref{eq:SDE} with $\hat{B}^n$ instead of $B$. By Proposition~\ref{prop:stability}, we know that $\hat{X}$ fulfills \eqref{solution1} and \eqref{approximation2} from Definition~\ref{def:sol-SDE} with $\hat{B}$ instead of $B$ and it is adapted with respect to the filtration $\hat{\mathbb{F}}$ defined by $\hat{\mathcal{F}}_t:= \sigma(\hat{X}_{s},\hat{B}_{s},s \in [0,t])$. It remains to check that $\hat{B}$ is an $\hat{\mathbb{F}}$-fBm, which is completely analogous to the one-dimensional case treated in the proof of Theorem 2.8 in \cite{anzeletti2021regularisation}. Hence $\hat{X}$ is a weak solution. Finally, \eqref{eq:regularity} gives that
\begin{equation*}
[\hat{X}-\hat{B}]_{\mathcal{C}_{[0,1]}^{1+H(\gamma-d/p)} L^{m,\infty} }<\infty ,
\end{equation*}
which concludes the proof.

\subsection{Proof of Theorem \ref{th:WP}\textit{\ref{th:strongEx}}}\label{subsec:StrongEx}

Although Corollary \ref{cor:bn-choice} will only be proven in the next sections, we use it here to prove Theorem~\ref{th:WP}\textit{\ref{th:strongEx}}. Assuming \eqref{eq:cond-gamma-p-H}, we let $(X,B)$ and $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a weak solution to \eqref{eq:SDE} given by Theorem~\ref{th:WP}\textit{\ref{th:weakEx}}. On this probability space and with the same fBm $B$, we define the tamed Euler scheme $(X^{h,n})_{h>0,n\in \mathbb{N}}$.
As in Corollary \ref{cor:bn-choice}, we let $b^n = G_{\frac{1}{n}} b$, $n_h = \lfloor h^{-\frac{1}{1-\gamma+d/p}} \rfloor$ and consider the scheme $(X^{h,n_h})_{h \in (0,1)}$. First, observe that $X^{h,n_h}$ is $\mathbb{F}^B$-adapted. In view of \eqref{eq:boundsup}, $X^{h,n_h}_{t}$ converges to $X_{t}$ in $L^m$, for each $t\in [0,1]$. Hence $X_{t}$ is $\mathcal{F}_{t}^B$-measurable and $X$ is therefore a strong solution. As for the uniqueness, if $X$ and $Y$ are two strong solutions to \eqref{eq:SDE} with the same fBm $B$, such that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}}<\infty$ and $[Y-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}}<\infty$, then by Corollary~\ref{cor:bn-choice}, $X^{h,n_h}= Y^{h,n_h}$ approximates both $X$ and $Y$. So $X$ and $Y$ are modifications of one another. Since they are continuous processes, they are indistinguishable. This proves uniqueness in the class of solutions $X$ such that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}}<\infty$. In Appendix \ref{app:extend-uniqueness}, for any $\eta \in (0,1)$ and in the sub-critical regime $\gamma-d/p > 1-1/(2H)$, we extend the uniqueness result to the class of solutions $X$ such that $[X-B]_{\mathcal{C}^{H(1-\gamma+d/p)+\eta}_{[0,1]} L^{m,\infty}} < \infty$.

\section{Convergence of the tamed Euler scheme}\label{sec:overview-SDE}

Let $\gamma$ and $p$ satisfy \eqref{eq:cond-gamma-p-H} and let $b \in \mathcal{B}_p^\gamma$. By a Besov embedding, we have $\mathcal{B}_p^\gamma \hookrightarrow \mathcal{B}_q^{\gamma-\frac{d}{p}+\frac{d}{q}}$ for any $q \ge p$. Setting $\tilde{\gamma}=\gamma-\frac{d}{p}+\frac{d}{q}$ and $\tilde{p}=q$, we have $b \in \mathcal{B}_{\tilde{p}}^{\tilde{\gamma}}$ and $\gamma-d/p = \tilde{\gamma} - d/\tilde{p}$, so that \eqref{eq:cond-gamma-p-H} is still satisfied in $\mathcal{B}^{\tilde{\gamma}}_{\tilde{p}}$. Hence, considering a smaller $\gamma$ if needed, we can always assume without any loss of generality that $p$ is as large as we want.
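For instance, taking $q=2p$ in this embedding gives $b \in \mathcal{B}_{2p}^{\gamma-\frac{d}{2p}}$, and
\begin{align*}
\Big(\gamma-\frac{d}{2p}\Big) - \frac{d}{2p} = \gamma - \frac{d}{p} ,
\end{align*}
so the quantity $\gamma-d/p$ which governs \eqref{eq:cond-gamma-p-H}, and hence the rates of convergence, is unchanged while the integrability exponent has doubled.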
For the proof, we assume that $p \ge m$. This allows us in particular to apply the regularisation lemmas (see Proposition~\ref{prop:regfBm} and Section \ref{sec:proofs-SDE}).

\subsection{Proof of Theorem \ref{thm:main-SDE}}\label{sec:proof-mainth}

The first point, Theorem \ref{thm:main-SDE}$(a)$, on the regularity of the scheme $X^{h,n}$, is not proven here: it is established in Corollary~\ref{cor:bound-Khn} and follows from several technical lemmas presented in Section~\ref{sec:stochastic-sewing}. We now prove Theorem \ref{thm:main-SDE}$(b)$ and $(c)$. Let $(X,B)$ be a weak solution to \eqref{eq:SDE} defined on a filtered probability space $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$. On this probability space and with the same fBm $B$, we define the tamed Euler scheme $(X^{h,n})_{h>0,n\in \mathbb{N}}$. For all $t>0$, recall from \eqref{solution1} that $K_t := X_t - B_t - X_0$ and define
\begin{align}\label{eq:def-Khn}
K_{t}^n := \int_0^{t} b^n(X_r) \, d r \ \text{ and } \ K^{h,n}_t := \int_0^t b^n(X^{h,n}_{r_h}) \, d r .
\end{align}
With these notations in mind, we denote the error by
\begin{align*}
\mathcal{E}_t^{h,n} & := X_{t} - X^{h,n}_{t}, \quad t \ge 0.
\end{align*}
Let $0 \leq S \leq T \leq 1$.
The error is decomposed as
\begin{align}\label{eq:error-first-bound-SDE}
[\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} & \leq [ K - K^n ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [ K^{n}-K^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \nonumber \\
& = [ K - K^n ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [ E^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \nonumber \\
& \leq [ K - K^n ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [E^{1,h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} ,
\end{align}
where $\zeta=0$ in the sub-critical case, and for all $s < t$ we denote
\begin{equation}\label{eq:defE}
\begin{split}
E^{h,n}_{s,t} & := K_t^n - K^n_s - (K_t^{h,n} - K_s^{h,n}), \\
E_{s,t}^{1,h,n} & := \int_s^t b^n(X_0 + K_r + B_r) - b^n(X_0 + K^{h,n}_r + B_r) \, dr, \\
E_{s,t}^{2,h,n} & := \int_s^t b^n(X_0 + K^{h,n}_r + B_r) - b^n(X_0 + K^{h,n}_{r_h} + B_{r_h}) \, dr .
\end{split}
\end{equation}
We also denote
\begin{align}\label{eq:defepsilonhn}
\epsilon(h,n) := [K-K^n]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]}L^m} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}}.
\end{align}
In order to prove Theorem \ref{thm:main-SDE}$(b)$ and $(c)$, we will provide bounds on the quantities that appear in the right-hand side of \eqref{eq:error-first-bound-SDE}. The bound on $E^{1,h,n}$ is stated and proven in Section~\ref{sec:stochastic-sewing}: see Corollary~\ref{cor:bound-E1-SDE} for the sub-critical case and Proposition~\ref{prop:bound-E1-SDE-critic} for the critical case. The bound on $E^{2,h,n}$ is proven in Corollary~\ref{cor:newbound-E2} for both cases. We now prove the bounds on $K-K^n$ in both cases.

\paragraph{Bound on $K-K^n$.}

Let $k,n\in \mathbb{N}$. First, in the case $\gamma-d/p>1-1/(2H)$, we apply Proposition \ref{prop:regfBm}$(a)$ with $f=b^{k}-b^n$, $\tau=1/2+H$, $\beta=\gamma-1$ and $\psi=X-B$.
Using $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^m} < \infty$, it follows that for any $(s,t) \in \Delta_{S,T}$,
\begin{align*}
\|K_t^{k}-K_s^{k} - K_{t}^n + K_s^{n} \|_{L^m} & \leq C \| b^{k} - b^n \|_{\mathcal{B}_p^{\gamma-1}} (t-s)^{1+H(\gamma-1-\frac{d}{p})} .
\end{align*}
Hence $(K^{k}_t-K_s^{k})_{k \in \mathbb{N}}$ is a Cauchy sequence in $L^m(\Omega)$ and therefore it converges. We also know by definition of $X$ that $K^{k}_t-K_s^{k}$ converges in probability to $K_t-K_s$. Thus $K^{k}_t-K_s^{k}$ converges in $L^m$ to $K_t-K_s$. Now by the convergence of $b^{k}$ to $b$ in $\mathcal{B}_p^{\gamma-1}$, we get
\begin{align*}
\|K_t-K_s - K_{t}^n + K_s^{n} \|_{L^m} & \leq C \| b- b^n \|_{\mathcal{B}_p^{\gamma-1}} \, (t-s)^{1+H(\gamma-1-\frac{d}{p})} .
\end{align*}
Dividing by $(t-s)^{\frac{1}{2}}$ and taking the supremum over $(s,t)$ in $\Delta_{S,T}$ (recall that $\frac{1}{2} + H(\gamma-1-d/p) \ge 0$), we get that
\begin{align}\label{eq:proba-conv-SDE}
[K-K^{n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} & \leq C \| b - b^n \|_{\mathcal{B}_p^{\gamma-1}}.
\end{align}

\smallskip

Now in the critical case, i.e. with $\gamma-d/p=1-1/(2H)$ and $\gamma > 1-1/(2H)$, we will apply Proposition \ref{prop:regfBm}$(b)$ with $f=b^{k}-b^n$, $\beta=\gamma-1$ and $\psi=X-B$. Since $(b^n)$ converges to $b$ in $\mathcal{B}^{\gamma-}_{p}$, we have $\|b^k-b^n\|_{\mathcal{B}^\gamma_{p} }\leq 2 \|b\|_{\mathcal{B}^\gamma_{p}} \vee 1$, and therefore
\begin{align*}
\left| \log \frac{\| b^k - b^n \|_{\mathcal{B}_p^{\gamma-1}}}{\| b^k - b^n \|_{\mathcal{B}_p^{\gamma-}}} \right| & \leq \log( 2 \| b \|_{\mathcal{B}_p^\gamma} \vee 1) + | \log(\| b^k - b^n \|_{\mathcal{B}_p^{\gamma-1}}) | \\
& \leq C (1 + | \log(\| b^k - b^n \|_{\mathcal{B}_p^{\gamma-1}}) | ) .
\end{align*} Besides, $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^m} <\infty$, hence for any $(s,t) \in \Delta_{S,T}$, Proposition \ref{prop:regfBm}$(b)$ reads \begin{align*} \|K_t^{k}-K_s^{k} - K_{t}^n + K_s^{n} \|_{L^m} & \leq C \| b^{k} - b^n \|_{\mathcal{B}_p^{\gamma-1}} (1+|\log \| b^k - b^n \|_{\mathcal{B}_p^{\gamma-1}} | ) (t-s)^{\frac{1}{2}} . \end{align*} As in the sub-critical case, we deduce that \begin{align}\label{eq:proba-conv-SDE-critic} [K-K^{n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} & \leq C \| b - b^n \|_{\mathcal{B}_p^{\gamma-1}} (1+ |\log \| b - b^n \|_{\mathcal{B}_p^{\gamma-1}} |) . \end{align} \paragraph{Bound on $E^{1,h,n}$.} Recall that $\mathcal{E}^{h,n}= K-K^{h,n}$ and that $X$ is a weak solution constructed in Theorem~\ref{th:WP}$(a)$ that satisfies $[X-B]_{\mathcal{C}^{1/2+H}_{[S,T]} L^{m}}\leq [X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m, \infty}}<\infty$. In the sub-critical case, we have from Corollary \ref{cor:bound-E1-SDE} that there exists $C>0$ such that for any $(s,t)\in \Delta_{S,T}$, any $n \in \mathbb{N}$ and $h \in (0,1)$, \begin{align*} \|E^{1,h,n}_{s,t} \|_{L^{m}} & \leq C \Big( [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} + \|\mathcal{E}^{h,n}_{S} \|_{L^m} \Big) (t-s)^{1+ H (\gamma-1-\frac{d}{p})}. \end{align*} Divide by $(t-s)^{1/2}$ and take the supremum over $(s,t)\in \Delta_{S,T}$ to get \begin{align}\label{eq:bound-E1-SDE} [ E^{1,h,n} ]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} \leq C \Big( [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} + \|\mathcal{E}^{h,n}_{S} \|_{L^m} \Big) (T-S)^{\frac{1}{2}+ H (\gamma-1-\frac{d}{p})} . 
\end{align} In the critical case, for $\mathcal{D}$ a sub-domain of $(0,1) \times \mathbb{N}$ such that \eqref{eq:assump-bn-bounded} holds, Proposition~\ref{prop:bound-E1-SDE-critic} yields the existence of $\ell_{0}>0$ such that if $T-S\leq \ell_{0}$, then for any $(s,t)\in \Delta_{S,T}$, \begin{align*} \| E^{1,h,n}_{s,t}\|_{L^{m}} &\leq \mathbf{M} \, \bigg(1+ \Big|\log \frac{T^H \big(1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}\big)}{ \| \mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)}\Big| \bigg) \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (t-s) \nonumber\\ &\quad+ \mathbf{M} \, \Big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + [\mathcal{E}^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \Big)\, (t-s)^{\frac{1}{2}} . \end{align*} Since $\mathcal{D}$ satisfies \eqref{eq:assump-bn-bounded}, we have from Corollary \ref{cor:bound-Khn} that $$ \sup_{(h,n) \in \mathcal{D}} [K^{h,n}]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]}L^m} < \infty .$$ It follows that \begin{align*} \| E^{1,h,n}_{s,t}\|_{L^{m}} &\leq \mathbf{M} \, \Big(1+ \big|\log \big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)\big)\big| \Big) \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (t-s) \\ &\quad+ C \, \Big( (1+|\log T| (t-s)^{\frac{1}{2}}) \big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\big)+ [\mathcal{E}^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \Big)\, (t-s)^{\frac{1}{2}}. \end{align*} Now use that $1\geq T\geq t-s$ to deduce that $|\log T| (t-s)^{\frac{1}{2}}$ is bounded on the set $\{(s,t,T): T\in (0,1] \text{ and } s<t\leq T \}$. 
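To justify this boundedness explicitly (an elementary fact, recorded here for the reader's convenience), observe that for $0< t-s \leq T \leq 1$,
\begin{align*}
|\log T| \, (t-s)^{\frac{1}{2}} \leq T^{\frac{1}{2}} \, |\log T| \leq \sup_{x \in (0,1]} x^{\frac{1}{2}} |\log x| = 2 e^{-1} ,
\end{align*}
the supremum being attained at $x = e^{-2}$, as one checks by differentiating $x \mapsto -x^{\frac{1}{2}} \log x$.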
Since $\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} \leq \| \mathcal{E}_S^{h,n} \|_{L^m} + [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}}$, we get using $\mathbf{M} (t-s) \leq C (t-s)^{\frac{1}{2}}$, \begin{align*} \| E^{1,h,n}_{s,t}\|_{L^{m}} &\leq \mathbf{M} \, \big|\log \big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)\big) \big| \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (t-s) \\ &\quad+ C \, \Big(\|\mathcal{E}^{h,n}_{S} \|_{L^{m}} + [\mathcal{E}^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + \epsilon(h,n) \Big)\, (t-s)^{\frac{1}{2}}. \end{align*} Divide by $(t-s)^{1/2-\zeta}$ and take the supremum over $(s,t)\in \Delta_{S,T}$ to get \begin{equation}\label{eq:bound-E1-SDE-critic} \begin{split} [ E^{1,h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} &\leq \mathbf{M} \, \Big( \big|\log \big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)\big)\big| \Big) \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (T-S)^{\frac{1}{2}+\zeta} \\ &\quad+ C \, \Big(\|\mathcal{E}^{h,n}_{S} \|_{L^{m}} + [\mathcal{E}^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + \epsilon(h,n)\Big)\, (T-S)^{\zeta} . \end{split} \end{equation} \paragraph{Bound on $E^{2,h,n}$.} By Corollary \ref{cor:newbound-E2}, we have the following bound for $\varepsilon \in (0,\frac{1}{2})$, and $(s,t) \in \Delta_{S,T}$ \begin{align*} \| E^{2,h,n}_{s,t} \|_{L^m} \leq C \left( \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} + \|b^n\|_{\mathcal{C}^1} \|b^n\|_\infty h^{1-\varepsilon} \right) (t-s)^{\frac{1}{2}} . \end{align*} Dividing by $(t-s)^{\frac{1}{2}}$ and taking the supremum over $(s,t)$ in $\Delta_{S,T}$, we get \begin{align}\label{eq:bound-E2-SDE} [ E^{2,h,n} ]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} & \leq C \left( \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} + \|b^n\|_{\mathcal{C}^1} \|b^n\|_\infty h^{1-\varepsilon} \right) . 
\end{align} This is where we avoid using Girsanov's theorem and rely instead on a bound that involves the $\mathcal{C}^1$ norm of $b^n$. This simply comes from estimates when $t-s\leq h$ of the form $|\int_{s}^t f(\psi_{r}+B_{r}) - f(\psi_{r}+B_{r_{h}}) \, dr| \lesssim \|f\|_{\mathcal{C}^1}\, (t-s)\, h^{H-}$, at a scale where the discretised noise cannot regularise anymore. More rigorously, the previous bound is again obtained by a stochastic sewing argument. \paragraph{Conclusion in the sub-critical case.} Using \eqref{eq:bound-E1-SDE} in \eqref{eq:error-first-bound-SDE}, and recalling the definition of $\epsilon(h,n)$ in \eqref{eq:defepsilonhn}, we get \begin{align*} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} &\leq \epsilon(h,n) + C \Big( [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} + \| \mathcal{E}^{h,n}_{S} \|_{L^m} \Big) (T-S)^{\frac{1}{2}+ H (\gamma-1-\frac{d}{p})}. \end{align*} Hence for $T-S\leq (2C)^{-1/(1/2+H(\gamma-1-d/p))} =: \ell_{0}$, we get \begin{align}\label{eq:boundseminorm} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} \leq 2\epsilon(h,n) + \| \mathcal{E}^{h,n}_{S} \|_{L^m} . \end{align} Then the inequality \begin{align*} \|\mathcal{E}_{S}^{h,n}\|_{L^m} \leq \|\mathcal{E}_{S-\ell_{0}}^{h,n}\|_{L^m} + \ell_0^{\frac{1}{2}} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S-\ell_{0},S]} L^{m}} \end{align*} can be plugged into \eqref{eq:boundseminorm} and iterated until $S-k\ell_{0}$ is smaller than $0$ for $k\in \mathbb{N}$ large enough. It follows that \begin{align*} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} \leq C\epsilon(h,n) , \end{align*} and in view of \eqref{eq:defepsilonhn}, \eqref{eq:proba-conv-SDE} and \eqref{eq:bound-E2-SDE}, we obtain the result \eqref{eq:main-result-SDE} of Theorem~\ref{thm:main-SDE}$(b)$. 
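To spell out the previous iteration, one can argue forward in time: for $k \in \mathbb{N}$, write $e_{k} := \| \mathcal{E}^{h,n}_{k \ell_{0} \wedge 1} \|_{L^m}$ (a shorthand introduced only for this sketch). Applying \eqref{eq:boundseminorm} on the interval $[(k-1)\ell_{0}, k\ell_{0}]$ together with the inequality above yields
\begin{align*}
e_{k} \leq e_{k-1} + \ell_{0}^{\frac{1}{2}} \big( 2 \epsilon(h,n) + e_{k-1} \big) = \big( 1 + \ell_{0}^{\frac{1}{2}} \big)\, e_{k-1} + 2 \ell_{0}^{\frac{1}{2}}\, \epsilon(h,n) .
\end{align*}
Since $e_{0} = \| \mathcal{E}^{h,n}_{0} \|_{L^m} = 0$ and only $\lceil \ell_{0}^{-1} \rceil$ steps are needed to cover $[0,1]$, an induction gives $e_{k} \leq C\, \epsilon(h,n)$ for all such $k$, with $C$ depending only on $\ell_{0}$; plugging this back into \eqref{eq:boundseminorm} controls the seminorm on each subinterval, and hence on $[0,1]$.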
\paragraph{Conclusion in the critical case.} Using \eqref{eq:bound-E1-SDE-critic} in \eqref{eq:error-first-bound-SDE}, we get that if $T-S\leq \ell_{0}$, \begin{align*} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} &\leq [K-K^{n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \\ &\quad + \mathbf{M} \, \Big( \big|\log \big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)\big)\big| \Big) \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (T-S)^{\frac{1}{2}+\zeta} \\ &\quad+ C \, \Big(\|\mathcal{E}^{h,n}_{S} \|_{L^{m}} + [\mathcal{E}^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + \epsilon(h,n) \Big) \, (T-S)^\zeta . \end{align*} We observe that $[K-K^{n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \leq (T-S)^{\zeta} \, \epsilon(h,n)$. Let $\ell>0$ satisfy \begin{align}\label{eq:boundEll} \ell < ( C )^{-\frac{1}{\zeta}} \wedge 1 \wedge \ell_{0} . \end{align} Moving the term $[\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}}$ from the right-hand side to the left-hand side, we get for any $S<T$ such that $T-S\leq \ell$ \begin{align}\label{eq:1/2-zeta-bound} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} &\leq \frac{1+C}{1-C \ell^\zeta} (\epsilon(h,n) + \, \|\mathcal{E}^{h,n}_{S} \|_{L^{m}})\, (T-S)^{\zeta} \nonumber \\ &\quad + \frac{\mathbf{M}}{1-C \ell^\zeta} \, \big|\log \big(\|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)\big)\big| \, \Big( \|\mathcal{E}^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (T-S)^{\frac{1}{2}+\zeta} . 
\end{align} Hence denoting $C_1 = \frac{1+C}{1-C \ell^\zeta}$ and $\mathbf{C}_2 =\frac{\mathbf{M}}{1-C \ell^\zeta} $, we have for $T-S \leq \ell$, \begin{equation}\label{eq:boundIncE} \begin{split} \| \mathcal{E}^{h,n}_{T} - \mathcal{E}^{h,n}_{S} \|_{L^m} &\leq C_1 \, \left(\epsilon(h,n) +\|\mathcal{E}^{h,n}\|_{L^\infty_{[S,T]}L^m} \right) \, (T-S)^{\frac{1}{2}} \\ &\quad + \mathbf{C}_2 \, \Big( \|\mathcal{E}^{h,n}\|_{L^\infty_{[S,T]}L^m}+\epsilon(h,n) \Big) \, \big| \log \big( \|\mathcal{E}^{h,n}\|_{L^\infty_{[S,T]}L^m} + \epsilon(h,n) \big)\big| \, (T-S). \end{split} \end{equation} To conclude, we will rely on the following technical lemma, which is a quantitative analogue of \cite[Prop. 3.6]{athreya2020well} and \cite[Prop. 6.1]{anzeletti2021regularisation} (see in particular the use of Equation~(6.9) in the latter result). We postpone the proof of this lemma to Subsection~\ref{subsec:proof-quantitativeuniq}. \begin{lemma}\label{lem:rate-critical} Let $(E, \|\cdot\|)$ be a normed vector space. For $\ell, C_{1}, C_{2}>0$ and $\eta\in(0,1)$ we consider the set $\mathcal{R}(\eta,\ell, C_{1}, C_{2})$ of functions defined from $[0,1]$ to $E$ characterised as follows: $f\in \mathcal{R}(\eta,\ell, C_{1}, C_{2})$ if $f$ is bounded, $f_{0}=0$ and for any $s\leq t$ in $[0,1]$ such that $t-s \leq \ell$, \begin{equation}\label{eq:boundIncf} \begin{split} \|f_{t} - f_{s}\| &\leq C_1 \, (\|f\|_{L^\infty_{[s,t]}E} + \eta) \, (t-s)^{\frac{1}{2}} \\ &\quad + C_2 \, ( \|f\|_{L^\infty_{[s,t]}E}+\eta ) \, \big| \log \big( \|f\|_{L^\infty_{[s,t]}E} + \eta \big)\big| \, (t-s). \end{split} \end{equation} Then for any $\delta\in(0, e^{-C_{2}})$, there exists $\bar{\eta} \equiv \bar{\eta}(C_{1},C_{2},\ell,\delta)$ such that for any $\eta<\bar{\eta}$ and any $f\in \mathcal{R}(\eta,\ell, C_{1}, C_{2})$, \begin{equation*} \| f \|_{L^\infty_{[0,1]} E} \leq \eta^{e^{-C_{2}}-\delta} . \end{equation*} \end{lemma} Let $\delta \in (0, \frac{e^{-\mathbf{M}}}{4})$. 
Recalling that $\mathbf{C}_2 = \frac{\mathbf{M}}{1-C \ell^\zeta}$, let us also choose $\ell$ which still satisfies \eqref{eq:boundEll} and which is small enough in order to have $e^{-\mathbf{C}_2} \geq e^{-\mathbf{M}}-\delta \ge \delta$. We now apply Lemma~\ref{lem:rate-critical} with $E= L^m$, $\ell=\ell$, $C_1=C_1$, $C_2 = \mathbf{C}_2$. By \eqref{eq:boundIncE}, we have that $\mathcal{E}^{h,n}$ belongs to $\mathcal{R}(\epsilon(h,n), \ell, C_1, \mathbf{C}_2)$ for $(h,n)\in \mathcal{D}$. Therefore, there exists $\bar{\epsilon} \equiv \bar{\epsilon} (C_1, \mathbf{C}_2, \ell, \delta)$ such that if $\epsilon(h,n) < \bar{\epsilon}$, we have \begin{align*} \| \mathcal{E}^{h,n} \|_{L^\infty_{[0,1]} L^m} \leq \epsilon(h,n)^{e^{-\mathbf{C}_2}-\delta} \leq \epsilon(h,n)^{e^{-\mathbf{M}}-2\delta} . \end{align*} Let $\epsilon \equiv \epsilon(C_1,\mathbf{M},\ell, \delta, \zeta) < \bar{\epsilon} \wedge 1 $ such that $\epsilon^{e^{-\mathbf{M}}-2\delta} < \frac{e^{-1}}{2}$. Then, if $\epsilon(h,n) < \epsilon$, we have $$\| \mathcal{E}^{h,n} \|_{L^\infty_{[0,1]}L^m} + \epsilon(h,n) \leq \epsilon(h,n)^{e^{-\mathbf{M}}-2\delta} + \epsilon(h,n) < 2 \epsilon(h,n)^{e^{-\mathbf{M}}-2 \delta} < e^{-1}.$$ Since $x \mapsto x | \log(x) |$ is increasing over $(0, e^{-1})$, in view of \eqref{eq:1/2-zeta-bound}, we have that over any interval $I$ of size $\ell$, \begin{align*} [ \mathcal{E}^{h,n} ]_{ \mathcal{C}^{\frac{1}{2}-\zeta}_{I} L^m} \leq C\, \epsilon(h,n)^{(e^{-\mathbf{M}}-2\delta)}\, (1+|\log(\epsilon(h,n))| ) . \end{align*} Since $\ell$ is fixed independently of $\epsilon(h,n)$, summing at most $\frac{1}{\ell}$ of these bounds, we get that if $\epsilon(h,n) < \epsilon$ \begin{align*} [ \mathcal{E}^{h,n} ]_{ \mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^m} \leq C\, \epsilon(h,n)^{(e^{-\mathbf{M}}-2\delta)}\,(1+|\log(\epsilon(h,n))| ) \leq C\, \epsilon(h,n)^{(e^{-\mathbf{M}}-4\delta)} . 
\end{align*} From Corollary \ref{cor:bound-Khn} and the property $[K]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}} < \infty$, we have that under \eqref{eq:assump-bn-bounded}, $\sup_{(h,n)\in \mathcal{D}} [ \mathcal{E}^{h,n} ]_{ \mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^m} \leq [K]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}} + \sup_{(h,n)\in \mathcal{D}} [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}} <\infty$. It follows that there exists a constant $C$ such that for all $(h,n) \in \mathcal{D}$, \begin{align*} [ \mathcal{E}^{h,n} ]_{ \mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^m} \leq C\, \epsilon(h,n)^{(e^{-\mathbf{M}}-4\delta)} . \end{align*} In view of \eqref{eq:defepsilonhn}, we obtain \eqref{eq:main-result-SDE-critic} of Theorem~\ref{thm:main-SDE}$(c)$. \begin{remark} The real-valued function $g_{t}= \eta^{e^{-C t}} - \eta$ solves the differential equation \begin{align*} g'_{t} = -C\, (g_{t} + \eta)\, \log(g_{t}+\eta), \ \forall t \in [0,1] . \end{align*} Indeed, $\log(g_{t}+\eta) = e^{-Ct} \log \eta$, so that $g'_{t} = -C e^{-Ct} \log(\eta)\, \eta^{e^{-Ct}} = -C\, (g_{t}+\eta) \log(g_{t}+\eta)$. Thus the bound of Lemma~\ref{lem:rate-critical} seems close to optimal. The term with the factor $C_1$ is only a small perturbation, which is why we do not keep track of the second constant in the bound \eqref{eq:bound-E1-SDE-critic} on $E^{1,h,n}$. \end{remark} \subsection{Proof of Lemma~\ref{lem:rate-critical}}\label{subsec:proof-quantitativeuniq} Let $\delta \in (0,\frac{1}{2} e^{-C_2})$. There exists $a > 1$ such that $e^{-C_2 \frac{a\log a}{a-1}} = e^{-C_2}-\delta$. Let $\varepsilon \equiv \varepsilon(C_2,\delta) \in (0,1)$ be small enough that $e^{-C_2 \frac{a\log a}{(a-1)(1-\varepsilon)}} \geq e^{-C_2}-2 \delta$. Denote also $\alpha := 1- e^{-C_2 \frac{a\log a}{(a-1)(1-\varepsilon)}}$. 
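For completeness, the existence of the number $a$ used at the beginning of this proof follows from the intermediate value theorem: the map $a \mapsto \frac{a \log a}{a-1}$ is continuous on $(1,\infty)$ and satisfies
\begin{align*}
\lim_{a \to 1^{+}} \frac{a \log a}{a-1} = 1 \quad \text{and} \quad \lim_{a \to \infty} \frac{a \log a}{a-1} = +\infty ,
\end{align*}
so that $a \mapsto e^{-C_2 \frac{a \log a}{a-1}}$ takes every value in $(0, e^{-C_2})$, in particular the value $e^{-C_2}-\delta$.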
Now for $\eta \in (0,1)$ and $f\in \mathcal{R}(\eta,\ell,C_{1},C_{2})$, define the following increasing sequence: $t_0=0$ and for $k \in \mathbb{N}$, \begin{equation*} t_{k+1} = \begin{cases} \inf \{t>t_{k} : ~ \eta+ \|f_t\| \ge a^{k+1}\, \eta \} \wedge 1 & \text{ if } t_{k}<1,\\ 1 & \text{ if } t_{k}=1 , \end{cases} \end{equation*} with the convention that $\inf \emptyset = +\infty$. In view of \eqref{eq:boundIncf} and of the boundedness of $f$, the mapping $t\mapsto \|f_t\|$ is continuous. In particular, and by definition of the sequence $(t_k)$, we deduce that for any $k$, \begin{align*} \|f\|_{L^\infty_{[0,t_{k}]}E} \leq a^k\eta - \eta \leq a^k \eta. \end{align*} Let \begin{equation}\label{eq:defN} N = \left\lfloor - \alpha \frac{\log(\eta)}{\log(a)} \right\rfloor -1 , \end{equation} and let $\bar{\eta}_{0} \equiv \bar{\eta}_{0}(C_{2},\delta)$ be such that for $\eta<\bar{\eta}_{0}$, we have $N\geq 1$. We shall prove the following statement: \begin{align}\label{eq:statementEpsBar} \mbox{There exists $\bar{\eta} \equiv \bar{\eta}(C_1,C_{2},\ell, \delta)$ such that for any } \eta<\bar{\eta} \mbox{ and } f\in \mathcal{R}(\eta,\ell,C_{1},C_{2}), ~ t_{N+1}=1. \end{align} Observe that if \eqref{eq:statementEpsBar} holds true, then for $\eta<\bar{\eta}$ and $f\in \mathcal{R}(\eta,\ell,C_{1},C_{2})$, we have \begin{align*} \| f \|_{L^\infty_{[0,1]} E} \leq a^{N+1} \eta \leq \eta^{1-\alpha} \leq \eta^{(e^{-C_2}-2\delta)} \end{align*} and the lemma is proven. Let us now prove the statement \eqref{eq:statementEpsBar}. Fix $\eta < \bar{\eta}_{0}$ and $f\in \mathcal{R}(\eta,\ell,C_{1},C_{2})$. Let \begin{align*} N_{0} = \inf\left\{ k\in \mathbb{N}:~ t_{k+1} = 1 \right\} . \end{align*} We aim to prove that $N_{0}\leq N$, so that we will have indeed $t_{N+1} = 1$. First, if $N_{0} = 0$, we obviously have $N\geq N_{0}$. Assume now that $N_{0}\geq 1$. 
For any $k \leq N_{0}-1$, we have $\eta+\|f_{t_{k}}\| = a^k\, \eta$ and $\eta+\|f_{t_{k+1}}\| = a^{k+1}\, \eta$, which implies that $ \|f_{t_{k+1}} - f_{t_{k}} \|\geq (a^{k+1}-a^k) \eta$. Consider two cases: \begin{enumerate}[label=(\arabic*), labelwidth=!, labelindent=\parindent] \item If $t_{k+1}-t_k \leq \ell$, then one can apply \eqref{eq:boundIncf}, using $\| f_{t_{k+1}} \| = \| f \|_{L^\infty_{[t_k,t_{k+1}]} E}$, to get \begin{align*} (a^{k+1}-a^k) \eta &\leq C_1 \, a^{k+1} \eta \, (t_{k+1}-t_{k})^{\frac{1}{2}} + C_2 \, a^{k+1}\, \eta \, |\log (a^{k+1} \, \eta)| \, (t_{k+1}-t_{k}). \end{align*} \item If $t_{k+1}-t_{k} > \ell$, then we split the interval $[t_k, t_{k+1}]$ into at most $ \lfloor \frac{1}{\ell} \rfloor +1$ intervals of length at most $\ell$, that we denote $[\beta^{j}_k, \beta^{j+1}_k]$. We can apply \eqref{eq:boundIncf} over each such interval to get that \begin{align*} (a^{k+1}-a^k) \eta & \leq C_1 \, \sum_{j=0}^{\lfloor \frac{1}{\ell} \rfloor} (\|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E} + \eta) \, (\beta^{j+1}_k-\beta^{j}_k)^{\frac{1}{2}} \\ &\quad + C_2 \sum_{j=0}^{\lfloor \frac{1}{\ell} \rfloor} \, (\|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E} + \eta) \, |\log (\|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E} + \eta) | \,(\beta^{j+1}_k-\beta^{j}_k). \end{align*} By definition of the sequence $(t_k)_{k\in \mathbb{N}}$, we know that $\eta + \|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E} \leq a^{k+1} \eta$. Moreover for $k\leq N$, we have $a^{k+1} \eta \leq a^{N+1} \eta \leq \eta^{e^{-C_2}-2\delta}$. Therefore, for $\bar{\eta}_1 \equiv \bar{\eta}_1 (C_{2},\delta) \leq \bar{\eta}_{0}$ small enough, we have that $a^{k+1} \eta \leq e^{-1}$ for any $\eta < \bar{\eta}_1$ and $k\leq N$. 
Since the mapping $x\mapsto x\, |\log x|$ is nondecreasing on the interval $[0,e^{-1}]$, we get that $$ (\|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E}+\eta) \, |\log (\|f \|_{L^\infty_{[\beta^{j}_k,\beta^{j+1}_k]}E}+\eta) | \leq a^{k+1} \eta\, | \log ( a^{k+1} \eta) |. $$ Then, applying the Cauchy-Schwarz inequality to the first sum, we write \begin{align*} (a^{k+1}-a^k) \eta & \leq C_1 \, a^{k+1} \eta \, \sqrt{\frac{1}{\ell}} (t_{k+1}-t_k)^{\frac{1}{2}} + C_2 \, a^{k+1}\, \eta \, |\log (a^{k+1} \eta) | \,(t_{k+1}-t_k). \end{align*} \end{enumerate} Hence in both cases ($t_{k+1}-t_k \leq \ell$ or $t_{k+1}-t_k > \ell$), for any $\eta < \bar{\eta}_1$, we have \begin{align*} 1 &\leq \frac{C_1}{\sqrt{\ell}} \frac{a}{a-1}\, (t_{k+1}-t_{k})^{\frac{1}{2}} + C_2 \frac{a}{a-1} \, |\log (a^{k+1} \eta) | \, (t_{k+1}-t_{k}) . \end{align*} Notice that the polynomial $C_2 \frac{a}{a-1} \, |\log (a^{k+1} \eta) | \, X^2 + \frac{C_1}{\sqrt{\ell}} \frac{a}{a-1} \, X -1$ has only one non-negative root. Thus \begin{align*} (t_{k+1}-t_{k})^{\frac{1}{2}} \geq \frac{-\frac{a}{a-1} \frac{C_1}{\sqrt{\ell}} + \sqrt{(\frac{a}{a-1})^2 \frac{C_1^2}{\ell} + 4 \frac{C_2 a}{a-1} |\log (a^{k+1} \eta) | }}{2 \frac{C_2 a}{a-1} |\log (a^{k+1} \eta) | } . \end{align*} Then we have \begin{align*} (t_{k+1}-t_{k}) \geq \frac{2\left( \frac{C_1 a}{\sqrt{\ell}(a-1)}\right)^2 + 4 \frac{C_2 a}{a-1} |\log (a^{k+1} \eta) | -\frac{2 C_1 a}{\sqrt{\ell}(a-1)} \sqrt{\left(\frac{C_1 a}{\sqrt{\ell}(a-1)}\right)^2 + 4 \frac{C_2 a}{a-1} |\log (a^{k+1} \eta) | }}{ \left( \frac{2 C_2 a}{a-1}\right)^2 |\log (a^{k+1} \eta) |^2 } . 
\end{align*} Using the notation $C_a = \frac{C_2 a}{a-1}$ and the inequality $\sqrt{x+y} \leq \sqrt{x}+\sqrt{y}$ for $x,y \geq 0$, we get that for any $\eta < \bar{\eta}_1$, \begin{align} (t_{k+1}-t_{k}) & \geq \frac{C_1^2}{2 C_2^2 \ell |\log (a^{k+1} \eta) |^2 } + \frac{1}{C_a |\log (a^{k+1} \eta) |} - \frac{C_1^2}{2 C_2^2 \ell |\log (a^{k+1} \eta) |^2 } - \frac{ C_1}{ (\frac{a}{a-1})^{\frac{1}{2}} C_2^{\frac{3}{2}} \sqrt{\ell}\, |\log (a^{k+1} \eta) |^{\frac{3}{2}} } \nonumber \\ & \geq \frac{1}{C_a |\log (a^{k+1} \eta) |} - \frac{C_1}{C_2^{\frac{3}{2}} \sqrt{\ell}\, |\log (a^{k+1} \eta) |^{\frac{3}{2}} } \label{eq:upperb-tk}. \end{align} Now we will show that for $N$ defined in \eqref{eq:defN}, the sum from $0$ to $N$ of the right-hand side of \eqref{eq:upperb-tk} is larger than $1$, which implies that $N_0 \leq N$ since $\sum_{k=0}^{N_0} (t_{k+1}-t_k) = 1$. Let us start with the second term in the above inequality. Notice that for $k \leq N$, we always have $|\log (a^{k+1} \eta) | = |\log(\eta)| - (k+1) \log(a) $. Thus we get \begin{align*} \sum_{k=0}^{N-1} \frac{1}{|\log (a^{k+1} \eta) |^{\frac{3}{2}} } & \leq \int_0^{N} \frac{1}{ |\log (a^{x+1} \eta) |^{\frac{3}{2}} } \, d x \\ & = \frac{2}{\log(a)} \left( -\frac{1}{\sqrt{|\log (a \eta) | }} + \frac{1}{\sqrt{|\log (a^{N+1} \eta) |}} \right). \end{align*} We have $|\log (a^{N+1} \eta) | = |\log(\eta)| - \lfloor \alpha \frac{|\log(\eta)|}{\log(a)} \rfloor \log(a) \ge (1-\alpha) | \log(\eta) | = e^{-C_2 \frac{a\log a}{(a-1)(1-\varepsilon)} } | \log(\eta) |$. So we have $ |\log (a^{N+1} \eta) | \rightarrow \infty$ as $\eta \rightarrow 0$ and therefore, \begin{align*} \lim_{\eta \rightarrow 0} \sum_{k=0}^{N} \frac{1}{ |\log (a^{k+1} \eta) |^{\frac{3}{2}} } &\leq \lim_{\eta \rightarrow 0} \frac{2}{\log(a)} \Big( -\frac{1}{\sqrt{|\log (a \eta) | }} + \frac{1}{\sqrt{|\log (a^{N+1} \eta) |}} \Big) + \frac{1}{|\log (a^{N+1} \eta) |^{\frac{3}{2}} }\\ & =0 . 
\end{align*} On the other hand, \begin{align*} \sum_{k=0}^{N} \frac{1}{C_a |\log (a^{k+1} \eta) |} & \ge \frac{1}{C_a} \int_{-1}^{N-1} \frac{1}{|\log (a^{x+1} \eta) |} \, d x = \frac{1}{\log(a) C_a} \log \left( \frac{|\log ( \eta) |}{|\log (a^{N} \eta) |} \right) \\ & = \frac{1}{\log(a) C_a} \log \left( \frac{|\log(\eta)|}{(1-\alpha) | \log(\eta)| + \alpha | \log(\eta)|- N \log(a) } \right) . \end{align*} We have $N+1 = \lfloor \frac{\alpha | \log(\eta) |}{\log(a)} \rfloor \ge \frac{\alpha | \log(\eta) |}{\log(a)} - 1$, thus $N \log(a) + 2 \log(a) \ge \alpha | \log(\eta) |$. Hence, \begin{align*} \sum_{k=0}^{N} \frac{1}{C_a |\log (a^{k+1} \eta) |} & \ge \frac{1}{\log(a) C_a} \log \left( \frac{|\log(\eta)|}{(1-\alpha) | \log(\eta)| + 2\log(a) } \right) . \end{align*} The right-hand side converges to $ \log(1/(1-\alpha))/ (C_a \log(a) )$ as $\eta$ goes to 0. Hence, going back to \eqref{eq:upperb-tk} and summing over $k \in \llbracket 0, N \rrbracket$, we know that there exists $\bar{\eta} \equiv \bar{\eta} (C_1,C_2, \ell, \delta) \leq \bar{\eta}_{1}$ such that for $\eta < \bar{\eta}$, we have \begin{align*} \sum_{k=0}^{N} (t_{k+1}-t_k) \geq \frac{1}{C_a \log(a)} \log\left( \frac{1}{1-\alpha} \right) (1-\varepsilon) = 1 = \sum_{k=0}^{N_0} (t_{k+1}-t_k) . \end{align*} It follows that $N_0 \leq N$ and thus $t_{N+1}=1$. Hence \eqref{eq:statementEpsBar} is true and we conclude that for $\eta < \bar{\eta}$, we have $\| f \|_{L^\infty_{[0,1]} E} \leq \eta^{e^{-C_2}-2 \delta}$. \subsection{Proof of Corollary \ref{cor:bn-choice}}\label{subsec:Cor2.5} We will do the computations with $n_h = \lfloor h^{-\alpha} \rfloor $ for some $\alpha>0$ and prove that the upper bound on $ [\mathcal{E}^{h,n_{h}}]_{\mathcal{C}^{1/2}_{[0,1]} L^m}$ given by Theorem \ref{thm:main-SDE} is minimised for $\alpha = 1/(1-\gamma+d/p)$. 
First, the inequalities \eqref{eq:bn-inf} and \eqref{eq:bn-C1} imply that $b^{n_h}$ satisfies \eqref{eq:assump-bn-bounded} with $\mathcal{D}=\{ (h,n_h) , h \in (0,1/2) \}$, any $\eta \in (0,H)$ and $\alpha \leq (2(H-\eta)+1)/(1-\gamma+d/p)$. Therefore, we deduce \eqref{eq:unifscheme} from Theorem \ref{thm:main-SDE}$(a)$. \paragraph{The sub-critical case.} In view of Lemma~\ref{lem:reg-S}, $b^{n_{h}}$ satisfies \eqref{eq:bn-b}, \eqref{eq:bn-inf}, \eqref{eq:bn-C1}. Then the result of Theorem \ref{thm:main-SDE}$(b)$ reads \begin{align*} [\mathcal{E}^{h,n_{h}}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^m} & \leq C \Big( \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}} + \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}(\gamma-\frac{d}{p})} h^{\frac{1}{2}-\varepsilon} + \lfloor h^{-\alpha} \rfloor ^{\frac{1}{2}-(\gamma-\frac{d}{p} )} h^{1-\varepsilon} \Big) . \end{align*} Since $-\frac{1}{2}(\gamma-\frac{d}{p})>0$ and $\frac{1}{2}-(\gamma-\frac{d}{p} )>0$, we have \begin{align*} \lfloor h^{-\alpha} \rfloor ^{\frac{1}{2}-(\gamma-\frac{d}{p} )} \leq h^{-\frac{\alpha}{2}} h^{\alpha(\gamma-\frac{d}{p} )} \text{ and } \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}(\gamma-\frac{d}{p})} \leq h^{\frac{\alpha}{2}(\gamma-\frac{d}{p})} . \end{align*} Moreover, since $ h \in (0, \frac{1}{2})$ and $ \lfloor h^{-\alpha} \rfloor > h^{-\alpha} -1$, we have \begin{align*} \lfloor h^{-\alpha} \rfloor^{-\frac{1}{2}} & \leq (1-h^{\alpha})^{-\frac{1}{2}} h^{\frac{\alpha}{2}} \leq \left( 1-\frac{1}{2^\alpha} \right)^{-\frac{1}{2}} h^{\frac{\alpha}{2}} \leq C h^{\frac{\alpha}{2}} . \end{align*} It follows that \begin{align*} [\mathcal{E}^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^m} & \leq C \Big(h^{\frac{\alpha}{2}} + h^{\frac{\alpha}{2}(\gamma-\frac{d}{p} )} h^{\frac{1}{2}-\varepsilon} + h^{-\frac{\alpha}{2}} h^{\alpha(\gamma-\frac{d}{p} )} h^{1-\varepsilon}\Big). \end{align*} Now we optimise over $\alpha$. 
Introduce the following functions: \begin{align*} f_1(\alpha) = \frac{\alpha}{2} ,\quad f_2(\alpha) = \frac{\alpha}{2} \left( \gamma-\frac{d}{p} \right)+\frac{1}{2} ~~\mbox{and}~~ f_3(\alpha) & = \left( \gamma-\frac{d}{p}-\frac{1}{2} \right) \, \alpha+1 ,\quad \alpha>0. \end{align*} Observe that $f_1$ is increasing and $f_2,f_3$ are decreasing. Moreover, we have \begin{align}\label{eq:scaling} f_1(\alpha)=f_2(\alpha)=f_3(\alpha) \Leftrightarrow \alpha = \alpha^\star := \frac{1}{1-\gamma+d/p} . \end{align} It follows that the error is minimised at $\alpha=\alpha^\star$. Let $n_h =\lfloor h^{-\alpha^\star} \rfloor$. This yields a rate of convergence of order $ 1/(2(1-\gamma+d/p))-\varepsilon$, which proves \eqref{eq:rate1}. \paragraph{The critical case.} Using \eqref{eq:bn-b}, \eqref{eq:bn-inf} and \eqref{eq:bn-C1}, the result of Theorem \ref{thm:main-SDE}$(c)$ reads \begin{align*} [\mathcal{E}^{h,n_{h}}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^m} \leq C \Big( \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}} (1+|\log( \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}})|) + \lfloor h^{-\alpha} \rfloor ^{-\frac{1}{2}(\gamma-\frac{d}{p})} h^{\frac{1}{2}-\varepsilon} + \lfloor h^{-\alpha} \rfloor ^{\frac{1}{2}-(\gamma-\frac{d}{p} )} h^{1-\varepsilon} \Big)^{e^{-\mathbf{M}}-\delta} . \end{align*} Optimising over $\alpha$ again, we find $\alpha^\star = 2H$. This yields \begin{align*} [\mathcal{E}^{h,n_{h}}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[0,1]} L^m} \leq C \Big( h^{H-\varepsilon} |\log(h)| \Big)^{e^{-\mathbf{M}}-\delta} , \end{align*} which proves \eqref{eq:rate1-critic}. \subsection{Proof of Corollary \ref{cor:gama=d/p}} Let $\varepsilon \in (0, 1/2)$, then fix $\eta \in (0, 1/(2H)-1)$ and $\delta \in (0,\frac{1}{2})$ such that $\eta + \delta = \varepsilon$. 
Then $b$ also belongs to $\mathcal{B}_\infty^{-\eta}$ and Theorem \ref{th:WP}$(b)$ states that there exists a strong solution $X$ to \eqref{eq:SDE} which satisfies $X-B \in \mathcal{C}_{[0,T]}^{1/2+H} L^{m, \infty}$, which is pathwise unique in the class of solutions that satisfy $X-B \in \mathcal{C}_{[0,T]}^{H+\eta} L^{2, \infty}$. To prove the second part of the corollary, apply Theorem~\ref{thm:main-SDE}$(b)$ with $\gamma= -\eta$, $p=\infty$ and $\varepsilon=\delta$, to get that for $\mathcal{D}$ satisfying \eqref{eq:assump-bn-bounded}, we have $\sup_{(h,n) \in \mathcal{D} } [X^{h,n}-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^m} < \infty$. Moreover, noting that $\| b^n-b \|_{\mathcal{B}_\infty^{-\eta-1}}\leq C\| b^n-b \|_{\mathcal{B}_\infty^{-1}}$, it follows that \begin{align*} [X - X^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} & \leq C \left( \| b^n-b \|_{\mathcal{B}_\infty^{-1}} + \|b^n\|_\infty h^{\frac{1}{2}-\delta} + \|b^n\|_{\mathcal{C}^1} \|b^n\|_\infty h^{1-\delta} \right) . \end{align*} Now take $n_h=\lfloor h^{-\alpha}\rfloor$ and $b^{n_h}= G_{1/n_h} b$ for some $\alpha>0$. Using \eqref{eq:bn-b}, \eqref{eq:bn-inf} and \eqref{eq:bn-C1} as in Subsection~\ref{subsec:Cor2.5} leads to $\sup_{h \in (0,1/2)} [X^{h,n_h}-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^m} < \infty$ and \begin{align*} [X - X^{h,n_h}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} & \leq C \left( h^{\frac{\alpha}{2}} + h^{\frac{1}{2}-\delta-\frac{\alpha\eta}{2}} + h^{1-\delta-\frac{\alpha}{2}-\alpha\eta} \right) . \end{align*} Optimising over $\alpha$ as before, we find $\alpha^\star = 1/(1+\eta)$, which yields a rate of convergence of order $\frac{1}{2(1+\eta)}-\delta$. Since $\frac{1}{2(1+\eta)} \geq 1/2 - \eta$, we finally obtain \begin{align*} [X - X^{h,n_h}]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} \leq C\, h^{\frac{1}{2}-\eta-\delta} = C\, h^{\frac{1}{2}-\varepsilon}. 
\end{align*} \section{Regularisation effect of fBm and discrete-time fBm}\label{sec:stochastic-sewing} In this section, $X$ always denotes a weak solution to \eqref{eq:SDE} with drift $b\in \mathcal{B}^\gamma_{p}$, with $\gamma \in \mathbb{R}$ and $p \in [1,\infty]$ satisfying \eqref{eq:cond-gamma-p-H}. For such $X$, recall that the process $K$ is defined by \eqref{solution1}. Let $(b^n)_{n \in \mathbb{N}}$ be a sequence of smooth functions that converges to $b$ in $\mathcal{B}_p^{\gamma-}$. For $n \in \mathbb{N}$ and $h \in (0,1)$, recall that $X^{h,n}$ denotes the tamed Euler scheme \eqref{def:EulerSDE} and that the process $K^{h,n}$ is defined by \eqref{eq:def-Khn}. \subsection{Regularisation in the strong well-posedness regime}\label{sec:proofs-SDE} In this subsection, we state and prove the bound on $E^{1,h,n}$, first in the sub-critical regime (Proposition~\ref{prop:bound-E1-SDE} and Corollary~\ref{cor:bound-E1-SDE}), then in the critical regime (Proposition~\ref{prop:bound-E1-SDE-critic}). Recall that by a Besov embedding, $\mathcal{B}_p^\gamma \hookrightarrow \mathcal{B}_{\tilde{p}}^{\gamma-d/p+d/\tilde{p}}$ for any $\tilde{p} \ge p$. Setting $\tilde{\gamma}=\gamma-d/p+d/\tilde{p}$, we have $b \in \mathcal{B}_{\tilde{p}}^{\tilde{\gamma}}$ and $\gamma-d/p = \tilde{\gamma} - d/\tilde{p}$. Hence, without loss of generality, we may assume that $p$ is as large as needed. In this subsection, $p \ge m \geq 2$. Before giving the main estimates of this section, we will need the following corollary of Lemma~\ref{lem:1streg}. \begin{corollary} \label{cor:4inc} Let $\beta \in (-1/(2H),0)$ such that $\beta-d/p \in (-1/H,0)$. Let $m \in [2, \infty)$ and assume that $p\in [m,+\infty]$. Let $\lambda, \lambda_1, \lambda_2 \in (0,1]$ and assume that $\beta>-1/(2H)+\lambda$ and $\beta>-1/(2H)+\lambda_1+\lambda_2$. 
There exists a constant $C>0$ such that for any $f \in \mathcal{C}_b^\infty(\mathbb{R}^d,\mathbb{R}^d) \cap \mathcal{B}_p^\beta$, any $0\leq s \leq u \leq t \leq 1$, any $\mathcal{F}_s$-measurable random variables $\kappa_1,\kappa_2 \in L^m$ and any $\mathcal{F}_u$-measurable random variables $\kappa_3, \kappa_4 \in L^m$, we have \begin{align*} \Big\|\int_u^t &\left(f(B_r+\kappa_1)-f(B_r+\kappa_2)-f(B_r+\kappa_3)+f(B_r+\kappa_4) \right)dr \Big\|_{L^m} \nonumber\\ &\leq C \|f\|_{\mathcal{B}_p^\beta}\, \|\mathbb{E}^s[|\kappa_1-\kappa_3|^m]^{1/m}\|_{L^\infty}^{\lambda_2}\, \|\kappa_1-\kappa_2\|_{L^m}^{\lambda_1}\, (t-u)^{1+H(\beta-\lambda_1-\lambda_2-\frac{d}{p})} \\ &\quad + C\|f\|_{\mathcal{B}_p^\beta}\, \|\kappa_1-\kappa_2 - \kappa_3 +\kappa_4\|_{L^m}^\lambda\, (t-u)^{1+H(\beta-\lambda-\frac{d}{p})}. \end{align*} \end{corollary} \begin{proof} The proof is identical to the one-dimensional version of this result, see Corollary D.4 of \cite{anzeletti2021regularisation}, so we do not repeat it. It relies on Lemma~\ref{lem:1streg} and Lemma~\ref{lem:besov-spaces}$(iii)$. \end{proof} \begin{proposition}\label{prop:bound-E1-SDE} Let $(\psi_t)_{t\in[0,1]}, \, (\phi_t)_{t\in[0,1]}$ be two $\mathbb{R}^d$-valued stochastic processes adapted to $\mathbb{F}$. Let $f \in \mathcal{C}^\infty_{b}(\mathbb{R}^d,\mathbb{R}^d) \cap \mathcal{B}_p^\gamma$ and $m \in [2,\infty)$ such that $m \leq p$. Assume that $\gamma-\frac{d}{p}>1-\frac{1}{2H}$ and let $\tau \in (0,1)$ such that \begin{align}\label{eq:tau-cond} \left( \tau\wedge \frac{1}{2} \right)+ H \left( \gamma-1-\frac{d}{p} \right) > 0 . 
\end{align} There exists a constant $C := C(m,p,\gamma,d) >0$ such that for any $0 \leq S < T \leq 1$ and $(s, t)\in \Delta_{S,T}$, \begin{equation}\label{eq:ssl-o-on-2-SDE} \begin{split} & \Big\| \int_s^t f(\psi_r+ B_r) - f(\phi_r+ B_r) \, d r \Big\|_{L^{m}} \\ & \quad \leq C\, \| f \|_{\mathcal{B}^{\gamma}_{p}} \Big( 1 + [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \Big) \left( [\psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} + \|\psi_S-\phi_S\|_{L^m} \right)(t-s)^{1+H(\gamma-1-\frac{d}{p})} . \end{split} \end{equation} \end{proposition} \begin{proof} Let $0\leq S<T\leq1$. For $(s,t)\in \Delta_{S,T}$, let \begin{align} \label{eq:Prop51A} A_{s,t} = \int_{s}^{t} f(\psi_{s} + B_r) - f(\phi_{s}+ B_r) \, dr ~~\mbox{and}~~ \mathcal{A}_{t} = \int_S^{t} f(\psi_r + B_r) - f(\phi_r+ B_r) \, dr . \end{align} Assume without any loss of generality that $[\psi]_{\mathcal{C}^{1/2+H}_{[S,T]} L^{m,\infty}}$ and $[\psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}}$ are finite, otherwise the result is trivial. Let $\varepsilon \in (0,\gamma-(1-1/(2H)))$. In the following, we check the conditions in order to apply Lemma~\ref{lem:SSL} (with $q=m$). 
To show that \eqref{sts1} and \eqref{sts2} hold true with $\varepsilon_1= \tau \wedge \frac{1}{2} + H(\gamma - 1-d/p) {>0}$, $\alpha_1=0$ and $\varepsilon_2=1/2+H(\gamma-1-d/p)+\varepsilon/2>0$, $\alpha_2=0$, we prove that there exists a constant $C>0$ independent of $s,t,S$ and $T$ such that for $u = (s+t)/2$, \begin{enumerate}[label=(\roman*)] \item \label{item51(1)} $\|\mathbb{E}^{s} [\delta A_{{s},u,{t}}]\|_{L^m}\leq C\, \|f \|_{\mathcal{B}_p^\gamma} ( [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]} L^{m,\infty}} + 1 ) ( [ \psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} +\| \psi_{S}-\phi_{S} \|_{L^{m}}) (t-s)^{1+\varepsilon_1} $; \item \label{item51(2)} $\| \delta A_{{s},u,{t}}\|_{L^m} \leq C\, \|f\|_{\mathcal{B}_p^\gamma}\,\Big( [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} +1 \Big) ( [ \psi-\phi ]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} + \|\psi_S-\phi_S\|_{L^m} ) \, (t-u)^{\frac{1}{2}+\varepsilon_2} $; \item \label{item51(3)} If \ref{item51(1)} and \ref{item51(2)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=1}^{N_k-1} A_{t^k_i,t^k_{i+1}}$ along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given in \eqref{eq:Prop51A}. \end{enumerate} Assume for now that \ref{item51(1)}, \ref{item51(2)} and \ref{item51(3)} hold. 
Applying Lemma~\ref{lem:SSL} and recalling \eqref{eq:bounds-nu}, we obtain that \begin{equation}\label{eq:ssl-goal} \begin{split} \Big\| \int_{s}^{t} & f(B_r+\psi_r) - f(B_r+\phi_r) \, dr \Big\|_{L^m} \\ &\leq \| A_{{s},{t}}\|_{L^m}+ C\, \|f\|_{\mathcal{B}_p^\gamma} ([\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} +1) ([\psi-\phi ]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} + \|\psi_S-\phi_S\|_{L^m} ) (t-s)^{1+H(\gamma-1-\frac{d}{p})+ \tau \wedge \frac{1}{2}} \\ &\quad +C\, \|f \|_{\mathcal{B}_p^\gamma} ( [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} + 1 ) ([ \psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} +\| \psi_{S}-\phi_{S} \|_{L^{m}}) (t-s)^{1 + H(\gamma-1-\frac{d}{p})+\frac{\varepsilon}{2}} . \end{split} \end{equation} To bound $\| A_{{s},{t}}\|_{L^m}$, we apply Lemma~\ref{lem:1streg} with $q=m$ and $\beta=\gamma-1$, and for $\Xi = (\psi_{s},\phi_{s})$. As $f$ is smooth and bounded, the first assumption of Lemma~\ref{lem:1streg} is verified. By Lemma~\ref{lem:besov-spaces}$(i)$, $ \|f(\cdot + \psi_{s})-f(\cdot + \phi_{s})\|_{\mathcal{B}^{\gamma-1}_{p}} \leq 2\| f\|_{\mathcal{B}^{\gamma-1}_{p}}$, hence the second assumption of Lemma~\ref{lem:1streg} is verified. It follows by Lemma~\ref{lem:1streg} that \begin{align}\label{eq:Ast-} \| A_{s,t}\|_{L^m} &\leq C\, \| \|f(\psi_s+\cdot)-f(\phi_s+\cdot) \|_{\mathcal{B}_p^{\gamma-1}} \|_{L^m} \, (t-s)^{1+H(\gamma-1-\frac{d}{p})}\ \nonumber\\ &\leq C\, \|f\|_{\mathcal{B}^{\gamma}_p} \, \| \psi_{s}-\phi_{s} \|_{L^{m}} \, (t-s)^{1+H(\gamma-1-\frac{d}{p})}\nonumber\\ &\leq C\, \|f\|_{\mathcal{B}_p^\gamma} ( [ \psi-\phi ]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} + \|\psi_S-\phi_S\|_{L^m} ) \, (t-s)^{1+H(\gamma-1-\frac{d}{p})}. \end{align} Injecting the previous bound in \eqref{eq:ssl-goal}, we get \eqref{eq:ssl-o-on-2-SDE}. We now check that the conditions \ref{item51(1)}, \ref{item51(2)} and \ref{item51(3)} actually hold. 
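Before doing so, let us record the elementary absorption of exponents used to pass from \eqref{eq:ssl-goal} and \eqref{eq:Ast-} to \eqref{eq:ssl-o-on-2-SDE}: since $0\leq t-s\leq 1$ and $\tau\wedge\frac{1}{2},\,\frac{\varepsilon}{2}\geq 0$, we simply have
\begin{align*}
(t-s)^{1+H(\gamma-1-\frac{d}{p})+\tau\wedge\frac{1}{2}} \vee (t-s)^{1+H(\gamma-1-\frac{d}{p})+\frac{\varepsilon}{2}} \leq (t-s)^{1+H(\gamma-1-\frac{d}{p})} ,
\end{align*}
so all three contributions are controlled by the single power $(t-s)^{1+H(\gamma-1-\frac{d}{p})}$.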
\smallskip Proof of \ref{item51(1)}: For $u \in [s,t]$, we have $\delta A_{s,u,t} = \int_u^{t} f(\psi_{s} +B_r) - f(\phi_{s}+ B_r) - f(\psi_u+ B_r) +f(\phi_u + B_r) \, d r$. By the tower property of conditional expectation and Fubini's theorem, we have \begin{align}\label{eq:deltaAst-decomp-SDE} \mathbb{E}^{s} \delta A_{s,u,t} & = \int_u^{t} \mathbb{E}^{s}\, \mathbb{E}^u \Big[ f(\psi_{s} +B_r) - f(\phi_{s}+ B_r) - f(\psi_u + B_r) + f(\phi_u+ B_r) \Big] \, dr \nonumber \\ & =: \int_u^{t} \mathbb{E}^{s}\, \mathbb{E}^u [ F(B_r,s,u) + \tilde{F}(B_r,s,u) ] \, d r , \end{align} where \begin{align*} F(\cdot,s,u) & = f(\psi_{s} + \cdot)-f(\phi_{s} + \cdot) - f(\psi_u+\cdot) + f(\psi_u + \phi_{s} - \psi_{s} + \cdot), \\ \tilde{F}(\cdot,s,u)& = f(\phi_u+\cdot) - f(\psi_u+ \phi_{s} - \psi_{s} + \cdot). \end{align*} By Lemma \ref{lem:reg-B}$(ii)$, we have that \begin{align*} | \mathbb{E}^u F(B_r,s,u) | \leq \| F(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-2}} \, (r-u)^{H(\gamma-2-\frac{d}{p})} ,\\ | \mathbb{E}^u \tilde{F}(B_r,s,u) | \leq \|\tilde{F}(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-1}} \, (r-u)^{H(\gamma-1-\frac{d}{p})} . \end{align*} Moreover, by Lemma \ref{lem:besov-spaces}$(iii)$ and Jensen's inequality, it follows that \begin{align*} \mathbb{E}^s \| F(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-2}} &\leq \|f \|_{\mathcal{B}_p^\gamma}\, | \psi_{s}-\phi_{s} |\, \mathbb{E}^s| \psi_{s}-\psi_u |\\ &\leq \|f \|_{\mathcal{B}_p^\gamma}\, | \psi_{s}-\phi_{s} |\, \left(\mathbb{E}^s| \psi_{s}-\psi_u |^m\right)^{\frac{1}{m}}\\ &\leq \|f \|_{\mathcal{B}_p^\gamma}\, | \psi_{s}-\phi_{s} |\, [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} (u-s)^{\frac{1}{2}+H} . \end{align*} In addition, \begin{align*} \| \| \tilde{F}(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-1}} \|_{L^{m}} & \leq \|f \|_{\mathcal{B}_p^\gamma} \| \psi_{s}-\psi_u -\phi_{s} + \phi_u \|_{L^{m}} \\ & \leq \|f \|_{\mathcal{B}_p^\gamma} \, [ \psi-\phi ]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}}\, (u-s)^{\tau} .
\end{align*} Plugging the previous bounds in \eqref{eq:deltaAst-decomp-SDE} and using $\| \psi_{s}-\phi_{s} \|_{L^m} \leq \| \psi_{S}-\phi_{S} \|_{L^m} + (T-S)^\tau [ \psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}}$, we obtain \begin{align*} \| \mathbb{E}^{s} \delta A_{s,u,t} \|_{L^{m}} & \leq C\, \|f \|_{\mathcal{B}_p^\gamma} \Big( [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} + 1 \Big) ( [ \psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} +\| \psi_{S}-\phi_{S} \|_{L^{m}} )\\ &\quad \times \left( (t-s)^{1 + \tau + H(\gamma- 1-\frac{d}{p})} + (t-s)^{1 + \frac{1}{2} + H + H(\gamma - 2-\frac{d}{p})} \right) \\ & \leq C\, \|f \|_{\mathcal{B}_p^\gamma} \, \Big( [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} + 1 \Big) ([ \psi-\phi]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} +\| \psi_{S}-\phi_{S} \|_{L^{m}} )(t-s)^{1 + \tau \wedge \frac{1}{2} + H(\gamma - 1-\frac{d}{p})} . \end{align*} \smallskip Proof of \ref{item51(2)}: Apply Corollary~\ref{cor:4inc} with $\beta=\gamma$, $\lambda=1$, $\lambda_{1}=1$, $\lambda_{2}=\varepsilon$, $\kappa_{1}=\psi_{s}$, $\kappa_{2}=\phi_{s}$, $\kappa_{3}=\psi_{u}$ and $\kappa_{4}=\phi_{u}$. This yields \begin{align*} \| \delta A_{s,u,t} \|_{L^m} &\leq C \|f\|_{\mathcal{B}_p^\gamma}\, \|\mathbb{E}^s[|\psi_{s}-\psi_{u}|^m]^{1/m}\|_{L^\infty}^{\varepsilon}\, \|\psi_{s}-\phi_{s}\|_{L^m}\, (t-u)^{1+H(\gamma-1-\varepsilon-d/p)} \\ &\quad + C\|f\|_{\mathcal{B}_p^\gamma}\, \|\psi_{s}-\phi_{s} - \psi_{u} +\phi_{u}\|_{L^m}\, (t-u)^{1+H(\gamma-1-d/p)}. 
\end{align*} Hence we get from \eqref{eq:defbracket} that \begin{align*} \| \delta A_{s,u,t} \|_{L^m} &\leq C \|f\|_{\mathcal{B}_p^\gamma}\, [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^{m,\infty}}^{\varepsilon}\, \|\psi_{s}-\phi_{s}\|_{L^m}\, (t-u)^{1+H(\gamma-1-\frac{d}{p})+\frac{\varepsilon}{2}} \\ &\quad + C\|f\|_{\mathcal{B}_p^\gamma}\, [\psi-\phi]_{\mathcal{C}^\tau_{[S,T]}L^m}\, (t-u)^{1+H(\gamma-1-\frac{d}{p})+\tau}, \end{align*} and use $[\psi]_{\mathcal{C}^{1/2+H}_{[S,T]} L^{m,\infty}}^\varepsilon \leq [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]} L^{m,\infty}}+1$ and $\|\psi_{s}-\phi_{s}\|_{L^m} \leq \|\psi_{S}-\phi_{S}\|_{L^m} + [\psi-\phi]_{\mathcal{C}^\tau_{[S,T]}L^m}$ to prove \ref{item51(2)}. \smallskip Proof of \ref{item51(3)}: Finally, for a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size $|\Pi_{k}|$ converging to zero, we have \begin{align*} \left\| \mathcal{A}_{t} - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \right\|_{L^m} & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \left\| f(\psi_r +B_r) - f(\phi_r + B_r) - f(\psi_{t_i^k}+B_r) + f(\phi_{t_i^k}+B_r) \right\|_{L^m} \, dr\\ & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \left\| f(\psi_r +B_r)- f(\psi_{t_i^k}+B_r) \right\|_{L^m} + \left\| f(\phi_r + B_r) - f(\phi_{t_i^k}+B_r) \right\|_{L^m} dr\\ &\leq \|f\|_{\mathcal{C}^1} \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \left\| \psi_r - \psi_{t_i^k}\right\|_{L^m} + \left\|\phi_r - \phi_{t_i^k}\right\|_{L^m} dr. 
\end{align*} Now use that $\| \psi_r - \psi_{t_i^k}\|_{L^m} \leq [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m} |\Pi_{k}|^{1/2+H} \leq [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} |\Pi_{k}|^{1/2+H}$ and for $\sigma={\tau\wedge (1/2+H)}$, $\| \phi_r - \phi_{t_i^k}\|_{L^m} \leq C ([\phi-\psi]_{\mathcal{C}^{\tau}_{[S,T]}L^m} + [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} ) |\Pi_{k}|^{\sigma}$ to get \begin{align*} \left\| \mathcal{A}_{t} - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \right\|_{L^m} \leq C\|f\|_{\mathcal{C}^1} (T-S) ([\phi-\psi]_{\mathcal{C}^{\tau}_{[S,T]}L^m} + [\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}} ) |\Pi_{k}|^{\sigma} \underset{k\to \infty}{\longrightarrow} 0. \end{align*} \end{proof} We now apply Proposition~\ref{prop:bound-E1-SDE} in the sub-critical case to obtain the following bound on $E^{1,h,n}$. This bound is used in Section~\ref{sec:overview-SDE}. \begin{corollary}\label{cor:bound-E1-SDE} Recall that the process $K^{h,n}$ was defined in \eqref{eq:def-Khn} and let $X_0$ be an $\mathcal{F}_0$-measurable random variable. Let $m \in [2,\infty)$ such that $m \leq p$ and assume that $\gamma-d/p>1-1/(2H)$. There exists a constant $C>0$ such that for any $0 \leq S < T \leq 1$, any $(s, t)\in \Delta_{S,T}$, any $h\in (0,1)$ and any $n\in \mathbb{N}$, \begin{align*} & \Big\| \int_s^t b^n(X_0 + K_r + B_r) - b^n(X_0 + K^{h,n}_r + B_r) \, dr \Big\|_{L^{m}} \\ &\quad \leq C \| b \|_{\mathcal{B}_p^{\gamma}} \Big( 1 + [X-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \Big) \Big( [K-K^{h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} + \| K_S - K^{h,n}_S \|_{L^m} \Big)(t-s)^{1+H(\gamma-1-\frac{d}{p})} . \end{align*} \end{corollary} \begin{proof} Notice that $\tau = 1/2$ satisfies \eqref{eq:tau-cond}. 
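Indeed, in the sub-critical regime $\gamma - \frac{d}{p} > 1 - \frac{1}{2H}$, the exponent $\tau \wedge \frac{1}{2} + H(\gamma-1-\frac{d}{p})$ from Proposition~\ref{prop:bound-E1-SDE} is positive for $\tau=1/2$:
\begin{align*}
\frac{1}{2} + H\Big(\gamma-1-\frac{d}{p}\Big) > \frac{1}{2} + H\Big(1-\frac{1}{2H}-1\Big) = \frac{1}{2} - \frac{1}{2} = 0 .
\end{align*}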
Hence apply Proposition \ref{prop:bound-E1-SDE} with $\tau=1/2$, $f=b^n$, $\psi = X_0+ K$ and $\phi = X_0+K^{h,n}$ and recall from \eqref{eq:conv-in-gamma-} that $\| b^n \|_{\mathcal{B}_p^\gamma} \leq \| b \|_{\mathcal{B}_p^\gamma}$ to get the result. \end{proof} The following proposition provides a result similar to Corollary~\ref{cor:bound-E1-SDE} but in the critical case. \begin{proposition}\label{prop:bound-E1-SDE-critic} Let the assumptions of Theorem~\ref{thm:main-SDE}$(c)$ hold. In particular, recall that $X_0$ is an $\mathcal{F}_0$-measurable random variable, that $\gamma-d/p=1-1/(2H)$ and $\gamma > 1-1/(2H)$, $\zeta\in (0,1/2)$ and assume further that $2\leq m \leq p$. Recall also that $K^n$ and $K^{h,n}$ were defined in \eqref{eq:def-Khn}, and $\epsilon(h,n)$ was defined in \eqref{eq:defepsilonhn}. There exist constants $ \mathbf{M} >0$ and $\ell_{0}>0$ such that for any $0 \leq S < T \leq 1$ which satisfy $T-S\leq \ell_{0}$, any $(s, t)\in \Delta_{S,T}$, any $h\in (0,1)$ and any $n\in \mathbb{N}$, \begin{align*}% \Big\| &\int_s^t b^n( X_0+K_r+ B_r) - b^n(X_0+K^{h,n}_r+ B_r) \, dr \Big\|_{L^{m}} \\ &\leq \mathbf{M} \, \bigg(1+ \Big|\log \frac{T^H \big(1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}\big)}{ \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)}\Big| \bigg) \, \Big( \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (t-s) \nonumber\\ &\quad+ \mathbf{M} \, \Big(\|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + [K- K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \Big)\, (t-s)^{\frac{1}{2}} . \end{align*} \end{proposition} \begin{remark} The constant $\mathbf{M}$ is important in the proof of Theorem \ref{thm:main-SDE}$(c)$ as it appears in the order of convergence. \end{remark} \begin{proof} Let $0\leq S<T\leq1$. 
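Throughout this proof, we will repeatedly use the exponent identities implied by the critical relation $\gamma-\frac{d}{p}=1-\frac{1}{2H}$:
\begin{align*}
H\Big(\gamma-1-\frac{d}{p}\Big) = H\Big(-\frac{1}{2H}\Big) = -\frac{1}{2} \qquad \text{and} \qquad H\Big(\gamma-2-\frac{d}{p}\Big) = -\frac{1}{2}-H .
\end{align*}
In particular, $1+H(\gamma-1-\frac{d}{p})=\frac{1}{2}$, which explains the powers $\frac{1}{2}$ of the time increments appearing below.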
For $(s,t)\in \Delta_{S,T}$, let $A_{s,t}$ and $\mathcal{A}_{t}$ be defined by \begin{equation} \label{eq:Prop51A-critic} \begin{split} &A_{s,t} = \int_{s}^{t} b^n(X_0+K_s+ B_r) - b^n(X_0+K^{h,n}_s+ B_r) \, dr ,\\ &\mathcal{A}_{t} = \int_S^{t} b^n( X_0+K_r+ B_r) - b^n(X_0+K^{h,n}_r+ B_r) \, dr , \end{split} \end{equation} and let \begin{align} \label{eq:R} R_{s,t} = \mathcal{A}_t-\mathcal{A}_s-A_{s,t} . \end{align} In this proof, we write $\|K- K^{h,n} \|_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}}$ for $\| K_{S}-K^{h,n}_{S} \|_{L^{m}} + [K- K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}}$. Let $$ 0 < \varepsilon < \left( \gamma-1+\frac{1}{2H} \right) \wedge (1-2 \zeta) .$$ Set $\tau=1/2+\varepsilon/2$. In the following, we will check the conditions \eqref{sts1}, \eqref{sts2} and \eqref{sts4}, which allow us to apply the stochastic sewing lemma with critical exponent \cite[Theorem 4.5]{athreya2020well}. To show that \eqref{sts1}, \eqref{sts2} and \eqref{sts4} hold true with $\varepsilon_{1} =H$, $\alpha_1=0$, $\varepsilon_2=1/2+H(\gamma-1-d/p)+\varepsilon/2 = \varepsilon/2$, $\alpha_2=0$ and $\varepsilon_4 = \varepsilon/2$, we prove that there exists a constant $C>0$ independent of $s,t,S$ and $T$ such that for $u = (s+t)/2$, \begin{enumerate}[label=(\roman*)] \item \label{item53(1)} $\|\mathbb{E}^{s} \delta A_{{s},u,{t}} \|_{L^m} \leq C\, ( 1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m} )\, (t-s)^{1+H}$; \myitem{(i')}\label{item53(1')} $\|\mathbb{E}^{s} \delta A_{{s},u,{t}}\|_{L^m} \leq C \, [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} \, (t-s)^{\frac{1}{2}+\tau} +C \Big( \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) {(t-s)};$ \item \label{item53(2)} $\| \delta A_{{s},u,{t}}\|_{L^m} \leq C\, \|K- K^{h,n} \|_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \, (t-s)^{\frac{1}{2}+\varepsilon_2} $; \item \label{item53(3)} If \ref{item53(1)} and \ref{item53(2)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=1}^{N_k-1}
A_{t^k_i,t^k_{i+1}}$ along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given in \eqref{eq:Prop51A-critic}. \end{enumerate} Then from \cite[Theorem 4.5]{athreya2020well}, we get \begin{align*}% \| R_{s,t} \|_{L^m} &\leq C\, \bigg(1+ \Big|\log \frac{T^H \big(1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}\big)}{ \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)}\Big| \bigg) \, \Big( \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (t-s) \nonumber\\ &\quad+ C\, \|K- K^{h,n} \|_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} +C \, [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} \, (t-s)^{\frac{1}{2}+\tau} . \end{align*} Now recalling that $\tau=1/2+\varepsilon/2$, we can divide both sides by $(t-s)^\tau$ and take the supremum over $(s,t) \in \Delta_{S,T}$ to get \begin{align*}% [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} &\leq C\, \bigg(1+ \Big|\log \frac{T^H \big(1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}\big)}{ \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)}\Big| \bigg) \, \Big( \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) \, (T-S)^{1-\tau} \nonumber\\ &\quad+ C\, \|K- K^{h,n} \|_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}} \, +C \, [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} \, (T-S)^{\frac{1}{2}} . \end{align*} For $S< T$ such that $T-S\leq (2C)^{-2}=:\ell_{0}$, we get that $C \, [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} \, (T-S)^{\frac{1}{2}} \leq (1/2) [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}}$. 
We then subtract this quantity on both sides to get \begin{align*} [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} &\leq 2 C\, \bigg(1+ \Big|\log \frac{T^H \big(1+ [K^{h,n}]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}\big)}{ \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} +\epsilon(h,n)}\Big| \bigg) \, \Big( \|K- K^{h,n} \|_{L^\infty_{[S,T]} L^{m}} + \epsilon(h,n)\Big) (T-S)^{1-\tau}\\ &\quad + 2 C\, \|K- K^{h,n} \|_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]} L^{m}}. \end{align*} We conclude using \eqref{eq:AProp53} and \begin{align*} \| \mathcal{A}_t-\mathcal{A}_s \|_{L^m} & \leq \| R_{s,t} \|_{L^m} + \| A_{s,t} \|_{L^m} . \end{align*} We now check that the conditions \ref{item53(1)}, \ref{item53(1')}, \ref{item53(2)} and \ref{item53(3)} actually hold. \smallskip Proof of \ref{item53(1)}: For $u \in [s,t]$, by the tower property of conditional expectation and Fubini's theorem, we have \begin{align}\label{eq:deltaAst-decomp-SDE-critic} \mathbb{E}^{s} \delta A_{s,u,t} & = \int_u^{t} \mathbb{E}^{s}\, \mathbb{E}^u \Big[ b^n(X_0+K_{s} +B_r) - b^n(X_0+K^{h,n}_{s}+ B_r) - b^n(X_0+K_u + B_r) + b^n(X_0+K^{h,n}_u+ B_r) \Big] \, dr \nonumber \\ & =: \int_u^{t} \mathbb{E}^{s}\, \mathbb{E}^u [ F(B_r,s,u) + \tilde{F}(B_r,s,u) ] \, dr , \end{align} where \begin{align*} F(\cdot,s,u) & =b^n(X_0+K_{s} +\cdot) - b^n(X_0+K^{h,n}_{s}+ \cdot) - b^n(X_0+K_u + \cdot) \\ & \quad + b^n(X_0+K^{h,n}_s + K_{u} - K_{s} + \cdot) , \\ \tilde{F}(\cdot,s,u)& = b^n(X_0+K^{h,n}_u+\cdot) - b^n(X_0+K^{h,n}_s + K_{u} - K_{s} + \cdot) . \end{align*} By Lemma \ref{lem:reg-B}$(ii)$, we have for $\lambda\in [0,1]$ that \begin{align*} |\mathbb{E}^u F(B_r,s,u) | \leq C\, \| F(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-1-\lambda}} \, (r-u)^{H(\gamma-1-\lambda-\frac{d}{p})} . \end{align*} Moreover, by Lemma \ref{lem:besov-spaces}$(iii)$, it follows that \begin{align*} \| F(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-1-\lambda}} & \leq C\, \|b^n \|_{\mathcal{B}_p^\gamma} | K_{s}-K_u |\, | K_{s}-K^{h,n}_{s} |^\lambda.
\end{align*} Hence \begin{align*} | \mathbb{E}^s \mathbb{E}^u F(B_r,s,u) | & \leq C\, \|b^n \|_{\mathcal{B}_p^\gamma} \, | K_{s}-K^{h,n}_{s} |^\lambda \, \mathbb{E}^s | K_{s}-K_u | \, (r-u)^{H(\gamma-1-\lambda-\frac{d}{p})} \\ & \leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, | K_{s}-K^{h,n}_{s} |^\lambda\, [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, (u-s)^{\frac{1}{2}+H}\, (r-u)^{H(\gamma-1-\lambda-\frac{d}{p})} \end{align*} and from Jensen's inequality, \begin{align}\label{eq:boundcondF} \| \mathbb{E}^s \mathbb{E}^u F(B_r,s,u) \|_{L^m} & \leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, \| K_{s}-K^{h,n}_{s} \|_{L^m}^\lambda\, [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, (u-s)^{\frac{1}{2}+H}\, (r-u)^{H(\gamma-1-\lambda-\frac{d}{p})}. \end{align} As for $\tilde{F}$, we have similarly that \begin{align}\label{eq:boundFtilde} \| \mathbb{E}^s \mathbb{E}^u \tilde{F}(B_r,s,u) \|_{L^m} &\leq C\, \| \mathbb{E}^s\| \tilde{F}(\cdot,s,u) \|_{\mathcal{B}_p^{\gamma-1}} \|_{L^m} \, (r-u)^{H(\gamma-1-\frac{d}{p})} \nonumber\\ &\leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, \| K^{h,n}_{s}-K^{h,n}_u -K_{s} + K_u \|_{L^m}\, (r-u)^{H(\gamma-1-\frac{d}{p})}\nonumber\\ &\leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, \left( [K^{h,n}]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} + [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} \right) (u-s)^{\frac{1}{2}+H}\, (r-u)^{H(\gamma-1-\frac{d}{p})} . 
\end{align} Choosing $\lambda=0$ and noticing that $H(\gamma-1-d/p) =-1/2$, we plug \eqref{eq:boundcondF} and \eqref{eq:boundFtilde} in \eqref{eq:deltaAst-decomp-SDE-critic} to obtain \begin{align*} \|\mathbb{E}^s \delta A_{s,u,t} \|_{L^m} &\leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, (u-s)^{\frac{1}{2}+H}\, (t-u)^{\frac{1}{2}} \\ &\quad + C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, \left( [K^{h,n}]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} + [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} \right) (u-s)^{\frac{1}{2}+H}\, (t-u)^{\frac{1}{2}}\\ &\leq C\, \|b^n\|_{\mathcal{B}_p^\gamma}\, \left( [K^{h,n}]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} + [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m}\right) (t-s)^{1+H} . \end{align*} Finally, use the property $K=X-B \in \mathcal{C}^{1/2+H}_{[0,1]} L^{m, \infty}$ from Theorem~\ref{th:WP} and \eqref{eq:conv-in-gamma-} to deduce~\ref{item53(1)}. \smallskip Proof of \ref{item53(1')}: We rely again on the decomposition \eqref{eq:deltaAst-decomp-SDE-critic}. We now use \eqref{eq:boundcondF} with $\lambda=1$ and \eqref{eq:boundFtilde}. Since $H(\gamma-2-d/p) = -H-1/2>-1$, we obtain \begin{equation}\label{eq:condExpAsut} \begin{split} \|\mathbb{E}^{s} \delta A_{s,u,t} \|_{L^m} &\leq C\, \|b^n \|_{\mathcal{B}_p^\gamma} \, [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \| K_{s}-K^{h,n}_{s} \|_{L^{m}}\, (t-s)\\ &\quad + C\, \|b^n \|_{\mathcal{B}_p^\gamma} \, \| K^{h,n}_{u}-K^{h,n}_s - K_{u}+ K_{s} \|_{L^m}\, (t-s)^{\frac{1}{2}}. \end{split} \end{equation} Here we do not expect $K^{h,n}-K$ to be $1/2$-H\"older continuous uniformly in $h$ and $n$, but only $(1/2-\zeta)$-H\"older, so we need to decompose $\| K^{h,n}_{u}-K^{h,n}_s - K_{u}+ K_{s} \|_{L^m}$ into several terms.
First, we introduce the pivot term $K^n_{u}-K^n_{s}$ to get \begin{align*} \| K^{h,n}_{u}-K^{h,n}_s - K_{u}+ K_{s} \|_{L^m} \leq [K-K^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]}L^m} (u-s)^{\frac{1}{2}} + \| K^{h,n}_{u}-K^{h,n}_s - K^n_{u}+ K^n_{s} \|_{L^m}. \end{align*} Now observe that from \eqref{eq:def-Khn}, \eqref{eq:Prop51A-critic} and \eqref{eq:R}, \begin{align*} R_{s,u} = K^n_{u}-K^n_{s} - A_{s,u} - \int_{s}^u b^n(X_0+K^{h,n}_{r} + B_{r})\, dr, \end{align*} so that \begin{align*} K^{h,n}_{u}-K^{h,n}_s - K^n_{u}+ K^n_{s} = \int_{s}^u b^n(X_0+K^{h,n}_{r_{h}} + B_{r}) - b^n(X_0+K^{h,n}_{r} + B_{r})\, dr -A_{s,u} - R_{s,u} . \end{align*} Hence recalling the definition of $E^{2,h,n}$ from \eqref{eq:defE}, we get \begin{align*} \| K^{h,n}_{u}-K^{h,n}_s - K^n_{u}+ K^n_{s} \|_{L^m} \leq \|E^{2,h,n}_{s,u}\|_{L^m} + \|A_{s,u}\|_{L^m} + \|R_{s,u}\|_{L^m}. \end{align*} As in \eqref{eq:Ast-}, we have \begin{align}\label{eq:AProp53} \|A_{s,u}\|_{L^m} &\leq C \|b^n\|_{\mathcal{B}^\gamma_{p}}\, \|K_{s}-K^{h,n}_{s}\|_{L^m} \, (u-s)^{1+H(\gamma-1-\frac{d}{p})} \nonumber\\ &= C \|b^n\|_{\mathcal{B}^\gamma_{p}}\, \|K_{s}-K^{h,n}_{s}\|_{L^m} \, (u-s)^{\frac{1}{2}}. \end{align} Thus we get \begin{align*} \| K^{h,n}_{u}-K^{h,n}_s - K^n_{u}+ K^n_{s} \|_{L^m} &\leq \left( C \|b^n\|_{\mathcal{B}^\gamma_{p}}\, \|K_{s}-K^{h,n}_{s}\|_{L^m} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} \right) (u-s)^{\frac{1}{2}} \\ &\quad + [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}}\, (u-s)^\tau . \end{align*} Plugging the previous inequality in \eqref{eq:condExpAsut}, we obtain \begin{align*} \|\mathbb{E}^{s} \delta A_{s,u,t} \|_{L^m} &\leq C\, \|b^n\|_{\mathcal{B}^\gamma_{p}} \Big( (\|b^n\|_{\mathcal{B}^\gamma_{p}}+ [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}})\, \|K_{s}-K^{h,n}_{s}\|_{L^m} + [E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} \Big) (t-s) \\ &\quad+ \|b^n \|_{\mathcal{B}_p^\gamma} \, [R]_{\mathcal{C}^{\tau}_{[S,T]} L^{m}} \, (t-s)^{\frac{1}{2}+\tau}.
\end{align*} Finally, use that $[K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}}<\infty$, $[E^{2,h,n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} \leq \epsilon(h,n)$ and \eqref{eq:conv-in-gamma-} to deduce~\ref{item53(1')}. \smallskip Proof of \ref{item53(2)}: We apply Corollary~\ref{cor:4inc} with $\beta=\gamma$, $\lambda=1$, $\lambda_{1}=1$, $\lambda_{2}=\varepsilon$, $\kappa_{1}=K_{s}$, $\kappa_{2}=K^{h,n}_{s}$, $\kappa_{3}=K_{u}$ and $\kappa_{4}=K^{h,n}_{u}$: this yields \begin{align*} \| \delta A_{s,u,t} \|_{L^m} &\leq C \|b^n\|_{\mathcal{B}_p^\gamma}\, \|\mathbb{E}^s[|K_{s}-K_{u}|^m]^{1/m}\|_{L^\infty}^{\varepsilon}\, \|K_{s}-K^{h,n}_{s}\|_{L^m}\, (t-u)^{1+H(\gamma-1-\varepsilon-d/p)} \\ &\quad + C\|b^n\|_{\mathcal{B}_p^\gamma}\, \|K_{s}-K^{h,n}_{s} - K_{u} +K^{h,n}_{u}\|_{L^m}\, (t-u)^{1+H(\gamma-1-d/p)} \\ & \leq C \|b^n\|_{\mathcal{B}_p^\gamma}\, [K]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^{m,\infty}}^{\varepsilon}\, \|K_{s}-K^{h,n}_{s}\|_{L^m}\, (t-u)^{\frac{1}{2}+\frac{\varepsilon}{2}} \\ &\quad + C\|b^n\|_{\mathcal{B}_p^\gamma}\, [K-K^{h,n}]_{\mathcal{C}^{\frac{1}{2}-\zeta}_{[S,T]}L^m}\, (t-u)^{1-\zeta} . \end{align*} Since $\sup_{n} \|b^n\|_{\mathcal{B}_p^\gamma}$ and $[K]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}}$ are finite and $\varepsilon < 1-2 \zeta$, we have obtained \ref{item53(2)}. \smallskip Proof of \ref{item53(3)}: The proof is identical to point~\ref{item51(3)} of Proposition~\ref{prop:bound-E1-SDE}. \end{proof} \subsection{Sewing bounds for the $d$-dimensional discrete fBm}\label{sec:bound-E2} First, we obtain in Subsection \ref{sec:reg-schema} the $\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}$ regularity of the tamed Euler scheme \eqref{def:EulerSDE} under \eqref{eq:cond-gamma-p-H} and \eqref{eq:assump-bn-bounded}, then we prove an upper bound on $[E^{2,h,n}]_{\mathcal{C}^{1/2}_{[0,1]} L^m}$ in Subsection \ref{sec:reg-E2hn}.
\subsubsection{H\"older regularity of the tamed Euler scheme}\label{sec:reg-schema} \begin{lemma}\label{lem:bound-Khn} Recall that $\gamma$ and $p$ satisfy \eqref{eq:cond-gamma-p-H}. Let $m \in [2, \infty)$, $q \in [m,\infty]$. There exists a constant $C>0$ such that for any $0 \leq S < T \leq 1$, any $\mathbb{R}^d$-valued $\mathcal{F}_S$-measurable random variable $\psi$, any $f \in \mathcal{C}^\infty_{b}(\mathbb{R}^d, \mathbb{R}^d) \cap \mathcal{B}_p^\gamma$, any $h>0$ and any $(s, t)\in \Delta_{S,T}$, we have \begin{align*}% \Big\| \Big( \mathbb{E}^S \Big| \int_s^t f(\psi + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^q} \nonumber \leq C\, \Big( \|f\|_\infty\, h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} . \end{align*} \end{lemma} \begin{proof} We will check the conditions in order to apply Lemma~\ref{lem:SSL}. For $(s,t) \in \Delta_{S,T}$, let \begin{align*} A_{s,t} = \mathbb{E}^s \int_s^t f(\psi + B_{r_h}) \, d r ~~\mbox{and}~~ \mathcal{A}_t = \int_S^t f(\psi+ B_{r_h}) \, d r . \end{align*} Let $u\in [s,t]$ and notice that $\mathbb{E}^s \delta A_{s,u,t}=0$, so \eqref{sts1} holds with $\Gamma_1=0$. We will prove that \eqref{sts2} holds with $\alpha_2=0$ and $$ \Gamma_2 = C \|f\|_\infty h^{\frac{1}{2}-H} + C \| f \|_{\mathcal{B}_p^\gamma} .$$ \paragraph{The case $t-s\leq 2h$.} In this case we have \begin{align}\label{eq:Ast<h-Khn} |A_{s,t}| \leq \|f\|_\infty (t-s) \leq C \|f\|_\infty h^{\frac{1}{2}-H} (t-s)^{\frac{1}{2}+H} . \end{align} \paragraph{The case $t-s>2h$.} Here we split $A_{s,t}$ in two \begin{equation*} A_{s,t} = \mathbb{E}^s \int_s^{s+2h} f(\psi + B_{r_h}) \, d r + \mathbb{E}^s \int_{s+2h}^t f(\psi + B_{r_h}) \, d r . \end{equation*} For the first part, we obtain \begin{equation*} \Big|\mathbb{E}^s \int_s^{s+2h} f(\psi + B_{r_h}) \, d r \Big| \leq 2h\, \|f\|_\infty \leq C\, \|f\|_\infty h^{\frac{1}{2}-H} (t-s)^{\frac{1}{2}+H}. 
\end{equation*} Denote the second part by \begin{align*} J := \int_{s+2h}^t \mathbb{E}^s f(\psi+ B_{r_h}) \, d r . \end{align*} Using Lemma \ref{lem:reg-B}$(ii)$ and Lemma~\ref{lem:besov-spaces}$(i)$, we have \begin{align*} \|J \|_{L^q} & \leq C \int_{s+2h}^t \| f \|_{\mathcal{B}_p^\gamma} (r_{h}-s)^{H(\gamma-\frac{d}{p})} \, d r . \end{align*} Since $2(r_{h}-s) \geq r-s$ (indeed, $r\geq s+2h$ and $r_{h}\geq r-h$ give $r_{h}-s \geq (r-s)-h \geq (r-s)/2$), we obtain \begin{align}\label{eq:Ast>h-Khn} \|J \|_{L^q} & \leq C \int_{s+2h}^t \| f \|_{\mathcal{B}_p^\gamma} (r-s)^{H(\gamma-\frac{d}{p})} \, d r \nonumber\\ & \leq C \| f \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})} \nonumber\\ & \leq C \| f \|_{\mathcal{B}_p^\gamma} (t-s)^{\frac{1}{2}+H} . \end{align} Overall, combining \eqref{eq:Ast<h-Khn} and \eqref{eq:Ast>h-Khn}, we obtain that for all $s \leq t$, \begin{align*} \| A_{s,t}\|_{L^{q}} & \leq C \Big( \|f\|_{\infty} h^{\frac{1}{2}-H} (t-s)^{\frac{1}{2}+H} + \| f \|_{\mathcal{B}_p^\gamma} (t-s)^{\frac{1}{2}+H} \Big) . \end{align*} Thus for any $u\in [s,t]$, \begin{align*} \| \delta A_{s,u,t}\|_{L^{q}} & \leq \| A_{s,t}\|_{L^{q}}+\| A_{s,u}\|_{L^{q}}+\| A_{u,t}\|_{L^{q}}\\ &\leq C \Big( \|f\|_{\infty} h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} . \end{align*} The power in $(t-s)$ is strictly larger than $1/2$, so \eqref{sts2} holds. \paragraph{Convergence in probability.} Finally, for a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size converging to zero, we have \begin{align*} \Big\| \mathcal{A}_t - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \Big\|_{L^1} & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E}\Big| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h}) \Big| \, d r.
\end{align*} Note that if $r_h \leq t_i^k$, then $\mathbb{E}| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h})| = 0$. On the other hand, when $r_h \in (t_i^k,t_{i+1}^k]$ then in view of Lemma~\ref{lem:reg-B}$(iii)$, we have \begin{align*} \mathbb{E}| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h}) | \leq C \| f \|_{\mathcal{C}^1} |\Pi_{k}|^H . \end{align*} It follows that \begin{align*} \Big\| \mathcal{A}_t - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \Big\|_{L^1} & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f \|_{\mathcal{C}^1} |\Pi_{k}|^H \, d r , \end{align*} and therefore $\sum_{i=1}^{N_{k}-1} A_{t_{i}^k, t_{i+1}^k}$ converges in probability to $\mathcal{A}_{t}$ as $k\to +\infty$. Hence we can apply Lemma~\ref{lem:SSL} with $\varepsilon_1>0$ and $\varepsilon_2 = H$ to conclude that \begin{align*} \big\| \big( \mathbb{E}^S | \mathcal{A}_t - \mathcal{A}_s |^m \big)^{\frac{1}{m}} \big\|_{L^q} & \leq \big\| \big(\mathbb{E}^S | \mathcal{A}_t - \mathcal{A}_s- A_{s,t} |^m \big)^{\frac{1}{m}} \big\|_{L^q} + \| A_{s,t} \|_{L^q} \\ & \leq C\, \Big( \|f\|_\infty \, h^{\frac{1}{2}-H} +\| f \|_{\mathcal{B}_p^\gamma} \Big) \, (t-s)^{\frac{1}{2}+H } . \end{align*} \end{proof} \begin{proposition}\label{prop:bound-Khn} Recall that $\gamma$ and $p$ satisfy \eqref{eq:cond-gamma-p-H}. Let $\varepsilon \in (0,\frac{1}{2})$ and $m \in [2, \infty)$.
There exists a constant $C>0$ such that for any $\mathbb{R}^d$-valued $\mathbb{F}$-adapted process $(\psi_{t})_{t\in [0,1]}$, any $f \in \mathcal{C}^\infty_{b}(\mathbb{R}^d, \mathbb{R}^d) \cap \mathcal{B}_p^\gamma$, any $h>0$, any $0 \leq S < T \leq 1$ and any $(s, t)\in \Delta_{S,T}$, we have \begin{align}\label{eq:bound-Khn-general} \Big\| \Big( \mathbb{E}^S \Big| \int_s^t f(\psi_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq C\, \Big( \|f\|_\infty\, h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} \nonumber \\ &~ + C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, \Big( \| f \|_{\mathcal{B}_p^{\gamma}} + \| f \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} . \end{align} \end{proposition} \begin{remark} A direct consequence of this proposition is that for any $(s, t)\in \Delta_{0,1}$, we have \begin{align}\label{eq:bound-general-discrete} \Big\| \Big( \mathbb{E}^s \Big| \int_s^t f(\psi_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq C\, \Big( \|f\|_\infty\, h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} \nonumber \\ &~ + C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[s,t]} L^{m,\infty}} \, \Big( \| f \|_{\mathcal{B}_p^{\gamma}} + \| f \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} . \end{align} \end{remark} \begin{proof} We will check the conditions in order to apply Lemma~\ref{lem:SSL} (with $q=\infty$). Let ${0 \leq S < T \leq 1}$. Assume that $[\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^{m,\infty}}<\infty$, otherwise \eqref{eq:bound-Khn-general} trivially holds. For any $(s,t) \in \Delta_{S,T}$, define \begin{align*} A_{s,t} = \int_s^t f(\psi_s+ B_{r_h}) \, d r ~~\mbox{and}~~ \mathcal{A}_t = \int_S^t f(\psi_r+ B_{r_h}) \, d r .
\end{align*} To show that \eqref{sts1} and \eqref{sts2} hold true with $\varepsilon_1= \varepsilon$, $\varepsilon_2=H >0$ and $\alpha_1=\alpha_2=0$, we prove that there exists a constant $C>0$ independent of $s,t,S$ and $T$ such that for $u = (s+t)/2$, \begin{enumerate}[label=(\roman*)] \item \label{item56(1)} $\|\mathbb{E}^{{s}} [\delta A_{s,u,t}]\|_{L^\infty} \leq C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, \Big(\| f \|_{\mathcal{B}_p^{\gamma}} + \| f \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} $; \item \label{item56(2)} $\big\| \big( \mathbb{E}^S | \delta A_{s,u,t} |^m \big)^{\frac{1}{m}} \big\|_{L^\infty} \leq C\, \Big( \|f\|_{\infty}\, h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) \, ({t}-{s})^{\frac{1}{2}+H}$; \item \label{item56(3)} If \ref{item56(1)} and \ref{item56(2)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=1}^{N_k-1} A_{t^k_i,t^k_{i+1}}$ along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given above. \end{enumerate} Assume for now that \ref{item56(1)}, \ref{item56(2)} and \ref{item56(3)} hold. Applying Lemma~\ref{lem:SSL}, we obtain that \begin{align*} \Big\| \Big( \mathbb{E}^S \Big| \int_s^t f(\psi_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq C\, \Big( \|f\|_{\infty}\, h^{\frac{1}{2}-H} + \| f \|_{\mathcal{B}_p^\gamma} \Big) \, ({t}-{s})^{\frac{1}{2}+H} \\ & ~ + C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, \Big(\| f \|_{\mathcal{B}_p^{\gamma}} + \| f \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} \\ &~+\big\| \big(\mathbb{E}^S |A_{s,t}|^m\big)^{\frac{1}{m}} \big\|_{L^\infty}.
\end{align*} Applying Lemma~\ref{lem:bound-Khn} with $q=\infty$ and $\psi = \psi_{s}$ to the last term of the previous inequality, we get \eqref{eq:bound-Khn-general}. We now check that the conditions \ref{item56(1)}, \ref{item56(2)} and \ref{item56(3)} actually hold. \smallskip Proof of \ref{item56(1)}: We have \begin{align*} \mathbb{E}^s \delta A_{s,u,t} & = \int_u^t \mathbb{E}^s [ f(\psi_s+ B_{r_h})- f(\psi_u+ B_{r_h}) ] \, d r . \end{align*} \paragraph{The case $t-u \leq 2h$.} In this case, using the Lipschitz norm of $f$, we have \begin{align*} | \mathbb{E}^s \delta A_{s,u,t} | & \leq \|f\|_{\mathcal{C}^1} \int_u^t \mathbb{E}^s|\psi_s-\psi_u| \, d r \leq \|f\|_{\mathcal{C}^1} \, (t-u) \left(\mathbb{E}^s|\psi_s-\psi_u|^m\right)^{\frac{1}{m}}. \end{align*} Thus, using the inequality $(t-u) (u-s)^{1/2+H} \leq C\, h^{1/2+H-\varepsilon} (t-u)^{1/2-H+\varepsilon} (u-s)^{1/2+H} \leq C (t-s)^{1+\varepsilon} h^{\frac{1}{2}+H-\varepsilon}$, we obtain \begin{align*} \| \mathbb{E}^s \delta A_{s,u,t} \|_{L^\infty} & \leq C\, \|f\|_{\mathcal{C}^1}\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} (t-s)^{1+\varepsilon} h^{\frac{1}{2}+H-\varepsilon} . \end{align*} \paragraph{The case $t-u > 2h$.} We split the integral between $u$ and $u+2h$ and then between $u+2h$ and $t$ as follows: \begin{align*} \mathbb{E}^s \delta A_{s,u,t} &= \int_u^{u+2h} \mathbb{E}^s [ f(\psi_s+ B_{r_h})- f(\psi_u+ B_{r_h}) ] \, d r + \int_{u+2h}^t \mathbb{E}^s [ f(\psi_s+ B_{r_h})- f(\psi_u+ B_{r_h}) ] \, d r\\ &=: J_1 + J_2 .
\end{align*} For $J_1$, we obtain as in the case $t-u\leq 2h$ that \begin{align}\label{eq:J1Prop56} \| J_{1}\|_{L^\infty} = \|\mathbb{E}^s \delta A_{s,u,u+2h}\|_{L^\infty} & \leq C \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} (u+2h-s)^{1+\varepsilon} h^{\frac{1}{2}+H-\varepsilon} \nonumber\\ & \leq C \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} (t-s)^{1+\varepsilon} h^{\frac{1}{2}+H-\varepsilon} . \end{align} As for $J_2$, the tower property of the conditional expectation yields \begin{align*} J_{2} =\mathbb{E}^s \int_{u+2h}^t \mathbb{E}^u [ f(\psi_s+ B_{r_h})- f(\psi_u+ B_{r_h}) ] \, d r . \end{align*} Now use Lemma~\ref{lem:reg-B}$(ii)$ and Lemma~\ref{lem:besov-spaces}$(ii)$ to obtain \begin{align*} |J_{2}| &\leq C \int_{u+2h}^t \mathbb{E}^s\| f(\psi_s+ \cdot) - f(\psi_u+ \cdot) \|_{\mathcal{B}_p^{\gamma-1}} (r_h-u)^{H(\gamma-\frac{d}{p}-1)}\, dr \\ &\quad \leq C \, \|f \|_{\mathcal{B}_p^{\gamma}}\, \left(\mathbb{E}^s|\psi_{s}-\psi_{u}|^m\right)^{\frac{1}{m}} \int_{u+2h}^t (r_{h}-u)^{H(\gamma-\frac{d}{p}-1)} \, dr \, . \end{align*} Using the fact that $2(r_h-u) \ge (r-u)$, we obtain \begin{align}\label{eq:J2Prop56} \| J_{2} \|_{L^\infty} & \leq C\,[ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \|f \|_{\mathcal{B}_p^{\gamma}} \, (u-s)^{\frac{1}{2}+H} \int_{u+2h}^t (r-u)^{H(\gamma-\frac{d}{p}-1)} \, dr \nonumber\\ &\leq C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \| f \|_{\mathcal{B}_p^{\gamma}} \, (u-s)^{\frac{1}{2}+H}\, (t-u)^{1+H(\gamma-\frac{d}{p}-1)} \nonumber\\ & \leq C\, [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \| f \|_{\mathcal{B}_p^{\gamma}} \, (t-s)^{1+H} . \end{align} In view of the inequalities \eqref{eq:J1Prop56} and \eqref{eq:J2Prop56}, we have proven \ref{item56(1)}.
\medskip Proof of \ref{item56(2)}: We write \begin{equation*} \big\| \big( \mathbb{E}^S | \delta A_{{s},u,{t}} |^m \big)^{\frac{1}{m}} \big\|_{L^\infty} \leq \big\| \big( \mathbb{E}^S | A_{{s},{t}} |^m \big)^{\frac{1}{m}} \big\|_{L^\infty} + \big\| \big( \mathbb{E}^S | A_{{s},u} |^m \big)^{\frac{1}{m}} \big\|_{L^\infty} + \big\| \big( \mathbb{E}^S | A_{u,{t}} |^m \big)^{\frac{1}{m}} \big\|_{L^\infty} . \end{equation*} Applying Lemma~\ref{lem:bound-Khn} with $q=\infty$ for each term in the right-hand side of the previous inequality, respectively for $\psi = \psi_{s}$, $\psi_{s}$ again and $\psi_{u}$, we get \ref{item56(2)}. \medskip Proof of \ref{item56(3)}: Finally, for a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size $|\Pi_{k}|$ converging to zero, we have \begin{align*} \left\| \mathcal{A}_t - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \right\|_{L^1} &\leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E}| f(\psi_r+ B_{r_h}) - f(\psi_{t_i^k}+ B_{r_h}) | \, d r\\ & \leq C \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f \|_{\mathcal{C}^1} \|\psi_r - \psi_{t_i^k} \|_{L^1} \, d r \\ & \leq C \|f \|_{\mathcal{C}^1} \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} [ \psi ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^m} |\Pi_{k} |^{\frac{1}{2}+H} \, d r ~\underset{k\to\infty}{\longrightarrow} 0. \end{align*} \end{proof} \begin{corollary}\label{cor:bound-Khn} Assume \eqref{eq:cond-gamma-p-H}, let $\mathcal{D}$ be a sub-domain of $(0,1) \times \mathbb{N}$ satisfying \eqref{eq:assump-bn-bounded} and let $m \in [2, \infty)$. Recall also that $K^{h,n}$ was defined in \eqref{eq:def-Khn}. Then \begin{align*} \sup_{(h,n) \in \mathcal{D}} [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]} L^{m,\infty}} < \infty . \end{align*} \end{corollary} \begin{proof} Let $(h,n) \in \mathcal{D}$ and $\varepsilon \leq H$.
In view of Equation \eqref{eq:bound-general-discrete}, we have for $f=b^n$ and $\psi=X_0+K^{h,n}$ that there exists a constant $C$ such that for any $h \in (0,1)$, $n \in \mathbb{N}$ and $(s,t) \in \Delta_{0,1}$, \begin{align*} \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n(X_0+K^{h,n}_{r} + B_{r_h}) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq C\, \Big( \|b^n\|_\infty\, h^{\frac{1}{2}-H} + \| b \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} \nonumber \\ &~ + C\, [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[s,t]} L^{m,\infty}} \, \Big( \| b \|_{\mathcal{B}_p^{\gamma}} + \| b^n \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} , \end{align*} where we used that $\|b^n\|_{\mathcal{B}_p^\gamma} \leq \| b \|_{\mathcal{B}_p^\gamma}$. In particular, for $0\leq S<T\leq 1$ and any $(s,t)\in \Delta_{S,T}$, \begin{align*} \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n(X_0+K^{h,n}_{r} + B_{r_h}) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq C\, \Big( \|b^n\|_\infty\, h^{\frac{1}{2}-H} + \| b \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} \nonumber \\ &~ + C\, [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, \Big( \| b \|_{\mathcal{B}_p^{\gamma}} + \| b^n \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon} \Big) \, (t-s)^{1+\varepsilon} . 
\end{align*} Moreover, using that $| K^{h,n}_r-K^{h,n}_{r_h} | \leq \| b^n \|_\infty h$, we have \begin{align*} \Big\| \Big( \mathbb{E}^s \big|K^{h,n}_t-K^{h,n}_s \big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} & \leq \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n(X_0+K^{h,n}_{r_h} + B_{r_h}) -b^n(X_0+K^{h,n}_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} \\ &\quad + \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n(X_0+K^{h,n}_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} \\ & \leq C\, \| b^n\|_{\mathcal{C}^1} \| b^n \|_\infty h\, (t-s) + \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n(X_0+K^{h,n}_{r} + B_{r_h}) \, d r \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} \\ & \leq C\, \Big( \|b^n\|_\infty\, h^{\frac{1}{2}-H} + \| b \|_{\mathcal{B}_p^\gamma} \Big) (t-s)^{\frac{1}{2}+H} \nonumber \\ &~ + C\, \Big( [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} ( \| b \|_{\mathcal{B}_p^{\gamma}} + \| b^n \|_{\mathcal{C}^1} h^{\frac{1}{2}+H-\varepsilon}) +\| b^n\|_{\mathcal{C}^1} \| b^n \|_\infty h \Big) \, (t-s) . \end{align*} Now using \eqref{eq:assump-bn-bounded} with $\eta=\varepsilon$ small enough, we get $\sup_{(h,n) \in \mathcal{D}} \| b^n \|_{\mathcal{C}^1} \|b^n \|_\infty h < \infty$ and \begin{align*} \Big\| \Big( \mathbb{E}^s \big|K^{h,n}_t-K^{h,n}_s \big|^m \Big)^{\frac{1}{m}} \Big\|_{L^{\infty}} \leq C\, (t-s)^{\frac{1}{2}+H} + C\, \Big( [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} + 1 \Big) \, (t-s) . \end{align*} Now divide by $(t-s)^{\frac{1}{2}+H}$ and take the supremum over $[S,T]$ to get that \begin{align*} [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \leq C + C\, [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \, (T-S)^{\frac{1}{2}-H} . \end{align*} Let $\ell = \big( \frac{1}{2C} \big)^{\frac{1}{1/2-H}} $. Then for $T-S \leq \ell$, we deduce \begin{align*} [ K^{h,n} ]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \leq 2C. 
\end{align*} Since $\ell$ does not depend on $h$ nor $n$, we get the H\"older regularity on the whole interval $[0,1]$. \end{proof} \subsubsection{H\"older regularity of $E^{2,h,n}$}\label{sec:reg-E2hn} We start this subsection with general results on the regularisation property of the discrete-time fBm, that eventually lead to a bound on the term $E^{2,h,n}$ in Corollary~\ref{cor:newbound-E2}. \begin{lemma}\label{lem:newbound-E2-v0} Let $\varepsilon \in (0,\frac{1}{2})$ and $m \in [2, \infty)$. There exists a constant $C>0$ such that for any $0 \leq S < T \leq 1$, any $\mathbb{R}^d$-valued $\mathcal{F}_S$-measurable random variable $\psi$, any $f \in \mathcal{C}^0_{b}(\mathbb{R}^d, \mathbb{R}^d) $, any $h>0$ and any $(s, t)\in \Delta_{S,T}$, we have \begin{align*}% \Big\| \int_s^t f(\psi + B_r) - f(\psi+ B_{r_h}) \, d r \Big\|_{L^{m}} \nonumber \leq C\, \|f\|_\infty\, h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} \end{lemma} \begin{proof} We will check the conditions in order to apply Lemma~\ref{lem:SSL}. For $(s,t) \in \Delta_{S,T}$, let \begin{align*} A_{s,t} = \mathbb{E}^s \int_s^t f(\psi + B_r) - f(\psi + B_{r_h}) \, d r ~~\mbox{and}~~ \mathcal{A}_t = \int_S^t f(\psi + B_r) - f(\psi+ B_{r_h}) \, d r . \end{align*} Let $u\in [s,t]$ and notice that $\mathbb{E}^s \delta A_{s,u,t}=0$, so \eqref{sts1} holds with $\Gamma_1=0$. We will prove that \eqref{sts2} holds with $\alpha_2=0$ and $$ \Gamma_2 = C \|f\|_\infty h^{\frac{1}{2}-\varepsilon} .$$ \paragraph{The case $t-s\leq 2h$.} In this case we have \begin{align}\label{eq:Ast<h} |A_{s,t}| \leq \|f\|_\infty (t-s) \leq C \|f\|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\varepsilon} . \end{align} \paragraph{The case $t-s>2h$.} Here we split $A_{s,t}$ in two: \begin{equation*} A_{s,t} = \mathbb{E}^s \int_s^{s+2h} f(\psi + B_r) - f(\psi + B_{r_h}) \, d r + \mathbb{E}^s \int_{s+2h}^t f(\psi + B_r) - f(\psi + B_{r_h}) \, d r . 
\end{equation*} For the first part, we obtain \begin{equation*} \Big|\mathbb{E}^s \int_s^{s+2h} f(\psi + B_r) - f(\psi + B_{r_h}) \, d r \Big| \leq 4h\, \|f\|_\infty \leq C\, \|f\|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\varepsilon}. \end{equation*} Denote the second part by \begin{align*} J := \int_{s+2h}^t \mathbb{E}^s [ f(\psi+ B_{r}) - f(\psi+ B_{r_h}) ] \, d r . \end{align*} From Lemma~\ref{lem:reg-B}$(i)$ and \eqref{eq:LND}, we have \begin{align}\label{eq:boundJ55} J & = \int_{s+2h}^t \Big( G_{C_{1}(r-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r}) - G_{C_{1}(r_h-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r_h}) \Big) \, d r \\ & = \int_{s+2h}^t \Big( G_{C_{1}(r-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r}) - G_{C_{1}(r_h-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r}) \Big) \, d r \nonumber \\ & \quad + \int_{s+2h}^t \left( G_{C_{1}(r_h-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r}) - G_{C_{1}(r_h-s)^{2H}}f(\psi+ \mathbb{E}^s B_{r_h}) \right) \, d r \nonumber\\ & =: J_1 + J_2 .\nonumber \end{align} For $J_1$, we apply \cite[Proposition 3.7 (ii)]{butkovsky2021approximation} with $\beta=0$, $\delta=1$, $\alpha=0$ to get \begin{align*} \| J_1 \|_{L^m} & \leq C\, \| f \|_\infty \int_{s+2h}^t \big( (r-s)^{2H} - (r_h-s)^{2H} \big) (r_h-s)^{-2H} \, d r . \end{align*} Now applying the inequalities $(r-s)^{2H} - (r_h-s)^{2H} \leq C (r-r_{h}) (r_h-s)^{2H-1}$ and $2 (r_h-s) \ge (r-s)$, we obtain \begin{align*} \| J_1 \|_{L^m} & \leq C\, \| f \|_\infty \int_{s+2h}^t (r-r_h) (r-s)^{2H-1} (r-s)^{-2H} \, d r \\ & \leq C\, \| f \|_\infty h \int_{s+2h}^t (r-s)^{-1} \, d r \\ & \leq C\, \| f \|_\infty h \left( |\log(2h)| + |\log(t-s)| \right) . \end{align*} Use again that $2h < t-s$ to get \begin{align*} \| J_1 \|_{L^m} & \leq C\, \| f \|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} As for $J_2$, we have \begin{align*} \|J_{2}\|_{L^m} \leq \int_{s+2h}^t \|G_{C_{1}(r_h-s)^{2H}} f \|_{\mathcal{C}^1} \, \| \mathbb{E}^s B_{r} - \mathbb{E}^s B_{r_h} \|_{L^m} \, d r.
\end{align*} In view of \cite[Proposition 3.7 (i)]{butkovsky2021approximation} applied with $\beta=1$, $\alpha=0$, \cite[Proposition 3.6 (v)]{butkovsky2021approximation} and using again that $2 (r_h-s) \ge (r-s)$, we get \begin{align*} \| J_2 \|_{L^m} & \leq C\, \| f \|_\infty \int_{s+2h}^t \| \mathbb{E}^s B_{r} - \mathbb{E}^s B_{r_h} \|_{L^m} (r_h-s)^{-H} \, d r \\ & \leq C\, \| f \|_\infty \int_{s+2h}^t (r-r_h) (r-s)^{H-1} (r-s)^{-H} \, d r \\ & \leq C\, \| f \|_\infty\, h\, \big( |\log(2h)| + |\log(t-s)| \big) \\ & \leq C\, \| f \|_\infty\, h^{\frac{1}{2}-\varepsilon}\, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} Combining the bounds on $J_1$ and $J_2$, we deduce that \begin{align*} \| J \|_{L^m} \leq C\, \| f \|_\infty\, h^{\frac{1}{2}-\varepsilon} \, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} Hence for all $t-s>2h$, \begin{align}\label{eq:Ast>h} \| A_{s,t} \|_{L^m} \leq C\, \| f \|_\infty\, h^{\frac{1}{2}-\varepsilon} \, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align} \medskip Overall, combining \eqref{eq:Ast<h} and \eqref{eq:Ast>h}, we obtain that for all $s \leq t$, \begin{align*} \| A_{s,t}\|_{L^{m}} & \leq C \|f\|_{\infty} h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} Thus for any $u\in [s,t]$, \begin{align*} \| \delta A_{s,u,t}\|_{L^{m}} & \leq \| A_{s,t}\|_{L^{m}}+\| A_{s,u}\|_{L^{m}}+\| A_{u,t}\|_{L^{m}}\\ &\leq C \|f\|_{\infty} h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{align*} The power in $(t-s)$ is strictly larger than $1/2$, so \eqref{sts2} holds. 
\paragraph{Convergence in probability.} Finally, for a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size converging to zero, we have \begin{align*} \Big\| \mathcal{A}_t - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \Big\|_{L^1} & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E} \Big| f(\psi +B_r) - f(\psi+ B_{r_h}) -\mathbb{E}^{t_i^k}[ f(\psi +B_{r}) - f(\psi+ B_{r_h})] \Big| \, d r\\ & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E} \Big| f(\psi +B_r) -\mathbb{E}^{t_i^k} f(\psi +B_{r}) \Big| \, d r \\ &\quad + \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E}\Big| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h}) \Big| \, d r \\ & =: I_1 + I_2 . \end{align*} In view of Lemma \ref{lem:reg-B}$(iii)$, it follows that \begin{align*} I_1 & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f \|_{\mathcal{C}^1} (r-t_i^k)^H \, d r \leq \| f \|_{\mathcal{C}^1} |\Pi_{k}|^H \, (t-S). \end{align*} As for $I_{2}$, note that if $r_h \leq t_i^k$, then $\mathbb{E}| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h})| = 0$. On the other hand, when $r_h \in (t_i^k,t_{i+1}^k]$, then in view of Lemma~\ref{lem:reg-B}$(iii)$, we have \begin{align*} \mathbb{E}| f(\psi +B_{r_h}) -\mathbb{E}^{t_i^k} f(\psi +B_{r_h}) | \leq C \| f \|_{\mathcal{C}^1} |\Pi_{k}|^H . \end{align*} It follows that \begin{align*} I_2 & \leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f \|_{\mathcal{C}^1} |\Pi_{k}|^H \, d r , \end{align*} and therefore $\sum_{i=1}^{N_{k}-1} A_{t_{i}^k, t_{i+1}^k}$ converges in probability to $\mathcal{A}_{t}$ as $k\to +\infty$.
We can therefore apply Lemma~\ref{lem:SSL} with $\varepsilon_1>0$ and $\varepsilon_2 = \varepsilon/2$ to conclude that \begin{align*} \| \mathcal{A}_t - \mathcal{A}_s \|_{L^m} & \leq \| \mathcal{A}_t - \mathcal{A}_s - A_{s,t} \|_{L^m} + \| A_{s,t} \|_{L^m} \\ & \leq C\, \|f\|_\infty \, h^{\frac{1}{2}-\varepsilon} \, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2} } . \end{align*} \end{proof} \begin{proposition}\label{prop:newbound-E2} Let $\varepsilon \in (0,\frac{1}{2})$ and $m \in [2, \infty)$. There exists a constant $C>0$ such that for any $\mathbb{R}^d$-valued stochastic process $(\psi_{t})_{t\in [0,1]}$ adapted to $\mathbb{F}$, any $f \in \mathcal{C}^1_{b}(\mathbb{R}^d, \mathbb{R}^d) $, any $h \in (0,1)$ and any $(s, t)\in \Delta_{0,1}$, we have \begin{align}\label{eq:ssl-on-on-SDE} \Big\| \int_s^t f(\psi_r + B_r) - f(\psi_{r} + B_{r_h}) \, d r \Big\|_{L^{m}} & \leq C \Big( \|f\|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} + \|f \|_{\mathcal{C}^1} [\psi]_{\mathcal{C}^{1}_{[0,1]} L^{\infty}} h^{1-\varepsilon} (t-s)^{1+\frac{\varepsilon}{2}} \Big) . \end{align} \end{proposition} \begin{proof} Assume that $[\psi]_{\mathcal{C}^{1}_{[0,1]}L^\infty}<\infty$, otherwise \eqref{eq:ssl-on-on-SDE} trivially holds. We will check the conditions in order to apply Lemma~\ref{lem:SSL} (with $q=m$). Let $0 \leq S < T \leq 1$. For any $(s,t) \in \Delta_{S,T}$, define \begin{align*} A_{s,t} = \int_s^t f(\psi_s + B_r) - f(\psi_s+ B_{r_h}) \, d r ~~\mbox{and}~~ \mathcal{A}_t = \int_S^t f(\psi_r+ B_r) - f(\psi_r+ B_{r_h}) \, d r .
\end{align*} To show that \eqref{sts1} and \eqref{sts2} hold true with $\varepsilon_1= \varepsilon_2=\varepsilon/2 >0$ and $\alpha_1=\alpha_2=0$, we prove that there exists a constant $C>0$ independent of $s,t,S$ and $T$ such that for $u = (s+t)/2$, \begin{enumerate}[label=(\roman*)] \item \label{item54(1)} $\|\mathbb{E}^{{s}} [\delta A_{s,u,t}]\|_{L^m} \leq C\, \|f \|_{\mathcal{C}^1}\, [\psi]_{\mathcal{C}^{1}_{[S,T]} L^{\infty}}\, h^{1-\varepsilon}\, ({t}-{s})^{1 + \frac{\varepsilon}{2}}$; \item \label{item54(2)} $\| \delta A_{s,u,t}\|_{L^m} \leq C\, \|f\|_{\infty}\, h^{\frac{1}{2}-\varepsilon}\, ({t}-{s})^{\frac{1}{2}+\frac{\varepsilon}{2}}$; \item \label{item54(3)} If \ref{item54(1)} and \ref{item54(2)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=1}^{N_k-1} A_{t^k_i,t^k_{i+1}}$ along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given in \eqref{eq:Prop51A}. \end{enumerate} Assume for now that \ref{item54(1)}, \ref{item54(2)} and \ref{item54(3)} hold. Applying Lemma~\ref{lem:SSL}, we obtain that \begin{align*} \Big\| \int_s^t f(\psi_r + B_r) - f(\psi_{r} + B_{r_h}) \, d r \Big\|_{L^{m}} & \leq C \Big( \|f\|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} + \|f \|_{\mathcal{C}^1} [\psi]_{\mathcal{C}^{1}_{[S,T]} L^{\infty}} h^{1-\varepsilon} (t-s)^{1+\frac{\varepsilon}{2}} \Big) \\ & \quad +\| A_{s,t}\|_{L^m}. \end{align*} We will see in \eqref{eq:Prop54Ast} that $\|A_{s,t}\|_{L^m} \leq C\, \|f\|_{\infty} h^{\frac{1}{2}-\varepsilon}\, ({t}-{s})^{\frac{1}{2}+\frac{\varepsilon}{2}}$. Then choosing $(s,t)=(S,T)$, we get \eqref{eq:ssl-on-on-SDE}, using that $[\psi]_{\mathcal{C}^{1}_{[S,T]} L^{\infty}} \leq [\psi]_{\mathcal{C}^{1}_{[0,1]} L^{\infty}} $. We now check that the conditions \ref{item54(1)}, \ref{item54(2)} and \ref{item54(3)} actually hold. 
\smallskip Proof of \ref{item54(1)}: We have \begin{align*} \mathbb{E}^s \delta A_{s,u,t} & = \mathbb{E}^s \int_u^t f(\psi_s+ B_r) - f(\psi_s+ B_{r_h}) - f(\psi_u + B_r) + f(\psi_u+ B_{r_h}) \, d r . \end{align*} \paragraph{The case $t-u \leq 2h$.} In this case, using the Lipschitz norm of $f$, we have \begin{align*} | \mathbb{E}^s \delta A_{s,u,t} | & \leq 2 \|f\|_{\mathcal{C}^1} \int_u^t |\psi_s-\psi_u| \, d r. \end{align*} Therefore using the inequality $(t-u) (u-s) \leq C\, h^{1-\varepsilon}\, (t-u)^{\varepsilon} (u-s) \leq C\, (t-s)^{1+\varepsilon} h^{1-\varepsilon}$, \begin{align*} \| \mathbb{E}^s \delta A_{s,u,t} \|_{L^m} & \leq \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^m} (t-u) (u-s) \\ & \leq C \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} (t-s)^{1+\varepsilon} h^{1-\varepsilon} . \end{align*} \paragraph{The case $t-u > 2h$.} We split the integral between $u$ and $u+2h$ and then between $u+2h$ and $t$ as follows: \begin{align*} \mathbb{E}^s \delta A_{s,u,t} & = \int_u^{u+2h} \mathbb{E}^s [ f(\psi_s+ B_r) - f(\psi_s+ B_{r_h}) - f(\psi_u + B_r) + f(\psi_u+ B_{r_h}) ] \, d r \\ & \quad + \mathbb{E}^s \int_{u+2h}^t \mathbb{E}^u [ f(\psi_s+ B_r) - f(\psi_s+ B_{r_h}) - f(\psi_u + B_r) + f(\psi_u+ B_{r_h}) ] \, d r \\ & =: J_1 + J_2 , \end{align*} using the tower property of conditional expectation for $J_{2}$. For $J_1$, we obtain from the case $t-u\leq 2h$ that \begin{align}\label{eq:J1Prop54} \| J_{1}\|_{L^m} = \|\mathbb{E}^s \delta A_{s,u,u+2h}\|_{L^m}& \leq C \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} (t-s)^{1+\varepsilon} h^{1-\varepsilon} . 
\end{align} As for $J_2$, we use Lemma~\ref{lem:reg-B}$(i)$ and \eqref{eq:LND} to write \begin{align*} J_2 & = \mathbb{E}^s \int_{u+2h}^t \big( G_{C_{1}(r-u)^{2H}} - G_{C_{1}(r_h-u)^{2H}} \big) \big( f(\psi_s+ \mathbb{E}^u B_r) - f(\psi_u +\mathbb{E}^u B_r) \big) \, d r \\ &\quad + \mathbb{E}^s \int_{u+2h}^t G_{C_{1}(r_h-u)^{2H}} \Big( f(\psi_s+ \mathbb{E}^u B_r) - f(\psi_s+ \mathbb{E}^u B_{r_h}) - f(\psi_u + \mathbb{E}^u B_r) + f(\psi_u+ \mathbb{E}^u B_{r_h}) \Big) \, d r \\ & =: J_{21} + J_{22} . \end{align*} For $J_{21}$, we apply \cite[Proposition 3.7 (ii)]{butkovsky2021approximation} with $\beta=0$, $\delta=1$, $\alpha=0$ and $f\equiv f(\psi_s+ \cdot) - f(\psi_u +\cdot)$ to get \begin{align*} \| J_{21} \|_{L^m} & \leq C \| \mathbb{E}^s \| f(\psi_s + \cdot) - f(\psi_u+\cdot) \|_{\infty} \|_{L^m} \int_{u+2h}^t \big( (r-u)^{2H}-(r_h-u)^{2H} \big) (r_h-u)^{-2H} \, d r . \end{align*} Now using that $ \| \mathbb{E}^s \| f(\psi_s + \cdot) - f(\psi_u+\cdot) \|_{\infty} \|_{L^m} \leq \| f \|_{\mathcal{C}^1} \| \mathbb{E}^s |\psi_s - \psi_u| \|_{L^m} \leq \| f \|_{\mathcal{C}^1} \| \psi_s - \psi_u \|_{L^\infty}$ and applying the inequalities $(r-u)^{2H} - (r_h-u)^{2H} \leq C (r-r_{h}) (r_h-u)^{2H-1}$ and $2 (r_h-u) \ge (r-u)$, we obtain \begin{align*} \| J_{21} \|_{L^m} & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} |u-s| \int_{u+2h}^t (r-r_h) (r-u)^{2H-1} (r-u)^{-2H} \, d r \\ & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty}\, h \, (t-s)\, \big( |\log(2h)| + |\log(t-u)| \big) . \end{align*} Since $t-u = (t-s)/2 > 2h$, one has \begin{align}\label{eq:J21Prop54} \| J_{21} \|_{L^m} & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} \, h^{1-\varepsilon} (t-s)^{1+\frac{\varepsilon}{2}} .
\end{align} As for $J_{22}$, observe that \begin{align*} &\Big|G_{C_{1}(r_h-u)^{2H}} \Big( f(\psi_s+ \mathbb{E}^u B_r) - f(\psi_s+ \mathbb{E}^u B_{r_h}) - f(\psi_u + \mathbb{E}^u B_r) + f(\psi_u+ \mathbb{E}^u B_{r_h}) \Big)\Big| \\ &\quad \leq \|G_{C_{1}(r_h-u)^{2H}} f(\psi_{s}+\cdot) - G_{C_{1}(r_h-u)^{2H}}f(\psi_{u}+\cdot)\|_{\mathcal{C}^1} \, | \mathbb{E}^u B_r - \mathbb{E}^u B_{r_h} | \\ &\quad \leq C\, \| f(\psi_s + \cdot) - f(\psi_u+\cdot) \|_{\infty}\, (r_{h}-u)^{-H} \, | \mathbb{E}^u B_r - \mathbb{E}^u B_{r_h} |\\ &\quad \leq C\, \| f \|_{\mathcal{C}^1} \, (r_{h}-u)^{-H} \, | \psi_s - \psi_u |\, | \mathbb{E}^u B_r - \mathbb{E}^u B_{r_h} |, \end{align*} where we used \cite[Proposition 3.7 (i)]{butkovsky2021approximation} with $\beta=1$ and $\alpha=0$ in the penultimate inequality. Now in view of the previous inequality, using consecutively Jensen's inequality, \cite[Proposition 3.6 (v)]{butkovsky2021approximation}, that $2(r_{h}-u)\geq r-u$ and that $t-u = (t-s)/2 > 2h$, we obtain \begin{align}\label{eq:J22Prop54} \| J_{22} \|_{L^m} & \leq C\, \| f \|_{\mathcal{C}^1} \int_{u+2h}^t \| \mathbb{E}^s[ | \psi_s - \psi_u |\, | \mathbb{E}^u (B_r - B_{r_h})|] \|_{L^m} (r_{h}-u)^{-H} \, d r \nonumber\\ & \leq C\, \| f \|_{\mathcal{C}^1}\, \|\psi_{s}-\psi_{u}\|_{L^\infty} \int_{u+2h}^t \| \mathbb{E}^u (B_r - B_{r_h}) \|_{L^m} (r_{h}-u)^{-H} \, d r \nonumber\\ & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} \, (u-s) \int_{u+2h}^t (r-r_h) (r-u)^{H-1} (r_{h}-u)^{-H} \, d r \nonumber \\ & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} \, h\, (t-s) \, \big( | \log(2h) | + | \log(t-u)| \big)\nonumber \\ & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} \, h^{1-\varepsilon}\, (t-s)^{1+\frac{\varepsilon}{2}} .
\end{align} In view of the inequalities \eqref{eq:J1Prop54}, \eqref{eq:J21Prop54} and \eqref{eq:J22Prop54}, we have finally \begin{align*} \| \mathbb{E}^s \delta A_{s,u,t} \|_{L^{m}} & \leq C\, \| f \|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[S,T]} L^\infty} h^{1-\varepsilon} (t-s)^{1+\frac{\varepsilon}{2}} . \end{align*} \medskip Proof of \ref{item54(2)}: We write \begin{equation*} \| \delta A_{{s},u,{t}}\|_{L^m} \leq \| A_{{s},{t}}\|_{L^m} + \| A_{{s},u}\|_{L^m} + \| A_{u,{t}}\|_{L^m} \end{equation*} and we apply Lemma~\ref{lem:newbound-E2-v0} for each term in the right-hand side of the previous inequality, respectively for $\psi = \psi_{s}$, $\psi_{s}$ again and $\psi_{u}$. We thus have \begin{align} \label{eq:Prop54Ast} \| A_{s,t} \|_{L^{m}} \leq C \, \|f\|_\infty\, h^{\frac{1}{2}-\varepsilon} \, (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2} }, \end{align} and combining similar inequalities on $A_{{s},u}$ and $A_{u,{t}}$ with \eqref{eq:Prop54Ast} yields \begin{equation*} \| \delta A_{s,u,t}\|_{L^m} \leq C\, \|f\|_{\infty}\, h^{\frac{1}{2}-\varepsilon}\, ({t}-{s})^{\frac{1}{2}+\frac{\varepsilon}{2}} . \end{equation*} \medskip Proof of \ref{item54(3)}: Finally, for a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size $|\Pi_{k}|$ converging to zero, we have \begin{align*} \left\| \mathcal{A}_t - \sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k} \right\|_{L^1} &\leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \mathbb{E}| f(\psi_r +B_r) - f(\psi_r+ B_{r_h}) -f(\psi_{t_i^k} +B_{r}) + f(\psi_{t_i^k}+ B_{r_h}) | \, d r\\ & \leq C \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f \|_{\mathcal{C}^1} \|\psi_r - \psi_{t_i^k} \|_{L^1} \, d r \\ & \leq C \|f \|_{\mathcal{C}^1} \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} [ \psi ]_{\mathcal{C}^1_{[0,1]} L^\infty} |\Pi_{k} | \, d r ~\underset{k\to\infty}{\longrightarrow} 0.
\end{align*} \end{proof} \begin{corollary}\label{cor:newbound-E2-general} Let $\varepsilon \in (0,\frac{1}{2})$ and $m \in [2, \infty)$. There exists a constant $C>0$ such that for any $\mathbb{R}^d$-valued $\mathbb{F}$-adapted process $(\psi_{t})_{t\in [0,1]}$, any $f \in \mathcal{C}^1_{b}(\mathbb{R}^d, \mathbb{R}^d) $, any $h \in (0,1)$, and any $(s, t)\in \Delta_{0,1}$, \begin{align*}% \Big\| \int_s^t f(\psi_r + B_r) - f(\psi_{r_h} + B_{r_h}) \, d r \Big\|_{L^{m}} & \leq C \left( \|f\|_\infty \, h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{1}{2}+\frac{\varepsilon}{2}} + \|f \|_{\mathcal{C}^1} [\psi]_{\mathcal{C}^{1}_{[0,1]} L^\infty} \, h^{1-\varepsilon} (t-s) \right) . \end{align*} \end{corollary} \begin{proof} Introducing the pivot term $f(\psi_{r} + B_{r_h})$, we have \begin{align*} \Big\| &\int_s^t f(\psi_r + B_r) - f(\psi_{r_h} + B_{r_h}) \, d r \Big\|_{L^{m}} \\ &\leq \Big\| \int_s^t f(\psi_{r} + B_{r_h}) - f(\psi_{r_h} + B_{r_h}) \, d r \Big\|_{L^{m}} + \Big\| \int_s^t f(\psi_r + B_r) - f(\psi_{r} + B_{r_h})\, d r \Big\|_{L^{m}}\\ &=:J_1 + J_2 . \end{align*} We first bound $J_1$ using the $\mathcal{C}^1$ norm of $f$: \begin{align*} J_1 \leq \|f\|_{\mathcal{C}^1} \int_s^t \| \psi_r - \psi_{r_h} \|_{L^m} \, d r \leq \|f\|_{\mathcal{C}^1} [ \psi ]_{\mathcal{C}^1_{[0,1]} L^\infty}\, h\, (t-s) . \end{align*} Then $J_2$ is bounded by Proposition \ref{prop:newbound-E2}. Combining the two bounds, we get the desired result. \end{proof} \begin{corollary}\label{cor:newbound-E2} Recall that the process $K^{h,n}$ was defined in \eqref{eq:def-Khn}. Let $\varepsilon \in (0,\frac{1}{2})$ and $m \in [2, \infty)$. 
There exists a constant $C>0$ such that for any $(s , t) \in \Delta_{0,1}$, any $h\in (0,1)$ and any $n\in \mathbb{N}$, we have \begin{align*}% \Big\| \int_s^t b^n(X_0+K^{h,n}_r + B_r) - b^n(X_0 +K_{r_h}^{h,n}+ B_{r_h}) \, d r \Big\|_{L^{m}} \leq C \Big( \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} + \|b^n\|_{\mathcal{C}^1} \|b^n\|_\infty h^{1-\varepsilon} \Big) (t-s)^{\frac{1}{2}} . \end{align*} \end{corollary} \begin{proof} Define the process $\psi_t = X_0 + K^{h,n}_t,\, t\in [0,1]$. Since $\psi$ is $\mathbb{F}$-adapted and $b^n \in \mathcal{C}^1_{b}(\mathbb{R}^d, \mathbb{R}^d)$, we apply Corollary \ref{cor:newbound-E2-general} to get \begin{align*} & \Big\| \int_s^t b^n(X_0+K^{h,n}_r + B_r) - b^n(X_0+K_{r_h}^{h,n}+ B_{r_h}) \, d r \Big\|_{L^{m}} \\ &\quad \leq C \Big( \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon} (t-s)^{\frac{\varepsilon}{2}} + \|b^n\|_{\mathcal{C}^1} [\psi]_{\mathcal{C}^1_{[0,1]}L^\infty} h^{1-\varepsilon} (t-s)^{\frac{1}{2}}\Big) (t-s)^{\frac{1}{2}} \\ &\quad \leq C \Big( \|b^n\|_\infty h^{\frac{1}{2}-\varepsilon}+ \|b^n\|_{\mathcal{C}^1} [\psi]_{\mathcal{C}^1_{[0,1]}L^\infty} h^{1-\varepsilon}\Big) (t-s)^{\frac{1}{2}} . \end{align*} It remains to prove an upper bound on $[\psi]_{\mathcal{C}^1_{[0,1]} L^\infty}$. For $0 \leq u \leq v \leq 1$, we have \begin{align*} |\psi_v -\psi_u| = |\int_u^v b^n(X^{h,n}_{r_h}) \, d r | \leq |v-u| \|b^n\|_\infty . \end{align*} Hence $[\psi]_{\mathcal{C}^1_{[0,1]} L^\infty} \leq \| b^n \|_\infty$. \end{proof} \section{Examples and simulations}\label{sec:simulations} In this section, we discuss examples of SDEs of the form \eqref{eq:SDE} that can be treated by Theorem~\ref{thm:main-SDE}. \subsection{Skew fractional Brownian motion} The skew Brownian motion is a one-dimensional process that behaves like a Brownian motion with a certain diffusion coefficient above the $x$-axis, and with another diffusion coefficient below the $x$-axis. 
We refer to \cite{harrison1981skew,Lejay} for various constructions, and in particular in \cite{LeGall}, it is shown to be the solution of an SDE which involves its local time. This equation reads $dX_{t} = \alpha\, dL^X_{t} + dW_{t}$, for $\alpha\in(-1,1)$, where $L^X$ is the local time at $0$ of the solution. Formally we can write $dL^X_{t} = \delta_{0}(X_{t})\, dt$. More generally in $\mathbb{R}^d$, although this is not the only possible approach (see e.g. \cite{Banos,Sole} for alternative definitions), we call skew fractional Brownian motion the solution to \eqref{eq:SDE} when the drift is $\alpha\delta_{0}$, $\alpha\in \mathbb{R}^d$, that is \begin{align}\label{eq:skewfbm} \, d X_t = \alpha \delta_0(X_t) \, d t + \, d B_t . \end{align} Since $\delta_{0} \in \mathcal{B}_p^{-d+\frac{d}{p}}$, Theorem~\ref{th:WP} gives strong existence and uniqueness for $H < \frac{1}{2(d+1)}$ and the tamed Euler scheme converges for the same values of $H$ by Theorem~\ref{thm:main-SDE}. \begin{remark} Since $H < 1/(2(d+1)) < 1/d$, we know from \cite[Theorem 7.1]{dalang2009minicourse} that the fBm visits any state $x$ (and in particular $0$) infinitely many times with positive probability. So the equation \eqref{eq:skewfbm} is not simply reduced to $X=B$.\\ Instead of putting a Dirac measure in dimension $d >1$, one can also define a skew fBm on some set $S$ of dimension $d-1$ by considering a measure $\mu$ supported on $S$ (for example a Hausdorff measure). As before, we know from \cite[Theorem 7.1]{dalang2009minicourse} that for $H < 1/d$, the fBm visits $S$ infinitely many times with positive probability. So the equation $d X_t = \mu(X_t) d t + d B_t$ is not reduced to $X=B$. Since signed measures also belong to $\mathcal{B}_p^{-d+d/p}$ \cite[Proposition 2.39]{bahouri2011fourier}, we have again strong existence and uniqueness for $H < 1/(2(d+1))$.
\end{remark} As an alternative construction of the skew fBm, we also propose to replace the local time by its approximation $b(x) = \frac{\alpha}{2\varepsilon} \mathds{1}_{(-\varepsilon, \varepsilon)}(x)$, $\varepsilon>0$. Now $b$ is bounded, hence $b \in \mathcal{B}_\infty^0$, so one can take any $H < 1/2$ and consider the SDE \begin{align*} \, d X_t = \frac{\alpha}{(2\varepsilon)^d} \mathds{1}_{(-\varepsilon, \varepsilon)^d}(X_t) \, d t + \, d B_t . \end{align*} In the Markovian case and dimension $d=1$, the skew Brownian motion is reflected on the $x$-axis when $\alpha=\pm 1$. Unlike the skew Brownian motion, the skew fBm (for $H\neq 1/2$) is never reflected, whatever the value of $\alpha$, since $X-B$ is more regular than $B$ (see Theorem~\ref{th:WP}). To construct reflected processes, a classical approach is to proceed by penalization, see e.g. \cite{LionsSznitman} in the Brownian case, and \cite{RTT} for rough differential equations. This consists in choosing a drift of the form $b_{\varepsilon}(x)=\frac{(x)_{-}}{\varepsilon}$ and letting $\varepsilon$ tend to $0$. Note that this approach also works for stochastic partial differential equations (SPDEs), see for instance \cite{nualart1992white,zambotti2003integration,Ludovic}. If we consider more specifically the stochastic heat equation, the solution in time observed at a fixed point in space behaves qualitatively like a fractional SDE with Hurst parameter $H=\frac{1}{4}$. Hence it is interesting to consider the following one-dimensional SDE: \begin{align}\label{eq:penalised} \, d X_t^\varepsilon = \frac{(X_t^\varepsilon)_{-}}{\varepsilon} \kappa(X_t^\varepsilon) \, d t + \, d B_t . \end{align} In \cite{RTT}, $\kappa$ was essentially the identity mapping and the distance between $X^\varepsilon$ and $X^0$ was quantified, with $X^0$ a reflected process. But then the drift is not in some Besov space.
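To make the preceding discussion concrete, the following is a minimal one-dimensional sketch of an explicit Euler scheme for the smoothed skew drift $b(x) = \frac{\alpha}{2\varepsilon} \mathds{1}_{(-\varepsilon, \varepsilon)}(x)$ introduced above. The fBm path is sampled exactly through a Cholesky factorisation of its covariance; all function names and numerical parameters are ours, and this is only an illustration, not the exact tamed scheme \eqref{eq:def-Khn}.

```python
import numpy as np

def fbm_path(n, hurst, horizon=1.0, rng=None):
    """Sample an exact fBm path on n grid steps via Cholesky factorisation
    of the covariance E[B_s B_t] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    rng = np.random.default_rng() if rng is None else rng
    t = horizon * np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return np.concatenate(([0.0], chol @ rng.standard_normal(n)))

def euler_smoothed_skew(x0, alpha, eps, n, hurst, horizon=1.0, rng=None):
    """Euler scheme for dX = (alpha / (2 eps)) 1_{(-eps, eps)}(X) dt + dB,
    the one-dimensional smoothed skew fBm equation."""
    b = fbm_path(n, hurst, horizon, rng)
    dt = horizon / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = alpha / (2 * eps) if abs(x[k]) < eps else 0.0
        x[k + 1] = x[k] + drift * dt + (b[k + 1] - b[k])
    return x
```

The Cholesky sampling is exact but costs $O(n^3)$; for fine grids, circulant-embedding (Davies--Harte) methods are preferable.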
So in order to approximate \eqref{eq:penalised} numerically, we could assume that $\kappa$ is a smooth cut-off to ensure that the drift is in some Besov space (e.g. $\mathcal{B}^1_\infty$), however it is no longer clear that $X^\varepsilon$ converges to a reflected process. We leave the question of numerical approximation of reflected fractional processes for future research. \subsection{Applications in finance} Some models of mathematical finance involve irregular drifts. First, consider a dividend-paying firm, whose capital evolution can be modelled by the following one-dimensional SDE: \begin{align*}% \, d X_{t} = (r - \mathds{1}_{X_{t}\leq q}) \, d t + \sigma \, d B_{t} , \end{align*} where $r$ is the interest rate, $\sigma$ the volatility of the market and $q$ some threshold, see e.g.\ \cite{MR1466852}. \begin{remark} An extension of the previous SDE to dimension $d$ can be done by considering a threshold of the form $\mathds{1}_{x \in D}$ where $D$ is some domain in $\mathbb{R}^d$, or $\mathds{1}_{f(x) \leq q}$ where $f : \mathbb{R}^d \to \mathbb{R}$. \end{remark} Numerical methods for bounded drifts with Brownian noise exist in the literature, see e.g. \cite{dareiotis2021quantifying,jourdain2021convergence}. When $B$ is a fractional Brownian motion with $H<1/2$, \cite{butkovsky2021approximation} provides a rate of convergence for the strong error (and Theorem~\ref{thm:main-SDE} provides the same rate of convergence). \smallskip Then, we propose a class of models which can be related heuristically to the rough Heston model introduced in \cite{ElEuchRosenbaum}. Recently, it was observed empirically that the volatility in some high-frequency financial markets has a very rough behaviour, in the sense that its trajectories have a very small H\"older exponent, close to $0.1$. Formally, the volatility component in the rough Heston model is described by a square root diffusion coefficient and a very rough driving noise.
It would read \begin{equation}\label{eq:roughHeston} dV_{t} = \kappa(V_{t}) \, dt + \sqrt{V_{t}}\, dB_{t}, \end{equation} if we could make sense of this equation, the difficulty being both to define a stochastic integral when $H$ is small, and to ensure the positivity of the solution. Note that it is possible to define properly a rough Heston model, by means of Volterra equations, see \cite{ElEuchRosenbaum}. However, we keep discussing \eqref{eq:roughHeston} at a formal level, and consider the Lamperti transform $L(x) = \sqrt{x}$. Assume that a first order chain rule holds for the solution of \eqref{eq:roughHeston}, then as long as $V$ stays nonnegative, it follows that \begin{align*} L(V_{t}) = L(V_{0}) + \int_{0}^t \frac{\kappa(V_{s})}{2\sqrt{V_{s}}} \, ds + \frac{1}{2}B_{t} , \end{align*} which for $\widetilde{V}_{t} := L(V_{t}) = \sqrt{V_{t}}$ also reads \begin{align}\label{eq:Vtilde} \widetilde{V}_{t} = \widetilde{V}_{0} + \frac{1}{2} \int_{0}^t \frac{1}{\widetilde{V}_{s}} \kappa( \widetilde{V}_{s}^2) \, ds + \frac{1}{2} B_{t} . \end{align} While there are some quantitative numerical approximation results for rough models (e.g. for the rough Bergomi model \cite{Gassiat,FrizEtAl}), the Euler scheme for the rough Heston model is only known to converge without a rate \cite{RTY}. Now we can make sense of Equation \eqref{eq:Vtilde} with drift $b(x) = \frac{\kappa(x^2)}{2 |x|^{1-\varepsilon}}$, since for $\kappa$ a bump function and for small $\varepsilon>0$, $b\in \mathcal{B}^{0}_{1}$ (see \cite[Prop. 2.21]{bahouri2011fourier}). Hence Theorem~\ref{thm:main-SDE} can be applied whenever $H<1/4$ and in view of Corollary~\ref{cor:bn-choice}, this yields a strong error of order $(1/4)^-$. \subsection{Fractional Bessel processes (in dimension $1$)} Bessel processes \cite[Chapter XI]{revuz2013continuous} play an important role in probability theory and financial mathematics.
As a generalization and motivated by the discussion in the previous subsection, we consider solutions to the following one-dimensional SDE: \begin{align}\label{eq:bessel} \, d X_t = \frac{\kappa(X_{t})}{|X_t|^\alpha} \, d t + \, d B_t , \end{align} for some $\alpha > 0$ and $H \in (0,1)$. When $H=1/2$, $\alpha=1$ and $\kappa$ is a constant at least $1/2$, we know that the solution started from a positive initial condition always stays positive \cite[Chapter XI, Section 1]{revuz2013continuous}. By computations similar to \cite[Prop. 2.21]{bahouri2011fourier}, the drift $b(x)= \kappa(x) \, |x|^{-\alpha}$ belongs to $\mathcal{B}_\infty^{-\alpha}$ for $\alpha \in (0,1)$. In this case, \eqref{eq:cond-gamma-p-H} reads $H < \frac{1}{2(1+\alpha)}$ and the rate of convergence of the tamed Euler scheme is close to $\frac{1}{2(1+\alpha)}$. \subsection{Other examples in higher dimension} A way to extend the process \eqref{eq:bessel} to dimension $2$ could be the following: \begin{align}\label{eq:bessel2D} \, d X_t^{i} = \frac{\kappa(X_{t})}{|X_t|^{\alpha}} \, d t + \, d B_t^i , \ i=1,2, \end{align} where $B^1$ and $B^2$ are two independent fBms and $\alpha>0$. By \cite[Proposition 2.21]{bahouri2011fourier}, one can prove that $x \mapsto b(x)= \frac{\kappa(x)}{| x|^{\alpha}}$ belongs to $\mathcal{B}^{-\alpha}_\infty$ for $\alpha \in (0,2)$. Therefore, the condition on $H$ becomes $H< \frac{1}{2(1+\alpha)}$. \\ Notice that the SDE \eqref{eq:bessel2D} presents a singularity only at the point $(0,0)$. To create a singularity on both the $x$ and $y$-axes, one could also look at the following SDE \begin{align*}% \, d X_t^{i} = \frac{1}{(|X_t^1| \wedge |X_t^2|)^{\alpha}} \, d t + \, d B_t^i , \ i=1,2. \end{align*} \smallskip Another example to consider in higher dimension is an SDE with discontinuous drift.
For instance, let the drift be an indicator function of some domain $D$ as in \eqref{eq:ind2D}: \begin{align}\label{eq:ind2D} \, d X_t= \mathds{1}_{D}^{(d)}(X_t) \, d t + \, d B_t, \end{align} where $\mathds{1}_{D}^{(d)}$ denotes the vector-valued indicator function with identical entries $\mathds{1}_{D}$ on each component. We have $\mathds{1}_{D}^{(d)} \in \mathcal{B}_\infty^0$, and thus one can take $H < 1/2$. \subsection{Simulations} In dimension $1$, we will simulate two SDEs. First the skew fractional Brownian motion \eqref{eq:skewfbm} with $\alpha=1$. Then we simulate the SDE with bounded measurable drift $\mathds{1}_{\mathbb{R}_{+}} \in \mathcal{B}_\infty^0$, i.e. \begin{align}\label{eq:simubounded} \, d X_t = \mathds{1}_{X_{t}>0} \, d t + \, d B_t . \end{align} The drifts are approximated by convolution with the Gaussian kernel, that is $b^n (x) = G_{\frac{1}{n}} b (x)$ and we fix the initial condition to $X_0=0$. For the skew fBm, this corresponds to $$b^n(x) = \sqrt{\frac{n}{2 \pi}} e^{-\frac{n x^2}{2}} ,$$ and for \eqref{eq:simubounded} this yields $$b^n(x) = \frac{1}{2} + \sqrt{\frac{n}{2 \pi}} \int_0^x e^{-\frac{n y^2}{2}} \, d y .$$ As in Corollary~\ref{cor:bn-choice}, we fix the parameter $n$ of the mollifier in the tamed Euler scheme as $n=\lfloor h^{-\frac{1}{1-\gamma+\frac{d}{p}}}\rfloor$. Our aim is to observe the rate of convergence numerically, which requires a reference value for the solutions of \eqref{eq:skewfbm} and \eqref{eq:simubounded}. Since these solutions do not have an explicit expression, no exact reference value is available. Instead, we first make a costly computation with the very small time-step $h=2^{-7}\, 10^{-4}$ that will serve as reference value. In a second step, we compute the tamed Euler scheme for $h\in\{2^{-1}\, 10^{-4},\ 2^{-2}\, 10^{-4},\ 2^{-3}\, 10^{-4},\ 2^{-4}\, 10^{-4} \}$ and compare it to the reference value computed with the same realisation of the noise.
The result is averaged over $N=50000$ realisations of the noise to get an estimate of the strong error. ~ In dimension $2$, we simulate the $2$-dimensional SDE \eqref{eq:ind2D} with $X_0=0$ and with $D$ the quadrant defined by $D = \{x=(x_{1},x_{2}):~ x_{1} \ge 0, x_{2} \ge 0 \}$. The drift $b$ is approximated by \begin{align*} b^n(x) = G_{\frac{1}{n}} b (x) = \frac{n}{2 \pi } \int_{\mathbb{R}^2} e^{-\frac{n}{2}|x-y|^2} \mathds{1}_{D}(y) \, d y. \end{align*} Since the Gaussian kernel and the indicator of the quadrant both factorise over the coordinates, $b^n(x)$ is simply the product of the one-dimensional mollified Heaviside functions evaluated at $x_1$ and $x_2$. \smallskip Recall that according to Corollary~\ref{cor:bn-choice}, the theoretical order of convergence is almost $1/2$ when the drift is bounded and almost $1/4$ when the drift is a Dirac distribution. We plot the logarithmic strong error with respect to the time-step $h$ for several values of the Hurst parameter, in Figure \ref{fig:simbounded} for Equations \eqref{eq:simubounded} and \eqref{eq:ind2D}, and in Figure \ref{fig:simDirac} for Equation \eqref{eq:skewfbm}. We conclude that the empirical order of convergence is consistent with the theoretical one. \begin{figure}[h!] \centering \includegraphics[scale=0.4]{Hcumul_IND.eps} \, \includegraphics[scale=0.4]{Hcumul_IND_2D.eps} \caption{Plot of the logarithm of the strong error ($y$-axis) against $h$ ($x$-axis) for a bounded drift. Left: Equation~\eqref{eq:simubounded} ($d=1$); Right: Equation~\eqref{eq:ind2D} ($d=2$). For different values of $H<1/2$, and in both dimensions $1$ and $2$, we observe that the numerical order of convergence (by linear regression) is approximately $0.5$ (with a standard deviation plotted in dashed lines), which coincides with the theoretical order $1/2$. } \label{fig:simbounded} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.4]{Hcumul_DIR.eps} \caption{Plot of the logarithm of the strong error ($y$-axis) against $h$ ($x$-axis) for a Dirac drift in dimension $d=1$, namely Equation~\eqref{eq:skewfbm}.
For several values of $H<1/2$, we observe that the numerical order of convergence (by linear regression) is approximately $0.25$ (with a standard deviation plotted in dashed lines), which is close to the theoretical order $1/4$. } \label{fig:simDirac} \end{figure} \begin{appendices} \section{Proofs of regularisation by fBm in dimension $d$}\label{app:reg-fBm} We start by recalling an extension of the stochastic sewing Lemma \cite{le2020stochastic} with singular weights that was established in \cite{athreya2020well}. It is useful for the main estimates of Section~\ref{sec:proofWeakEx} (Lemma~\ref{lem:1streg} and Proposition \ref{prop:regfBm}, whose proofs are developed in this appendix) and also in Section \ref{sec:stochastic-sewing}. For $\alpha \in[0,1)$ and $(s, t) \in \Delta_{S, T}$ we define $$ \nu_{S, T}^{(\alpha)}(s, t):=\int_{s}^{t}(r-S)^{-\alpha} d r , $$ which satisfies \begin{align}\label{eq:bounds-nu} \nu_{S, T}^{(\alpha)}(s, t) \leq C\, (t-s)^{1-\alpha} . \end{align} \begin{lemma}[\cite{athreya2020well}] \label{lem:SSL} Let $0\leq S<T$, $m \in [2, \infty)$ and $q \in [m,\infty]$. Let $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered probability space. Let $A: \Delta_{S,T} \rightarrow L^m$ be such that $A_{s,t}$ is $\mathcal{F}_t$-measurable for any $(s,t) \in \Delta_{S,T}$. Assume that there exist constants $\Gamma_1,\Gamma_2\geq 0$, $\alpha_1 \in [0,1)$, $\alpha_2 \in [0, \frac{1}{2})$ and $\varepsilon_1,\varepsilon_2>0$ such that for any $(s,t) \in \Delta_{S,T}$ and $u:=(s+t)/2$, \begin{align} \|\mathbb{E}^s[\delta A_{s,u,t}]\|_{L^q}&\leq \Gamma_1 \, (u-S)^{-\alpha_1} (t-s)^{1+\varepsilon_1},\label{sts1}\\ \| \big( \mathbb{E}^S | \delta A_{s,u,t} |^{m} \big)^{\frac{1}{m}} \|_{L^q} &\leq \Gamma_2 \, (u-S)^{-\alpha_2} (t-s)^{\frac{1}{2}+\varepsilon_2}. 
\label{sts2} \end{align} Then there exists a process $(\mathcal{A}_t)_{t\in [S,T]}$ such that, for any $t \in [S,T]$ and any sequence of partitions $\Pi_k=\{t_i^k\}_{i=0}^{N_k}$ of $[S,t]$ with mesh size going to zero, we have \begin{align} \label{sts3} \mathcal{A}_t=\lim_{k\rightarrow \infty}\sum_{i=0}^{N_k-1}A_{t_i^k,t_{i+1}^k} \text{ in probability.} \end{align} Moreover, there exists a constant $C=C(\varepsilon_1,\varepsilon_2,m, \alpha_1, \alpha_2)$ independent of $S,T$ such that for every $(s,t) \in \Delta_{S,T}$ we have \begin{align*} \| \big( \mathbb{E}^S | \mathcal{A}_t-\mathcal{A}_s-A_{s,t} |^m \big)^{\frac{1}{m}} \|_{L^q} \leq C\, \Gamma_1 \nu_{S, T}^{(\alpha_{1}) }(s, t) (t-s)^{ \varepsilon_1} + C\, \Gamma_2 \Big( \nu_{S, T}^{(2 \alpha_{2})}(s, t) \Big)^{\frac{1}{2}} (t-s)^{ \varepsilon_2}, \end{align*} and \begin{align*} \|\mathbb{E}^S[\mathcal{A}_t-\mathcal{A}_s-A_{s,t}]\|_{L^{q}}\leq C\, \Gamma_1\, \nu_{S,T}^{(\alpha_1)}(s,t)\, (t-s)^{ \varepsilon_1}. \end{align*} \end{lemma} \begin{remark} \begin{itemize} \item In this paper, the stochastic sewing Lemma is applied for only two possible values of $q$, that is $q=\infty$ or $q=m$, in which case we have $\| \big( \mathbb{E}^S | \cdot |^m \big)^{\frac{1}{m}} \|_{L^m} = \| \cdot \|_{L^m}$. \item A critical-exponent version of the stochastic sewing Lemma, introduced in \cite[Theorem 4.5]{athreya2020well} is used in Proposition \ref{prop:bound-E1-SDE-critic}.
Under the same notations and assumptions as Lemma \ref{lem:SSL} (with $q=m$ and $\alpha_{1}=\alpha_{2}=0$), assuming moreover that there exist $\Gamma_3, \Gamma_4, \varepsilon_4 >0$ such that \begin{align}\label{sts4} \left\|\mathbb{E}^s\left[\delta A_{s, u, t}\right]\right\|_{L^m} \leq \Gamma_3|t-s|+\Gamma_4|t-s|^{1+\varepsilon_4}, \end{align} we get that for $(s,t) \in \Delta_{S,T}$, \begin{align*} \left\|\mathcal{A}_t-\mathcal{A}_s-A_{s, t}\right\|_{L^m} \leq C \Gamma_3\left(1+\left|\log \frac{\Gamma_1 T^{\varepsilon_1}}{\Gamma_3}\right|\right)(t-s)+C \Gamma_2(t-s)^{\frac{1}{2}+\varepsilon_2}+C \Gamma_4(t-s)^{1+\varepsilon_4}. \end{align*} \end{itemize} \end{remark} \subsection{Proof of Lemma~\ref{lem:1streg}}\label{app:1streg} We will apply Lemma~\ref{lem:SSL} for $S\leq s \leq t \leq T$, \begin{align*} \mathcal{A}_t:=\int_S^t f(B_r,\Xi) \, dr ~~\text{and}~~ A_{s,t}:=\mathbb{E}^s\left[\int_s^t f(B_r,\Xi) \, dr\right]. \end{align*} Notice that we have $\mathbb{E}^s[\delta A_{s,u,t}]=0$, so \eqref{sts1} trivially holds. In order to establish \eqref{sts2}, we will show that for some $\varepsilon_{2}>0$, \begin{align} \label{(4.8)-critic} \|\delta A_{s,u,t}\|_{L^q}\leq \Gamma_2 \, (t-s)^{\frac{1}{2}+\varepsilon_2} (u-S)^{-\frac{dH}{p}}. 
\end{align} For $u=(s+t)/2$ we have by the triangle inequality, Jensen's inequality for conditional expectation and Lemma~\ref{lem:reg-B}$(iv)$ (recall that $q \leq p$) that \begin{align*} \|\delta A_{s,u,t}\|_{L^q}&\leq \left\|\mathbb{E}^s\left[\int_u^t f(B_r,\Xi) \, dr\right]\right\|_{L^q} + \left\|\mathbb{E}^u\left[\int_u^t f(B_r,\Xi) \, dr\right]\right\|_{L^q}\\ &\leq \int_u^t\left(\|\mathbb{E}^s f(B_r,\Xi) \|_{L^q}+\|\mathbb{E}^u f(B_r,\Xi) \|_{L^q}\right) dr\\ &\leq 2 \int_u^t \|\mathbb{E}^u f(B_r,\Xi) \|_{L^q} \, dr \\ &\leq C\int_u^t \|\| f(\cdot,\Xi) \|_{\mathcal{B}_p^{\beta}}\|_{{L^q}} (r-u)^{H\beta} (u-S)^{-\frac{d}{2p}} (r-S)^{d\frac{1-2H}{2p}} \, dr\\ &\leq C \, \| \| f(\cdot,\Xi) \|_{\mathcal{B}_p^{\beta}}\|_{L^q}\, (t-u)^{1+H\beta} (u-S)^{-\frac{dH}{p}}, \end{align*} where we used $r-S \leq 2(u-S)$ for the last inequality. Hence, we have \eqref{(4.8)-critic} for $\varepsilon_2=1/2+H \beta >0$. Let $t\in [S,T]$. Let $(\Pi_k)_{k \in \mathbb{N}}$ be a sequence of partitions of $[S,t]$ with mesh size converging to zero. For each $k$, denote $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$. By Lemma~\ref{lem:reg-B}$(iii)$ we have that \begin{align*} \left\|\mathcal{A}_t-\sum_i A_{t^k_i,t^k_{i+1}} \right\|_{L^1}&\leq\sum_i \int_{t^k_i}^{t^k_{i+1}}\|f(B_r,\Xi)-\mathbb{E}^{t_i^k} f(B_r,\Xi) \|_{L^1} dr\\ &\leq C \, \|\|f(\cdot,\Xi) \|_{\mathcal{C}^1}\|_{L^2}\, (t-S)\, |\Pi_k|^H \longrightarrow 0. \end{align*} Hence \eqref{sts3} holds true. Applying Lemma~\ref{lem:SSL}, we get \begin{align}\label{eq:lem34-ssl-goal} \| \big( \mathbb{E}^S | \mathcal{A}_t-\mathcal{A}_s|^m \big)^{\frac{1}{m}} \|_{L^q} &\leq \|A_{s,t}\|_{L^q} + C \, \| \| f(\cdot,\Xi) \|_{\mathcal{B}_p^{\beta}}\|_{L^q}\, \Big( \nu_{S,T}^{(\frac{2dH}{p})}(s,t)\Big)^{\frac{1}{2}} (t-s)^{\frac{1}{2} + H\beta}. 
\end{align} To bound $\|A_{s,t}\|_{L^q}$, notice that \begin{align}\label{eq:boundAst} \|A_{s,t}\|_{L^q} &=\Big\|\mathbb{E}^s \int_s^t f(B_r,\Xi) \, dr \Big\|_{L^q} \leq \int_s^t \|\mathbb{E}^s f(B_r,\Xi) \|_{L^q} dr . \end{align} Hence to obtain \eqref{eq:regulINT}, use Lemma~\ref{lem:reg-B}$(ii)$ and recall that $1+H(\beta-\frac{d}{p}) >0$ to get that \begin{align*} \|A_{s,t}\|_{L^q} &\leq C \int_s^t \| \| f(\cdot,\Xi) \|_{\mathcal{B}_p^{\beta}}\|_{L^q}\, (r-s)^{H(\beta-\frac{d}{p})} dr \\ &\leq C \, \| \| f(\cdot,\Xi) \|_{\mathcal{B}_p^{\beta}}\|_{L^q}\, (t-s)^{1+H(\beta-\frac{d}{p})} . \end{align*} Plugging the previous inequality in \eqref{eq:lem34-ssl-goal} together with \eqref{eq:bounds-nu} yields \eqref{eq:regulINT}. \subsection{Proof of Proposition~\ref{prop:regfBm}}\label{app:regfBm} Let $(S,T)\in \Delta_{0,1}$. For $(s,t) \in \Delta_{S,T}$, let \begin{align} \label{eq:A} A_{s,t}:=\int_{s}^{t} f(B_r+\psi_{s}) dr \ \text{and } \mathcal{A}_{t}:=\int_S^{t} f(B_r+\psi_r) dr. \end{align} \paragraph{Proof of \ref{item:3.5(a)}.} Assume that $[\psi]_{\mathcal{C}^\tau_{[S,T]}L^{m,q}}<\infty$, otherwise \eqref{eq:3.5a} trivially holds. In the last part of this proof, we will check that the conditions needed to apply Lemma~\ref{lem:SSL} are verified.
Namely, we will show that \eqref{sts1} and \eqref{sts2} hold true with $\varepsilon_1=H(\beta-d/p-1)+\tau>0$, $\alpha_1=0$ and $\varepsilon_2=1/2+H(\beta-\frac{d}{p}) >0$, $\alpha_2=0$, so that there exists a constant $C>0$ independent of $S,T,s,t$ such that \begin{enumerate}[label=(\roman*$_{a}$)] \item \label{en:(1a)} $\|\mathbb{E}^{s} [\delta A_{s,u,t}]\|_{L^q}\leq C\, \|f\|_{\mathcal{B}_p^\beta}\, [\psi]_{\mathcal{C}^\tau_{[S,T]}L^{m,q}}\, (t-s)^{1+H(\beta-\frac{d}{p}-1)+\tau}$; % \item \label{en:(2a)} $\Big\| \big( \mathbb{E}^S |\delta A_{s,u,t} |^m \big)^{\frac{1}{m}} \Big\|_{L^q } \leq C\, \| f \|_{\mathcal{B}_p^\beta} (t-s)^{1+H(\beta-\frac{d}{p})}$; \item \label{en:(3a)} If \ref{en:(1a)} and \ref{en:(2a)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=0}^{N_k-1} A_{t^k_i,t^k_{i+1}}$ along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=0}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given in \eqref{eq:A}. \end{enumerate} Assume for now that \ref{en:(1a)}, \ref{en:(2a)} and \ref{en:(3a)} hold. Applying Lemma~\ref{lem:SSL} and recalling \eqref{eq:bounds-nu}, we obtain that \begin{align*} \Big\| \Big( \mathbb{E}^S \Big | \int_{s}^{t} f(B_r+\psi_r) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^q} &\leq C \, \|f\|_{\mathcal{B}_p^\beta}\, [\psi]_{\mathcal{C}^\tau_{[S,T]}L^{m,q}}\, (t-s)^{1+H(\beta-1-\frac{d}{p})+\tau} \\ & \quad + C\, \| f \|_{\mathcal{B}_p^\beta} (t-s)^{1+H(\beta-\frac{d}{p})} + \big\| \big( \mathbb{E}^S | A_{s,t} |^m \big)^{\frac{1}{m}} \big\|_{L^q}. \end{align*} To bound $\big\| \big( \mathbb{E}^S | A_{s,t} |^m \big)^{\frac{1}{m}} \big\|_{L^q}$, we apply Lemma~\ref{lem:1streg} to $\Xi = \psi_s$. As $f$ is smooth and bounded, the first assumption of Lemma~\ref{lem:1streg} is verified. 
By Lemma~\ref{lem:besov-spaces}$(i)$, $ \|f(\cdot + \psi_s) \|_{\mathcal{B}^{\beta}_{p}} \leq \| f\|_{\mathcal{B}^{\beta}_{p}}$, hence the second assumption of Lemma~\ref{lem:1streg} is verified. It follows by Lemma~\ref{lem:1streg} that \begin{align}\label{eq:Ast} \big\| \big( \mathbb{E}^S | A_{s,t} |^m \big)^{\frac{1}{m}} \big\|_{L^q} &\leq C\, \| \|f(\psi_s+\cdot) \|_{\mathcal{B}_p^{\beta}} \|_{L^q} \, (t-s)^{1+H(\beta-\frac{d}{p})}\ \nonumber\\ &\leq C\, \|f\|_{\mathcal{B}^{\beta}_p} \, (t-s)^{1+H(\beta-\frac{d}{p})}. \end{align} Then, we get \eqref{eq:3.5a}. \paragraph{Proof of \ref{item:3.5(b)}.} Assume that $[\psi]_{\mathcal{C}^{1/2+H}_{[S,T]}L^m}<\infty$, otherwise \eqref{eq:3.5b} trivially holds. In the last part of this proof, we will check that the conditions in order to apply the stochastic sewing Lemma with critical exponent \cite[Theorem 4.5]{athreya2020well} are verified. Namely, we will show that for some $\varepsilon \in (0,1)$ small enough (specified later), \eqref{sts1}, \eqref{sts2} and \eqref{sts4} hold true with $\varepsilon_1=H >0$, $\alpha_1=0$, $\varepsilon_2=\varepsilon/2>0$, $\alpha_2=0$ and $\Gamma_{4}=0$, % so that there exists a constant $C>0$ independent of $s,t,S$ and $T$ such that \begin{enumerate}[label=(\roman*$_{b}$)] \item\label{en:(1b)} $\|\mathbb{E}^{s} [\delta A_{s,u,t}]\|_{L^m}\leq C\, \| f \|_{\mathcal{B}_p^{\beta+1}}\, [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m}\, (t-s)^{1+H}$; \myitem{(i$^\prime_{b}$)}\label{en:(1'b)} $\| \mathbb{E}^s [\delta A_{s,u,t}]\|_{L^m}\leq C\, \|f\|_{\mathcal{B}_p^{\beta}}\, [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m}\, (t-s)$ ; \item\label{en:(2b)} $\| \delta A_{s,u,t} \|_{L^m} \leq C\, \| f \|_{\mathcal{B}_p^{\beta}} \Big( 1+ [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^m} \Big) (t-s)^{\frac{1}{2}+ \frac{\varepsilon}{2}}$; \item\label{en:(3b)} If \ref{en:(1b)} and \ref{en:(2b)} are satisfied, \eqref{sts3} gives the convergence in probability of $\sum_{i=0}^{N_k-1} A_{t^k_i,t^k_{i+1}}$ 
along any sequence of partitions $\Pi_k=\{t_i^k\}_{i=0}^{N_k}$ of $[S,t]$ with mesh converging to $0$. We will prove that the limit is the process $\mathcal{A}$ given in \eqref{eq:A}. \end{enumerate} Assume for now that \ref{en:(1b)}, \ref{en:(1'b)}, \ref{en:(2b)} and \ref{en:(3b)} hold. Applying \cite[Theorem 4.5]{athreya2020well}, we obtain that \begin{align*} \Big\| \int_{s}^{t} f(B_r+\psi_r) \, dr \Big\|_{L^m} &\leq C\, \|f\|_{\mathcal{B}_p^{\beta}}\, [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]}L^m} \Big(1+\left| \log\frac{\| f \|_{\mathcal{B}_p^{\beta+1}} t^{\varepsilon_1}}{\| f \|_{\mathcal{B}_p^{\beta}}} \right| \Big)\, (t-s) \\ & \quad + C\, \| f \|_{\mathcal{B}_p^{\beta}} \Big( 1+ [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^m} \Big) (t-s)^{\frac{1}{2} + \frac{\varepsilon}{2}} \\ & \quad + \| A_{s,t}\|_{L^m} . \end{align*} To bound $\| A_{s,t} \|_{L^m}$, we use again \eqref{eq:Ast} with $\beta-\frac{d}{p}=-\frac{1}{2H}$ to get $\|A_{s,t} \|_{L^m}\leq C \|f\|_{\mathcal{B}^{\beta}_p} (t-s)^{\frac{1}{2}}$. Hence we get \eqref{eq:3.5b}. ~ We now check that the conditions \ref{en:(1a)}, \ref{en:(2a)}, \ref{en:(3a)}, \ref{en:(1b)}, \ref{en:(1'b)}, \ref{en:(2b)} and \ref{en:(3b)} actually hold. \smallskip \paragraph{Proof of \ref{en:(1a)}, \ref{en:(1b)} and \ref{en:(1'b)}.} For $(s,t) \in \Delta_{S,T}$, we have $$\delta A_{s,u,t}= \int_u^{t} f(B_{r}+\psi_{s})-f(B_{r}+\psi_u) \, dr.$$ Hence, by the tower property of conditional expectation and Fubini's Theorem, we get \begin{align*} |\mathbb{E}^{s} \delta A_{s,u,t}|&= \Big|\mathbb{E}^{s} \int_u^{t} \mathbb{E}^u [f(B_{r}+\psi_{s})-f(B_{r}+\psi_u)] \, dr \Big|. 
\end{align*} Now using Lemma~\ref{lem:reg-B}$(ii)$ with the $\mathcal{F}_{u}$-measurable variable $\Xi=(\psi_{s},\psi_{u})$ and using again Fubini's Theorem, we obtain that for $\lambda\in [0,1]$, % \begin{align}\label{eq:LqBound(1a)} \Big\|\mathbb{E}^{s} \int_u^{t} \mathbb{E}^u [f(B_{r}+\psi_{s})-f(B_{r}+\psi_u)] \, dr \Big\|_{L^q} &\leq \int_u^{t} \| \mathbb{E}^{s} \| % f(\cdot+\psi_{s})-f(\cdot+\psi_u) \|_{\mathcal{B}_p^{\beta-\lambda}} \|_{L^q} \, (r-u)^{H(\beta-\lambda-\frac{d}{p})} \, dr \nonumber \\ &\leq C \|f\|_{\mathcal{B}_p^{\beta-\lambda+1}} \, \| \mathbb{E}^{s} |\psi_u-\psi_{s}|\|_{L^q} \int_u^{t} (r-u)^{H(\beta-\lambda-\frac{d}{p})} \, dr. \end{align} By the conditional Jensen inequality and \eqref{eq:defbracket} (recall that $m\leq q$), we have \begin{align}\label{eq:conditionalIncPsi} \left\|\mathbb{E}^{s} \left|\psi_u-\psi_{s} \right|\right\|_{L^q} \leq [\psi]_{\mathcal{C}_{[s, t]}^\tau L^{m, q}}\, (u-s)^\tau . \end{align} Then choosing $\lambda=1$ in \eqref{eq:LqBound(1a)}, we get \ref{en:(1a)}. In the critical case, let $\tau=1/2+H$. For $q=m$, we get from \eqref{eq:LqBound(1a)} and \eqref{eq:conditionalIncPsi} that \begin{align*} \| \mathbb{E}^{s} \delta A_{s,u,t} \|_{L^m} & \leq C\, \|f\|_{\mathcal{B}_p^{\beta-\lambda+1}} \, [\psi]_{\mathcal{C}^\tau_{[S,T]} L^m} \, (t-s)^{1+H(\beta-\lambda-\frac{d}{p})+\tau} . \end{align*} Choosing $\lambda=1$ in the previous inequality, we get \ref{en:(1'b)}, while choosing $\lambda=0$ yields \ref{en:(1b)}.
\paragraph{Proof of \ref{en:(2a)}.} We write \begin{align*} \left\| \big( \mathbb{E}^S |\delta A_{s,u,t} |^m \big)^{\frac{1}{m}} \right\|_{L^q } \leq \left\| \big( \mathbb{E}^S | A_{s,t} |^m \big)^{\frac{1}{m}} \right\|_{L^q} + \left\| \big( \mathbb{E}^S | A_{s,u} |^m \big)^{\frac{1}{m}} \right\|_{L^q } + \left\| \big( \mathbb{E}^S | A_{u,t} |^m \big)^{\frac{1}{m}} \right\|_{L^q } . \end{align*} Recall that we already obtained a bound on $\| ( \mathbb{E}^S | A_{s,t} |^m )^{1/m} \|_{L^q} $ in \eqref{eq:Ast}. We obtain similar bounds for $\| ( \mathbb{E}^S | A_{s,u} |^m )^{1/m} \|_{L^q}$ and $\| (\mathbb{E}^S | A_{u,t} |^m )^{1/m} \|_{L^q} $, which yields \begin{align*} \left\| \big( \mathbb{E}^S |\delta A_{s,u,t}|^m \big)^{\frac{1}{m}} \right\|_{L^q} &\leq C\, \| f \|_{\mathcal{B}_p^\beta} \Big( (t-s)^{1+H(\beta-\frac{d}{p})} +(u-s)^{1+H(\beta-\frac{d}{p})} +(t-u)^{1+H(\beta-\frac{d}{p})} \Big) \\ & \leq C\, \| f \|_{\mathcal{B}_p^\beta} (t-s)^{1+H(\beta-\frac{d}{p})} . \end{align*} \paragraph{Proof of \ref{en:(2b)}.} We choose $\varepsilon$ such that $\beta-\varepsilon > -1/2H$ and $\beta-\varepsilon-d/p > -1/H$. We now apply Lemma \ref{lem:1streg} with $\beta\equiv\beta-\varepsilon$ and $\Xi = (\psi_{s}, \psi_u)$. As $f$ is smooth and bounded, the first assumption of Lemma~\ref{lem:1streg} is verified. By Lemma~\ref{lem:besov-spaces}$(i)$, $ \|f(\cdot + \psi_{s})-f(\cdot + \psi_{u}) \|_{\mathcal{B}^{\beta-\varepsilon}_{p}} \leq 2\| f\|_{\mathcal{B}^{\beta-\varepsilon}_{p}}$, hence the second assumption of Lemma~\ref{lem:1streg} is verified.
It follows by Lemma~\ref{lem:1streg} and Lemma~\ref{lem:besov-spaces}$(ii)$ that \begin{align*} \| \delta A_{s,u,t} \|_{L^m} & \leq C\, \| \| f(\cdot + \psi_{s})-f(\cdot + \psi_{u}) \|_{\mathcal{B}_p^{\beta-\varepsilon}} \|_{L^m} (t-u)^{1+H(\beta-\varepsilon-\frac{d}{p})} \\ & \leq C\, \| f \|_{\mathcal{B}_p^{\beta}} \, \| |\psi_{s}-\psi_u |^{\varepsilon} \|_{L^m} (t-u)^{1+H(\beta-\varepsilon-\frac{d}{p})} . \end{align*} Hence by Jensen's inequality, \begin{align*} \| \delta A_{s,u,t} \|_{L^m} & \leq C\, \| f \|_{\mathcal{B}_p^{\beta}} \| \psi_{s}-\psi_u \|_{L^m}^\varepsilon (t-u)^{1+H(\beta-\varepsilon-\frac{d}{p})} \\ & \leq C\, \| f \|_{\mathcal{B}_p^{\beta}} [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^m}^\varepsilon (t-s)^{1+H(\beta-\frac{d}{p}) + \frac{\varepsilon}{2}} \\ & \leq C\, \| f \|_{\mathcal{B}_p^{\beta}} \Big( 1+ [\psi]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^m} \Big) (t-s)^{1+H(\beta-\frac{d}{p}) + \frac{\varepsilon}{2}} . \end{align*} \paragraph{Proof of \ref{en:(3a)} and \ref{en:(3b)}.} For a sequence $(\Pi_k)_{k \in \mathbb{N}}$ of partitions of $[S,t]$ with $\Pi_k=\{t_i^k\}_{i=1}^{N_k}$ and mesh size converging to zero, we have \begin{align*} \Big\|\mathcal{A}_{t}-\sum_{i=1}^{N_{k}-1} A_{t_i^k,t_{i+1}^k}\Big\|_{L^m} &\leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \| f(B_r + \psi_r)-f(B_r+\psi_{t_i^k})\|_{L^m} \, dr\\ &\leq \sum_{i=1}^{N_{k}-1} \int_{t_i^k}^{t_{i+1}^k} \|f\|_{\mathcal{C}^1}\|\psi_r-\psi_{t_i^k}\|_{L^m} \, dr\\ &\leq C\, \|f\|_{\mathcal{C}^1} \, |\Pi_k|^{\tau \wedge (\frac{1}{2}+H)}\, [\psi]_{\mathcal{C}^{\tau \wedge (\frac{1}{2}+H)}_{[S,t]}L^m} \, \, \underset{k \rightarrow \infty}{\longrightarrow} 0. \end{align*} \subsection{Proof of Proposition~\ref{prop:stability}}\label{app:stability} This proof is very close to the proof of \cite[Proposition 7.7]{anzeletti2021regularisation}, but we adapt it to dimension $d\geq 1$ for the reader's convenience. Assume w.l.o.g.
that \(X_0 = 0\) and let \(\hat{K}:=\hat{X}-\hat{B}\), so that \eqref{solution1} is automatically verified. Let $(\tilde{b}^n)_{n \in \mathbb{N}}$ be any sequence of smooth bounded functions converging to $b$ in $\mathcal{B}_p^{\gamma-}$. To verify that $\hat{K}$ and $\hat{X}$ satisfy \eqref{approximation2}, we have to show that \begin{align} \lim_{k \rightarrow \infty} \sup_{t \in [0,1]} \left|\int_0^t \tilde{b}^k(\hat{X}_r) \, dr-\hat{K}_t\right|=0 \text{ in probability}. \label{convergence} \end{align} By the triangle inequality we have that for $k,n \in \mathbb{N}$ and $t \in [0,1]$, \begin{align} \label{I1I2I3} \left|\int_0^t \tilde{b}^k(\hat{X}_r) \, dr-\hat{K}_t\right|\leq &\left|\int_0^t \tilde{b}^k(\hat{X}_r) \, dr-\int_0^t \tilde{b}^k(\hat{X}_r^n)\, dr\right|+\left|\int_0^t \tilde{b}^k(\hat{X}_r^n)\, dr-\int_0^t b^n(\hat{X}_r^n)\, dr\right|\nonumber\\ &+\left|\int_0^t b^n(\hat{X}_r^n)\, dr-\hat{K}_t\right|=:A_1+A_2+A_3. \end{align} Now we will show that all summands on the right hand side of \eqref{I1I2I3} converge to $0$ uniformly on $[0,1]$ in probability as $k \to \infty$, choosing $n=n(k)$ accordingly. First we bound $A_1$. Notice that \begin{align*} \left|\int_0^t \tilde{b}^k(\hat{X}_r) \, dr-\int_0^t \tilde{b}^k(\hat{X}_r^n)\, dr\right|&\leq \| \tilde{b}^k\|_{\mathcal{C}^1} \int_0^t |\hat{X}_r-\hat{X}_r^n| \, dr\\ &\leq \| \tilde{b}^k\|_{\mathcal{C}^1}\, \sup_{t \in [0,1]} |\hat{X}_t-\hat{X}_t^n |. \end{align*} For any $\varepsilon>0$, choose an increasing sequence $(n(k))_{k \in \mathbb{N}}$ such that \begin{align*} \mathbb{P}\Big(\| \tilde{b}^k \|_{\mathcal{C}^1}\, \sup_{t \in [0,1]} |\hat{X}_t-\hat{X}_t^{n(k)}| > \varepsilon\Big)< \frac{1}{k}, ~ \forall k \in \mathbb{N}. \end{align*} % Hence, we get that \begin{align*} \lim_{k \rightarrow \infty} \sup_{t\in [0,1]} \left|\int_0^t \tilde{b}^k(\hat{X}_r) \, dr-\int_0^t \tilde{b}^k(\hat{X}_r^{n(k)})\, dr\right|=0 \text{ in probability}. \end{align*} Now, we bound $A_2$. 
Let $\gamma^\prime<\gamma$ with $\gamma^\prime-d/p>1/2-1/(2H)$. By Lemma~\ref{lem:apriori2} applied to $\hat{X}^n$, $h=\tilde{b}^k-b^n$ and $\gamma^\prime$ instead of $\gamma$, there exists a random variable $Z_{n,k}$ such that \begin{align} \label{eq:expectation} \mathbb{E}[Z_{n,k}]&\leq C\, \| \tilde{b}^k-b^n\|_{\mathcal{B}_p^{\gamma^\prime}}(1+\|b^n\|^2_{\mathcal{B}_p^{\gamma^\prime}})\nonumber\\ & \leq C\, (\| \tilde{b}^k-b\|_{\mathcal{B}_p^{\gamma^\prime}}+\|b^n-b\|_{\mathcal{B}_p^{\gamma^\prime}}) \, (1+\sup_{m \in \mathbb{N}}\|b^m\|^2_{\mathcal{B}_p^{\gamma^\prime}}), \end{align} for $C$ independent of $k,n$, and such that \begin{align*} \sup_{t \in [0,1]}\left|\int_0^t \tilde{b}^k(\hat{X}_r^n)\, dr-\int_0^t b^n(\hat{X}_r^n)\, dr\right|\leq Z_{n,k}. \end{align*} Using Markov's inequality and \eqref{eq:expectation} we obtain that \begin{align*} \mathbb{P}&\left(\sup_{t \in [0,1]} \left|\int_0^t \tilde{b}^k(\hat{X}_r^n)\, dr-\int_0^t b^n(\hat{X}_r^n)\, dr\right|>\varepsilon\right)\leq \varepsilon^{-1}\, \mathbb{E}[Z_{n,k}] \\ &\qquad \qquad \leq C\, \varepsilon^{-1}\, (\| \tilde{b}^k-b\|_{\mathcal{B}_p^{\gamma^\prime}}+\|b^n-b\|_{\mathcal{B}_p^{\gamma^\prime}}) \, (1+\sup_{m \in \mathbb{N}}\|b^m\|^2_{\mathcal{B}_p^{\gamma^\prime}}). \end{align*} Choosing $n=n(k)$ as before, we get \begin{align*} \lim_{k \rightarrow \infty}\sup_{t \in [0,1]}\left|\int_0^t \tilde{b}^k(\hat{X}^{n(k)}_r) \, dr-\int_0^t b^{n(k)}(\hat{X}^{n(k)}_r) \, dr\right|=0 \text{ in probability}. \end{align*} To bound the last summand $A_3$, recall that $\hat{X}^n_t=\int_{0}^t b^n(\hat{X}^n_r) \, dr+\hat{B}^n_t$. We get that \begin{align*} \sup_{t \in [0,1]} \left|\int_0^t b^n(\hat{X}^n_r) \, dr-\hat{K}_t\right| \leq \sup_{t \in [0,1]}(|\hat{X}^n_t-\hat{X}_t|+|\hat{B}_t^n-\hat{B}_t|).
\end{align*} Since by assumption $(\hat{X}^n,\hat{B}^n)_{n \in \mathbb{N}}$ converges to $(\hat{X},\hat{B})$ on $(\mathcal{C}_{[0,1]})^2$ in probability, we get that \begin{align*} \lim_{k \rightarrow \infty} \sup_{t \in [0,1]} \left|\int_0^t b^{n(k)}(\hat{X}^{n(k)}_r)\, dr-\hat{K}_t\right|=0 \text{ in probability}, \end{align*} and therefore \eqref{convergence} holds true. It remains to show that \eqref{eq:regularity} holds true. By Lemma~\ref{lem:apriori1}, there exists $C>0$ such that for any $(s,t)\in \Delta_{0,1}$, \begin{align} \label{eq:regularity2} \left\| \big( \mathbb{E}^s | (\hat{X}_t^n-\hat{B}_t^n)-(\hat{X}_s^n-\hat{B}_s^n) |^m \big)^{\frac{1}{m}} \right\|_{L^\infty} \leq C \, (1 + \sup_{m \in \mathbb{N}}\|b^m\|^2_{\mathcal{B}_p^\gamma}) \, (t-s)^{1+H(\gamma-\frac{d}{p})}. \end{align} Using that $\int_0^t b^n(\hat{X}^n_r) \, dr$ converges to $\hat{K}_t$ on $\mathcal{C}_{[0,1]}$ in probability and that $\sup_{m \in \mathbb{N}}\|b^m\|_{\mathcal{B}_p^\gamma}$ is finite, we get \eqref{eq:regularity} by applying Fatou's Lemma to \eqref{eq:regularity2}. \section{Extension of the pathwise uniqueness result}\label{app:extend-uniqueness} In the regime $\gamma-d/p > 1-1/(2H)$, we extend the pathwise uniqueness result of Section \ref{subsec:StrongEx} to weak solutions $X$ that satisfy a weaker regularity condition than $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{2,\infty}} < \infty$. Let $m \ge 2$ and assume that \begin{align}\label{eq:weaker-reg} [X-B]_{\mathcal{C}^{H(1-\gamma+\frac{d}{p})+\eta}_{[0,1]} L^{m,\infty}} < \infty \end{align} for some $\eta \in (0,1)$. Our goal is to show that $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}} < \infty$. Of course if $\eta\geq H(\gamma-d/p)+1/2$, this is automatically true. Hence we assume $\eta<H(\gamma-d/p)+1/2$. Let $n \in \mathbb{N}$ and consider the process \begin{align*} X^n_t = X_0 + \int_0^t b^n (X^n_r) \, dr + B_t, \, \forall t \in [0,1] .
\end{align*} Applying Proposition \ref{prop:regfBm} with $f=b^n$, $\tau=1+H(\gamma-d/p)$ and $\psi=X_0 + X^n-B$, we get that there exists a constant $C$ such that for any $n \in \mathbb{N}$, and $(s,t) \in \Delta_{0,1}$, \begin{align*} \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n( \psi_r + B_{r}) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^\infty} & \leq C \| b \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})} \\ & \quad + C [X^{n}-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[s,t]} L^{m,\infty}} \| b \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})+\tau} , \end{align*} where we used that $\| b^n \|_{\mathcal{B}_p^\gamma} \leq \| b \|_{\mathcal{B}_p^\gamma}$. In particular, for $0 \leq S < T \leq 1$ and $(s,t) \in \Delta_{S,T}$, we have \begin{align*} \Big\| \Big( \mathbb{E}^s \Big| \int_s^t b^n( \psi_r + B_{r}) \, dr \Big|^m \Big)^{\frac{1}{m}} \Big\|_{L^\infty} & \leq C \| b \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})} \\ & \quad + C [X^{n}-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[S,T]} L^{m,\infty}} \| b \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})+\tau} . \end{align*} Now divide by $(t-s)^{1+H(\gamma-\frac{d}{p})}$ and take the supremum over $\Delta_{S,T}$ to get \begin{align}\label{eq:uniqueness-S-T} [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[S,T]} L^{m,\infty}} & \leq C \, \| b \|_{\mathcal{B}_p^\gamma} + C\, [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[S,T]} L^{m,\infty}} \| b \|_{\mathcal{B}_p^\gamma} (T-S)^{\tau-H} , \end{align} with $\tau-H>1/2$. For $(s,t) \in \Delta_{S,T}$, we have \begin{align*} \Big| \int_s^t b^n( \psi_r + B_{r}) \, dr \Big| \leq \| b^n \|_{\mathcal{B}_p^\gamma} (t-s) \leq \| b \|_{\mathcal{B}_p^\gamma} (t-s)^{1+H(\gamma-\frac{d}{p})} . \end{align*} Therefore, $[X^{n}-B]_{\mathcal{C}^{1+H(\gamma-d/p)}_{[S,T]} L^{m,\infty}} < \infty$. Let $\ell = \left( \frac{1}{2C \| b \|_{\mathcal{B}_p^\gamma}} \right)^{\frac{1}{1/2-H}}$.
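The choice of $\ell$ ensures that, for $T-S\leq \ell$, the prefactor of the seminorm on the right-hand side of \eqref{eq:uniqueness-S-T} is at most $1/2$, so the bound closes by a standard absorption argument. Writing $A := [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[S,T]} L^{m,\infty}}$ and $a := C\, \| b \|_{\mathcal{B}_p^\gamma}$ (our own shorthand), the elementary step is:

```latex
A \leq a + \tfrac{1}{2}\, A
\quad \text{and} \quad A < \infty
\qquad \Longrightarrow \qquad
A \leq 2a .
```

The finiteness of $A$, established just above, is essential here: without it, the inequality $A \leq a + \tfrac{1}{2} A$ carries no information.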
Then for $T-S \leq \ell$, \eqref{eq:uniqueness-S-T} implies that \begin{align*} [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[S,T]} L^{m,\infty}} & \leq C . \end{align*} Since $$[X^{n}-B]_{\mathcal{C}^{1+H(\gamma-d/p)}_{[0,1]} L^{m,\infty}} \leq \sum_{k=0}^{\lfloor \frac{1}{\ell} \rfloor} [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-d/p)}_{[k \ell,(k+1) \ell]} L^{m,\infty}},$$ and $\ell$ does not depend on $n$, we conclude that \begin{align}\label{eq:holdereg-Xn} \sup_{n\in \mathbb{N}} [X^{n}-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[0,1]} L^{m,\infty}} & <\infty . \end{align} We now wish to take the limit as $n$ goes to infinity in the previous inequality. Define ${K^n := \int_0^\cdot b^n(X_r) \, dr}$ and write \begin{align}\label{eq:firsterrordecomp} [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m } \leq [K - K^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m } + [E^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m } , \end{align} where $K$ is defined in Definition \ref{def:sol-SDE} and for all $s < t$, \begin{align*} E^n_{s,t} & := K^n_t-K^n_s - (X^n_t-B_t - X^n_s+ B_s) = \int_s^t b^n(X_0+K_r+B_r) - b^n(X^n_r-B_r + B_r)\, dr . \end{align*} \paragraph{Bound on $K-K^n$.} For $k \in \mathbb{N}$, we aim to apply Proposition \ref{prop:regfBm}\ref{item:3.5(a)} with $f=b^{k}-b^n$, $\tau=H(1-\gamma+d/p)+\eta$, $\beta=\gamma-\eta$ and $\psi=X-B$. Let us check the assumptions: Since $\eta<1$, we have $\gamma-\eta-d/p>-1/(2H)$; then $\tau$ is clearly positive and $\tau<H+1/2<1$ because we assumed $\eta<H(\gamma-d/p)+1/2$; finally we have $\tau+ H(\gamma-\eta-d/p-1) = \eta(1-H)>0$. In addition, we assumed $[X-B]_{\mathcal{C}^{\tau}_{[0,1]} L^{m,\infty}} < \infty$ in \eqref{eq:weaker-reg}, thus Proposition \ref{prop:regfBm}\ref{item:3.5(a)} yields that for any $(s,t) \in \Delta_{S,T}$, \begin{align*} \|K_t^{k}-K_s^{k} - K_{t}^n + K_s^{n} \|_{L^m} & \leq C \| b^{k} - b^n \|_{\mathcal{B}_p^{\gamma-\eta}} (t-s)^{1+H(\gamma-\eta-\frac{d}{p})} . 
\end{align*} Hence $(K^{k}_t-K_s^{k})_{k \in \mathbb{N}}$ is a Cauchy sequence in $L^m(\Omega)$ and therefore it converges. We also know by definition of $X$ that $K^{k}_t-K_s^{k}$ converges in probability to $K_t-K_s$. Thus $K^{k}_t-K_s^{k}$ converges in $L^m$ to $K_t-K_s$. Now by the convergence of $b^{k}$ to $b$ in $\mathcal{B}_p^{\gamma-\eta}$, we get \begin{align*} \|K_t-K_s - K_{t}^n + K_s^{n} \|_{L^m} & \leq C \| b- b^n \|_{\mathcal{B}_p^{\gamma-\eta}} \, (t-s)^{1+H(\gamma-\eta-\frac{d}{p})} . \end{align*} Dividing by $(t-s)^{\frac{1}{2}}$ and taking the supremum over $(s,t)$ in $\Delta_{S,T}$ (recall that $\frac{1}{2} + H(\gamma-\eta-d/p) \ge 0$), we get that \begin{align}\label{eq:K-Kn} [K-K^{n}]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} & \leq C \| b - b^n \|_{\mathcal{B}_p^{\gamma-\eta}}. \end{align} \paragraph{Bound on $E^{n}$.} We apply Proposition \ref{prop:bound-E1-SDE} with $\psi = X^n-B$, $\phi=X_0+K$, $f=b^n$ and $\tau=\frac{1}{2}$ to get \begin{align*} & \Big\| \int_s^t b^n(X_0+K_r+B_r) - b^n(X^n_r-B_r + B_r) \, dr \Big\|_{L^m} \\ & \quad \leq C\, \|b^n\|_{\mathcal{B}^\gamma_{p}} (1+[X^n-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]} L^m} ) \Big( [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} + \| X_S-X^n_S \|_{L^m} \Big) (t-s)^{1+H(\gamma-1-\frac{d}{p})} . \end{align*} Now divide by $(t-s)^{\frac{1}{2}}$ and take the supremum over $\Delta_{S,T}$ to get \begin{align}\label{eq:bound-En} [E^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} \leq C (1+[X^n-B]_{\mathcal{C}^{\frac{1}{2}+H}_{[0,1]} L^m} ) \Big( [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} + \| X_S-X^n_S \|_{L^m} \Big) (T-S)^{\frac{1}{2}+H(\gamma-1-\frac{d}{p})} . 
\end{align} \smallskip Injecting \eqref{eq:holdereg-Xn}, \eqref{eq:K-Kn} and \eqref{eq:bound-En} into \eqref{eq:firsterrordecomp}, we get \begin{align*} [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} \leq C \| b - b^n \|_{\mathcal{B}_p^{\gamma-\eta}} + C \Big( [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^m} + \| X_S-X^n_S \|_{L^m} \Big) (T-S)^{\frac{1}{2}+H(\gamma-1-\frac{d}{p})} . \end{align*} Hence for $T-S\leq (2C)^{-1/(1/2+H(\gamma-1-d/p))} =: \ell_{0}$, we get \begin{align}\label{eq:diffXXn} [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S,T]} L^{m}} \leq 2 C \Big( \| b - b^n \|_{\mathcal{B}_p^{\gamma-\eta}}+ \| X_S-X^n_{S} \|_{L^m} \Big) . \end{align} Then the inequality \begin{align*} \|X_{S}-X^n_{S} \|_{L^m} \leq \|X_{S-\ell_{0}}-X_{S-\ell_{0}}^{n}\|_{L^m} + \ell_0^{\frac{1}{2}} [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[S-\ell_{0},S]} L^{m}} \end{align*} can be plugged into \eqref{eq:diffXXn} and iterated until $S-k\ell_{0}$ is smaller than $0$ for $k\in \mathbb{N}$ large enough. It follows that \begin{align*} [X-X^n]_{\mathcal{C}^{\frac{1}{2}}_{[0,1]} L^{m}} \leq C \| b - b^n \|_{\mathcal{B}_p^{\gamma-\eta}} . \end{align*} Recall that $b^n$ converges to $b$ in $\mathcal{B}_p^{\gamma-\eta}$ by \eqref{def:conv-gamma-}. Hence, $X^n$ converges uniformly (in $L^m(\Omega)$) to $X$. Taking the limit as $n$ goes to infinity in \eqref{eq:holdereg-Xn}, we have shown that for any $\eta \in (0,1)$ and $m \ge 2$ \begin{align*} [X-B]_{\mathcal{C}^{H(1-\gamma+\frac{d}{p})+\eta}_{[0,1]} L^{m,\infty}} < \infty ~~\Rightarrow~~ [X-B]_{\mathcal{C}^{1+H(\gamma-\frac{d}{p})}_{[0,1]} L^{m,\infty}} < \infty . \end{align*} Since $1+H(\gamma-d/p) > 1/2+H$, we also have $[X-B]_{\mathcal{C}^{1/2+H}_{[0,1]} L^{m,\infty}} < \infty$. It follows that pathwise uniqueness holds in the class of weak solutions that satisfy \eqref{eq:weaker-reg}. \end{appendices}
\section{Introduction} % A bipartite network is defined as having two types of nodes, with edges allowed only between nodes of different types. For instance, a network in which edges connect people with the foods they eat is bipartite, as are other networks of associations between two classes of objects. Recent applications of bipartite networks include studies of plants and the pollinators that visit them~\cite{youngReconstructionPlantPollinator2019a}, stock portfolios and the assets they comprise~\cite{squartiniEnhancedCapitalassetPricing2017}, and even U.S. Supreme Court justices and the cases they vote on~\cite{guimeraJusticeBlocksPredictability2011}. More abstractly, bipartite networks also provide an alternative representation for hypergraphs in which the two types of nodes represent the hypergraph's nodes and its hyperedges, respectively~\cite{ghoshalRandomHypergraphsTheir2009,chodrowConfigurationModelsRandom2019}. Many networks exhibit community structure, meaning that their nodes can be divided into groups such that the nodes within each group connect to other nodes in other groups in statistically similar ways. Bipartite networks are no exception, but they exhibit a particular form of community structure because type-I nodes are defined by how they connect to type-II nodes, and vice versa. For example, in the bipartite network of people and the foods they eat, vegetarians belong to a group of nodes which are defined by the fact that they never connect to nodes in the group of meat-containing foods; meat-containing foods are defined by the fact that they never connect to vegetarians. While the group structure in this example comes from existing node categories, one can also ask whether statistically meaningful groups could be derived solely from the patterns of the edges themselves. This problem, typically called {\it community detection}, is the unsupervised task of partitioning the nodes of a network into statistically meaningful groups. 
In this paper, we focus on the community detection problem in bipartite networks. There are many ways to find community structure in bipartite networks, including both general methods---which can be applied to any network---and specialized methods derived specifically for bipartite networks. We focus on a family of models related to the stochastic blockmodel (SBM), a generative model for community structure in networks~\cite{hollandStochasticBlockmodelsFirst1983}. Since one of the SBM's parameters is a division of the nodes into groups, community detection with the SBM simply requires a method to fit the model to network data. With inference methods becoming increasingly sophisticated~\cite{peixotoBayesianStochasticBlockmodeling2018}, many variants of the SBM have been proposed, including those that accommodate overlapping communities~\cite{airoldiMixedMembershipStochastic2008,godoy-loriteAccurateScalableSocial2016}, broad degree distributions~\cite{karrerStochasticBlockmodelsCommunity2011}, multilayer networks~\cite{tarres-deulofeuTensorialBipartiteBlock2019}, hierarchical community structures~\cite{peixotoHierarchicalBlockStructures2014}, and networks with metadata~\cite{hricNetworkStructureMetadata2016,newmanStructureInferenceAnnotated2016,peelGroundTruthMetadata2017}. SBMs have also been used to estimate network structure or related observational data even if the measurement process is incomplete and erroneous~\cite{youngReconstructionPlantPollinator2019a,newmanEstimatingNetworkStructure2018,newmanNetworkStructureRich2018,peixotoReconstructingNetworksUnknown2018}. In fact, a broader class of so-called mesoscale structural inference problems, like core-periphery identification and imperfect graph coloring, can also be solved using formulations of the SBM, making it a universal representation for a broad class of problems~\cite{youngUniversalityStochasticBlock2018,olhedeNetworkHistogramsUniversality2014}. 
At first glance, the existing SBM framework is readily applicable to bipartite networks. This is because, at a high level, the two types of nodes should correspond naturally to two blocks with zero edges within each block, implying that SBMs should detect the bipartite split without that split being explicitly provided. However, past work has shown that providing node type information {\it a priori} improves both the quality of partitions and the time it takes to find them~\cite{larremoreEfficientlyInferringCommunity2014}. Unfortunately, those results, which relied on local search algorithms to maximize model likelihood~\cite{karrerStochasticBlockmodelsCommunity2011,larremoreEfficientlyInferringCommunity2014}, have been superseded by more recent results which show that fitting fully Bayesian SBMs using Markov chain Monte Carlo can find structures in a more efficient and non-parametric manner~\cite{peixotoNonparametricBayesianInference2017, rioloEfficientMethodEstimating2017, peixotoBayesianStochasticBlockmodeling2018}. These methods maximize a posterior probability, producing results similar to traditional cross-validation by link prediction in many (but not all) cases~\cite{kawamotoCrossvalidationEstimateNumber2017a,valles-catalaConsistenciesInconsistenciesModel2018}. In this sense, they avoid overfitting the data, i.e., they avoid finding a large number of communities whose predictions fail to generalize. This raises the question of whether the more sophisticated Bayesian SBM methods gain anything from being customized for bipartite networks, like the previous generation of likelihood-based methods did~\cite{larremoreEfficientlyInferringCommunity2014}. In this paper, we begin by introducing a non-parametric Bayesian bipartite SBM (biSBM) and show that bipartite-specific adjustments to the prior distributions improve the resolution of community detection by a factor of $\sqrt{2}$, compared with the general SBM~\cite{peixotoParsimoniousModuleInference2013}.
As with the general SBM, the biSBM automatically chooses the number of communities and controls model complexity by maximizing the posterior probability. After introducing a bipartite model, we also present an algorithm, designed specifically for bipartite data, that efficiently fits the model to data. Importantly, this algorithm can be applied to both the biSBM and its general counterpart, allowing us to isolate both the effects of our bipartite prior distributions and the effects of the search algorithm itself. As in the maximum likelihood case~\cite{larremoreEfficientlyInferringCommunity2014}, the ability to customize the search algorithm for bipartite data provides both improved community detection results and a more sophisticated understanding of the solution landscape, but unlike that previous work, this algorithm does more than simply require that blocks consist of only one type of node. Instead, the algorithm explores a two-dimensional landscape of model complexity, parameterized by the number of type-I blocks and the number of type-II blocks. This contributes to the growing body of work that explores the solution space of community detection models, including methods to sample the entire posterior~\cite{rioloEfficientMethodEstimating2017}, count the number of metastable states~\cite{kawamotoCountingNumberMetastable2019b}, and determine the number of solution samples required to describe the landscape adequately~\cite{calatayudExploringSolutionLandscape2019a}. In the following sections, we introduce a degree-corrected version of the bipartite SBM~\cite{larremoreEfficientlyInferringCommunity2014}, which combines and extends two recent advances.
Specifically, we recast the bipartite SBM~\cite{larremoreEfficientlyInferringCommunity2014} in a {\it microcanonical} and Bayesian framework~\cite{peixotoNonparametricBayesianInference2017} by assuming that the number of edges between groups and the degree sequence are fixed exactly, instead of only in expectation. We then derive its likelihood, introduce prior distributions that are bipartite-specific, and describe an algorithm to efficiently fit the combined nonparametric Bayesian model to data. We then demonstrate the impacts of both the bipartite priors and the algorithm in synthetic and real-world examples, and explore their effect on the maximum number of communities that our method can find, i.e., its resolution limit, before discussing the broader implications of this work. % \section{The microcanonical bipartite SBM} \label{sec:micro_bisbm} % Consider a bipartite network with $N_{\RomanNumeralCaps{1}}$ nodes of type $\RomanNumeralCaps{1}$ and $N_{\RomanNumeralCaps{2}}$ nodes of type $\RomanNumeralCaps{2}$. The type-$\RomanNumeralCaps{1}$ nodes are divided into $B_{\RomanNumeralCaps{1}}$ blocks and the type-$\RomanNumeralCaps{2}$ nodes are divided into $B_{\RomanNumeralCaps{2}}$ blocks. Let $N = N_{\RomanNumeralCaps{1}} + N_{\RomanNumeralCaps{2}}$ and $B = B_{\RomanNumeralCaps{1}} + B_{\RomanNumeralCaps{2}}$. Rather than indexing different types of nodes separately, we index the nodes by $i = 1, 2, \dots, N$ and denote the block assignment of node $i$ by $b_i \in \lbrace 1, 2, \dots, B \rbrace$. A key feature of the biSBM is that each block consists of only one type of node. Having divided nodes into blocks, we can now write down the propensities for nodes in each block to connect to nodes in the other blocks. Let $e_{rs}$ be the total number of edges between blocks $r$ and $s$. Then, let $k_i$ be the degree of node $i$.
Together, ${\textbf{\textit{e}}} = \lbrace e_{rs} \rbrace$ and ${\textbf{\textit{k}}} = \lbrace k_i \rbrace$ specify the degrees of each node and the patterns by which edges are placed between blocks. The number of edges attached to a group $r$ must be equal to the sum of its degrees, such that $e_r = \sum_{s} e_{rs} = \sum_{b_i = r} k_i$ for any $r$. For bipartite networks, $e_{rr} = 0$ for all $r$. We use $n_r$ to denote the number of nodes in block $r$. Given the parameters above, one can generate a network by placing edges that satisfy the constraints imposed by ${\textbf{\textit{e}}}$ and ${\textbf{\textit{k}}}$. However, that network would be just one of an ensemble of potentially many networks, all of which satisfy the constraints, analogous to the configuration model~\cite{bollobasProbabilisticProofAsymptotic1980,fosdickConfiguringRandomGraph2018}. Peixoto showed how to count the number of networks in this ensemble~\cite{peixotoEntropyStochasticBlockmodel2012}, so that for a uniform distribution over that ensemble, the likelihood of observing any particular network is simply the inverse of the ensemble size. This means that, given ${\textbf{\textit{e}}}$, ${\textbf{\textit{k}}}$, and the group assignments ${\textbf{\textit{b}}} = \lbrace b_i \rbrace$, computing the size of the ensemble $\|\Omega\left({{\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}}\right) \|$ is tantamount to computing the likelihood of drawing a network with adjacency matrix ${\textbf{\textit{A}}}$ from the model, $P({\textbf{\textit{A}}} \mid {\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}) = \|\Omega\left({{\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}}\right) \|^{-1}$. Thus, treating networks as equiprobable microstates in a microcanonical ensemble leads to the microcanonical stochastic blockmodel, whose bipartite version we now develop, specifically to find communities in real-world bipartite networks. 
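As a quick sanity check, these bookkeeping constraints can be verified mechanically from an edge list and a partition. The toy network and variable names below are our own illustration, not data from the paper:

```python
from collections import Counter

# Hypothetical toy network: type-I nodes {0, 1, 2}, type-II nodes {3, 4}.
edges = [(0, 3), (0, 4), (1, 3), (2, 4)]
b = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}      # block assignments b_i (one type per block)

k = Counter()                            # degree sequence k = {k_i}
e = Counter()                            # inter-block edge counts e = {e_rs}, keyed r < s
for i, j in edges:
    k[i] += 1
    k[j] += 1
    e[tuple(sorted((b[i], b[j])))] += 1

# e_r = sum_s e_rs must equal the summed degrees of block r ...
for r in set(b.values()):
    e_r = sum(m for pair, m in e.items() if r in pair)
    assert e_r == sum(k[i] for i in b if b[i] == r)

# ... and bipartiteness forces e_rr = 0: no within-block pair ever appears.
assert all(r != s for (r, s) in e)
```

Here block $0$ receives $e_0 = 4$ edge endpoints, matching its summed degrees $2+1+1$.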
This derivation follows directly from combining the bipartite formulation of the SBM~\cite{larremoreEfficientlyInferringCommunity2014} with the microstate counting developed in~\cite{peixotoEntropyStochasticBlockmodel2012}. We introduce a new algorithm to fit the model in Sec.~\ref{sec:fitting_algm}. \section{Nonparametric Bayesian SBM for Bipartite Networks} \label{sec:nonparametric} % We first formulate the community detection problem as a {\it parametric} inference procedure. The biSBM is parameterized by a partition of nodes into blocks ${\textbf{\textit{b}}}$, the number of edges between blocks ${\textbf{\textit{e}}}$, and the number of edges for each node, ${\textbf{\textit{k}}}$. However, for empirical networks, we need only search the space of partitions ${\textbf{\textit{b}}}$. This is because the microcanonical model specifies the degree sequence ${\textbf{\textit{k}}}$ exactly, so the only way that an empirical network can be found in the microcanonical ensemble is if the parameter ${\textbf{\textit{k}}}$ is equal to the empirically observed degree sequence. Note that, when ${\textbf{\textit{k}}}$ and ${\textbf{\textit{b}}}$ are both specified, ${\textbf{\textit{e}}}$ is also exactly specified. As a consequence, community detection requires only a search over partitions of the nodes into blocks ${\textbf{\textit{b}}}$. In the absence of constraints on ${\textbf{\textit{b}}}$, the maximum likelihood solution is simply for the model to memorize the data, placing each node into its own group and letting $\hat{{\textbf{\textit{e}}}} = {\textbf{\textit{A}}}$. To counteract this tendency to dramatically overfit, we adapt the Bayesian nonparametric framework of~\cite{peixotoNonparametricBayesianInference2017}, where the number of groups and other model parameters are determined from the data, and customize this framework for the situation in which the data are bipartite. 
We start by factorizing the joint distribution for the data and the parameters in this form, \begin{equation} P({\textbf{\textit{A}}}, {\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}) = P({\textbf{\textit{A}}}\mid {\textbf{\textit{k}}},{\textbf{\textit{e}}},{\textbf{\textit{b}}}) P({\textbf{\textit{k}}}\mid {\textbf{\textit{e}}},{\textbf{\textit{b}}}) P({\textbf{\textit{e}}} \mid {\textbf{\textit{b}}}) P({\textbf{\textit{b}}}), \label{eq:joint_probability} \end{equation} where $P({\textbf{\textit{k}}} | {\textbf{\textit{e}}}, {\textbf{\textit{b}}})$, $P({\textbf{\textit{e}}} | {\textbf{\textit{b}}})$, and $P( {\textbf{\textit{b}}} )$ are prior probabilities that we will specify in later subsections. Thus, Eq.~\eqref{eq:joint_probability} defines a complete generative model for data and parameters. The Bayesian formulation of the SBM is a powerful approach to community detection because it enables model comparison, meaning that we can use it to choose between different model classes (e.g., hierarchical vs flat) or to choose between parameterizations of the same model (e.g., to choose the number of communities). Two approaches to model comparison, producing equivalent formulations of the problem, are useful. The first formulation is that of simply maximizing Eq.~\eqref{eq:joint_probability}, taking the view that the model which maximizes the joint probability of the model and data is statistically the most justified. The second formulation is that of minimizing the so-called {\it description length}~\cite{rissanenInformationComplexityStatistical2007}, which has a variety of interpretations (for reviews and updates, see~\cite{grunwaldMinimumDescriptionLength2007,grunwaldMinimumDescriptionLength2019}). Perhaps the most useful interpretation for our purposes is that of compression, which takes the view that the best model is the one that allows us to most compress the data, while accounting for the cost to describe the model itself.
In this phrasing, for a model class $M$, the description length $\Sigma_{M}\left( {\textbf{\textit{A}}}, {\textbf{\textit{b}}} \right)$ is given by $\Sigma_{M}\left( {\textbf{\textit{A}}}, {\textbf{\textit{b}}} \right) = -\ln P\left({\textbf{\textit{A}}} | {\textbf{\textit{b}}}, M \right) - \ln P\left({\textbf{\textit{b}}} | M\right)$. These two terms can be interpreted as the description cost of compressing the data ${\textbf{\textit{A}}}$ using the model and the cost of expressing the model itself, respectively. Therefore, the minimum description length (MDL) approach can be interpreted as optimizing the tradeoff between better-fitting but larger models and worse-fitting but smaller ones. Asymptotically, MDL is equivalent to the Bayesian Information Criterion (BIC)~\cite{schwarzEstimatingDimensionModel1978} for stochastic blockmodels under compatible prior assumptions~\cite{peixotoHierarchicalBlockStructures2014,yanModelSelectionDegreecorrected2014}. A complete and explicit formulation of model comparison will be provided in the context of our studies of empirical data in Sec.~\ref{sec:empirical}, using strict MDL approaches. For now, we proceed with calculating the likelihood and prior probabilities for the microcanonical biSBM and its parameters. % \subsection{Likelihood for microcanonical bipartite SBM} % The observed network ${\textbf{\textit{A}}}$ is just one of $\|\Omega\left({{\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}}\right) \|$ networks in the microcanonical ensemble which match $\lbrace {\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}} \rbrace$ exactly. Assuming that each configuration in the network ensemble is equiprobable, computing the likelihood is equivalent to taking the inverse of the size of the ensemble.
We compute the size of the ensemble by counting the number of networks that match the desired block structure $\Omega({\textbf{\textit{e}}})$ and dividing by the number of equivalent network configurations without block structure $\Xi({\textbf{\textit{A}}})$, yielding, \begin{equation} P_{\text{bi}}\left( {\textbf{\textit{A}}} \mid {\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}} \right) = \|\Omega\left({{\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}}}\right) \|^{-1} \equiv \frac{\Xi\left( {\textbf{\textit{A}}} \right)}{\Omega\left( {\textbf{\textit{e}}}\right)} \ . \label{eq:sbm_likelihood} \end{equation} As detailed in Ref.~\cite{peixotoNonparametricBayesianInference2017}, the number of networks that obey the desired block structure determined by ${\textbf{\textit{e}}}$ is given by, \begin{equation} \Omega \left( {\textbf{\textit{e}}}\right) = \frac{\prod_r{e_r!}}{\prod_{r < s} {e_{rs}!}} \ . \end{equation} This counting scheme assumes that half-edges are distinguishable. In other words, it differentiates between permutations of the neighbors of the same node, which are all equivalent (i.e., correspond to the same adjacency matrix). To discount equivalent permutations of neighbors, we count the number of half-edge pairings that correspond to the bipartite adjacency matrix ${\textbf{\textit{A}}}$, \begin{equation} \Xi \left( {\textbf{\textit{A}}} \right) = \frac{\prod_i k_{i} !}{\prod_{i<j} A_{ij}!} \ . \end{equation} Note that while self-loops are forbidden, this formulation allows the possibility of multiedges. % \subsection{Prior for the degrees} % The prior for the degree sequence follows directly from Ref.~\cite{peixotoNonparametricBayesianInference2017} because ${\textbf{\textit{k}}}$ is conditioned on ${\textbf{\textit{e}}}$ and ${\textbf{\textit{b}}}$, which are bipartite. 
The intermediate degree distribution $\bm{\eta} = \lbrace \eta_k^r \rbrace$, with $\eta_k^r$ being the number of nodes with degree $k$ that belong to group $r$, further factorizes the conditional dependency. This allows us to write \begin{equation} P\left({\textbf{\textit{k}}}\mid {\textbf{\textit{e}}}, {\textbf{\textit{b}}} \right) = P\left({\textbf{\textit{k}}}\mid \bm{\eta} \right) P\left(\bm{\eta} \mid {\textbf{\textit{e}}},{\textbf{\textit{b}}} \right) \ , \label{eq:prior_deg} \end{equation} where \begin{equation} P\left({\textbf{\textit{k}}}\mid \bm{\eta} \right) = \prod_r \frac{\prod_k \eta_k^r!}{n_r!} \end{equation} is a uniform distribution of degree sequences constrained by the overall degree counts, and \begin{equation} P\left(\bm{\eta} \mid {\textbf{\textit{e}}},{\textbf{\textit{b}}} \right) = \prod_r q(e_r, n_r)^{-1} \label{eq:eta_hyperprior} \end{equation} is the distribution of the overall degree counts. The quantity $q\left( m, n\right)$ is the number of restricted partitions of the integer $m$ into at most $n$ parts~\cite{andrewsTheoryPartitions1998}. It can be computed via the following recurrence relation, \begin{equation} \label{eqn:int_part_exact} q\left( m, n\right) = q\left( m, n-1\right) + q\left(m-n, n\right), \end{equation} with boundary conditions $q\left(0, n\right) = 1$ for $n \geq 0$, and $q\left(m, n\right) = 0$ for $m < 0$, or for $m > 0$ and $n \leq 0$ (so that, in particular, $q\left( m, 1\right) = 1$ for $m \geq 0$). With this, computing $q\left( m, n\right)$ for $m \leq M$ and $n \leq m$ requires $\mathcal{O}( M^2)$ additions of integers. In practice, we precompute $q(m, n)$ using the exact Eq.~\eqref{eqn:int_part_exact} for $m \leq 10^4$ (or $m \leq E$ when the network is smaller), and resort to approximations~\cite{peixotoNonparametricBayesianInference2017} only for larger arguments. For sufficiently many nodes in each group, the hyperprior Eq.~\eqref{eq:eta_hyperprior} will be overwhelmed by the likelihood, and the distribution of Eq.~\eqref{eq:prior_deg} will concentrate on the actual degree sequence.
In such cases, the prior and hyperprior naturally learn the true degree distribution, making them applicable to the heterogeneous degrees present in real-world networks. % \subsection{Prior for the node partition} % The prior for the partitions ${\textbf{\textit{b}}}$ also follows Ref.~\cite{peixotoNonparametricBayesianInference2017} in its general outline, but the details require modification for bipartite networks. We write the prior for ${\textbf{\textit{b}}}$ as the following Bayesian hierarchy \begin{equation} P_{\text{bi}}\left( {\textbf{\textit{b}}} \right) = P\left( {\textbf{\textit{b}}}\mid{\textbf{\textit{n}}} \right) P\left( {\textbf{\textit{n}}}\mid B \right)P\left( B \right) \ , \label{eq:partitionprior} \end{equation} where ${\textbf{\textit{n}}} = \lbrace n_r \rbrace$ is the vector of group sizes, i.e., $n_r$ is the number of nodes in group $r$. We then assume that this prior can be factorized into independent priors for the partitions of each type of node, i.e., $P_{\text{bi}}\left( {\textbf{\textit{b}}} \right) = P\left( {\textbf{\textit{b}}}_{\RomanNumeralCaps{1}} \right) P\left( {\textbf{\textit{b}}}_{\RomanNumeralCaps{2}} \right)$. This allows us to treat the terms of Eq.~\eqref{eq:partitionprior} as \begin{equation} P\left( {\textbf{\textit{b}}}\mid {\textbf{\textit{n}}} \right) = \left ( \frac{\prod_{\substack{\text{type-I}\\ \text{groups } r}} n_r!}{N_{\RomanNumeralCaps{1}}!} \right ) \left ( \frac{\prod_{\substack{\text{type-II}\\ \text{groups } s}} n_s!}{N_{\RomanNumeralCaps{2}}!} \right ) \ , \label{eq:partitionprior_1} \end{equation} \begin{equation} P\left( {\textbf{\textit{n}}}\mid B \right) = \binom{N_{\RomanNumeralCaps{1}} - 1}{B_{\RomanNumeralCaps{1}} - 1}^{-1} \binom{N_{\RomanNumeralCaps{2}} - 1}{B_{\RomanNumeralCaps{2}} - 1}^{-1}\ , \label{eq:partitionprior_2} \end{equation} and \begin{equation} P\left(B\right) = N_{\RomanNumeralCaps{1}}^{-1}N_{\RomanNumeralCaps{2}}^{-1}\ .
\label{eq:partitionprior_3} \end{equation} Equation~\eqref{eq:partitionprior_2} is a uniform hyperprior over all such histograms on the node counts~${\textbf{\textit{n}}}$, while Eq.~\eqref{eq:partitionprior_3} is a prior for the number of nonempty groups itself. This Bayesian hierarchy over partitions accommodates heterogeneous group sizes, allowing it to model the group sizes possible in real-world networks. % \subsection{Prior for the bipartite edge counts} % We now introduce the prior for edge counts between groups, ${\textbf{\textit{e}}}$, which also requires modification for bipartite networks. While the edge count prior for general networks is parameterized by the number of groups $B$, the analogous prior for bipartite networks is parameterized by $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$. We therefore modify the counting scheme of Ref.~\cite{peixotoNonparametricBayesianInference2017}, written for general networks, to avoid counting non-bipartite partitions that place edges between nodes of the same type. Our prior for edge counts between groups is therefore \begin{equation} P_{\text{bi}}\left( {\textbf{\textit{e}}} \mid {\textbf{\textit{b}}} \right) = \multiset{B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}}{E}^{-1}\ , \label{eq:uniform_bipartite_prior} \end{equation} where $B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}$ counts the number of group-to-group combinations when edges are allowed only between type-$\RomanNumeralCaps{1}$ and type-$\RomanNumeralCaps{2}$ nodes. The notation $\textmultiset{B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}}{E} = \binom{B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}+E-1}{E}$ counts the number of histograms with $B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}$ bins whose counts sum to~$E$. 
Similar to the uniform prior for general networks~\cite{peixotoNonparametricBayesianInference2017}, it is unbiased and maximally non-informative, but by neglecting mixed-type partitions, this prior results in a more parsimonious description. In later sections, we show that this modified formulation enables the detection of smaller blocks, improving the so-called resolution limit, by reducing model complexity for larger $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$. % \subsection{Model summary} % Having fully specified the priors in previous subsections, we now substitute our calculations into Eq.~\eqref{eq:joint_probability}, the joint distribution for the biSBM, yielding, \begin{widetext} \begin{align} P_{\text{bi}}\left({\textbf{\textit{A}}},\! {\textbf{\textit{k}}},\! {\textbf{\textit{e}}},\! {\textbf{\textit{b}}}\right) \!=\! \frac{\prod_i{k_i!}\prod_{r<s}{e_{rs}!}}{\prod_r{e_{r}!}\prod_{i<j}{A_{ij}!}} \prod_{r}{\frac{\prod_k{\eta_k^r!}}{n_r!}\frac{1}{q\left(e_r, n_r\right)}} \multiset{B_{\RomanNumeralCaps{1}} B_{\RomanNumeralCaps{2}}}{E}^{\!-1} \frac{\prod_r{n_r!}}{N_{\RomanNumeralCaps{1}}! N_{\RomanNumeralCaps{2}}!} \binom{N_{\RomanNumeralCaps{1}}\!-\!1}{B_{\RomanNumeralCaps{1}}\!-\!1}^{\!-1} \binom{N_{\RomanNumeralCaps{2}}\!-\!1}{B_{\RomanNumeralCaps{2}}\!-\!1}^{\!-1} \frac{1}{N_{\RomanNumeralCaps{1}} N_{\RomanNumeralCaps{2}}}\ . \label{eq:full_posterior} \end{align} \end{widetext} Inference of the biSBM reduces to the task of sampling this distribution efficiently and correctly. Although Eq.~\eqref{eq:full_posterior} is somewhat daunting, note that ${\textbf{\textit{k}}}$ and ${\textbf{\textit{e}}}$ are implicit functions of the partition ${\textbf{\textit{b}}}$, meaning Eq.~\eqref{eq:full_posterior} depends only on the data and the partition ${\textbf{\textit{b}}}$. This opens the door to efficient sampling of the posterior distribution via Markov chain Monte Carlo which we discuss in Sec.~\ref{sec:fitting_algm}. 
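To make the bookkeeping of Eq.~\eqref{eq:full_posterior} concrete, the log of the joint distribution can be evaluated term by term from an edge list and a partition. The sketch below is our own illustrative implementation, not the authors' code; it adopts the convention $q(0,n)=1$, under which the recurrence for restricted partitions closes correctly (e.g., $q(4,2)=3$).

```python
from collections import Counter
from functools import lru_cache
from math import comb, lgamma, log

def lfact(x):
    """ln x! via the log-gamma function."""
    return lgamma(x + 1)

@lru_cache(maxsize=None)
def q(m, n):
    """Partitions of the integer m into at most n parts (recurrence of the text)."""
    if m == 0:
        return 1                       # the empty partition
    if m < 0 or n <= 0:
        return 0
    return q(m, n - 1) + q(m - n, n)

def log_joint(edges, b, node_type):
    """ln P_bi(A, k, e, b) for a simple bipartite graph (illustrative sketch).

    edges     : list of (i, j) pairs
    b         : block label of each node (blocks may not mix types)
    node_type : 1 or 2 for each node
    """
    N, E = len(b), len(edges)
    N1 = sum(1 for t in node_type if t == 1)
    N2 = N - N1
    k, e, A = Counter(), Counter(), Counter()
    for i, j in edges:
        k[i] += 1
        k[j] += 1
        e[tuple(sorted((b[i], b[j])))] += 1     # e_rs, keyed by the pair r < s
        A[tuple(sorted((i, j)))] += 1           # edge multiplicity A_ij
    n = Counter(b)                              # group sizes n_r
    e_r = {r: sum(m for p, m in e.items() if r in p) for r in n}
    B1 = len({b[i] for i in range(N) if node_type[i] == 1})
    B2 = len(n) - B1
    eta = Counter((b[i], k[i]) for i in range(N))   # eta_k^r
    lp = 0.0
    # likelihood:  Xi(A) / Omega(e)
    lp += sum(lfact(k[i]) for i in range(N)) - sum(lfact(m) for m in A.values())
    lp += sum(lfact(m) for m in e.values()) - sum(lfact(e_r[r]) for r in n)
    # degree prior:  P(k | eta) P(eta | e, b)
    lp += sum(lfact(c) for c in eta.values()) - sum(lfact(n[r]) for r in n)
    lp -= sum(log(q(e_r[r], n[r])) for r in n)
    # bipartite edge-count prior:  inverse multiset coefficient with B1*B2 bins
    lp -= log(comb(B1 * B2 + E - 1, E))
    # partition prior:  P(b | n) P(n | B) P(B)
    lp += sum(lfact(n[r]) for r in n) - lfact(N1) - lfact(N2)
    lp -= log(comb(N1 - 1, B1 - 1)) + log(comb(N2 - 1, B2 - 1))
    lp -= log(N1) + log(N2)
    return lp
```

On a toy network with two type-$\RomanNumeralCaps{1}$ and two type-$\RomanNumeralCaps{2}$ nodes grouped into one block per type, the function returns a finite negative value, as expected of a log-probability, and comparing partitions by this value is exactly the model comparison described above.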
\subsection{Comparison with the hierarchical SBM} In deriving the biSBM, we replaced the SBM's uniform prior for edge counts with a bipartite formulation Eq.~\eqref{eq:uniform_bipartite_prior}. However, one can instead replace it with a Bayesian hierarchy of models~(Eq.~\eqref{eq:hi_edge_count}; \cite{peixotoHierarchicalBlockStructures2014}). In this hierarchical SBM (hSBM), the matrix ${\textbf{\textit{e}}}$ is itself treated as the adjacency matrix of a multigraph with $B$ nodes and $E$ edges, allowing it to be modeled by a second SBM. Of course, the second SBM also has an edge count matrix with the same number of edges and fewer nodes, so the process of modeling each edge count matrix using another SBM can be done recursively until the model has only one block. In so doing, the hSBM typically achieves a higher posterior probability (which corresponds to higher compression, from a description length point of view) than non-hierarchical (or ``flat'') models, and can therefore identify finer-scale community structure while representing network data more efficiently. However, as we will see, when the network is small and has no hierarchical structure, the hSBM can actually underfit the data, finding too few communities, due to the overhead of specifying a hierarchy even when none exists. The scenarios in which the flat bipartite prior has advantages over its hierarchical counterpart are explored in Sec.~\ref{sec:benchmark}. % % \section{Fitting the model to data} \label{sec:fitting_algm} The mathematical formulation of the biSBM takes full advantage of a network's bipartite structure to arrive at a better model. Here, we again make use of that bipartite structure to accelerate and improve our ability to fit the model, Eq.~\eqref{eq:full_posterior}, to network data. At a high level, our algorithm for model fitting consists of two key routines.
The first routine is typical of SBM inference, and uses Markov chain Monte Carlo importance sampling~\cite{metropolisEquationStateCalculations1953,hastingsMonteCarloSampling1970,peixotoEfficientMonteCarlo2014}, followed by simulated annealing, to explore the space of partitions, conditioned on fixed community counts. In this routine, we accelerate mixing time by making use of the bipartite constraint, specifying a Markov chain only over states (partitions) with one type of node in each block. Importantly, this constraint has the added effect that we must fix both block counts, $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$, separately. The second routine of our algorithm consists of an adaptive search over the two-dimensional space of possible $(B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}})$, using the ideas of dynamic programming~\cite{cormenIntroductionAlgorithms3rd2009,ericksonAlgorithms2019}. It attempts to move quickly through those parts of the $(B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}})$ plane that have low probability under Eq.~\eqref{eq:full_posterior} without calling the MCMC routine, instead allocating computation time to the regions that better explain the data. The result is an effective algorithm, with two separable routines, which makes full use of the network's bipartite structure, allowing us to either maximize or sample from the posterior Eq.~\eqref{eq:full_posterior}. One advantage of having decoupled routines in this way is that the partitioning engine is a modular component which can be swapped out for a more efficient alternative, should one be engineered or discovered.
Reference implementations of two SBM partitioning algorithms, a Kernighan-Lin-inspired local search~\cite{kernighanEfficientHeuristicProcedure1970,karrerStochasticBlockmodelsCommunity2011,larremoreEfficientlyInferringCommunity2014} and the MCMC algorithm, are freely available as part of the~\texttt{bipartiteSBM} library~\cite{yenBipartiteSBMPythonLibrary}. Alternative methods for model fitting exist. For instance, it is possible to formulate a Markov chain over the entire space of partitions whose stationary distribution is the full posterior, without conditioning on the number of groups. In such a scheme, transitions in the Markov chain can create or destroy groups~\cite{rioloEfficientMethodEstimating2017}, and the Metropolis-Hastings principles guarantee that this chain will eventually mix. However, this approach turns out to be too slow to be practical because the chain gets trapped in metastable states, extending mixing times. Another alternative approach is to avoid our two-dimensional search over $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$, and instead search over $B = B_{\RomanNumeralCaps{1}} + B_{\RomanNumeralCaps{2}}$. This is the approach of Ref.~\cite{peixotoParsimoniousModuleInference2013}, where, after proving the existence of an optimal number of blocks $B$, a golden-ratio one-dimensional search is used to efficiently find it. \subsection{Inference routine} \label{sec:inference_algm} % The task of the MCMC inference routine is to maximize Eq.~\eqref{eq:full_posterior}, conditioned on fixed values of $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$. Starting from an initial partition ${\textbf{\textit{b}}}_{\text{init}}$, the MCMC algorithm explores the space of partitions with fixed $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$ by proposing changes to the block memberships ${\textbf{\textit{b}}}$, and then accepting or rejecting those moves with carefully specified probabilities. 
As is typical, those probabilities are chosen so that the probability that the algorithm is at any particular partition is equal to the posterior probability of that partition, given $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$, by enforcing the Metropolis-Hastings criterion. Rather than initializing the MCMC procedure from a fully random initial partition, we instead use an agglomerative initialization~\cite{peixotoHierarchicalBlockStructures2014} which reduces burn-in time and avoids getting trapped in metastable states that are common when group sizes are large. The agglomerative initialization amounts to putting each node in its own group and then greedily merging pairs of groups of matching types until the specified $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$ remain. After initialization, each step consists of proposing to move a node $i$ from its current group $r$ to a new group $s$. Following~\cite{peixotoNonparametricBayesianInference2017}, proposal moves are generated efficiently in a two-step procedure. First, we sample a random neighbor $j$ of node $i$ and inspect its group membership $b_j$. Then, with probability $\epsilon B / ( e_{b_j} + \epsilon B )$ we choose $s$ uniformly at random from $\lbrace 1, 2, \dots, B \rbrace$; otherwise, we choose $s$ with probability proportional to the number of edges leading to that group from group $b_j$, i.e., proportional to $e_{b_j s}$. A proposed move which would violate the bipartite structure by mixing node types, or which would leave group $r$ empty, is rejected with probability one. A valid proposed move is accepted with probability \begin{equation} a = \min \left \{ 1, \frac{p\left(b_i = s \rightarrow r \right)}{p\left(b_i = r \rightarrow s \right)} \exp{ \left(- \beta \Delta S \right)} \right \} \ , \label{eq:acceptance_probability} \end{equation} where \begin{equation} p\left(b_i = r \rightarrow s \right) = \sum_{t}{R_t^i} \frac{e_{ts} + \epsilon}{e_t + \epsilon B} \ . 
\label{eq:proposal_moves} \end{equation} Here, $R_t^i$ is the fraction of neighbors of node $i$ which belong to block $t$, and $\epsilon > 0$ is an arbitrary parameter that enforces ergodicity. The term $\beta$ is an inverse-temperature parameter, and $\Delta S$ is the difference between the entropies of the biSBM's microcanonical ensemble in its current state and in its proposed new state. With this in mind, \begin{equation} \Delta S = S|_{b_i = s} - S|_{b_i = r} = \ln \frac{P\left( {\textbf{\textit{A}}}, {\textbf{\textit{k}}}, {\textbf{\textit{e}}}, {\textbf{\textit{b}}} \right)}{P\left( {\textbf{\textit{A}}}', {\textbf{\textit{k}}}', {\textbf{\textit{e}}}', {\textbf{\textit{b}}}' \right)} \ , \label{eq:delta_entropy} \end{equation} where variables without primes represent the current state ($b_i = r$) and variables with primes correspond to the state being proposed ($b_i = s$). The initialization, proposal, and evaluation steps of the algorithm above are fast. With continuous bookkeeping of the edges incident to each group, proposals can be made in time $\mathcal{O}\left(k_i\right)$, and are engineered to substantially improve the mixing times since they remove an explicit dependency on the number of groups which would otherwise be present with fully random moves~\cite{peixotoHierarchicalBlockStructures2014}. Then, when evaluating Eq.~\eqref{eq:delta_entropy}, we need only a number of terms proportional to $k_i$. In combination, the cost of an entire ``sweep,'' consisting of one proposed move for each node in the network, is $\mathcal{O}\left(E\right)$. The overall number of steps necessary for MCMC inference is therefore $\mathcal{O}\left( \tau E \right)$, where $\tau$ is the average mixing time of the Markov chain, independent of $B$.
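The two-step proposal described above can be sketched as follows; the data structures (\texttt{neighbors}, \texttt{e\_count}, \texttt{e\_tot}) are hypothetical stand-ins for the bookkeeping maintained by the real implementation, and the bipartite type check and empty-group check are assumed to be handled by the caller.

```python
import random

def propose_group(i, b, neighbors, e_count, e_tot, B, eps=1.0):
    """Two-step move proposal for node i (a sketch, not the library code).

    b        -- list of current group labels
    neighbors-- adjacency list
    e_count  -- e_count[t][s]: number of edges between groups t and s
    e_tot    -- e_tot[t]: total number of edge endpoints in group t
    """
    j = random.choice(neighbors[i])       # step 1: a random neighbor j
    t = b[j]                              # ... and its group t
    # step 2: with prob eps*B/(e_t + eps*B), choose uniformly at random
    if random.random() < eps * B / (e_tot[t] + eps * B):
        return random.randrange(B)
    # ... otherwise choose s with probability proportional to e_{t s}
    r = random.uniform(0, e_tot[t])
    acc = 0.0
    for s in range(B):
        acc += e_count[t][s]
        if r <= acc:
            return s
    return B - 1                          # numerical fallback
```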
Our \texttt{bipartiteSBM} implementation~\cite{yenBipartiteSBMPythonLibrary} has the following default settings, chosen to stochastically maximize Eq.~\eqref{eq:full_posterior} for fixed $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$ via a simulated annealing process. We first let $\epsilon = 1$, and perform $10^3$ sweeps at $\beta=1$ to reach equilibrated partitions. Then we perform zero-temperature ($\beta \rightarrow \infty$) sweeps, in which only moves leading to a strictly lower entropy are allowed. We keep track of the system's entropy during this process and exit the MCMC routine when no record-breaking event is observed within a window of $2 \times 10^3$ sweeps, or when the number of sweeps exceeds $10^4$, whichever comes first. The partition ${\textbf{\textit{b}}}$ returned at exit is the one with the lowest entropy encountered. Equivalently stated, this partition ${\textbf{\textit{b}}}$ corresponds to the minimum description length or highest posterior probability, for fixed $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$. The minimal entropy at each stage is recorded for later use by the search routine. The bipartite MCMC formulation is more than just similar to its general counterpart. In fact, one can show that for fixed $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$, the Markov chain transition probabilities dictated by Eq.~\eqref{eq:delta_entropy} are identical for the uniform bipartite edge count prior Eq.~\eqref{eq:uniform_bipartite_prior} and its general equivalent introduced in~\cite{peixotoNonparametricBayesianInference2017}. This means that the MCMC algorithm explores the same entropic landscape for both bipartite and general networks when $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$ are fixed.
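The annealing schedule just described can be condensed into a short sketch; here \texttt{sweep} is a placeholder for one full MCMC sweep returning the current entropy, and the constants mirror the defaults stated above.

```python
def anneal(sweep, equil_sweeps=1000, window=2000, max_sweeps=10_000):
    """Outer annealing loop (a sketch). `sweep(beta)` performs one MCMC
    sweep and returns the current entropy S; it stands in for the real
    inference engine."""
    for _ in range(equil_sweeps):          # equilibrate at beta = 1
        sweep(beta=1.0)
    best, since_record = float("inf"), 0
    for _ in range(max_sweeps):
        s = sweep(beta=float("inf"))       # greedy, zero-temperature
        if s < best:
            best, since_record = s, 0      # record-breaking event
        else:
            since_record += 1
        if since_record >= window:         # no new record in the window
            break
    return best
```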
As we will demonstrate in Sec.~\ref{sec:benchmark}, however, by combining the MCMC routine with both the novel search routine over the block counts and the more sensitive biSBM priors, we can better infer model parameters in bipartite networks. % \begin{figure}[tp] \centering \includegraphics[width=0.88\linewidth]{fig1.pdf} \caption[]{Diagram showing the biSBM community detection algorithm on the description length landscape of the malaria gene-substring network~\cite{larremoreNetworkApproachAnalyzing2013}. (a)~Each square in the heatmap shows the result of fitting a model using MCMC at the specified $\left (B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right )$. The color bar scales linearly. An arrow indicates the minimizing point. (b)~Trajectory of the efficient search routine over the landscape shown in the top panel. Circles indicate where MCMC inference was required. Pink shaded regions show neighborhoods of exhaustive local search, with sequential order indicated by \circled{1} to \circled{5}. (c)~Change of description length values as the algorithm progresses. Shaded circles show the steps at which the 36 MCMC calculations were performed. The minimizing point at $(11,14)$ was found during local search \circled{4} and confirmed during local search \circled{5}.} \label{fig:heuristic} \end{figure} \subsection{Search routine} \label{sec:search_algm} % The task of the search routine is to maximize Eq.~\eqref{eq:full_posterior} over the $\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right)$ plane, i.e., to find the optimal number of groups. However, maximizing Eq.~\eqref{eq:full_posterior} for any fixed choice of $\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right)$ requires the MCMC inference introduced above, motivating the need for an efficient search. 
If we were to treat the network as unipartite, a one-dimensional convex optimization on the total number of groups $B = B_{\RomanNumeralCaps{1}} + B_{\RomanNumeralCaps{2}}$ with a search cost of $\mathcal{O}\left( \ln N \right)$~\cite{peixotoParsimoniousModuleInference2013} could be used. On the other hand, exhaustively exploring the plane of possibilities would incur a search cost of $\mathcal{O}(B_\text{max}^2)$, where $B_\text{max}$ is the maximum value of $B$ which can be detected. In fact, our experiments indicate that neither the general unipartite approach nor the naive bipartite approach is optimal. The plane search is too slow, while the line search undersamples local maxima of the $\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right)$ landscape, which is typically multimodal. Instead, we present a recursive routine that runs much faster than exhaustive search and parameterizes the tradeoff between search speed and search accuracy, rapidly finding the high-probability region of the $\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right)$ plane without excessive calls to the more expensive MCMC routine. We provide only a brief outline of the search algorithm here, supplying full details in Appendix~\ref{appendix:algorithm}. The search is initialized with each node in its own block. Blocks are rapidly agglomerated until $\min\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}} \right) = \lfloor \sqrt{2 E}/2 \rfloor$. This is the so-called {\it resolution limit}, the maximum number of communities that our algorithm can reliably find, which we discuss in detail in Sec.~\ref{sec:resolution_limit}. Equation~\eqref{eq:full_posterior} will never be maximized prior to reaching this frontier. During this initial phase, we also compute the posterior probability of the trivial bipartite partition with $(1,1)$ blocks, as a reference for the next phase.
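For reference, the agglomeration stopping point is a one-line computation; for example, a network with $2\times10^4$ edges has its frontier at $\min(B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}) = 100$.

```python
from math import floor, sqrt

def resolution_frontier(E: int) -> int:
    """Stopping point of the initial agglomerative phase:
    min(B1, B2) = floor(sqrt(2 E) / 2), the resolution limit
    discussed in the text."""
    return floor(sqrt(2 * E) / 2)

assert resolution_frontier(20_000) == 100
```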
\begin{figure*}[!t] \centering \begin{center} \includegraphics[width=0.9\linewidth]{fig2.png} \caption[]{Numerical tests of the recovery of planted structure in synthetic networks with $N=10^4$ nodes. Each point shows the median of $10^2$ replicates of the indicated model and algorithm (see legend) and error bars span the $25\%$ to $75\%$ quantiles. Insets show the structure of the problems at moderate $\epsilon$. (a) A test meant to be easy: mean degree $5$, equally sized groups, and $B_{\RomanNumeralCaps{1}} = B_{\RomanNumeralCaps{2}} = 10$. (b) A test meant to be challenging: mean degree $15$, equally sized groups, and $B_{\RomanNumeralCaps{1}} = 4$ and $B_{\RomanNumeralCaps{2}} = 15$.} \label{fig:benchmark} \end{center} \end{figure*} Next, we search the region of the ($B_{\RomanNumeralCaps{1}}$,$B_{\RomanNumeralCaps{2}}$) plane within the resolution frontier to find a local maximum of Eq.~\eqref{eq:full_posterior} by adaptively reducing the number of communities. In this context, a local maximum is defined as an MCMC-derived partition with exactly ($B_{\RomanNumeralCaps{1}}$,$B_{\RomanNumeralCaps{2}}$) blocks, whose posterior probability is larger than the posterior probabilities for MCMC-derived partitions at nearby values ($B_{\RomanNumeralCaps{1}}\pm h$,$B_{\RomanNumeralCaps{2}}\pm h$), for a chosen neighborhood size $h$. From the initial partition at the resolution frontier, we merge blocks, selected greedily from a stochastically sampled set of proposed merges. Here, because the posterior probability can be an astronomically small number, it is numerically more convenient to work with the model entropy $S$, which is related to the posterior probability by $S = - \ln P$. Proposed merges are evaluated by their entropy after merging, but without calling the MCMC routine to optimize the post-merge partition.
Because MCMC finds better (or no worse) fits to the data, these post-merge entropies are approximate upper bounds on the best-fit entropy, given the post-merge number of blocks. We therefore use this approximate upper bound to make the search adaptive: whenever a merge would produce an upper-bound approximation that is a factor $1+\Delta_0$ higher than the current best $S$, a full MCMC search is initialized at the current grid point. Otherwise, merges proceed rapidly since the approximate entropy is extremely cheap to compute. Throughout this process, the value of $\Delta_0$ is estimated from the data to balance accuracy and efficiency, and it adaptively decreases as the search progresses (Appendix~\ref{appendix:algorithm}). The algorithm exits when it finds a local minimum on the entropic landscape, returning the best overall partition explored during the search. In practice, a typical call to the algorithm takes the form of (i) a rapid agglomerative merging phase from $(N_{\RomanNumeralCaps{1}},N_{\RomanNumeralCaps{2}})$ blocks to the resolution limit frontier; (ii) many agglomerative merges, relying on the approximate entropy, to move toward candidate local minima; (iii) more deliberate and MCMC-reliant neighborhood searches to examine candidate local minima. These phases are shown in Fig.~\ref{fig:heuristic}. The algorithm has total complexity $\mathcal{O}(m h^2)$, where $m$ is the number of times that an exhaustive neighborhood search is performed. When $h=2$, we find $m < 3$ for most empirical networks examined. This algorithm is not guaranteed to find the global optimum, but due to the typical structure of the $\left( B_{\RomanNumeralCaps{1}}, B_{\RomanNumeralCaps{2}}\right)$ optimization landscape for bipartite networks, we have found it to perform well for many synthetic and empirical networks, and it tends to consistently estimate the number of groups (see Sec.~\ref{sec:resolution_limit}).
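The adaptive logic can be distilled into a skeleton (a simplified rendering under our own naming, not the library code): cheap post-merge entropy bounds drive the scan over grid points, and the expensive MCMC estimate is invoked only when a bound exceeds the current best entropy by a factor of $1+\Delta_0$.

```python
def adaptive_search(S_approx, S_mcmc, grid_points, delta0=0.1):
    """Skeleton of the adaptive search over (B1, B2) (a sketch).

    S_approx(B1, B2) -- cheap post-merge entropy upper bound
    S_mcmc(B1, B2)   -- expensive MCMC-optimized entropy estimate
    grid_points      -- candidate (B1, B2) values, ordered from the
                        resolution frontier toward fewer groups
    """
    best_S, best_B = float("inf"), None
    for B1, B2 in grid_points:
        s_hat = S_approx(B1, B2)
        # distrust the cheap bound when it is far above the best S
        if s_hat > (1 + delta0) * best_S:
            s_hat = S_mcmc(B1, B2)  # run full MCMC at this grid point
        if s_hat < best_S:
            best_S, best_B = s_hat, (B1, B2)
        # (the real routine also shrinks delta0 as the search proceeds)
    return best_B, best_S
```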
An implementation is available in the \texttt{bipartiteSBM} library~\cite{yenBipartiteSBMPythonLibrary}. % \section{Reconstruction performance}\label{sec:benchmark} % In this section, we examine our method's ability to correctly recover the block structure in synthetic bipartite networks where known structure has been intentionally hidden. In each test, we begin by creating a bipartite network with unambiguous block structure, and then gradually mix that structure with noise until the planted structure disappears entirely, creating a sequence of community detection problems that are increasingly challenging~\cite{mooreComputerSciencePhysics2017}. The performance of a community detection method can then be measured by how well it recovers the known partition over this sequence of challenges. The typical synthetic test for unipartite networks is the {\it planted partition model}~\cite{condonAlgorithmsGraphPartitioning2001} in which groups have $\omega_{rr} = \omega_\text{in}$ assortative edges, and $\omega_{rs} = \omega_\text{out}$ disassortative edges for $r \neq s$. When the total expected degree for each group is fixed, the parameter $\epsilon = \omega_\text{out}/\omega_\text{in}$ controls the ambiguity of the planted blocks. Unambiguous assortative structure corresponds to $\epsilon=0$ while $\epsilon=1$ corresponds to a fully random graph. Here, we consider a straightforward translation of this model to bipartite networks in which the nodes are again divided into blocks according to a planted partition. As in the unipartite planted partition model, non-zero entries of the block affinity matrix take on one of two values, but because all edges are disassortative, we replace $\omega_\text{in}$ and $\omega_\text{out}$ with $\omega_{\text{+}}$ and $\omega_{\text{-}}$ to avoid confusion (see insets of Fig.~\ref{fig:benchmark}).
By analogy, we let $\epsilon=\omega_{\text{-}}/\omega_{\text{+}}$ while fixing the total expected degree for each group, so that $\epsilon=0$ corresponds to highly resolved communities which blend into noise as $\epsilon$ grows. We present two synthetic tests using this bipartite planted partition model, designed to be easy and difficult, respectively. In the easy test, the unambiguous structure consists of $N_{\RomanNumeralCaps{1}}\!=\!N_{\RomanNumeralCaps{2}}\!=\!5\times10^3$ nodes, divided evenly into $B_{\RomanNumeralCaps{1}}\!=\!B_{\RomanNumeralCaps{2}}\!=\!10$ blocks of 500 nodes each, with a mean degree $\langle k \rangle = 5$. Each type-$\RomanNumeralCaps{1}$ block is matched with a type-$\RomanNumeralCaps{2}$ block so that the noise-free network consists of exactly 10 bipartite components, with zero edges placed between nodes in different components by definition. In the hard test, the unambiguous structure consists of $N\!=\!10^4$ nodes divided evenly into $B_{\RomanNumeralCaps{1}}=4$ and $B_{\RomanNumeralCaps{2}}=15$ blocks of approximately equal size, with mean degree $\langle k \rangle=15$. The relationships between the groups in the hard test are more complex, so the insets of Fig.~\ref{fig:benchmark} provide schematics of the adjacency matrices of both tests under a moderate amount of noise. In both cases, node degrees were drawn from a power-law distribution with exponent $\alpha=2$, and for a fixed $\epsilon$, networks were drawn from the canonical degree-corrected stochastic blockmodel~\cite{karrerStochasticBlockmodelsCommunity2011,larremoreEfficientlyInferringCommunity2014}. We test four methods' abilities to recover the bipartite planted partitions, in combinations that allow us to separate the effects of using our bipartite {\it model} (Sec.~\ref{sec:nonparametric}) and our bipartite {\it search algorithm} (Sec.~\ref{sec:fitting_algm}), in comparison to existing methods. The first method maximizes the biSBM posterior using our 2D search algorithm.
The second method keeps the 2D search algorithm, but examines the effects of the bipartite-specific edge count prior by replacing it with the general SBM's edge count prior [i.e., replacing Eq.~\eqref{eq:uniform_bipartite_prior} with Eq.~\eqref{eq:uniform_prior}]. The third method uses the same general SBM edge count prior as the second, but uses a 1D bisection search~\cite{peixotoParsimoniousModuleInference2013} to examine the effects of the 2D search. The fourth method maximizes the hierarchical SBM posterior using a 1D bisection search. For the first two cases, we use our \texttt{bipartiteSBM} library~\cite{yenBipartiteSBMPythonLibrary}, while for the latter two, we use the \texttt{graph-tool} library~\cite{peixotoGraphtoolPythonLibrary2014}. In all cases, we enforce type-specific MCMC move proposals to avoid mixed-type groups. In the easy test, we find that the bipartite search algorithm introduced in Sec.~\ref{sec:fitting_algm} performs better than the one-dimensional searches (Fig.~\ref{fig:benchmark}a). Because the one-dimensional search algorithm assumes that the optimization landscape is unimodal, we reasoned that other modes may emerge as $\epsilon$ increases. To test this, we generated networks within the transition region ($\epsilon \approx 0.054$) and then conducted an exhaustive survey of plausible $(B_{\RomanNumeralCaps{1}},B_{\RomanNumeralCaps{2}})$ values using MCMC with the general SBM. This revealed two basins of attraction, located at $(8, 8)$ and $(1,1)$, explaining the SBM's performance. This bimodal landscape can therefore hinder search in one dimension by too quickly attracting the algorithm to the trivial bipartite partition. Perhaps surprisingly then, a similar exhaustive survey of the $(B_{\RomanNumeralCaps{1}},B_{\RomanNumeralCaps{2}})$ plane using the bipartite model revealed that near the transition $\epsilon$, the biSBM has a local optimum with {\it more than} the planted $(10,10)$ blocks. 
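For concreteness, the expected edge-count matrix of the easy test can be constructed so that each group's expected degree is independent of $\epsilon$ (a sketch of the construction only, with our own function names; networks are then sampled from the degree-corrected SBM, which we omit):

```python
def matched_affinity(B, E, eps):
    """Expected edge-count matrix omega[r][s] between type-I and type-II
    groups for the 'easy' test: B type-I blocks matched one-to-one with
    B type-II blocks. eps = w_minus / w_plus; each group's expected
    degree is held fixed at E / B as eps varies."""
    w_plus = (E / B) / (1 + eps * (B - 1))   # matched-pair entry
    w_minus = eps * w_plus                   # all other entries
    return [[w_plus if r == s else w_minus for s in range(B)]
            for r in range(B)]

omega = matched_affinity(B=10, E=25_000, eps=0.1)
# each type-I group keeps expected degree E / B regardless of eps
assert all(abs(sum(row) - 2_500) < 1e-9 for row in omega)
```

At $\epsilon=0$ the matrix is diagonal (10 disconnected bicliques), while at $\epsilon=1$ every entry is equal, matching the fully random limit described above.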
In the hard case, we find that it is not the bipartite search that enables the biSBM to outperform the other methods, but rather the bipartite posterior (Fig.~\ref{fig:benchmark}b). An exploration of the outputs of the general searches shows that when they fail, they tend to find an incorrect number of blocks, which should total $19$ [corresponding to the planted $(4,15)$ blocks]. To understand this failure mode in more detail, we fixed $B=19$ and used MCMC to fit the general SBM~\cite{peixotoGraphtoolPythonLibrary2014}. This led to solutions in which $B_{\RomanNumeralCaps{1}} \approx B_{\RomanNumeralCaps{2}}$, revealing that the performance degradation, relative to the biSBM, was due to a tendency for that particular algorithmic implementation of the SBM to find more balanced numbers of groups. Interestingly, near their respective transition values of $\epsilon$, both the SBM and biSBM tend to find more groups than were planted in the hard test, thus overfitting the data. To explore this further, we again conducted exhaustive surveys of the $(B_{\RomanNumeralCaps{1}},B_{\RomanNumeralCaps{2}})$ plane using MCMC and found that under both models, the posterior surfaces are consistently multimodal, with attractive peaks corresponding to more communities than the planted $(4,15)$. However, only the bipartite search algorithm introduced in Sec.~\ref{sec:fitting_algm} finds overfitted partitions with too many groups; the unipartite search algorithms instead return underfitted models with too few groups, balanced between the node types. In sum, our synthetic network tests reveal two phenomena. First, the biSBM with bipartite search is able to extract structure from higher levels of noise than the alternatives, making it an attractive option for bipartite community detection with real data.
However, our tests also reveal that the posterior surfaces of both the SBM and biSBM degenerate in unexpected ways near the detectability transition~\cite{decelleAsymptoticAnalysisStochastic2011,mosselReconstructionEstimationPlanted2015,kawamotoDetectabilityThresholdsGeneral2017,ricci-tersenghiTypologyPhaseTransitions2019a}. % \section{Resolution Limit}\label{sec:resolution_limit} % \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fig3.png} \caption[]{A numerical experiment on bipartite cliques to demonstrate the resolution limit. As an increasing number of bipartite cliques with $10$ nodes of each type are presented to the SBM, biSBM, and hSBM (see legend), the hSBM continues to find all cliques while the SBM and biSBM begin to merge pairs, quartets, and eventually octets of cliques. Arrows indicate analytical predictions of merge transitions from posterior odds ratios, with colors matching the legend. Note that biSBM transitions occur at twice the value of $B$ as SBM transitions, showing the biSBM's expanded resolution limit.} \label{fig:resolution} \end{figure} Community detection algorithms exhibit a {\it resolution limit}, an upper bound on the number of blocks that can be resolved in data, even when those blocks are seemingly unambiguous. For instance, using the general SBM, only $B_\text{max} = \mathcal{O} \left(N^{1/2}\right)$ groups can be detected~\cite{peixotoNonparametricBayesianInference2017}, while the higher resolution of the hierarchical SBM improves this scaling to $B_\text{max} = \mathcal{O} \left( {N}/\ln{N}\right)$~\cite{peixotoHierarchicalBlockStructures2014}. In this section we investigate the resolution limit of the biSBM numerically and analytically. Our numerical experiment considers a network of $B_{\RomanNumeralCaps{1}} = B_{\RomanNumeralCaps{2}} = \tilde{B}$ bipartite cliques of equal size, with $10$ nodes of each type per biclique and therefore $100$ edges per biclique. 
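The synthetic input for this experiment is simple to construct; the following sketch (node labels are ours) builds the edge list of $\tilde{B}$ disjoint bicliques with $10$ nodes of each type.

```python
import itertools

def biclique_edges(n_cliques: int, size: int = 10):
    """Edge list of n_cliques disjoint complete bipartite graphs
    (bicliques), each with `size` nodes of each type and hence
    size^2 edges per biclique."""
    offset = n_cliques * size            # type-II labels start here
    edges = []
    for c in range(n_cliques):
        type1 = range(c * size, (c + 1) * size)
        type2 = range(offset + c * size, offset + (c + 1) * size)
        edges.extend(itertools.product(type1, type2))
    return edges

# 10 nodes of each type per biclique -> 100 edges per biclique
assert len(biclique_edges(5)) == 500
```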
To this network, we repeatedly apply the SBM, the hSBM, and the biSBM, and record the number of blocks found each time, varying $\tilde{B}$ between $1$ and $510$. For small values of $\tilde{B}$, all three algorithms infer $\tilde{B}$ blocks, but as the number of blocks increases, solutions which merge pairs, then quartets, and then octets become favored (Fig.~\ref{fig:resolution}). The hSBM continues to find $\tilde{B}$ blocks, as expected. The exact value of $\tilde{B}$ at which merging blocks into pairs becomes more attractive can be derived by asking when the corresponding posterior odds ratio, comparing a model with $\tilde{B}$ bicliques to a model with $\tilde{B}/2$ biclique pairs, exceeds one, \begin{equation} \Lambda(\tilde{B}) = \frac{P({\textbf{\textit{A}}},{\textbf{\textit{k}}},{\textbf{\textit{e}}}_\text{clique pairs},{\textbf{\textit{b}}}_\text{clique pairs})}{P({\textbf{\textit{A}}},{\textbf{\textit{k}}},{\textbf{\textit{e}}}_\text{cliques},{\textbf{\textit{b}}}_\text{cliques})}\ . \label{eq:bayes_resolution} \end{equation} When there are $10$ nodes of each type and $100$ edges per biclique, $\Lambda(\tilde{B})$ exceeds 1 when $\tilde{B}=19$ for the SBM and $\tilde{B}=38$ for the biSBM (Fig.~\ref{fig:resolution}; arrows). A similar calculation predicts the transition from biclique pairs to biclique quartets at $\tilde{B}=75$ for the SBM and $\tilde{B}=149$ for the biSBM (Fig.~\ref{fig:resolution}; arrows). Numerical experiments confirm these analytical predictions, but noisily, due to the stochastic search algorithms involved, and the fact that the optimization landscapes are truly multimodal, particularly near points of transition. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fig4.pdf} \caption[]{Comparison of the description lengths resulting from the biSBM, SBM, and hSBM prior distributions over edge counts.
Regions where a flat prior has a lower description length than the hierarchical prior are shaded for (a) the SBM and (b) the biSBM. Flat priors are favored when there are fewer edges, more groups, and a smaller hierarchical branching factor $\sigma$ (defined in Sec.~\ref{sec:resolution_limit}). The flat-model regime is larger for the biSBM than the SBM, as described in Sec.~\ref{sec:resolution_limit}.} \label{fig:edge_count_prior} \end{figure} The posterior odds ratio calculations above can be generalized, and show that the biSBM extends the resolution transitions twice as far as the SBM for the transitions from $B\! \to\! \tfrac{1}{2}B \! \to\! \tfrac{1}{4}B\! \to\! \dots$, but still undergoes the same transitions eventually. Thus, both models exhibit the same resolution limit scaling $B_\text{max} = \mathcal{O} \left(N^{1/2}\right)$, but with the resolution degradations that occur at $N$ nodes for the SBM occurring only at $2N$ nodes for the biSBM. Therefore, the resolution limit of the biSBM is a factor of $\sqrt{2}$ larger than that of the SBM for the same number of nodes. One can alternatively retrace the analysis of Ref.~\cite{peixotoNonparametricBayesianInference2017}, but for the biSBM applied to bicliques, to derive the same $\sqrt{2}$ resolution improvement. This constant-factor improvement in resolution limit may seem irrelevant, given that the major contribution of the hierarchical SBM was to change the order of the limit to $B_\text{max} = \mathcal{O} \left( {N}/\ln{N}\right)$~\cite{peixotoHierarchicalBlockStructures2014}. However, we find that, on the contrary, the $\sqrt{2}$ factor improvement for the biSBM expands a previously uninvestigated regime in which flat models outperform their hierarchical cousin. When given the biclique data, the hSBM finds a hierarchical division where at each level $l$, the number of groups decreases by a factor $\sigma_l$, except at the highest level where it finds a bipartite division.
Assuming that $\sigma_l = \sigma$, we have $B_l = 2 \tilde{B}_l$, where $\tilde{B}_l = \tilde{B}/\sigma^{l-1}$. The hSBM's prior for edge counts Eq.~\eqref{eq:hi_edge_count} can be factored into uniform distributions over multigraphs at lower levels and over an SBM at the topmost level, leading to \begin{IEEEeqnarray}{rCl}\IEEEeqnarraymulticol{3}{l} {P_\text{lower}\left( {\textbf{\textit{e}}} \right) = \prod_{l=1}^{\log_\sigma{\tilde{B}}}{\multiset{\sigma^2}{E\sigma^{l}/\tilde{B}}}^{-\tilde{B}/\sigma^{l}}} \nonumber\\& \times & \frac{{\sigma!}^{2\tilde{B}/\sigma^{l}}}{\left( \tilde{B}/\sigma^{l-1}\right)!^2}\binom{\tilde{B}/\sigma^{l-1}-1}{\tilde{B}/\sigma^{l}-1}^{-2} ,\nonumber\\ \label{eq:biclique_lower_multigraph} \end{IEEEeqnarray} and \begin{equation} P_\text{topmost}\left( {\textbf{\textit{e}}} \right) = \multiset{\textmultiset{2}{2}}{E}^{-1} \ . \label{eq:biclique_topmost_multigraph} \end{equation} By comparing $P_\text{hier} = P_\text{lower} P_\text{topmost}$ with the corresponding terms from the biSBM [Eq.~\eqref{eq:uniform_bipartite_prior}] or the analogous equation for the SBM [Eq.~\eqref{eq:uniform_prior}], we can identify regimes in which a flat model better describes network data than the nested model. \begin{table*}[ht] \caption{Results for 24 empirical networks. Number of nodes $N_{\RomanNumeralCaps{1}}$, $N_{\RomanNumeralCaps{2}}$, mean degree $\langle k \rangle$, number of type-$\RomanNumeralCaps{1}$ groups $B_{\RomanNumeralCaps{1}}$, number of type-$\RomanNumeralCaps{2}$ groups $B_{\RomanNumeralCaps{2}}$, and description length per edge $\Sigma/E$. Superscripts: b-biSBM, g-SBM, h-hSBM. $L$ indicates the number of levels found by the hSBM. Reported values indicate the best of 100 independent runs. Unless otherwise noted, data are accessible from the Colorado Index of Complex Networks (ICON)~\cite{clausetColoradoIndexComplex}.
The confidence level is marked with asterisks$^\text{a}$.} \centering \begin{tabular}% { >{\raggedright\arraybackslash}p{5.1cm} % >{\raggedleft\arraybackslash}p{1.1cm} % >{\raggedleft\arraybackslash}p{1.1cm} % >{\raggedleft\arraybackslash}p{1.1cm} % >{\centering\arraybackslash}p{1.6cm} % >{\centering\arraybackslash}p{1.6cm} % >{\centering\arraybackslash}p{1.6cm} % >{\raggedright\arraybackslash}p{1.3cm} % >{\raggedright\arraybackslash}p{1.0cm} % >{\raggedright\arraybackslash}p{1.0cm} % % } \hline\hline Dataset & $N_{\RomanNumeralCaps{1}}$ & $N_{\RomanNumeralCaps{2}}$ & $\langle k \rangle$ & $(B_{\RomanNumeralCaps{1}}^{\text{b}}, B_{\RomanNumeralCaps{2}}^{\text{b}})$ & $(B_{\RomanNumeralCaps{1}}^{\text{g}}, B_{\RomanNumeralCaps{2}}^{\text{g}})$ & $(B_{\RomanNumeralCaps{1}}^{\text{h}}, B_{\RomanNumeralCaps{2}}^{\text{h}})$ & $\langle L + 1 \rangle$ & $\Sigma^\text{b}/E$ & $\Sigma^\text{h}/E$\\ [0.3ex] \hline Southern women interactions~\cite{jonesDeepSouthSocial1942} & 18 & 14 & 5.56 & (1, 1) & (1, 1) & (1, 1) & 2.0 & \bfseries2.15$^*$ & 2.26 \\ Joern plant-herbivore web~\cite{joernFeedingPatternsGrasshoppers1979} & 22 & 52 & 4.97 & (2, 2) & (1, 1) & (1, 1) & 2.0 & \bfseries2.64$^*$ & 2.74 \\ Swingers and parties~\cite{niekampSexualAffiliationNetwork2013} & 57 & 39 & 4.83 & (1, 1) & (1, 1) & (1, 1) & 2.0 & \bfseries2.92$^*$ & 2.97 \\ McMullen pollination web~\cite{mcmullenFlowervisitingInsectsGalapagos1993} & 54 & 105 & 2.57 & (2, 2) & (2, 2) & (1, 1) & 2.0 & \bfseries2.87$^*$ & 3.02 \\ Ndrangheta criminals~\cite{dimilanoOrdinanzaDiApplicazione2011} & 156 & 47 & 4.48 & (3, 4) & (3, 3) & (3, 4) & 2.87 & \bfseries3.44$^*$ & 3.49 \\ Abu Sayyaf kidnappings$^\text{b}$~\cite{gerdesAssessingAbuSayyaf2014} & 246 & 105 & 2.28 & (2, 2) & (1, 1) & (1, 1) & 2.0 & \bfseries4.50$^*$ & 4.54\\ Virus-host interactome~\cite{rozenblatt-rosenInterpretingCancerGenomes2012} & 53 & 307 & 2.52 & (2, 2) & (1, 1) & (1, 1) & 2.0 & \bfseries3.78$^*$ & 3.81\\ Clements-Long 
plant-pollinator~\cite{clementsExperimentalPollinationOutline1923} & 275 & 96 & 4.98 & (1, 1) & (1, 1) & (1, 1) & 2.0 & \bfseries3.45$^*$ & 3.47 \\ Human musculoskeletal system~\cite{murphyStructureFunctionControl2018} & 173 & 270 & 4.30 & (7, 8) & (5, 5) & (8, 8) & 4.01 & \bfseries3.94 & 3.94 \\ Mexican drug trafficking$^\text{b}$~\cite{cosciaKnowingWhereHow2012} & 765 & 10 & 16.1 & (12, 8) & (8, 7) & (10, 6) & 3.11 & \bfseries1.26$^*$ & 1.29 \\ Country-language network~\cite{kunegisKONECTKoblenzNetwork2013} & 254 & 614 & 2.89 & (4, 5) & (2, 2) & (4, 3) & 2.11 & \bfseries 4.53$^*$ & 4.56 \\ Malaria gene similarity~\cite{larremoreNetworkApproachAnalyzing2013} & 297 & 806 & 5.38 & (15, 16) & (6, 6) & (25, 20) & 4.95 & 4.73 & \bfseries4.67$^*$ \\ Protein complex-drug~\cite{nacherModularityProteinComplex2012} & 739 & 680 & 5.20 & (20, 22) & (14, 14) & (35, 39) & 5.06 & 3.65 & \bfseries3.50$^{**}$ \\ Robertson plant-pollinator~\cite{robertsonFlowersInsectsLists1928} & 456 & 1428 & 16.2 & (20, 18) & (11, 11) & (20, 19) & 4.0 & \bfseries3.10$^*$ & 3.10 \\ Human gene-disease network~\cite{gohHumanDiseaseNetwork2007} & 1419 & 516 & 4.06 & (13, 14) & (9, 9) & (35, 36) & 5.04 & 5.02 & \bfseries4.80$^{**}$ \\ Food ingredients-flavors web~\cite{ahnFlavorNetworkPrinciples2011} & 1525 & 1107 & 27.9 & (27, 69) & (20, 29) & (42, 130) & 4.91 & 2.55 & \bfseries 2.51$^{**}$\\ Wikipedia doc-word network~\cite{gerlachNetworkApproachTopic2018} & 63 & 3140 & 24.8 & (22, 206) & (18, 23) & (29, 71) & 4.17 & 1.58 & \bfseries 1.51$^{**}$\\ Foursquare check-ins~\cite{yangFinegrainedPreferenceawareLocation2013} & 2060 & 2876 & 11.0 & (65, 66) & (40, 40) & (244, 248) & 5.2 & 5.92 & \bfseries5.09$^{**}$\\ Ancient metabolic network~\cite{goldfordRemnantsAncientMetabolism2017} & 5651 & 5252 & 4.22 & (18, 22) & (5, 5) & (17, 21) & 4.26 & \bfseries5.68$^{**}$ & 5.82\\ Marvel Universe characters~\cite{alberichMarvelUniverseLooks2002} & 6486 & 12942 & 9.95 & (68, 72) & (67, 62) & (365, 314) & 6.24 & 
4.70 & \bfseries4.42$^{***}$ \\ Reuters news stories~\cite{lewisRCV1NewBenchmark2004} & 19757 & 38677 & 33.5 & (396, 440) & (87, 108) & (294, 463) & 6.25 & 4.22 & \bfseries4.16$^{***}$ \\ IMDb movie-actor dataset$^\text{c}$ & 53158 & 39768 & 6.49 & (91, 92) & (69, 68) & (264, 265) & 6.22 & 7.40 & \bfseries7.30$^{***}$ \\ YouTube group memberships~\cite{misloveMeasurementAnalysisOnline2007} & 94238 & 30087 & 4.72 & (62, 66) & (37, 38) & (221, 238) & 5.9 & \bfseries7.07$^{**}$ & 7.13 \\ DBpedia writer network~\cite{auerDBpediaNucleusWeb2007} & 89355 & 46213 & 2.13 & (22, 26) & (2, 3) & (2, 3) & 2.16 & \bfseries10.32$^{**}$ & 10.41\\ [0.5ex] \hline\hline \end{tabular} \begin{tablenotes}[flushleft] \small \item $^\text{a}$ Via the posterior odds ratio: $^{*}: \Lambda < 10^{-2}$;\quad $^{**}: \Lambda < 10^{-100}$;\quad $^{***}: \Lambda < 10^{-10000}$. \item $^\text{b}$ Temporal data with timestamps are aggregated, making a multigraph. \item $^\text{c}$ Data available at \url{https://www.imdb.com/interfaces}. IMDb copyright permits redistribution of data only in unaltered form. \end{tablenotes} \label{table:benchmark} \end{table*} Figure~\ref{fig:edge_count_prior} shows regimes in which the flat model is preferred for both the SBM and biSBM. These regimes are larger for the biSBM than the SBM, as expected, and are larger when the hierarchical branching factor $\sigma$ decreases---indeed, if the data are less hierarchical, the hierarchical model is expected to have less of an advantage. The flat-model description is also favored when there are fewer edges and more groups, suggesting that in order for the nested model to be useful, it requires sufficient data to support its more costly nested architecture. A number of real-world networks that fall into this flat-model regime are described in the following section. We note that our definition of this regime relies on assumptions of perfect inference and a fixed branching factor at each level of the hSBM's hierarchy. 
These assumptions may not always hold. % \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{fig5.png} \caption[]{Repeated application of models (see legend in panel a) with default algorithms produces distributions of the description length and the number of groups, for eight of the empirical networks listed in Table~\ref{table:benchmark}. Vertical lines mark the value of the mean description length.} \label{fig:empirical} \end{figure*} \section{Empirical networks}\label{sec:empirical} % We now examine the application of the biSBM to a corpus of real-world networks ranging in size from $N=32$ to $N=135,568$ nodes, across social, biological, linguistic, and technological domains. While it was typical of past studies to measure a community detection method by its ability to recapitulate known metadata labels, we acknowledge that this approach is inadvisable for a number of theoretical and practical reasons~\cite{peelGroundTruthMetadata2017} and instead compare the biSBM to the SBM and hSBM using Bayesian model selection. In general, to compare one partition-model pair $\left( {\textbf{\textit{b}}}_0, M_0 \right)$ and an alternative pair $\left( {\textbf{\textit{b}}}_1, M_1 \right)$, we can compute the posterior odds ratio, \begin{equation} \Lambda = \frac{P\left( {\textbf{\textit{b}}}_0, M_0 | {\textbf{\textit{A}}} \right)}{P\left( {\textbf{\textit{b}}}_1, M_1 | {\textbf{\textit{A}}} \right)} = \frac{P\left({\textbf{\textit{A}}}, {\textbf{\textit{b}}}_0 | M_0 \right)}{P\left({\textbf{\textit{A}}}, {\textbf{\textit{b}}}_1 | M_1 \right)} \times \frac{P\left( M_0\right)}{P\left( M_1\right)} \ . \end{equation} Model $\left( {\textbf{\textit{b}}}_0, M_0 \right)$ is favored when $\Lambda > 1$ and model $\left( {\textbf{\textit{b}}}_1, M_1 \right)$ is favored when $\Lambda < 1$, with the magnitude of difference from $\Lambda=1$ indicating the degree of confidence in model selection~\cite{jeffreysTheoryProbability1998}. 
In the absence of any {\it a priori} preference for either model, $P\left(M_0\right) = P\left(M_1\right)$, meaning that the ratio of probabilities $\Lambda$ can be alternatively expressed via the difference in description lengths, $\Lambda \equiv \exp({\Sigma_1 - \Sigma_0})$. [Recall that the description length $\Sigma_\ell$ for the combined model $({\textbf{\textit{b}}}_\ell,M_\ell)$ and data ${\textbf{\textit{A}}}$ can be written as the negative log of the posterior probability, as introduced in Sec.~\ref{sec:nonparametric}.] In what follows, we compare the hSBM to the biSBM and without loss of generality choose $M_1$ to be whichever model is favored so that $\Lambda$ simply expresses the magnitude of the odds ratio. Note that by construction, the biSBM always outperforms the flat SBM. As predicted in the previous section, the biSBM's flat prior is better when networks are smaller and sparser, while for larger networks the hSBM generally performs better by building a hierarchy that results in a more parsimonious model (Table~\ref{table:benchmark}). Indeed, the majority of larger networks are better described using the hSBM (Table~\ref{table:benchmark}; rightmost columns), but exceptions do exist, including the ancient metabolic network~\cite{goldfordRemnantsAncientMetabolism2017}, YouTube memberships~\cite{misloveMeasurementAnalysisOnline2007}, and DBpedia writer network~\cite{auerDBpediaNucleusWeb2007}, which share the common feature of low density. The Robertson plant-pollinator network~\cite{robertsonFlowersInsectsLists1928}, on the other hand, is neither small nor particularly sparse, and yet the biSBM is still weakly preferred over the hSBM. Differences between models, based only on their {\it maximum a posteriori} (i.e., minimum description length) estimates, may overlook additional complexity in the models' full posterior distributions. 
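To give a concrete sense of these magnitudes, consider the Southern women network from Table~\ref{table:benchmark} as a rough worked example, approximating the edge count as $E \approx \left(N_{\RomanNumeralCaps{1}}+N_{\RomanNumeralCaps{2}}\right)\langle k \rangle/2 \approx 89$ (an approximation, since $E$ itself is not listed in the table). The per-edge description lengths then give
\begin{equation}
\Lambda = e^{\Sigma_1 - \Sigma_0} \approx e^{89 \times \left(2.15 - 2.26\right)} \approx 6 \times 10^{-5} \ ,
\end{equation}
well below the $\Lambda < 10^{-2}$ threshold marked by a single asterisk in the table.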
We repeatedly sample from the posterior distributions of the SBM, biSBM, and hSBM for $8$ networks from Table~\ref{table:benchmark}, showing both posterior description length distributions and inferred block count distributions (Fig.~\ref{fig:empirical}). Generally, all three models exhibit similar description-length variation, but due to the 2D search introduced in Sec.~\ref{sec:fitting_algm}, the biSBM returns partitions with wider variation in $B_{\RomanNumeralCaps{1}}$ and $B_{\RomanNumeralCaps{2}}$. For instance, the drug trafficking network~\cite{cosciaKnowingWhereHow2012}, a multigraph with $N_{\RomanNumeralCaps{1}} \gg N_{\RomanNumeralCaps{2}}$, has a bimodal distribution of description lengths under the hSBM, while the biSBM finds plausible partitions for a wide variety of $B_{\RomanNumeralCaps{1}}$ values (Fig.~\ref{fig:empirical}b). On the other hand, posterior distributions for the country-language network~\cite{kunegisKONECTKoblenzNetwork2013} are all unimodal, but the biSBM finds probable states with wide variation in description length and block counts, while the hSBM samples from a small region (Fig.~\ref{fig:empirical}c). This can happen when the network is small, since the hSBM requires sufficiently complicated data to justify a hierarchy, while the biSBM finds a variety of lower description length partitions. In fact, viewing the same datasets through the lenses of these different models' priors can quite clearly shift the location of posterior peaks. This is most clearly visible in the Reuters network~\cite{lewisRCV1NewBenchmark2004}, for which the models have unambiguous and non-overlapping preferred states (Fig.~\ref{fig:empirical}f). Briefly, we note that model comparison is possible here due to the fact that all of the models we considered are SBMs with clearly specified posterior distributions. 
Broader comparisons between community detection models of entirely different classes are also possible, for which we suggest Ref.~\cite{ghasemianEvaluatingOverfitUnderfit2019}. % \section{Discussion}\label{sec:discussion} % This paper presented a bipartite microcanonical stochastic blockmodel (biSBM) and an algorithm to fit the model to network data. Our work builds on two foundations: the bipartite SBM~\cite{larremoreEfficientlyInferringCommunity2014} and a more sophisticated microstate counting approach~\cite{peixotoEntropyStochasticBlockmodel2012}. The model itself follows in the footsteps of Bayesian SBMs~\cite{peixotoHierarchicalBlockStructures2014,peixotoNonparametricBayesianInference2017} but with key modifications to the prior distribution and the search algorithm that more correctly account for the fact that some partitions are strictly prohibited when a network is bipartite. As a result, the biSBM is able to resolve community structure in bipartite networks better than the general SBM, as demonstrated in tests with synthetic networks (Fig.~\ref{fig:benchmark}). The resolution limit of the biSBM is greater than that of the general SBM by a factor of $\sqrt{2}$. We demonstrated this mathematically and in a simple biclique-finding test (Fig.~\ref{fig:resolution}). This analysis led us to directly compare the priors for the biSBM and the hierarchical SBM, which hinted at an unexpected regime in which the biSBM provides a better model than the hSBM. This regime, populated by smaller, sparser, and less hierarchical networks, was found in real data where model selection favored the biSBM (Table~\ref{table:benchmark}). \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{fig6.png} \caption[]{Scatter plots and histograms of description length for the ancient metabolic (a;~\cite{goldfordRemnantsAncientMetabolism2017})~and malaria gene similarity (b;~\cite{larremoreNetworkApproachAnalyzing2013})~networks from 100 independent experiments.
Grey vertical lines connect biSBM results with their matching h-biSBM hierarchical results. Arrows in histograms mark the MDL points from the hSBM (grey) and the h-biSBM (blue).} \label{fig:metabolic} \end{figure} How should we understand these networks that are better described by our flat model than a hierarchical one? One possibility is that these networks are simply ``flat,'' so that any hierarchical description wastes description-length bits on a model that is too complex. Another possibility is that this result can be explained not by the mathematics of the models but by the algorithms used to fit them. In fact, our tests with synthetic networks show clear differences between models {\it and} algorithms, with the 2D search algorithm introduced here providing better fits to data than a 1D search (Fig.~\ref{fig:benchmark}). However, this finding alone does not actually differentiate between the two possible explanations, and so we constructed the following simple test. To probe the differences between the biSBM and hSBM as {\it models} vs differences in their model-fitting {\it algorithms}, we combined both approaches in a two-step protocol: Fit the biSBM to network data and then build an optimal hierarchical model upon that fixed biSBM base. Unless the data are completely flat, this hierarchy-building process will further reduce the description length, providing a more parsimonious model. If the hybrid h-biSBM provides a superior description length to the hSBM, our observations can be attributed to differences in model-fitting algorithms. In fact, this is precisely what we find. Figure~\ref{fig:metabolic} shows repeated application of the biSBM, hSBM, and hybrid h-biSBM to the ancient metabolic network~\cite{goldfordRemnantsAncientMetabolism2017} and the malaria genes network~\cite{larremoreNetworkApproachAnalyzing2013}.
In the ancient metabolic network, the biSBM already outperformed the hSBM, so the hybrid model results in only marginal improvements in description length. However, the hybrid model also creates hierarchies with an average depth of $\langle L \rangle = 3.85$ layers, compared with the $\langle L \rangle = 3.27$ layers found by the hSBM natively. In other words, we can achieve a deeper hierarchy in addition to a more parsimonious model when using the flat biSBM partition at the lowest level. This suggests that, in fact, not all of the hSBM's underperformance can be attributed to the ancient metabolic network's being ``flat,'' since a hierarchy can be constructed upon the biSBM's inferred structure. In the malaria genes network, although the hSBM outperformed the biSBM, the hybrid model was superior to both. Since the hybrid partitions are, in principle, available to the hSBM, our conclusion is that the 2D search algorithm we presented is actually finding better partitions. Put another way, there are further opportunities to improve the depth and speed of algorithms that fit stochastic blockmodels to real-world data, particularly when bipartite or other structure in the data can be exploited. Finally, this work shows how both models and algorithms can reflect the structural constraints of real-world network data, and how doing so improves model quality. While our work addresses only community detection for bipartite networks, generalizations of both the mathematics and the search algorithms could in principle be derived for multi-partite networks in which more complicated rules exist for how node types are allowed to connect. \acknowledgements The authors thank Tiago Peixoto, Tatsuro Kawamoto, Pan Zhang, Joshua Grochow, and Jean-Gabriel Young for stimulating discussions. DBL was supported in part by the Santa Fe Institute Omidyar Fellowship.
The authors thank the BioFrontiers Institute at the University of Colorado Boulder and the Santa Fe Institute for the use of their computational facilities. %
\section{Introduction} Classical T~Tauri stars (CTTS) are pre-main sequence, low mass stars (M~$\lesssim $~2\,M$_{\odot}$) that are still accreting matter from a surrounding circumstellar disk at a significant level. CTTS show an emission line spectrum and are often characterized by strong veiling and a large H$\alpha$ equivalent width, formerly the main classifier of this class. The H$\alpha$ equivalent width criterion for a CTTS is, however, spectral-type dependent. Further studies have shown that the 10\,\% width of the H$\alpha$ line is mostly determined by the accretion streams and is less spectral-type dependent and a more reliable tracer of accretion, especially in low mass stars \citep[see e.g.][]{whi03}. Besides H$\alpha$, many other lines have been used to quantitatively study accretion properties in young stars. In addition to emission lines, CTTS show a strong IR-excess from their circumstellar disk that also provides a diagnostic on the inclination of the system. CTTS evolve via transitional objects, where the disk starts to dissipate and accretion rates decrease, into weak-line T~Tauri stars (WTTS) that are virtually disk-less and do not show strong signs of ongoing accretion. T~Tauri stars are known to be strong and variable X-ray emitters from {\it Einstein} and {\it ROSAT} observations. X-ray studies of star forming regions such as COUP \citep[{\it Chandra} Orion Ultradeep Project,][]{get05} or XEST \citep[{\it XMM-Newton} Extended Survey of the Taurus Molecular Cloud,][]{gue07} confirmed that T~Tauri stars show high levels of magnetic activity as evidenced by hot coronal plasma and strong flaring, but also refined the general X-ray picture of young stellar objects (YSOs). In the commonly accepted magnetospheric accretion model for CTTS \citep[e.g.][]{koe91}, material is accreted from the circumstellar disk onto the star along magnetic field lines, which disrupt the accretion disk in the vicinity of the corotation radius.
Since the infalling material originates from the disk truncation radius, typically located at several stellar radii, it reaches almost free-fall velocity and upon impact the supersonic flow forms a strong shock near the stellar surface. The funneling of the accreted matter by the magnetic field leads to the formation of accretion spots that have small surface filling factors \citep{cal98} and produce strong optical/UV and X-ray emission \citep{lam98, guenther07}. The X-ray emission from accretion spots on CTTS has specific signatures which are detectable with high-resolution X-ray spectroscopy. Accretion shocks generate plasma with temperatures of up to a few MK that is significantly cooler than the average coronal plasma, and sufficiently funneled accretion streams are expected to produce X-rays in a high-density environment ($n_{e} \gtrsim 10^{11}$\,cm$^{-3}$ measured in \ion{O}{vii}). In contrast, plasma produced by magnetic activity covers a much broader temperature range, spanning in total 1\,--\,100~MK, and is on average hotter \citep[$T_{\rm av} \approx 10 - 20$~MK for CTTS in Taurus,][]{tel07}. Furthermore, it typically has, at least outside large flares, much lower densities \citep[$n_{e} \lesssim 3\times 10^{10}$\,cm$^{-3}$,][]{ness04}. The accretion streams may also influence coronal structures on the stellar surface or lead to additional magnetic activity via star-disk interaction. Moreover, the accretion process is accompanied by outflows or winds from the star and the surrounding disk, which play an important role in star formation via the transport of angular momentum. Stellar jets and associated shocks provide another X-ray production mechanism, which typically generates cool plasma at low densities, as seen in several T~Tauri stars such as DG~Tau \citep{gue05}. X-ray diagnostics like density and temperature sensitive X-ray line ratios can be utilized to distinguish between the different scenarios.
TW~Hya was the first and is still the most prominent CTTS that is dominated by accretion shocks in X-rays. Its X-ray spectrum shows high density plasma as evidenced by density sensitive lines in the He-like triplets of oxygen and neon and an unusually cool plasma distribution \citep{kas02,ste04}. TW~Hya has been extensively studied in X-rays and a very deep {\it Chandra} observation suggests that the accreted and shock-heated material mixes with surrounding coronal material, likely producing a complex distribution of emission regions around the accretion spots \citep{bri10}. So far all low-mass CTTS studied at X-ray energies show similar signs of accretion plasma; classic examples include BP~Tau \citep{schmitt05}, V4046~Sgr \citep{guenther06,arg12}, MP~Mus \citep{arg07} and RU~Lup \citep{rob07b}. In contrast, T~Tau itself shows a strong cool plasma component but a low plasma density \citep{gue07a}, although the system is dominated by the intermediate mass T~Tauri star T~Tau~N ($M \approx 2.4~M_{\odot}$). In a comparative study of several bright CTTS it was shown that the presence of X-rays from both accretion shocks and magnetic activity is likely universal, but that the respective contributions differ significantly between the individual objects \citep{rob06}. Indeed, magnetic activity produces the bulk of the observed X-ray emission in the majority of CTTS in the 0.2\,--\,2.0~keV band and completely dominates at higher energies. In addition, X-ray temperature diagnostics have shown that all accreting stars exhibit an excess of shock-generated cooler plasma, leading to a soft excess when compared to coronal sources \citep{rob07b, tel07, gue07b}. The observed X-ray spectrum of young stars is modified by often significant absorption by circumstellar or disk material as well as by outflowing and infalling matter. X-ray absorption can exceed optical extinction by an order of magnitude as shown e.g.
for RU~Lup \citep{rob07b} and may even be strongly time variable as in AA~Tau \citep{schmitt07}. The different stellar properties such as mass, rotation and activity, varying mass accretion rates and degree of funneling as well as the viewing angle dependence naturally lead to the variety of X-ray phenomena in YSOs that are an inter-mixture of magnetic activity, accretion and outflow processes. High resolution X-ray spectra of young accreting stars are available in only a few cases and existing studies focused on the more massive CTTS with spectral type G or K. Young low-mass stars with M~$\lesssim$~0.5\,M$_{\odot}$ are typically X-ray fainter and so far only poorly studied; the transitional multiple system \hbox{Hen 3-600} \citep{huene07} is one of the rare examples. Nevertheless, they are the most common stars and their investigation is of great astrophysical interest for drawing a more complete and general picture of the evolution of young stars and their surrounding environment, where stellar winds and UV/X-ray emission influence the chemistry and evolution of the circumstellar disk and the process of planet formation. \section{The target: DN Tau} \label{tar} Our target star \object{DN Tau} is an M0-type CTTS located in the Taurus Molecular Cloud (TMC) at a distance of $d=140$~pc \citep{coh79}; important stellar parameters collected from the literature are summarized in Table\,\ref{pro}. DN~Tau is a single star on a fully convective track with an estimated age in the range of 0.5\,--\,1.7~Myr. Its optical extinction is quite low, indicating that DN~Tau is not deeply embedded in circumstellar material or the TMC. While classical estimates of stellar luminosity, mass and radius are about $L_{*}= 1.0~L_{\odot}$, $M_{*}= 0.5~M_{\odot}$ and $R_{*}= 2.1~R_{\odot}$, \cite{donati13} find a slightly hotter, smaller and less luminous model of DN~Tau ($L_{*}= 0.8~L_{\odot}$, $M_{*}= 0.65~M_{\odot}$, $R_{*}= 1.9~R_{\odot}$) adopting an optically measured $A_{V} =0.5$.
In contrast, \cite{ing13} find a much brighter and larger DN~Tau ($L_{*}= 1.5~L_{\odot}$, $M_{*}= 0.6~M_{\odot}$, $R_{*}= 2.8~R_{\odot}$) when using $A_{V} =0.9$, which is based on $A_{J}=0.29$ from IR measurements \citep{fur11}. The CTTS nature of DN~Tau is reflected by a typical EW\,[H$\alpha$]~$ = 12 - 18$~\AA{} and a H$\alpha$ 10\,\% width in the range of $290 - 340$~km\,s$^{-1}$ \citep{herb88, whi04,ngu09} and a moderate IR excess, making it a Class II source based on its far-IR SED \citep{ken95}. DN~Tau is a variable, but typically moderate accretor that exhibits little UV excess; e.g. \cite{gull98} derived $L_{\rm acc} = 0.016~L_{\odot}$ and a weak optical veiling of $r=0.075$. Nevertheless, the infalling plasma on DN~Tau is apparently well funneled with an accretion spot filling factor of $f=0.005$ \citep{cal98}. \cite{ing13} modeled broad-band optical and UV data in a similar approach but with multiple accretion columns and found, depending on the absence/presence of `hidden' low flux accretion emission, $f=0.002/0.06$ and $\log \dot{M}_{\rm acc} = -8/-7.8~M_{\odot}$\,yr$^{-1}$. \cite{donati13} give $\log \dot{M}_{\rm acc} = -9.1 \pm 0.3~M_{\odot}$\,yr$^{-1}$ as the average over their accretion proxies. While the reliability of the various methods used to obtain quantitative estimates of the mass accretion rates is debated, highly variable accretion properties of DN~Tau are observed; \cite{fern95} measured an EW\,[H$\alpha$] declining from 87 to 15~\AA{} within four days. DN~Tau has only a weak outflow; while outflow rates of about 5\,--\,10\,\% of the accretion rate are typically estimated, \cite{whi04} give a 2\,\% upper limit derived from their data. \begin{table}[t] \begin{center} \caption{\label{pro}Stellar properties of DN Tau from optical measurements.} \begin{tabular}{lcr} \hline\hline\\[-3mm] Sp. type & M\,0\,$^{1,2,3}$ & \\ $T_{\rm eff}$ & 3800\,$^{3}$ ... 3850\,$^{1}$ ... 3950$\pm 50$\,$^{4}$ & K\\ $M_{*}$& 0.4\,$^{1,2}$ ... 0.5\,$^{3}$ ...
0.65$\pm 0.05$\,$^{4}$& $M_{\odot}$\\ $R_{*}$& 1.9$\pm 0.2$\,$^{4}$ ... 2.1\,$^{2}$ & $R_{\odot}$\\ $L_{\rm bol}$ & 0.8$\pm 0.2$\,$^{4}$ ... 0.9\,$^{2}$ ... 1.0\,$^{1}$ & $L_{\odot}$\\ $A_{V}$ & 0.25\,$^{2}$ ... 0.5\,$^{1}$& mag\\ $\log \dot{M}_{\rm acc}$ & -7.8\,$^{3}$ ... -8.5\,$^{2}$ ... -9.1$\pm 0.3$\,$^{4}$ & $M_{\odot}$\,yr$^{-1}$\\\hline \end{tabular} \end{center} \noindent {\scriptsize $^{1}$ \cite{ken95}, $^{2}$ \cite{gull98}, $^{3}$ \cite{whi04}, \\$^{4}$ \cite{donati13}} \end{table} Photometric variations of DN~Tau's optical brightness were first reported with a period of about $P_{\rm rot} \approx 6.0$~d \citep{bou86}, later refined to $P_{\rm rot} = 6.3$~d \citep{vrba93}. This variability can be interpreted as rotational modulation of a large magnetic spot or spot group with a surface coverage of up to 35\,\%. Strong magnetic activity on DN~Tau is also implied by its large inferred mean magnetic field of 2~kG \citep{joh07}. Results from spectropolarimetric observations with ESPaDOnS/CFHT \citep{donati13} show a simple magnetic topology that is largely axisymmetric and mostly poloidal, with a dominant octupolar and a weaker dipolar component of 0.6\,--\,0.8~kG and 0.3\,--\,0.5~kG polar strength, respectively. \cite{muz03} present near-IR spectra of DN~Tau from which they inferred an inner (dust-)disk rim located at 0.07~AU ($\approx 7~R_{*}$), notably the closest disk rim in their sample. The disk of DN~Tau, with $M_{d} = 0.03~M_{\odot}$ as deduced from submillimeter observations, is quite massive, roughly an order of magnitude above the median mass found for the Class~II sources in the sample of \cite{and05}. DN~Tau is viewed under an intermediate inclination; \cite{muz03} inferred an inclination of $i=28\pm 10^{\circ}$ from IR data, quite similar to the estimate of $i=35\pm 10^{\circ}$ by \cite{donati13}.
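The rotation period and inclination estimates above, combined with a measured projected rotational velocity $v \sin i$, yield an independent estimate of the stellar radius via the standard kinematic relation
\begin{equation}
R_{*} = \frac{P_{\rm rot}\, v \sin i}{2 \pi \sin i} \ ,
\end{equation}
with $P_{\rm rot}$ converted to seconds; the spread of published $v \sin i$ values then translates directly into a spread of radius estimates.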
Adopting $P_{\rm rot} = 6.3$~d, $i=33^{\circ}$ and combining these data with the rotational velocity of $v$\,sin\,$i= 12.3 \pm 0.6$~km\,s$^{-1}$ \citep{ngu09}, $v$\,sin\,$i = 9 \pm 1$~km\,s$^{-1}$ \citep{donati13} or $v$\,sin\,$i= 10.2$~km\,s$^{-1}$ \citep{hart89}, we obtain $R_{*} \approx 2.8~R_{\odot}$, $R_{*} \approx 2.0~R_{\odot}$ and $R_{*} \approx 2.3~R_{\odot}$, respectively. X-ray emission from DN~Tau was first detected with {\it Einstein} \citep{wal81} and later by {\it ROSAT} \citep{neu95}; both at a similar X-ray luminosity of $\log L_{\rm X} \approx 29.7$~erg\,s$^{-1}$, albeit with significant errors. DN~Tau has been observed by {\it XMM-Newton} in 2005 as part of the XEST project (No. 12-040); an analysis of these data is presented in \cite{tel07}. They derived basic X-ray properties from an EMD model and a multi-temperature fit; both methods give $L_{\rm X} = 1.2 \times 10^{30}$~erg\,s$^{-1}$ and an average coronal temperature of about 12\,--\,14~MK. DN~Tau is among the X-ray brighter CTTS when compared to similar objects in the XEST or COUP samples, and its X-ray activity level of $\log L_{\rm X}/L_{\rm bol} \approx -3.5$ is close to, but still about a factor of three below, the saturation limit at $\log L_{\rm X}/L_{\rm bol} \approx -3$. We re-observed DN~Tau in 2010 with {\it XMM-Newton}, primarily to obtain a more deeply exposed high-resolution X-ray spectrum with the aim of expanding the sample of emission-line studied CTTS into the lower mass regime. In this paper we present an analysis of the new {\it XMM-Newton} observations of DN~Tau and compare them to earlier observations. Our paper is structured as follows: in Sect.\,\ref{obsana} the X-ray observations and the data analysis are described, in Sect.\,\ref{results} we present our results subdivided into different physical topics, in Sect.\,\ref{comp} we discuss our DN~Tau results and compare them to other CTTS, and we end with a summary in Sect.\,\ref{sum}.
\begin{table}[t] \begin{center} \caption{\label{log} XMM-Newton observing log of DN Tau.} \begin{tabular}{rrr}\hline Date & Obs. ID. & Dur. MOS/PN (ks) \\\hline\\[-3.mm] 2005-03-04/05& 0203542101 & 31/29 \\ 2010-08-18/19& 0651120101 & 119/118 \\\hline \end{tabular} \end{center} \end{table} \section{Observations and data analysis} \label{obsana} The target DN Tau was observed by {\it XMM-Newton} twice; a 30~ks exposure was obtained for the XEST survey in 2005 (PI: Guedel) and a 120~ks exposure was obtained in 2010 (PI: Robrade). We focus on the deeper 2010 exposure, but also re-analyze the 2005 data in an identical fashion to ensure consistency throughout this work. Data were taken with all X-ray detectors, i.e. the EPIC (European Photon Imaging Camera) and the RGS (Reflection Grating Spectrometer), as well as the optical monitor (OM). The EPIC consists of two MOS and one PN detector; the PN is the more sensitive instrument, whereas the MOS detectors have a slightly higher spectral resolution. The EPIC instruments were operated in both observations in the full frame mode with the medium filter, allowing a direct comparison of the data. The OM was operated in the fast mode with the U filter (eff. wavelength 3440~\AA) in 2005 and the UVW1 filter (eff. wavelength 2910~\AA) in 2010. A detailed description of the instruments can be found in the 'XMM-Newton Users Handbook' (http://xmm.esac.esa.int); the data used are summarized in Table\,\ref{log}. All data analysis was carried out with the {\it XMM-Newton} Science Analysis System (SAS) version~11.0 \citep{sas} and standard SAS tools were used to produce images, light curves and spectra. Standard selection criteria were applied to the data, light curves are background subtracted and we exclude periods of high background from spectral analysis. Source photons from the EPIC detectors were extracted from circular regions around DN~Tau and the background was taken from nearby source-free regions.
The RGS data of DN~Tau have only a moderate S/N; we therefore extracted spectra from a 90\,\%~PSF source region to reduce the background contribution. The data of the X-ray detectors are analyzed independently for each observation to study variability and to cross-check the results from the different instruments. We note that some degradation of the RGS detector has occurred between the exposures, while the effective area of the EPIC detectors shows only minor changes. Spectral analysis was carried out with XSPEC V12.6 \citep{xspec}, and we used multi-temperature APEC/VAPEC models \citep{apec} with abundances relative to the solar photospheric values given by \cite{grsa} to derive X-ray properties such as luminosities or emission measure distributions (EMDs). We find that photoelectrically absorbed three-temperature models adequately describe the data, but note that some of the fit parameters are mutually dependent, e.g. absolute abundances and emission measures; emission measures and temperatures of neighboring components; or the absorption column density, temperature and emission measure of the cool spectral components. Spectra are re-binned for modeling; errors in the spectral models are given by their 90\,\% confidence ranges and were calculated by allowing variations of the normalizations and the respective model parameters. Additional uncertainties may arise from errors in the atomic data and the instrumental calibration. For line fitting purposes we use the CORA program \citep{cora}, assuming identical line widths and Lorentzian line shapes. Emitted line fluxes are corrected for absorption by using the {\it ismtau}-tool of the PINTofALE software \citep{poa}, and flux conversion is performed with the SAS tool {\it rgsfluxer}. \section{Results} \label{results} Here we report on the results obtained from the {\it XMM-Newton} observations, subdivided into separate topics.
\begin{figure}[t] \includegraphics[width=89mm]{dntau_fig1.eps} \caption{\label{lcs} X-ray light curves of DN Tau in 2005 and 2010, 0.2\,--\,5.0~keV EPIC data with 1~ks binning.} \end{figure} \begin{figure}[t] \includegraphics[width=90mm]{dntau_fig2.eps} \caption{\label{omall} OM light curves from 2005 (U filter) and 2010 (UVW1 filter), 300~s binning each.} \end{figure} \subsection{X-ray light curves and hardness} The X-ray light curves of DN~Tau as obtained from the summed EPIC data are shown in Fig.\,\ref{lcs}; here we use the 0.2\,--\,5.0~keV energy band and a 1~ks temporal binning. Some variability and minor activity are present in both observations, but large fractions of the X-ray light curves are quite flat; only one moderate flare, visible at 55\,--\,60~ks with a factor of two increase in count rate, and two smaller events, peaking at about 75~ks and 95~ks, are detected during the 2010 exposure. The features at the beginning and end of the 2005 observation are likely also decay and rise phases of partly covered flares. Except for a 20\,--\,30\,\% higher average count rate in 2010, the level of variability within each observation period is comparable. A long-term declining trend in the X-ray count rate of roughly 10\,\% is seen in the 2010 data and might be due to rotational modulation. The 1.4~d observation has a phase coverage of about 0.22, and given the moderate inclination of DN~Tau, rotational modulation can be expected for surface features at low and intermediate latitudes. \begin{figure}[t] \includegraphics[width=90mm]{dntau_fig3.eps} \caption{\label{hrepic} Hardness ratio of DN Tau from EPIC data, 2010 (black), 2005 (blue); typical errors are indicated in the upper right corner.} \end{figure} We investigate the basic spectral state of DN~Tau for both exposures and its evolution with a hardness ratio analysis, $HR=(H-S)/(H+S)$, with 0.2\,--\,0.8~keV as the soft band and 0.8\,--\,5.0~keV as the hard band.
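The hardness ratio is a simple band-count diagnostic; a minimal sketch with illustrative (not measured) band counts:

```python
def hardness_ratio(soft, hard):
    """HR = (H - S) / (H + S) for background-subtracted counts in the
    soft (0.2-0.8 keV) and hard (0.8-5.0 keV) bands."""
    return (hard - soft) / (hard + soft)

# Equal band counts give a neutral HR = 0; a hard-band excess drives HR
# positive, a soft-band excess negative (the range is -1 to +1).
print(hardness_ratio(100, 100))  # 0.0
print(hardness_ratio(80, 120))   # 0.2
```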
The energy bands are chosen such that the X-ray emission in our soft band is predominantly produced by plasma at temperatures of 2\,--\,5~MK, whereas the hard band is dominated by emission from hotter plasma at 5\,--\,20~MK; however, a moderate shift of the band-separation energy does not influence the results. As shown in Fig.\,\ref{hrepic}, the positive correlation between X-ray brightness and spectral hardness that is typically observed for magnetic activity is generally not present in DN~Tau. We detect a spectral hardening during the larger flares in 2010, but overall no clear correlation between brightness and hardness emerges. More active coronal stars typically exhibit harder spectra \citep[see e.g.][]{schmitt97}, and a similar trend is also seen when studying the temporal behavior of individual objects, including CTTS \citep{rob06}. Remarkably, we find that the X-ray fainter state in 2005 is overall characterized by harder emission than the brighter 2010 state. In addition, the individual observation periods again show a broad scatter and only marginal correlations. Similar conclusions are obtained when inspecting the time evolution of the hardness ratio. \subsection{UV light curves, flares and UV/X-ray correlations} The OM light curves of DN Tau are plotted in Fig.\,\ref{omall}; the associated brightness errors are of the order of 0.01\,mag and below the size of the plotted symbols. Note that the DN~Tau observation from 2005 was performed in the U band filter (NUV), while in 2010 the bluer, but less sensitive, UVW1 filter (NUV-MUV) was used. Comparing the UV light curves with the X-ray ones (Fig.\,\ref{lcs}), it is evident that the short-term variations of the 2005 U band data do not strictly correlate with those of the X-ray brightness, suggesting a different origin of the respective emission.
Similarly, \cite{vrba93} find the U band brightness variations to be rather stochastic and not modulated by the 6.3~d rotation period, unlike the other optical bands (BVRI). This indicates that a significant fraction of the UV flux is not associated with the dark spots that are interpreted as magnetically active regions. Further, except for a shorter observing period where an out-of-phase modulation was found, a stable accretion configuration, i.e. a dominant hot spot, was not present; overall, their U band variations of $\pm$0.6~mag are much larger than the 0.1\,--\,0.2~mag seen in BVRI. Looking at long-term variations, DN~Tau has apparently brightened in the UV range over the last decades without comparable changes in the optical bands. While there is also significant variability on shorter timescales, the U band brightness increased on average from about $14.5~(14.0-14.9)$~mag during the monitoring in the 1980s, via $13.9~(13.3-14.3)$~mag in the early 1990s \citep{gran07}, to $13.25~(13.12-13.36)$~mag (1.5~ks {\it XMM-Newton} average) in 2005. Also the moderately 'harder' UVW1 flux during the quasi-quiescent part of the 2010 observation is apparently not correlated with the X-ray brightness. During our observation the UV flux varies significantly on timescales of minutes to hours; for example, at about 25~ks we observe an increase of the UVW1 rate by roughly 20\,\% within a few ks, but without the corresponding X-ray signature that would be expected for magnetic activity. Since the photospheric UV emission is negligible in M-type stars, the observed behavior favors a scenario where the bulk of the UV emission is related to several accretion spots located on the surface of DN~Tau and the variability is created by changes in geometry and/or variable spot brightness.
A rough estimate of the relative contributions from magnetic activity and accretion to the UV flux of DN~Tau can be obtained by a comparison to purely magnetically active sources, under the assumption of similar X-ray generating coronae and magnetically induced chromospheric UV emission. Here we use the active mid-M dwarf EV~Lac \citep{mit05}, which was observed with the same instrumental setup as DN~Tau in 2010. Accounting for radius and distance, i.e. enlarging EV~Lac to the size of DN~Tau and putting it at the same distance, we find that a scaled-up version of an active M dwarf outshines DN~Tau by a factor of 1.5\,--\,2.0 in X-rays, but would only produce 15\,--\,20\,\% of its UV flux. When scaling these values to DN~Tau's true X-ray emission, i.e. accounting for the X-ray overluminosity of the scaled-up M dwarf, only about 10\,\% of the UV flux from DN~Tau is attributable to magnetic activity. While a mild suppression of the X-ray brightness in accreting vs. non-accreting T Tauri stars is quite typical and might be related to phenomena not present on M dwarfs, this comparison shows that the bulk of the UV emission from DN~Tau is generated in the accretion shocks. \begin{figure}[t] \includegraphics[width=90mm]{dntau_fig4.eps} \caption{\label{om2010}The largest 2010 flare as observed in X-rays (black, 300~s binning) and in the UV (red, 60~s binning).} \end{figure} In contrast, the three X-ray flares observed in 2010 have clear counterparts in the UVW1 data; the most prominent is the largest flare, starting around 55~ks. In Fig.\,\ref{om2010} we show its UV light curve overplotted with the X-ray light curve, scaled to the same quasi-quiescent pre-flare level for clarity. The UV emission precedes the X-rays and peaks about 10~min earlier.
This behavior indicates an energy release via magnetic reconnection, succeeded by evaporation of fresh material from the stellar surface that is subsequently heated to X-ray emitting temperatures; flare events like these are frequently observed on the Sun and low-mass stars. Using results from spectral modelling (see Sect.\,\ref{spec}), we estimate for the largest flare a peak luminosity of $L_{\rm X} = 3 \times 10^{30}$\,erg\,s$^{-1}$, an energy release of about $2.5 \times 10^{33}$\,erg at X-ray energies and a loop length of about $L \approx 0.15~R_{*}$, i.e. an event in a compact coronal structure located close to the stellar surface of DN~Tau. The time evolution of the flare is dominated by the initial event, but shows sub-structure as visible in the optical plateau and the secondary X-ray peak, indicating subsequent magnetic activity. About one hour after the flare onset the X-ray and UV light curves roughly reach their pre-flare values again. The two smaller X-ray flares show even more complex light curves. They probably result from an overlay of multiple events, for example several magnetic reconnections occurring within a short time interval in an active region or region complex. Both flares again show UV counterparts, but these are less pronounced than during the large event. \subsection{Global X-ray properties from CCD spectroscopy} \label{spec} To study the global spectral properties of the X-ray emission from DN Tau we use the EPIC data. As an example of the spectral quality we show in Fig.~\ref{pnspec} the PN spectra and corresponding models for the two observations; in the inset we show the X-ray emission at high energies which is discussed below. Visual inspection already shows that major changes have occurred in the softer X-ray regime, whereas the spectra above 1~keV are virtually identical. 
The similar spectral slopes suggest that the changes at low energies are not caused by variable line-of-sight absorption, but are intrinsic to the emission of DN~Tau. \begin{figure}[t] \includegraphics[height=88mm,angle=-90]{dntau_fig5a.ps} \vspace*{-53.mm}\hspace*{61.5mm} \includegraphics[height=24mm,angle=-90]{dntau_fig5b.ps} \vspace*{27mm} \caption{\label{pnspec}X-ray spectra of DN Tau ({\it crosses}, PN), spectral fit ({\it histogram}) and model components ({\it dashed}) for the two observations: 2010 (black), 2005 (blue). {\it Inset}: The spectrum above 6.0\,keV during the active (black) and quasi-quiescent (red) half in 2010.} \end{figure} The spectral properties of DN~Tau and their changes between the two observations are quantified by modeling the spectra in iterative steps to obtain the most robust results. We first investigated potential effects of the flares on the total 2010 spectrum, but found no differences between the models for the quasi-quiescent ($t < 55$~ks) and the active half, except for small re-normalizations of a few percent. In the next step we fitted the total data of each observation individually. No significant changes in absorption column density and coronal abundances were found between the 2005 and 2010 datasets, and we therefore tied these parameters. We then modeled both observations simultaneously, with temperatures and emission measures (EM) as free parameters. We first used the MOS detectors, which have the better spectral resolution, to determine EMDs, abundances and absorption, and cross-checked our results with the PN data, where we also tied the temperatures. The derived model parameters are given in Table\,\ref{specres}, and a comparison shows that the overall coronal temperature structure and the EMD changes are independent of the data used. The X-ray luminosities are the emitted ones, i.e. they are absorption corrected; the observed values are given in brackets.
We find for the 2010 (2005) observation X-ray luminosities of $\log L_{\rm X} = 30.2\,(30.1)$~erg\,s$^{-1}$, average coronal temperatures of $T_{\rm X} = 13\,(18)$~MK and an activity level of $\log L_{\rm X}/L_{\rm bol} \approx -3.3$, emphasizing that DN~Tau is among the most active and X-ray brightest CTTS with respect to its mass or effective temperature. \begin{table}[t] \caption{\label{specres}Spectral fit results for DN Tau, EPIC data. Parameters absent from the PN results are adopted from the MOS modeling.} \begin{center} \begin{tabular}{lrrr}\hline\hline\\[-3.1mm] Par. & \multicolumn{1}{c}{2005} & \multicolumn{1}{c}{2010} & unit\\\hline\\[-3mm] & \multicolumn{2}{c}{MOS}&\\\hline\\[-3mm] $N_{\rm H}$ & \multicolumn{2}{c}{0.8$^{+ 0.1}_{- 0.1}$} & $10^{21}$cm$^{-2}$\\[1mm] kT1 &0.17$^{+ 0.05}_{- 0.03}$ &0.23$^{+ 0.03}_{- 0.03}$& keV \\[1mm] kT2 &0.60$^{+ 0.07}_{- 0.06}$ &0.64$^{+ 0.03}_{- 0.03}$& keV\\[1mm] kT3 & 2.27$^{+ 0.33}_{- 0.21}$ &1.91$^{+ 0.15}_{- 0.14}$& keV \\[1mm] EM1 & 0.8$^{+ 0.9}_{- 0.5}$& 2.0$^{+ 0.6}_{- 0.5}$& $10^{52}$cm$^{-3}$\\[1mm] EM2 & 3.7$^{+ 0.5}_{- 0.4}$& 5.6$^{+ 0.5}_{- 0.6}$& $10^{52}$cm$^{-3}$\\[1mm] EM3 & 6.4$^{+ 0.6}_{- 0.5}$ & 5.3$^{+ 0.5}_{- 0.4}$ & $10^{52}$cm$^{-3}$\\[1mm] Mg (7.6 eV) & \multicolumn{2}{c}{0.52$^{+ 0.26}_{- 0.18}$} & solar \\[1mm] Fe (7.9 eV) & \multicolumn{2}{c}{0.35$^{+ 0.12}_{- 0.10}$ }& solar\\[1mm] Si (8.2 eV)& \multicolumn{2}{c}{0.32$^{+ 0.14}_{- 0.12}$} & solar\\[1mm] S (10.4 eV) & \multicolumn{2}{c}{0.24$^{+ 0.18}_{- 0.17}$} & solar\\[1mm] O (13.6 eV) & \multicolumn{2}{c}{0.65$^{+ 0.28}_{- 0.16}$} & solar\\[1mm] Ne (21.6 eV) & \multicolumn{2}{c}{1.51$^{+ 0.49}_{- 0.38}$} &solar\\[1mm] \hline\\[-3mm] $\chi^2_{red}${\tiny(d.o.f.)} & \multicolumn{2}{c}{1.05 (432)} & \\[0.5mm]\hline\\[-3mm] $L_{\rm X}$ {\tiny (0.2-8.0 keV)} & 1.37 (0.99)& 1.64 (1.13)& $10^{30}$\,erg\,s$^{-1}$\\\hline\\[-2mm] &\multicolumn{2}{c}{PN}&\\\hline\\[-3mm] kT1 &\multicolumn{2}{c}{0.24$^{+ 0.03}_{- 0.03}$} & keV \\[1mm] kT2
&\multicolumn{2}{c}{0.64$^{+ 0.02}_{- 0.02}$} & keV\\[1mm] kT3 & \multicolumn{2}{c}{1.95$^{+ 0.11}_{- 0.10}$} & keV \\[1mm] EM1 & 0.5$^{+ 0.2}_{- 0.3}$& 2.0$^{+ 0.3}_{- 0.3}$& $10^{52}$cm$^{-3}$\\[1mm] EM2 & 3.5$^{+ 0.4}_{- 0.4}$& 5.6$^{+ 0.3}_{- 0.4}$& $10^{52}$cm$^{-3}$\\[1mm] EM3 & 5.9$^{+ 0.3}_{- 0.4}$ & 5.2$^{+ 0.4}_{- 0.3}$ & $10^{52}$cm$^{-3}$\\[1mm] \hline\\[-3mm] $\chi^2_{red}${\tiny(d.o.f.)} & \multicolumn{2}{c}{1.04 (508)} & \\[0.5mm]\hline\\[-3mm] $L_{\rm X}$ {\tiny (0.2-8.0 keV)} & 1.24 (0.90)& 1.62 (1.12)& $10^{30}$\,erg\,s$^{-1}$\\\hline \end{tabular} \end{center} \end{table} The emission measure distributions show that intermediate ($\sim$~6\,--\,8~MK) and high ($\gtrsim$~20~MK) temperature plasma dominates the X-ray emission from DN~Tau in both observations, whereas the cool component around 2~MK contributes only about 5\,\% (2005) and 15\,\% (2010) to the total emission measure. The respective contributions of the three components to the spectral model are shown by the dashed lines in Fig.\,\ref{pnspec}. While the fitted temperatures are comparable between the 2005 and 2010 data for all plasma components, the emission measures of the individual components vary distinctly. We find a strong EM increase by roughly a factor of three in the cool component and a moderate increase of 50\,\% in the intermediate temperature component. In contrast, a constant EM or even a moderate decrease is present in the hot component. In relative terms the EM increase is most pronounced in the cool component, but in absolute terms the increase in the intermediate temperature component is at least comparable or even slightly larger. The X-ray luminosity of $1.6 \times 10^{30}$\,erg\,s$^{-1}$ obtained for the 2010 {\it XMM-Newton} data is about 25\,\% higher than that of the 2005 {\it XMM-Newton} observation, and a factor of three above the values obtained from {\it Einstein} data roughly 30~years ago and from {\it ROSAT} data in the early 1990s.
Neither the 2005 nor the 2010 exposure is dominated by strong flaring; thus significant, likely long-term, variability of DN~Tau's X-ray brightness must be present, and this distinct change clearly has to occur in the emission components associated with magnetic activity. Our spectral modeling shows that the moderate X-ray brightening is caused by an increase in EM in the cool and intermediate plasma components; these components contribute much more strongly to the EMD in 2010 than in 2005. Typically, magnetically more active phases show harder spectra due to the stronger contribution from hotter plasma, but since cooler and hotter coronal regions are not co-spatial and significant evolution may have occurred over the five years, a coronal origin for the EMD changes cannot be completely ruled out by the plasma temperatures alone. Given the CTTS nature of DN~Tau, and that a similar trend, albeit on timescales of hours, was observed on TW~Hya, the prototype of an accretion-dominated CTTS \citep{rob06}, another possibility would be to attribute the enhanced emission from cool plasma on DN~Tau to the presence of a stronger accretion component. In this scenario the coolest component would naturally be predominantly affected, since here the contribution from the accretion shock is largest. While there is also a coronal contribution to the low-temperature plasma and clearly the 8~MK plasma does not originate directly from the accretion shocks, the EM increase at intermediate temperatures might be a contribution from an accretion-fed coronal component, as suggested by \cite{bri10} in their study of TW~Hya. The fact that the hot component ($\gtrsim$~20\,MK), attributed to the corona of DN~Tau, stayed approximately constant with a tendency for a mild decrease, again disfavors enhanced magnetic activity as the origin of the increased X-ray brightness.
Further, this scenario would imply that the enhanced accretion component had at best a very moderate effect on the hot coronal structures associated with the magnetically most active regions on the surface of DN~Tau. We find an overall low metallicity in the X-ray spectra of DN~Tau; however, significant differences between individual elemental abundances are present. The derived abundance pattern of DN~Tau shows in general a so-called IFIP (inverse First Ionization Potential) pattern that is commonly observed in active stars, where the low FIP elements like Fe are significantly depleted and especially the high FIP elements like Ne are enhanced compared to solar composition. The IFIP trend is not strictly linear in DN~Tau (see Table~\ref{specres}, where the FIP of each element is given in brackets), but appears to have a broad abundance minimum for the low to intermediate FIP elements (Fe-Si-S), while the very low FIP element Mg and the intermediate FIP element O have higher abundances, and only Ne is enhanced compared to solar photospheric values. While the absolute abundances vary moderately between the applied models, the derived abundance ratios are fairly robust. Independent of the specific model or data used, our best fits give a Ne/O ratio as well as an O/Fe ratio of roughly two for DN~Tau, similar to the values observed for BP~Tau \citep{rob06} and in many active M dwarf coronae \citep{gue01,rob05}. \subsubsection{The spectrum beyond 6 keV} In the inset of Fig.~\ref{pnspec} we show the X-ray emission from DN~Tau at very high energies; here the PN spectra above 6.0~keV from the 2010 observation are split roughly in the middle and binned to a minimum of five counts per bin. The comparison shows that photons at these energies were predominantly collected during the second and more active half of the observation, defined as $t > 55$~ks.
We identify probable contributions from the 6.4~keV Fe-K$\alpha$ fluorescence line, which is excited by photons with energies above 7.1~keV, from the 6.7~keV \ion{Fe}{xxv} line complex and possibly also from the 6.97~keV \ion{Fe}{xxvi} line. When adding a narrow Gaussian at 6.4~keV to the 2010 spectral model, where fluorescence photons were not included, we find that the Fe-K$\alpha$ line is formally detected, but its flux of $2.1~(0.3-3.9) \times 10^{-15}$~erg\,cm$^{-2}$\,s$^{-1}$ is poorly constrained. The additional presence of emission lines from highly ionized Fe indicates that plasma with temperatures of $\gtrsim 40$\,MK is generated in active structures on DN~Tau, most likely predominantly during the detected flares. Nevertheless, while the spectra clearly suggest the presence of very hot plasma on DN~Tau, especially in the more active half of the 2010 observation, even at this phase its contribution to the total X-ray emission, at a few percent, is very minor. \subsubsection{X-ray absorption towards DN Tau} Absorption can significantly alter the appearance of X-ray spectra, and we derive from our modeling a moderate absorption column density of $N_{\rm H}=0.8 \times 10^{21}$\,cm$^{-2}$, showing that no large amounts of circumstellar or disk material are in the line of sight. The X-ray absorption is, in contrast to extinction, also sensitive to optically transparent material, and thus it is a useful tool to study infalling or outflowing dust-free gas or plasma. As mentioned above, the modelled X-ray absorption is virtually unaffected by the observed changes in the EMD. Consequently, if the cooler X-ray plasma is largely created in the vicinity of the accretion shocks and the increase in emission measure is caused by a higher mass accretion rate, then the plasma in the accretion columns can contribute at most very moderately to the modeled X-ray absorption of DN~Tau.
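As a consistency check on this column density, the fitted value can be set against the optical extinction via the standard gas-to-dust conversion $N_{\rm H} \approx 1.8 \times 10^{21}\,A_{\rm V}$\,cm$^{-2}$, with $A_{\rm V}$ in mag \citep{pred95}; a small numerical sketch using the values quoted in this paper:

```python
NH_PER_AV = 1.8e21   # Predehl & Schmitt (1995): N_H per magnitude of A_V, cm^-2

def nh_from_av(av_mag):
    """Absorbing column (cm^-2) expected for a given optical extinction."""
    return NH_PER_AV * av_mag

NH_FIT = 0.8e21      # column density from the EPIC spectral fits, cm^-2
for av in (0.3, 0.5, 0.9):
    print(f"A_V = {av:.1f} mag -> N_H = {nh_from_av(av):.1e} cm^-2")
# A_V = 0.3-0.5 mag brackets the fitted column, whereas A_V = 0.9 mag
# would imply about twice the fitted value.
```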
The X-ray absorption of DN~Tau is overall consistent with that expected from the optical extinction $A_{\rm V} \approx 0.3 \dots 0.5$~mag when using the standard conversion $N_{\rm H} = 1.8 \times 10^{21} \times A_{\rm V}$\,cm$^{-2}$, with $A_{\rm V}$ in mag \citep{pred95}. For a roughly standard gas-to-dust ratio, the extinction of $A_{\rm V} = 0.9$~mag used by \cite{ing13} is thus not supported by the X-ray results. Several other CTTS (e.g. BP~Tau) also show an agreement within a factor of two between X-ray and optical absorption; in contrast, the more pole-on CTTS RU~Lup \citep{rob07b} or the near edge-on system AA~Tau \citep{schmitt07} exhibit an X-ray absorption that is up to about an order of magnitude above the values derived from optical measurements, and indeed most CTTS in the sample of \cite{guenther08} show excess X-ray absorption. This finding indicates that mainly matter with a 'normal', i.e. roughly interstellar, gas-to-dust ratio is responsible for the absorption towards the X-ray emitting regions of DN~Tau. Significant amounts of optically transparent material like accretion streams or hot winds are absent from the line of sight, probably favored by the fact that DN~Tau is viewed under an intermediate inclination. \subsection{High-resolution X-ray spectroscopy} The high-resolution RGS spectrum of DN~Tau obtained in 2010 is shown in Fig.\,\ref{rgs}, flux-converted over the 8\,--\,25\,\AA\ range. Global modelling of these data leads to results similar to those derived above, and in the following we concentrate on the analysis of the brighter emission lines marked in the plot. These density and temperature sensitive lines are of special diagnostic interest, since they can be used to study the plasma contributions originating in the corona and in the accretion spots.
In our analysis we use the lines of the He-like triplet of \ion{O}{vii}, namely the resonance\,(r), intercombination\,(i) and forbidden\,(f) lines at 21.6, 21.8 and 22.1\,\AA, as well as the Ly\,$\alpha$ line of \ion{O}{viii} at 18.97\,\AA. The absorption-corrected photon fluxes of the relevant lines, derived using the $N_{\rm H}$ value from our EPIC modelling, are given in Table\,\ref{lines}, and a zoom onto the \ion{O}{vii} triplet for the two exposures is shown in Fig.\,\ref{o7}. We also make a comparison to the 2005 observation to check our results from the global spectroscopy, but admittedly the S/N of these data is rather poor. A similar diagnostic for moderately hotter plasma uses the \ion{Ne}{ix} triplet (13.45\,--\,13.7\,\AA) and the \ion{Ne}{x} line (12.1\,\AA), which are detected in the 2010 spectrum. These lines also allow an abundance analysis of the Ne/O ratio, applying the methods described in \cite{rob08}. For DN~Tau we find Ne/O~$\approx 0.4$, a typical value for an active star and similar to the one derived above.
\begin{figure}[t] \includegraphics[width=90mm]{dntau_fig6.eps} \caption{\label{rgs} Flux converted RGS spectrum (2010 observation) of DN Tau.} \end{figure} \begin{table}[t] \setlength\tabcolsep{5pt} \caption{\label{lines}Line fluxes in $10^{-5}$~cts\,cm$^{-2}$\,s$^{-1}$, absorption corrected.} \begin{center}{ \begin{tabular}{lrrrr}\hline\hline\\[-3mm] Data &\multicolumn{1}{c}{Ly$\alpha$} & \multicolumn{1}{c}{r} & \multicolumn{1}{c}{i} & \multicolumn{1}{c}{f}\\\hline\\[-3mm] & {OVIII} & \multicolumn{3}{c}{OVII}\\\hline\\[-3mm] 2010 & 4.0$\pm 0.5$ & 1.4$\pm$0.4 & 1.4$\pm$0.5 & 0.5$\pm$0.3 \\ 2005 &2.3$\pm$1.2 & 0.7$\pm$0.5 & 1.3$\pm$0.7& 1.2$\pm$0.7 \\\hline\\[-3mm] & {Ne X} & \multicolumn{3}{c}{Ne IX}\\\hline\\[-3mm] 2010 & 1.6$\pm 0.2$ & 0.8$\pm$0.2 & 0.2$\pm$0.2 & 0.7$\pm$0.2 \\\hline \end{tabular}} \end{center} \end{table} \subsubsection{Oxygen lines - plasma density} \label{o7sect} To search for high-density plasma from accretion shocks, we specifically study the density-sensitive $f/i$\,-\,ratio of the \ion{O}{vii} triplet \citep[see e.g.][]{por01}, which has a peak formation temperature of about $2$~MK. The plasma density is determined from the relation $ f/i =R_{0}$\,/\,$(1+\phi/\phi_{c}+n_{e}/N_{c})$, with $f$ and $i$ being the respective line intensities, $R_{0}=3.95$ the low-density limit of the line ratio, $N_{c} =3.1 \times 10^{10}$cm$^{-3}$ the critical density and $\phi/\phi_{c}$ the radiation term. The effect from radiation is neglected in our calculations, since the UV field of DN~Tau is not sufficiently strong to influence the \ion{O}{vii} ratio. A strong FUV flux would lower the derived plasma densities, but in the case of DN~Tau the FUV emission would also have to be attributed to the accretion shocks, which, however, produce only a rather small UV excess. On the other hand, \ion{O}{vii} is produced not only in the accretion shocks, but also in the corona, which is dominated by low-density plasma, so the true accretion shock density would be underestimated.
As a consequence, changes in the measured \ion{O}{vii} density can be caused either by changes of the actual densities in the accretion components or by the relative mixture of low- and high-density plasma from the corona and the accretion shocks. \begin{figure}[t] \begin{center} \includegraphics[width=85mm]{dntau_fig7.eps} \caption{\label{o7}Observed \ion{O}{vii} triplet in 2010 (black) and 2005 (blue).} \end{center} \end{figure} As shown in Fig.\,\ref{o7}, the \ion{O}{vii} intercombination line is stronger than the forbidden line in the 2010 spectrum, while in the 2005 data they are of comparable strength. We find an $f/i$\,-\,ratio below one in both observations; the values derived from the measured line fluxes are $f/i = 0.36 \pm 0.26$ for the 2010 data and $f/i =0.92 \pm 0.73$ for the 2005 data. Poissonian ranges (90\,\% conf.) derived from Monte Carlo methods on the measured counts are 0.14\,--\,0.62 (2010) and 0.06\,--\,2.0 (2005). Coronal sources typically exhibit a higher ratio of $f/i \gtrsim 1.5$ \citep{ness04}, indicating the presence of non-coronal plasma on DN~Tau. The $f/i$\,-ratio differs by a factor of 2.5 between the observations, but large errors, especially for the 2005 exposure, are present. For the \ion{O}{vii} emitting plasma we find densities of $n_{e} = 3.0~(1.6-11.8) \times 10^{11}$~cm$^{-3}$ (2010) and $n_{e}=1.0~(0.4-6.1) \times 10^{11}$~cm$^{-3}$ (2005) respectively; given the coronal contribution, these values are likely lower limits for the accretion shocks. Overall, the densities derived for DN~Tau are comparable to, though at the lower end of, the values found for other CTTS. Given a theoretical 'low-density' $f/i$\,-\,ratio of around four, the coronal contribution at \ion{O}{vii} temperatures is expected to be only moderate, and an inspection of our spectral model shows that virtually all ($\sim 90\%$) of the \ion{O}{vii} emission is generated by the coolest plasma component.
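Inverting the $f/i$ relation above (with the radiation term set to zero) reproduces the quoted densities; a quick numerical sketch using the constants from the text:

```python
R0 = 3.95     # low-density limit of the O VII f/i ratio
NC = 3.1e10   # critical density, cm^-3

def density_from_fi(fi):
    """Electron density n_e (cm^-3) from f/i = R0 / (1 + n_e/N_c),
    neglecting the radiation term phi/phi_c."""
    return NC * (R0 / fi - 1.0)

# Measured ratios: f/i = 0.36 (2010) and 0.92 (2005)
print(f"2010: n_e = {density_from_fi(0.36):.1e} cm^-3")
print(f"2005: n_e = {density_from_fi(0.92):.1e} cm^-3")
# Both agree with the quoted 3.0e11 and 1.0e11 cm^-3 to within a few percent.
```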
An apparently higher density in 2010 would be naturally explained by a stronger contribution of accretion plasma to the X-ray emission, either due to a larger spatial extent or to a higher density of the shock region(s). Inspecting the $f/i$\,-\,ratio for high densities, i.e. $\log n_{e} \gtrsim 12/13$~cm$^{-3}$, one finds $f/i \lesssim 0.1/0.01$. Thus the \ion{O}{vii} triplet is at the very end of its density-sensitive range, and even small contributions from the omnipresent corona can strongly affect the results. For example, adding a 10\,\% fraction of coronal material ($f/i \sim 3$) to a high-density plasma with $\log n_{e} \sim 13$~cm$^{-3}$ already reduces the apparent density by about one order of magnitude. In these cases the \ion{O}{vii} analysis does not measure the true density of the accretion shock or the average density of the stellar plasma, but primarily traces the relative contributions from the visible portions of the accretion shocks and the corona. Assuming shocks with high density and a corona with low density, we derive an accretion-to-coronal EM-ratio of about 0.7:1 in 2005, increasing to about 2:1 in 2010. Correspondingly, assuming a similar corona, one would expect a comparable increase in the coolest plasma component; while there appears to be some deficit in total \ion{O}{vii} photons, this is roughly consistent with our findings from \ion{O}{vii}\,($r$) and the EMD modelling. A density analysis of hotter plasma at $\approx$~4\,MK can be carried out with the \ion{Ne}{ix} triplet, which also has a higher critical density. Here we find an $f/i$\,-\,ratio that is compatible with its low-density limit ($n_{e} \approx 1.0\times 10^{11}$cm$^{-3}$), while at the high end we can only put an upper limit of $n_{e} \lesssim 5 \times 10^{12}$cm$^{-3}$ on the plasma density.
In addition, the \ion{Ne}{ix} $i$ line is blended by several Fe lines, with \ion{Fe}{xix} being the strongest one; this is not taken into account in the above calculation, since Fe is heavily depleted in DN~Tau and statistical errors on the only weakly detected $i$-line dominate. Moderately cooler plasma at $\sim$~1.5\,MK could be studied with the \ion{N}{vi} triplet, but the data quality is insufficient to provide any further constraints. \subsubsection{Oxygen lines - plasma temperatures} Accretion processes that contribute to the X-ray emission can also be studied with temperature diagnostics, and here we search for an excess of cool plasma via the \ion{O}{viii}/\ion{O}{vii} line ratio. The material that is accreted by CTTS has infall velocities of a few hundred km\,s$^{-1}$, and thus the post-shock plasma reaches temperatures of at most a few MK. Such plasma is still relatively cool with respect to the coronal temperatures of active stars or CTTS and should therefore be detectable as 'soft excess' X-ray emission. As temperature diagnostics we use the strong oxygen emission lines measured in 2010, namely the \ion{O}{viii} Ly$_{\alpha}$ line (18.97~\AA) and the \ion{O}{vii} He-like triplet lines, with peak formation temperatures of $\sim$~3\,MK and $\sim$~2\,MK respectively. An abundance-independent method is obtained by comparing the \ion{O}{viii}(Ly$_{\alpha}$)/\ion{O}{vii}(r) energy flux ratio to the summed luminosity of both lines. \begin{figure}[t] \includegraphics[width=99mm]{dntau_fig8.ps} \caption{\label{coolex} The soft excess of DN~Tau; \ion{O}{viii}(Ly$_{\alpha}$)/\ion{O}{vii}(r)-ratio vs. summed luminosity for main-sequence stars (diamonds) and CTTS (triangles).} \end{figure} In Fig.\,\ref{coolex} we compare the \ion{O}{viii}/\ion{O}{vii}-ratio of DN~Tau with those of other CTTS collected from the literature and with a large sample of main-sequence stars at various activity levels taken from \cite{ness04}.
The correlation between the \ion{O}{viii}/\ion{O}{vii} line ratio and $L_{\rm X}$ for main-sequence stars is well known and is caused by the higher coronal temperatures in more active and X-ray brighter stars. As shown in the plot, DN~Tau exhibits a soft excess compared to active coronal sources with similar X-ray luminosity, but it is quite weak when compared to other CTTS. In fact, the soft excess of DN~Tau is the weakest in the sample of X-ray studied CTTS. Calculating the \ion{O}{viii}/\ion{O}{vii} energy flux ratio of DN~Tau, we find a value of around three; inspecting theoretical ratios as calculated with, e.g., the {\it Chianti} code, this corresponds to an average plasma temperature of 3.0\,--\,3.5~MK. This temperature is rather high given the expected shock temperatures and supports the idea that most of the oxygen emission in DN~Tau is produced by magnetic activity or consists of mixed accreted and coronal plasma. Another method based on \ion{O}{vii} alone, and thus more suited for very cool temperatures, uses the temperature-sensitive $g$-ratio, $g= (f+i) / r$. Our value of $g = 1.36 \pm 0.57$ indeed favors low temperatures of $\lesssim 1$~MK for the \ion{O}{vii} plasma, but owing to its weaker temperature dependence, temperatures twice as high are also consistent with the data. \section{Discussion} \label{comp} \subsection{The accretion shocks on DN Tau} At first glance the soft excess of DN~Tau is surprisingly weak for a young CTTS that is accreting matter from its disk and that exhibits a high plasma density in \ion{O}{vii}. Two main factors might be responsible for this effect: either the relative accretion luminosity is very low or the accreted plasma is not heated sufficiently to produce strong \ion{O}{vii} emission. The estimated mass accretion rates from optical/UV observations of DN~Tau are intermediate for CTTS; therefore, they alone cannot explain the weakness of its soft excess. Here the ratio of coronal to accretion luminosity is another important measure.
While the density analysis suggests a significant contribution of the accretion shock to the \ion{O}{vii} emission, the relative contribution from very cool plasma to the overall X-ray emission is quite weak, and a strong \ion{O}{viii} line from the corona reduces the soft excess. The evolutionary phase also plays an important role: DN~Tau is a low-mass CTTS that is relatively young and thus still quite extended. As a consequence, shock speeds and temperatures do not reach values found for more massive or older, more compact stars. Basically, $V_{sh} \propto \sqrt{2 G M_{*}/R_{*} \times (1-R_{*}/R_{t}) }$ and $T_{sh} \propto V_{sh}^{2}$ \citep[e.g.][]{lam98,cal98}, where $M_{*}$ and $R_{*}$ are the stellar mass and radius and $R_{t}$ is the disk truncation radius, i.e. the radius from where matter is falling onto the star. The calculated shock velocity depends slightly on the adopted stellar model; using the \cite{donati13} values of $M_{*}= 0.65~M_{\odot}$ and $R_{*}=1.9~R_{\odot}$ and their $R_{mag} = 5.9~R_{*}$ (2010) as truncation radius results in shock speeds of about $V_{sh} = 330 \pm 30$~km\,s$^{-1}$. Adopting slightly larger or less massive stellar models with, e.g., $M_{*} \approx 0.5~M_{\odot}$ and $R_{*} \approx 2.1~R_{\odot}$ gives $V_{sh}=$\,260\,--\,300~km\,s$^{-1}$; here we assumed accretion from the respective co-rotating radius. The corresponding strong-shock temperatures are in the range of $0.9 - 1.5$~MK, similar to the \ion{O}{vii} temperature derived above. While these temperatures are sufficient to produce \ion{O}{vii} emission, they are below the peak emissivity temperature of about 2~MK, reducing the contribution from the accretion shocks to these lines and consequently the strength of the soft excess.
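The shock velocities and temperatures quoted above follow directly from the free-fall and strong-shock relations and can be checked with a short sketch. The mean molecular weight $\mu \approx 0.62$ for a fully ionized solar-composition plasma is an assumption of this sketch, not a value taken from the text.

```python
import math

# Physical constants (cgs)
G = 6.674e-8        # gravitational constant
M_SUN = 1.989e33    # solar mass in g
R_SUN = 6.957e10    # solar radius in cm
K_B = 1.381e-16     # Boltzmann constant
M_H = 1.673e-24     # hydrogen mass

def v_shock(m_star, r_star, r_trunc):
    """Free-fall velocity from the truncation radius onto the star:
    V_sh = sqrt(2 G M*/R* * (1 - R*/R_t))."""
    return math.sqrt(2 * G * m_star / r_star * (1 - r_star / r_trunc))

def t_shock(v_sh, mu=0.62):
    """Strong-shock temperature, T_sh = 3 mu m_H V_sh^2 / (16 k_B);
    mu = 0.62 (ionized solar plasma) is an assumption of this sketch."""
    return 3 * mu * M_H * v_sh**2 / (16 * K_B)

# Donati et al. (2013) values used in the text
m, r = 0.65 * M_SUN, 1.9 * R_SUN
vsh = v_shock(m, r, 5.9 * r)
tsh = t_shock(vsh)
print(f"V_sh = {vsh/1e5:.0f} km/s, T_sh = {tsh/1e6:.1f} MK")
# ~330 km/s and ~1.5 MK, at the upper end of the 0.9-1.5 MK range
# quoted in the text.
```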
Since the hot and active corona emits predominantly at higher temperatures, it contributes even less to \ion{O}{vii} and thus preserves the accretion shock signatures in the applied emission line diagnostics, most prominently seen in the very cool plasma during DN~Tau's 2010 soft state. However, compared to other CTTS, the 'pure' accretion shock plasma on DN~Tau is a weak contributor to the total X-ray emission. The X-ray mass accretion rate for DN~Tau can be calculated from mass conservation under the assumption of a strong shock by using $\dot{M}_{acc} = 4 \pi f R^{2}_{*} \rho_{pre} V_{sh}$. Using our best-fit modeling results on the plasma density, adopting a mean molecular weight of $\mu_{e} = 1.2$, a filling factor of $f = 0.01$ and stellar models as above, we derive an X-ray mass accretion rate of $\log \dot{M} \approx -9.5~M_{\odot}$\,yr$^{-1}$. As a caveat, and recalling the discussion above, due to the coronal blend the \ion{O}{vii} density is likely a lower limit for the accretion shock density, and furthermore the filling factor is adopted from other analyses. An independent estimate based on X-ray data can be derived by fitting our X-ray spectra with the model from \cite{guenther07}, from which we obtain a mass accretion rate of around $\log \dot{M} =-9.2~M_{\odot}$\,yr$^{-1}$. With these models we find filling factors of $f \lesssim 0.01$ and post-shock densities of $n_{e} \gtrsim 3 \times 10^{11}$~cm$^{-3}$, but their interdependency does not allow us to further constrain the accretion shock properties. The X-ray accretion rate is about one order of magnitude below the values mostly found from optical/UV measurements, similar to results obtained for other CTTS \citep[e.g. BP~Tau,][]{schmitt05}. While intrinsic variability likely also plays a role, this finding might indicate that the accreted material contributes only fractionally to the observed X-ray emission.
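The X-ray mass accretion rate estimate can be reproduced to order of magnitude with the strong-shock mass-conservation relation quoted above. The sketch below assumes a factor-of-four compression between the pre- and post-shock density, the stated filling factor $f = 0.01$ and $\mu_e = 1.2$; which measured density to insert is ambiguous, so only the order of magnitude is meaningful.

```python
import math

# cgs constants
M_SUN = 1.989e33
R_SUN = 6.957e10
M_H = 1.673e-24
YEAR = 3.156e7

def mdot_acc(f, r_star, ne_post, v_sh, mu_e=1.2):
    """Mdot = 4 pi f R*^2 rho_pre V_sh for a strong shock, where the
    pre-shock density is the post-shock density divided by 4
    (an assumption of this sketch)."""
    rho_pre = mu_e * M_H * (ne_post / 4.0)
    return 4 * math.pi * f * r_star**2 * rho_pre * v_sh

# Values from the text: f = 0.01, R* = 1.9 R_sun, n_e ~ 3-4e11 cm^-3,
# V_sh ~ 330 km/s, mu_e = 1.2.
mdot = mdot_acc(0.01, 1.9 * R_SUN, 4e11, 330e5)
log_mdot = math.log10(mdot * YEAR / M_SUN)
print(f"log Mdot ~ {log_mdot:.1f} Msun/yr")
# Of order 1e-10 Msun/yr, i.e. close to the log Mdot ~ -9.5 quoted in
# the text given the uncertainties in the adopted density.
```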
In these scenarios, either not all accreted material produces X-rays or the X-rays are produced but partially absorbed or both. In some accretion regions the shock temperatures might be too low to generate X-ray emission and thus X-rays would trace only the fastest fraction of the accretion stream; although it remains unclear why infalling material should not impact the stellar surface with similar velocities when accreted from similar distances, i.e. around the disk truncation radius. Similarly, accretion streams of low density that produce no strong X-ray emission may remain mostly undetected and lead to missing material, but if they exist they are expected to carry only a small fraction of the total mass flux. For example, adding low flux columns ($F \propto \rho V^{3}$) to the model of DN~Tau, increases the maximum spot size ($f$) by a factor of 30, but the mass accretion rate ($\dot{M}$) by less than a factor of two \citep{ing13}. Alternatively, virtually all infalling material might produce X-ray emission in accretion shocks, but these X-rays are partly absorbed locally, e.g. by the accretion column and thus missing in the observed X-ray spectra as suggested by \cite{sac10}. Recently, \cite{dod13} performed a non-LTE modeling of emission components of optical He and Ca lines by adding an accretion hot spot and a photosphere. Significant variability is present in the derived accretion parameters for two DN~Tau observations performed in autumn 2009 and spring 2010. They find pre-shock (infall) densities of $\log n_{e} =12.2/13$~cm$^{-3}$, velocities of $V_{0}= 230/280$~km\,s$^{-1}$ and filling factors of $f = 3/1.2$\,\%, leading to accretion luminosities of 3\,/\,16\,\%~$L_{*}$ and mass accretion rates of $\log \dot{M} = -8.1 /-7.6~M_{\odot}$\,yr$^{-1}$. These results again indicate a very high infall density, large mass accretion rates and large filling factors for DN~Tau. 
Besides, they support an active accretion period in 2010, at least a few months before the new X-ray data were taken, where a strong accretion stream impacts high-latitude regions. The 2010 data presented in \cite{donati13} were obtained a few months after our X-ray observation, and although they obtain a lower mass accretion rate of $\log \dot{M} = -9.1$, their surface maps show a large monolithic dark spot and an embedded accretion region at similarly high latitudes. Their dominant spot is located in 2010 at about phase 0.55, and our X-ray data were taken at phase 0.1\,--\,0.3. Thus, if this configuration is applicable, it implies a relatively unobstructed view of the accretion spot region during the {\it XMM-Newton} observation. In summary, the low infall velocities and the non-negligible coronal contribution likely make X-ray diagnostics less favorable for a quantitative analysis, but they are still applicable to detect the presence of X-rays from accretion shocks in CTTS like DN Tau. \subsection{X-ray variability} The observed X-ray variability can be caused by intrinsic changes in accretion rate or magnetic activity as well as by varying viewing geometry, absorption and rotation. While for the omnipresent short- and mid-term variability (seconds to months) all these factors contribute, the situation is less clear for the major cause of possible long-term trends on timescales of years to decades. The changes in X-ray brightness between 2005 and 2010 can likely be attributed to different accretion states, unless a large fraction of the accretion spots is 'hidden' in the 2005 exposure. Here, variable mass accretion rates and a changing magnetic topology, which influences the disk truncation radius and viewing geometry, likely play the major role.
Since the X-rays from the accretion shocks experience absorption by the overlying accretion streams, some time-dependent viewing geometry effects may be present in the detected X-ray emission, as suggested by \cite{arg11} for their V2129~Oph data. Our 2010 observation covers 0.22 in rotational phase and the line of sight is likely not aligned with the accretion stream, but the 2005 data (0.06 phase coverage) could be more severely affected. The global $N_{H}$ is identical for both observations, and the apparent presence of high density plasma in the 2005 spectrum as well as the large EM changes over a broad temperature range do not strongly support this explanation for the case of DN~Tau; nevertheless, a variable contribution from locally produced and re-absorbed accretion components is not ruled out by the data. In contrast, the increase in X-ray brightness by a factor of three compared to the measurements in the 1980s and 1990s seems to favor a change in the magnetic activity state of DN~Tau, not least because its X-ray emission detected by {\it XMM-Newton} is predominantly of coronal origin. On the other hand, DN~Tau showed, on average, a brightening in the U band over the last decades, again by a factor of three, likely attributable predominantly to the accretion shocks. These apparently different scenarios might at least partially be explained by an accretion-fed corona that would lead to the observed trends; unfortunately, most of these observations are not simultaneous and the X-ray data in particular are sparse. Thus there might be no overall long-term trend at all, but instead UV-bright and X-ray-bright phases that are not necessarily related to each other and are dominated by the various kinds of short-term variability. \subsection{DN Tau in the CTTS context} The young low-mass CTTS DN~Tau shows a soft excess and a high density in its cool ($\sim$~2\,MK) plasma component.
Both findings indicate that plasma originating in well funnelled accretion streams impacting the stellar surface contributes to the observed X-ray emission. The plasma densities of DN~Tau derived from \ion{O}{vii} diagnostics are similar to the values of other CTTS like BP~Tau or RU~Lup \citep{schmitt05,rob07b}, but its soft X-ray excess is quite weak for a young CTTS. The higher mass accretion rates of BP~Tau or RU~Lup naturally lead to a more pronounced soft excess, but this effect cannot account for all CTTS. Additionally, as a consequence of the low mass and large radius of DN~Tau, the impact velocity is among the lowest of all studied CTTS, and the expected accretion shock temperatures are well below the peak formation temperature of the studied X-ray lines. Therefore only the hottest part of the accreted and shocked material will reach X-ray temperatures; however, this is still sufficient to produce detectable signatures in the X-ray data. Combining the only moderate mass accretion rate and the low impact velocity would explain why the soft excess of DN~Tau is even smaller than those of old CTTS like V4046~Sgr or MP~Mus. While their accretion rates are even lower, these objects are more massive and especially more compact ($M_{*}/R_{*}$), and their accretion shocks have higher temperatures and produce X-ray emission in the \ion{O}{vii} lines more efficiently. Comparing DN~Tau to other young stars in the lower mass regime, we find that a moderate soft excess is also present in the TWA member Hen~3-600 \citep{huene07}, a multiple system with an M3/M3.5 binary as principal components. The value of its \ion{O}{viii}/\ion{O}{vii} ratio is similar to that of DN~Tau, although Hen~3-600 is about a factor of five fainter in X-rays and its soft excess is even less pronounced. In contrast to DN~Tau, Hen~3-600 is old ($\sim 10$~Myr) and likely already in the CTTS/WTTS transitional phase.
While it is expected to be compact, here the evolved state and correspondingly very low accretion rate reduce the accretion shock signatures in its X-ray spectrum. Correspondingly, its oxygen $f/i$\,-ratio of about 1.1, albeit with a significant error, is at the uppermost end of the values observed for accreting sources. Notably, the coronal properties as derived from the modelling of global X-ray spectra are very similar for DN~Tau and BP~Tau \citep{rob06}. We find similar X-ray luminosities and plasma temperatures as well as nearly identical abundance patterns. The largest differences are found in the relative strength of the cool component of their EMDs, but as outlined above this is where the accretion shocks have their largest impact on the X-ray spectra. Overall, at least when considering stars with comparable X-ray activity, the coronal properties seem to vary at most moderately in the regime of young low-mass CTTS when going to less massive stars. With DN~Tau we extend the X-ray studied sample of young accreting stars to lower masses, and its X-ray properties clearly link it to more massive or more evolved CTTS. The combination of a very cool accretion component with a strong hot corona makes DN~Tau one of the X-ray brightest CTTS in its mass range, but reduces the influence of the accretion shocks on its X-ray spectrum and emission line diagnostics. \section{Summary \& conclusions} \label{sum} From our study of the X-ray emission of DN~Tau we obtain the following main results and draw the subsequent conclusions: \begin{enumerate} \item DN~Tau is among the least massive CTTS where cool MK-temperature plasma at high density from accretion shocks is clearly present, and it is the youngest star in the regime of M-type stars studied in X-rays in greater detail. DN~Tau shares general properties with other low-mass CTTS, but differences arise in detail that are mainly related to its youth and low mass.
\item The \ion{O}{vii} triplet shows an $f/i$\,-\,ratio of about 0.4, attributed to accretion shocks that significantly contribute to the soft X-ray emission. The corresponding plasma density is $n_{e} = 3 - 4 \times 10^{11}$~cm$^{-3}$; owing to the coronal contribution, this is likely a lower limit for the accretion shocks. DN~Tau shows a soft excess as measured in \ion{O}{viii}/\ion{O}{vii}-ratios, confirming the presence of accretion shock plasma. While the plasma density is quite typical when compared to other CTTS, the soft X-ray excess is rather weak. Here the low impact velocity of the accreted material, a consequence of the low mass and large radius of DN~Tau, results in shock temperatures of about $1.0 - 1.5$~MK, well below the peak formation temperature of \ion{O}{vii}. Overall the cool plasma component around 2~MK contributes only moderately to the X-ray emission. \item A strong coronal component with hot ($\gtrsim$ 10 MK) plasma is present, and at higher energies the spectrum of DN Tau is dominated by magnetic activity. Intermediate temperature plasma clearly originates from coronal structures, but may contain accretion-fed material. The corona reaches temperatures of $\gtrsim 30$~MK and its abundance pattern shows an IFIP effect that is reminiscent of that of active stars. With $\log L_{\rm X} = 30.2$\,erg\,s$^{-1}$, DN~Tau is among the X-ray brighter CTTS in its mass or $T_{\rm eff}$ range. \item We find significant changes in DN~Tau's X-ray properties; in 2010 it was in an X-ray brighter, but overall softer, spectral state compared to 2005. The emission measure of the cool plasma changed by a factor of a few, indicating accretion related variability. Similar, but less pronounced, changes are observed at intermediate temperatures; in contrast, the hot component stayed virtually constant. No changes in absorption column or elemental abundances were found; furthermore, the X-ray absorption is consistent with optical values.
\item Several X-ray flares with durations of $\lesssim 1$~h are detected in the 2010 exposure, which are accompanied by clear UV counterparts. The UV emission precedes the X-rays as expected in the chromospheric evaporation scenario and the flares are similar to the ones seen on active young M dwarfs. Outside the observed flares an obvious correlation between X-ray and UV brightness is not observed, indicating largely independent emission regions. Brightness differences by a factor of three are present in X-ray and U band data on timescales of years to decades. \end{enumerate} \begin{acknowledgements} This work is based on observations obtained with {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). J.R. acknowledges support from the DLR under grant 50QR0803. HMG was supported by the National Aeronautics and Space Administration under Grant No. NNX11AD12G issued through the Astrophysics Data Analysis Program. The publication is supported by the Austrian Science Fund (FWF). \end{acknowledgements} \bibliographystyle{aa}
\section{INTRODUCTION} \label{sec:intro} The transition of asymptotic giant branch (AGB) stars to the early planetary nebula (PN) stage is a poorly understood phase of stellar evolution. It is generally agreed that a rapid drop in mass loss rate accompanies the transition from the AGB to the post-AGB phase. However, no well defined objective criterion exists for the transition from the AGB to the post-AGB phase (the termination of the AGB). Bl\"ocker (1995), for example, used both the mass loss rate and the stellar pulsation period, i.e., a dynamical property of the star. He took the mass loss rate to decrease with decreasing pulsation period, and defined the zero-age post-AGB phase as the point when the pulsation period is 50 days. Below I will suggest a new property to theoretically define the transition from the AGB to the post-AGB phase. It is based on both the thermal and dynamical properties of the star, and is connected to the mass loss rate and mass loss inhomogeneities. Observationally, AGB stars are well defined, and {\it visible}, well-developed post-AGB stars can be clearly defined in principle, but practically there are many difficulties (e.g., Hrivnak et al. 1989; Szczerba et al. 2007; Suarez et al. 2006). Still, there is no theoretical definition of the transition. Recently, Szczerba et al. (2007) classified many post-AGB stars. However, they classified mainly unobscured stars in the IR and/or the visible band, most likely well after they have left the AGB. Observationally, most stars are likely to be obscured during the transition because of a high mass loss rate. Several parameters have been proposed to define the transition from the AGB to the post-AGB phase, but each one of them has some problems. \\ {\it A drop in the mass loss rate.} This criterion (e.g., Suarez et al. 2006) captures the essence of the transition. However, what mass loss rate should be used to mark the transition?
What if the star rotates and/or interacts with a binary companion such that the mass loss rate depends on the companion? \\ {\it Optical depth.} Suggestions have been made that the transition occurs when the totally obscured AGB star becomes visible again. But at what wavelength? Another problem with this definition is that it depends on the viewing angle if the mass loss geometry is not spherical. In addition, low mass AGB stars with low metallicity might never become totally obscured. \\ {\it Pulsation period.} Although the mass loss rate is tightly coupled to pulsation, it is not clear what pulsation period should be used. Also, AGB stars with different total masses and luminosities can have the same pulsation period at very different effective temperatures. The usage of pulsation (e.g., Bl\"ocker 1995) incorporates the dynamical time of the star, but no `natural' transition value exists. In the presently proposed criterion the dynamical time is included with a quantitative measure. \\ {\it A change in the dominating mass loss mechanism. } On the AGB the mass loss process is that of pulsations coupled with radiation pressure on dust, while for the central stars of PNs it is mainly radiation pressure on ions. The idea is that the transition is defined when the dominant mass loss process switches from pulsation and radiation pressure on dust to radiation pressure on ions. There are two main problems with this. Firstly, the physics is not sufficiently well understood to connect this transition to stellar evolutionary codes. Secondly, interaction with a companion can be the dominant mass loss mechanism in many post-AGB stars, either via tidal interaction or a common envelope. \\ {\it Effective temperature.} It is not clear what temperature to use. There is no `natural' temperature for any physical effect, although the transition occurs around an effective temperature of $T \simeq 5000 ~\rm{K}$ (e.g. Sch\"onberner 1981).
Even dust formation can cease at different temperatures, depending on the metallicity of the envelope. Vassiliadis \& Wood (1994) took the transition to occur when the effective temperature is twice the minimum temperature the star can reach on the AGB, but no physical reason is given for that. \\ {\it Contraction. } Contraction cannot be used because the star starts to contract before it leaves the AGB. \\ {\it Rapid contraction. } The criterion of a rapid contraction, with time or with decreasing envelope mass, captures the essence of the transition, but a quantitative value is not easy to define. One can use the logarithmic derivative of the stellar radius with envelope mass \begin{equation} \delta \equiv \frac{d \ln R }{d \ln {M_{\rm env}} } . \label{delta1} \end{equation} But $\delta$ changes monotonically in the relevant temperature (radius) range, and it is not clear what value should be used, although $\delta=1$ might be a natural choice (Frankowski, A. 2007, private communication). Alternatively, one can define the transition to occur when the magnitude of the second logarithmic derivative of the stellar radius with envelope mass \begin{equation} C \equiv \vert \frac{d^2 \ln R }{d \ln {M_{\rm env}}^2 } \vert \label{C1} \end{equation} reaches its maximum value. This occurs when the contraction changes from the AGB type behavior to the post-AGB one. Examining some models shows that this occurs at an effective temperature of $\sim 7000-9000 ~\rm{K}$, which is hotter than the usually assumed transition point (Sch\"onberner \& Bl\"ocker 1993), e.g., Szczerba et al. (2001) and Tylenda et al. (2001) who listed G and K stars as post-AGB stars. The criterion proposed in the next section incorporates the beginning of the rapid contraction with envelope mass, and does so with a quantitative physical definition.
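The two logarithmic derivatives of equations (\ref{delta1}) and (\ref{C1}) are straightforward to evaluate from any tabulated $R(M_{\rm env})$ track. The sketch below applies centered finite differences to an artificial track whose local slope steepens smoothly (the track and the bend location are invented purely for illustration) and locates the maximum of $C$ at the bend.

```python
import math

# Artificial post-AGB-like track: the local slope d ln R / d ln M_env
# steepens smoothly from 0.2 to 2.2 around x0 = ln(M_env) = -5.
# Both the track and x0 are illustrative assumptions, not a stellar model.
X0 = -5.0
def slope(x):
    return 0.2 + 2.0 / (1.0 + math.exp(5.0 * (x - X0)))

n = 40001
xs = [-8.0 + 8.0 * i / (n - 1) for i in range(n)]   # ln M_env from -8 to 0
h = xs[1] - xs[0]
ln_r = [0.0]                    # ln R up to an additive constant
for i in range(1, n):           # trapezoidal integration of the slope
    ln_r.append(ln_r[-1] + 0.5 * (slope(xs[i]) + slope(xs[i - 1])) * h)

# delta = d ln R / d ln M_env and C = |d^2 ln R / d ln M_env^2|
# via centered finite differences (eqs. delta1 and C1 of the text).
deltas = [(ln_r[i + 1] - ln_r[i - 1]) / (2 * h) for i in range(1, n - 1)]
cs = [abs(ln_r[i + 1] - 2 * ln_r[i] + ln_r[i - 1]) / h**2
      for i in range(1, n - 1)]
i_c = max(range(len(cs)), key=lambda i: cs[i])
print(f"C peaks at ln M_env = {xs[i_c + 1]:.2f}, delta there = {deltas[i_c]:.2f}")
```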
In any case, it seems that the criterion of maximum $C$ is similar in some aspects to the criterion suggested in the next section (but not the quantitative value of the transition). \\ {\it Topology in the $U-V$ plane.} One defines the quantities $V\equiv 4 \pi r^3 \rho/M_r$ and $U \equiv G M_r \rho /rP$, where $M_r$ is the mass inner to radius $r$ in the star, and the other symbols have their usual meaning. One can draw the structure of the star in the $U-V$ plane. As discussed in detail by Sugimoto \& Fujimoto (2000), a structural curve with a loop corresponds to a giant-like structure, while one without a loop corresponds to a dwarf-like structure. Equivalently, one can consider the variation of $W \equiv U/V$ within the star. If it changes (inside the star) monotonically then there is no loop. We can try to apply this criterion to the post-AGB star. For a very low envelope mass $M_r$ is constant in the envelope, and $W=GM_r^2/4 \pi r^4 P \sim (r^4P)^{-1}$. I examine the structure of the model with the envelope mass of $5.74 \times 10^{-4}~M_\odot$ from Soker (1992), which has an effective temperature of $7200~\rm{K}$. The envelope pressure profile changes from $P \sim r^{-5}$ in the range $r \la 3 R_\odot$ to $P \sim r^{-3.7}$ in the range $4 \la r \la 20 R_\odot$. Namely, $W$ increases with $r$ for $r \la 3 R_\odot$, and decreases with increasing $r$ in the range $4 \la r \la 20 R_\odot$. A loop in the $U-V$ plane does exist for this model. Therefore, the loop in the $U-V$ plane disappears too late in the post-AGB evolution. An interesting property is that $V$ and $U$ are related to dynamical and thermal properties of the envelope; however, it is not straightforward to connect them to the relevant properties of the post-AGB envelope. \\ {\it Disappearance of the envelope convective zone.} Although the convective zone becomes thinner and thinner as the star shrinks, it disappears only when the star is very hot (e.g., Soker 1992).
\\ {\it End of shell burning.} This criterion has not been suggested, but it is listed here for the sake of clarity. It is not a good criterion as nuclear burning ceases when the star is hot (e.g. Harpaz \& Kovetz 1981; Kovetz \& Harpaz 1981; Sch\"onberner 1981). \section{THE PROPOSED CRITERION} \label{sec:criterion} When the envelope mass is low, the Kelvin-Helmholtz time of the envelope is given by (Sch\"onberner \& Bl\"ocker 1993) \begin{equation} \tau_{\rm KH-env} = \frac{G M_c}{L} \int \frac{ 4 \pi r^2\rho(r)}{r} {dr}, \label{tkh1} \end{equation} where $M_c$ is the core mass, $L$ is the stellar luminosity, $R$ is the stellar radius, and $\rho$ is the density in the envelope. On the upper AGB the density profile in most of the envelope (beside regions very close to the core that contain very little mass) can be approximated by $\rho \propto r^{-2}$ (e.g., Soker 1992). For the inner radius of the envelope, and in particular for the convective part, we can take $r_0 \sim 1 R_\odot$, and so $\beta_s=\ln (R/r_0) \simeq 6$ in the integration of equation (\ref{tkh1}). I defined a parameter $\beta_s$ that depends on the exact structure of the envelope. For the response of the envelope alone, the internal energy of the envelope should also be considered. This will reduce the required time to supply energy or to remove energy, and will reduce somewhat the effective value of $\beta_s$. I will therefore scale it with $\beta_s=5$. Scaling the different variables of upper AGB stars gives \begin{equation} \tau_{\rm KH-env} \simeq 6 \left( \frac{\beta_s}{5} \right) \left( \frac{M_c}{0.6 M_\odot} \right) \left( \frac{M_{\rm env}}{0.1 M_\odot} \right) \left( \frac{L}{5000 L_\odot} \right)^{-1} \left( \frac{R}{300 R_\odot} \right)^{-1} ~\rm{yr}, \label{tkh2} \end{equation} where $M_{\rm env}$ is the envelope mass. For the relevant dynamical time I take $(G \rho_{\rm av})^{-1/2}$, where $\rho_{\rm av}$ is the average density of the entire star. 
Scaling with typical number gives \begin{equation} \tau_{\rm dyn} \simeq 0.7 \left( \frac{M}{0.6 M_\odot} \right)^{-1/2} \left( \frac{R}{300 R_\odot} \right)^{3/2} ~\rm{yr}. \label{tdyn1} \end{equation} During the evolution along the AGB before the star starts to contract in radius, the luminosity increases and the mass decreases, such that $\tau_{\rm dyn}$ increases and $\tau_{\rm KH-env}$ decreases. Therefore, their ratio $Q \equiv \tau_{\rm dyn}/\tau_{\rm KH-env}$ increases. For evaluating the value of $Q$ on the upper AGB, when the envelope mass is low, I take $M_c=M$ in equation (\ref{tkh2}). This gives for the upper AGB \begin{equation} Q \equiv \frac{\tau_{\rm dyn}}{\tau_{\rm KH-env}} \simeq 0.1 \left( \frac{\beta_s}{5} \right)^{-1} \left( \frac{M}{0.6 M_\odot} \right)^{-3/2} \left( \frac{R}{300 R_\odot} \right)^{5/2} \left( \frac{L}{5000 L_\odot} \right) \left( \frac{M_{\rm env}}{0.1 M_\odot} \right)^{-1} \label{chi1} \end{equation} The thermal time $\tau_{\rm KH-env}$ is less than an order of magnitude longer than the dynamical time during this late AGB phase. Soker \& Harpaz (1999) noted that this relatively short thermal time must result in a strong irregular behavior of the envelope because dynamical motions, such as pulsations and convective motion, can cause large thermal perturbations in the envelope. Therefore, a large value of this parameter, i.e., $Q \ga 0.1$, can be related to a highly inhomogeneous mass loss process, as well as to a high mass loss rate. Soker \& Harpaz (1992) argued that the characteristics of AGB stellar pulsations depend on the thermal and dynamical time scales. They used the thermal time scale of only the upper envelope, and for the dynamical time scale they took the pulsation period. The star starts to contract before it leaves the AGB. 
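The fiducial numbers in equations (\ref{tkh2})--(\ref{chi1}) can be checked with a direct evaluation. The sketch below assumes the same $\rho \propto r^{-2}$ envelope and $\beta_s = 5$ adopted in the text.

```python
import math

# cgs constants
G = 6.674e-8
M_SUN = 1.989e33
R_SUN = 6.957e10
L_SUN = 3.828e33
YEAR = 3.156e7

def tau_kh_env(m_core, m_env, lum, radius, beta_s=5.0):
    """Envelope Kelvin-Helmholtz time for a rho ~ r^-2 envelope:
    tau = beta_s * G * M_c * M_env / (L * R)."""
    return beta_s * G * m_core * m_env / (lum * radius)

def tau_dyn(m_star, radius):
    """Dynamical time (G rho_av)^(-1/2) with the mean stellar density."""
    rho_av = m_star / (4.0 / 3.0 * math.pi * radius**3)
    return 1.0 / math.sqrt(G * rho_av)

# Fiducial upper-AGB values from the text
mc, menv = 0.6 * M_SUN, 0.1 * M_SUN
lum, rad = 5000 * L_SUN, 300 * R_SUN

t_kh = tau_kh_env(mc, menv, lum, rad)
t_d = tau_dyn(mc, rad)          # M_c = M on the upper AGB
q = t_d / t_kh
print(f"tau_KH = {t_kh/YEAR:.1f} yr, tau_dyn = {t_d/YEAR:.2f} yr, Q = {q:.2f}")
# ~6 yr, ~0.7 yr and Q ~ 0.1, matching the scalings in the text.
```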
The initial contraction is slow, and the stellar radius during the early contraction phase can be approximated by \begin{equation} R_{C} \simeq R_m \left( \frac{M_{\rm env}}{M_{\rm env-m}} \right)^{\delta}, \label{rad1} \end{equation} where $R_m$ and $M_{\rm env-m}$ are the stellar radius and envelope mass when the contraction starts. For the model used by Soker (1992) relation (\ref{rad1}) holds for an envelope mass of $0.001 \la M_{\rm env} \la 0.1 M_\odot$ with $\delta \simeq 0.2$ for most of the time. Then $\delta$ increases more and more rapidly until it reaches a very large value when the star contracts by two orders of magnitude for a tiny change in the envelope mass (Sch\"onberner 1983). Qualitatively similar behavior is found for other core masses, but at different envelope masses (Sch\"onberner 1983; Frankowski 2003). Using equation (\ref{rad1}) to express the stellar radius in equation (\ref{chi1}) gives for the contracting-AGB phase \begin{equation} Q_{ C} \simeq 0.1 \left( \frac{\beta_s}{5} \right)^{-1} \left( \frac{M}{0.6 M_\odot} \right)^{-3/2} \left( \frac{L}{5000 L_\odot} \right) \left( \frac{M_{\rm env}}{0.1 M_\odot} \right)^{\frac{5}{2}\delta-1} \left( \frac{R_m}{300 R_\odot} \right)^{5/2} \left( \frac{M_{\rm env-m}}{0.1 M_\odot} \right)^{-\frac{5}{2}\delta}. \label{chi2} \end{equation} During the contracting-AGB phase the luminosity and mass do not change much, and the derivative of equation (\ref{chi2}) can be written as \begin{equation} \Delta Q_C \equiv \frac{d \ln Q_{C}}{d \ln M_{\rm env}} \simeq \frac{5}{2}\delta-1 - \frac{d \ln \beta_s}{d \ln M_{\rm env}}. \label{chi3} \end{equation} Along the entire AGB $\Delta Q$ is negative (apart from temporal variations, e.g., after thermal pulses). It becomes strongly positive during the fast contraction along the post-AGB track (again, apart from temporal variations). The transition from $\Delta Q <0$ to $\Delta Q>0$ can mark the beginning of the post-AGB phase.
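The sign change of $\Delta Q_C$ in equation (\ref{chi3}) can be demonstrated numerically. The sketch below uses a toy radius-envelope-mass relation in which the local slope $\delta = d\ln R/d\ln M_{\rm env}$ rises smoothly from 0.2 to large values as the envelope is depleted (this particular $\delta(M_{\rm env})$ law is an illustrative assumption, not a stellar model), and checks that $Q \propto R^{5/2}/M_{\rm env}$ peaks where $\delta \approx 0.4$.

```python
import math

# Toy slope law delta(x), with x = ln(M_env / M_env_m) <= 0: delta rises
# smoothly from 0.2 (AGB) to ~4 (rapid contraction) as M_env drops.
# This functional form is purely illustrative.
XMIN = -6.0
def delta(x):
    return 0.2 + 4.0 * math.exp(-2.0 * (x - XMIN))

n = 60001
xs = [i * XMIN / (n - 1) for i in range(n)]    # x runs from 0 down to XMIN
ln_r = [0.0]                                    # ln(R / R_m)
for i in range(1, n):
    h = xs[i] - xs[i - 1]                       # negative step
    ln_r.append(ln_r[-1] + 0.5 * (delta(xs[i]) + delta(xs[i - 1])) * h)

# ln Q = const + (5/2) ln R - ln M_env, with L and M held fixed,
# so d ln Q / d ln M_env = (5/2) delta - 1 as in the text.
ln_q = [2.5 * lr - x for lr, x in zip(ln_r, xs)]
i_max = max(range(n), key=lambda i: ln_q[i])
d_at_peak = delta(xs[i_max])
print(f"Q peaks where the local delta = {d_at_peak:.3f}")   # ~0.4
```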
(Note that the envelope mass decreases with time, and therefore when $\Delta Q <0$ then $Q$ increases with time.) Namely, the star is said to terminate the AGB when $Q$ is at its maximum value; this occurs after the contraction has started and $Q=Q_C$. If there is no change in the density profile then $d \ln \beta_s/d \ln M_{\rm env} = 0$ and $\Delta Q_C$ changes sign when $\delta = 0.4$. This is more or less when the rapid contraction starts. However, during the contracting-AGB phase the density profile becomes steeper (e.g., Soker 1992), and $\beta_s$ increases slowly, so that $d \ln \beta_s/d \ln M_{\rm env} < 0$. On the other hand, the envelope convective zone, which might be more relevant to many processes influencing the mass loss process, becomes concentrated in the outer region and the effective value of $\beta_s$ might decrease. Overall, I suggest ignoring the structural factor $\beta_s$ and marking the transition when $\delta = 0.4$, i.e., when $d \ln (R_{C})/ d \ln {M_{\rm env}} = 0.4$ and $\Delta Q=0$. Alternatively we can use the criterion that \begin{equation} \Delta Q_{\rm KH} \equiv - \frac{d \ln \tau_{\rm KH-env}}{d \ln M_{\rm env}} \simeq \delta-1 - \frac{d \ln \beta_s}{d \ln M_{\rm env}} \label{chi4} \end{equation} equals zero. Namely, $\tau_{\rm KH-env}$ has its minimum value. This occurs when $\delta \simeq 1$. In the model presented in Soker (1992) $\delta = 2/5$ and $\delta = 1$ when the stellar radius is $R=150 R_\odot$ and $R=110 R_\odot$, and the effective temperature is $T=4400~\rm{K}$ and $T=5100~\rm{K}$, respectively. The value of $C$ defined in equation (\ref{C1}) reaches its maximum at $T \simeq 7000 ~\rm{K}$. In a model with a core mass of $M_c=0.67 M_\odot$ from the solar metallicity track of Vassiliadis \& Wood (1994), the maxima in $Q$ and $Q_{\rm KH}$ as calculated by Frankowski (Frankowski, A.
2007, private communication) are reached when the stellar radius is $R=120 R_\odot$ and $R=85 R_\odot$, and at an effective temperature of $T=5200~\rm{K}$ and $T = 6100 ~\rm{K}$, respectively. The value of $C$ has its maximum at a temperature of $T = 8900 ~\rm{K}$ in that model. As mentioned in section 1, the $C$-criterion gives the transition at too high a temperature, after the mass loss rate has declined, but it still captures most aspects of the transition. I prefer the $Q$-criterion or the $Q_{\rm KH}$-criterion because they seem more physically relevant to the mass loss process. \section{SUMMARY} \label{sec:summary} I suggest theoretically defining the transition from the AGB to the post-AGB phase, namely, the termination of the AGB, by using the envelope thermal time scale (Kelvin-Helmholtz time) and the stellar dynamical time. The criterion does not refer to the short time-scale variations occurring on the AGB and post-AGB, e.g., thermal pulses (helium shell flashes) and magnetic activity, but rather to the time average of stellar properties. All other alternatives for the transition must use average values as well; only a criterion based on the envelope mass alone would not need a time average, but no such criterion has been suggested. As the star expands along the AGB and loses mass, the thermal time scale decreases and the dynamical time increases. On the upper AGB the two time scales become comparable. This implies that dynamical processes, such as pulsation and convective motion, can influence the thermal state of the envelope. This is likely to influence the mass loss process. The thermal time scale continues to decrease even after the star starts its contraction. The thermal time scale starts to increase at about the same evolutionary point where the rapid contraction starts. I suggest defining the termination of the AGB as the point where $Q$ (defined in eq. \ref{chi1}) reaches its maximum value.
This occurs during the contraction part of the AGB, when $Q=Q_C$ (eq. \ref{chi2}), and the transition occurs when $\Delta Q_C=0$ (defined in eq. \ref{chi3}). Alternatively, using only the thermal time scale, the transition can be defined when the thermal time scale starts to increase with decreasing envelope mass; namely, when the value of $\tau_{\rm KH-env}$ is at its minimum and $\Delta Q_{\rm KH}=0$ (eq. \ref{chi4}). The criterion proposed here has several advantages. \begin{enumerate} \item It uses well-defined properties of the envelope. These are simple to derive with a stellar model calculated without the inclusion of pulsation or magnetic activity. If these are included, a time average must be introduced. \item The transition can be defined to occur at a well-defined evolutionary point (when averaged over short time-scale variations), when $\Delta Q=0$ (or, alternatively, $\Delta Q_{\rm KH} =0 $). \item It is based on properties that are closely related to the mass loss process: the dynamical time and the thermal properties of the envelope. However, the theoretical transition occurs when $Q$ is at its maximum value. Therefore, the high mass loss rate and inhomogeneities are likely to continue into the post-AGB phase for a short time. \item It encompasses previously proposed criteria. (a) Pulsation: via the dynamical time scale. (b) Rapid contraction: the change in the behavior of ${\tau_{\rm KH-env}}$ occurs at about the same time the rapid contraction starts. (c) Mass loss rate: the mass loss process is related to the dynamical and thermal time scales. (d) Effective temperature: the value $\Delta Q =0$ is reached at $T \sim 4000-6000 ~\rm{K}$, a temperature range that has been used before to mark the zero-age post-AGB phase. \end{enumerate} Although the criterion proposed here has its own merit in the theoretical study of stellar evolution, it goes beyond a pure academic exercise.
The criterion suggests that the time scales involved play a role in determining the evolution of the envelope, mainly because they are important parameters in the mass loss process, e.g., via pulsation properties. Future theoretical studies will have to examine in more detail how these time scales affect the mass loss process. Also, the suggested theoretical criterion must be examined with detailed numerical simulations, to verify that the criterion $\Delta Q=0$, or $\Delta Q_{\rm KH} =0$, does not give wrong results. The numerical study should also determine which of these two similar criteria fits observational parameters better, hence connecting theoretical studies with observations. \acknowledgements This research was supported by the Asher Space Research Institute at the Technion.
\section{Introduction} Let $R=k[x_1,\ldots,x_d]$ be the polynomial ring over a field $k$, and let $\mathcal{D}_R$ be the ring of $k$-linear differential operators on $R$. For every non-zero $f \in R$, the natural action of $\mathcal{D}_R$ on $R$ extends uniquely to an action on $R_f$. In characteristic 0, it has been proven by Bernstein in the polynomial ring case (cf.\,\cite[Corollary 1.4]{Bernstein1972}) that $R_f$ has finite length as a $\mathcal{D}_R$-module. The minimal $m$ such that $R_f = \mathcal{D}_R \cdot \frac{1}{f^m}$ is related to Bernstein-Sato polynomials (cf. \cite[Theorem 23.7, Definition 23.8, and Corollary 23.9]{TwentyFourHours}), and there are examples in which $m >1$ (e.g.\,\cite[Example 23.13]{TwentyFourHours}). Remarkably, in positive characteristic, not only is $R_f$ finitely generated as a $\mathcal{D}_R$-module \cite[Proposition 3.3]{Bog}, but it is generated by $\frac{1}{f}$ (cf.\,\cite[Theorem 3.7 and Corollary 3.8]{AMBL}). This is shown by proving the existence of a differential operator $\delta \in \mathcal{D}_R$ such that $\delta\left(1/f\right) = 1/f^p$, i.e., a differential operator that acts as the Frobenius homomorphism on $1/f$. The main result of this paper exhibits an algorithm that, given $f\in R$, produces a differential operator $\delta \in \mathcal{D}_R$ such that $\delta\left(1/f\right) = 1/f^p$. We will call such a $\delta$ a {\it differential operator associated with $f$}. Our method is described in full detail in Section \ref{the algorithm described in detail}. Moreover, this procedure has been implemented using the computer algebra system Macaulay2. Assume that $\operatorname{char}(k) = p >0$ and that $[k:k^p]<\infty$. For $e \geqslant 1$, let $R^{p^e}$ be the subring of $R$ consisting of all $p^e$-th powers of elements in $R$, which can also be viewed as the image of the $e$-th iteration of the Frobenius endomorphism $F:R \to R$. We set $R^{p^0} := R$.
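Although the implementation referred to above is written in Macaulay2, the basic mechanism is easy to experiment with. The following Python sketch (a toy model in one variable over the prime field $\mathbb{F}_p$; all helper names are ours and are not part of any existing implementation) encodes the divided-power operators $D_{x,t}$ on polynomials, represented as dictionaries of exponent-coefficient pairs, and checks their $R^p$-linearity together with the identity $D_{x,p-1}(x^{p-1})=1$, which is the operator associated with $f=x$.

```python
from math import comb

P = 5  # the prime p; we work over the prime field F_p

def dpow(poly, t):
    """Divided power D_{x,t}: sends x^m to binom(m,t) x^(m-t), mod p.
    Polynomials are dicts {exponent: coefficient mod p}."""
    out = {}
    for m, c in poly.items():
        if m >= t:
            out[m - t] = (out.get(m - t, 0) + c * comb(m, t)) % P
    return {m: c for m, c in out.items() if c}

def mul(f, g):
    """Product of two polynomials over F_p."""
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            out[m1 + m2] = (out.get(m1 + m2, 0) + c1 * c2) % P
    return {m: c for m, c in out.items() if c}

def frob(f):
    """Frobenius f -> f^p; over F_p this just multiplies exponents by p."""
    return {m * P: c for m, c in f.items()}

# D_{x,p-1}(x^{p-1}) = 1: the operator acting as Frobenius on 1/x
assert dpow({P - 1: 1}, P - 1) == {0: 1}

# R^p-linearity: D_{x,p-1}(g^p * h) = g^p * D_{x,p-1}(h)
g = {2: 3, 0: 1}   # g = 3x^2 + 1
h = {4: 2, 9: 3}   # h = 2x^4 + 3x^9
assert dpow(mul(frob(g), h), P - 1) == mul(frob(g), dpow(h, P - 1))
```

The vanishing pattern of the binomial coefficients modulo $p$ (governed by Lucas' theorem) is what makes these operators $R^p$-linear; the same mechanism is exploited in Section \ref{the algorithm described in detail}.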
It is shown in \cite[1.4.9]{Yek} that $\mathcal{D}_R$ is equal to the increasing union $ \bigcup_{e \geqslant 0} \operatorname{End}_{R^{p^e}}(R)$. Therefore, given $\delta \in \mathcal{D}_R$, there exists $e \geqslant 0$ such that $\delta \in \operatorname{End}_{R^{p^e}}(R)$ but $\delta \notin \operatorname{End}_{R^{p^{e'}}}(R)$ for any $e' < e$. Given a non-zero polynomial $f \in R$, we have seen above that there exists $\delta \in \mathcal{D}_R$ that is associated with $f$. We say that $f$ has level $e$ if there is such a $\delta$ that is $R^{p^e}$-linear, and no $R^{p^{e'}}$-linear differential operator $\delta'$, with $e'<e$, is associated with $f$. In Section \ref{monomial section}, we study the case when $f$ is a monomial; indeed, in Theorem \ref{monomial} we determine the level of $f$, and we give an explicit description of the differential operator $\delta$ associated with $f$. We also describe explicitly $I_e(f^{p^e-1})$, the ideal of $p^e$-th roots of $f^{p^e-1}$, where $e$ is the level of $f$. The ideal $I_e(f^{p^e-1})$ can be defined as the unique smallest ideal $J \subseteq R$ such that $f^{p^e-1} \in J^{[p^e]} = (j^{p^e} \mid j \in J)$ (see for example\, \cite[Definition 2.2]{BMS}). In Section \ref{section of level one} we present some families of polynomials which have level one, and we give some examples. In Section \ref{elliptic curves section} we focus on elliptic curves $\mathcal{C} \subseteq \mathbb{P}^2_{\mathbb{F}_p}$, where $\mathbb{F}_p$ is the finite field with $p$ elements. We prove the following characterization: \begin{theorem} Let $p \in \mathbb{Z}$ be a prime number and let $\mathcal{C} \subseteq \mathbb{P}^2_{\mathbb{F}_p}$ be an elliptic curve defined by a cubic $f(x,y,z) \in \mathbb{F}_p[x,y,z]$. Then \begin{enumerate}[(i)] \setlength{\itemsep}{2pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item $\mathcal{C}$ is ordinary if and only if $f$ has level one. \item $\mathcal{C}$ is supersingular if and only if $f$ has level two.
\end{enumerate} \end{theorem} All computations in this article are made using the computer software Macaulay2 \cite{M2}. \section{Preliminaries} The goal of this section is to review the definitions, notations and results that we use throughout this paper. Unless otherwise specified, $k$ will denote a perfect field of prime characteristic $p$. Under this assumption, it is known (see \cite[IV, Th\'eor\`eme 16.11.2]{EGA_4_4}) that the ring of $k$-linear differential operators over $R=k[x_1,\ldots,x_d]$ can be expressed in the following way: \[ \displaystyle \mathcal{D}_R:=R \left\langle D_{x_i,t} \mid i=1,\ldots,d \mbox{ and } t \geqslant 1 \right\rangle, \ \ \mbox{ where } D_{x_i,t} := \frac{1}{t!} \frac{\partial^t}{\partial x_i^t}. \] This allows us to regard $\mathcal{D}_R$ as a filtered ring. Indeed, one has that \[ \mathcal{D}_R=\bigcup_{e\geqslant 0}\mathcal{D}_R^{(e)},\text{ where }\mathcal{D}_R^{(e)}:=R \left\langle D_{x_i,t} \mid i=1,\ldots,d \mbox{ and } 1\leqslant t\leqslant p^e-1 \right\rangle . \] Moreover, it is shown by A.\,Yekutieli (see \cite[1.4.9]{Yek}) that $\mathcal{D}_R^{(e)}=\operatorname{End}_{R^{p^e}} (R)$, hence the previous filtration does not depend on the choice of coordinates. Now, we fix additional notation; given an $\mathbf{\alpha}=(a_1,\ldots ,a_d)\in\mathbb{N}^d$ we shall use the following multi-index notation: \[ \mathbf{x}^{\mathbf{\alpha}}:=x_1^{a_1}\cdots x_d^{a_d}. \] With this notation, we set $\norm{\mathbf{x}^{\mathbf{\alpha}}}:=\max\{a_1,\ldots ,a_d\}$. By abuse of notation, we will sometimes also use $\norm{\mathbf{\alpha}}$ instead of $\norm{\mathbf{x}^{\mathbf{\alpha}}}$. 
For any polynomial $g\in k [x_1,\ldots ,x_d]$, we define \[ \norm{g}:=\max_{\mathbf{x}^{\mathbf{\alpha}}\in\operatorname{supp} (g)} \norm{\mathbf{x}^{\mathbf{\alpha}}}, \] where if $g=\sum_{\mathbf{\alpha}\in\mathbb{N}^d} g_{\mathbf{\alpha}} \mathbf{x}^{\mathbf{\alpha}}$ (such that $g_{\mathbf{\alpha}}=0$ for all but a finite number of terms) the support of $g$ is defined as \[ \operatorname{supp} (g):=\left\{x^{\mathbf{\alpha}}\in R\mid\ g_{\mathbf{\alpha}}\neq 0\right\}. \] Moreover, we also define $\deg (g)$ as the total degree of $g$. Finally, for any ideal $J \subseteq R$, $J^{[p^e]}$ will denote the ideal generated by all the $p^e$-th powers of elements in $J$, or equivalently the ideal generated by the $p^e$-th powers of any set of generators of $J$. \subsection{The ideal of $p^e$-th roots} Due to the central role which the ideal of $p^e$-th roots plays throughout this article, we review some well-known definitions and facts (cf.\,\cite[page 465]{AMBL} and \cite[Definition 2.2]{BMS}). \begin{definition} Given $g\in R$, we set $I_e (g)$ to be the smallest ideal $J\subseteq R$ such that $g\in J^{[p^e]}$. \end{definition} \begin{remark} Assume that $k$ is perfect. In our assumptions, the ring $R$ is a free $R^{p^e}$-module, with basis given by the monomials $\{\mathbf{x}^{\mathbf{\alpha}} \mid \norm{\alpha} \leqslant p^e-1\}$. If we write \[ g=\sum_{0\leqslant\norm{\mathbf{\alpha}}\leqslant p^{e}-1}g_{\mathbf{\alpha}}^{p^e} \mathbf{x}^{\alpha}, \] then $I_e (g)$ is the ideal of $R$ generated by all the elements $g_{\mathbf{\alpha}}$ \cite[Proposition 2.5]{BMS}. \end{remark} \begin{remark} \label{I_e homog} Notice that, if $g$ is a homogeneous polynomial, then, for all $e \in \mathbb{N}$, $I_e(g)$ is a homogeneous ideal. 
Indeed, if we write $g=\sum_{0\leqslant\norm{\mathbf{\alpha}}\leqslant p^{e}-1}g_{\mathbf{\alpha}}^{p^e} \mathbf{x}^{\alpha}$, then we can assume without loss of generality that every $g_\alpha^{p^e} \mathbf{x}^{\alpha}$ has degree equal to $\deg(g)$. But then $g_\alpha$ must be homogeneous of degree \[ \displaystyle \deg(g_\alpha) = \frac{\deg(g)-\deg(\mathbf{x}^{\alpha})}{p^e}. \] Since $I_e(g)$ is generated by the elements $g_\alpha$, it is a homogeneous ideal. \end{remark} We have the following easy properties (see \cite[Lemma 3.2 and Lemma 3.4]{AMBL} for details). \begin{proposition}\label{propiedades del ideal raiz} Given a non-zero polynomial $f\in R$, and given $e\geqslant 0$, the following statements hold. \begin{enumerate}[(i)] \item $I_e (f)=I_{e+1} (f^p)$. \item $I_e (f^{p^e-1})\supseteq I_{e+1} (f^{p^{e+1}-1})$. \end{enumerate} \end{proposition} Note that part (ii) of Proposition \ref{propiedades del ideal raiz} produces the following decreasing chain of ideals: \begin{equation}\label{cadena incordio} R=I_0 (f^{p^0-1})\supseteq I_1 (f^{p-1})\supseteq I_2(f^{p^2-1})\supseteq I_3 (f^{p^3-1})\supseteq\ldots \end{equation} It is shown in \cite{AMBL} that under our assumptions this chain stabilizes. The smallest integer $e \in \mathbb{N}$ at which the chain stabilizes plays a central role in this paper. We summarize the facts that we will need in the following theorem. See \cite[Proposition 3.5, and Theorem 3.7]{AMBL} for details and proofs. \begin{theorem}\label{first step of the algorithm} Let $k$ be a perfect field of prime characteristic $p$, let $R=k[x_1,\ldots ,x_d]$, and let $f\in R \smallsetminus \{0\}$. Define \[ \displaystyle e:=\inf\left\{s\geqslant 1\mid\ I_{s-1}\left(f^{p^{s-1}-1}\right)=I_s\left(f^{p^s-1}\right)\right\}. \] Then, the following assertions hold.
\begin{enumerate}[(i)] \item The chain of ideals \eqref{cadena incordio} stabilizes rigidly, that is, $e < \infty$ and $I_{e-1}\left(f^{p^{e-1}-1}\right)=I_{e+s} \left(f^{p^{e+s}-1}\right)$ for any $s\geqslant 0$. \item One has \[ e=\min\left\{s\geqslant 1\mid\ f^{p^s-p}\in I_s\left(f^{p^s-1}\right)^{[p^s]}\right\}, \] and $e\leqslant\deg(f)$. \item There exists $\delta\in\mathcal{D}_R^{(e)}$ such that $\delta(f^{p^e-1}) = f^{p^e-p}$, or equivalently such that $\delta (1/f)=1/f^p$. \item There is no $\delta '\in\mathcal{D}_R^{(e')}$, with $e'<e$, such that $\delta ' (1/f)=1/f^p$. \end{enumerate} \end{theorem} Motivated by Theorem \ref{first step of the algorithm}, we make the following definition. \begin{definition} For a non-zero polynomial $f \in R$, we call the integer $e$ defined in Theorem \ref{first step of the algorithm} the {\it level of $f$}. Also, we will say that $\delta \in \mathcal{D}_R^{(e)}$ such that $\delta(f^{p^e-1}) = f^{p^e-p}$, or equivalently such that $\delta(1/f) = 1/f^p$, is a differential operator {\it associated with $f$}. \end{definition} \section{The algorithm}\label{the algorithm described in detail} Let $k$ be a computable perfect field of prime characteristic $p$ (e.g., $k$ is finite). Let $R=k[x_1,\ldots ,x_d]$, and let $f\in R$ be a non-zero polynomial. We now describe in detail the algorithm that computes a differential operator $\delta \in \mathcal{D}_R$ associated with $f$. \begin{itemize} \item {\bf \underline{Step 1}.} Find the smallest integer $e \in \mathbb{N}$ such that $I_e(f^{p^e-p}) = I_e(f^{p^e-1})$. There is an implemented algorithm for the computation of the level of a given polynomial $f \in R$. A description follows: \begin{algorithm}\label{calculo del nivel} Let $k$ be a computable perfect field of prime characteristic $p$ (e.g., finite), let $R=k[x_1,\ldots ,x_d]$, and let $f\in R$. These data act as the input of the procedure. Initialize $e=0$ and $flag=true$.
While $flag$ has the value $true$, execute the following commands: \begin{enumerate}[(i)] \item Assign to $e$ the value $e+1$, and to $q$ the value $p^e$. \item Compute $I_e (f^{q-1})$. \item Assign to $J$ the value $I_e \left(f^{q-1}\right)^{[q]}$. \item If $f^{q-p}\in J$, then $flag=false$; otherwise, go back to step (i). \end{enumerate} At the end of this method, return the pair $\left(e,I_e\left(f^{p^e-1}\right)\right)$. Such $e$ is exactly the $e$ described in Theorem \ref{first step of the algorithm}, i.e., the level of $f$. \end{algorithm} \begin{remark} Since the level $e$ is always at most $\deg(f)$, we can ensure that the While loop in Algorithm \ref{calculo del nivel} finishes after, at most, $\deg(f)$ iterations. Notice that, a priori, there is a black box in this algorithm, namely the computation of $I_e (f^{q-1})$ at each step. However, the calculation of the ideal of $p^e$-th roots is well known (cf. \cite[Section 6]{KatzmanSchwede2012}). \end{remark} \begin{remark} As pointed out by E.\,Canton in \cite[Definition 2.3]{Canton15}, the so-called \emph{non-F-pure ideal} of $f$ introduced by O.\,Fujino, K.\,Schwede and S.\,Takagi in \cite[Definition 14.4]{FujinoSchwedeTakagi11} turns out to be $I_e \left(f^{p^e-1}\right)$, where $e$ is the level of $f$ (see \cite[Remark 16.2]{FujinoSchwedeTakagi11}). Therefore, Algorithm \ref{calculo del nivel} provides a procedure to calculate the non-F-pure ideal. \end{remark} \item {\bf \underline{Step 2}.} For $e \in \mathbb{N}$ as in {\bf Step 1}, write $f^{p^e-1} = \sum_{i=1}^n c_i^{p^e} \mu_i$, where $\{\mu_1,\ldots,\mu_n\}$ is the basis of $R$ as an $R^{p^e}$-module consisting of all the monomials $x_1^{a_1}\cdots x_d^{a_d}$, with $a_i\leqslant p^e-1$ for all $i=1,\ldots,d$. \begin{claim}\label{Dirac delta claim} For all $i =1,\ldots,n$ there exists $\delta_i \in \mathcal{D}_R^{(e)}$ such that $\delta_i(\mu_j) = 1$ if $i=j$ and $\delta_i(\mu_j) = 0$ if $i\ne j$.
\end{claim} \begin{proof} For $i \in \{1,\ldots,n\}$ and $\mu_i = x_1^{a_1}\cdots x_d^{a_d}$ consider $\nu_i:=x_1^{p^e-1-a_1}\cdots x_d^{p^e-1-a_d}$, which is a monomial in $R$ because $a_k \leqslant p^e-1$ for all $k=1,\ldots,d$. Notice that $\nu_i \mu_j = (x_1\cdots x_d)^{p^e-1}$ if and only if $i=j$. Then set \[ \displaystyle \delta_i := \left( \prod_{k=1}^d D_{x_k,p^e-1} \right) \cdot \nu_i \in \mathcal{D}_R^{(e)}. \] Notice that $\delta_i(\mu_i) = 1$, and that if $\mu_j=x_1^{b_1}\cdots x_d^{b_d}$, then $\delta_i(\mu_j) = 0$ if $b_k < a_k$ for some $k\in\{1,\ldots,d\}$. So let us assume that $a_k \leqslant b_k \leqslant p^e-1$ for all $k=1,\ldots,d$, and that there is $s\in \{1,\ldots,d\}$ such that $a_s<b_s$. Note that by definition of $\nu_i$ we have that $\nu_i\mu_j = x_1^{r_1}\cdots x_d^{r_d}$, with $p^e \leqslant r_s \leqslant 2p^e-2$, so that we can write $r_s = p^e+n$ for some integer $n$ with $0 \leqslant n \leqslant p^e-2$. \begin{subclaim}The coefficient of $D_{x_s,p^e-1}(\nu_i\mu_j)$ is \[ {p^e+n \choose p^e-1} \equiv 0 \mod p. \] \end{subclaim} \begin{proof} As a consequence of a theorem proved by Lucas in \cite[pp.\,51--52]{Lucas}, it is enough to check that at least one of the digits of the base $p$ expansion of $p^e-1$ is greater than the corresponding digit in the base $p$ expansion of $p^e+n$. The base $p$ expansion of $p^e-1$ is given by \[ p^e-1 = (p-1) (1+p+\cdots+ p^{e-1}) = (p-1)p^0 + (p-1)p^1 + \cdots + (p-1)p^{e-1}, \] so that the subclaim is proved unless the first $e$ digits of $p^e+n$ are $p-1$ as well. But in this case, since $p^e+n>p^e-1$ we would get \[ \displaystyle p^e+n \geqslant (p-1)p^0 + (p-1)p^1 + \cdots + (p-1)p^{e-1} + p^e = 2p^e-1, \] a contradiction since $n \leqslant p^e-2$. \end{proof} The Subclaim shows that $D_{x_s,p^e-1}(\nu_i\mu_j) = 0$ for all $\mu_j$ with $j \ne i$. 
Therefore, using that $\delta_i \in \mathcal{D}_R^{(e)} = \operatorname{End}_{R^{p^e}}(R)$, we get \[ \displaystyle \delta_i(f^{p^e-1}) = \delta_i\left(\sum_{j=1}^n c_j^{p^e} \mu_j\right)=\sum_{j=1}^n c_j^{p^e} \delta_i(\mu_j) = c_i^{p^e}.\qedhere \] \end{proof} \item {\bf \underline{Step 3}.} Since $1 \in \mathcal{D}_R^{(e)}$, for $e \in \mathbb{N}$ as in {\bf Step 1} we have \[ f^{p^e-p} \in \mathcal{D}_R^{(e)}(f^{p^e-p}) = I_e(f^{p^e-p})^{[p^e]} = I_e(f^{p^e-1})^{[p^e]} = (c_1,\ldots,c_n)^{[p^e]}. \] In particular there exist $\alpha_1,\ldots,\alpha_n \in R$ such that $f^{p^e-p} = \sum_{i=1}^n \alpha_i c_i^{p^e}$. Consider $\delta_i \in \mathcal{D}_R^{(e)}$ as in {\bf Step 2}, so that $\delta_i(f^{p^e-1}) = c_i^{p^e}$, and set \[ \displaystyle \delta:= \sum_{i=1}^n \alpha_i \delta_i \in \mathcal{D}_R^{(e)}. \] With this choice we have \[ \displaystyle \delta(f^{p^e-1}) = \delta\left(\sum_{j=1}^n c_j^{p^e} \mu_j\right) = \sum_{i,j=1}^n c_j^{p^e}\alpha_i \delta_i(\mu_j) = \sum_{i=1}^n \alpha_i c_i^{p^e} = f^{p^e-p}, \] and using that $\delta \in \mathcal{D}_R^{(e)}$ we finally get \[ \displaystyle \delta\left(\frac{1}{f}\right) = \delta \left(\frac{f^{p^e-1}}{f^{p^e}}\right) = \frac{1}{f^{p^e}} \delta \left(f^{p^e-1}\right) = \frac{f^{p^e-p}}{f^{p^e}} = \frac{1}{f^p}. \] \end{itemize} \section{The monomial case}\label{monomial section} Throughout this section, let $k$ be a perfect field and let $R=k[x_1,\ldots,x_d]$. We now analyze the case when $f \in R$ is a monomial. First we show a lower bound for the level of $f$. \begin{lemma}\label{lower bound of the level}Let $f=x_1^{a_1}\cdots x_d^{a_d}$ be a monomial in $R=k[x_1,\ldots,x_d]$, with $a_i >0$ for all $i=1,\ldots,d$. Let $\delta \in \mathcal{D}_R^{(e)}$ be such that $ \delta\left(1/f\right) = 1/f^p$. Then, setting $a:=\norm{f} = \max \{a_i \mid 1 \leqslant i \leqslant d\}$, we have \[ \displaystyle e \geqslant \left\lceil \log_p(a) \right\rceil +1. 
\] \end{lemma} \begin{proof} It suffices to show that for $t := \left\lceil \log_p(a)\right\rceil$ we have $I_{t}(f^{p^{t}-p}) \supsetneq I_{t}(f^{p^{t}-1})$. This is because if the chain \[ \displaystyle I_t(f^{p^t-p^{t-1}}) \supseteq I_t(f^{p^t-p^{t-2}}) \supseteq \ldots \supseteq I_t(f^{p^t-p}) \supsetneq I_t(f^{p^t-1}) \] stabilized before that step, then it would be stable at it as well, and the smallest $s$ such that $I_s(f^{p^s-p}) = I_s(f^{p^s-1})$ is precisely the level of $f$ (see Theorem \ref{first step of the algorithm}). Notice that $t=0$ if and only if $a_i=1$ for all $i$. For such a monomial the Lemma is trivially true. So let us assume that $t \geqslant 1$, or equivalently that $a_i \geqslant 2$ for at least one $i \in \{1,\ldots, d\}$. Let \[ \displaystyle j_i:= \min\{j \in \mathbb{N} \mid jp^t \geqslant pa_i\}, \] and notice that $j_i \leqslant a_i$ for all $i$; moreover, by choice of $t$ we have $p^{t-1} < a$, so that $j_i \geqslant 2$ for at least one $i$, say $j_1 \geqslant 2$. Then \[ f^{p^t-p} = x_1^{p^ta_1-pa_1} \cdots x_d^{p^ta_d - pa_d} = (x_1^{a_1-j_1}\cdots x_d^{a_d-j_d})^{p^t} \cdot x_1^{j_1p^t-pa_1}\cdots x_d^{j_dp^t-pa_d}. \] Since $(j_i-1)p^t<pa_i$ by definition of $j_i$, we have $0 \leqslant j_ip^t-pa_i < p^t$. This shows that $I_t(f^{p^t-p}) = (x_1^{a_1-j_1}\cdots x_d^{a_d-j_d})$. On the other hand: \[ f^{p^t-1} = x_1^{p^ta_1-a_1} \cdots x_d^{p^ta_d - a_d} = (x_1^{a_1-1}\cdots x_d^{a_d-1})^{p^t} \cdot x_1^{p^t-a_1} \cdots x_d^{p^t-a_d}, \] which makes sense because $p^t\geqslant a_i$ for all $i$. This shows that $I_t(f^{p^t-1}) = (x_1^{a_1-1}\cdots x_d^{a_d-1})$, and because $j_1 \geqslant 2$ we have \[ I_t(f^{p^t-p})= (x_1^{a_1-j_1}\cdots x_d^{a_d-j_d}) \supseteq (x_1^{a_1-2}\cdots x_d^{a_d-1}) \supsetneq (x_1^{a_1-1}\cdots x_d^{a_d-1}) = I_t(f^{p^t-1}).\qedhere \] \end{proof} \begin{theorem} \label{monomial} Let $f=x_1^{a_1}\cdots x_d^{a_d}$ be a monomial in $k[x_1,\ldots,x_d]$, with $a_i >0$ for all $i=1,\ldots,d$.
Let $a:= \norm{f} = \max \{a_i \mid 1 \leqslant i \leqslant d\}$. Then $f$ has level $e := \left\lceil \log_p(a) \right\rceil+1$, and $I_e(f^{p^e-1}) = (x_1^{a_1-1}\cdots x_d^{a_d-1})$. Furthermore, \[ \displaystyle \delta:= \prod_{i=1}^d \left(x_i^{p^e-pa_i} \cdot D_{x_i,p^e-1} \cdot x_i^{a_i-1} \right) \in \mathcal{D}_R^{(e)} \] is a differential operator associated with $f$. \end{theorem} \begin{proof} Set $e := \left\lceil \log_p(a) \right\rceil+1$. The computation in the proof of Lemma \ref{lower bound of the level} only uses the inequality $p^t \geqslant a_i$, so it applies verbatim with $e$ in place of $t$ and shows that $I_e(f^{p^e-1}) = (x_1^{a_1-1}\cdots x_d^{a_d-1})$; keeping this fact (together with the lower bound of Lemma \ref{lower bound of the level}) in mind, it is enough to check that $\delta(f^{p^e-1}) = f^{p^e-p}$. Indeed, we have \begin{eqnarray*} \begin{array}{c} \displaystyle \delta(f^{p^e-1}) = \delta\left(x_1^{p^ea_1-a_1} \cdots x_d^{p^ea_d-a_d}\right) = \left(x_1^{a_1-1} \cdots x_d^{a_d-1}\right)^{p^e} \delta\left(x_1^{p^e-a_1} \cdots x_d^{p^e-a_d}\right) = \\ \\ \displaystyle = \prod_{i=1}^d \left(x_i^{p^ea_i-pa_i}\cdot D_{x_i,p^e-1}(x_i^{p^e-1})\right) = x_1^{p^ea_1-pa_1} \cdots x_d^{p^ea_d-pa_d} = f^{p^e-p}, \end{array} \end{eqnarray*} and therefore the proof is complete. \end{proof} Regarding the level $e$ obtained in Theorem \ref{monomial}, one might ask whether, given any non-zero $f\in R$, its level would always be bounded above by $\lceil\log_p (\norm{f})\rceil +1$. Unfortunately, this is not the case, as the following example illustrates. \begin{example} Consider $f:=xy^3+x^3\in\mathbb{F}_2[x,y]$. In this case, one can check with Macaulay2 that the level of $f$ is $4$, while $\lceil\log_2 (\norm{f})\rceil +1=3$. In fact, the level is even strictly greater than $\lceil\log_2(\deg(f))\rceil + 1=3$. \end{example} The monomial in Theorem \ref{monomial} is assumed to be of the form $x_1^{a_1}\cdots x_d^{a_d}$. Using a suitable linear change of coordinates, we immediately get the following Corollary, which includes the general monomial case.
\begin{corollary} \label{linforms} Let $n\leqslant d$, let $f=\ell_1^{a_1}\cdots \ell_n^{a_n}$ be a product of powers of linear forms which are linearly independent over $k$, and let $a:=\max \{a_i \mid 1\leqslant i \leqslant n\}$. Then $f$ has level $e = \left\lceil \log_p(a) \right\rceil+1$, the ideal of $p^e$-th roots is $I_e(f^{p^e-1}) = (\ell_1^{a_1-1}\cdots \ell_n^{a_n-1})$, and \[ \displaystyle \delta:= \prod_{i=1}^n \left(\ell_i^{p^e-pa_i} \cdot D_{\ell_i,p^e-1} \cdot \ell_i^{a_i-1} \right) \in \mathcal{D}_R^{(e)} \] is a differential operator associated with $f$. Here, if $\ell_i = \sum_{j=1}^d \lambda_{ij} x_j$, then the differential operator $D_{\ell_i, p^e-1}$ is defined as $\sum_{j=1}^d \lambda_{ij} D_{x_j,p^e-1}$. \end{corollary} \section{Families of level one}\label{section of level one} Polynomials of level one, that is polynomials $f$ such that $I_1(f^{p-1}) = R$, are somewhat special. For instance, let $f,g \in R$ and let $\delta \in \mathcal{D}_R$ be associated with $f$. Assume that $e = 1$; then for $\delta':=\delta\left( g^{p-1} \cdot \underline{ \ \ }\right)$ we get \[ \displaystyle \delta'\left(\frac{g}{f}\right) = \delta\left(\frac{g^p}{f}\right) = g^p \cdot \delta \left(\frac{1}{f}\right) = \left(\frac{g}{f}\right)^p. \] The authors do not know whether, for any choice of $f,g \in R$, $f \ne 0$, there always exists $\delta' \in \mathcal{D}_R$ such that $\delta'(g/f) = (g/f)^p$. In fact, when $\delta \in \mathcal{D}_R^{(e)}$ for $e \geqslant 2$, the best we can get is $\delta'\left(g/f\right) = g^{p^e}/f^{p}$, with $\delta':= \delta\left(g^{p^e-1} \cdot \underline{ \ \ } \right)$. On the other hand, for any $f \in R$ we have $R_f = \mathcal{D}_R \cdot \frac{1}{f}$ and therefore, for any $g \in R$, there exists $\delta' \in \mathcal{D}_R$ such that $\delta'\left(1/f\right) = g^p/f^p$. In fact it is enough to set $\delta':= g^{p} \cdot \delta$. We will now exhibit some families of polynomials that have level one, together with some examples.
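The closed formulas of Theorem \ref{monomial} can also be checked numerically. The following Python sketch (a toy verification over the prime field, independent of the Macaulay2 implementation; the helper names are ours) applies, one variable at a time, the factor $x_i^{p^e-pa_i} \cdot D_{x_i,p^e-1} \cdot x_i^{a_i-1}$ of the operator $\delta$ to the corresponding factor of $f^{p^e-1}$, and checks that the output is the corresponding factor of $f^{p^e-p}$ with coefficient $1$.

```python
from math import comb

def level_of_monomial(p, a):
    """Level of a monomial with max exponent a: ceil(log_p(a)) + 1,
    computed without floating point."""
    t = 0
    while p ** t < a:
        t += 1
    return t + 1

def apply_delta_factor(p, e, a):
    """Apply x^{p^e - p a} * D_{x, p^e - 1} * x^{a - 1} to x^{(p^e - 1) a}.
    Returns (coefficient mod p, exponent of the result)."""
    q = p ** e
    m = (q - 1) * a + (a - 1)          # after multiplying by x^{a-1}
    coef = comb(m, q - 1) % p          # divided power D_{x, q-1}
    exp = (m - (q - 1)) + (q - p * a)  # after multiplying by x^{q - p a}
    return coef, exp

# f = x^3 y^2 over F_2: ||f|| = 3, so the level should be e = 3
p, exps = 2, (3, 2)
e = level_of_monomial(p, max(exps))
assert e == 3
q = p ** e
for a in exps:
    coef, exp = apply_delta_factor(p, e, a)
    assert coef == 1             # the binomial coefficient is a unit mod p
    assert exp == (q - p) * a    # exponent of f^{p^e - p} in this variable
```

That the coefficients come out as units is the same base-$p$ digit computation (via Lucas' theorem) used in the Subclaim of Section \ref{the algorithm described in detail}.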
However, before doing so, we want to single out the following elementary statement, because we will be using it repeatedly throughout this section. It may be regarded as a straightforward sufficient condition which ensures that a polynomial has level one. In this section, unless otherwise stated, $k$ will denote a perfect field and $R=k[x_1,\ldots,x_d]$ will be a polynomial ring over $k$. \begin{lemma}\label{sqfpowers_nonhomog} Let $f \in R$ be a non-zero polynomial, write \[ f^{p^e-1}=\sum_{0\leqslant\norm{\mathbf{\alpha}}\leqslant p^e-1}f_{\mathbf{\alpha}}^{p^e} \ \mathbf{x}^{\mathbf{\alpha}}, \] and assume that $f_{\mathbf{\beta}}$ is a unit for some $0\leqslant\norm{\mathbf{\beta}}\leqslant p^e-1$ and some $e\geqslant 1$. Then, $f$ has level one. \end{lemma} \begin{proof} By definition, we have that $I_e(f^{p^e-1}) = R$; on the other hand, we know that $I_1 (f^{p-1})\supseteq I_e(f^{p^e-1})$. Combining these two facts, it follows that $I_1(f^{p-1}) = R$, and therefore $f$ has level one. \end{proof} We can give an easy but useful characterization of homogeneous polynomials of level one. \begin{lemma} \label{sqfpowers} Let $f \in R$ be a homogeneous non-zero polynomial. Let $\{\mu_j\}_{j=1}^{p^d}:= \{\mathbf{x}^{\mathbf{\alpha}} \mid \norm{\alpha} \leqslant p-1\}$ be the monomial basis of $R$ as an $R^p$-module. Then $f$ has level one if and only if $\mu_j \in \operatorname{supp}(f^{p-1})$ for some $j = 1,\ldots,p^d$. \end{lemma} \begin{proof} Write \[ f^{p-1}=\sum_{0\leqslant\norm{\mathbf{\alpha}}\leqslant p-1}f_{\mathbf{\alpha}}^p \ \mathbf{x}^{\mathbf{\alpha}}. \] Note that, since $f$ is homogeneous, $I_1(f^{p-1})$ is a homogeneous ideal by Remark \ref{I_e homog}. If $f$ has level one, then $I_1(f^{p-1}) = (f_\alpha \mid 0 \leqslant \norm{\alpha} \leqslant p-1) = R$; therefore, there exists at least one coefficient $f_\beta$ that is outside of the irrelevant maximal ideal $\mathfrak{m}=(x_1,\ldots, x_d)$.
Write $f_\beta = \lambda + r$, with $\lambda \in k$ and $r \in \mathfrak{m}$. Then, we can write $f^{p-1} = \lambda^p \mathbf{x}^{\mathbf{\beta}} + h$ for some $h \in R$. Also, since $\{\mathbf{x}^{\mathbf{\alpha}} \mid 0 \leqslant \norm{\alpha} \leqslant p-1\}$ is a basis of $R$ as $R^p$-module, there is no cancellation between $\lambda^p \mathbf{x}^{\mathbf{\beta}}$ and $h$. Thus, $\mu_j=\mathbf{x}^{\mathbf{\beta}} \in \operatorname{supp}(f^{p-1})$. Conversely, if $\mu_j = \mathbf{x}^{\mathbf{\beta}} \in \operatorname{supp}(f^{p-1})$ for some $0 \leqslant \norm{\beta} \leqslant p-1$, then we can write again $f^{p-1} = \lambda^p \mathbf{x}^{\mathbf{\beta}} + h$ for some $\lambda \in k$ and some $h \in R$. Also, we can assume that $\mathbf{x}^{\mathbf{\beta}} \notin \operatorname{supp}(h)$. Then the coefficient $f_\beta^p$ of $\mathbf{x}^{\mathbf{\beta}}$ in the expansion of $f^{p-1}$ must be $\lambda^p+r^p = (\lambda+r)^p$ for some $r \in \mathfrak{m}$, and thus $\lambda+r \in I_1(f^{p-1})$. Since the latter is homogeneous (here we are using again Remark \ref{I_e homog}), we have in particular that $\lambda \in I_1(f^{p-1})$, which implies $I_1(f^{p-1}) = R$, and therefore $f$ has level one. \end{proof} \begin{proposition} \label{sqfree} Let $f \in R$ be a non-zero polynomial whose support contains a squarefree term involving a variable that does not appear in any other term of the support of $f$. Then $f$ has level one. \end{proposition} \begin{proof} Without loss of generality we can assume that $x_{1}\cdots x_{n} \in \operatorname{supp}(f)$, and that $x_1$ does not appear in any other term of $\operatorname{supp}(f)$. Write $f=\lambda_0 x_1 \cdots x_n+ \sum_{i=1}^s \lambda_i m_i$, where $m_i=x^{\alpha_i}$ and $\alpha_i \in \mathbb{N}^d$ are of the form $(0,\alpha_{i2},\ldots,\alpha_{id})$ for $i=1,\ldots,s$. 
Then, \[ f^{p-1} = \sum_{i_0+i_1+\cdots+i_s = p-1} {p-1 \choose i_0, \ldots, i_s} \lambda_0^{i_0}\lambda_1^{i_1} \cdots \lambda_s^{i_s} (x_1\cdots x_n)^{i_0} m_1^{i_1}\cdots m_s^{i_s}. \] Notice that in order for a term in the support of $f^{p-1}$ to be divisible by $x_1^{p-1}$ it is necessary that $i_0 = p-1$, in which case $i_1 = \ldots = i_s = 0$. Hence $x_1^{p-1} \cdots x_n^{p-1}$ is in the support of $f^{p-1}$, appearing with coefficient $\lambda_0^{p-1} \ne 0$. Then we can write \[ \displaystyle f^{p-1} = \lambda_0^{p-1} x_1^{p-1} \cdots x_n^{p-1} + \sum_{{\tiny \begin{array}{c} 0 \leqslant \norm{\alpha} \leqslant p-1 \\ \alpha_1\ne p-1\end{array}}} f_\alpha^p \ \mathbf{x}^\alpha, \] and the Proposition now follows from Lemma \ref{sqfpowers_nonhomog}, since $\lambda_0^{p-1}$ is a unit. \end{proof} \begin{example} Let $f=x^2+y^2+xyz \in \mathbb{F}_p[x,y,z]$. Since $z$ appears in the squarefree term $xyz$ and nowhere else in the support of $f$, we have that $f$ has level one by Proposition \ref{sqfree}. In fact, $\delta:= D_{x,p-1}D_{y,p-1}D_{z,p-1} \in \mathcal{D}_R^{(1)}$ is a differential operator associated with $f$. \end{example} \begin{proposition} \label{allsqfree} Let $f \in R$ be a non-zero polynomial of degree $n$ such that every element of its support is a squarefree monomial. Then $f$ has level one. \end{proposition} \begin{proof} Without loss of generality we can assume that $x_{1}\cdots x_n \in \operatorname{supp}(f)$. We want to show that we can apply Lemma \ref{sqfpowers_nonhomog}. Let $f= \lambda_0 x_1\cdots x_n+ \sum_{i=1}^s \lambda_i m_i$, where the monomials $m_i$ are squarefree of degrees $d_i:= \deg(m_i) \leqslant n$, which we can assume are different from $x_1\cdots x_n$. Then \[ f^{p-1} = \sum_{i_0+i_1+\cdots+i_s = p-1} {p-1 \choose i_0, \ldots, i_s} \lambda_0^{i_0}\lambda_1^{i_1} \cdots \lambda_s^{i_s} (x_1\cdots x_n)^{i_0} m_1^{i_1}\cdots m_s^{i_s}.
\] Note that the choice $i_0=p-1$, $i_1=\ldots=i_s=0$ gives the monomial $\lambda_0^{p-1} (x_1\cdots x_n)^{p-1}$, and we want to show that this choice of indices is the only one that gives such a monomial. By way of contradiction, assume that $(x_1\cdots x_n)^{i_0} m_1^{i_1}\cdots m_s^{i_s} = (x_1\cdots x_n)^{p-1}$ for a choice of indices with $(i_1,\ldots ,i_s)\neq (0,\ldots ,0)$; then necessarily each $m_i$ divides $x_1\cdots x_n$, because they are squarefree. Since we are assuming that none of the monomials $m_i$ is equal to $x_1\cdots x_n$, we must have that $\deg(m_i) < n$. But then \[ \displaystyle \deg\left((x_1\cdots x_n)^{i_0} m_1^{i_1}\cdots m_s^{i_s}\right) = ni_0+d_1i_1+\cdots+d_si_s < n(i_0+i_1+\cdots+i_s) = n(p-1), \] which is a contradiction because $(x_1\cdots x_n)^{i_0} m_1^{i_1}\cdots m_s^{i_s} = (x_1\cdots x_n)^{p-1}$, and the degree of the latter is $n(p-1)$. Therefore if we write \[ \displaystyle f^{p-1} = \sum_{0 \leqslant \norm{\alpha} \leqslant p-1} f_\alpha^p \ \mathbf{x}^\alpha, \] then the coefficient of $x_1^{p-1}\cdots x_n^{p-1}$ is precisely $\lambda_0^{p-1}$, which is a unit. Using Lemma \ref{sqfpowers_nonhomog}, the Proposition now follows. \end{proof} \begin{example} Let $R=\mathbb{F}_p[X_{ij}]_{1\leqslant i,j \leqslant n}$ be a polynomial ring in $n^2$ variables and let $f=\det(X_{ij})$. Since its support consists only of squarefree monomials, $f$ has level one by Proposition \ref{allsqfree}. \end{example} \begin{proposition}\label{quadratic forms} Let $f \in R = k[x_1,\ldots,x_d]$ be a homogeneous quadric. Then $f$ has level one unless $f$ is the square of a linear form, in which case $f$ has level two. \end{proposition} \begin{proof} If $f$ is a power of a linear form, then $f$ has level two by Corollary \ref{linforms}. Otherwise, if $p \ne 2$ there exists a linear change of variables that diagonalizes $f$ (cf.\,\cite[Chapter IV, Proposition 5]{arithmeticcourse}).
That is, we can assume that, after a linear change of coordinates, $f=x_1^2 + \cdots + x_{n-1}^2 + a x_n^2$, where $2 \leqslant n \leqslant d$ and $a$ is either $1$ or an element of $k$ which is not a square. Notice that $x_1^{p-1}x_2^{p-1}$ appears in the expansion of $f^{p-1}$ with coefficient $\lambda:= {p-1 \choose \frac{p-1}{2}} \in k \smallsetminus \{0\}$ if $n \geqslant 3$, and with coefficient $a^{(p-1)/2} {p-1 \choose \frac{p-1}{2}} \in k \smallsetminus\{0\}$ if $n=2$. Therefore $f$ has level one by Lemma \ref{sqfpowers}. Finally, if $p=2$ and $f$ is not a power of a linear form, then we can assume that $x_1x_2$ appears with non-zero coefficient in $f^{p-1} = f$, and we conclude using again Lemma \ref{sqfpowers}. \end{proof} \begin{proposition}\label{diagonal hypersurfaces: no todas} Let $f=x_1^t+\cdots+x_d^t \in R$ be a diagonal hypersurface of degree $t\geqslant 1$. If $t\leqslant\min\{d,p\}$ and $p\equiv 1\pmod t$, then $f$ has level one. \end{proposition} \begin{proof} Our assumptions on $t,d$ and $p$ allow us to expand $f^{p-1}$ in the following manner: \[ \displaystyle f^{p-1}=\frac{(p-1)!}{\left(\left(\frac{p-1}{t}\right)!\right)^t}x_1^{p-1}\cdots x_t^{p-1}+\sum_{\substack{i_1+\cdots+i_d=p-1\\ (i_1,\ldots ,i_t)\neq ((p-1)/t,\ldots,(p-1)/t)}} \frac{(p-1)!}{i_1!\cdots i_d!}x_1^{t i_1}\cdots x_d^{t i_d}. \] Since $\frac{(p-1)!}{\left(\left(\frac{p-1}{t}\right)!\right)^t} \in k\smallsetminus\{0\}$, the above equality shows that $x_1^{p-1}\cdots x_t^{p-1}$ appears in $\operatorname{supp} (f^{p-1})$ with non-zero coefficient, hence $f$ has level one by Lemma \ref{sqfpowers}. \end{proof} The assumptions $p\equiv 1\pmod t$ and $t\leqslant\min\{d,p\}$ in Proposition \ref{diagonal hypersurfaces: no todas} cannot be removed in general, as the following examples illustrate. \begin{example} Let $R:=\mathbb{F}_5 [x,y,z]$ and $f:=x^3+y^3+z^3$. One can check using Macaulay2 that $f$ has level two. Notice that, in this case, $3=\deg (f)\leqslant\min\{3,5\}$ and $5\equiv 2\pmod 3$.
On the other hand, consider now $R:=\mathbb{F}_7 [x,y]$ and $f:=x^3+y^3$. One can check using Macaulay2 that $f$ has level two. In this case, $3=\deg (f)> 2=\min\{2,7\}$ and $p=7\equiv 1\pmod 3$. \end{example} The diagonal hypersurface considered in Proposition \ref{diagonal hypersurfaces: no todas} is of the form $x_1^t+\cdots+x_d^t$; using a suitable linear change of coordinates, we immediately get the following Corollary, which includes as a particular case Proposition \ref{diagonal hypersurfaces: no todas}. \begin{corollary}\label{linforms2} Let $n\leqslant d$, let $f=\ell_1^t+\cdots+\ell_n^t$ be a diagonal hypersurface of degree $t\geqslant 1$ made up of linear forms $\ell_1,\ldots ,\ell_n$ which are linearly independent over the field $k$. If $t\leqslant\min\{n,p\}$ and $p\equiv 1\pmod t$, then $f$ has level one. \end{corollary} Before going on, we want to review the following notion (see \cite[page 243]{TwentyFourHours}): \begin{definition} A polynomial $f\in R$ is said to be \emph{regular} provided \[ \operatorname{Tj}(f):=\left(f,\frac{\partial f}{\partial x_1},\ldots ,\frac{\partial f}{\partial x_d}\right)=R, \] where $\operatorname{Tj}(f)$ denotes the Tjurina ideal attached to $f$. \end{definition} In characteristic zero, a polynomial is regular if and only if its Bernstein-Sato polynomial is $b_f(s) = s+1$ \cite[Theorem 23.12]{TwentyFourHours}. In this case, $R_f$ is generated by $1/f$ as a $D$-module. \begin{proposition}\label{some regular polynomials of level one} Let $k$ be a perfect field of characteristic $2$, and let $f\in k[x_1,\ldots ,x_d]$ be regular. Then, $f$ has level one. \end{proposition} \begin{proof} Since $f$ is regular, there are $r_0,r_1,\ldots ,r_d\in R$ such that \[ 1=r_0 f+r_1 \frac{\partial f}{\partial x_1}+\cdots +r_d \frac{\partial f}{\partial x_d}. \] In this way, setting \[ \delta:= r_0+\sum_{j=1}^d r_j\frac{\partial}{\partial x_j}, \] it follows that $\delta (f)=1$ and therefore $f$ has level one.
\end{proof} \begin{remark}\label{more regular polynomials of level one} A very easy way to produce polynomials which are simultaneously regular and of level one in arbitrary prime characteristic works as follows. Let $k$ be a perfect field of prime characteristic $p$, let $R=k[x_1,\ldots ,x_d]$, and assume that $f\in R$ is a non-zero polynomial of the form $f=\lambda x_i +g$, for some $1\leqslant i\leqslant d$, some $\lambda\in k\smallsetminus\{0\}$, and some $g\in R$ such that \[ \frac{\partial g}{\partial x_i}=0\quad (\text{i.e., }g\in k[x_1,\ldots ,x_{i-1},\widehat{x_i},x_{i+1},x_{i+2},\ldots ,x_d]). \] Then, $f$ is regular, because $\frac{\partial f}{\partial x_i}=\lambda$ is a unit, and of level one; indeed, the fact that $f$ is of level one follows directly from Proposition \ref{sqfree}. \end{remark} \section{Elliptic Curves}\label{elliptic curves section} Let $p \in \mathbb{Z}$ be a prime and let $\mathcal{C} \subseteq \mathbb{P}^2_{\mathbb{F}_p}$ be an elliptic curve defined by a homogeneous cubic $f(x,y,z) \in \mathbb{F}_p[x,y,z]$. We want to review here the following notion (see \cite[13.3.1]{Husemoller}). \begin{definition} $\mathcal{C}$ is said to be ordinary if the monomial $(xyz)^{p-1}$ appears in the expansion of $f^{p-1}$ with non-zero coefficient. Otherwise, $\mathcal{C}$ is said to be supersingular. \end{definition} The general form of a cubic defining an elliptic curve is the following: \[ f= y^2z+a_1xyz +a_3yz^2-x^3-a_2x^2z-a_4xz^2-a_6z^3, \] where $a_1,\ldots,a_6 \in \mathbb{F}_p$. When $p\ne 2,3$, the expression above can be further simplified to \[ f= y^2z - x^3 +axz^2 + bz^3, \] for $a,b \in \mathbb{F}_p$ (see \cite[3.3.6]{Husemoller} for details). We are now interested in computing the level of an elliptic curve $\mathcal{C}$, mainly in the form of upper bounds, since it is easy to see from Lemma \ref{sqfpowers} that any ordinary elliptic curve has level one, and that any supersingular elliptic curve has level at least two.
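The ordinary/supersingular dichotomy can be tested by brute force: one expands $f^{p-1}$ over $\mathbb{F}_p$ and reads off the coefficient of $(xyz)^{p-1}$, exactly as in the definition above. The following Python sketch does this for the cubic $f=y^2z-x^3+axz^2+bz^3$; it is only an illustration, independent of the Macaulay2 code discussed later, and the function names and the dictionary encoding of polynomials are our own choices.

```python
def poly_mul(f, g, p):
    """Multiply polynomials stored as {exponent-tuple: coeff} dicts, mod p."""
    h = {}
    for ea, ca in f.items():
        for eb, cb in g.items():
            e = tuple(u + v for u, v in zip(ea, eb))
            h[e] = (h.get(e, 0) + ca * cb) % p
    return {e: c for e, c in h.items() if c}

def poly_pow(f, n, p):
    """Compute f^n mod p by repeated multiplication."""
    result = {tuple(0 for _ in next(iter(f))): 1}
    for _ in range(n):
        result = poly_mul(result, f, p)
    return result

def is_ordinary(a, b, p):
    """Test whether f = y^2*z - x^3 + a*x*z^2 + b*z^3 defines an ordinary
    curve over F_p, i.e., whether (xyz)^(p-1) lies in supp(f^(p-1))."""
    # exponent tuples record (deg_x, deg_y, deg_z); p - 1 encodes -1 mod p
    f = {(0, 2, 1): 1, (3, 0, 0): p - 1, (1, 0, 2): a % p, (0, 0, 3): b % p}
    f = {e: c for e, c in f.items() if c}  # drop zero coefficients
    fp = poly_pow(f, p - 1, p)
    return fp.get((p - 1, p - 1, p - 1), 0) != 0
```

For instance, over $\mathbb{F}_5$ the choice $a=4$, $b=0$ (a curve with $j$-invariant $1728$, and $5\equiv 1\pmod 4$) is detected as ordinary, while $a=1$, $b=0$ over $\mathbb{F}_7$ and $a=0$, $b=4$ over $\mathbb{F}_5$ are detected as supersingular.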
First, we explore the low characteristic cases, where the list of possibilities (up to isomorphism) is very limited. \begin{proposition} \label{char_2_3} Let $\mathcal{C} \subseteq \mathbb{P}^2_{\mathbb{F}_p}$ be a supersingular elliptic curve defined by a cubic $f \in \mathbb{F}_p[x,y,z]$. If $p = 2$ or $p = 3$, then $f$ has level two. \end{proposition} \begin{proof} Set $D:=D_{x,p^2-1}D_{y,p^2-1}D_{z,p^2-1}$. By \cite[13.3.2 and 13.3.3]{Husemoller} there are only the following two cases, up to isomorphism: \begin{center} \begin{tabular}{|c|c|c|} \hline $p$ & Elliptic curve & Differential operator \\ \hline $2$ & $x^3+y^2z+yz^2$ & $ \begin{array}{c} \\ y^2 D x^3z+ z^2Dx^3y+x^2Dxyz^2 \\ \\ \end{array}$\\ \hline $3$ & $x^3-xz^2-y^2z$ & $ \begin{array}{c} \\ (x^6z^3-x^3y^6)Dx^4z^5+\\ +(x^9+x^3z^6+y^6z^3)Dxy^8+y^3z^6Dx^4y^5 \\ \\ \end{array}$\\ \hline \end{tabular} \end{center} The table above exhibits a differential operator of level two for each polynomial; therefore, the level is at most two in both cases. We have already noticed that $\mathcal{C}$ is ordinary if and only if $f$ has level one. This shows that when $p=2$ or $p=3$, $\mathcal{C}$ is supersingular if and only if $f$ has level two. \end{proof} Recall that $R = \mathbb{F}_p[x,y,z]$ is a free $R^{p^2}$-module with basis given by $\{x^ry^sz^t \mid 0 \leqslant r,s,t \leqslant p^2-1\}$. For a polynomial $g \in \mathbb{F}_p[x,y,z]$, consider \[ \displaystyle g = \sum_{0 \leqslant r,s,t \leqslant p^2-1} (c(r,s,t))^{p^2} \ x^ry^sz^t, \] and recall that by Proposition \ref{propiedades del ideal raiz}, with this notation, $I_2(g)$ is the ideal generated by the elements $c(r,s,t)$, for $0 \leqslant r,s,t \leqslant p^2-1$. \begin{remark} If $f \in \mathbb{F}_p[x,y,z]$ is a cubic and $g = f^{p^2-1}$, then for any $0 \leqslant r,s,t \leqslant p^2-1$ one has \[ \deg(c(r,s,t)) \leqslant \left \lfloor \frac{\deg(f^{p^2-1})}{p^2}\right \rfloor = \left\lfloor \frac{3(p^2-1)}{p^2} \right\rfloor = 2.
\] In particular, $I_2(f^{p^2-1}) = \displaystyle \left(c(r,s,t) \mid 0 \leqslant r,s,t \leqslant p^2-1\right)$ is generated in degree at most two. \end{remark} For the rest of the section, we will denote $I_1(f^{p-1})$ and $I_2(f^{p^2-1})$ simply by $I_1$ and $I_2$. \subsection{Preliminary computations} The purpose of this part is to single out some technical facts which will be used for proving the main result of this section; namely, Theorem \ref{elliptic_curves}. \begin{lemma} \label{y} Let $f = y^2z-x^3+axz^2+bz^3 \in \mathbb{F}_p[x,y,z]$, where $p \ne 2,3$. Then \[ \displaystyle c(0,p^2-2,p^2-1) = y. \] \end{lemma} \begin{proof} A monomial in the expansion of $f^{p^2-1}$ will have the form $(y^2z)^h(-x^3)^i(axz^2)^j(bz^3)^k$, where $h+i+j+k=p^2-1$. Looking at the coefficient of $y^{p^2-2}z^{p^2-1}$ in such an expansion, by degree considerations we only have three possibilities: \[ \displaystyle y^{2h}x^{3i+j} z^{h+2j+3k} = \left\{ \begin{array}{ll} x^{p^2} \cdot y^{p^2-2}z^{p^2-1} \\ y^{p^2} \cdot y^{p^2-2}z^{p^2-1} \\ z^{p^2} \cdot y^{p^2-2}z^{p^2-1} \end{array} \right. \] Since $p^2-2$ is odd, there is no choice of $h$ that realizes the first or the third case. So we are left with the second, which is achieved only by the choice $h=p^2-1$, $i=j=k=0$. This shows that the coefficient of $y^{p^2-2}z^{p^2-1}$ in the expansion of $f^{p^2-1}$ is precisely $y^{p^2}$, and the Lemma follows. \end{proof} Before going on, we need to review the following classical result, due to Legendre, because it will play some role later in this section (see the proof of Lemma \ref{cusp}). We refer to \cite[page 8]{AignerZiegler2004} for a proof. \begin{theorem}[Legendre] Let $n$ be a non-negative integer, let $p$ be a prime number, and let $\sigma_p (n)$ be the sum of the base $p$ digits of $n$. Then, \[ v_p \left(n!\right)=\frac{n-\sigma_p (n)}{p-1}, \] where, for any non-negative integer $m$, \[ v_p\left(m\right):=\max\{t\geqslant 0:\ p^t\mid m\}.
\] \end{theorem} \begin{lemma}\label{issue of divisibility} Let $p\neq 2$ be a prime. Then, \[ \lambda :=\frac{(p^2-1)!}{\left(\left(\frac{p^2-1}{2}\right)!\right)^2}\not\equiv 0\pmod p. \] \end{lemma} \begin{proof} On one hand, $p^2-1=(p-1)(1+p)$ is the base $p$ expansion of $p^2-1$; on the other hand, since $p\neq 2$ it follows that \[ \frac{p^2-1}{2}=\left(\frac{p-1}{2}\right)(1+p) \] is the base $p$ expansion of $(p^2-1)/2$. Keeping in mind these two facts, it follows from Legendre's Theorem that \begin{align*} v_p \left(\lambda\right)& =v_p \left((p^2-1)!\right)-2v_p\left(\left(\frac{p^2-1}{2}\right)!\right)\\ & =\frac{p^2-1-2(p-1)}{p-1}-2\cdot\left(\frac{\frac{p^2-1}{2}-2\left(\frac{p-1}{2}\right)}{p-1}\right)=p-1-(p-1)=0, \end{align*} hence $p$ does not divide $\lambda$, and therefore $\lambda\not\equiv 0\pmod p$. \end{proof} \begin{lemma}\label{cusp} Let $p\neq 2,3$ be a prime, and let $f=y^2z-x^3\in\mathbb{F}_p [x,y,z]$. Then, $I_1 = I_2 = (x,y)$. In particular, $f$ has level two. \end{lemma} \begin{proof} Consider the expansion \[ \displaystyle f^{p^2-1} = \sum_{i=0}^{p^2-1} \frac{(p^2-1)!}{i!(p^2-1-i)!} y^{2i}z^ix^{3(p^2-1-i)}. \] For $i=(p^2-1)/2$ we obtain the monomial \[ \displaystyle \lambda y^{p^2-1}z^{(p^2-1)/2}x^{3(p^2-1)/2} = \lambda x^{p^2} \cdot \left(x^{(p^2-3)/2}y^{p^2-1}z^{(p^2-1)/2}\right), \] where $\lambda = \frac{(p^2-1)!}{\left(\left(\frac{p^2-1}{2}\right)!\right)^2} \ne 0$ (indeed, this follows from Lemma \ref{issue of divisibility}). Because of the term $z^i$ in the expansion above, the choice $i=(p^2-1)/2$ is clearly the only one that gives the monomial $x^{(p^2-3)/2}y^{p^2-1}z^{(p^2-1)/2}$ of the basis of $R$ as an $R^{p^2}$-module. Therefore $c\left(\frac{p^2-3}{2},p^2-1,\frac{p^2-1}{2}\right) = \lambda^{1/p^2}x = \lambda x$, and thus $x \in I_2$. In addition, by Lemma \ref{y} we always have $y \in I_2$. Therefore $(x,y) \subseteq I_2$.
On the other hand, consider the expansion \[ \displaystyle f^{p-1} = \sum_{j=0}^{p-1} \frac{(p-1)!}{j!(p-1-j)!} y^{2j}z^jx^{3(p-1-j)}. \] We claim that either $2j \geqslant p$ or $3(p-1-j) \geqslant p$. In fact, suppose $j < p/2$, or equivalently $j \leqslant (p-1)/2$, because $j$ is an integer. Then $3(p-1-j) \geqslant 3(p-1)/2 \geqslant p$ since $p \geqslant 5$ by assumption. This shows that all the coefficients $c(r,s,t)^p$ in the expansion of $f^{p-1} = \sum_{0 \leqslant r,s,t \leqslant p-1} c(r,s,t)^p x^ry^sz^t$ are contained in $(x,y)^{[p]}$, and thus $I_1 = (c(r,s,t) \ \mid \ 0 \leqslant r,s,t \leqslant p-1) \subseteq (x,y)$. Therefore the Lemma follows from the chain of inclusions $(x,y) \subseteq I_2 \subseteq I_1 \subseteq (x,y)$. \end{proof} \begin{lemma} \label{a_b_zero} Let $p \ne 2,3$ be a prime and let $\mathcal{C}$ be a supersingular elliptic curve defined by $f(x,y,z) = y^2z-x^3+axz^2+bz^3$. If either $a=0$ or $b=0$, then $f$ has level two. \end{lemma} \begin{proof} Notice that, since $\mathcal{C}$ is supersingular, we have $\mathbb{F}_p[x,y,z] = R \supsetneq I_1 \supseteq I_2$, and we want to show that $I_1 = I_2$. Also, by Lemma \ref{y} we always have $y\in I_2$. If $a=b=0$, then Lemma \ref{cusp} ensures that $I_1=I_2 = (x,y)$. Now assume $a\ne 0$ and $b=0$. We claim that $c\left(\frac{p^2-1}{2},p^2-1,\frac{p^2-3}{2}\right) = \mu z$, for some $\mu \ne 0$. A monomial in the expansion of $f^{p^2-1}$ will have the form $(y^2z)^h(-x^3)^i(axz^2)^j$ for $h+i+j=p^2-1$. Looking at the terms involving $x^{\frac{p^2-1}{2}}y^{p^2-1}z^{\frac{p^2-3}{2}}$, the possibilities are \[ \displaystyle y^{2h}x^{3i+j} z^{2j+h} = \left\{ \begin{array}{ll} x^{p^2} \cdot x^{\frac{p^2-1}{2}}y^{p^2-1}z^{\frac{p^2-3}{2}} \\ y^{p^2} \cdot x^{\frac{p^2-1}{2}}y^{p^2-1}z^{\frac{p^2-3}{2}} \\ z^{p^2} \cdot x^{\frac{p^2-1}{2}}y^{p^2-1}z^{\frac{p^2-3}{2}} \end{array} \right. \] The second case is not possible: since $2p^2-1$ is odd, there is no $h$ that gives it.
This forces $h=(p^2-1)/2$, and hence the first case is also not possible, since for such an $h$ the exponent of $z$ must be at least $(p^2-1)/2$. Canceling $y^{p^2-1}z^{\frac{p^2-1}{2}}$, we are left with \[ \displaystyle x^{3i+j} z^{2j} = x^{\frac{p^2-1}{2}}z^{p^2-1}. \] This implies that $j=\frac{p^2-1}{2}$ and $i=0$. Therefore the coefficient of $x^{\frac{p^2-1}{2}}y^{p^2-1}z^{\frac{p^2-3}{2}}$ in the expansion of $f^{p^2-1}$ is \[ \displaystyle c\left(\frac{p^2-1}{2},p^2-1,\frac{p^2-3}{2}\right)^{p^2} =a^{\frac{p^2-1}{2}} z^{p^2} = (\mu z)^{p^2}, \] where $\mu = \left(a^{\frac{p^2-1}{2}}\right)^{1/p^2} \ne 0$. This shows the claim. With similar considerations, one can see that \[ \displaystyle c\left(\frac{p^2-3}{2},p^2-1,\frac{p^2-1}{2}\right) = x. \] This gives $(x,y,z) \subseteq I_2 \subseteq I_1 \subsetneq R$, which forces $I_1 = I_2$ because $I_1$ is a proper homogeneous ideal of $R=\mathbb{F}_p [x,y,z]$, hence $I_1\subseteq\left(x,y,z\right)$. Finally, if $a=0$ and $b \ne 0$, the arguments are completely analogous to the case $a\ne 0$, $b=0$. Here we get \[ \displaystyle c\left(\frac{p^2-3}{2},p^2-1,\frac{p^2-1}{2}\right) = x \ \ \ \mbox{ and } \ \ \ c\left(0,p^2-1,p^2-2\right) = \lambda z \] for some $\lambda \ne 0$. Therefore $(x,y,z) \subseteq I_2 \subseteq I_1 \subsetneq R$, which once again forces $I_1 = I_2$. \end{proof} \subsection{Main result} The next statement is the main result of this section. \begin{theorem} \label{elliptic_curves} Let $p \in \mathbb{Z}$ be a prime number and let $\mathcal{C} \subseteq \mathbb{P}^2_{\mathbb{F}_p}$ be an elliptic curve defined by a cubic $f(x,y,z) \in \mathbb{F}_p[x,y,z]$. Then \begin{enumerate}[(i)] \setlength{\itemsep}{2pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item \label{1} $\mathcal{C}$ is ordinary if and only if $f$ has level one. \item \label{2} $\mathcal{C}$ is supersingular if and only if $f$ has level two.
\end{enumerate} \end{theorem} \begin{proof} Part (\ref{1}) follows from Lemma \ref{sqfpowers}, which also shows that if $f$ has level at least two, then $\mathcal{C}$ is supersingular. So it remains to show that if $\mathcal{C}$ is supersingular, then $f$ has level precisely equal to two. By Proposition \ref{char_2_3} we only have to consider the cases where $p\ne 2,3$, thus we can assume that $f$ is of the form \[ \displaystyle f(x,y,z) = y^2z - x^3 + axz^2 + bz^3, \] for $a,b \in \mathbb{F}_p$. Furthermore, by Lemma \ref{a_b_zero} we can assume that $ab \ne 0$. First, we claim that $c\left(\frac{p^2-3}{2},p^2-1,\frac{p^2-1}{2}\right) = x+\lambda z$, where \[ \displaystyle \lambda = \sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-3}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-3}{2} - 3i} b^{2i+1} \in \mathbb{F}_p. \] In fact, a general monomial in the expansion of $f^{p^2-1}$ will have the form $(y^2z)^h(-x^3)^i(axz^2)^j(bz^3)^k$, and by looking at terms that involve $x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}}$, by degree considerations we have three possibilities: \[ \displaystyle y^{2h}x^{3i+j} z^{h+2j+3k} = \left\{ \begin{array}{ll} x^{p^2} \cdot x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}} \\ y^{p^2} \cdot x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}} \\ z^{p^2} \cdot x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}} \end{array} \right. \] The second case cannot be realized since $2p^2-1$ is odd, hence we necessarily have $h=\frac{p^2-1}{2}$. This leaves two cases: \[ \displaystyle x^{3i+j} z^{2j+3k} = \left\{ \begin{array}{ll} x^{p^2} \cdot x^{\frac{p^2-3}{2}} \\ z^{p^2} \cdot x^{\frac{p^2-3}{2}} \end{array} \right.
\] The first one can only happen when $h=i=\frac{p^2-1}{2}$ and $j=k=0$, giving the monomial \[ \displaystyle (y^2z)^{\frac{p^2-1}{2}}(-x^3)^{\frac{p^2-1}{2}} = (-1)^{\frac{p^2-1}{2}}x^{p^2} \cdot x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}} = x^{p^2} \cdot x^{\frac{p^2-3}{2}}y^{p^2-1}z^{\frac{p^2-1}{2}}, \] because $p^2 \equiv 1$ (mod $4$). For the second case, $i,j$ and $k$ must satisfy \begin{eqnarray*} \left\{ \begin{array}{l} j= \frac{p^2-3}{2}-3i \\ \\ k=2i+1 \end{array} \right. \end{eqnarray*} As both $j$ and $k$ must be non-negative, we have \[ \displaystyle 0 \leqslant i \leqslant \left\lfloor \frac{p^2-3}{6} \right\rfloor = \frac{p^2-7}{6}, \] because $p^2 \equiv 1$ (mod $6$). Furthermore, the coefficient of $(y^2z)^h(-x^3)^i(axz^2)^j(bz^3)^k$ under these conditions is precisely \[ \displaystyle \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-3}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-3}{2} - 3i} b^{2i+1}, \] proving that \[ \displaystyle c\left(\frac{p^2-3}{2},p^2-1,\frac{p^2-1}{2}\right)^{p^2} = x^{p^2} + \lambda z^{p^2} = (x+\lambda^{1/p^2}z)^{p^2}, \] for $\lambda$ as above. Since we are working over $\mathbb{F}_p$, we finally have $\lambda^{1/p^2} = \lambda$, showing the claim. With similar arguments, one can see that \begin{eqnarray*} \begin{array}{l} c\left(\frac{p^2-1}{2},p^2-3,\frac{p^2+1}{2}\right) = ax+\mu z, \\ \\ c\left(\frac{p^2-7}{2},p^2-1,\frac{p^2+3}{2}\right) = -ax+\tau z, \\ \\ c\left(0,p^2-1,p^2-2\right) = \theta x+b^{\frac{p^2-1}{2}} z, \end{array} \end{eqnarray*} where $\theta \in \mathbb{F}_p$, \[ \displaystyle \mu = \sum_{i=0}^{\frac{p^2-1}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!i!\left(\frac{p^2-1}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1} \in \mathbb{F}_p, \] and \[ \displaystyle \tau = \sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-7}{2}-3i\right)!(2i+3)!} (-1)^i a^{\frac{p^2-7}{2} - 3i} b^{2i+3} \in \mathbb{F}_p.
\] Since $I_2 = (c(r,s,t) \mid 0 \leqslant r,s,t \leqslant p^2-1)$, in particular we have \[ (x+\lambda z,ax+\mu z, -ax + \tau z, \theta x+b^{\frac{p^2-1}{2}} z) \subseteq I_2. \] \begin{claim*} The following matrix has full rank: \begin{eqnarray*} \left[\begin{matrix} a & \mu \\ 1 & \lambda \\ -a & \tau \\ \theta & b^{\frac{p^2-1}{2}} \end{matrix} \right]. \end{eqnarray*} \end{claim*} In fact if $\det \left[\begin{matrix} a & \mu \\ 1 & \lambda \end{matrix}\right] \ne 0$ then we are done, otherwise we have $\mu = a\lambda$. Note that \begin{eqnarray*} \begin{array}{c} \displaystyle \mu = \sum_{i=0}^{\frac{p^2-1}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!i!\left(\frac{p^2-1}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1} \\ \\ = \displaystyle \left[\sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!i!\left(\frac{p^2-1}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1}\right] + (-1)^{\frac{p^2-1}{6}}\frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!\left(\frac{p^2-1}{6}\right)!\left(\frac{p^2+2}{3}\right)!} b^{\frac{p^2+2}{3}} \\ \\ = \displaystyle \left[\sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!i!\left(\frac{p^2-1}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1}\right] + \frac{3(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!\left(\frac{p^2-7}{6}\right)!\left(\frac{p^2+2}{3}\right)!} b^{\frac{p^2+2}{3}}, \end{array} \end{eqnarray*} where the last equality comes from the fact that $p^2 \equiv 1$ (mod $12$) and by rearranging the binomial coefficients. Also, \[ \displaystyle a\lambda = \sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-3}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1}. 
\] Using that $\mu = a \lambda$, we get \begin{eqnarray*} \begin{array}{c} \displaystyle \frac{3(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!\left(\frac{p^2-7}{6}\right)!\left(\frac{p^2+2}{3}\right)!} b^{\frac{p^2+2}{3}}\\ \\ = \displaystyle \sum_{i=1}^{\frac{p^2-7}{6}} \left[\frac{1}{\frac{p^2-1}{2}} - \frac{1}{\frac{p^2-1}{2}-3i}\right] \frac{(p^2-1)!}{\left(\frac{p^2-3}{2}\right)!i!\left(\frac{p^2-3}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1} \\ \\ \displaystyle = \sum_{i=1}^{\frac{p^2-7}{6}} \frac{-3(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!(i-1)!\left(\frac{p^2-1}{2}-3i\right)!(2i+1)!} (-1)^i a^{\frac{p^2-1}{2} - 3i} b^{2i+1} \\ \\ = \displaystyle 3\sum_{i=0}^{\frac{p^2-13}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-7}{2}-3i\right)!(2i+3)!} (-1)^i a^{\frac{p^2-7}{2} - 3i} b^{2i+3}, \end{array} \end{eqnarray*} where the last equality comes from reindexing the sum. Since $3$ is invertible in $\mathbb{F}_p$ we get that \begin{eqnarray*} \begin{array}{c} \displaystyle 0 = \left[\sum_{i=0}^{\frac{p^2-13}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-7}{2}-3i\right)!(2i+3)!} (-1)^i a^{\frac{p^2-7}{2} - 3i} b^{2i+3}\right] - \left[\frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!\left(\frac{p^2-7}{6}\right)!\left(\frac{p^2+2}{3}\right)!} b^{\frac{p^2+2}{3}}\right] \\ \\ = \displaystyle \sum_{i=0}^{\frac{p^2-7}{6}} \frac{(p^2-1)!}{\left(\frac{p^2-1}{2}\right)!i!\left(\frac{p^2-7}{2}-3i\right)!(2i+3)!} (-1)^i a^{\frac{p^2-7}{2} - 3i} b^{2i+3} \end{array} \end{eqnarray*} because $p^2 \equiv 1$ (mod $12$), hence $(-1)^{\frac{p^2-7}{6}} = -1$. But the latter is precisely $\tau$, and this argument shows that if $a \lambda = \mu$, then $\tau = 0$. Therefore either $a \lambda \ne \mu$ or \[ \det \left[\begin{matrix} -a & \tau \\ \theta & b^{\frac{p^2-1}{2}} \end{matrix} \right] = \det \left[\begin{matrix} -a & 0 \\ \theta & b^{\frac{p^2-1}{2}} \end{matrix} \right] = -ab^{\frac{p^2-1}{2}} \ne 0. 
\] Hence the matrix has rank two, and the Claim follows. \\ \noindent But this shows that there are linear combinations of $x+\lambda z,ax+\mu z, -ax + \tau z$ and $\theta x+b^{\frac{p^2-1}{2}} z$ that produce $x$ and $z$, that is, $(x,z) \subseteq I_2$. By Lemma \ref{y} we always have that $y \in I_2$. Therefore \[ (x,y,z) \subseteq I_2 \subseteq I_1 \subsetneq R, \] implying that $I_1 = I_2$ and hence that $f$ has level two. \end{proof} \section{A Macaulay2 session} The purpose of this section is to explain, through a Macaulay2 session, how the algorithm introduced in Section \ref{the algorithm described in detail} works in specific examples. We begin by clearing the previous input and loading our scripts.
\begin{verbatim}
clearAll;
load "differentialOperator.m2";
\end{verbatim}
We fix the polynomial ring that we will use throughout the following examples.
\begin{verbatim}
p=2;
F=ZZ/p;
R=F[x,y,z,w];
\end{verbatim}
The first example illustrates a particular case of Theorem \ref{monomial}.
\begin{verbatim}
i6 : f=x^3*y^5*z^7*w^4;

i7 : L=differentialOperatorLevel(f);

i8 : L

               2 4 6 3
o8 = (4, ideal(x y z w ))
\end{verbatim}
This means that, in this case, $f$ has level $4$ and that $I_4 (f^{2^4-1})=\left(x^2 y^4 z^6 w^3\right)$. Now, we produce a differential operator $\delta$ of level $4$ such that $\delta (1/f)=1/f^2$.
\begin{verbatim}
i7 : DifferentialOperator(f)

o7 = | x10y6z2w8 d_0^15d_1^15d_2^15d_3^15x2y4z6w3 |
\end{verbatim}
As the reader will note, the output is a row matrix; it means that $\delta$ turns out to be \[ x^{10}y^6z^2w^8\cdot D_{x,2^4-1}D_{y,2^4-1}D_{z,2^4-1}D_{w,2^4-1}\cdot x^2y^4z^6w^3. \] Our next aim is to illustrate a particular case of Corollary \ref{linforms}.
\begin{verbatim}
ii8 : f=x^3*(x+y)^5*(x+y+z)^7*(x+y+z+w)^4;

ii9 : L=differentialOperatorLevel(f);

ii10 : first L

oo10 = 4
\end{verbatim}
Now, a particular case of Proposition \ref{sqfree}.
\begin{verbatim}
ii13 : f=x^2+y^2+z^3+x*y*z*w;

ii14 : L=differentialOperatorLevel(f);

ii15 : L

oo15 = (1, ideal 1)
\end{verbatim}
This means that $f$ has level one. Now, we produce the corresponding differential operator.
\begin{verbatim}
i7 : DifferentialOperator(f)

o7 = | 1 d_0d_1d_2d_3 |
\end{verbatim}
It means that the differential operator produced in this case is $D_{x,1}D_{y,1}D_{z,1}D_{w,1}$. The following computation may be regarded as a particular case of Proposition \ref{allsqfree}.
\begin{verbatim}
i6 : f=x*w-y*z;

i7 : DifferentialOperator(f)

o7 = | 1 d_0d_1d_2d_3yz |
\end{verbatim}
It means that, in this case, the differential operator produced is $D_{x,1}D_{y,1}D_{z,1}D_{w,1}\cdot yz$. Next, a homogeneous quadric (cf.\,Proposition \ref{quadratic forms}).
\begin{verbatim}
i6 : f=x^2+y^2+x*y+z^2+w^2;

i7 : DifferentialOperator(f)

o7 = | 1 d_0d_1d_2d_3zw |
\end{verbatim}
We finish with a homogeneous cubic.
\begin{verbatim}
ii12 : f=x^3+y^3+z^3+w^3;

ii13 : L=differentialOperatorLevel(f);

ii14 : L

oo14 = (2, ideal (w, z, y, x))

oo14 : Sequence

ii15 : DifferentialOperator(f)

oo15 = | w2 d_0^3d_1^3d_2^3d_3^3x3z3w |
       | z2 d_0^3d_1^3d_2^3d_3^3x3zw3 |
       | y2 d_0^3d_1^3d_2^3d_3^3yz3w3 |
       | x2 d_0^3d_1^3d_2^3d_3^3xy3z3 |
\end{verbatim}
This means that our differential operator in this case turns out to be \begin{align*} & w^2\cdot D_{x,3}D_{y,3}D_{z,3}D_{w,3}\cdot x^3z^3 w\quad +z^2\cdot D_{x,3}D_{y,3}D_{z,3}D_{w,3}\cdot x^3zw^3\\ & +y^2\cdot D_{x,3}D_{y,3}D_{z,3}D_{w,3}\cdot yz^3w^3\quad +x^2\cdot D_{x,3}D_{y,3}D_{z,3}D_{w,3}\cdot xy^3z^3. \end{align*} \section{The code of the algorithm} The aim of this final section is to show our implementation in Macaulay2 of the algorithm described in Section \ref{the algorithm described in detail} of this manuscript; the whole code can be found in \cite{BoixDeStefaniVanzoM2}. Throughout this section, $R=\mathbb{F}_p [x_1,\ldots ,x_d]$ will be the polynomial ring in $d$ variables with coefficients in the field $\mathbb{F}_p$.
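Before diving into the code, let us make explicit the elementary operation underlying everything below: extracting, from the expansion $f=\sum_{\alpha} c_\alpha^{p^e}\,\mathbf{x}^{\alpha}$ of a polynomial with respect to the basis $\{\mathbf{x}^{\alpha} \mid 0\leqslant \alpha_i\leqslant p^e-1\}$ of $R$ over $R^{p^e}$, the generators $c_\alpha$ of $I_e(f)$. Since the coefficient field is $\mathbb{F}_p$, where the $p^e$-th root of a scalar is the scalar itself, the $c_\alpha$ can be read off directly from the exponents of $f$. The following Python sketch makes this bookkeeping explicit; it is only an illustration (polynomials are encoded, by our own choice, as dictionaries from exponent tuples to coefficients) and is not part of the Macaulay2 package.

```python
def Ie_generators(f, p, e):
    """Read off the generators c_alpha of I_e(f) from the expansion
    f = sum_alpha c_alpha^(p^e) x^alpha with respect to the basis
    {x^alpha : 0 <= alpha_i <= p^e - 1} of R = F_p[x_1,...,x_d] over R^(p^e).

    f is a dict {exponent-tuple: coefficient mod p}.  Over F_p the p^e-th
    root of a scalar is the scalar itself, since lambda^p = lambda.
    """
    q = p ** e
    gens = {}
    for exp, c in f.items():
        alpha = tuple(t % q for t in exp)   # which basis monomial x^alpha
        beta = tuple(t // q for t in exp)   # monomial x^beta occurring in c_alpha
        gens.setdefault(alpha, {})[beta] = c % p
    return list(gens.values())

# Example: f = x^3 + x*y^2 in F_2[x,y] and e = 1 (so q = 2):
#   x^3   = x^2 * x  contributes x to c_(1,0),
#   x*y^2 = y^2 * x  contributes y to c_(1,0),
# hence I_1(f) is generated by the single element x + y.
```

The second Macaulay2 routine below (based on Katzman's code) performs this kind of extraction in full generality for ideals, working in an auxiliary ring and collecting coefficients with respect to the original variables.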
First of all, we write down the code of a procedure which, given an ideal $I$ of $R$, returns as output $I^{[p^e]}$, i.e., the ideal generated by all the $p^e$th powers of elements in $I$. The method below is based on code written by M.\,Katzman and included, among other places, in \cite{FsplittingM2}.
\begin{verbatim}
frobeniusPower(Ideal,ZZ) := (I,e) ->(
     R:=ring I;
     p:=char R;
     local u;
     local answer;
     G:=first entries gens I;
     if (#G==0) then
     {
          answer=ideal(0_R);
     }
     else
     {
          N:=p^e;
          answer=ideal(apply(G, u->u^N));
     };
     answer
);
\end{verbatim}
Now, we exhibit the code of a function which, given ideals $A,B$ of $R$, produces as output the ideal $I_e (A)+B$. For our purposes in this manuscript, $B=(0)$ and $A$ is a principal ideal. Once again, this is based on code written by Katzman and included in \cite{FsplittingM2}.
\begin{verbatim}
ethRoot(Ideal,Ideal,ZZ):= (A,B,e) ->(
     R:=ring(A);
     pp:=char(R);
     F:=coefficientRing(R);
     n:=rank source vars(R);
     vv:=first entries vars(R);
     R1:=F[vv, Y_1..Y_n, MonomialOrder=>ProductOrder{n,n},
          MonomialSize=>16];
     q:=pp^e;
     J0:=apply(1..n, i->Y_i-substitute(vv#(i-1)^q,R1));
     S:=toList apply(1..n, i->Y_i=>substitute(vv#(i-1),R1));
     G:=first entries compress( (gens substitute(A,R1))%ideal(J0) );
     L:=ideal 0_R1;
     apply(G, t->
     {
          L=L+ideal((coefficients(t,Variables=>vv))#1);
     });
     L1:=L+substitute(B,R1);
     L2:=mingens L1;
     L3:=first entries L2;
     L4:=apply(L3, t->substitute(t,S));
     use(R);
     substitute(ideal L4,R)
);
\end{verbatim}
Next, we provide the code of our implementation of Algorithm \ref{calculo del nivel}. Namely, given $f\in R$, the procedure below gives as output the pair $\left(e, I_e\left(f^{p^e-1}\right)\right)$, where $e$ is the level of $f$, and $I_e\left(f^{p^e-1}\right)$ is the ideal where the chain \eqref{cadena incordio} stabilizes. As the reader can easily check, this method is just turning Theorem \ref{first step of the algorithm} into an algorithm.
\begin{verbatim}
differentialOperatorLevel(RingElement):=(f) ->(
     R:=ring(f);
     p:=char(R);
     local J;
     local I;
     local e;
     e=0;
     flag:=true;
     local q;
     local N;
     while (flag) do
     {
          e=e+1;
          q=p^e;
          N=q-1;
          I=ethRoot(ideal(f^N),ideal(0_R),e);
          J=frobeniusPower(I,e);
          N=q-p;
          if ((f^N)%J==0) then flag=false;
     };
     (e,I)
);
\end{verbatim}
Now, let $x=x_i$ (for some $1\leqslant i\leqslant d$), $n\geqslant 0$, and $f\in R$. The procedure below returns as output \[ \frac{1}{n!}\frac{\partial^n f}{\partial x^n}. \] It is worth noting that, in some intermediate step of this method, we have to lift our data to characteristic zero in order to avoid problems with the calculation of $1/n!$.
\begin{verbatim}
DiffOperator(RingElement,ZZ,RingElement):=(el,numb,funct)->
(
     R:=ring(el);
     vv:=first entries vars(R);
     S:=QQ[vv];
     funct1:=substitute(funct,S);
     el=substitute(el,S);
     for i to numb-1 do funct1=diff(el,funct1);
     funct1=1/(numb!)*funct1;
     funct=substitute(funct1,R);
     el=substitute(el,R);
     use R;
     return funct;
);
\end{verbatim}
Next, we provide a method which, given a monomial $\mathbf{x}^{\alpha}=x_1^{a_1}\cdots x_d^{a_d}$ with $0\leqslant a_i\leqslant p-1$ for all $i$, returns as output the differential operator $\delta_{\alpha}\in\mathcal{D}_R^{(e)}$ where, for any other monomial $\mathbf{x}^{\beta}=x_1^{b_1}\cdots x_d^{b_d}$ with $0\leqslant b_i\leqslant p-1$ for all $i$, $\delta_{\alpha}$ acts in the following way: \[ \delta_{\alpha}\left(\mathbf{x}^{\beta}\right)=\begin{cases} 1,\text{ if }\alpha=\beta,\\ 0,\text{ otherwise.}\end{cases} \] As the reader can easily check, the method below is just turning Claim \ref{Dirac delta claim} into an algorithm.
\begin{verbatim}
DeltaOperator(RingElement,ZZ):=(el,pe)->
(
    R:=ring(el);
    indet:=vars(R);
    for i to numColumns(indet)-1 do
    (
        deg:=degree(indet_(0,i),el);
        listdiff_i=pe-1-deg;
    );
    return listdiff;
);
\end{verbatim}
The next two methods are quite technical; however, both are necessary in order to avoid problems during the execution of our main procedure, which we are almost ready to describe.
\begin{verbatim}
checkcondition=method();
--- checkcondition finds a random monomial in startingpol//pol with all the
--- variables in degree <p^e
checkcondition(ZZ,RingElement,Ideal,ZZ):=(pe,startingpol,J,indexvar) ->
(
    pol:=(first entries gens J)_indexvar;
    genJ=first entries gens J;
    R:=ring(pol);
    var:=vars R;
    nvars:=numColumns var;
    for i to numgens J-1 do
        sett_i=set first entries monomials(startingpol//genJ_i);
    supportpol= first entries monomials (startingpol//pol);
    contat=0;
    for i to #supportpol-1 do
    (
        flag=true;
        for k to nvars-1 do
            if degree(var_(0,k),supportpol_i)>pe-1 then flag=false;
        if flag then
        (
            rightsupport_contat=supportpol_i;
            contat=contat+1;
        );
    );
    correctmon=rightsupport_(random contat);
    return correctmon;
);
-----------------------------------------
DifferentialAction=method();
--- DifferentialAction computes differential(element)
DifferentialAction(RingElement,RingElement):=(differential,element)->
(
    R:=ring(element);
    T:=ring(differential);
    var:=vars T;
    nvars:=numColumns var;
    total=0;
    diffmonomials=first entries monomials (differential);
    for i to #diffmonomials-1 do
    (
        moltiplication=substitute(element,T);
        for j to floor(nvars/2)-1 do
            moltiplication=moltiplication*
                var_(0,floor(nvars/2)+j)^(degree(var_(0,floor(nvars/2)+j),diffmonomials_i));
        differentiation=coefficient(diffmonomials_i,differential)*moltiplication;
        for j to floor(nvars/2)-1 do
            differentiation=DiffOperator(var_(0,nvars-1-j),
                degree(var_(0,floor(nvars/2)-1-j),diffmonomials_i),differentiation);
        total=total+differentiation;
    );
    return total;
);
\end{verbatim}
We conclude by showing our implementation of the
algorithm described in Section \ref{the algorithm described in detail}, which is the main result of this paper.
\begin{verbatim}
DifferentialOperator(RingElement):=(f)->
(
    R:=ring(f);
    p:=char(R);
    F:=coefficientRing(R);
    local e;
    local J;
    (e,J)=differentialOperatorLevel(f);
    J=frobeniusPower(J,e);
    variable:=vars(R);
    variable1:=first entries variable;
    genJ:=first entries gens J;
    nvars:=numColumns vars R;
    powerf:=f^(p^e-1);
    T:=F[d_0..d_(nvars-1),variable1];   --creating ring of differentials
    varDelta:=first entries vars T;
    powerfT=substitute(powerf,T);
    use R;
    condition:=true;
    while condition do
    (
        for i to numgens J-1 do
        (
            listsupport=checkcondition(p^e,powerf,J,i);
            listdiff=DeltaOperator(listsupport,p^e);
            delta1=1;
            delta2=1;
            for j to nvars-1 do
            (
                use T;
                delta1=delta1*varDelta_j^(p^e-1);
                expon2=listdiff_j;
                delta2=delta2*varDelta_(nvars+j)^expon2;
            );
            delta_i=delta1*delta2;
            newgen_i=DifferentialAction(delta_i,powerfT);
        );
        newgenmatr=matrix(newgen_0);
        for i from 1 to numgens J-1 do newgenmatr=newgenmatr|newgen_i;
        newgenmatrR=substitute(newgenmatr,R);
        if J==ideal(newgenmatrR) then
        (
            matalpha=f^(p^e-p)//newgenmatrR;
            matalpha=substitute(matalpha,T);
            for i to numgens J-1 do
                if i==0 then matrixdiff=matrix{{matalpha_(i,0),delta_i}}
                else matrixdiff=matrixdiff||matrix{{matalpha_(i,0),delta_i}};
            totale=0;
            for i to numgens J-1 do
                totale=totale+matalpha_(i,0)*DifferentialAction(delta_i,powerf);
            powerfp=substitute(f^(p^e-p),T);
            return matrixdiff;
            condition=false;
        );
    );
    use R;
);
\end{verbatim}
\section*{Acknowledgements}
This project started during the summer school PRAGMATIC 2014. The authors would like to thank Aldo Conca, Srikanth Iyengar, and Anurag Singh, for giving very interesting lectures and for sharing open problems. They also wish to thank Alfio Ragusa and all the organizers of PRAGMATIC 2014 for giving them the opportunity to attend the school.
The authors also thank Josep \`Alvarez Montaner and, once again, Anurag Singh and Srikanth Iyengar, for many helpful suggestions concerning the material of this manuscript. \bibliographystyle{alpha}
\section{Introduction} These notes discuss recent topics in orbit equivalence theory in an operator algebra framework. Firstly, we provide an operator algebraic interpretation of discrete measurable groupoids in the course of giving a simple observation, which re-proves (and slightly generalizes) a result on treeability due to Adams and Spatzier \cite[Theorem 1.8]{Adams-Spatzier:AmerJMath90} by using operator algebra techniques. Secondly, we reconstruct Gaboriau's work \cite{Gaboriau:InventMath00} on costs of equivalence relations in operator algebra framework, avoiding any measure theoretic argument. This is done in the same spirit as \cite{Popa:MathScand85}, with the aim of making Gaboriau's beautiful work much more accessible to operator algebraists (like us) who are not very familiar with ergodic theory. As simple byproducts, we clarify what kind of results in \cite{Gaboriau:InventMath00} can or cannot be generalized to the non-principal groupoid case, and observe that the cost of a countable discrete group regarded as a groupoid (i.e., a quantity different from Gaboriau's original one \cite[p.43]{Gaboriau:InventMath00}) is precisely the smallest number of its generators, in sharp contrast with the corresponding $\ell^2$-Betti numbers; see Remark \ref{Rem3-4} (2). The methods given here may be useful for further discussing the attempts, due to Shlyakhtenko \cite{Shlyakhtenko:CMP01}\cite{Shlyakhtenko:Duke03}, of interpreting Gaboriau's work on costs by the idea of free entropy (dimension) due to Voiculescu. We introduce the notational convention we will employ: for a von Neumann algebra $N$, the unitaries, the partial isometries and the projections in $N$ are denoted by $N^u$, $N^{pi}$ and $N^p$, respectively. The left and right support projections of $v \in N^{pi}$ are denoted by $l(v)$ and $r(v)$, respectively, i.e., $l(v):=vv^*$ and $r(v):=v^* v$.
We also mention that only von Neumann algebras with separable preduals will be discussed throughout these notes. We should thank Damien Gaboriau, who earnestly explained to us the core idea in his work, and also thank Tomohiro Hayashi for pointing out an insufficiency in a preliminary version. The present notes were provided in part for the lectures we gave at the University of Tokyo in 2004, and we thank Yasuyuki Kawahigashi for his invitation and hospitality. \section{A Criterion for Treeability} Let $M \supseteq A$ be an inclusion of (not necessarily finite) von Neumann algebras with a faithful normal conditional expectation $E_A^M : M \rightarrow A$. Let $\mathbb{K}(M\supseteq A)$ be the $C^*$-algebra obtained as the operator norm $\Vert\ \cdot\ \Vert_{\infty}$-closure of $M e_A M$ on $L^2\left(M\right)$ with the Jones projection $e_A$ associated with $E_A^M$; it is called the algebra of relative compact operators associated with the triple $M \supseteq A$, $E_A^M$. We use the notion of Relative Haagerup Property due to Boca \cite{Boca:Pacific93}. (Quite recently, Popa used a slightly different formulation of Relative Haagerup Property in the type II$_1$ setting, see \cite{Popa:Preprint'03}, but we employ Boca's in these notes.)
The triple $M \supseteq A$, $E_A^M$ is said to have Relative Haagerup Property if there is a net of $A$-bimodule (unital) normal completely positive maps $\Psi_{\lambda} : M \rightarrow M$, $\lambda \in \Lambda$, with $E_A^M\circ \Psi_{\lambda} = E_A^M$ for every $\lambda \in \Lambda$ such that for a fixed (and hence any) faithful state $\varphi \in M_*$ with $\varphi\circ E_A^M=\varphi$ one has \begin{itemize} \item $\lim_{\lambda}\left\Vert\Psi_{\lambda}(x) - x \right\Vert_{\varphi} = 0$ for every $x \in M$, or equivalently $\lim_{\lambda} \Psi_{\lambda} = \mathrm{id}_M$ pointwise in $\sigma$-strong topology; \item $\widehat{\Psi}_{\lambda} \in \mathbb{K}(M\supseteq A)$, \end{itemize} where $\widehat{\Psi}_{\lambda}$ is the bounded operator on $L^2\left(M\right)$ defined by $\widehat{\Psi}_{\lambda}\Lambda_{\varphi}(x) := \Lambda_{\varphi}\left(\Psi_{\lambda}(x)\right)$ for $x \in M$ with the canonical injection $\Lambda_{\varphi} : M \rightarrow L^2\left(M\right)$. The next lemma can be proved in essentially the same way as in \cite{Choda:ProcJapanAcad:83}, where group von Neumann algebras are dealt with. Although the detailed proof is now available in \cite[Proposition 3.5]{Jolissaint:JOT02}, we give a sketch for the reader's convenience, focusing on the ``only if" part, which we will need later. \begin{lemma}\label{Lem2-1} Assume that $M = A\rtimes_{\alpha}G$, i.e., $M$ is the crossed-product of $A$ by an action $\alpha$ of a countable discrete group $G$. Suppose that the action $\alpha$ has an invariant faithful state $\phi \in A_*$. Then, the inclusion $M \supseteq A$ with the canonical conditional expectation $E_A^M : M \rightarrow A$ has Relative Haagerup Property if and only if $G$ has Haagerup Property {\rm (}see {\rm \cite{Haagerup:InventMath79},\cite{Choda:ProcJapanAcad:83})}. \end{lemma} \begin{proof} (Sketch) Let $\lambda_g$, $g\in G$, be the canonical generators of $G$ in $M = A\rtimes_{\alpha}G$. The ``if" part is the easier implication.
In fact, if $G$ has Haagerup Property, i.e., there is a net of positive definite functions $\psi_{\lambda}$ vanishing at infinity such that $\psi_{\lambda}(g) \rightarrow 1$ for every $g \in G$, then the required $\Psi_{\lambda}$ can be constructed in such a way that $\Psi_{\lambda}\big(\sum_g a(g)\lambda_g\big) := \sum_{g\in G} \psi_{\lambda}(g)a(g)\lambda_g$ for every finite linear combination $\sum_{g\in G} a(g)\lambda_g \in A\rtimes_{\alpha}G$, see \cite[Lemma 1.1]{Haagerup:InventMath79}. The ``only if" part is as follows. Define $\psi_{\lambda}(g) := \phi\circ E_A^M\big(\Psi_{\lambda}(\lambda_g)\lambda_g^*\big)$, $g \in G$; clearly $\psi_{\lambda}(g) \rightarrow 1$ for every $g \in G$. Let $\varepsilon>0$ be arbitrarily small. One can choose a $T = \sum_{i=1}^n x_i e_A y_i \in Me_A M$ with $\big\Vert \widehat{\Psi}_{\lambda} - T\big\Vert_{\infty}\leq\varepsilon/2$. Then, for $g \in G$ one has $\big|\psi_{\lambda}(g)\big| \leq \frac{\varepsilon}{2} + \sum_{i=1}^n \big\Vert x_i \big\Vert_{\infty} \big\Vert E_A^M(y_i\lambda_g)\big\Vert_{\phi}$. Since $\big\Vert y_i \big\Vert_{\phi\circ E_A^M}^2 = \sum_{h \in G}\left\Vert E_A^M\left(y_i\lambda_h^*\right)\right\Vert_{\phi}^2$ (where it is crucial that $\phi$ is invariant under $\alpha$), one can choose a finite subset $K$ of $G$ in such a way that every $g \in G\setminus K$ satisfies $\big\Vert E_A^M\big(y_i\lambda_g\big)\big\Vert_{\phi}\leq\varepsilon/\big(2\sum_{i=1}^n \Vert x_i \Vert_{\infty}\big)$. Then, $|\psi_{\lambda}(g)|\leq\varepsilon$ for every $g \in G\setminus K$. \end{proof} In what follows, we further assume that $A$ is commutative. Denote $\mathcal{G}(M\supseteq A) := \big\{ v \in M : v^*v, vv^* \in A^p,\ vAv^*=Avv^* \big\}$ and call it the full (normalizing) groupoid of $A$ in $M$. When $A$ is a MASA in $M$ and $\mathcal{G}(M\supseteq A)$ generates $M$ as von Neumann algebra, we call $A$ a Cartan subalgebra in $M$, see \cite{Feldman-Moore:TransAMS77}.
Let us introduce a von Neumann algebraic formulation of the set of one-sheeted sets in a countable discrete measurable groupoid. \begin{definition}\label{Def2-1} An $E_A^M$-groupoid is a subset $\mathcal{G}$ of $\mathcal{G}(M\supseteq A)$ equipped with the following properties: \begin{itemize} \item $u, v \in \mathcal{G} \Longrightarrow uv \in \mathcal{G}${\rm ;} \item $u \in \mathcal{G} \Longrightarrow u^* \in \mathcal{G}${\rm ;} \item $u \in A^{pi} \Longrightarrow u \in \mathcal{G}$ {\rm (}and, in particular, $u \in \mathcal{G}, p \in A^p \Longrightarrow pu, up \in \mathcal{G}${\rm );} \item Let $\left\{u_k\right\}_k$ be a {\rm (}possibly infinite{\rm )} collection of elements in $\mathcal{G}$. If the support projections and the range projections respectively form mutually orthogonal families, then $\sum_k u_k \in \mathcal{G}$ in $\sigma$-strong* topology{\rm ;} \item Each $u \in \mathcal{G}$ has a {\rm (}possibly zero{\rm )} $e \in A^p$ such that $e \leq l(u)$ and $E_A^M(u)=eu${\rm ;} \item Each $u \in \mathcal{G}$ satisfies that $E_A^M(uxu^*) = uE_A^M(x)u^*$ for every $x \in M$. \end{itemize} \end{definition} Such a projection $e$ as in the fifth condition is uniquely determined as the modulus part of the polar decomposition of $E_A^M(u)$. The sixth condition holds automatically, either when $E_A^M$ is the (unique) $\tau$-conditional expectation with a faithful tracial state $\tau \in M_*$ or when $A$ is a MASA in $M$. The next two lemmas are proved based on the same idea as for \cite[Proposition 2.2]{Popa:MathScand85}. \begin{lemma}\label{Lem2-2} Let $\mathcal{U}$ be an {\rm (}at most countably infinite{\rm )} collection of elements in $\mathcal{G}$, and $w_0 :=1, w_1,\dots$ be the words in $\mathcal{U}\sqcup\mathcal{U}^*$ of reduced form in the formal sense with regarding $u^{-1}= u^*$ for $u \in \mathcal{U}$. Suppose that $\mathcal{G}'' = A\vee\mathcal{U}''$ as von Neumann algebra.
Then, each $v \in \mathcal{G}$ has a partition $l(v) = \sum_k p_k$ in $A^p$ with $p_k v= p_k E_A^M\left(vw_k^*\right)w_k$ for every $k$. Furthermore, each coefficient $E_A^M\left(vw_k^*\right)$ falls in $A^{pi}$ and $v = \sum_k p_k E_A^M\left(vw_k^*\right) w_k$ in $\sigma$-strong* topology. \end{lemma} \begin{proof} By the fifth requirement of $E_A^M$-groupoids one can find a (unique) $e_k \in A^p$ in such a way that $e_k \leq l(v)$ and $E_A^M\left(vw_k^*\right) = e_k vw_k^*$, i.e., $e_k v = E_A^M\left(vw_k^*\right)w_k$. Set $e := \bigvee_k e_k$, and choose a faithful state $\varphi \in M_*$ with $\varphi\circ E_A^M = \varphi$. Then, we get $\big(v -ev| aw_k\big)_{\varphi}:=\varphi\big((aw_k)^*(v-ev)\big)= 0$ for every $a \in A$ and every $k$, where the sixth requirement is used crucially. Since the $aw_k$'s give a total subset in $\mathcal{G}''$ in $\sigma$-strong topology, we conclude that $v = ev$ so that $e=l(v)$. Since $A$ is commutative, one can construct $p_0, p_1,\dots \in A^p$ in such a way that $p_k \leq e_k$ and $\sum_k p_k = e$. Then, we have $p_k v = p_k e_k v = p_k E_A^M\left(vw_k^*\right)w_k$ for each $k$. \end{proof} \begin{lemma}\label{Lem2-2.1} Let $\mathcal{G}_0 \subseteq \mathcal{G}$ be an $E_A^M$-groupoid with a faithful normal conditional expectation $E_{M_0}^M : M \rightarrow M_0 := \mathcal{G}_0''$ with $E_A^M\circ E_{M_0}^M = E_A^M$. Then, each $u \in \mathcal{G}$ has a {\rm (}unique{\rm )} $p \in A^p$ such that $p \leq l(u)$ and $E_{M_0}^M(u)=pu$.
\end{lemma} \begin{proof} By the same method as for the previous lemma together with a standard exhaustion argument one can construct an (at most countably infinite) subset $\mathcal{W}$ of $\mathcal{G}_0$ that possesses the following properties: $E_A^M\left(w_1 w_2^*\right) = \delta_{w_1,w_2} l(w_1)$ for $w_1, w_2 \in \mathcal{W}$; each $u \in \mathcal{G}$ has an orthogonal family $\{e_w(u)\}_{w\in\mathcal{W}}$ such that $e_w(u) \leq l(uw^*)$ ($\leq l(u)\wedge r(w)$) and $e_w(u)u= E_A^M(uw^*)w$; if $u \in \mathcal{G}$ is chosen from $\mathcal{G}_0$, then $l(u) = \sum_{w\in\mathcal{W}} e_w(u)$. Choose a faithful state $\varphi \in M_*$ with $\varphi\circ E_A^M = \varphi$. Set $p := \sum_{w\in\mathcal{W}} e_w(u) \leq l(u)$, and we have $\big(E_{M_0}^M(u)-pu\big|aw\big)_{\varphi}:= \varphi\big((aw)^*(E_{M_0}^M(u)-pu)\big)= 0$ for every $aw$, $a \in A, w \in \mathcal{W}$. Hence, we get $E_{M_0}^M(u)=pu$. The uniqueness follows from the corresponding uniqueness for the (right) polar decomposition of $E_{M_0}^M(u)$. \end{proof} Let $\begin{matrix} \mathcal{G}_{11} & \supset & \mathcal{G}_{12} \\ \cup & & \cup \\ \mathcal{G}_{21} & \supset & \mathcal{G}_{22} \end{matrix}$ be $E_A^M$-groupoids and write $M_{ij} := \mathcal{G}_{ij}''$. Assume that there are faithful normal conditional expectations $E_{ij} : M \rightarrow M_{ij}$ with $E_A^M\circ E_{ij} = E_A^M$. In this case, Lemma \ref{Lem2-2.1} enables us to see that the following three conditions are equivalent: $\begin{matrix} M_{11} & \supset & M_{12} \\ \cup & & \cup \\ M_{21} & \supset & M_{22} \end{matrix}$ forms a commuting square; $M_{22} = M_{12}\cap M_{21}$; and $\mathcal{G}_{22} = \mathcal{G}_{12}\cap\mathcal{G}_{21}$. Moreover, one also observes, in a similar way as above, that if two $E_A^M$-groupoids inside a fixed $\mathcal{G}$ generate the same intermediate von Neumann algebra between $\mathcal{G}'' \supseteq A$, then they must coincide.
If $A = \mathbf{C}1$, then the image $\pi(\mathcal{G})$ with the quotient map $\pi : M^u \rightarrow M^u/\mathbb{T}1$ is a countable discrete group. The full groupoid $\mathcal{G}(M\supseteq A)$ itself becomes an $E_A^M$-groupoid when $A$ is a MASA in $M$ thanks to Dye's lemma (\cite[Lemma 6.1]{Dye:AmerJMath63}; also see \cite{Choda:ProcJapanAcad:65}), which asserts the same as in Lemma \ref{Lem2-2.1} for $\mathcal{G}(M\supseteq A)$ without any assumption when $A$ is a Cartan subalgebra in $M$. (The non-finite case needs a recently well-established result in \cite{Aoi:JMSJ03}.) Moreover, the set of one-sheeted sets in a countable discrete measurable groupoid canonically gives an $E_A^M$-groupoid, where $M \supseteq A$ with $E_A^M : M \rightarrow A$ are constructed by the so-called regular representation. See just after the next lemma for this fact. Let us introduce the notions of graphings and treeings due to Adams \cite{Adams:ErgodicTheoryDynamSystems:90} (also see \cite{Gaboriau:InventMath00}, \cite[Proposition 7.5]{Shlyakhtenko:CMP01}) in operator algebra framework. We call a collection $\mathcal{U}$ as in Lemma \ref{Lem2-2}, i.e., one with $\mathcal{G}''=A\vee\mathcal{U}''$, a graphing of $\mathcal{G}$. On the other hand, a collection $\mathcal{U}$ of elements in $\mathcal{G}(M\supseteq A)$ ({\it n.b.}, not assumed to be a graphing) is said to be a treeing if $E_A^M(w)=0$ for all words $w$ in $\mathcal{U}\sqcup\mathcal{U}^*$ of reduced form in the formal sense. This is equivalent to saying that $\mathcal{U}$ is a $*$-free family (or equivalently, that $\left\{ A\vee\{u\}''\right\}_{u\in\mathcal{U}}$ is a free family of von Neumann algebras) with respect to $E_A^M$ in the sense of Voiculescu (see e.g.~\cite[\S\S3.8]{VDN:Book92}), since every element in $\mathcal{G}(M\supseteq A)$ normalizes $A$. We say that $\mathcal{G}$ has a treeing $\mathcal{U}$ when $\mathcal{U}$ is simultaneously a treeing and a graphing of $\mathcal{G}$, and also that $\mathcal{G}$ is treeable if $\mathcal{G}$ has a treeing.
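A basic example, standard and not taken from the text above, may help: in a crossed product by a free group, the canonical generating unitaries form a treeing.

```latex
% Let M = A \rtimes_\alpha \mathbb{F}_n with free generators g_1,\dots,g_n,
% E_A^M the canonical conditional expectation, and set
% U := \{\lambda_{g_1},\dots,\lambda_{g_n}\} \subseteq \mathcal{G}(M \supseteq A).
% A word w in U \sqcup U^* of reduced form in the formal sense equals
% \lambda_h for some reduced word h \neq e in \mathbb{F}_n, and hence
\[
E_A^M(w) \;=\; E_A^M\left(\lambda_h\right) \;=\; 0 .
\]
% Since A \vee U'' = M, the collection U is both a graphing and a treeing,
% so the E_A^M-groupoid generated by A^{pi} and the \lambda_{g_i}'s is treeable.
```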
\begin{lemma}\label{Lem2-3} {\rm (cf.~\cite{Haagerup:InventMath79}, \cite{Boca:Pacific93})} If an $E_A^M$-groupoid $\mathcal{G}$ has a treeing $\mathcal{U}$, then the inclusion $M(\mathcal{G}) := \mathcal{G}'' \supseteq A$ with $E_A^M\big|_{M(\mathcal{G})} : M(\mathcal{G}) \rightarrow A$ must have Relative Haagerup Property. \end{lemma} \begin{proof} We may and do assume $M=M(\mathcal{G})$ for simplicity. We first assume that $\mathcal{U}$ is a finite collection. Since $\mathcal{U}$ is a treeing, for each $u \in \mathcal{U}$ and each $m\neq0$ one has either $u^m=0$ or $E_A^M\left(u^m\right)=0$, and thus each $N_u := A\vee\{u\}''$ can be decomposed into \begin{equation*} \text{(i)}\quad N_u = \sideset{}{^{\oplus}}\sum_{|m|\leq n_u} u^m A \quad \text{or} \quad \text{(ii)}\quad N_u = \sideset{}{^{\oplus}}\sum_{m\in\mathbb{Z}} u^m A \end{equation*} in the Hilbert space $L^2(M)$ via $\Lambda_{\varphi}$ with a faithful state $\varphi \in M_*$ with $\varphi\circ E_A^M=\varphi$. Here, $u^{-m}$ means the adjoint $u^*{}^m$ by convention. By looking at this description, it is not so hard to confirm that each triple $N_u \supseteq A$ with $E_A^M\big|_{N_u}$ satisfies Relative Haagerup Property. Namely, one can construct a net $\Psi_u^{(\varepsilon)} : N_u \rightarrow N_u$ of completely positive maps in such a way that \begin{itemize} \item $E_A^M\circ \Psi_u^{(\varepsilon)} = E_A^M\big|_{N_u}$; \item $\Psi_u^{(\varepsilon)}$ converges to $\mathrm{id}_{N_u}$ pointwise, in $\sigma$-strong topology, as $\varepsilon \searrow 0$; \item $\widehat{\Psi}_u^{(\varepsilon)}$ falls into $\mathbb{K}\left(N_u\supseteq A\right)$ in $L^2(N_u)= \overline{\Lambda_{\varphi}(N_u)}$; \item $T_u^{(\varepsilon)} := \widehat{\Psi}_u^{(\varepsilon)}\big|_{L^2(N_u)^{\circ}}$ satisfies $\left\Vert T_u^{(\varepsilon)}\right\Vert_{\infty} = \exp(-\varepsilon)$ with $L^2(N_u)^{\circ} := (1-e_A)L^2(N_u)$.
\end{itemize} The case (i) is easy, that is, \begin{equation*} \Psi_u^{(\varepsilon)} := e^{-\varepsilon}\ \mathrm{id}_{N_u} + (1-e^{-\varepsilon}) E_A^M\big|_{N_u} = E_A^M\big|_{N_u} + e^{-\varepsilon}\left(\mathrm{id}_{N_u}-E_A^M\big|_{N_u}\right) \end{equation*} converges to $\mathrm{id}_{N_u}$ pointwise, in $\sigma$-strong topology, and one has \begin{equation}\label{eq1-L2-3} \widehat{\Psi}_u^{(\varepsilon)} = e_A + e^{-\varepsilon}\left(\sum_{0\lneqq|m|\leq n_u} u^m e_A u^{-m}\right) \in N_u e_A N_u. \end{equation} The case (ii) requires a modification of the standard argument \cite[Lemma 1.1]{Haagerup:InventMath79}. By using the cyclic representation of $\mathbb{Z}$ induced by the positive definite function $m \mapsto e^{-\varepsilon|m|}$ one can construct a sequence $s_k \in \ell^{\infty}(\mathbb{Z})$ satisfying that $\sum_k \left|s_k(m)\right|^2 < +\infty$ for every $m \in \mathbb{Z}$ and moreover that $\sum_k s_k(m_1)\overline{s_k(m_2)} = e^{-\varepsilon|m_1-m_2|}$ for every pair $m_1, m_2 \in \mathbb{Z}$. Set $S_k := \sum_{m\in\mathbb{Z}} s_k(m) u^m e_A u^{-m}$ (on $L^2(N_u)$), and then the desired completely positive maps can be given by \begin{equation*} \Psi_u^{(\varepsilon)} : x \in N_u \mapsto \sum_k S_k x S_k^* \in B\left(L^2\left(N_u\right)\right). \end{equation*} (Note here that $u^m e_A u^{-m}$ is the projection from $L^2(N_u)$ onto $\overline{\Lambda_{\varphi}\left(u^m A\right)}$.) In fact, it is easy to see that $\Psi_u^{(\varepsilon)}\left(u^m a\right) = e^{-\varepsilon|m|} u^m a$ for $m \in \mathbb{Z}$, $a \in A$, which shows that the range of $\Psi_u^{(\varepsilon)}$ sits in $N_u$ and that $\Psi_u^{(\varepsilon)}$ converges to $\mathrm{id}_{N_u}$ pointwise, in $\sigma$-strong topology. Moreover, one has \begin{equation}\label{eq2-L2-3} \widehat{\Psi}_u^{(\varepsilon)} = \sum_{m\in\mathbb{Z}} e^{-\varepsilon|m|} u^m e_A u^{-m} = \lim_{n\rightarrow\infty} \sum_{|m|\leq n} e^{-\varepsilon|m|} u^m e_A u^{-m} \end{equation} in operator norm.
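(For completeness, we recall the standard reason why $m \mapsto e^{-\varepsilon|m|}$ is positive definite on $\mathbb{Z}$, which is the fact behind the cyclic representation used above: it is the Fourier coefficient sequence of the positive Poisson kernel.)

```latex
% With r := e^{-\varepsilon} \in (0,1) one has
\[
\sum_{m\in\mathbb{Z}} r^{|m|} e^{imt}
  \;=\; \frac{1-r^2}{1-2r\cos t + r^2} \;=:\; P_r(t) \;\geq\; 0,
\qquad
e^{-\varepsilon|m|} \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(t)\, e^{-imt}\, dt,
\]
% so that for any finitely supported family (c_m) of complex numbers
\[
\sum_{m_1,m_2\in\mathbb{Z}} c_{m_1}\overline{c_{m_2}}\, e^{-\varepsilon|m_1-m_2|}
  \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} P_r(t)\,
    \Bigl|\sum_{m\in\mathbb{Z}} c_m e^{-imt}\Bigr|^2 dt \;\geq\; 0 .
\]
```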
Since $\mathcal{U}$ is a treeing, we have \begin{equation*} \left(M, E_A^M\right) = \underset{u\in\mathcal{U}}{\bigstar_A}\left(N_u, E_A^M\big|_{N_u}\right). \end{equation*} Therefore, \cite[Proposition 3.9]{Boca:Pacific93} shows that the inclusion $M \supseteq A$ with $E_A^M$ satisfies Relative Haagerup Property since we have shown that so does each $N_u \supseteq A$ with $E_A^M\big|_{N_u}$. However, we would like to give the detailed argument on this point for the reader's convenience. Thanks to $E_A^M\circ \Psi_u^{(\varepsilon)} = E_A^M\big|_{N_u}$, we can construct the free products of completely positive maps $\Psi^{(\varepsilon)} := \underset{u\in\mathcal{U}}{\bigstar_A} \Psi_u^{(\varepsilon)} : M \rightarrow M$, which is uniquely determined by the following properties: \begin{itemize} \item $E_A^M\circ\Psi^{(\varepsilon)} = E_A^M$; \item $\Psi^{(\varepsilon)}\left(x_1 x_2 \cdots x_{\ell}\right) = \Psi_{u_1}^{(\varepsilon)}\left(x_1\right)\Psi_{u_2}^{(\varepsilon)}\left(x_2\right)\cdots\Psi_{u_{\ell}}^{(\varepsilon)}\left(x_{\ell}\right)$ for $x_j \in \mathrm{Ker}E_A^M\cap N_{u_j}$ with $u_1\neq u_2\neq\cdots\neq u_{\ell}$. \end{itemize} (See \cite[Theorem 3.8]{BlanchardDykema:Pacific01} for the most general form at present.) Since each $\Psi_u^{(\varepsilon)}$ converges to $\mathrm{id}_{N_u}$ pointwise in $\sigma$-strong topology, as $\varepsilon\searrow0$, the above two properties enable us to confirm that so does $\Psi^{(\varepsilon)}$ to $\mathrm{id}_M$.
It is standard to see that \begin{equation*} \widehat{\Psi}^{(\varepsilon)} = 1_{L^2(A)}\oplus\sideset{}{^{\oplus}}\sum_{\ell\geq1}\sideset{}{^{\oplus}}\sum_{u_1\neq u_2 \neq \cdots \neq u_{\ell}}T_{u_1}^{(\varepsilon)}\otimes_{\varphi}T_{u_2}^{(\varepsilon)}\otimes_{\varphi}\cdots\otimes_{\varphi}T_{u_{\ell}}^{(\varepsilon)} \end{equation*} in the free product representation \begin{equation}\label{eq3-L2-3} L^2(M) = L^2(A)\oplus\sideset{}{^{\oplus}}\sum_{\ell\geq1}\sideset{}{^{\oplus}}\sum_{u_1\neq u_2 \neq \cdots \neq u_{\ell}}L^2\left(N_{u_1}\right)^{\circ}\otimes_{\varphi}\cdots\otimes_{\varphi} L^2\left(N_{u_{\ell}}\right)^{\circ} \end{equation} with $L^2(A) = \overline{\Lambda_{\varphi}(A)} \subseteq L^2(M)$, where $\otimes_{\varphi}$ means the relative tensor product operation over $A$ with respect to $\varphi|_A \in A_*$ (see \cite{Sauvageot:JOT83}). Notice that, with $x_j^{\circ} \in \mathrm{Ker}E_A^M\cap N_{u_j}$, $u_1\neq u_2\neq\cdots\neq u_{\ell}$, \begin{equation*} \Lambda_{\varphi}\left(x_1^{\circ} x_2^{\circ} \cdots x_{\ell}^{\circ}\right)=\Lambda_{\varphi}\left(x_1^{\circ}\right)\otimes_{\varphi}\Lambda_{\varphi}\left(x_2^{\circ}\right)\otimes_{\varphi}\cdots\otimes_{\varphi}\Lambda_{\varphi}\left(x_{\ell}^{\circ}\right) \end{equation*} in \eqref{eq3-L2-3}, and hence by \eqref{eq1-L2-3},\eqref{eq2-L2-3}, we have, via \eqref{eq3-L2-3}, \allowdisplaybreaks{ \begin{align*} &\widehat{\Psi}^{(\varepsilon)}\big|_{L^2\left(N_{u_1}\right)^{\circ}\otimes_{\varphi}L^2\left(N_{u_2}\right)^{\circ}\otimes_{\varphi}\cdots\otimes_{\varphi} L^2\left(N_{u_{\ell}}\right)^{\circ}} \\ &= T_{u_1}^{(\varepsilon)}\otimes_{\varphi}T_{u_2}^{(\varepsilon)}\otimes_{\varphi}\cdots\otimes_{\varphi}T_{u_{\ell}}^{(\varepsilon)} \\ &= \sum_{m_1,m_2,\dots,m_{\ell}} e^{-\varepsilon n}\ u_1^{m_1} u_2^{m_2}\cdots u_{\ell}^{m_{\ell}} e_A u_{\ell}^{-m_{\ell}}\cdots u_2^{-m_2} u_1^{-m_1} \end{align*} }with certain natural numbers $n=n(u_1,u_2,\dots,u_{\ell}; m_1,m_2,\dots,m_{\ell})$ 
that converges to $+\infty$ as $|m_1|,|m_2|,\dots,|m_{\ell}| \rightarrow \infty$ (whenever this is possible). Note also that \allowdisplaybreaks{ \begin{align*} \left\Vert T_{u_1}^{(\varepsilon)}\otimes_{\varphi}T_{u_2}^{(\varepsilon)}\otimes_{\varphi}\cdots\otimes_{\varphi}T_{u_{\ell}}^{(\varepsilon)}\right\Vert_{\infty} &\leq \left\Vert T_{u_1}^{(\varepsilon)} \right\Vert_{\infty}\cdot\left\Vert T_{u_2}^{(\varepsilon)} \right\Vert_{\infty}\cdots\left\Vert T_{u_{\ell}}^{(\varepsilon)} \right\Vert_{\infty} \\ &= e^{-\ell\varepsilon} \longrightarrow 0 \quad \text{(as $\ell \rightarrow \infty$)}. \end{align*} }By these facts, $\widehat{\Psi}^{(\varepsilon)}$ clearly falls in the operator norm closure of $M e_A M$ since $\mathcal{U}$ is a finite collection. Hence, the net $\Psi^{(\varepsilon)}$ of completely positive maps on $M$ provides the desired one, showing that the inclusion $M\supseteq A$ with $E_A^M$ has Relative Haagerup Property. Next, we deal with the case that $\mathcal{U}$ is an infinite collection. In this case, one first chooses a filtration $\mathcal{U}_1 \subseteq \mathcal{U}_2 \subseteq \cdots \nearrow \mathcal{U} = \bigcup_k \mathcal{U}_k$ by finite sub-collections. Then, instead of the above $\Psi^{(\varepsilon)}$ we consider the completely positive maps \begin{equation*} \Psi^{(\varepsilon)}_k := \left(\underset{u \in \mathcal{U}_k}{\bigstar_A}\Psi_u^{(\varepsilon)}\right)\circ E_{M_k}^M : M \rightarrow M_k := \bigvee_{u\in\mathcal{U}_k} N_u \left(=\underset{u\in\mathcal{U}_k}{\bigstar_A} N_u\right) \rightarrow M_k \subseteq M \end{equation*} with the $\varphi\circ E_A^M$-conditional expectation $E_{M_k}^M : M \rightarrow M_k$.
Since $M_1 \subseteq M_2 \subseteq \cdots \nearrow M = \bigvee_k M_k$, the non-commutative martingale convergence theorem \cite[Lemma 2]{Connes:JFA75} says that $E_{M_k}^M$ converges to $\mathrm{id}_M$ pointwise, in $\sigma$-strong topology, as $k\rightarrow\infty$, and so does $\Psi^{(\varepsilon)}_k$ to $\mathrm{id}_M$ too, as $\varepsilon\searrow0$, $k\rightarrow\infty$. We easily see that \begin{equation}\label{eq4-L2-3} \widehat{\Psi}^{(\varepsilon)}_k = 1_{L^2(A)}\oplus\sideset{}{^{\oplus}}\sum_{\ell\geq1}\sideset{}{^{\oplus}}\sum_{u_1\neq u_2 \neq \cdots \neq u_{\ell} \atop u_j \in \mathcal{U}_k}T_{u_1}^{(\varepsilon)}\otimes_{\varphi}T_{u_2}^{(\varepsilon)}\otimes_{\varphi}\cdots\otimes_{\varphi}T_{u_{\ell}}^{(\varepsilon)} \end{equation} in \eqref{eq3-L2-3}. Note that the summation in each $\ell$th direct summand of \eqref{eq4-L2-3} is taken over the alternating words of length $\ell$ in the fixed finite collection $\mathcal{U}_k$, and thus the previous argument works for showing that $\widehat{\Psi}^{(\varepsilon)}_k$ falls into $\mathbb{K}(M\supseteq A)$. Hence, we are done. \end{proof} Here, we briefly summarize some basic facts on von Neumann algebras associated with countable discrete measurable groupoids, see e.g.~\cite{Hahn:TransAMS78},\cite{Renault:LNM793}. Let $\Gamma$ be a countable discrete measurable groupoid with unit space $X$, where $X$ is a standard Borel space with a regular Borel measure.
With a measure on $X$ which is non-singular under $\Gamma$, one can construct, in a canonical way, a pair $M(\Gamma)\supseteq A(\Gamma)$ of a von Neumann algebra and a distinguished commutative von Neumann subalgebra with $A(\Gamma) = L^{\infty}(X)$, together with a faithful normal conditional expectation $E_{\Gamma} : M(\Gamma) \rightarrow A(\Gamma)$, by the so-called regular representation of $\Gamma$ due to Hahn \cite{Hahn:TransAMS78} (also see \cite[Chap.~II]{Renault:LNM793}), which generalizes Feldman-Moore's construction \cite{Feldman-Moore:TransAMS77} for countable discrete measurable equivalence relations. Denote by $\mathcal{G}_{\Gamma}$ the set of ``one-sheeted sets in $\Gamma$", also called ``$\Gamma$-sets", i.e., measurable subsets of $\Gamma$ on which the mappings $\gamma \in \Gamma \mapsto \gamma\gamma^{-1}, \gamma^{-1}\gamma \in X$ are both injective. Note that $\mathcal{G}_{\Gamma}$ becomes an inverse semigroup with product $E_1 E_2 := \{\gamma_1\gamma_2 : \gamma_1 \in E_1, \gamma_2 \in E_2, \gamma_1^{-1}\gamma_1 = \gamma_2\gamma_2^{-1} \}$ and inverse $E \mapsto E^{-1} := \{\gamma^{-1} : \gamma \in E\}$.
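In the principal case, where $\Gamma$ is (the groupoid of) an equivalence relation and arrows are pairs $(y,x)$, one-sheeted sets are exactly graphs of partial bijections of $X$, and the inverse semigroup operations become composition and inversion of partial maps. A minimal Python model of these operations (ours, purely illustrative) may help fix the ideas:

```python
# One-sheeted sets of the pair groupoid X x X, modeled as sets of arrows
# (target, source); one-sheetedness = both coordinate maps are injective.

def is_one_sheeted(E):
    srcs = [x for (_, x) in E]
    tgts = [y for (y, _) in E]
    return len(set(srcs)) == len(srcs) and len(set(tgts)) == len(tgts)

def product(E1, E2):
    """E1 E2: compose arrows whose range (of E2) meets the source of E1."""
    f2 = {x: y for (y, x) in E2}
    f1 = {x: y for (y, x) in E1}
    return {(f1[m], x) for x, m in f2.items() if m in f1}

def inverse(E):
    return {(x, y) for (y, x) in E}

E1 = {(1, 0), (2, 1)}          # partial bijection 0 -> 1, 1 -> 2
E2 = {(0, 2)}                  # partial bijection 2 -> 0
print(product(E1, E2))         # 2 -> 0 -> 1, i.e. {(1, 2)}
```

For instance, $E_1 E_1^{-1}$ comes out as the set of units over the range of $E_1$, matching the identity $l\left(u(E_1)\right)=\chi_{E_1E_1^{-1}}$ recalled below.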
Each $E \in \mathcal{G}_{\Gamma}$ gives an element $u(E) \in \mathcal{G}\left(M(\Gamma)\supseteq A(\Gamma)\right)$ with the following properties: its left and right support projections $l\left(u(E)\right), r\left(u(E)\right)$ coincide with the characteristic functions of $EE^{-1}=\{\gamma\gamma^{-1} : \gamma \in E\}, E^{-1}E=\{ \gamma^{-1}\gamma : \gamma \in E \}$, respectively, in $L^{\infty}(X)$; the mapping $u : E \in \mathcal{G}_\Gamma \mapsto u(E) \in \mathcal{G}\left(M(\Gamma)\supseteq A(\Gamma)\right)$ is an inverse semigroup homomorphism (injective modulo null sets), where $\mathcal{G}\left(M(\Gamma)\supseteq A(\Gamma)\right)$ is equipped with the inverse operation $u \mapsto u^*$; $E_{\Gamma}(u(E)) = eu(E)$ with the projection $e$ given by the characteristic function of $X\cap E$; and $E_{\Gamma}\big(u(E)xu(E)^*\big)=u(E)E_{\Gamma}(x)u(E)^*$ for every $x\in M(\Gamma)$. It is not difficult to see that $\mathcal{G}(\Gamma) := A(\Gamma)^{pi}u\left(\mathcal{G}_{\Gamma}\right)= \left\{ au(E) \in \mathcal{G}\left(M(\Gamma)\supseteq A(\Gamma)\right) : a \in A(\Gamma)^{pi}, E \in \mathcal{G}_{\Gamma} \right\}$ is an $E_{\Gamma}$-groupoid, which generates $M(\Gamma)$ as von Neumann algebra. An (at most countably infinite) collection $\mathcal{E}$ of elements in $\mathcal{G}_{\Gamma}$ is called a graphing of $\Gamma$ if it generates $\Gamma$ as a groupoid, or equivalently, if the smallest subgroupoid containing $\mathcal{E}$ is $\Gamma$ itself. If no word in $\mathcal{E}\sqcup\mathcal{E}^{-1}$ of reduced form in the formal sense intersects the unit space $X$ in a set of strictly positive measure, then we call $\mathcal{E}$ a treeing of $\Gamma$.
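To illustrate the notion of graphing in the simplest situation (our own toy example): on the full pair groupoid of $X=\{0,\dots,n-1\}$, the single one-sheeted set $\sigma=\{(i+1 \bmod n,\, i)\}$ is a graphing, since closing $\{\sigma,\sigma^{-1}\}$ together with the units under the groupoid product recovers all $n^2$ arrows. A brute-force Python check:

```python
# Arrows are pairs (target, source).  Starting from the units and the
# generators (with their inverses), close under composition of arrows.

def generated_arrows(gens, units):
    arrows = set(units)
    for g in gens:
        arrows |= set(g) | {(x, y) for (y, x) in g}
    changed = True
    while changed:
        changed = False
        for (y1, x1) in list(arrows):
            for (y2, x2) in list(arrows):
                # compose x2 -> y2 with x1 -> y1 when they match up
                if x1 == y2 and (y1, x2) not in arrows:
                    arrows.add((y1, x2))
                    changed = True
    return arrows

n = 4
sigma = {((i + 1) % n, i) for i in range(n)}
units = {(i, i) for i in range(n)}
print(len(generated_arrows([sigma], units)))  # 16 = n^2 arrows
```

By contrast, the $n-1$ single-arrow sets $\{(i+1,i)\}$, $0\leq i\leq n-2$, also generate, and no reduced word in them meets the units, so they form a treeing of this toy groupoid.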
Then, it is not hard to see the following two facts: (i) the collection $u(\mathcal{E})$ of $u(E) \in \mathcal{G}\left(M(\Gamma)\supseteq A(\Gamma)\right)$ with $E \in \mathcal{E}$ is a graphing of $\mathcal{G}({\Gamma})$ if and only if $\mathcal{E}$ is a graphing of $\Gamma$; and similarly, (ii) the collection $u(\mathcal{E})$ is a treeing of $\mathcal{G}({\Gamma})$ if and only if $\mathcal{E}$ is a treeing of $\Gamma$. With these considerations, the previous two lemmas immediately imply the following criterion for treeability: \begin{proposition}\label{Prop2-4} Relative Haagerup Property of $M(\Gamma)\supseteq A(\Gamma)$ with $E_{\Gamma}$ is necessary for the treeability of a countable discrete measurable groupoid $\Gamma$. In particular, any countably infinite discrete group without Haagerup Property has no treeable free action with finite invariant measure. \end{proposition} Note that this follows from a much deeper result due to Hjorth (see \cite[\S28]{Kechris-Miller:LNM04}) with the aid of Lemma \ref{Lem2-1} if the given $\Gamma$ is principal, i.e., an equivalence relation. The above proposition clearly implies the following result of Adams and Spatzier: \begin{corollary}\label{Thm2-5} {\rm (\cite[Theorem 1.8]{Adams-Spatzier:AmerJMath90})} Any countably infinite discrete group with Property T admits no treeable free ergodic action with finite invariant measure. \end{corollary} \begin{remark}\label{Rem2-6} Note that the finite measure preserving assumption is very important in the above assertions. In fact, any countably infinite discrete group with Property T has an amenable free ergodic action without finite invariant measure {\rm (}e.g.~the boundary actions of some word-hyperbolic groups and the translation actions of discrete groups on themselves{\rm )}.
\end{remark} \section{Operator Algebra Approach to Gaboriau's Results} We explain how to re-prove Gaboriau's results \cite{Gaboriau:InventMath00} on costs of equivalence relations (and slightly generalize them to the groupoid setting) in the operator algebra framework, avoiding any measure-theoretic argument. Throughout this section, we keep and employ the terminology of the previous section. Let $\mathcal{E}$ be a graphing of a countable discrete measurable groupoid $\Gamma$ with a non-singular probability measure $\mu$ on the unit space $X$. Following Levitt \cite{Levitt:ErgodicTheoryDynamSystems95} and Gaboriau \cite{Gaboriau:InventMath00}, the $\mu$-cost of $\mathcal{E}$ is defined to be \begin{equation*} C_{\mu}(\mathcal{E}) := \sum_{E \in \mathcal{E}} \frac{\mu\left(EE^{-1}\right)+\mu\left(E^{-1}E\right)}{2}, \end{equation*} and the $\mu$-cost of $\Gamma$ by taking the infimum over all graphings, that is, \begin{equation*} C_{\mu}(\Gamma) := \inf \left\{ C_{\mu}(\mathcal{E}) : \text{$\mathcal{E}$ graphing of $\Gamma$}\right\}. \end{equation*} In fact, if $\Gamma$ is principal (or equivalently a countable discrete equivalence relation) with an invariant probability measure $\mu$, the $\mu$-costs of graphings and that of $\Gamma$ coincide with those of Levitt and Gaboriau. Let $M\supseteq A$ be a von Neumann algebra with a distinguished commutative von Neumann subalgebra and a faithful normal conditional expectation $E_A^M : M \rightarrow A$, and $\mathcal{G}$ be an $E_A^M$-groupoid.
For a faithful state $\varphi \in M_*$ with $\varphi\circ E_A^M = \varphi$, the $\varphi$-cost of a graphing $\mathcal{U}$ of $\mathcal{G}$ is defined to be \begin{equation*} C_{\varphi}(\mathcal{U}) := \sum_{u \in \mathcal{U}} \frac{\varphi\left(l(u)+r(u)\right)}{2}, \end{equation*} and that of $\mathcal{G}$ by taking the infimum over all graphings of $\mathcal{G}$, that is, \begin{equation*} C_{\varphi}(\mathcal{G}) := \inf\left\{ C_{\varphi}(\mathcal{U}) : \text{$\mathcal{U}$ graphing of $\mathcal{G}$}\right\}. \end{equation*} We sometimes consider these cost functions $C_{\varphi}$, for both graphings and $E_A^M$-groupoids, defined by the same formulas even when $\varphi$ is not a state (but still normal and positive). When $\mathcal{G}=\mathcal{G}(\Gamma)$, i.e., the canonical $E_{\Gamma}$-groupoid associated with a countable discrete measurable groupoid $\Gamma$, it is plain to verify that $C_{\varphi}(\mathcal{G}(\Gamma)) = C_{\mu}(\Gamma)$ with the state $\varphi \in M(\Gamma)_*$ defined to be $\left(\int_X\ \cdot\ \mu(dx)\right)\circ E_{\Gamma}$. Therefore, it suffices to consider $E_A^M$-groupoids and their $\varphi$-costs in order to re-prove Gaboriau's results in the operator algebra framework and to generalize them to the (not necessarily principal) groupoid setting; indeed, many of the results in \cite{Gaboriau:InventMath00} can be proved purely within this framework. For example, we can show the following additivity formula for costs of $E_A^M$-groupoids: \begin{theorem}\label{Thm3-1} {\rm (cf.~\cite[Th\'{e}or\`{e}me IV.15]{Gaboriau:InventMath00})} Assume that $M$ has a faithful tracial state $\tau \in M_*$ with $\tau\circ E_A^M = \tau$. Let $\mathcal{G}_1 \supseteq \mathcal{G}_3 \subseteq \mathcal{G}_2$ be $E_A^M$-groupoids.
Set $N_1 := \mathcal{G}_1''$, $N_2 := \mathcal{G}_2''$ and $N_3 := \mathcal{G}_3''$ {\rm (}all of which clearly contain $A${\rm )}, and let $E_{N_3}^M : M \rightarrow N_3$ be the $\tau$-conditional expectation {\rm (}hence $E_A^M\circ E_{N_3}^M = E_A^M${\rm )}. Suppose that \begin{equation*} \big(M,E_{N_3}^M\big) = \big(N_1, E_{N_3}^M\big|_{N_1}\big)\underset{N_3}{\bigstar}\big(N_2, E_{N_3}^M\big|_{N_2}\big), \end{equation*} or equivalently that $\mathcal{G}_1$, $\mathcal{G}_2$ are $*$-free with amalgamation over $N_3$ with respect to $E_{N_3}^M$, and further that $A$ is a MASA in $N_3$ so that $\mathcal{G}_3 = \mathcal{G}\left(N_3\supseteq A\right)$ holds automatically, see the discussion just below Lemma \ref{Lem2-2.1}. {\rm (}Remark here that $A$ need not be a MASA in $N_1$ or $N_2$.{\rm )} If $N_3$ is hyperfinite, then the smallest $E_A^M$-groupoid $\mathcal{G} = \mathcal{G}_1\vee\mathcal{G}_2$ that contains $\mathcal{G}_1$ and $\mathcal{G}_2$ satisfies \begin{equation*} C_{\tau}\left(\mathcal{G}\right) = C_{\tau}\left(\mathcal{G}_1\right) + C_{\tau}\left(\mathcal{G}_2\right) - C_{\tau}\left(\mathcal{G}_3\right) \end{equation*} as long as $C_{\tau}\left(\mathcal{G}_1\right)$ and $C_{\tau}\left(\mathcal{G}_2\right)$ are both finite. \end{theorem} This can be regarded as a slight generalization of one of the main results in \cite{Gaboriau:InventMath00} to the groupoid setting. In fact, let $\Gamma$ be a countable discrete measurable groupoid with an invariant probability measure $\mu$, and assume that it is generated by two countable discrete measurable subgroupoids $\Gamma_1$, $\Gamma_2$.
If no alternating word in $\Gamma_1\setminus\Gamma_3, \Gamma_2\setminus\Gamma_3$ with $\Gamma_3 := \Gamma_1\cap\Gamma_2$ intersects the unit space in a set of strictly positive measure, i.e., $\Gamma$ is the ``free product with amalgamation $\Gamma_1 \bigstar_{\Gamma_3} \Gamma_2$'' (modulo null sets), and $\Gamma_3$ is principal and hyperfinite, then the above formula immediately implies the formula $C_{\mu}(\Gamma) = C_{\mu}\left(\Gamma_1\right)+C_{\mu}\left(\Gamma_2\right)-C_{\mu}\left(\Gamma_3\right)$ as long as $C_{\mu}\left(\Gamma_1\right)$ and $C_{\mu}\left(\Gamma_2\right)$ are both finite. Here, the same task as in \cite{Kosaki:JFA04} is needed. Proving the above theorem needs several lemmas and propositions, many of which can be proved by essentially the same ideas as in \cite{Gaboriau:InventMath00} even in the operator algebra framework, so some of their details will only be sketched. The next simple fact is probably known, but we could not find a suitable reference. \begin{lemma}\label{Lem3-2} Let $\mathcal{G}$ be an $E_A^M$-groupoid with $M = \mathcal{G}''$, and assume that $M$ is finite. Then, if $e, f \in A^p$ are equivalent in $M$, denoted by $e \sim_M f$, in the sense of Murray-von Neumann {\rm (}i.e., $l(u) = e$ and $f=r(u)$ for some $u \in M^{pi}${\rm )}, then there is an element $u \in \mathcal{G}$ such that $l(u)=e$ and $r(u)=f$. Hence, under the same assumption, if $p \in A^p$ has central support projection $c_M(p)=1$, then one can find $v_k \in \mathcal{G}$ in such a way that $\sum_k v_k p v_k^* = 1$. \end{lemma} \begin{proof} The latter assertion clearly follows from the former. Since the linear span of $\mathcal{G}$ is a dense $*$-subalgebra of $M$, $e\sim_M f$ implies $fMe\neq\{0\}$ so that there is a $v\in\mathcal{G}$ with $fve\neq0$. Letting $u_0 := fve$ one has $l(u_0) \leq f$ and $r(u_0)\leq e$, and thus $f-l(u_0) \sim_M e-r(u_0)$ since $M$ is finite. Hence a standard exhaustion argument completes the proof.
\end{proof} To prove the next proposition, Gaboriau's original argument still essentially works purely in the operator algebra framework. \begin{proposition}\label{Prop3-3}{\rm (\cite[Proposition I.9; Proposition I.11]{Gaboriau:InventMath00})} Suppose that $A$ is a MASA in $M$. Then, the following assertions hold true{\rm:} \begin{itemize} \item[{\rm (a)}] Let $\varphi \in M_*$ be a faithful state with $\varphi\circ E_A^M = \varphi$. If a graphing $\mathcal{U}$ of $\mathcal{G}(M\supseteq A)$ satisfies $C_{\varphi}(\mathcal{U})=C_{\varphi}\left(\mathcal{G}(M\supseteq A)\right)<+\infty$, then $\mathcal{U}$ must be a treeing. \item[{\rm (b)}] If $M$ is of finite type I {\rm (}hence $A$ is automatically a Cartan subalgebra{\rm )} and $\tau \in M_*$ is a faithful tracial state {\rm (}n.b., $\tau\circ E_A^M=\tau$ holds automatically{\rm )}, then every treeing $\mathcal{U}$ of $\mathcal{G}(M\supseteq A)$ satisfies \begin{equation*} C_{\tau}(\mathcal{U}) = 1 - \tau(e) = C_{\tau}\left(\mathcal{G}(M\supseteq A)\right), \end{equation*} where $e \in A^p$ is an arbitrary maximal abelian projection of $M$ {\rm (}hence the central support projection $c_M(e)=1${\rm )}. \end{itemize} \end{proposition} \begin{proof} (Sketch) (a) Suppose that $\mathcal{U}$ is not a treeing. Then, one can choose a word $v_{\ell}^{\varepsilon_{\ell}}\cdots v_1^{\varepsilon_1}$ in $\mathcal{U}\sqcup\mathcal{U}^*$ of reduced form in the formal sense in such a way that $E_A^M\left(v_{\ell}^{\varepsilon_{\ell}}\cdots v_1^{\varepsilon_1}\right)\neq0$ but every proper subword $v_i^{\varepsilon_i}\cdots v_j^{\varepsilon_j}$ satisfies $E_A^M\left(v_i^{\varepsilon_i}\cdots v_j^{\varepsilon_j}\right)=0$.
It is plain to find mutually orthogonal nonzero $e_1,\dots,e_{\ell} \in A^p$ with $e_k \leq r\left(v_k^{\varepsilon_k}\right)$ satisfying $v_k^{\varepsilon_k}e_k v_k^{\varepsilon_k}{}^* = e_{k+1}$ ($k=1,\dots,\ell-1$) and $v_{\ell}^{\varepsilon_{\ell}} e_{\ell} v_{\ell}^{\varepsilon_{\ell}}{}^* = e_1$, where the following simple fact is needed: If $A$ is a MASA in $M$, then any $v \in \mathcal{G}(M\supseteq A)\setminus A^{pi}$ admits a nonzero $e \in A^p$ such that $e\leq r(v)$ and $e(vev^*)=0$. Thus, $\mathcal{V} := \mathcal{U}\setminus\left\{v_{\ell}\right\}\sqcup\left\{\left(l\left(v_{\ell}^{\varepsilon_{\ell}}\right)-e_1\right)v_{\ell}^{\varepsilon_{\ell}}\right\}$ becomes a graphing and satisfies $C_{\varphi}(\mathcal{U})\gneqq C_{\varphi}(\mathcal{V})$, a contradiction. (b) Assume first that $M = M_n\left(\mathbf{C}\right)$. Let $\mathcal{V}$ be a graphing of $\mathcal{G}(M\supseteq A)$. Let $p_1,\dots,p_n \in A^p$ be the mutually orthogonal minimal projections in $M$, and define the new graphing $\mathcal{V}'$ to be the collection of all nonzero $p_i v p_j$ with $i,j=1,\dots,n$ and $v \in \mathcal{V}$, each of which is nothing but a standard matrix unit (modulo scalar multiple). Note that $C_{\tau}(\mathcal{V}) = C_{\tau}\left(\mathcal{V}'\right)$ by construction, and it is plain to see that if $\mathcal{V}$ is a treeing then so is $\mathcal{V}'$. We then construct a (non-oriented, geometric) graph whose vertices are $p_1,\dots,p_n$ and whose edges are given by $\mathcal{V}'$, regarding each $p_i v p_j \in \mathcal{V}'$ as an edge connecting $p_i$ and $p_j$. It is plain to see that a sub-collection $\mathcal{U}$ of $\mathcal{V}'$ is a treeing of $\mathcal{G}(M\supseteq A)$ if and only if the subgraph whose edges are given by $\mathcal{U}$ alone forms a maximal tree.
Therefore, a standard fact in graph theory (see e.g.~\cite[\S\S2.3]{Serre:book}) tells us that $\mathcal{V}'$ contains a treeing $\mathcal{U}$ of $\mathcal{G}(M\supseteq A)$, and that $\mathcal{V}'$ itself is a treeing when so is $\mathcal{V}$. Such a treeing is determined as a collection of matrix units $e_{i_1 j_1},\dots,e_{i_{n-1} j_{n-1}}$, up to scalar multiples, with the property that each of $1,\dots,n$ appears at least once among the subindices $i_1,j_1,\dots,i_{n-1},j_{n-1}$. Hence $C_{\tau}(\mathcal{V})=C_{\tau}\left(\mathcal{V}'\right)\geq C_{\tau}(\mathcal{U}) = 1-1/n$, which implies the desired assertion in the special case of $M=M_n(\mathbf{C})$. The simultaneous central decomposition of $M \supseteq A$ reduces the general case to the matrix case treated above. Proving that any treeing attains $C_{\tau}\left(\mathcal{G}(M\supseteq A)\right)$ needs the following simple fact: Let $\mathcal{U}$ be a graphing of $\mathcal{G}(M\supseteq A)$, and set $\mathcal{U}(\omega) := \left\{u(\omega) : u \in \mathcal{U}\right\}$ with $u = \int^{\oplus}_{\Omega} u(\omega) d\omega$ in the central decomposition of $M$ with $\mathcal{Z}(M) = L^{\infty}(\Omega) \subseteq A$. Then, $\mathcal{U}$ is $*$-free with respect to $E_A^M$ (in other words, a treeing) if and only if so is $\mathcal{U}(\omega)$ with respect to $E_{A(\omega)}^{M(\omega)}$ for a.e.~$\omega\in\Omega$ with $E_A^M = \int_{\Omega}^{\oplus} E_{A(\omega)}^{M(\omega)} d\omega$, see e.g.~the proof of \cite[Theorem 5.1]{Ueda:ASPM04}. \end{proof} \begin{remark}\label{Rem3-4}{\rm (1) In (a) above, the assumption that $A$ is a MASA in $M$ cannot be dropped; that is, the assertion no longer holds true in the non-principal groupoid case. In fact, let $M := L\left(\mathbb{Z}_N\right)$ be the group von Neumann algebra associated with the cyclic group $\mathbb{Z}_N$ and $\tau_{\mathbb{Z}_N}$ be the canonical tracial state.
Then, $\mathcal{G}(\mathbb{Z}_N) := \mathbb{T}1\cdot\lambda(\mathbb{Z}_N)$ is a $\tau_{\mathbb{Z}_N}(\ \cdot\ )1$-groupoid, and it is trivial that $C_{\tau_{\mathbb{Z}_N}}\left(\mathcal{G}(\mathbb{Z}_N)\right) = C_{\tau_{\mathbb{Z}_N}}\left(\{\lambda(\bar{1})\}\right)$ with the canonical generator $\bar{1} \in \mathbb{Z}_N$. This clearly provides a counter-example. (2) Notice that the cost $C_{\tau_G}(\mathcal{G}(G))$ of a group $G$ is clearly the smallest number $n(G)$ of generators of $G$, and hence Theorem \ref{Thm3-1} provides the quite natural formula $n(G\bigstar H) = n(G)+n(H)$. One should note here that the $\ell^2$-Betti numbers of discrete groupoids (\cite{Gaboriau:IHES02}, and also \cite{Sauer:Preprint03}) recover the group $\ell^2$-Betti numbers when a given groupoid is a group (see e.g.~the approach in \cite{Sauer:Preprint03}). (3) Assume that $M$ is properly infinite and $A$ is a Cartan subalgebra in $M$. Based on the fact that the inclusion $B\big(\ell^2(\mathbb{N})\big) \supseteq \ell^{\infty}(\mathbb{N})$ can be embedded into $M\supseteq A$, it is not difficult to see that $C_{\varphi}\left(\mathcal{G}(M\supseteq A)\right)=\frac{1}{2}$ for every faithful state $\varphi\in M_*$ with $\varphi\circ E_A^M=\varphi$. Therefore, the notion of cost seems to be of no use in the infinite case with general states. (4) One of the key ingredients in the proof of (b) can be illustrated by \begin{equation*} M_3(\mathbf{C}) \cong \begin{bmatrix} * & * & \\ * & * & \\ & & * \end{bmatrix} \underset{\begin{bmatrix} * & & \\ & * & \\ & & * \end{bmatrix}}{\bigstar} \begin{bmatrix} * & & \\ & * & * \\ & * & * \end{bmatrix} \end{equation*} which provides the treeing $e_{12}, e_{23}$ of $\mathcal{G}(M\supseteq A)$ with $M=M_3(\mathbf{C})$. This kind of fact is probably known, and specialists in free probability theory are quite familiar with similar phenomena in the context of (operator) matrix models of semicircular systems.
} \end{remark} Throughout the rest of this section, let us assume that $\mathcal{G}$ is an $E_A^M$-groupoid with $M=\mathcal{G}''$ and $\tau \in M_*$ is a faithful tracial state with $\tau\circ E_A^M = \tau$. For a given $p \in A^p$ we denote by $p\mathcal{G}p$ the set of $pup$ with $u\in\mathcal{G}$, which becomes an $E_{Ap}^{pMp}$-groupoid with $E_{Ap}^{pMp} := E_A^M\big|_{pMp}$. The next lemma is technical but quite important, and is shown in the same way as in Gaboriau's work. It is a graphing counterpart of the well-known construction of induced transformations (see e.g.~\cite[p.13--14]{Friedman:Book70}). \begin{lemma}\label{Lem3-5}{\rm (cf.~\cite[Lemme II.8]{Gaboriau:InventMath00})} Let $p \in A^p$ be such that the central support projection $c_M(p)=1$, and $\mathcal{U}$ be a graphing of $\mathcal{G}$. Then, there are a treeing $\mathcal{U}_v$ and a graphing $\mathcal{U}_h$ of $p\mathcal{G}p$ with the following properties{\rm :} \begin{itemize} \item[{\rm (a)}] $p$ is an abelian projection of $M_v:=A\vee\mathcal{U}_v{}''$ with $c_{M_v}(p) = 1${\rm ;} \item[{\rm (b)}] For a graphing $\mathcal{V}$ of $p\mathcal{G}p$, $\mathcal{U}_v\sqcup\mathcal{V}$ becomes a graphing of $\mathcal{G}${\rm ;} \item[{\rm (c)}] For a graphing $\mathcal{V}$ of $p\mathcal{G}p$, $\mathcal{U}_v\sqcup\mathcal{V}$ is a treeing of $\mathcal{G}$ if and only if so is $\mathcal{V}${\rm ;} \item[{\rm (d)}] $C_{\tau}(\mathcal{U}) = C_{\tau}(\mathcal{U}_v)+C_{\tau}(\mathcal{U}_h)$ and $C_{\tau}(\mathcal{U}_v) = 1-\tau(p)$. \end{itemize} \end{lemma} \begin{proof} (Sketch) Let $\mathcal{U}^{(\ell)}$ be the set of words in $\mathcal{U}\sqcup\mathcal{U}^*$ of reduced form in the formal sense and of length $\ell\geq 1$, and set $q_{\ell} := \bigvee_{w \in \mathcal{U}^{(\ell)}} w p w^*$. Since $A$ is commutative, we can construct inductively the projections $p_{\ell} \in A^p$ by $p_{\ell} := q_{\ell}(1-p_0-p_1-\cdots-p_{\ell-1})$ with $p_0:=p$.
Then we have $\sum_{\ell\geq0} p_{\ell} = 1$ thanks to $c_M(p)=1$. For each $u \in \mathcal{U}$, we define $u_{\ell_1,\ell_2} := p_{\ell_1} u p_{\ell_2} \in \mathcal{G}$ with $\ell_1,\ell_2 \in \mathbb{N}\sqcup\{0\}$, and consider the new collection $\widetilde{\mathcal{U}} := \bigsqcup_{\ell_1,\ell_2\geq0}\widetilde{\mathcal{U}}_{\ell_1,\ell_2}$ with $\widetilde{\mathcal{U}}_{\ell_1,\ell_2} := \left\{ u_{\ell_1,\ell_2} : u \in \mathcal{U} \right\}$ instead of the original $\mathcal{U}$ (without changing the $\tau$-costs). Replacing $u_{\ell_1,\ell_2}$ by its adjoint if $\ell_2 \lneqq \ell_1$, we may and do assume that $\widetilde{\mathcal{U}}_{\ell_1,\ell_2} = \emptyset$ whenever $\ell_2 \lneqq \ell_1$. Then, it is not so hard to see that $p_{\ell} = \bigvee_{v\in\widetilde{\mathcal{U}}_{\ell-1,\ell}} r(v)$ for every $\ell \geq 1$. Numbering $\widetilde{\mathcal{U}}_{\ell-1,\ell} = \left\{v_1,v_2,\dots\right\}$ we construct a partition $p_{\ell} = \sum_k s_k$ in $A^p$ inductively by $s_k := r(v_k)(1-s_1-\cdots-s_{k-1})$, and set $\widetilde{\mathcal{U}}_{\ell-1,\ell}' := \left\{v_k s_k\right\}_k$ and $\widetilde{\mathcal{U}}_{\ell-1,\ell}'' := \left\{v_k (1-s_k)\right\}_k$. Set $\mathcal{U}_v := \bigsqcup_{\ell\geq1}\widetilde{\mathcal{U}}'_{\ell-1,\ell}$, and then it is clear that the (right support) projections $r(v)$, $v\in\mathcal{U}_v$, are mutually orthogonal and moreover that $\sum_{v\in\widetilde{\mathcal{U}}_{\ell-1,\ell}'} r(v) = p_{\ell}$ (hence $\sum_{v\in\mathcal{U}_v}r(v)=1-p$).
Set $\mathcal{U}_v^{[k,\ell]} := \big\{ v_{k,k+1}\cdots v_{\ell-1,\ell} \neq 0 : v_{j-1,j} \in \widetilde{\mathcal{U}}_{j-1,j}'\big\}$ with $k\lneqq\ell$, and define $\mathcal{U}_h$ to be the collection of elements in $\mathcal{G}$ of the following form: either $v \in \widetilde{\mathcal{U}}_{0,0}$, or $w_1 v w_2^* \neq 0$ with either $w_1 \in \mathcal{U}_v^{[0,\ell_1]}$, $v \in\widetilde{\mathcal{U}}_{\ell_1,\ell_2}$, $w_2 \in \mathcal{U}_v^{[0,\ell_2]}$ ($\ell_1 = \ell_2$ or $\ell_1 \leq \ell_2-2$); or $w_1 \in \mathcal{U}_v^{[0,\ell]}$, $v \in\widetilde{\mathcal{U}}_{\ell-1,\ell}''$, $w_2 \in \mathcal{U}_v^{[0,\ell-1]}$. It is not so hard to verify that all the assertions (a)--(d) hold for the collections $\mathcal{U}_v$, $\mathcal{U}_h$ that we just constructed. (Note here that the trace property of $\tau$ is needed only for verifying the assertion (d).) \end{proof} \begin{remark}\label{Rem3-5.1} {\rm We should remark that $M_v$ is constructed so that $A$ is a Cartan subalgebra in $M_v$. Let $\mathcal{G}_v$ be the smallest $E_A^M$-groupoid that contains $\mathcal{U}_v$, so that $M_v = \mathcal{G}_v''$ clearly holds. By the construction of $\mathcal{U}_v$ one easily sees that any non-zero word in $\mathcal{U}_v\sqcup\mathcal{U}_v^*$ must be in either $\mathcal{U}_v^{[k,\ell]}$ or its adjoint set, so that $p\mathcal{G}_v p = A^{pi}p$ by Lemma \ref{Lem2-2}. (This pattern of argument is used to confirm that $\mathcal{U}_v$ is a treeing.) Hence, we get $\mathcal{Z}(M_v)p = pM_v p = Ap$, from which, together with $c_{M_v}(p)=1$ and Lemma \ref{Lem3-2}, it immediately follows that $A'\cap M_v= A$.} \end{remark} \begin{proposition}\label{Prop3-6}{\rm (cf.~\cite[Proposition II.6]{Gaboriau:InventMath00})} Let $p \in A^p$ be such that the central support projection $c_M(p)=1$. Then, the following hold true{\rm :} \begin{itemize} \item $\mathcal{G}$ is treeable if and only if so is $p\mathcal{G}p${\rm ;} \item $C_{\tau}(\mathcal{G}) - 1 = C_{\tau|_{pMp}}(p\mathcal{G}p)-\tau(p)$.
\end{itemize} \end{proposition} \begin{proof} The first assertion is nothing but Lemma \ref{Lem3-5} (c). The second is shown as follows. By Lemma \ref{Lem3-5} (d), we have $C_{\tau}(\mathcal{U}) \geq C_{\tau}\left(p\mathcal{G}p\right) + 1-\tau(p)$ for every graphing $\mathcal{U}$ of $\mathcal{G}$, so that $C_{\tau}(\mathcal{G}) -1 \geq C_{\tau}\left(p\mathcal{G}p\right)-\tau(p)$. Let $\varepsilon>0$ be arbitrarily small. Choose a graphing $\mathcal{V}_{\varepsilon}$ so that $C_{\tau}\left(\mathcal{V}_{\varepsilon}\right)\leq C_{\tau}\left(p\mathcal{G}p\right)+\varepsilon$. With $\mathcal{U}_v$ as in Lemma \ref{Lem3-5}, the new collection $\mathcal{U}_{\varepsilon} :=\mathcal{U}_v\sqcup\mathcal{V}_{\varepsilon}$ becomes a graphing of $\mathcal{G}$ by Lemma \ref{Lem3-5} (b), and hence $C_{\tau}(\mathcal{G})\leq C_{\tau}\left(\mathcal{U}_{\varepsilon}\right)=1-\tau(p)+C_{\tau}\left(\mathcal{V}_{\varepsilon}\right)$ by Lemma \ref{Lem3-5} (d). Hence, $C_{\tau}(\mathcal{G})-1\leq C_{\tau}\left(\mathcal{V}_{\varepsilon}\right)-\tau(p)\leq C_{\tau}\left(p\mathcal{G}p\right)+\varepsilon-\tau(p)\searrow C_{\tau}\left(p\mathcal{G}p\right)-\tau(p)$ as $\varepsilon\searrow0$. \end{proof} \begin{corollary}\label{Cor3-7} {\rm (\cite[Proposition 1, Theorem 2]{Levitt:ErgodicTheoryDynamSystems95}, \cite[Proposition III.3, Lemme III.5]{Gaboriau:InventMath00})} {\rm (a)} Assume that $M$ is of type II$_1$ and $A$ is a Cartan subalgebra in $M$. Then, $C_{\tau}\left(\mathcal{G}(M\supseteq A)\right)\geq1$, and the equality holds if $M$ is further assumed to be hyperfinite. {\rm (b)} Assume that $M$ is hyperfinite and $A$ is a Cartan subalgebra in $M$.
Then, every treeing $\mathcal{U}$ of $\mathcal{G}(M\supseteq A)$ {\rm (}one always exists{\rm )} satisfies \begin{equation*} C_{\tau}(\mathcal{U}) = 1-\tau(e) = C_{\tau}\left(\mathcal{G}(M\supseteq A)\right), \end{equation*} where $e \in A^p$ is an arbitrary maximal abelian projection of $M$ {\rm (}hence the central support projection $c_M(e)$ must coincide with the central projection of the type I direct summand{\rm )}. {\rm (c)} Let $N$ be a hyperfinite intermediate von Neumann subalgebra between $M\supseteq A$, and assume that $A$ is a Cartan subalgebra in $N$. Let $\mathcal{U}$ be a treeing of $\mathcal{G}(N\supseteq A)$ and suppose that $\mathcal{G}$ contains $\mathcal{G}(N\supseteq A)$. Then, for each $\varepsilon>0$, there is a graphing $\mathcal{U}_{\varepsilon}$ of $\mathcal{G}$ enlarging $\mathcal{U}$ such that $C_{\tau}\left(\mathcal{U}_{\varepsilon}\right) \leq C_{\tau}(\mathcal{G}) + \varepsilon$. \end{corollary} \begin{proof} (a) It is known that for each $n \in \mathbb{N}$ there is an $n\times n$ matrix unit system $e_{ij} \in \mathcal{G}(M\supseteq A)$ ($i,j=1,\dots,n$) such that all $e_{ii}$'s are chosen from $A^p$. Then, Proposition \ref{Prop3-6} implies that $C_{\tau}(\mathcal{G})=C_{\tau|_{e_{11}Me_{11}}}\left(e_{11}\mathcal{G}e_{11}\right)+1-\tau\left(e_{11}\right)\geq1-\tau\left(e_{11}\right)=1-1/n\nearrow1$ as $n\rightarrow\infty$. The equality in the hyperfinite case clearly follows from the celebrated theorem of Connes, Feldman and Weiss \cite{ConnesFeldmanWeiss:ErgodicTh81} (see also \cite{Popa:MathScand85} for its operator algebraic proof). (b) Choose an increasing sequence of type I von Neumann subalgebras $A \subseteq M_1 \subseteq \cdots \subseteq M_k \nearrow M$. By Dye's lemma (or Lemma \ref{Lem2-2.1}), each $u \in \mathcal{U}$ has a unique projection $e_k(u) \in A^p$ such that $e_k(u)\leq l(u)$ and $E_{M_k}^M(u)=e_k(u)u$, where $E_{M_k}^M : M \rightarrow M_k$ is the $\tau$-conditional expectation.
Set $\mathcal{U}_k := \left\{ e_k(u)u : u \in \mathcal{U}\right\}$ and $N_k := A\vee\mathcal{U}_k''$, which is of type I. Clearly, each $\mathcal{U}_k$ is a treeing of $\mathcal{G}(N_k\supseteq A)$, and hence Proposition \ref{Prop3-3} (b) says that $C_{\tau}\left(\mathcal{U}_k\right)=C_{\tau}\left(\mathcal{G}\left(N_k\supseteq A\right)\right)=1-\tau\left(e_k\right)$ for every maximal abelian projection $e_k \in A^p$ of $N_k$. The non-commutative martingale convergence theorem (e.g.~\cite[Lemma 2]{Connes:JFA75}) shows that $e_k(u) u = E_{M_k}^M(u) \rightarrow u$ in the $\sigma$-strong* topology, as $k\rightarrow\infty$, for every $u \in \mathcal{U}$. Hence, we get $C_{\tau}\left(\mathcal{U}\right)=\lim_{k\rightarrow\infty}C_{\tau}\left(\mathcal{U}_k\right)=\lim_{k\rightarrow\infty}C_{\tau}\left(\mathcal{G}\left(N_k\supseteq A\right)\right)$ and $N_k = A\vee\mathcal{U}_k'' \nearrow A\vee\mathcal{U}''=M$. Let $e \in A^p$ be a maximal abelian projection of $M$. Then, Proposition \ref{Prop3-3} (b) and the above (a) show that $C_{\tau}\left(\mathcal{G}(M\supseteq A)\right)=1-\tau(e)$. Since $e$ must be an abelian projection of each $N_k$, one can choose $e_1,e_2,\dots \in A^p$ in such a way that each $e_k$ is a maximal abelian projection of $N_k$ dominating $e$. It is standard to see that $e=\bigwedge_{k=1}^{\infty} e_k$, so that $\tau(e_k) \geq \tau\big(\bigwedge_{k'=1}^k e_{k'}\big) \searrow \tau(e)$ as $k\rightarrow\infty$. Therefore, $C_{\tau}(\mathcal{U})=\lim_{k\rightarrow\infty}C_{\tau}\big(\mathcal{G}\left(N_k\supseteq A\right)\big)=\lim_{k\rightarrow\infty}\left(1-\tau\left(e_k\right)\right)\leq\lim_{k\rightarrow\infty}\left(1-\tau\left(\bigwedge_{k'=1}^k e_{k'}\right)\right) = 1-\tau(e)$, and then it follows immediately that $C_{\tau}(\mathcal{U}) = 1-\tau(e) = C_{\tau}\left(\mathcal{G}(M\supseteq A)\right)$.
(c) Let $N = N_{\mathrm{I}}\oplus N_{\mathrm{II}_1} \supseteq A = A_{\mathrm{I}}\oplus A_{\mathrm{II}_1}$ be the decomposition into the finite type I and the type II$_1$ parts. Looking at the decomposition, one can find a projection $p_{\varepsilon}=p_{\mathrm{I}}\oplus p_{\mathrm{II}_1}^{\varepsilon} \in A^p$ in such a way that $p_{\mathrm{I}}$ is an abelian projection of $N_{\mathrm{I}}$ with $c_{N_{\mathrm{I}}}(p_{\mathrm{I}}) = 1_{N_{\mathrm{I}}}$ and $\tau\left(p_{\mathrm{II}_1}^{\varepsilon}\right)<\varepsilon/2$ with $c_{N_{\mathrm{II}_1}}\left(p_{\mathrm{II}_1}^{\varepsilon}\right)=1_{N_{\mathrm{II}_1}}$. Choose a graphing $\mathcal{V}_{\varepsilon}$ of $p_{\varepsilon}\mathcal{G}p_{\varepsilon}$ in such a way that $C_{\tau}\left(\mathcal{V}_{\varepsilon}\right)\leq C_{\tau}\left(p_{\varepsilon}\mathcal{G}p_{\varepsilon}\right)+\varepsilon/2$, and then set $\mathcal{U}_{\varepsilon} := \mathcal{U}\sqcup\mathcal{V}_{\varepsilon}$. Since $c_{N}\left(p_{\varepsilon}\right)=1$, $\mathcal{U}_{\varepsilon}$ is a graphing of $\mathcal{G}$ thanks to Lemma \ref{Lem3-2}. Then, Proposition \ref{Prop3-6} implies that $C_{\tau}\left(\mathcal{V}_{\varepsilon}\right) \leq C_{\tau}\left(p_{\varepsilon}\mathcal{G}p_{\varepsilon}\right)+\varepsilon/2 = C_{\tau}(\mathcal{G})-1+\tau\left(p_{\varepsilon}\right)+\varepsilon/2 = C_{\tau}(\mathcal{G})-\left(1-\tau\left(p_{\mathrm{I}}\right)\right)+\tau\left(p_{\mathrm{II}_1}^{\varepsilon}\right)+\varepsilon/2$, and thus by Proposition \ref{Prop3-3} (b) we get $C_{\tau}\left(\mathcal{V}_{\varepsilon}\right) \leq C_{\tau}(\mathcal{G}) - C_{\tau}(\mathcal{U}) + \varepsilon$, which implies the desired assertion. \end{proof} \begin{remark}\label{Rem3-8} {\rm (1) The proof of (b) in the above also shows ``hyperfinite monotonicity,'' which asserts the following. Assume that $M$ is hyperfinite and $A$ is a Cartan subalgebra in $M$.
For any intermediate von Neumann subalgebra $N$ between $M \supseteq A$ (in which $A$ automatically becomes a Cartan subalgebra thanks to Dye's lemma, see the discussion above Lemma \ref{Lem2-3}), we have $C_{\tau}\left(\mathcal{G}(N\supseteq A)\right) \leq C_{\tau}\left(\mathcal{G}(M\supseteq A)\right)$. Furthermore, we have $\lim_{k\rightarrow\infty}C_{\tau}\big(\mathcal{G}(M_k\supseteq A)\big)=C_{\tau}\big(\mathcal{G}(M\supseteq A)\big)$ for any increasing sequence $A \subseteq M_1 \subseteq M_2 \subseteq \cdots \subseteq M_k \nearrow M$ of von Neumann subalgebras. Note that the counterpart of this fact for free entropy dimension was provided by K.~Jung \cite{Jung:TAMS03}. (2) Related to (c), one can show the following (cf.~\cite[Lemme V.3]{Gaboriau:InventMath00}): Let $u \in \mathcal{G}$, and $\mathcal{G}_0 \subseteq \mathcal{G}$ be an $E_A^M$-groupoid, and set $N := \big( r(u)\mathcal{G}_0r(u)\big)''\vee\big(u^*\mathcal{G}_0u\big)''$. If $e \in \big(Ar(u)\big)^p$ has $c_N(e) = r(u)$, then $\mathcal{G}_0\vee\{u\} = \mathcal{G}_0\vee\{ue\}$ so that $C_{\tau}(\mathcal{G}_0\vee\{u\}) \leq C_{\tau}(\mathcal{G}_0) + \tau(e)$. Here, ``$\vee$'' denotes generation as $E_A^M$-groupoids. In fact, by Lemma \ref{Lem3-2} one finds $v_k \in r(u)\mathcal{G}_0 r(u) \vee u^*\mathcal{G}_0 u$ so that $\sum_k v_k e v_k^* = r(u)$. Since $v_k \in u^*\mathcal{G}_0 u$, one has $v_k = u^* w_k u$ for some $w_k \in l(u)\mathcal{G}_0 l(u)$ so that $\sum_k w_k(ue) v_k^* = u$. This fact can be used in many actual computations, and in fact it tells us that the cost of an $E_A^M$-groupoid can be estimated by that of a ``normal $E_A^M$-subgroupoid'' under a certain condition. Its free entropy dimension counterpart seems to be an interesting question.
} \end{remark} \begin{proposition}\label{Prop3-9} Let $\mathcal{G}_1\supseteq\mathcal{G}_3\subseteq\mathcal{G}_2$ be $E_A^M$-groupoids, and let $\mathcal{G} = \mathcal{G}_1\vee\mathcal{G}_2$ be the smallest $E_A^M$-groupoid that contains $\mathcal{G}_1$, $\mathcal{G}_2$. If $\mathcal{G}_3''$ is hyperfinite and if $A$ is a MASA in $\mathcal{G}_3''$ {\rm (}and hence $\mathcal{G}_3=\mathcal{G}\left(\mathcal{G}_3''\supseteq A\right)$ is automatic{\rm )}, then \begin{equation*} C_{\tau}(\mathcal{G}) \leq C_{\tau}\left(\mathcal{G}_1\right)+C_{\tau}\left(\mathcal{G}_2\right)-C_{\tau}\left(\mathcal{G}_3\right). \end{equation*} \end{proposition} \begin{proof} Choose a treeing $\mathcal{U}$ of $\mathcal{G}_3$ so that $C_{\tau}\left(\mathcal{G}_3\right)=C_{\tau}(\mathcal{U})$ by Corollary \ref{Cor3-7} (b). Let $\varepsilon>0$ be arbitrarily small. By Corollary \ref{Cor3-7} (c), one can choose graphings $\mathcal{U}_{\varepsilon}^{(i)}$ of $\mathcal{G}_i$ enlarging $\mathcal{U}$, $i=1,2$, so that $C_{\tau}\big(\mathcal{U}_{\varepsilon}^{(i)}\big)\leq C_{\tau}\left(\mathcal{G}_i\right)+\varepsilon/2$. Thus, $C_{\tau}(\mathcal{G})\leq C_{\tau}\left(\big(\mathcal{U}_{\varepsilon}^{(1)}\setminus\mathcal{U}\big)\sqcup\big(\mathcal{U}_{\varepsilon}^{(2)}\setminus\mathcal{U}\big)\sqcup\mathcal{U}\right) = C_{\tau}\big(\mathcal{U}_{\varepsilon}^{(1)}\big)+C_{\tau}\big(\mathcal{U}_{\varepsilon}^{(2)}\big)-C_{\tau}(\mathcal{U})\leq C_{\tau}\left(\mathcal{G}_1\right)+C_{\tau}\left(\mathcal{G}_2\right)-C_{\tau}\left(\mathcal{G}_3\right)+\varepsilon\searrow C_{\tau}\left(\mathcal{G}_1\right)+C_{\tau}\left(\mathcal{G}_2\right)-C_{\tau}\left(\mathcal{G}_3\right)$ as $\varepsilon\searrow0$. \end{proof} To prove Theorem \ref{Thm3-1}, it suffices to show the inequality $C_{\tau}(\mathcal{G})\geq C_{\tau}\left(\mathcal{G}_1\right)+C_{\tau}\left(\mathcal{G}_2\right)-C_{\tau}\left(\mathcal{G}_3\right)$ thanks to Proposition \ref{Prop3-9}.
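As a consistency check in the simplest situation of Remark \ref{Rem3-4} (4) (a verification added here, not contained in the original), take $M = M_3(\mathbf{C})$ with the diagonal MASA $A$ and the decomposition $M_3(\mathbf{C}) \cong \left(M_2(\mathbf{C})\oplus\mathbf{C}\right)\bigstar_A\left(\mathbf{C}\oplus M_2(\mathbf{C})\right)$ displayed there, so that $N_3 = A$ and $\mathcal{G}_3 = A^{pi}$ with $C_{\tau}(\mathcal{G}_3) = 0$. Proposition \ref{Prop3-3} (b) gives $C_{\tau}(\mathcal{G}_i) = 1-2/3 = 1/3$ for $i=1,2$, since a maximal abelian projection of each factor (e.g.~$e_{11}+e_{33}$ for $M_2(\mathbf{C})\oplus\mathbf{C}$) has trace $2/3$. Hence the additivity formula predicts \begin{equation*} C_{\tau}(\mathcal{G}) = \frac{1}{3}+\frac{1}{3}-0 = \frac{2}{3} = 1-\frac{1}{3}, \end{equation*} which agrees with Proposition \ref{Prop3-3} (b) applied to $M_3(\mathbf{C})$ directly, e.g.~via the treeing $e_{12}, e_{23}$.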
To do so, we begin by providing a simple fact on general amalgamated free products of von Neumann algebras. \begin{lemma}\label{Lem3-10} Let \begin{equation*} \big(N,E_{N_3}^N\big)= \big(N_1,E_{N_3}^{N_1}\big)\underset{N_3}{\bigstar}\big(N_2,E_{N_3}^{N_2}\big) \end{equation*} be an amalgamated free product of {\rm (}$\sigma$-finite{\rm )} von Neumann algebras, and $L_1$ and $L_2$ be von Neumann subalgebras of $N_1$ and $N_2$, respectively. Suppose that $\begin{matrix} N_i & \supset & L_i \\ \cup & & \cup \\ N_3 & \supset & N_3\cap L_i \end{matrix}$ admits faithful normal conditional expectations \begin{equation*} E_{L_i}^{N_i} : N_i \rightarrow L_i, \quad E_{N_3 \cap L_i}^{N_3} : N_3 \rightarrow N_3 \cap L_i, \quad E_{N_3 \cap L_i}^{L_i} : L_i \rightarrow N_3 \cap L_i, \end{equation*} and forms a commuting square {\rm (}see e.g.~{\rm \cite[p.~513]{EvansKawahigashi:Book})}, for both $i=1,2$. If $L_1\cap N_3 = L_2\cap N_3$ and further $N = L_1\vee L_2$ as von Neumann algebras, then $L_1 = N_1$ and $L_2 = N_2$ must hold true. \end{lemma} \begin{proof} Note that the amalgamated free product \begin{equation*} \big(L, E_{L_3}^L\big) = \big(L_1, E_{L_3}^{L_1}\big)\underset{L_3}{\bigstar}\big(L_2, E_{L_3}^{L_2}\big), \quad L_3 := L_1\cap N_3 = L_2\cap N_3, \end{equation*} can be naturally embedded into $\big(N, E_{N_3}^N\big)$ thanks to the commuting square assumption. Then, it is plain to see that $\begin{matrix} N & \supset & L \\ \cup & & \cup \\ N_i & \supset & L_i \end{matrix}$ forms a commuting square too, i.e., $E_L^N\big|_{N_i} = E_{L_i}^{N_i}$, $i=1,2$, from which the desired assertion is immediate. \end{proof} The next technical lemma plays a key r\^{o}le in the proof of Theorem \ref{Thm3-1}. \begin{lemma}\label{Lem3-11} {\rm (\cite[IV.37]{Gaboriau:InventMath00})} Assume the same setup as in Theorem \ref{Thm3-1}.
Let $\mathcal{V} = \mathcal{V}_1\sqcup\mathcal{V}_2$ be a graphing of $\mathcal{G}$ with collections $\mathcal{V}_1$, $\mathcal{V}_2$ of elements in $\mathcal{G}_1$, $\mathcal{G}_2$, respectively. Then, one can construct two collections $\mathcal{V}'_1$, $\mathcal{V}'_2$ of elements in $\mathcal{G}_3 = \mathcal{G}\left(N_3\supseteq A\right)$ in such a way that \begin{itemize} \item[{\rm (i)}] $\mathcal{V}'_1\sqcup\mathcal{V}'_2$ is a treeing{\rm ;} \item[{\rm (ii)}] $\mathcal{V}_i\sqcup\mathcal{V}'_i$ is a graphing of $\mathcal{G}_i$ for $i=1,2$, respectively. \end{itemize} \end{lemma} Before giving the proof, we illustrate the idea in a typical example. Assume that $M = N_1\bigstar_{N_3}N_2 \supseteq A$ is of the form: $N_1 := N_1^{(0)}\otimes M_2(\mathbf{C})\otimes M_2(\mathbf{C})$, $N_2 := N_2^{(0)}\otimes M_2(\mathbf{C})\otimes M_2(\mathbf{C})$, $N_3 := N_3^{(0)}\otimes M_2(\mathbf{C})\otimes M_2(\mathbf{C})$ and their common subalgebra $A := A_0\otimes\mathbf{C}^2\otimes\mathbf{C}^2$, where $A_0$ is a common Cartan subalgebra of $N_i^{(0)}$, $i=1,2,3$. Denote by $e_{ij}^{(1)}\otimes e_{k\ell}^{(2)}$, $i,j,k,\ell=1,2$, the standard matrix units in $M_2(\mathbf{C})\otimes M_2(\mathbf{C})$. Let $\mathcal{V}_i^{(0)}$ be a graphing of $\mathcal{G}_i := \mathcal{G}\big(N_i^{(0)}\supseteq A_0\big)$, $i=1,2$, and set \allowdisplaybreaks{ \begin{align*} \mathcal{V}_1 &:= \big\{ v\otimes1\otimes1 : v \in \mathcal{V}_1^{(0)} \big\}\sqcup\big\{1\otimes e_{12}^{(1)}\otimes1 \big\}, \\ \mathcal{V}_2 &:= \big\{ v\otimes1\otimes1 : v \in \mathcal{V}_2^{(0)} \big\}\sqcup\big\{1\otimes1\otimes e_{12}^{(2)} \big\}. \end{align*} }Clearly, $\mathcal{V} := \mathcal{V}_1\sqcup\mathcal{V}_2$ becomes a graphing of $\mathcal{G} := \mathcal{G}_1\vee\mathcal{G}_2$.
In this example, the collections $\mathcal{V}'_1$, $\mathcal{V}'_2$ in the lemma can be chosen for example to be $\big\{1\otimes e_{11}^{(1)}\otimes e_{12}^{(2)}\big\}$, $\big\{1\otimes e_{12}^{(1)}\otimes1\big\}$, respectively. The proof below goes along the lines of this procedure with the help of Lemma \ref{Lem3-10}. \begin{proof} By Lemma \ref{Lem3-10} with the aid of Lemma \ref{Lem2-2.1} (needed to confirm the required commuting square condition, see the explanation after the lemma), the original (ii) is reduced to showing (ii') $N_3\cap L_1 = N_3\cap L_2$ with $L_i:=A\vee\left(\mathcal{V}_i\sqcup\mathcal{V}'_i\right)''$, $i=1,2$. Choose an increasing sequence of type I von Neumann subalgebras $N_3^{(0)} := A \subseteq N_3^{(1)} \subseteq N_3^{(2)} \subseteq \cdots$ with $N_3^{(k)} \nearrow N_3$. Let us construct inductively two sequences of collections $\mathcal{V}_1^{(k)}$, $\mathcal{V}_2^{(k)}$ of elements in $\mathcal{G}\big(N_3^{(k)}\supseteq A\big)$ in such a way that \begin{itemize} \item[(a)] $\mathcal{V}_i^{(k)} \subseteq \mathcal{V}_i^{(k+1)}$ for every $k$ and each $i=1,2$; \item[(b)] $\mathcal{V}_1^{(k)}\sqcup\mathcal{V}_2^{(k)}$ is a treeing for every $k$; \item[(c)] letting $L_i^{(k)} := A\vee\left(\mathcal{V}_i\sqcup\mathcal{V}_i^{(k)}\right)''$, $i=1,2$, we have \begin{itemize} \item[(1)] $\mathcal{V}_1^{(k)}\sqcup\mathcal{V}_2^{(k)} \subseteq L_1^{(k)}\cap L_2^{(k)}$ for every $k$; \item[(2)] $N_3^{(k)}\cap L_1^{(k)} \subseteq L_2^{(k)}$ for every even $k$; \item[(3)] $N_3^{(k)}\cap L_2^{(k)} \subseteq L_1^{(k)}$ for every odd $k$. \end{itemize} \end{itemize} ((c-2) is needed only for the inductive procedure.) If such collections were constructed, then $\mathcal{V}'_i := \bigcup_k \mathcal{V}_i^{(k)}$, $i=1,2$, would be the desired ones.
In fact, any word in $\big(\mathcal{V}'_1\sqcup\mathcal{V}'_2\big)\sqcup\big(\mathcal{V}'_1\sqcup\mathcal{V}'_2\big)^*$ of reduced form in the formal sense is in turn one in $\big(\mathcal{V}_1^{(k)}\sqcup\mathcal{V}_2^{(k)}\big)\sqcup\big(\mathcal{V}_1^{(k)}\sqcup\mathcal{V}_2^{(k)}\big)^*$ for some finite $k$ thanks to (a), and thus (i) follows from (b). For each pair $k_1, k_2$, the above (c-2), (c-3) imply, with $k_1, k_2 \leq 2k$, that \begin{gather*} N_3^{(k_1)}\cap L_1^{(k_2)} \subseteq N_3^{(2k)}\cap L_1^{(2k)} \subseteq N_3^{(2k)}\cap L_2^{(2k)} \subseteq N_3\cap L_2; \\ N_3^{(k_1)}\cap L_2^{(k_2)} \subseteq N_3^{(2k+1)}\cap L_2^{(2k+1)} \subseteq N_3^{(2k+1)}\cap L_1^{(2k+1)} \subseteq N_3\cap L_1. \end{gather*} Hence, \begin{equation*} \overline{\bigcup_{k_1,k_2} N_3^{(k_1)}\cap L_1^{(k_2)}}^{\text{$\sigma$-s}} \subseteq N_3\cap L_2, \quad \overline{\bigcup_{k_1,k_2} N_3^{(k_1)}\cap L_2^{(k_2)}}^{\text{$\sigma$-s}} \subseteq N_3\cap L_1. \end{equation*} Note here that $N_3\cap L_i = \overline{\bigcup_{k_1,k_2} N_3^{(k_1)}\cap L_i^{(k_2)}}^{\text{$\sigma$-s}}$ for both $i=1,2$, since all $\begin{matrix} N_i & \supset & L_i \\ \cup & & \cup \\ N_3 & \supset & N_3\cap L_i, \end{matrix}\ \begin{matrix} N_i & \supset & L_i \\ \cup & & \cup \\ N_3^{(k)} & \supset & N_3^{(k)}\cap L_i, \end{matrix}\ \begin{matrix} N_i & \supset & L_i^{(k_2)} \\ \cup & & \cup \\ N_3^{(k_1)} & \supset & N_3^{(k_1)}\cap L_i^{(k_2)} \end{matrix}$ form commuting squares for every $k,k_1,k_2$ and each $i=1,2$, thanks to Dye's lemma (or Lemma \ref{Lem2-2.1}); note here that $A$ is assumed to be a Cartan subalgebra in $N_3$. Then, (ii') follows immediately. Set $\mathcal{V}_1^{(0)} = \mathcal{V}_2^{(0)} := \emptyset$. Assume that we have already constructed $\mathcal{V}_1^{(j)}$, $\mathcal{V}_2^{(j)}$, $j=1,2,\dots,k$, and that the next $k+1$ is even (the odd case is handled in the same way).
Consider \begin{equation*} K_1 := N_3^{(k+1)}\cap L_1^{(k)} \supseteq K_0 := N_3^{(k+1)}\cap L_1^{(k)} \cap L_2^{(k)} \left(\supseteq A\right), \end{equation*} which are clearly of finite type I. Then, one can choose an abelian projection $p \in A^p$ of $K_0$ with the central support projection $c_{K_0}(p)=1$. Also, one can find a treeing $\mathcal{U}_p$ of $\mathcal{G}\left(pK_1 p \supseteq Ap\right)$, see the proof of Proposition \ref{Prop3-3} (b). Set $\mathcal{V}_1^{(k+1)} := \mathcal{V}_1^{(k)}$, $\mathcal{V}_2^{(k+1)} := \mathcal{V}_2^{(k)}\sqcup\mathcal{U}_p$, which are the desired ones at the $(k+1)$st step. In fact, (a) is trivial, and (b) follows from the (rather trivial) fact that $K_0$ and $pK_1 p$ are $*$-free with respect to $E_A^M$. Note that $N_3^{(k+1)}\cap L_1^{(k+1)} = N_3^{(k+1)}\cap L_1^{(k)} = K_1 = K_0\vee pK_1 p \subseteq L_2^{(k)}\vee\mathcal{U}_p'' = L_2^{(k+1)}$ (by Lemma \ref{Lem3-2} it follows from $c_{K_0}(p)=1$ that $K_1 = K_0\vee pK_1 p$), which is nothing but (c-2). Finally, (c-1) follows from the induction hypothesis together with the fact that $\mathcal{U}_p \subseteq pK_1 p \subseteq L_1^{(k)}$. \end{proof} One of the important ideas in Gaboriau's argument is the use of ``adapted systems''. It roughly translates to an amplification/reduction procedure in the operator algebra framework. \begin{proof}[Proof of Theorem \ref{Thm3-1}] (Step I: Approximation) By Proposition \ref{Prop3-9}, it suffices to show that $C_{\tau}(\mathcal{G}) \geq C_{\tau}\left(\mathcal{G}_1\right)+C_{\tau}\left(\mathcal{G}_2\right)-C_{\tau}\left(\mathcal{G}_3\right)$ modulo an ``arbitrarily small error''. Let $\varepsilon>0$ be arbitrarily small.
There is a graphing $\mathcal{V}_{\varepsilon}$ of $\mathcal{G}$ with $C_{\tau}\big(\mathcal{V}_{\varepsilon}\big) \leq C_{\tau}(\mathcal{G}) + \varepsilon/3$, and we choose a graphing $\mathcal{U}_i$ of $\mathcal{G}_i$ with $C_{\tau}\left(\mathcal{U}_i\right)<+\infty$ (thanks to $C_{\tau}\left(\mathcal{G}_i\right) < +\infty$) for each $i=1,2$, and set $\mathcal{U} := \mathcal{U}_1\sqcup\mathcal{U}_2$. Since $C_{\tau}\left(\mathcal{U}\right)=\sum_{u\in\mathcal{U}}\tau\left(l(u)\right)<+\infty$, there is a finite sub-collection $\mathcal{U}_0$ of $\mathcal{U}$ with $\sum_{u\in\mathcal{U}\setminus\mathcal{U}_0} \tau\left(l\left(u\right)\right)\leq\varepsilon/3$. Since both $\mathcal{V}_{\varepsilon}$ and $\mathcal{U}$ are graphings of $\mathcal{G}$, we may and do assume, by cutting each $v \in \mathcal{V}_{\varepsilon}$ by suitable projections in $A^p$ based on Lemma \ref{Lem2-2}, that each $v \in \mathcal{V}_{\varepsilon}$ has a word $w(v)$ in $\mathcal{U}\sqcup\mathcal{U}^*$ of reduced form in the formal sense and an $a(v) \in A^{pi}$ with $v=a(v)w(v)$. Denote by $w_0 := 1, w_1, w_2,\dots$ all the words in $\mathcal{V}_{\varepsilon}\sqcup\mathcal{V}_{\varepsilon}^*$ of reduced form, and by Lemma \ref{Lem2-2} again, each $u\in\mathcal{U}$ is described as $u = \sum_k p_k(u) a_k(u) w_k$ in $\sigma$-strong topology, where $l(u) = \sum_k p_k(u)$ in $A^p$, the $a_k(u)$'s are in $A^{pi}$, and $p_k(u) u = p_k(u) a_k(u) w_k$ for every $k$. Then, we can choose a $k_0\in\mathbb{N}$ (depending only on the finite collection $\mathcal{U}_0$) in such a way that $\sum_{k\geq k_0+1} \tau(p_k(u))\leq\varepsilon/\big(3\,\sharp(\mathcal{U}_0)\big)$ for all $u\in\mathcal{U}_0$. Set \allowdisplaybreaks{ \begin{align*} \mathcal{X} &:= \left\{ v \in \mathcal{V}_{\varepsilon} : \text{$v$ appears in $w_1,\dots,w_{k_0}$}\right\}; \\ \mathcal{Y} &:= \left\{ p(u) u : u\in\mathcal{U}_0 \right\}\sqcup\left(\mathcal{U}\setminus\mathcal{U}_0\right) \end{align*} }with $p(u) := \sum_{k\geq k_0+1} p_k(u)$ for $u \in \mathcal{U}_0$.
Since $u = \sum_{k=0}^{k_0} p_k(u)a_k(u) w_k + p(u)u$ for all $u\in\mathcal{U}_0$, the collection $\mathcal{Z} := \mathcal{X}\sqcup\mathcal{Y}$ becomes a graphing of $\mathcal{G}$. We have \begin{equation}\label{eq1-Thm3-1} C_{\tau}(\mathcal{Z}) = C_{\tau}\left(\mathcal{X}\right)+C_{\tau}\left(\mathcal{Y}\right) \leq C_{\tau}\left(\mathcal{V}_{\varepsilon}\right)+C_{\tau}\left(\mathcal{Y}\right) \leq C_{\tau}(\mathcal{G})+\varepsilon. \end{equation} Clearly, $\mathcal{Y}$ is decomposed into two collections $\mathcal{Y}_1$, $\mathcal{Y}_2$ of elements in $\mathcal{G}_1$, $\mathcal{G}_2$, respectively, as $\mathcal{Y} = \mathcal{Y}_1\sqcup\mathcal{Y}_2$, while $\mathcal{X}$ is not in general. Thus, we replace $\mathcal{X}$ by a new ``decomposable'' graphing in a sufficiently large amplification of $M\supseteq A$ to use Lemma \ref{Lem3-11}. (Step II: Adapted system/Amplification) Notice that each $v \in \mathcal{X} \big(\subseteq \mathcal{V}_{\varepsilon}\big)$ is described as \begin{equation*} v=a(v)w(v)=a(v)u_{n(v)}(v)^{\delta_{n(v)}(v)}\cdots u_1(v)^{\delta_1(v)}, \end{equation*} where $n(v) \in \mathbb{N}$ and $u_i(v) \in \mathcal{U}$, $\delta_i(v) \in \{1,*\}$ ($i=1,\dots,n(v)$). Cutting each $u_i(v)$ by a suitable projection in $A^p$ and replacing $u_i(v)$ by $u_i(v)^*$ if $\delta_i(v)=*$, etc., we may and do assume the following: $r\left(u_{i+1}(v)\right)=l\left(u_i(v)\right)$ ($i=1,\dots,n(v)-1$); $l(v)=l\left(a(v)\right) \left(=r\left(a(v)\right)\right)=l\left(u_{n(v)}(v)\right)$ and $r(v)=r\left(u_1(v)\right)$; and each $u_i(v)$ is of the form either $eu$ or $eu^*$ with $e \in A^p$, $u \in\mathcal{U}$. In what follows, we ``reveal'' all the words $u_{n(v)}(v)\cdots u_1(v)$ as follows. Set $n:=1+\sum_{v\in\mathcal{X}}\left(n(v)-1\right)<+\infty$, and choose a partition $\{2,\dots,n\} = \bigsqcup_{v\in\mathcal{X}}I_v$.
Denote by $e_{ij}$ the standard matrix units in $M_n(\mathbf{C})$, by $\mathrm{Tr}_n$ the canonical non-normalized trace on $M_n(\mathbf{C})$, and by $E_{\mathbf{C}^n}^{M_n(\mathbf{C})}$ the $\mathrm{Tr}_n$-conditional expectation from $M_n(\mathbf{C})$ onto the diagonal matrices $\mathbf{C}^n \subseteq M_n(\mathbf{C})$. Let $M^{n} := M\otimes M_n(\mathbf{C}) \supseteq A^{n} := A\otimes\mathbf{C}^n$ be the $n$-amplifications and write $\tau^{n} := \tau\otimes\mathrm{Tr}_n \in M^{n}_*$. For each $v \in \mathcal{X}$, we define the $n(v)$ elements $\tilde{u}_1(v),\dots,\tilde{u}_{n(v)}(v) \in \mathcal{G}\left(M^{n} \supseteq A^{n}\right)$ by \allowdisplaybreaks{ \begin{align*} \tilde{u}_1(v) &:= u_1(v)\otimes e_{i_2 1}, \\ \tilde{u}_2(v) &:= u_2(v)\otimes e_{i_3 i_2}, \\ &\phantom{aaaa}\vdots \\ \tilde{u}_{n(v)}(v) &:= u_{n(v)}(v)\otimes e_{1 i_{n(v)}} \end{align*} }with $I_v = \left\{i_2,\dots,i_{n(v)}\right\}$. Set $\widetilde{\mathcal{Z}}:=\widetilde{\mathcal{X}}\sqcup\left\{ y\otimes e_{11} : y \in \mathcal{Y}\right\}$ as a collection of elements of $\mathcal{G}\left(M^{n}\supseteq A^{n}\right)$ with $\widetilde{\mathcal{X}} := \left\{\tilde{u}_i(v) : i=1,\dots,n(v), v \in \mathcal{X}\right\}$, and \begin{equation*} P := 1\otimes e_{11} + \sum_{v\in\mathcal{X}}\sum_{i=1}^{n(v)-1} l\left(\tilde{u}_i(v)\right) = 1\otimes e_{11} + \sum_{v\in\mathcal{X}}\sum_{i=2}^{n(v)} r\left(\tilde{u}_i(v)\right). \end{equation*} By straightforward calculation we have \begin{equation}\label{eq2-Thm3-1} C_{\tau}\left(\mathcal{Z}\right)-1=C_{\tau^{n}}\big(\widetilde{\mathcal{Z}}\big)-\tau^{n}(P). \end{equation} Indeed, $C_{\tau^{n}}\big(\widetilde{\mathcal{Z}}\big) = \sum_{v\in\mathcal{X}}\sum_{i=1}^{n(v)} \tau\left(l\left(u_i(v)\right)\right)+C_{\tau}\left(\mathcal{Y}\right)$ and $\tau^{n}(P) = 1+\sum_{v\in\mathcal{X}}\sum_{i=1}^{n(v)-1} \tau\left(l\left(u_i(v)\right)\right)$ since $\mathrm{Tr}_n\left(e_{jj}\right)=1$ for every $j$, while $\tau\left(l\left(u_{n(v)}(v)\right)\right)=\tau\left(l(v)\right)$ for every $v\in\mathcal{X}$. (Step III: Reduction) Set $\widetilde{A} := A^{n}P$ and $\widetilde{M} := PM^{n}P$. Clearly, $\widetilde{M}$ is generated by the $g\otimes e_{11}$'s with $g \in \mathcal{G}$ and the $l\left(u_k(v)\right)\otimes e_{i_{k+1} 1}$'s with $k=1,\dots,n(v)-1$, $v \in \mathcal{X}$.
Set $\tilde{\tau} := \tau^{n}\big|_{\widetilde{M}}$, and the $\tilde{\tau}$-conditional expectation $E_{\widetilde{A}}^{\widetilde{M}} : \widetilde{M} \rightarrow \widetilde{A}$ is given by the restriction of $E_A^M\otimes E_{\mathbf{C}^n}^{M_n(\mathbf{C})}$ to $\widetilde{M}$. Let $\widetilde{\mathcal{G}}$ be the smallest $E_{\widetilde{A}}^{\widetilde{M}}$-groupoid that contains $\big\{ g\otimes e_{11} : g \in \mathcal{G}\big\}\sqcup\big\{ l\left(u_k(v)\right)\otimes e_{i_{k+1} 1} : k=1,\dots,n(v)-1, v \in \mathcal{X}\big\}$. Also, for each $i=1,2,3$, let $\widetilde{\mathcal{G}}_i$ be the smallest $E_{\widetilde{A}}^{\widetilde{M}}$-groupoid that contains $\big\{ g\otimes e_{11} : g \in \mathcal{G}_i\big\}\sqcup\big\{ l\left(u_k(v)\right)\otimes e_{i_{k+1} 1} : k=1,\dots,n(v)-1, v \in \mathcal{X}\big\}$ and set $\widetilde{N}_i := \widetilde{\mathcal{G}}_i''$. Then, it is clear that $\widetilde{N}_i = P\left(N_i\otimes M_n(\mathbf{C})\right)P = \left(N_i\otimes M_n(\mathbf{C})\right)\cap\widetilde{M}$, $i=1,2,3$. In particular, $\widetilde{A}$ is a Cartan subalgebra in $\widetilde{N}_3$, and thus $\widetilde{\mathcal{G}}_3 = \mathcal{G}\big(\widetilde{N}_3\supseteq\widetilde{A}\big)$. We have $\widetilde{\mathcal{G}} = \widetilde{\mathcal{G}}_1\vee\widetilde{\mathcal{G}}_2$, i.e., the smallest $E_{\widetilde{A}}^{\widetilde{M}}$-groupoid that contains both $\widetilde{\mathcal{G}}_1$ and $\widetilde{\mathcal{G}}_2$. Here, some simple facts are in order.
\begin{itemize} \item[(a)] $N_1\otimes M_n(\mathbf{C})$ and $N_2\otimes M_n(\mathbf{C})$ are free with amalgamation over $N_3\otimes M_n(\mathbf{C})$ inside $M\otimes M_n(\mathbf{C})$ with respect to $E_{N_3}^M\otimes\mathrm{id}_{M_n(\mathbf{C})}$; \item[(b)] $v\otimes e_{11} = \left(a(v)^*\otimes e_{11}\right)\cdot \widetilde{u}_{n(v)}(v)\cdots\widetilde{u}_1(v)$ for every $v \in \mathcal{X}$; \item[(c)] $l\left(u_k(v)\right)\otimes e_{i_{k+1} 1} = \left(\widetilde{u}_k(v)\cdots \widetilde{u}_1(v)\right)\left(\left(u_k(v)\cdots u_1(v)\right)\otimes e_{11}\right)$ for every $k=1,\dots,n(v)-1$ and $v \in \mathcal{X}$. \end{itemize} By (a), we have $\displaystyle{\widetilde{M} \cong \widetilde{N}_1\bigstar_{\widetilde{N}_3}\widetilde{N}_2}$ with respect to the restriction of $E_{N_3}^M\otimes\mathrm{id}_{M_n(\mathbf{C})}$ to $\widetilde{M}$ (giving the $\tilde{\tau}$-conditional expectation onto $\widetilde{N}_3$). By (b) and (c), it is plain to see that $\widetilde{\mathcal{Z}}$ is a graphing of $\widetilde{\mathcal{G}}$. Moreover, by its construction, it is decomposable, that is, $\widetilde{\mathcal{Z}} = \widetilde{\mathcal{Z}}_1\sqcup\widetilde{\mathcal{Z}}_2$ with collections $\widetilde{\mathcal{Z}}_1$, $\widetilde{\mathcal{Z}}_2$ of elements in $\widetilde{\mathcal{G}}_1$, $\widetilde{\mathcal{G}}_2$, respectively. Therefore, Lemma \ref{Lem3-11} shows that there is a treeing $\widetilde{\mathcal{Z}}' = \widetilde{\mathcal{Z}}'_1\sqcup\widetilde{\mathcal{Z}}'_2$ in $\widetilde{\mathcal{G}}_3 = \mathcal{G}\big(\widetilde{N}_3\supseteq\widetilde{A}\big)$ with the property that $\widetilde{\mathcal{Z}}_i\sqcup\widetilde{\mathcal{Z}}'_i$ is a graphing of $\widetilde{\mathcal{G}}_i$ for each $i=1,2$.
Therefore, by Corollary \ref{Cor3-7} (b) (or Remark \ref{Rem3-8}), we have \begin{equation*} C_{\tilde{\tau}}\big(\widetilde{\mathcal{Z}}\big) = C_{\tilde{\tau}}\big(\widetilde{\mathcal{Z}}_1\sqcup\widetilde{\mathcal{Z}}'_1\big)+ C_{\tilde{\tau}}\big(\widetilde{\mathcal{Z}}_2\sqcup\widetilde{\mathcal{Z}}'_2\big)- C_{\tilde{\tau}}\big(\widetilde{\mathcal{Z}}'\big) \geq C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_1\big)+ C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_2\big)- C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_3\big). \end{equation*} It is trivial that $c_{\widetilde{N}_i}(1\otimes e_{11}) = 1_{\widetilde{M}}$ for all $i=1,2,3$ with $1_{\widetilde{M}} = P$, and that $\left\{ g\otimes e_{11} : g \in \mathcal{G}_i\right\}=\left(1\otimes e_{11}\right)\widetilde{\mathcal{G}}_i\left(1\otimes e_{11}\right)$ for every $i=1,2,3$. Hence, \eqref{eq1-Thm3-1}, \eqref{eq2-Thm3-1} and Proposition \ref{Prop3-6} altogether imply that \allowdisplaybreaks{ \begin{align*} C_{\tau}(\mathcal{G}) + \varepsilon &\geq C_{\tau}(\mathcal{Z}) \\ &= C_{\tilde{\tau}}\big(\widetilde{\mathcal{Z}}\big) - \tilde{\tau}\big(1_{\widetilde{M}}\big)+1 \\ &\geq C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_1\big)+ C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_2\big)- C_{\tilde{\tau}}\big(\widetilde{\mathcal{G}}_3\big) - \tilde{\tau}\big(1_{\widetilde{M}}\big)+1 \\ &= C_{\tau}\big(\mathcal{G}_1\big)+ C_{\tau}\big(\mathcal{G}_2\big)- C_{\tau}\big(\mathcal{G}_3\big). \end{align*} }\end{proof}
\section{Introduction} It is not known what the magnetic field associated with sunspots looks like underneath the solar surface. \cite{Cowling1946} proposed that a sunspot extends below the surface as a magnetic flux tube - field lines bound tightly together in a single monolithic column resisting deformation against pressure from the surrounding gas. However, the sharp vertical gradient in the ambient gas pressure at the surface necessitates that the magnetic field lines fan out rapidly. This would make a flux tube highly concave near the surface, and therefore susceptible to the fluting instability. This prompted \cite{1979Parker} to suggest an alternative configuration in which the field underneath the surface may be structured - a sunspot, in this view, is a cluster of numerous small flux tubes that are held together by a converging flow below a certain depth. However, \cite{MeyerWeiss} had used a vacuum model of a flux tube to study the stability of spots against the fluting instability, and concluded that spots should not break up into smaller flux tubes up to a depth of 5 Mm. \cite{1981Spruit} built on the work of \cite{MeyerWeiss} and constructed a cluster model of a sunspot which is similar to a tethered balloon model (see Figure 1 of \cite{1981Spruit}) - the tube remains coherent up to a certain depth, beyond which it is fragmented into small individual flux tubes that are tied together at the base of the convection zone. It differed from \cite{1979Parker}, in that the tying of the flux tube to the base of the convection zone removed the necessity of a converging flow to explain the stability of sunspots. For a discussion on the merits and demerits of both the monolithic and cluster models, see Chapter 1 of \cite{Thomas_Weiss1992}. 
The fact that penumbral filaments often invade a spot's umbra and fragment it \citep{Spot_frag12,umbra_divide_2018} suggests that the fluting instability might play a role in determining the subsurface structure of spots and therefore, by extension, their appearance on the surface. However, the probing of sunspot subsurface structure using helioseismic techniques has not been able to distinguish between the cluster and monolithic models \citep{Moradi2010SoPh}. Existing MHD simulations of complete sunspots \citep{rempel09b,Rempel_2011_subsurface,Rempel_2011b}, using the radiative-MHD code MURaM \citep{Vogler2005,rempel09a}, correspond to the monolithic model. \cite{Rempel_2011_subsurface} specifically addressed the question of whether a sunspot is monolithic or cluster-like underneath the surface and concluded that sunspots are closer to the monolithic model, but can become highly fragmented in their decay stage. However, these models have field lines that are too vertical near the spot periphery to form penumbral filaments naturally. This is overcome by increasing the horizontal field strength at the top boundary by a factor of two compared to a potential field configuration, and the extent of the penumbra is solely determined by the magnetic top boundary condition \citep{Rempel_2012}. Recently, \cite{2020A&A..Jurack} presented a sunspot simulation with a decent-sized penumbra without modifying the top boundary, by using a strongly compressed flux tube at the lower boundary. Their penumbra, however, is dominated by the counter-Evershed flow. Also, their umbral field strength is higher than what is observed. In this paper, we conduct numerical experiments using the MURaM code to investigate the susceptibility of flux tubes to the fluting instability by varying the initial magnetic field structure.
We focus on the question: would sunspots with field lines inclined strongly enough to form penumbral filaments result in flux tubes that become highly fluted under the surface? To this end, we constructed initial sunspot flux tube configurations where the field lines are curved near the surface, such that they form penumbral filaments without having to change the top boundary condition, and become close to vertical below a certain depth. We describe our simulation setups and the initial conditions of our magnetic flux tubes in Section \ref{sec:simset}. We conducted four runs in the computationally inexpensive slab geometry, where we systematically varied the radius of curvature ($R_\mathrm{c}$) to check if we can control the degree of fluting. Then we computed two complete circular spots of opposite polarities in a shallow computational domain. We present our results in Section \ref{sec:results} and discuss the implications of our results in Section \ref{sec:diss}. \section{Simulation Setup} \label{sec:simset} We used the MURaM radiative MHD code for our simulations. For our four slab geometry runs, we chose simulation boxes with dimensions of 36 Mm ($x$) $\times$ 6 Mm ($y$) $\times$ 10.3 Mm ($z$) and resolutions of 48 km $\times$ 48 km $\times$ 25.8 km. We conducted a further run where we computed complete circular spots of opposite polarities. We used a relatively shallow domain with a vertical extent of 6 Mm which had a resolution of 20 km. The horizontal extents of this run were 72 Mm $\times$ 36 Mm with a resolution of 48 km in both directions. All of our boxes were periodic in the horizontal directions and the upper boundaries were kept open to plasma flows. When our hydrodynamic runs achieved thermal equilibrium, we introduced magnetic flux tubes into the simulation domains.
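As a quick sanity check on the setup above, the grid sizes implied by the quoted extents and resolutions can be computed directly. This is a minimal sketch; the cell counts are our own inference and may differ slightly from the actual setup owing to rounding of the extents quoted in the text.

```python
def n_cells(extent_Mm, resolution_km):
    """Number of grid cells for a given extent (Mm) and cell size (km)."""
    return round(extent_Mm * 1000.0 / resolution_km)

# Slab geometry runs: 36 x 6 x 10.3 Mm at 48 x 48 x 25.8 km
slab = (n_cells(36, 48), n_cells(6, 48), n_cells(10.3, 25.8))

# Round-spot run: 72 x 36 x 6 Mm at 48 x 48 x 20 km
round_spot = (n_cells(72, 48), n_cells(36, 48), n_cells(6, 20))

print(slab)        # (750, 125, 399)
print(round_spot)  # (1500, 750, 300)
```

The vertical cell count of the slab runs is not an exact integer multiple, which suggests the quoted extent of 10.3 Mm is itself rounded.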
We initialized our magnetic runs by damping all three components of the velocity field by a factor of $(1+ (|B|^2/B_\mathrm{c}^{2}))$, where $B_\mathrm{c}^{2}$ = 80000 $\mathrm{G}^{2}$. We do this only at the timestep where our magnetic flux tubes are introduced, and thereafter we let the convective flow field develop naturally. In the following paragraphs we describe the initial structure of these flux tubes. \begin{figure*} \centering \hspace{-3.0cm}\includegraphics[scale=0.15]{Initial_condition_bz_rc_ref.png} \caption{Panels a-d: Vertical slices of B$_\mathrm{z}$ (x-z plane, here Depth = $h_\mathrm{phot}$ - z) used as initial conditions for the slab geometry runs, with their corresponding $R_\mathrm{c}$ shown on the right. The initial magnetic conditions for the slab geometry runs were invariant in the y direction. Panel e: Vertical slice of B$_\mathrm{z}$ used as initial condition for the circular spot simulation. The circular spot had an axisymmetric initial condition, with its corresponding $R_\mathrm{c}$ shown on the right. } \label{fig:slabinit} \end{figure*} \subsection{Slab Geometry Runs} As discussed, our initial conditions are designed to serve two purposes - 1) they should result in the formation of penumbral filaments, and 2) have a small radius of curvature ($R_\mathrm{c}$) that induces the fluting instability. Since these are numerical experiments, we are free to choose the initial conditions to achieve these goals. For our slab geometry runs, we define the three components of the magnetic field inside the initial flux tubes, in conformity with the $\Vec{\nabla} \cdot \Vec{B} = 0$ constraint, as follows: \begin{align} \label{eqn:1} B_{z} & = f(z), \nonumber \\ B_{x} & = -xf'(z), \nonumber \\ B_{y} & = 0, \end{align} where, \begin{equation} f(z) = B_{bot} \exp\left(-\frac{z}{\sigma}\right). \label{eqn:fz} \end{equation} At $z=0$ (the lower boundary) we set $B_{z}$ to $B_{bot}$ and at $z = h_{phot}$ (the optical surface), we set $B_{z}$ to $B_{opt}$.
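For concreteness, the field defined by eqns. \ref{eqn:1} and \ref{eqn:fz} can be checked numerically. The sketch below is illustrative only and not part of the actual MURaM setup; in particular, the values of $B_{bot}$, $B_{opt}$ and $h_{phot}$ are assumptions. It verifies that the field is divergence-free and evaluates the local radius of curvature $R_\mathrm{c} = 1/|\Vec{\kappa}|$ defined below, confirming that the field lines straighten towards the tube axis:

```python
import numpy as np

# Illustrative parameter values (assumptions, not quoted setup values)
B_bot, B_opt = 20000.0, 3000.0  # Gauss
h_phot = 9.0                    # Mm (assumed height of the optical surface)

# Solving f(h_phot) = B_opt with f(z) = B_bot * exp(-z / sigma):
sigma = h_phot / np.log(B_bot / B_opt)

def B(x, z):
    """Return (B_x, B_z) of eqn. (1): B_z = f(z), B_x = -x f'(z)."""
    f = B_bot * np.exp(-z / sigma)
    return np.array([(x / sigma) * f, f])

# 1) Divergence-free check, dBx/dx + dBz/dz = 0, via central differences:
x0, z0, h = 1.5, 3.0, 1e-5
div = ((B(x0 + h, z0)[0] - B(x0 - h, z0)[0])
       + (B(x0, z0 + h)[1] - B(x0, z0 - h)[1])) / (2 * h)
assert abs(div) < 1e-3

# 2) Local radius of curvature R_c = 1 / |(b . grad) b| of the unit field b:
def R_c(x, z, h=1e-4):
    def b(x, z):
        Bx, Bz = B(x, z)
        return np.array([Bx, Bz]) / np.hypot(Bx, Bz)
    bx, bz = b(x, z)
    db_dx = (b(x + h, z) - b(x - h, z)) / (2 * h)
    db_dz = (b(x, z + h) - b(x, z - h)) / (2 * h)
    kappa = bx * db_dx + bz * db_dz
    return 1.0 / np.hypot(kappa[0], kappa[1])

# Field lines are nearly straight near the tube axis and curve towards the
# edges, i.e. R_c drops away from x = 0:
assert R_c(0.5, 3.0) > R_c(3.0, 3.0)
```

For this field the curvature can also be written in closed form as $|\Vec{\kappa}| = (x/\sigma^2)\,(1+(x/\sigma)^2)^{-3/2}$, against which the finite-difference result can be checked.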
$B_{opt}$ and $B_{bot}$ are parameters that we are free to choose. Using these constraints and eqn. \ref{eqn:fz} we can express $\sigma$ as, \begin{equation} \sigma = h_{phot}/\log(\frac{B_{bot}}{B_{opt}}). \end{equation} Keeping $B_\mathrm{opt}$ at 3000 Gauss, we conducted three runs with $B_\mathrm{bot}$ set to 10000, 20000 and 30000 Gauss. We labeled these runs as R10, R20 and R30 respectively. The top 3 panels of Figure \ref{fig:slabinit} depict the initial $B_\mathrm{z}$ and their corresponding $R_\mathrm{c}$, for these runs. In order to quantify $R_\mathrm{c}$ we first have to calculate the curvature vector ($\Vec{\kappa}$), which is given by: \begin{equation} \Vec{\kappa} = \left(\Vec{b} \cdot \Vec{\nabla}\right) \Vec{b} \end{equation} where, \begin{equation} \Vec{b} = \frac{\Vec{B}}{|\Vec{B}|} \end{equation} The inverse of the magnitude of $\Vec{\kappa}$, $1/|\Vec{\kappa}|$, at any point gives the local $R_\mathrm{c}$. We have plotted the corresponding $R_\mathrm{c}$ of our initial magnetic fields in the right-hand column of Figure \ref{fig:slabinit}. In all of the cases, $R_\mathrm{c}$ is very high at the centre, implying near vertical fields, while at the edges the fieldlines are significantly curved. Clearly, the fieldlines become more curved as we progress from R10 to R30 (panels a - c). Note how the brighter band in the centre becomes narrower from R10 to R30. One can thus predict that fluid elements can penetrate furthest into the flux tube of R30 before meeting any resistance from strong vertical fields. A side effect of decreasing $R_\mathrm{c}$ simply by continuously increasing the field strength at the lower boundary is that it keeps making the flux tube narrower at its base. We, therefore, carried out another experiment where we tried out a different initial condition. We superimposed two additional flux tubes on either side of the main flux tube used in R20, as shown in panel d of Figure \ref{fig:slabinit}.
We did this because - 1) the enhanced field strength at the edges, close to the lower boundary, would help keep the flux tube coherent at the base of the simulation box (note that this run has the highest $R_\mathrm{c}$ at the base), and 2) the additional magnetic pressure around the centre of the flux tube, near the surface, would help the fieldlines fan out more and become even more inclined once the flux tube achieves pressure equilibrium, facilitating penumbral filament formation. We labeled this run R20E. Note that due to the superposed smaller tubes, the initial field strength at the base of the computational domain in this run locally reaches 30 kG at the edges. \subsection{Round spots} For our shallow round spot simulation we use an initial condition whose vertical cut is similar to that of the initial condition used in R20E. Two flux sheets were superimposed on either side of the main flux sheet and this was rotated axisymmetrically, while ensuring that $\Vec{\nabla}\cdot\Vec{B}=0$. A vertical cut of the initial condition through the centre of the simulation box is shown in panel e of Figure \ref{fig:slabinit}. \subsection{Boundary Condition for the magnetic field} In the shallow sunspot simulation presented in \cite{Rempel_2011_subsurface}, a lower boundary open to plasma flows inside the magnetic flux tube caused the sunspot to disintegrate completely within 6 hours. In our simulations, for all of the runs, we set all velocities to zero at the lower boundary for $|B|> 1000$ Gauss. This allows us to study the effects of the fluting instability with minimal interference from the lower boundary. At the upper boundary the magnetic field was made to have a potential field configuration. \section{Results} \label{sec:results} \begin{figure*} \centering \includegraphics[scale=0.55]{all_slab.pdf} \caption{Left Panel: Horizontal cuts of B$_\mathrm{z}$ of the slab geometry runs - R10, R20, R30, R20E, in Gauss at a depth of 4.65 Mm after 8 hours of solar runtime.
Right Panel: The corresponding bolometric intensity maps in units of 10$^{10}$ erg cm$^{-2}$ ster$^{-1}$ s$^{-1}$. The images have been repeated twice in the y-direction. } \label{Fig:slabintenmidbox} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.3]{Bz_r20e_horcuts_zoomed.png} \caption{Zoomed-in horizontal cuts of B$_\mathrm{z}$ (Gauss) of the run R20E at different depths below the photosphere. We have intentionally chosen only 4 contour levels to draw attention to the tongue-like weak field regions at the edge of the flux tube caused by fluid penetrating from outside. } \label{Fig:bz_horcuts_r20e} \end{figure*} \begin{figure*} \centering \hspace{-1.5cm}\includegraphics[scale=0.45]{Inten_time_shallow_round_ref.png} \caption{Temporal evolution of the circular spot simulation showing the advancement of the fluting instability. The top panel shows the emerging bolometric intensity in units of 10$^{10}$ erg cm$^{-2}$ ster$^{-1}$ s$^{-1}$ at different stages of the evolution. The lower panel shows horizontal cuts of B$_\mathrm{z}$ in Gauss at a depth of 5.3 Mm below the photosphere. } \label{fig:time_ev} \end{figure*} \subsection{Slab Geometry Runs} The left panel of Figure \ref{Fig:slabintenmidbox} shows horizontal cuts of B$_\mathrm{z}$ at a depth of 4.65 Mm below the visible surface. It is clear that both the number of filament-like intrusions of the surrounding plasma and the lengths of such intrusions increase as we increase the curvature of the initial flux tubes, as seen in the results of R10, R20 and R30. In all of the runs, the instability originates close to the middle of the box, where the curvature is maximum, and propagates both upwards and downwards. Some of these intrusions eventually manifest themselves at the surface in the intensity images as long penumbral filaments with thin dark cores (see right panel of Figure \ref{Fig:slabintenmidbox}).
The purpose of the runs in the slab geometry was to vary R$_\mathrm{c}$ and see if it results in different amounts of fluting. Our results confirm that R$_\mathrm{c}$ indeed controls the degree of fluting. The run R20E exhibits properties that lie between R20 and R30 - the intrusions are plentiful but only a couple of them manage to reach the centre of the flux tube. At the surface, it develops the most expansive penumbra among the four cases, while having an umbra that is not distorted by intruding filaments. This indicates that our numerical experiment of superimposing two additional flux tubes achieved its intended purpose. This prompted us to choose the initial condition for the next circular spot simulation such that its vertical slice is similar to that of run R20E. A side effect of the higher field strengths at the lower boundary is that the runs R30 and R20E have comparatively cleaner umbrae with fewer umbral dots. In Figure \ref{Fig:bz_horcuts_r20e} we have plotted horizontal cuts of B$_\mathrm{z}$ at different depths of the R20E run. We have zoomed in on only a part of the flux tube so that we can investigate individual filaments. We have chosen only 4 contour levels so that we can easily discern the penetrating tongues of the external fluid. Notice that the tongue-like weak field regions are the most prominent at a depth of 6 Mm, while both at the lower boundary and at a depth of 2 Mm only traces of the intrusions have appeared. It is clear that the fluting originates near the middle of the box and propagates both upwards and downwards through diffusive processes and pressure differences generated by the penetrating plasma. This also demonstrates that the fluting is not merely a boundary effect. \subsection{Round spots} For our circular spot simulation, we used initial conditions that are similar to the one used in run R20E.
Close to the surface, the initial flux tube had strong vertical fields near the centre, while below a certain depth the field strength at the edges of the flux tube was enhanced. We have plotted in Figure \ref{fig:time_ev} (part a) the evolution of the circular spot simulation in the shallow box. The top panel shows a series of intensity images at different stages of the evolution, while the bottom panel shows the corresponding horizontal cuts of B$_\mathrm{z}$ at a depth of 5.3 Mm. As seen in the intensity image panel, the inclined fields near the surface and the presence of opposite polarities result in the formation of penumbral structures of considerable extent in both the positive and negative spots 2 hours into the run. By this time, the corresponding flux tubes already show a very high degree of fluting. In the subsequent time frames, the flux tubes become increasingly distorted, and 6 hours into the simulation they are no longer coherent and break up into disconnected fragments. The instability propagates upwards and we see the heads of the filaments gradually penetrating the umbral regions. The last snapshot was taken 10 hours into the run, and by this time the umbra in the intensity image is completely covered with protruding filaments whose heads have migrated all the way to the center. The corresponding horizontal cut shows that the flux tubes are completely distorted, and both are reminiscent of the spaghetti-like structure hypothesized by \cite{1979Parker}. In our simulations, we see multiple flux sheets form, some of them loosely connected. It is important to note that in addition to being fluted, the flux tubes are also continuously pulled apart by convection, and we see the circumferences of both the tubes expanding with time. This accelerates the breaking up of the flux tubes into individual components, which in turn makes it easier for the filaments at the surface to penetrate further into the umbrae.
This is in agreement with \cite{1979Parker}, who suggested that in order to prevent a fluted flux tube from being completely pulled apart there must be a converging flow that holds the different parts together; in the absence of such a converging flow in our simulations, the flux tubes simply break up. It is important to bear in mind that we had set all velocities at points with $|B|> 1000$ G at the lower boundary to zero. However, the magnetic field at the lower boundary can still be transported by the external flow field and be weakened by filamentary intrusions from above, mediated by diffusive processes. \begin{figure*} \centering \hspace*{-1.0cm}\includegraphics[scale=0.06]{Inten_by_diff_depths_shallowround_ref_vx-min.png} \caption{Snapshot of the circular spot simulation after 3.5 hours of solar runtime with the bolometric intensity image in the top panel (a), horizontal cuts of B$_\mathrm{z}$ at two different depths (b-c) and at the $\tau$=1 surface (d). Panel \textit{e} shows the vertical velocity profile at a depth of 2.5 Mm and panel \textit{f} plots v$_\mathrm{x}$ at the $\tau$=1 surface. The intensity map is in units of 10$^{10}$ erg cm$^{-2}$ ster$^{-1}$ s$^{-1}$, B$_\mathrm{z}$ is in Gauss, and the velocities are in units of km/s. } \label{Fig:round_diffdepths} \end{figure*} In Figure \ref{Fig:round_diffdepths}, we present, 3.5 solar hours into the run, the bolometric intensity image (panel a), horizontal cuts of the magnetic field at different depths (panels b-d), the vertical velocity profile at a depth of 2.5 Mm (panel e) and the velocity along the x direction at the $\tau=1$ surface (panel f). At a depth of 5.3 Mm, the flux tubes are almost completely shredded after 3.5 hours of runtime. The instability, in this case, had originated closer to the lower boundary and propagated upwards, as is evidenced by the decreasing severity of the fluting at a depth of 2.5 Mm and at the $\tau$=1 surface. In panel e, we have plotted v$_\mathrm{z}$ at a depth of 2.5 Mm.
We find that in the areas that correspond to the penetrating fluid at the edge of the flux tube, there is a systematic upflow. These upflows eventually help the intrusions manifest at the surface as lightbridges. At the centre of the flux tube v$_\mathrm{z}$ becomes negligible. A noticeable feature in the intensity image is the extent of the penumbra. We have achieved an umbra:penumbra area ratio of around 1:4, which is in the range of what is observed on the Sun \citep{solanki_review}. This is a significant result since sunspot simulations typically rely on the upper boundary condition to achieve respectable penumbral proportions. In contrast to \cite{2020A&A..Jurack}, who also used the subsurface structure of the sunspot to produce a penumbra, we obtain Evershed flows that have the correct orientation (panel f). However, like \cite{2020A&A..Jurack}, we also obtain umbral field strengths that are higher than what is typically observed. The periodicity of the horizontal boundaries makes the penumbra slightly asymmetric: it is more elongated in the x-direction, where the opposite polarities meet. \section{Conclusion} \label{sec:diss} We have simulated complete sunspots that naturally form penumbral filaments and have further demonstrated that sunspots with highly curved flux tubes may have subsurface structures which are close to the cluster model proposed by \cite{1979Parker}. Our simulations lead us to make the following conclusions about the nature of sunspots: 1) It is quite clear that the initial subsurface structure plays an important role in the formation of penumbral filaments and the stability of sunspots. Highly curved flux tubes are indeed vulnerable to the fluting instability, as had been speculated by many authors before. Our experiments in the slab geometry, where we systematically varied the curvature of the initial flux tube, confirm that the intrusions of plasma into the flux tubes are indeed due to the fluting instability, and we could control the degree of fluting to some extent by continuously decreasing the radius of curvature. 2) Our circular spot simulation has strong horizontal fields and consequently develops extended penumbral filaments that harbour the Evershed flow. 3) Our simulations suggest that even sunspots with little structuring at the surface might already be highly fluted underneath, and eventually the subsurface structuring is manifested at the surface through penumbral filaments encroaching into the umbra. The nearly field-free material typical of such intruding filaments reaches down to 5 Mm or more in our simulations. Whether sunspots anchored deep in the convection zone can keep the spaghetti-like structure from being torn apart, as predicted by \cite{1981Spruit}, remains an open question. Sunspot simulations that cover the full convection zone, such as the one by \cite{2020MNRAS..Hotta}, can be used to answer this question. \acknowledgements This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 695075) and has been supported by the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea. MP acknowledges support by the International Max-Planck Research School (IMPRS) for Solar System Science at the University of Göttingen. RHC acknowledges partial support from the ERC synergy grant WHOLESUN 2018. The simulations have been carried out on supercomputers at GWDG and on the Max Planck supercomputer at RZG in Garching.
\section{Proofs} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:bounded_offset}}] Since $\widehat{f}$ is in the star hull around $\widehat{g}$, $\widehat{f}$ must lie in the set $\H : = \F + \star(\F-\F)$. Hence, in view of \eqref{eq:excess_loss_bound_deterministic}, the excess loss $\mathcal{E}(\widehat{f})$ is upper bounded by \begin{align} &\sup_{f \in \H} \left\{ (\widehat{\En}-\En) [2(f^*-Y)(f^* - f)] + \En (f^* - f)^2 - (1+c) \cdot \widehat{\En}(f^* - f)^2 \right\} \label{emp.quad.low.2.bounded} \\ & \leq \sup_{f \in \H} \left\{ (\widehat{\En}-\En) [2(f^*-Y)(f^* - f)] + (1+c/4)\En (f^* - f)^2 - (1+3c/4) \cdot \widehat{\En}(f^* - f)^2 \right. \notag\\ &\left.\hspace{3in}- (c/4) \left( \widehat{\En}(f^* - f)^2 + \En(f^*-f)^2 \right) \right\} \notag\\ &\leq \sup_{f \in \H} \left\{ (\widehat{\En}-\En) [2(f^*-Y)(f^* - f)] - (c/4) \left( \widehat{\En}(f^* - f)^2 + \En(f^*-f)^2 \right) \right\} \label{eq:frt}\\ &+ \sup_{f \in \H} \left\{ (1+c/4)\En (f^* - f)^2 - (1+3c/4) \cdot \widehat{\En}(f^* - f)^2 \right\} \label{eq:sec} \end{align} We invoke the supporting Lemma~\ref{lem:contraction} (stated and proved below) for the term \eqref{eq:sec}: \begin{align} &\En \sup_{f \in \H} \left\{ (1+c/4)\En (f^* - f)^2 - (1+3c/4) \cdot \widehat{\En}(f^* - f)^2 \right\} \\ &\leq \frac{K(2+c)}{2} \cdot \En \sup_{f \in \H} \frac{1}{n} \left\{ \sum_{i=1}^n 2 \epsilon_i (f(X_i)-f^*(X_i)) - \frac{c}{4K(2+c)}\cdot \sum_{i=1}^n (f(X_i)-f^*(X_i))^2 \right\}. \end{align} Let $\widehat{\En}'$ stand for the empirical expectation with respect to an independent copy $(X'_1,\ldots,X'_n)$. For the term \eqref{eq:frt}, Jensen's inequality yields \begin{align*} &\En \sup_{f \in \H} \left\{ (\widehat{\En}-\En) [2(f^*-Y)(f^* - f)] - (c/4) \left( \widehat{\En}(f^* - f)^2 + \En(f^*-f)^2 \right) \right\} \\ & \leq \En \sup_{f \in \H} \left\{ (\widehat{\En}-\widehat{\En}') [2(f^*-Y)(f^* - f)] - (c/4) \left( \widehat{\En}(f^* - f)^2 + \widehat{\En}'(f^*-f)^2 \right)\right\}.
\end{align*} When introducing i.i.d. Rademacher random variables, we observe that the quadratic term remains unchanged by renaming $X_i$ and $X_i'$, and thus the preceding expression is upper bounded by \begin{align*} 2\En \sup_{f \in \H} \left\{ \frac{1}{n}\sum_{i=1}^n 2\epsilon_i(f^*(X_i)-Y_i)(f^*(X_i) - f(X_i)) - (c/4)(f^*(X_i) - f(X_i))^2 \right\}. \end{align*} Using a contraction technique as in the proof of Lemma~\ref{lem:contraction}, we obtain an upper bound of \begin{align} &2M\cdot \En \sup_{f \in \H} \frac{1}{n}\left\{ \sum_{i=1}^n 2\epsilon_i (f^*(X_i) - f(X_i)) - \frac{c}{4M} \cdot \sum_{i=1}^n (f^*(X_i) - f(X_i))^2 \right\} \end{align} Combining the bounds yields the statement of the theorem. \end{proof} \begin{lemma} \label{lem:contraction} For any class $\F$ of uniformly bounded functions with $K=\sup_{f\in\F} |f|_\infty$, for any $f^*\in\F$, and for any $c>0$, it holds that \begin{align*} &\En\sup_{f\in\F}\left\{ \En(f-f^*)^2 - (1+2c)\widehat{\En}(f-f^*)^2 \right\}\\ &\leq c \cdot \En \sup_{f\in\F} \frac{1}{n} \left\{ \frac{4K(1+c)}{c}\sum_{i=1}^n \epsilon_i (f(X_i)-f^*(X_i)) - \sum_{i=1}^n (f(X_i)-f^*(X_i))^2 \right\}. \end{align*} \end{lemma} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:contraction}}] We write \begin{align*} &\En\sup_{f\in\F}\left\{ \En(f-f^*)^2 - (1+2c)\widehat{\En}(f-f^*)^2 \right\}\\ &=\En\sup_{f\in\F}\left\{ (1+c)\En(f-f^*)^2 - (1+c)\widehat{\En}(f-f^*)^2 - c\En(f-f^*)^2-c\widehat{\En}(f-f^*)^2 \right\} \end{align*} which, by Jensen's inequality, is upper bounded by \begin{align*} &\En\sup_{f\in\F} \left\{ (1+c)(\widehat{\En}'(f-f^*)^2-\widehat{\En}(f-f^*)^2) - c\widehat{\En}'(f-f^*)^2 - c\widehat{\En}(f-f^*)^2 \right\} \end{align*} We recall that $\widehat{\En}'$ is an empirical mean operator with respect to an independent copy $(X_1',\ldots,X_n')$. 
Writing out the empirical expectations, the above expression is equal to \begin{align*} &\En\sup_{f\in\F} \left\{ \frac{1+c}{n}\sum_{i=1}^n \epsilon_i \Big((f(X_i')-f^*(X_i'))^2-(f(X_i)-f^*(X_i))^2 \Big) - c\widehat{\En}'(f-f^*)^2 - c\widehat{\En}(f-f^*)^2 \right\}\\ &\leq 2 \cdot \En\sup_{f\in\F} \left\{ \frac{1+c}{n}\sum_{i=1}^n \epsilon_i (f(X_i)-f^*(X_i))^2 - c\widehat{\En}(f-f^*)^2 \right\} \end{align*} with the last expectation taken over $\epsilon_i$ and data $X_i$, $1\leq i\leq n$. We proceed with a contraction-style proof. Condition on $X_1,\ldots,X_n$ and $\epsilon_2,\ldots,\epsilon_n$, and write out the expectation with respect to $\epsilon_1$: \begin{align*} & \quad \frac{1}{2} \sup_{f\in\F} \left\{ \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(f(X_i)-f^*(X_i))^2 - c\widehat{\En}(f-f^*)^2 + \frac{1+c}{n}(f(X_1)-f^*(X_1))^2 \right\} \\ &+ \frac{1}{2} \sup_{g\in\F} \left\{ \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(g(X_i)-f^*(X_i))^2 - c\widehat{\En}(g-f^*)^2 - \frac{1+c}{n}(g(X_1)-f^*(X_1))^2 \right\}\\ &\leq \frac{1}{2} \sup_{f,g\in\F} \left\{ \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(f(X_i)-f^*(X_i))^2 - c\widehat{\En}(f-f^*)^2 + \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(g(X_i)-f^*(X_i))^2 \right.\\ &\left.\hspace{3in} - c\widehat{\En}(g-f^*)^2 + \frac{4K(1+c)}{n}|f(X_1)-g(X_1)| \right\} \end{align*} The absolute value can be dropped since the expression is symmetric in $f,g$.
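The $\frac{4K(1+c)}{n}|f(X_1)-g(X_1)|$ term above rests on the elementary bound $|a^2-b^2|=|a-b|\,|a+b|\leq 4K|a-b|$, valid because $f-f^*$ and $g-f^*$ take values in $[-2K,2K]$. As a quick numerical sanity check of this inequality (an illustration only, with an arbitrary value of $K$), one can scan a grid:

```python
# Check |a^2 - b^2| <= 4K |a - b| for all a, b in [-2K, 2K] on a dense grid.
K = 1.0
vals = [-2 * K + 4 * K * i / 200 for i in range(201)]
# worst-case slack of the inequality over the grid; it should never be positive
worst = max(abs(a * a - b * b) - 4 * K * abs(a - b) for a in vals for b in vals)
assert worst <= 1e-12
```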
We obtain an upper bound of \begin{align*} &\frac{1}{2} \sup_{f,g\in\F} \left\{ \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(f(X_i)-f^*(X_i))^2 - c\widehat{\En}(f-f^*)^2 + \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(g(X_i)-f^*(X_i))^2 \right.\\ &\left.\hspace{3in} - c\widehat{\En}(g-f^*)^2 + \frac{4K(1+c)}{n}(f(X_1)-g(X_1)) \right\} \\ &=\En_{\epsilon_1}\sup_{f\in\F} \left\{ \frac{1+c}{n}\sum_{i=2}^n \epsilon_i(f(X_i)-f^*(X_i))^2 - c\widehat{\En}(f-f^*)^2 + \frac{4K(1+c)}{n}\epsilon_1 f(X_1)\right\} \end{align*} Proceeding in this fashion for $\epsilon_2$ through $\epsilon_n$, we conclude \begin{align*} &\En\sup_{f\in\F}\left\{ \En(f-f^*)^2 - (1+2c)\widehat{\En}(f-f^*)^2 \right\} \\ &\leq \En \sup_{f\in\F} \left\{ \frac{4K(1+c)}{n}\sum_{i=1}^n \epsilon_i (f(X_i)-f^*(X_i)) - \frac{c}{n}\sum_{i=1}^n (f(X_i)-f^*(X_i))^2 \right\} \end{align*} where we added $f^*$ back in for free since random signs are zero-mean. \end{proof} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:unbounded-offset}}] We start with the deterministic upper bound \eqref{emp.quad.low.2.bounded} on the excess loss (see the proof of Theorem~\ref{thm:bounded_offset}): \begin{align} \label{emp.quad.low.2} \sup_{h \in \H} \left\{ (\widehat{\En}-\En) [2\xi h] + \En h^2 - (1+c) \cdot \widehat{\En} h^2 \right\} \end{align} where $h = f - f^* \in \H$. Define \begin{align*} U_{X_i,Y_i}(h) &= 2\xi_ih(X_i) - \mathbb{E} [2\xi h] + \En h^2 - (1+c) \cdot h(X_i)^2, \\ V_{X_i,Y_i}(h) &= 2\xi_ih(X_i) - \mathbb{E} [2\xi h] - \En h^2 + (1-c') \cdot h(X_i)^2, \end{align*} where $c'$ will be specified later. We now prove a version of the probabilistic symmetrization lemma \cite{GinZin84,Men03fewnslt} for \begin{align} \mbb{P} \left( \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > x \right). \end{align} Note that unlike the usual applications of the technique in the literature, we perform symmetrization with the quadratic terms.
Define \begin{align} \mathcal{B} =\left\{ \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > x \right\}, ~~ \beta = \inf_{h \in \H} \mbb{P} \left( \sum_{i=1}^n V_{X_i,Y_i}(h) < \frac{x}{2} \right). \end{align} Clearly, for $\{X_i, Y_i\}_{i=1}^n \in \mathcal{B}$ there exists an $h \in \H$ satisfying the condition defining $\mathcal{B}$. If, in addition, $h$ satisfies $$ \sum_{i=1}^n V_{X_i',Y_i'}(h)<\frac{x}{2} $$ then $$\sum_{i=1}^n U_{X_i,Y_i}(h) - V_{X_i',Y_i'}(h) > \frac{x}{2}$$ and therefore $$\sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) - V_{X_i',Y_i'}(h) > \frac{x}{2}.$$ The latter can be written as \begin{align*} &\sup_{h \in \H} \left\{ \sum_{i=1}^n 2\xi_ih(X_i) - 2\xi_i' h(X_i') + 2 \En h^2 - (1+c) \cdot h(X_i)^2 - (1-c') \cdot h(X_i')^2\right\} > \frac{x}{2} . \end{align*} Then for this particular $h$, \begin{align*} \beta & = \inf_{g \in \H} \mbb{P} \left( \sum_{i=1}^n V_{X_i',Y_i'}(g) < \frac{x}{2} \right) \leq \mbb{P} \left( \sum_{i=1}^n V_{X_i',Y_i'}(h) < \frac{x}{2} \right) \\ & \leq \mbb{P} \left( \sum_{i=1}^n U_{X_i,Y_i}(h) - V_{X_i',Y_i'}(h) > \frac{x}{2} \right) \leq \mbb{P} \left(\sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) - V_{X_i',Y_i'}(h) > \frac{x}{2} \right). \end{align*} Note that the right-hand side does not depend on $h$. We integrate over $\{X_i, Y_i\}_{i=1}^n \in \mathcal{B}$ to obtain \begin{align} &\beta \cdot \mbb{P}\left( \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > x \right) \notag\\ & \leq \mbb{P} \left(\sup_{h \in \H} n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\xi h] + 2 \En h^2- (1+c) \cdot \widehat{\En} h^2 - (1 - c')\cdot \widehat{\En}' h^2 \right\} > \frac{x}{2} \right) \label{eq:RHS1} \end{align} Next, we apply Assumption \ref{Assump:Low-Iso-Bd} with $\epsilon = c/4 = 1/72$ to the terms in \eqref{eq:RHS1} to construct an offset Rademacher process. Note that $$ \frac{2}{1-\epsilon} < 2 ( 1+ 2 \epsilon) = 2 + c.
$$ We can now choose $\tilde{c}, c'>0$ that satisfy \begin{align} \frac{2}{1 - \epsilon} \leq 2+c-c'-2\tilde{c} ~~~~\Longleftrightarrow~~~~ 1 - (1-c'-\tilde{c})(1-\epsilon) \leq (1+c - \tilde{c})(1-\epsilon) -1. \end{align} Choose $b$ now such that \begin{align} \label{eq:b} 1 - (1-c'-\tilde{c})(1-\epsilon) \leq b \leq (1+c - \tilde{c})(1-\epsilon) -1. \end{align} Then, applying the lower isometry bound and Eq.~\eqref{eq:b} on the set $\H$, we have with probability at least $1 - 2\delta$, \begin{align*} \widehat{\En} (f - f^*)^2 \geq (1-\epsilon) \cdot \En (f - f^*)^2 ~~\Longrightarrow~~ (1+b) \En h^2- (1+c) \cdot \widehat{\En} h^2 \leq -\tilde{c} \cdot \widehat{\En} h^2, \\ \widehat{\En}' (f - f^*)^2 \geq (1-\epsilon) \cdot \En (f - f^*)^2 ~~\Longrightarrow~~ (1-b) \En h^2 - (1 - c')\cdot \widehat{\En}' h^2 \leq -\tilde{c} \cdot \widehat{\En}' h^2. \end{align*} Thus we can continue bounding the expression in \eqref{eq:RHS1} as \begin{align*} & \quad \sup_{h \in \H} n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\xi h] + 2 \En h^2- (1+c) \cdot \widehat{\En} h^2 - (1 - c')\cdot \widehat{\En}' h^2 \right\} \\ & = \sup_{h \in \H } n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\xi h] + (1+b) \En h^2- (1+c) \cdot \widehat{\En} h^2 + (1-b) \En h^2 - (1 - c')\cdot \widehat{\En}' h^2 \right\} \\ & \leq \sup_{h \in \H } n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\xi h] -\tilde{c} \cdot \widehat{\En} h^2 - \tilde{c} \cdot \widehat{\En}' h^2 \right\} \end{align*} For the probability of deviation, we obtain \begin{align*} & \quad \beta \cdot \mbb{P}\left( \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > x \right) \\ & \leq \mbb{P} \left(\sup_{h \in \H} n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\xi h] -\tilde{c} \cdot \widehat{\En} h^2 - \tilde{c} \cdot \widehat{\En}' h^2 \right\} > \frac{x}{2} \right) + 2\delta \label{eq:plug-emr-sq-bd}\\ & = \mbb{P} \left(\sup_{h \in \H} n \cdot \left\{ 2(\widehat{\En} - \widehat{\En}')[\epsilon\xi h] - \tilde{c} \cdot
\widehat{\En} h^2 - \tilde{c} \cdot \widehat{\En}' h^2 \right\} > \frac{x}{2} \right)+ 2\delta \\ & \leq 2 \mbb{P} \left( \sup_{h \in \H} \left\{ \sum_{i=1}^n 2\epsilon_i \xi_i h(X_i) - \tilde{c} \cdot \sum_{i=1}^n h(X_i)^2 \right\} > \frac{x}{4} \right) + 2 \delta. \end{align*} To estimate $\beta$, write \begin{align} \beta & = \inf_{h \in \H} \mbb{P} \left( \sum_{i=1}^n V_{X_i,Y_i}(h) < \frac{x}{2} \right) \\ & = 1 - \sup_{h \in \H} \mbb{P} \left( \sum_{i=1}^n 2\xi_ih(X_i) - \mathbb{E} [2\xi h] - \En h^2 + (1-c') \cdot h(X_i)^2 \geq \frac{x}{2} \right). \end{align} Let us bound the probability in the last line: for any $h \in \H$, \begin{align} & \mbb{P}\left( (\widehat{\En} - \En) [2\xi h]+(1-c')\widehat{\En} h^2 - \En h^2 > \frac{x}{2n} \right) \\ \leq & \mbb{P}\left( (\widehat{\En} - \En) [2\xi h] > \frac{x}{2n} + \frac{c'}{2} \En h^2 \right)+\mbb{P}\left( (\widehat{\En}-\En) [h^2] > \frac{c'}{2(1-c')} \En h^2 \right). \label{beta.eq} \end{align} Define $$ A := \sup_{h \in \H} \frac{\En h^4}{(\En h^2)^2} ~~~\text{and}~~~ B:= \sup_{X, Y} \En \xi^4. $$ Then for the second term in Eq.~\eqref{beta.eq}, using Chebyshev's inequality, \begin{align*} \mbb{P}\left( (\widehat{\En}-\En) [h^2] > \frac{c'}{2(1-c')} \En h^2 \right) & \leq \frac{4(1-c')^2A}{c'^2n} \leq 1/4 \end{align*} if $$n \geq \frac{16 (1-c')^2 A}{c'^2}.$$ For the first term in Eq.~\eqref{beta.eq}, note that $$ {\sf Var}[2\xi h] \leq 4 \En [\xi^2 h^2] \leq 4 \sqrt{AB} \cdot \En h^2 $$ and thus, by Chebyshev's inequality, \begin{align*} \mbb{P}\left( (\widehat{\En} - \En) [2\xi h] > \frac{x}{2n} + \frac{c'}{2} \En h^2 \right) & \leq \frac{4 \sqrt{AB} \cdot \En h^2}{n\left( \frac{x}{2n} + \frac{c'}{2} \En h^2 \right)^2} \\ & \leq \frac{4 \sqrt{AB} \cdot \En h^2}{n \cdot 4 \frac{x}{2n} \cdot \frac{c'}{2} \En h^2} \leq \frac{1}{4} \end{align*} if $$ x \geq \frac{16\sqrt{AB}}{c'}.
$$ Assembling the above bounds, we obtain $$ \sup_{h \in \H} \mbb{P} \left( \sum_{i=1}^n 2\xi_ih(X_i) - \mathbb{E} [2\xi h] - \En h^2 + (1-c') \cdot h(X_i)^2 \geq \frac{x}{2} \right)\leq \frac{1}{2}, $$ which further implies $\beta \geq 1/2$ for any $x> \frac{16\sqrt{AB}}{c'}$ and whenever $$n > \frac{16 (1-c')^2 A}{c'^2}.$$ Under the above regime, \begin{align*} &\frac{1}{2} \mbb{P}\left( \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > x \right)\leq 2 \mbb{P}\left( \sup_{h \in \H} \left\{ \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - \tilde{c} \cdot \sum_{i=1}^n h(X_i)^2 \right\} > \frac{x}{4} \right)+ 2 \delta \end{align*} and so \begin{align*} &\quad \mbb{P}\left( \sup_{h \in \H} \sum_{i=1}^n U_{X_i,Y_i}(h) > 4t \right) \\ &\leq 4 \mbb{P}\left( \sup_{h \in \H} \left\{ \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - \tilde{c} \cdot \sum_{i=1}^n h(X_i)^2 \right\} >t \right) + 4\delta. \end{align*} We conclude by writing \begin{align*} &\mbb{P}\left( \sup_{h \in \H} (\widehat{\En}-\En)[2\xi h] + \En h^2 - (1+c) \cdot \widehat{\En} h^2 > 4t \right) \\ & \leq 4 \mbb{P} \left( \sup_{h \in \H} \left\{ \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - \tilde{c} \cdot \sum_{i=1}^n h(X_i)^2 \right\} > t \right) + 4 \delta. \end{align*} \end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{eq:finite}}] Using a standard argument, \begin{align*} \En_\epsilon \max_{v\in V} \left[ \sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] &\leq \frac{1}{\lambda}\log\sum_{v\in V} \En_\epsilon \exp\left\{\sum_{i=1}^n \lambda \epsilon_i v_i - \lambda Cv_i^2\right\}. \end{align*} For any $v\in V$, \begin{align*} \En_\epsilon \exp\left\{\sum_{i=1}^n \lambda\epsilon_i v_i - \lambda Cv_i^2\right\} \leq \exp\left\{\sum_{i=1}^n \lambda^2 v_i^2/2 - \lambda Cv_i^2\right\} \leq 1 \end{align*} by setting $\lambda = 2C$. The first claim follows.
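Since the expectation over $\epsilon \in \{\pm 1\}^n$ can be computed exactly for small $n$, the first claim (the bound $\log N/(2C)$ obtained with $\lambda = 2C$) can be illustrated numerically on a toy finite class; the class $V$ below is hypothetical and chosen only for illustration:

```python
import itertools, math, random

def offset_max_expectation(V, C):
    """Exact E_eps max_{v in V} [sum_i eps_i v_i - C v_i^2], enumerating all signs."""
    n = len(V[0])
    total = 0.0
    for eps in itertools.product((-1.0, 1.0), repeat=n):
        total += max(sum(e * vi - C * vi * vi for e, vi in zip(eps, v)) for v in V)
    return total / 2 ** n

random.seed(0)
n, N, C = 10, 8, 0.7
V = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(N)]
lhs = offset_max_expectation(V, C)
rhs = math.log(N) / (2 * C)   # the bound obtained with lambda = 2C
assert lhs <= rhs
```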
For the second claim, \begin{align*} \Pr{\max_{v\in V} \left[ \sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] \geq \frac{1}{2C} \log ( N/\delta)} &\leq \En\exp\left\{ \lambda \max_{v\in V} \left[ \sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] - \lambda\frac{1}{2C} \log( N/\delta) \right\}\\ &\leq \sum_{v\in V} \En \exp\left\{ \lambda \left[ \sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] - \lambda\frac{1}{2C} \log( N/\delta) \right\}\\ &\leq \sum_{v\in V} \exp\left\{ - \log (N/\delta) \right\} = \delta. \end{align*} We now turn to the case where the noise $\xi$ is unbounded. \begin{align*} &\En_{\epsilon} \frac{1}{n} \max_{v\in V } \left\{ \sum_{i=1}^n \epsilon_i \xi_i v_i - C v_i^2 \right\} \leq \frac{1}{n\lambda} \log \En_{\epsilon} \sum_{v\in V} \exp \left( \lambda\sum_{i=1}^n \epsilon_i \xi_i v_i -\lambda C v_i^2 \right) \\ &\leq \frac{1}{n\lambda} \log \sum_{v\in V} \exp \left( \sum_{i=1}^n \frac{\lambda^2}{2} \xi_i^2 v_i^2 - \lambda C v_i^2 \right) \leq \max_{v \in V \setminus \{0\}} \frac{\sum_{i=1}^n v_i^2 \xi_i^2}{2C \sum_{i=1}^n v_i^2} \cdot \frac{\log N}{n} \end{align*} if we take $\lambda = \min_{v \in V \setminus \{0\}} \frac{2C \sum_{i=1}^n v_i^2}{\sum_{i=1}^n v_i^2 \xi_i^2}$. The high-probability statement also follows with this particular choice of $\lambda$. \end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:offset_estimate_chaining}}] The proof proceeds as in \citep{RakSri14nonparam}. Fix $\gamma\in[0,1]$.
By definition of a cover, there exists a set $V\subset \mathbb{R}^n$ of vectors of cardinality $N=\cN_2(\G,\gamma)$ with the following property: for any $g\in\G$, there exists a $v=v[g]\in V$ such that $$\frac{1}{n}\sum_{i=1}^n (g(z_i)-v_i)^2 \leq \gamma^2.$$ Then we may write \begin{align} \label{eq:decomp} & \quad \En_\epsilon \sup_{g\in\G} \left[ \sum_{i=1}^n \epsilon_i g(z_i) - Cg(z_i)^2 \right] \\ & \leq \En_\epsilon \sup_{g\in\G} \left[ \sum_{i=1}^n \epsilon_i (g(z_i)-v[g]_i) \right] + \En_\epsilon \sup_{g\in\G} \left[ \sum_{i=1}^n (C/4)v[g]_i^2- Cg(z_i)^2 \right] + \En_\epsilon \sup_{g\in\G} \left[ \sum_{i=1}^n \epsilon_i v[g]_i - (C/4)v[g]_i^2 \right] \end{align} We now argue that the second term is nonpositive. More precisely, we claim that for any $g\in\G$, \begin{align} \label{eq:req_norm_comparison} \frac{1}{4}\sum_{i=1}^n v[g]_i^2 \leq \sum_{i=1}^n g(z_i)^2 \end{align} for some element $v[g]\in V \cup \{\mathbf{0}\}$. First, consider the case $\sum_{i=1}^n g(z_i)^2\leq \gamma^2$. Then $v[g]=\mathbf{0}$ is an element $\gamma$-close to the values of $g$ on the sample, and \eqref{eq:req_norm_comparison} is trivially satisfied. Next, consider the case $\sum_{i=1}^n g(z_i)^2 > \gamma^2$ and write $u=(g(z_1),\ldots,g(z_n))$. The triangle inequality for the Euclidean norm yields $$\|v[g]\| \leq \|v[g]-u\|+\|u\| \leq \gamma+\|u\| \leq 2\|u\|,$$ establishing non-positivity of the second term in \eqref{eq:decomp}. The third term in \eqref{eq:decomp} is upper bounded with the help of Lemma~\ref{eq:finite} as \begin{align*} \En_\epsilon \max_{g\in\G} \left[ \sum_{i=1}^n \epsilon_i v[g]_i - (C/4)v[g]_i^2 \right] \leq \frac{2}{C}\log \cN_2(\G,\gamma). \end{align*} Finally, the first term in \eqref{eq:decomp} is upper bounded using the standard chaining technique, keeping in mind that the $\ell_2$-diameter of the indexing set is at most $\gamma$.
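The norm-comparison step \eqref{eq:req_norm_comparison} can also be checked numerically: build a crude $\gamma$-net greedily (a $\gamma$-packing, which is automatically a $\gamma$-cover) over a hypothetical class of value vectors and verify $\|v[g]\|^2 \leq 4\|u\|^2$. A sketch, with all data made up for illustration:

```python
import math, random

random.seed(1)
n, gamma = 20, 0.5
norm = lambda u: math.sqrt(sum(x * x for x in u))
dist = lambda u, v: norm([a - b for a, b in zip(u, v)]) / math.sqrt(n)

# Hypothetical class: 50 random value vectors (g(z_1), ..., g(z_n)); keep a
# greedy gamma-net V in the normalized empirical l2 distance.
G = [[random.gauss(0, 1) for _ in range(n)] for _ in range(50)]
V = []
for u in G:
    if all(dist(u, v) > gamma for v in V):
        V.append(u)

violations = 0
for u in G:
    if norm(u) / math.sqrt(n) <= gamma:
        v = [0.0] * n                         # the zero vector is gamma-close to u
    else:
        v = min(V, key=lambda w: dist(u, w))  # nearest net point, within gamma
    # ||v|| <= ||v - u|| + ||u|| <= gamma*sqrt(n) + ||u|| <= 2||u||
    if norm(v) ** 2 > 4 * norm(u) ** 2 + 1e-9:
        violations += 1
assert violations == 0
```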
\end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{lma:prob-chaining}}] The proof is similar to the proof of Lemma~\ref{lem:offset_estimate_chaining}. We proceed with the following decomposition: \begin{align*} \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i g(z_i) - Cg(z_i)^2 \right] \leq \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i (g(z_i)-v[g]_i) \right] + \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i v[g]_i - (C/4)v[g]_i^2 \right]. \end{align*} For the first term, we can employ the traditional high-probability chaining bound. For some $c>0$, the following holds: \begin{align*} \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i (g(z_i)-v[g]_i) \right] > u \cdot \inf_{\alpha \in [0,\gamma]} \left\{ 4\alpha + \frac{12}{\sqrt{n}} \int_{\alpha}^{\gamma} \sqrt{\log \mc{N}_2(\G,\delta)} d\delta \right\} \right) \leq \frac{2}{1-e^{-2}} \exp(-cu^2). \end{align*} For the second term, \begin{align*} \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i v[g]_i - (C/4)v[g]_i^2 \right] > \frac{2}{C} \frac{\log \mc{N}_2(\G,\gamma) + u}{n} \right) \leq \exp(-u).
\end{align*} Combining the above two bounds, we have \begin{align*} &\quad \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i g(z_i) - Cg(z_i)^2 \right] > u \cdot \inf_{\alpha \in [0,\gamma]} \left\{ 4\alpha + \frac{12}{\sqrt{n}} \int_{\alpha}^{\gamma} \sqrt{\log \mc{N}_2(\G,\delta)} d\delta \right\} + \frac{2}{C} \frac{\log \mc{N}_2(\G,\gamma) + u}{n} \right) \\ & \leq \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i (g(z_i)-v[g]_i) \right] > u \cdot \inf_{\alpha \in [0,\gamma]} \left\{ 4\alpha + \frac{12}{\sqrt{n}} \int_{\alpha}^{\gamma} \sqrt{\log \mc{N}_2(\G,\delta)} d\delta \right\} \right) \\ & \quad + \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i v[g]_i - (C/4)v[g]_i^2 \right] > \frac{2}{C} \frac{\log \mc{N}_2(\G,\gamma) + u}{n} \right) \\ & \leq \frac{2}{1-e^{-2}} \exp(-cu^2) + \exp(-u). \end{align*} \end{proof} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:crit_radius}}] Denote by $\mathcal{B}$ the unit ball with respect to the $\ell_2$ distance, $\mathcal{B} = \{ h: (\En h^2)^{1/2} \leq 1 \}$, and let $\S$ denote the unit sphere. Choosing any $h \in \H \backslash r\mathcal{B}$, we have $\| h \|_{\ell_2} >r \triangleq \alpha_n(\H, \kappa',\delta)$ with $\kappa'$ to be chosen later. Under the assumption that $\H$ is star-shaped, we know $h_r:=r/\| h \|_{\ell_2}\cdot h \in \H$, thus \begin{align*} &\frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \\ =& \frac{\| h \|_{\ell_2}}{r} \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h_r(X_i) - \left(\frac{\| h \|_{\ell_2}}{r}\right)^2 c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \\ =& \frac{\| h \|_{\ell_2}}{r} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h_r(X_i) - c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} - \frac{\| h \|_{\ell_2}}{r} \left(\frac{\| h \|_{\ell_2}}{r}-1\right) c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i).
\end{align*} Comparing the supremum of the offset Rademacher process outside the ball $r\mathcal{B}$ with the one inside the ball $r\mathcal{B}$, we have \begin{align} & \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} - \sup_{h\in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} \notag\\ & \leq \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \left(\frac{\| h \|_{\ell_2}}{r}-1\right) \sup_{h_r \in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h_r(X_i) - c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} \right. \notag\\ &\left.\hspace{1in} - \frac{\| h \|_{\ell_2}}{r} \left(\frac{\| h \|_{\ell_2}}{r}-1\right) \inf_{h_r \in \H \cap r\S}\left\{ c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} \right\} \notag\\ & \leq \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \left(\frac{\| h \|_{\ell_2}}{r}-1\right) \left\{ \sup_{h_r \in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h_r(X_i) - c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} - \inf_{h_r \in \H \cap r\S}\left\{ c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} \right\} \right\} \label{eq:cr.rad}. \end{align} If $$ \kappa' r^2 \leq c' (1-\epsilon) r^2, $$ we can apply the lower isometry bound of Assumption~\ref{Assump:Low-Iso-Bd} and conclude that $$ \sup_{h\in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} \leq \kappa' r^2 \leq c' (1-\epsilon) r^2 \leq \inf_{h_r \in \H \cap r\S}\left\{ c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} $$ with probability at least $1-2\delta$.
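The rescaling identity used at the start of this proof, writing $h=(\|h\|_{\ell_2}/r)\,h_r$, is pure algebra and can be verified on arbitrary made-up data (illustration only; the empirical norm below merely stands in for $\|h\|_{\ell_2}$):

```python
import random

random.seed(2)
n, r, cp = 8, 0.5, 0.3                          # cp plays the role of c'
eps = [random.choice((-1, 1)) for _ in range(n)]
xi = [random.gauss(0, 1) for _ in range(n)]
h = [random.uniform(1, 3) for _ in range(n)]    # guarantees ||h|| > r

norm_h = (sum(v * v for v in h) / n) ** 0.5
s = norm_h / r                                  # the factor ||h|| / r > 1
h_r = [v / s for v in h]                        # rescaled element (r/||h||) h

offset = lambda g: (2 / n) * sum(e * x * v for e, x, v in zip(eps, xi, g)) \
    - (cp / n) * sum(v * v for v in g)
lhs = offset(h)
rhs = s * offset(h_r) - s * (s - 1) * (cp / n) * sum(v * v for v in h_r)
assert abs(lhs - rhs) < 1e-9
```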
Under this event, the difference of terms in \eqref{eq:cr.rad} is at most $0$, and we conclude \begin{align*} & \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} - \sup_{h\in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\}\\ \leq & \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \left(\frac{\| h \|_{\ell_2}}{r}-1\right) \left\{ \sup_{h_r \in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h_r(X_i) - c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} - \inf_{h_r \in \H \cap r\S}\left\{ c' \frac{1}{n} \sum_{i=1}^n h_r^2(X_i) \right\} \right\} \right\} \\ \leq & \sup_{h\in \H \backslash r\mathcal{B}} \left\{ \left(\frac{\| h \|_{\ell_2}}{r}-1\right) \left( \kappa' r^2 - c' (1-\epsilon) r^2 \right) \right\} \leq 0. \end{align*} Thus the excess loss is upper bounded by the offset Rademacher process, and the latter is further bounded by the process restricted within the critical radius: \begin{align} \sup_{h\in \H} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} & \leq \sup_{h\in \H \cap r\mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i)\right\}\\ & \leq \alpha^2_n(\H, c' (1-\epsilon), \delta) \end{align} with probability at least $1 - 2\delta$. \end{proof} \begin{proof}[\textbf{Proof of Theorem~\ref{thm: mini-low-bd}}] Let $\G = \F + \star(\F-\F)$, so that $\F \subset \G$. The minimax excess loss can be written as \begin{align*} &\inf_{\hat{g} \in \G} \sup_{P} \left\{ \En (\hat{g} - Y)^2 - \inf_{f \in \F} \En (f - Y)^2 \right\} \\ & = \inf_{\hat{g} \in \G} \sup_{P} \left\{ \left\{ - \En 2 Y \hat{g} + \En \hat{g}^2 \right\} + \sup_{f \in \F} \left\{ \En 2 Y f - \En f^2 \right\} \right\}.
\end{align*} Let us now construct a particular distribution $P$ as follows: take any $x_1, x_2,..., x_{(1+c)n} \in \X$ and let $P_X$ be the uniform distribution on these $(1+c)n$ points. For any $\epsilon = (\epsilon_1, \ldots, \epsilon_{(1+c)n}) \in \{ \pm 1\}^{(1+c)n}$, denote the distribution $P_{\epsilon}$ of $(X, Y)$ indexed by $\epsilon$ to be: $X$ is sampled from $P_X$, and $Y_{| X = x_i} = \epsilon_i,~\forall 1\leq i \leq (1+c)n$. Note here $\hat{g} : (X, Y)^{\otimes n} \rightarrow \F+\star(\F-\F)$. Now we proceed with this particular distribution: \begin{align*} &\inf_{\hat{g} \in \G} \sup_{P} \left\{ \left\{ - \En 2 Y \hat{g} + \En \hat{g}^2 \right\} + \sup_{f \in \F} \left\{ \En 2 Y f - \En f^2 \right\} \right\} \\ &\geq \inf_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{\epsilon} \left\{ \left\{ - \En 2 Y \hat{g} + \En \hat{g}^2 \right\} + \sup_{f \in \F} \left\{ \En 2 Y f - \En f^2 \right\} \right\} \\ & \geq \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{\epsilon} \left\{ \sup_{f \in \F} \frac{1}{(1+c)n}\left\{\sum_{i=1}^{(1+c)n} 2 \epsilon_i f(x_i) - f(x_i)^2 \right\} \right\} \\ &\hspace{2in} - \sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{\epsilon} \left\{ 2 \En Y \hat{g} - \En \hat{g}^2 \right\}. \end{align*} Note that the first term is exactly $\Rad^{\sf o}((1+c)n, \F)$. Let us upper bound the second term. Let $i_1,i_2,\ldots, i_n$ be the indices of $n$ samples drawn uniformly with replacement from $\{x_i\}_{i=1}^{(1+c)n}$, and let $I$ be the set of unique indices, so that $|I|\leq n$. Observe that $\hat{g}$ is a function of $(x_{I},Y_{I})$ only, independent of $\epsilon_{j}, j\notin I$.
\begin{align} &\sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{\epsilon} \left\{ 2 \En Y \hat{g} - \En \hat{g}^2 \right\} \notag\\ & \leq \sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{\epsilon} \En_{i_1,\ldots, i_n} \left\{ \frac{1}{(1+c)n} \sum_{i=1}^{(1+c)n} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\}\right\} \notag\\ & = \sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{i_1,\ldots, i_n} \En_{\epsilon} \left\{ \frac{1}{(1+c)n} \sum_{i=1}^{(1+c)n} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\}\right\} \label{eq:long} \end{align} Conditionally on $i_1, i_2,...,i_n$, $$ \En_{\epsilon} \frac{1}{(1+c)n}\sum_{i \notin I} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\} = -\frac{1}{(1+c)n}\sum_{i \notin I} \hat{g}(x_i)^2 \leq 0 , $$ since $\hat{g}$ is independent of $\epsilon_j, j \notin I$. The expression in \eqref{eq:long} is therefore upper bounded by \begin{align*} &\sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{(1+c)n} \in \X^{\otimes (1+c)n}} \En_{i_1,\ldots, i_n} \En_{\epsilon} \left\{ \frac{1}{(1+c)n} \sum_{i\in I} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\}\right\} \\ & \leq \sup_{\hat{g} \in \G} \En_{i_1,\ldots, i_n} \sup_{\{x_i\}_{i=1}^{|I|} \in \X^{\otimes |I|}} \En_{\epsilon} \left\{ \frac{1}{(1+c)n} \sum_{i\in I} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\}\right\} \\ & \leq \sup_{\hat{g} \in \G} \sup_{\{x_i\}_{i=1}^{n} \in \X^{\otimes n}} \En_{\epsilon} \left\{ \frac{1}{(1+c)n} \sum_{i=1 }^{cn} \left\{ 2 \epsilon_i \hat{g}(x_i) - \hat{g}(x_i)^2 \right\}\right\} \\ & \leq \sup_{\{x_i\}_{i=1}^{n} \in \X^{\otimes n}} \En_{\epsilon} \sup_{g \in \G} \left\{ \frac{1}{(1+c)n} \sum_{i=1 }^{cn} \left\{ 2 \epsilon_i g(x_i) - g(x_i)^2 \right\}\right\} \\ &= \frac{c}{1+c}\Rad^{\sf o}(cn, \G). \end{align*} Thus the claim holds.
\end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:finite_agg}}] From Lemma~\ref{lma:star-cov}, we know for $\H = \F -f^* + \star(\F - \F)$, \begin{align*} \log \mc{N}_2(\H, 8 \epsilon) \leq \log \mc{N}_2(\F - f^*, 4\epsilon) + \log \mc{N}_2(\star(\F - \F), 4\epsilon) \leq \log \frac{2}{\epsilon} + 3\log \mc{N}_2(\F, \epsilon). \end{align*} Consider a minimal $\delta$-cover of $\H$ and, for any $h \in \H$, let $v[h]$ be the closest element of the cover. \begin{align*} & \frac{1}{n} \sup_{h \in \H} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i h(X_i) - C h(X_i)^2 \right\} \\ & \leq \frac{1}{n} \sup_{h \in \H} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i (h(X_i) - v[h](X_i)) - C (h(X_i)^2 -v[h](X_i)^2)\right\} + \frac{1}{n} \sup_{v \in \mc{N}_2(\H, \delta)} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i v(X_i) - C v(X_i)^2 \right\} \\ & \leq 2 ( \sqrt{\sum_{i=1}^n \xi_i^2/n} + 2C ) \cdot \delta + \frac{1}{n} \sup_{v \in \mc{N}_2(\H, \delta)} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i v(X_i) - C v(X_i)^2 \right\}. \end{align*} The second term is the offset Rademacher complexity of a finite set of log-cardinality at most $\log (16/\delta) + 3 \log N$, thus applying Lemma~\ref{eq:finite}, \begin{align*} \En_{\epsilon} \frac{1}{n} \sup_{h \in \H} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i h(X_i) - C h(X_i)^2 \right\} & \leq \inf_{\delta>0}\left\{ K \cdot \delta + M \cdot \frac{3 \log N + \log (16/\delta)}{n} \right\} \\ & \leq \tilde{C} \cdot \frac{\log (N \vee n)}{n} \end{align*} where $K:=2 ( \sqrt{\sum_{i=1}^n \xi_i^2/n} + 2C )$ and $M$ is defined in Equation~\eqref{eq:M}. We also have the high probability bound via Lemma~\ref{eq:finite}: \begin{align*} \mbb{P}_{\epsilon} \left( \frac{1}{n} \sup_{h \in \H} \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i h(X_i) - C h(X_i)^2 \right\} \geq \tilde{C} \cdot \frac{\log (N\vee n) + u }{n} \right) \leq e^{-u}. \end{align*} \end{proof} \section{Offset Rademacher Process: Chaining and Critical Radius} Let us summarize the development so far.
We have shown that the excess loss of the Star estimator is upper bounded by the (data-dependent) offset Rademacher complexity, both in expectation and in high probability, under the appropriate assumptions. We claim that the necessary properties of the estimator are now captured by the offset complexity, and we are now squarely in the realm of empirical process theory. In particular, we may want to quantify rates of convergence under complexity assumptions on $\F$, such as covering numbers. In contrast to local Rademacher analyses, where one would need to estimate the data-dependent fixed point of the critical radius in some way, the task is much easier for the offset complexity. To this end, we study the offset process with the tools of empirical process theory. \subsection{Chaining Bounds} The first lemma describes the behavior of the offset Rademacher process for a finite class. \begin{lemma} \label{eq:finite} Let $V\subset \mathbb{R}^n$ be a finite set of vectors of cardinality $N$. Then for any $C>0$, \begin{align*} \En_\epsilon \max_{v\in V} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] \leq \frac{1}{2C} \frac{\log N}{n}. \end{align*} Furthermore, for any $\delta>0$, \begin{align*} \Pr{\max_{v\in V} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i v_i - Cv_i^2 \right] \geq \frac{1}{2C} \frac{\log N+ \log 1/\delta}{n} } \leq \delta. \end{align*} When the noise $\xi$ is unbounded, \begin{align*} &\En_\epsilon \max_{v \in V } \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i \xi_i v_i - C v_i^2 \right] \leq M \cdot \frac{\log N }{n}, \\ &\mbb{P}_{\epsilon} \left( \max_{v\in V} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i \xi_i v_i - Cv_i^2 \right] \geq M \cdot \frac{\log N+ \log 1/\delta}{n} \right) \leq \delta, \end{align*} where \begin{align} \label{eq:M} M : = \sup_{v \in V \setminus \{0\}} \frac{ \sum_{i=1}^n v_i^2 \xi_i^2}{2C \sum_{i=1}^n v_i^2}.
\end{align} \end{lemma} Armed with the lemma for a finite collection, we upper bound the offset Rademacher complexity of a general class through the chaining technique. We perform the analysis in expectation and in probability. Recall that a $\delta$-cover of a subset $S$ in a metric space $(T,d)$ is a collection of elements such that the union of the $\delta$-balls with centers at the elements contains $S$. A covering number at scale $\delta$ is the size of the minimal $\delta$-cover. One of the main objectives of symmetrization is to arrive at a stochastic process that can be studied conditionally on data, so that all the relevant complexities can be made sample-based (or, empirical). Since the functions only enter offset Rademacher complexity through their values on the sample $X_1,\ldots,X_n$, we are left with a finite-dimensional object. Throughout the paper, we work with the empirical $\ell_2$ distance $$d_n(f,g) = \left( \frac{1}{n}\sum_{i=1}^n (f(X_i)-g(X_i))^2\right)^{1/2}.$$ The covering number of $\G$ at scale $\delta$ with respect to $d_n$ will be denoted by $\cN_2(\G,\delta)$. \begin{lemma} \label{lem:offset_estimate_chaining} Let $\G$ be a class of functions from $\Z$ to $\mathbb{R}$. Then for any $z_1,\ldots,z_n \in \Z$, \begin{align*} \En_\epsilon \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i g(z_i) - Cg(z_i)^2 \right] &\leq \inf_{\gamma\geq 0, \alpha\in [0,\gamma]} \left\{ \frac{(2/C) \log \cN_2(\G, \gamma)}{n} \right.\\ &\left.\hspace{1.5in} + 4\alpha + \frac{12}{\sqrt{n}}\int_{\alpha}^\gamma \sqrt{\log\cN_2(\G, \delta)}d\delta \right\} \end{align*} where $\cN_2(\G,\gamma)$ is an $\ell_2$-cover of $\G$ on $(z_1,\ldots,z_n)$ at scale $\gamma$ (assumed to contain $\bf{0}$). \end{lemma} Instead of assuming that $\bf{0}$ is contained in the cover, we may simply increase the size of the cover by $1$, which can be absorbed by a small change of a constant. Let us discuss the upper bound of Lemma~\ref{lem:offset_estimate_chaining}.
First, we may take $\alpha=0$, unless the integral diverges (which happens for very large classes with entropy growth of $\log \cN_2(\G,\delta) \sim \delta^{-p}$, $p\geq 2$). Next, observe that the first term is precisely the rate of aggregation with a finite collection of size $\cN_2(\G,\gamma)$. Hence, the upper bound is an optimal balance of the following procedure: cover the set at scale $\gamma$ and pay the rate of aggregation for this finite collection, plus pay the rate of convergence of ERM within a $\gamma$-ball. The optimal balance is given by some $\gamma$ (and can be easily computed under assumptions on covering number behavior --- see \citep{RakSri14nonparam}). The optimal $\gamma$ quantifies the localization radius that arises from the curvature of the loss function. One may also view the optimal balance as the well-known equation $$\frac{\log \cN(\G,\gamma)}{n} \asymp \gamma^2,$$ studied in statistics \citep{yang1999information} for well-specified models. The present paper, as well as \citep{RakSriTsy15}, extends the analysis of this balance to the misspecified case and non-convex classes of functions. Now we provide a high probability analogue of Lemma \ref{lem:offset_estimate_chaining}. \begin{lemma} \label{lma:prob-chaining} Let $\G$ be a class of functions from $\Z$ to $\mathbb{R}$. Then for any $z_1,\ldots,z_n \in \Z$ and any $u>0$, \small{ \begin{align*} & \quad \mbb{P}_\epsilon \left( \sup_{g\in\G} \left[ \frac{1}{n}\sum_{i=1}^n \epsilon_i g(z_i) - Cg(z_i)^2 \right] > u \cdot \inf_{\alpha \in [0,\gamma]} \left\{ 4\alpha + \frac{12}{\sqrt{n}} \int_{\alpha}^{\gamma} \sqrt{\log \mc{N}_2(\G,\delta)} d\delta \right\} + \frac{2}{C} \frac{\log \mc{N}_2(\G,\gamma) + u}{n} \right)\\ & \leq \frac{2}{1-e^{-2}} \exp(-cu^2) + \exp(-u) \end{align*}} where $\cN_2(\G,\gamma)$ is an $\ell_2$-cover of $\G$ on $(z_1,\ldots,z_n)$ at scale $\gamma$ (assumed to contain $\bf{0}$) and $C,c>0$ are universal constants.
\end{lemma} The above lemmas study the behavior of offset Rademacher complexity for abstract classes $\G$. Observe that the upper bounds in previous sections are in terms of the class $\F-f^*+\star(\F-\F)$. This class, however, is not more complex than the original class $\F$ (with the exception of a finite class $\F$). More precisely, the covering numbers of $\F + \F' : = \{ f+g: f \in \F, g \in \F' \}$ and $\F - \F' := \{ f -g: f \in \F , g \in \F' \}$ are bounded as \begin{align*} \log \mc{N}_{2}(\F+\F', 2 \epsilon), ~ \log \mc{N}_{2}(\F-\F', 2 \epsilon) \leq \log \mc{N}_2(\F, \epsilon) + \log \mc{N}_2(\F', \epsilon) \end{align*} for any $\F,\F'$. The following lemma shows that the complexity of the star hull $\star(\F)$ is also not significantly larger than that of $\F$. \begin{lemma}[\cite{mendelson2002improving}, Lemma 4.5] \label{lma:star-cov} For any scale $\epsilon>0$, the covering numbers of $\F \subset \mathcal{B}_2$ and of its star hull $\star(\F)$ satisfy \begin{align*} \log \mc{N}_2(\F,2\epsilon) \leq \log \mc{N}_2(\star(\F),2\epsilon) \leq \log \frac{2}{\epsilon} + \log \mc{N}_2(\F,\epsilon). \end{align*} \end{lemma} \subsection{Critical Radius} Now let us study the critical radius of offset Rademacher processes. Let $\xi = Y - f^*$ and define \begin{align} \label{eq:alpha} \alpha_n(\H, \kappa, \delta ) \triangleq \inf \left\{ r > 0 : \mbb{P} \left( \sup_{h \in \H \cap r\mathcal{B}} \left\{ \frac{1}{n} \sum_{i=1}^n 2\epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} \leq \kappa r^2 \right) \geq 1-\delta \right\}. \end{align} \begin{theorem} \label{thm:crit_radius} Assume $\H$ is star-shaped around $0$ and that the lower isometry bound (Assumption~\ref{Assump:Low-Iso-Bd}) holds with parameters $\delta$ and $\epsilon$. Define the critical radius $$ r = \alpha_n(\H, c' (1-\epsilon), \delta) .
$$ Then we have with probability at least $1-2\delta$, \begin{align*} \sup_{h \in \H} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} =\sup_{h \in \H \cap r \mathcal{B}} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\}, \end{align*} which further implies \begin{align*} \sup_{h \in \H} \left\{ \frac{2}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - c' \frac{1}{n} \sum_{i=1}^n h^2(X_i) \right\} \leq r^2. \end{align*} \end{theorem} The first statement of Theorem~\ref{thm:crit_radius} shows the self-modulating behavior of the offset process: there is a critical radius, beyond which the fluctuations of the offset process are controlled by those within the radius. To understand the second statement, we observe that the complexity $\alpha_n$ is upper bounded by the corresponding complexity in \citep{Mendelson14}, which is defined without the quadratic term subtracted off. Hence, the offset Rademacher complexity is no larger (under our Assumption~\ref{Assump:Low-Iso-Bd}) than the upper bounds obtained by \cite{Mendelson14} in terms of the critical radius. \section{Examples} \label{sec:examples} In this section, we briefly describe several applications. The first is concerned with parametric regression. \begin{lemma} Consider the parametric regression model $Y_i = X_i^T \beta^* + \xi_i$, $1\leq i\leq n$, where the $\xi_i$ need not be centered.
The offset Rademacher complexity is bounded as \begin{align*} &\En_{\epsilon} \sup_{\beta \in \mbb{R}^p } \left\{ \frac{1}{n}\sum_{i=1}^n 2 \epsilon_i \xi_i X_i^T \beta - C \beta^T X_i X_i^T \beta \right\} = \frac{{\sf tr}\left(G^{-1}H\right)}{Cn} \end{align*} and \small{ \begin{align*} &\mbb{P}_{\epsilon} \left( \sup_{\beta \in \mbb{R}^p } \left\{ \frac{1}{n} \sum_{i=1}^n 2 \epsilon_i \xi_i X_i^T \beta - C \beta^T X_i X_i^T \beta \right\} \geq \frac{{\sf tr}\left(G^{-1} H\right)}{Cn} + \frac{\sqrt{{\sf tr}\left([G^{-1}H]^2\right) }}{Cn} (4\sqrt{2\log \frac{1}{\delta}} + 64 \log \frac{1}{\delta}) \right) \leq \delta \end{align*}} where $G: = \sum_{i=1}^n X_i X_i^T$ is the Gram matrix and $H = \sum_{i=1}^n \xi_i^2 X_i X_i^T$. In the well-specified case (that is, when the $\xi_i$ are zero-mean), assuming the conditional variance is $\sigma^2$, we have $\En [G^{-1}H] = \sigma^2 I_p$ conditionally on the design matrix, and the excess loss is upper bounded by a quantity of order $\frac{\sigma^2 p}{n}$. \end{lemma} \begin{proof} The supremum in the offset Rademacher process can be evaluated in closed form (a Fenchel--Legendre transform of the quadratic): \begin{align} \label{eq:rad-chaos} \sup_{\beta \in \mbb{R}^p } \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i X_i^T \beta - C \beta^T X_i X_i^T \beta \right\} = \frac{\sum_{i,j=1}^n \epsilon_i \epsilon_j \xi_i \xi_j X_i^T G^{-1} X_j}{C}. \end{align} Thus we have in expectation \begin{align} \En_{\epsilon} \frac{1}{n} \sup_{\beta \in \mbb{R}^p } \left\{ \sum_{i=1}^n 2 \epsilon_i \xi_i X_i^T \beta - C \beta^T X_i X_i^T \beta \right\} = \frac{\sum_{i=1}^n \xi_i^2 X_i^T G^{-1} X_i}{C n} = \frac{{\sf tr}[G^{-1} (\sum_{i=1}^n \xi_i^2 X_i X_i^T )]}{Cn}. \end{align} For the high probability bound, note that the expression in Equation~\eqref{eq:rad-chaos} is a Rademacher chaos of order two.
Define the symmetric matrix $M \in \mbb{R}^{n\times n}$ with entries $$ M_{ij} = \xi_i \xi_j X_i^T G^{-1} X_j $$ and define $$ Z = \sum_{i,j=1}^n \epsilon_i \epsilon_j \xi_i \xi_j X_i^T G^{-1} X_j = \sum_{i,j=1}^n \epsilon_i \epsilon_j M_{ij}. $$ Then $$ \En Z = {\sf tr}[G^{-1} (\sum_{i=1}^n \xi_i^2 X_i X_i^T )], $$ and $$ \En \sum_{i=1}^n(\sum_{j=1}^n \epsilon_j M_{ij})^2 = \| M \|_{F}^2 = {\sf tr}[G^{-1}(\sum_{i=1}^n \xi_i^2 X_i X_i^T)G^{-1}(\sum_{i=1}^n \xi_i^2 X_i X_i^T )]. $$ Furthermore, $$ \| M \| \leq \| M \|_{F} = \sqrt{{\sf tr}[G^{-1}(\sum_{i=1}^n \xi_i^2 X_i X_i^T)G^{-1}(\sum_{i=1}^n \xi_i^2 X_i X_i^T )] }. $$ We apply the concentration result in \citep[Exercise 6.9]{boucheron2013concentration}, \begin{align} \mbb{P} \left( Z - \mbb{E} Z \geq 4\sqrt{2} \| M \|_F \sqrt{t} + 64 \| M \| t \right) \leq e^{-t}. \end{align} \end{proof} For the finite dictionary aggregation problem, the following lemma shows control of the offset Rademacher complexity. \begin{lemma} \label{lem:finite_agg} Assume $\F \subset \mathcal{B}_2$ is a finite class of cardinality $N$. Define $\H = \F -f^* + \star(\F - \F)$, which contains $\widehat{f} -f^*$ for the Star estimator defined in Equation~\eqref{eq:def_estimator}. The offset Rademacher complexity for $\H$ is bounded as \begin{align*} &\En_{\epsilon} \sup_{h \in \H} \left\{ \frac{1}{n}\sum_{i=1}^n 2 \epsilon_i \xi_i h(X_i) - C h(X_i)^2 \right\} \leq \tilde{C} \cdot \frac{\log (N \vee n)}{n} \end{align*} and \begin{align*} \mbb{P}_{\epsilon} \left( \sup_{h \in \H} \left\{ \frac{1}{n}\sum_{i=1}^n 2 \epsilon_i \xi_i h(X_i) - C h(X_i)^2 \right\} \geq \tilde{C} \cdot \frac{\log (N\vee n) + \log \frac{1}{\delta} }{n} \right) \leq \delta .
\end{align*} where $\tilde{C}$ is a constant depending on $K:=2 ( \sqrt{\sum_{i=1}^n \xi_i^2/n} + 2C )$ and $$M : = \sup_{h \in \H \setminus \{0\}} \frac{ \sum_{i=1}^n h(X_i)^2 \xi_i^2}{2C \sum_{i=1}^n h(X_i)^2}.$$ \end{lemma} We observe that the bound of Lemma~\ref{lem:finite_agg} is worse than the optimal bound of \citep{audibert2007progressive} by an additive $\frac{\log n}{n}$ term. This is because the analysis for the finite case passes through the offset Rademacher complexity of the star hull, and in this case the star hull is richer than the finite class itself. A direct analysis of the Star estimator for the finite case is provided in \citep{audibert2007progressive}. While the offset complexity of the star hull is crude for the finite case, the offset Rademacher complexity \emph{does} capture the correct rates for regression with larger classes, initially derived in \citep{RakSriTsy15}. We briefly mention the result. The proof is identical to the one in \citep{RakSri14nonparam}, with the only difference that the offset Rademacher complexity is defined in that paper as a sequential complexity in the context of online learning. \begin{corollary} Consider the problem of nonparametric regression, as quantified by the growth $$\log \cN_2(\F,\epsilon) \leq \epsilon^{-p}.$$ In the regime $p\in(0,2)$, the upper bound of Lemma~\ref{lma:prob-chaining} scales as $n^{-\frac{2}{2+p}}$. In the regime $p\geq 2$, the bound scales as $n^{-1/p}$, with an extra logarithmic factor at $p=2$. \end{corollary} For the parametric case of $p=0$, one may also readily estimate the offset complexity. Results for VC classes, sparse combinations of dictionary elements, and other parametric cases follow easily by plugging in the estimate for the covering number or directly upper bounding the offset complexity (see \cite{RakSriTsy15, RakSri14nonparam}). \section{A Geometric Inequality} We start by proving a geometric inequality for the Star estimator.
This deterministic inequality holds conditionally on $X_1,\ldots,X_n$, and therefore reduces to a problem in $\mathbb{R}^n$. \begin{lemma}[Geometric Inequality] \label{lem:angle_ineq} The two-step estimator $\widehat{f}$ in \eqref{eq:def_estimator} satisfies \begin{align} \label{claim.angle} \widehat{\En}(h - Y)^2 - \widehat{\En}(\widehat{f} - Y)^2 \geq c \cdot \widehat{\En}(\widehat{f} - h)^2 \end{align} for any $h\in\F$ and $c= 1/18$. If $\F$ is convex, \eqref{claim.angle} holds with $c=1$. Moreover, if $\F$ is a linear subspace, \eqref{claim.angle} holds with equality and $c=1$ by the Pythagorean theorem. \end{lemma} \begin{remark} In the absence of convexity of $\F$, the two-step estimator $\widehat{f}$ mimics the key Pythagorean identity, though with a constant $1/18$. We have not focused on optimizing $c$ but rather on presenting a clean geometric argument. \end{remark} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:angle_ineq}}] \begin{figure}[h] \centering \includegraphics[width=.3\textwidth]{geometry_angle.pdf} \caption{The geometric configuration in the proof of Lemma~\ref{lem:angle_ineq}.} \label{fig:geometry} \end{figure} Define the empirical $\ell_2$ norm as $\| f \|_{n}:= [\widehat{\En} f^2]^{1/2}$ and the empirical inner product as $\langle f, g\rangle_{n} := \widehat{\En} [f g]$, for any $f,g$. We will slightly abuse the notation by identifying every function with its finite-dimensional projection on $(X_1,\ldots,X_n)$. Denote the ball (and sphere) centered at $Y$ with radius $\|\widehat{g}-Y\|_n$ by $\mathcal{B}_{1}: = \mathcal{B}(Y, \| \widehat{g} - Y\|_{n})$ (and $\S_1$, correspondingly). In a similar manner, define $\mathcal{B}_2: = \mathcal{B}(Y, \| \widehat{f} - Y\|_{n})$ and $\S_2$. By the definition of the Star algorithm, we have $\mathcal{B}_{2} \subseteq \mathcal{B}_{1}$. The statement holds with $c=1$ if $\widehat{f} = \widehat{g}$, and so we may assume $\mathcal{B}_2\subset \mathcal{B}_1$. Denote by $\mathcal{C}$ the conic hull of $\mathcal{B}_2$ with origin at $\widehat{g}$.
Define the spherical cap outside the cone $\mathcal{C}$ to be $\S = \S_1 \setminus \mathcal{C}$ (drawn in red in Figure~\ref{fig:geometry}). First, by the optimality of $\widehat{g}$, for any $h \in \F$, we have $\| h-Y\|_{n}^2 \geq \|\widehat{g}-Y\|_{n}^2$, i.e.\ any $h\in \F$ is not in the interior of $\mathcal{B}_1$. Furthermore, $h$ is not in the interior of the cone $\mathcal{C}$, as otherwise there would be a point inside $\mathcal{B}_2$ strictly better than $\widehat{f}$. Thus $h \in (\text{int}\, \mathcal{C})^c \cap (\text{int}\, \mathcal{B}_1)^c$. Second, $\widehat{f} \in \mathcal{B}_2$ and it is a contact point of $\mathcal{C}$ and $\S_2$. Indeed, $\widehat{f}$ is necessarily on a line segment between $\widehat{g}$ and a point outside $\mathcal{B}_1$ that does not pass through the interior of $\mathcal{B}_2$, by optimality of $\widehat{f}$. Let $K$ be the set of all contact points -- potential locations of $\widehat{f}$. Now we fix $h\in\F$ and consider the two-dimensional plane $\mathcal{L}$ that passes through the three points $(\widehat{g}, Y, h)$, depicted in Figure~\ref{fig:geometry}. Observe that the left-hand side of the desired inequality \eqref{claim.angle} is constant as $\widehat{f}$ ranges over $K$. To prove the inequality it therefore suffices to choose a value $f'\in K$ that maximizes the right-hand side. The maximization of $\|h-f'\|_{n}^2$ over $f'\in K$ is achieved by $f'\in K\cap \mathcal{L}$. This can be argued simply by symmetry: the two-dimensional plane $\mathcal{L}$ intersects ${\sf span}(K)$ in a line and the distance between $h$ and $K$ is maximized at the extreme point of this intersection. Hence, to prove the desired inequality, we can restrict our attention to the plane $\mathcal{L}$ and $f'$ instead of $\widehat{f}$. For any $h \in \F$, define the projection of $h$ onto the shell $\mathcal{L}\cap \S$ to be $h_{\perp} \in \S$. We first prove \eqref{claim.angle} for $h_{\perp}$ and then extend the statement to $h$.
By the geometry of the cone, $$ \| f' - \widehat{g} \|_{n} \geq \frac{1}{2} \| \widehat{g} - h_{\perp}\|_{n} . $$ By the triangle inequality, \begin{align*} \|f'- \widehat{g}\|_{n} \geq \frac{1}{2} \|\widehat{g} - h_{\perp}\|_{n} \geq \frac{1}{2} \left( \|f'- h_{\perp}\|_{n} - \|f' - \widehat{g} \|_{n} \right) . \end{align*} Rearranging, $3 \|f' - \widehat{g}\|_{n} \geq \|f' - h_{\perp}\|_{n}$, and hence \begin{align*} \| f' - \widehat{g}\|_{n}^2 \geq \frac{1}{9} \|f' - h_{\perp}\|_{n}^2 . \end{align*} By the Pythagorean theorem, $$ \| h_{\perp} - Y\|_{n}^2 - \|f' - Y\|_{n}^2 = \|\widehat{g} - Y\|_{n}^2 - \|f' - Y\|_{n}^2 = \|f' - \widehat{g}\|_{n}^2 \geq \frac{1}{9} \|f' - h_{\perp}\|_{n}^2, $$ thus proving the claim for $h_{\perp}$ with constant $c = 1/9$. We can now extend the claim to $h$. Indeed, due to the fact that $h \in (\text{int}\, \mathcal{C})^c \cap (\text{int}\, \mathcal{B}_1)^c$ and the geometry of the projection $h \rightarrow h_{\perp}$, we have $\langle h_{\perp} - Y, h_{\perp} - h \rangle_{n} \leq 0$. Thus \begin{align*} \| h - Y\|_{n}^2 - \|f'-Y \|_{n}^2 & = \| h_{\perp} - h \|_{n}^2 + \| h_{\perp} - Y\|_{n}^2 -2\langle h_{\perp} - Y, h_{\perp} - h \rangle_{n} -\|f'-Y \|_{n}^2 \\ & \geq \| h_{\perp} - h \|_{n}^2 + (\| h_{\perp} - Y\|_{n}^2-\|f'-Y \|_{n}^2) \\ & \geq \| h_{\perp} - h \|_{n}^2+ \frac{1}{9} \| f' - h_{\perp} \|_{n}^2 \geq \frac{1}{18} (\| h_{\perp} - h \|_{n}+\|f' - h_{\perp} \|_{n})^2 \\ & \geq \frac{1}{18} \| f'-h \|_{n}^2. \end{align*} This proves the claim for $h$ with constant $1/18$. \end{proof} An upper bound on excess loss follows immediately from Lemma~\ref{lem:angle_ineq}.
\begin{corollary} \label{cor:excess_loss_bound_deterministic} Conditionally on the data $\{(X_i, Y_i)\}_{i=1}^n$, we have a deterministic upper bound for the Star algorithm: \begin{align} \label{eq:excess_loss_bound_deterministic} \mathcal{E}(\widehat{f}) &\leq (\widehat{\En}-\En) [2(f^*-Y)(f^* - \widehat{f})] + \En (f^* - \widehat{f})^2 - (1+c) \cdot \widehat{\En}(f^* - \widehat{f})^2, \end{align} with the value of the constant $c$ given in Lemma~\ref{lem:angle_ineq}. \end{corollary} \begin{proof} \begin{align*} \mathcal{E}(\widehat{f}) & = \En(\widehat{f}(X) - Y)^2 - \inf_{f \in \F} \En(f(X)-Y)^2 \\ & \leq \En(\widehat{f} - Y)^2 - \En(f^*-Y)^2 + \left[ \widehat{\En}(f^* - Y)^2 - \widehat{\En}(\widehat{f} - Y)^2 - c \cdot \widehat{\En}(\widehat{f} - f^*)^2 \right] \\ & = (\widehat{\En}-\En) [2(f^*-Y)(f^* - \widehat{f})] + \En (f^* - \widehat{f})^2 - (1+c) \cdot \widehat{\En}(f^* - \widehat{f})^2. \end{align*} \end{proof} An attentive reader will notice that the multiplier on the negative empirical quadratic term in \eqref{eq:excess_loss_bound_deterministic} is slightly larger than the one on the expected quadratic term. This is the starting point of the analysis that follows. \section{Introduction} Determining the finite-sample behavior of risk in the problem of regression is arguably one of the most basic problems of Learning Theory and Statistics. This behavior can be studied in substantial generality with the tools of empirical process theory. When functions in a given convex class are uniformly bounded, one may verify the so-called ``Bernstein condition.'' The condition---which relates the variance of the increments of the empirical process to their expectation---implies a certain localization phenomenon around the optimum and forms the basis of the analysis via \emph{local Rademacher complexities}.
The technique has been developed in \citep{KolPan00,koltchinskii2011oracle,bousquet2002some,bartlett2005local,bousquet2002concentration}, among others, based on Talagrand's celebrated concentration inequality for the supremum of an empirical process. In a recent pathbreaking paper, \cite{Mendelson14} showed that a large part of this heavy machinery is not necessary for obtaining tight upper bounds on excess loss, even---and especially---if functions are unbounded. Mendelson observed that only one-sided control of the tail is required in the deviation inequality, and, thankfully, it is the tail that can be controlled under very mild assumptions. In a parallel line of work, the search within the online learning setting for an analogue of ``localization'' has led to a notion of an ``offset'' Rademacher process \citep{RakSri14nonparam}, yielding---in a rather clean manner---optimal rates for minimax regret in online supervised learning. It was also shown that the supremum of the offset process is a lower bound on the minimax value, thus establishing its intrinsic nature. The present paper blends the ideas of \cite{Mendelson14} and \cite{RakSri14nonparam}. We introduce the notion of an offset Rademacher process for i.i.d. data and show that the supremum of this process upper bounds (both in expectation and in high probability) the excess risk of an empirical risk minimizer (for convex classes) and a two-step Star estimator of \cite{audibert2007progressive} (for arbitrary classes). The statement holds under a weak assumption even if functions are not uniformly bounded. The offset Rademacher complexity provides an intuitive alternative to the machinery of local Rademacher averages. Let us recall that the Rademacher process indexed by a function class $\G\subseteq \mathbb{R}^\X$ is defined as a stochastic process $g\mapsto \frac{1}{n}\sum_{t=1}^n \epsilon_t g(x_t)$ where $x_1,\ldots,x_n \in\X$ are held fixed and $\epsilon_1,\ldots,\epsilon_n$ are i.i.d. 
Rademacher random variables. We define the offset Rademacher process as a stochastic process $$g\mapsto \frac{1}{n}\sum_{t=1}^n \epsilon_t g(x_t) - c g(x_t)^2$$ for some $c\geq 0$. The process itself captures the notion of localization: when $g$ is large in magnitude, the negative quadratic term acts as a compensator and ``extinguishes'' the fluctuations of the term involving Rademacher variables. The supremum of the process will be termed \emph{offset Rademacher complexity}, and one may expect that this complexity is of a smaller order than the classical Rademacher averages (which, without localization, cannot be better than the rate of $n^{-1/2}$). The self-modulating property of the offset complexity can be illustrated on the canonical example of a linear class $\G = \{x\mapsto w^\ensuremath{{\scriptscriptstyle\mathsf{T}}} x: w\in \mathbb{R}^p\}$, in which case the offset Rademacher complexity becomes $$ \frac{1}{n}\sup_{w\in\mathbb{R}^p} \left\{ w^\ensuremath{{\scriptscriptstyle\mathsf{T}}} \left(\sum_{t=1}^n \epsilon_t x_t\right) - c \|w\|_\Sigma^2 \right\} = \frac{1}{4cn} \left\|\sum_{t=1}^n \epsilon_t x_t\right\|^2_{\Sigma^{-1}}$$ where $\Sigma=\sum_{t=1}^n x_tx_t^\ensuremath{{\scriptscriptstyle\mathsf{T}}}$. Under mild conditions, the above expression is of the order $\mathcal{O}\left(p/n\right)$ in expectation and in high probability --- a familiar rate achieved by ordinary least squares, at least in the case of a well-specified model. We refer to Section~\ref{sec:examples} for the precise statement for both the well-specified and the misspecified case. Our contributions can be summarized as follows. First, we show that offset Rademacher complexity is an upper bound on the excess loss of the proposed estimator, both in expectation and in deviation. We then extend the chaining technique to quantify the behavior of the supremum of the offset process in terms of covering numbers.
By doing so, we recover the rates of aggregation established in \citep{RakSriTsy15} and, unlike the latter paper, the present method does not require boundedness (of the noise and functions). We provide a lower bound on minimax excess loss in terms of offset Rademacher complexity, indicating its intrinsic nature for the problems of regression. While our in-expectation results for bounded functions do not require any assumptions, the high probability statements rest on a lower isometry assumption that holds, for instance, for subgaussian classes. We show that offset Rademacher complexity can be further upper bounded by the fixed-point complexities defined by Mendelson \cite{Mendelson14}. We conclude with the analysis of ordinary least squares. \section{Problem Description and the Estimator} Let $\F$ be a class of functions on a probability space $(\X,P_X)$. The response is given by an unknown random variable $Y$, distributed jointly with $X$ according to $P=P_X \times P_{Y|X}$. We observe a sample $(X_1,Y_1),\ldots,(X_n,Y_n)$ distributed i.i.d. according to $P$ and aim to construct an estimator $\widehat{f}$ with small excess loss $\mathcal{E}(\widehat{f})$, where \begin{align} \mathcal{E}(g) ~\triangleq~ \En (g-Y)^2 - \inf_{f\in\F} \En(f-Y)^2 \end{align} and $\En(f-Y)^2 = \En(f(X)-Y)^2$ is the expectation with respect to $(X,Y)$. Let $\widehat{\En}$ denote the empirical expectation operator and define the following two-step procedure: \begin{align} \label{eq:def_estimator} \widehat{g}=\argmin{f\in\F} \widehat{\En}(f(X)-Y)^2, ~~~~ \widehat{f}=\argmin{f\in \star(\F,\widehat{g})} \widehat{\En}(f(X)-Y)^2 \end{align} where $\star(\F,g)=\{\lambda g+(1-\lambda)f: f\in\F, \lambda\in[0,1]\}$ is the star hull of $\F$ around $g$. (we abbreviate $\star(\F,0)$ as $\star(\F)$.) This two-step estimator was introduced (to the best of our knowledge) by \cite{audibert2007progressive} for a finite class $\F$. We will refer to the procedure as the Star estimator. 
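For a finite class, the two-step procedure \eqref{eq:def_estimator} is straightforward to implement: identifying each function with its vector of values on the sample, the second step reduces to one one-dimensional quadratic minimization over $\lambda\in[0,1]$ per element of $\F$. The following minimal sketch (synthetic data; the function and variable names are illustrative, not from the text) carries out the procedure and numerically checks the geometric inequality \eqref{claim.angle} with $c=1/18$:

```python
import numpy as np

def star_estimator(F, Y):
    """Two-step Star estimator for a finite class.

    F : array of shape (N, n), rows are function values on the n sample points.
    Y : array of shape (n,), observed responses.
    """
    # Step 1: empirical risk minimizer g over the finite class F.
    g = F[np.argmin(((F - Y) ** 2).mean(axis=1))]
    # Step 2: minimize over the star hull {lam * g + (1 - lam) * f : f in F, lam in [0, 1]}.
    best, best_loss = g, ((g - Y) ** 2).mean()
    for f in F:
        d = g - f
        denom = (d ** 2).mean()
        if denom == 0.0:
            continue
        # The loss along the segment f + lam * (g - f) is a quadratic in lam;
        # its unconstrained minimizer is clipped to [0, 1].
        lam = np.clip(((Y - f) * d).mean() / denom, 0.0, 1.0)
        h = lam * g + (1.0 - lam) * f
        loss = ((h - Y) ** 2).mean()
        if loss < best_loss:
            best, best_loss = h, loss
    return best

rng = np.random.default_rng(0)
n, N = 50, 20
F = rng.normal(size=(N, n))   # a (non-convex) finite class
Y = rng.normal(size=n)
f_hat = star_estimator(F, Y)
# Sanity checks: the second step can only improve on ERM, and the
# geometric inequality holds with c = 1/18 for every h in the class.
assert ((f_hat - Y) ** 2).mean() <= ((F - Y) ** 2).mean(axis=1).min() + 1e-12
for h in F:
    gap = ((h - Y) ** 2).mean() - ((f_hat - Y) ** 2).mean()
    assert gap >= ((f_hat - h) ** 2).mean() / 18 - 1e-9
```

Note that the inequality is checked in the empirical norm $\widehat{\En}$, exactly as stated in Lemma~\ref{lem:angle_ineq}.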
Audibert showed that this method is deviation-optimal for finite aggregation --- the first such result, followed by other estimators with similar properties \citep{lecue2009aggregation,dai2012deviation} for the finite case. We present an analysis that quantifies the behavior of this method for arbitrary classes of functions. The method has several nice features. First, it provides an alternative to the 3-stage discretization method of \cite{RakSriTsy15}, does not require prior knowledge of the entropy of the class, and goes beyond the bounded case. Second, it admits an upper bound in terms of offset Rademacher complexity via relatively routine arguments under rather weak assumptions. Third, it naturally reduces to empirical risk minimization for convex classes (indeed, this happens whenever $\star(\F,\widehat{g})=\F$). Let $f^*$ denote the minimizer $$ f^* = \argmin{f \in \F} \En (f(X) - Y)^2, $$ and let $\xi$ denote the ``noise'' $$ \xi = Y - f^*. $$ We say that the model is misspecified if the regression function $\En[Y|X=x]\notin \F$, which means $\xi$ is not zero-mean. Otherwise, we say that the model is well-specified. \section{Lower Bound on Minimax Regret via Offset Rademacher Complexity} We conclude this paper with a lower bound on minimax regret in terms of offset Rademacher complexity. \begin{theorem}[Minimax Lower Bound on Regret] \label{thm: mini-low-bd} Define the offset Rademacher complexity over $\X^{\otimes n}$ as \begin{align*} \Rad^{\sf o}(n, \F) = \sup_{\{x_i\}_{i=1}^n \in \X^{\otimes n}} \En_{\epsilon} \sup_{f \in \F} \left\{ \frac{1}{n} \sum_{i=1}^n 2 \epsilon_i f(x_i) -f(x_i)^2 \right\} \end{align*} then the following minimax lower bound on regret holds: \begin{align*} \inf_{\hat{g} \in \G} \sup_{P} \left\{ \En (\hat{g} - Y)^2 - \inf_{f \in \F} \En (f - Y)^2 \right\} \geq \Rad^{\sf o}((1+c)n, \F) - \frac{c}{1+c}\Rad^{\sf o}(cn, \G), \end{align*} for any $c>0$.
\end{theorem} For the purposes of matching the performance of the Star procedure, we can take $\G=\F+\star(\F-\F)$. \section*{Acknowledgements} We thank Shahar Mendelson for many helpful discussions and for providing valuable feedback on this paper. \bibliographystyle{apalike} \section{Symmetrization} We will now show that the discrepancy in the multiplier constant in \eqref{eq:excess_loss_bound_deterministic} leads to offset Rademacher complexity through rather elementary symmetrization inequalities. We perform this analysis both in expectation (for the case of bounded functions) and in high probability (for the general unbounded case). While the former result follows from the latter, the in-expectation statement for bounded functions requires no assumptions, in contrast to control of the tails. \begin{theorem} \label{thm:bounded_offset} Define the set $\H := \F - f^* + \star(\F - \F)$. The following expectation bound on excess loss of the Star estimator holds: \begin{align*} \En\mathcal{E}(\widehat{f}) \leq (2M + K(2+c)/2) \cdot \En \sup_{h \in \H} \left\{ \frac{1}{n}\sum_{i=1}^n 2 \epsilon_i h(X_i) - c' h(X_i)^2 \right\} \end{align*} where $\epsilon_1,\ldots,\epsilon_n$ are independent Rademacher random variables, $c' = \min\{ \frac{c}{4M},\frac{c}{4K(2+c)}\}$, $K=\sup_{f} |f|_\infty$, and $M = \sup_{f} |Y-f|_\infty$ almost surely. \end{theorem} The proof of the theorem involves an introduction of independent Rademacher random variables and two contraction-style arguments to remove the multipliers $(Y_i-f^*(X_i))$. These algebraic manipulations are postponed to the appendix. The term in the curly brackets will be called an offset Rademacher process, and the expected supremum --- an offset Rademacher complexity.
While Theorem~\ref{thm:bounded_offset} only applies to bounded functions and bounded noise, the upper bound already captures the localization phenomenon, even for non-convex function classes (and thus goes well beyond the classical local Rademacher analysis). As argued in \citep{Mendelson14}, it is the contraction step that requires boundedness of the functions when analyzing square loss. Mendelson uses a small ball assumption (a weak condition on the distribution, stated below) to split the analysis into the study of the multiplier and quadratic terms. This assumption allows one to compare the expected square of any function to its empirical version, to within a multiplicative constant that depends on the small ball property. In contrast, we need a somewhat stronger assumption that will allow us to take this constant to be at least $1-c/4$. We phrase this condition---the lower isometry bound---as follows.\footnote{We thank Shahar Mendelson for pointing out that the small ball condition in the initial version of this paper was too weak for our purposes.} \begin{definition}[Lower Isometry Bound] \label{Assump:Low-Iso-Bd} We say that a function class $\F$ satisfies the lower isometry bound with some parameters $0<\eta<1$ and $0<\delta<1$ if \begin{align} \mbb{P}\left( \inf_{f \in \F \setminus \{ 0\}}\frac{1}{n} \sum_{i=1}^n \frac{f^2(X_i)}{\En f^2} \geq 1 - \eta \right) \geq 1 - \delta \end{align} for all $n \geq n_0(\F, \delta, \eta)$, where $n_0(\F, \delta, \eta)$ depends on the complexity of the class. \end{definition} In general, this is a mild assumption that requires good tail behavior of functions in $\F$, yet it is stronger than the small ball property. Mendelson \cite{Mendelson15} shows that this condition holds for heavy-tailed classes assuming the small ball condition plus a norm-comparison property $\| f \|_{\ell_q} \leq L \| f \|_{\ell_2}, \forall f \in \F $.
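For intuition, in the special case of a linear class $f_w(x) = w^{\mathsf{T}} x$ with isotropic Gaussian design, $\En f_w^2 = \|w\|^2$ and the infimum in Definition~\ref{Assump:Low-Iso-Bd} equals the smallest eigenvalue of the sample covariance matrix, so the lower isometry bound can be checked numerically. This sketch, under those assumptions, is our own illustration and not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, eta = 2000, 5, 0.25

# X_i ~ N(0, I_p); for f_w(x) = w.x we have E f_w^2 = ||w||^2, hence
#   inf_w (1/n) sum_i f_w(X_i)^2 / E f_w^2 = lambda_min(hat Sigma_n)
X = rng.normal(size=(n, p))
Sigma_hat = X.T @ X / n
inf_ratio = np.linalg.eigvalsh(Sigma_hat).min()

# With n >> p, lambda_min(hat Sigma_n) concentrates near (1 - sqrt(p/n))^2,
# so the lower isometry bound with eta = 0.25 holds comfortably here.
assert inf_ratio >= 1 - eta
print(round(inf_ratio, 3))
```

The point of the computation is that the infimum over an infinite class collapses to a single eigenvalue problem for linear functions, which is why sub-gaussian (and, with more work, heavy-tailed) designs satisfy the assumption once $n$ is large relative to the class complexity.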
We also remark that Assumption~\ref{Assump:Low-Iso-Bd} holds for sub-gaussian classes $\F$ using concentration tools, as already shown in \cite{lecue2013learning}. For completeness, let us also state the small ball property: \begin{definition}[Small Ball Property \cite{Mendelson14,Mendelson14general}] The class of functions $\F$ satisfies the small-ball condition if there exist constants $\kappa>0$ and $0<\epsilon<1$ such that for every $f\in \F$, $$\mathbb{P}\big(|f(X)| \geq \kappa (\En f^2)^{1/2} \big) \geq \epsilon.$$ \end{definition} Armed with the lower isometry bound, we now prove that the tail behavior of the deterministic upper bound in \eqref{eq:excess_loss_bound_deterministic} can be controlled via the tail behavior of offset Rademacher complexity. \begin{theorem} \label{thm:unbounded-offset} Define the set $\H := \F - f^* + \star(\F - \F)$. Assume the lower isometry bound in Definition~\ref{Assump:Low-Iso-Bd} holds with $\eta = c/4$ and some $\delta<1$, where $c$ is the constant in \eqref{claim.angle}. Let $\xi_i = Y_i - f^*(X_i)$. Define $$ A := \sup_{h \in \H} \frac{\En h^4}{(\En h^2)^2} ~~~\text{and}~~~ B:= \sup_{X, Y} \En \xi^4. $$ Then there exist two constants $c', \tilde{c}>0$ (depending only on $c$) such that \begin{align*} &\mbb{P}\left( \mathcal{E}(\widehat{f}) > 4 u \right) \leq 4 \delta + 4 \mbb{P}\left( \sup_{h \in \H} \frac{1}{n} \sum_{i=1}^n \epsilon_i \xi_i h(X_i) - \tilde{c} \cdot h(X_i)^2 > u \right) \end{align*} for any $$u>\frac{32\sqrt{AB}}{c'} \cdot \frac{1}{n},$$ as long as $n > \frac{16 (1-c')^2 A}{c'^2} \vee n_0(\H,\delta,c/4)$. \end{theorem} Theorem~\ref{thm:unbounded-offset} states that the excess loss is stochastically dominated by offset Rademacher complexity. We remark that the requirement on $A$ and $B$ holds under mild moment conditions.
\begin{remark} In certain cases, Definition~\ref{Assump:Low-Iso-Bd} can be shown to hold for $f\in \F \setminus r^* \mathcal{B}$ (rather than all $f\in\F$), for some critical radius $r^*$, as soon as $n \geq n_0(\F,\delta,\eta,r^*)$ (see \cite{Mendelson15}). In this case, the bound on the offset complexity is only affected additively by $(r^*)^2$. \end{remark} We postpone the proof of the theorem to the appendix. In a nutshell, it extends the classical probabilistic symmetrization technique \citep{GinZin84,Men03fewnslt} to the non-zero-mean offset process under investigation.
\section{INTRODUCTION} Causal inference is grounded in estimation of interventional effects. This requires researchers to identify \emph{which} variables are going to change as the result of an intervention and \emph{in what way} those variables can be expected to change. The former question, regarding \emph{which} variables are affected by a manipulation, is a structural question that requires correct identification of the causes of each variable. The latter question requires correctly representing the functional relationships between variables in the model. The prevailing approaches to causal discovery from observational data focus on identifying the correct causal structure, represented as a directed acyclic graph (DAG). DAG models are almost always evaluated using graph-based measures of quality, such as structural Hamming distance (SHD)~\citep{tsamardinos2006maxmin} and structural intervention distance (SID)~\citep{peters2015structural}. These structural quantities measure the quality of an estimated DAG by comparing the estimated edge set to a known edge set. Such measures only characterize part of the causal inference task, specifically, \emph{which} variables are affected by a potential manipulation. However, in many settings, the ultimate quantity of interest is the \emph{interventional distribution}, which completely characterizes the nature of a causal relationship. As we will show, SHD and SID can be poor proxies for the quality of estimated causal effects or interventional distributions. In particular, SHD often overestimates the consequences of model over-specification (including too many edges), while SID imposes no penalty for over-specification. Conversely, under-specified models (with too few edges) can be problematic, but will impact distributional quality in a manner that is consistent with the strength with which omitted variables affect others in the model. For example, consider the models shown in Figure~\ref{fig:three-var-example}. 
With respect to the true graph $G_1$, $G_2$ and $G_3$ both have SHD and SID of 1, but omission of $V_1$ induces more severe parameterization errors than omission of $V_2$. The consequences of model over-specification are dependent in part on estimator selection and sample size, but SID and SHD do not account for such factors. \begin{figure} \captionsetup[subfigure]{labelformat=empty} \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{full-model.pdf} \caption{$G_1$} \label{fig:three-var-example-full} \end{subfigure} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{omit-v2.pdf} \caption{$G_2$} \label{fig:three-var-example-omit-2} \end{subfigure} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{omit-v1.pdf} \caption{$G_3$} \label{fig:three-var-example-omit-1} \end{subfigure} \caption{Variants of a 3-variable system. For $G_1$, $V_3 \!\sim\!\mathcal{N}(V_1\!+\!0.1V_2, 1)$. In $G_2$, $V_3\!\sim\!\mathcal{N}(V_1, 1)$. In $G_3$, $V_3\!\sim\!\mathcal{N}(0.1V_2, 1)$. In all cases, $V_1, V_2 \sim \mathcal{N}(0, 1)$.} \label{fig:three-var-example} \end{figure} We present an evaluation methodology for observational causal discovery techniques. This methodology relies on distributional distances for evaluation of an estimated parameterized DAG, and takes both structural quality and parametric quality into account. We demonstrate several desirable properties of distributional distances, and show how these distances provide a more complete and accurate characterization of various modeling errors than commonly employed structural error measures. To identify the practical differences between distributional distances and structural distances, we performed exhaustive experimentation on three real domains and a number of synthetic domains commonly used in the literature. 
We highlight a number of instances in which structural distances can mislead researchers aiming to compare algorithms for learning and inference with causal models. \section{CAUSAL GRAPHICAL MODELS} Causal graphical models, represented as parameterized directed acyclic graphs, are an attractive framework for estimating interventional distributions from observational data. A DAG $G$ has a set of vertices, $\mathbf{V}(G)$, and a set of edges $\mathbf{E}(G)$. Each vertex $v$ has an associated conditional probability model $P(v|\mathbf{PA}^G_v=\vec{p})$, specified by the vector of values $\vec{p}$ of $v$'s parent variables $\mathbf{PA}^G_v$. When clear from context, we will write this parent set as $\mathbf{PA}_v$. Estimating causal quantities with DAGs is simplified through a number of graphical procedures and criteria. The \emph{do}-Calculus \citep{galles1995testing} specifies a graphical procedure for testing \emph{identifiability} of a causal effect and for estimating the effects of an intervention given a known parameterized DAG. A causal effect is said to be \emph{identifiable} if it can be estimated from observed quantities. The associated \emph{do} operator is a notational convenience to indicate that a probabilistic expression is related to a specific interventional context. This machinery is necessary because, in the general setting, an interventional distribution, e.g., $P(O|do(T=t_1))$, is distinct from an observational conditional distribution, $P(O|T=t_1)$. The former specifies a probability distribution over $O$ where $T$ is forced to take on the value $t_1$, whereas the latter specifies a distribution over $O$ where $T$ is observed as $t_1$. When intent is clear, we will abbreviate $P(O|do(T=t_1))$ as $P(O|do(t_1))$. A nearly universal characteristic of observational data is the presence of \emph{back-door paths} between a treatment $T$ and an outcome $O$ of interest.
A non-directed path $T v_1 v_2 v_3 \ldots v_n O$ in a DAG $G$ is called a \emph{back-door path} when $v_1$ is a parent of $T$. Following the rules of \emph{d}-separation \citep{pearl2009causality}, a set of variables $\mathbf{Z}$ that blocks every back-door path from $T$ to $O$, such that no member of $\mathbf{Z}$ is a descendant of $T$, is said to be a valid back-door adjustment set for $(T, O)$. In this case, $\mathbf{Z}$ is said to satisfy the \emph{back-door criterion} and $P(O|do(T=t)) = \sum_{\vec{z}} P(O|T=t, \mathbf{Z} = \vec{z})P(\mathbf{Z} = \vec{z})$ \citep{pearl2009causality}. \cite{shpitser10validity} presented a relaxation of the back-door criterion for which $\mathbf{Z}$ permits identification of causal effects between $T$ and $O$, often referred to as the \emph{generalized back-door criterion}. \section{EXISTING EVALUATION METHODS} \label{sec:existing-evaluation-methods} Structural Hamming distance (SHD) \citep{tsamardinos2006maxmin,acid2003searching} is commonly used to measure distance between DAGs \citep{de2009comparison,kalisch2007estimating,pellet2008using,hoyer2009nonlinear,colombo2012learning,hyttinen2014constraint}.\footnote{SHD is sometimes decomposed into true/false positive rates, or the number of missing/extra/incorrectly oriented edges.} SHD measures the number of edge additions, deletions, or reversals necessary to transform one DAG into another. SHD has become a common measure for evaluating causal discovery algorithms. However, as shown by \cite{peters2015structural}, an SHD of zero is not necessary for consistent estimation of causal effects. A simple example can be gleaned from Figure~\ref{fig:three-var-example}, treating $G_3$ as the true causal structure and $G_1$ as the estimated structure. In this case, $\mathit{SHD}(G_3, G_1) = 1$, but all interventional distributions are consistently estimated \citep{galles1995testing}.
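SHD itself is straightforward to compute. The helper below is our own simplified version for fully directed graphs (standard implementations also handle partially directed CPDAGs), and it reproduces the counts for the graphs of Figure~\ref{fig:three-var-example}:

```python
def shd(edges_g, edges_h):
    """Structural Hamming distance between two DAGs, each given as a set of
    directed edges (u, v). A missing, extra, or reversed edge counts as one."""
    g, h = set(edges_g), set(edges_h)
    # Unordered adjacencies present in either graph (the union of skeletons)
    skeleton = {frozenset(e) for e in g | h}
    d = 0
    for pair in skeleton:
        u, v = tuple(pair)
        in_g = (u, v) in g or (v, u) in g
        in_h = (u, v) in h or (v, u) in h
        if in_g != in_h:
            d += 1        # edge missing from one graph
        elif ((u, v) in g) != ((u, v) in h):
            d += 1        # adjacency shared, but orientation reversed
    return d

# The three graphs from the running example
G1 = {("V1", "V3"), ("V2", "V3")}
G2 = {("V1", "V3")}
G3 = {("V2", "V3")}
assert shd(G1, G2) == 1 and shd(G1, G3) == 1 and shd(G2, G3) == 2
```

Note that $\mathit{SHD}(G_1, G_2) = \mathit{SHD}(G_1, G_3) = 1$ even though, as the paper argues, the two omissions have very different consequences for interventional estimates.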
\cite{peters2015structural} proposed a measure of structural quality, called \emph{structural intervention distance} (SID), that counts the number of interventional distributions that are inconsistently estimated by a model. Specifically, with respect to a true DAG $G$ and an estimated DAG $H$, SID is computed as the number of pairs of variables $(V_1, V_2)$ for which: \begin{itemize}[topsep=0pt,parsep=0pt] \item $V_1 \in \mathbf{PA}^G_{V_2}$ and $V_2 \in \mathbf{PA}^H_{V_1}$, or \item $\mathbf{PA}^{H}_{V_1}$ is not a valid adjustment set for $P(V_2|do(V_1))$. \end{itemize} Thus, an SID of zero is necessary for consistent estimation of causal effects. However, SID is insensitive to model over-specification, since any set of variables $\mathbf{Z}$ such that $\mathbf{PA}^{G}_{V_2} \subseteq \mathbf{Z} $ is a valid adjustment set for $P(V_1|do(V_2))$. Consequently, when a DAG model $H$ is a super-graph of the true graph $G$, $\mathit{SID}(G, H) = 0$. Over-specification permits consistent estimation in the large-sample limit, but dense models can dramatically reduce statistical efficiency \citep{koller2009probabilistic}. Unlike SID, SHD does penalize model over-specification, but it assigns over-specification the same weight as under-specification. \citeauthor{peters2015structural} proposed a modification of SID that penalizes superfluous edges by counting the difference in the number of edges between $G$ and $H$. As with SHD, this penalization supposes that all edges are equally important.
These quantities can be estimated by a learned distribution $\hat{P}$ using a parameterized DAG or another causal modeling technique. The accuracy of the $O-T$ interventional distribution can be assessed by comparing the true distribution $P$ to the estimated distribution $\hat{P}$ using an information-theoretic metric. Despite the simplicity of this formulation, few researchers evaluate their models using direct comparison of known distributions to estimated distributions. Notable exceptions are \cite{tsamardinos2006maxmin} and \cite{eaton2007bayesian}. However, neither of these works considers the intrinsically causal task of interventional distribution estimation. \citeauthor{tsamardinos2006maxmin} use an information-theoretic measure to compare estimated \emph{predictive} distributions to true \emph{predictive} distributions. In this work, we explore the use of total variation distance (TV) \citep{lin1991divergence} to measure distance between two \emph{interventional} distributions for an outcome $O$. For discrete outcomes, this computation is quite straightforward: \begin{align} \label{eq:total-variation} TV_{P, \hat{P}, T=t}(O) = \frac{1}{2} \sum_{o \in \Omega(O) } \big| &P \left( O = o|do(T=t) \right) - \notag\\ &\hat{P}\left(O=o|do(T=t) \right) \big|, \end{align} where $\Omega(O)$ is the domain of $O$. For continuous distributions, TV can be computed through an integral of the absolute difference between probability densities. Total variation characterizes both the parametric quality and the structural quality of a model. When a model is over-specified, statistical efficiency degrades and the estimator $\hat{P}$ will have high variance. When a model is under-specified, $\hat{P}$ may not be a consistent estimator of $P$. TV has the advantage of penalizing model errors in accordance with their impact on the quality of probability estimates, rather than treating all errors as having equal weight as in SHD or SID.
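For discrete outcomes, Equation~\eqref{eq:total-variation} amounts to a few lines of code. The sketch below uses made-up interventional distributions purely for illustration:

```python
def total_variation(p, q):
    """TV between two distributions over the same finite domain,
    each given as a dict mapping outcome -> probability."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in support)

# Hypothetical true and estimated P(O | do(T = 1))
p_true = {"low": 0.2, "mid": 0.5, "high": 0.3}
p_est  = {"low": 0.3, "mid": 0.5, "high": 0.2}
assert abs(total_variation(p_true, p_est) - 0.1) < 1e-12
```

Taking the union of the supports guards against estimated distributions that assign mass to outcomes absent from the true distribution (or vice versa).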
Although TV is not constrained to application on a DAG, we can summarize the quality of an estimated DAG $\hat{G}$ by computing a sum of pairwise total variations: \begin{align} \label{eq:sum-pairwise-variations} TV_{\mathit{DAG}}(G, \hat{G}) = \sum_{\mathclap{V \in \mathbf{V}(G), V' \in \mathbf{V}(G) \setminus \{V\}}} TV_{P_{G}, P_{\hat{G}}, v'=v'_*}(V) \end{align} Here, $v'_*$ represents the value of the hypothetical intervention to $V'$ and is a fixed value assigned by the analyst. For instance, $v'_*$ could be set to a large value based on the quantiles of $V'$. More generally, we could consider a sum or an integral over settings of $v'_*$, however it seems unlikely that this added expense would yield more informative results. Evaluating $TV_{\mathit{DAG}}$ requires inference, which may be computationally expensive. It is common to have a clearly defined set of treatments and outcomes of interest, in which case the sum of pairwise total variations would be best expressed in terms of only those vertices. If an investigator truly is interested in all pairwise interventional distributions, then evaluating this sum is no more expensive than \emph{using} the model to reason about causal effects. There is a clear relationship between structural intervention distance and $TV_{\mathit{DAG}}$. In particular, when $TV_{\mathit{DAG}}$ is 0, then for all $V$ and $V'$, $P_{\hat{G}}(V|do(V'=v'_*)) = P_{G}(V|do(V'=v'_*))$. As defined by~\cite{peters2015structural}, SID counts the number of pairs $(V, V')$ such that $P_{\hat{G}}(V|do(V'=v'_*)) \neq P_{G}(V|do(V'=v'_*))$, thus $TV_{\mathit{DAG}}(G, \hat{G}) = 0 \Rightarrow \mathit{SID}(G, \hat{G}) = 0$. However, the converse of this statement does not hold. $TV_{\mathit{DAG}}(G, \hat{G})$ depends in part on how well the parameters of $\hat{G}$ have been estimated. 
It is possible that $\hat{G}$ permits unbiased inference, that is, the parent set of each node $V'$ in $\hat{G}$ is a valid adjustment set for an intervention on $V'$, but due to variance in finite-sample settings, $P_{\hat{G}}$ does not exactly equal $P_G$. $TV_{\mathit{DAG}}$ accounts for both the bias \emph{and} variance of the estimated interventional distribution, and is therefore more closely related to real-world use cases for causal discovery. \subsection{A SIMPLE EXAMPLE} Before examining how structural distances compare to total variation distance, consider again the example presented in Figure~\ref{fig:three-var-example}. If $G_1$ is the true model, then omission of the edge $V_2 \to V_3$ in $G_2$ results in an SHD of 1 (edge edit distance 1) and an SID of 1 (one interventional distribution is mis-estimated). Similarly, omission of the edge $V_1 \to V_3$ in $G_3$ results in an SID and an SHD of 1. These two edge-omission errors are indistinguishable. Now consider an information-theoretic evaluation, using the three alternate conditional models for $V_3$: \begin{align} &P_1(V_3 | v_1, v_2) = \mathcal{N}(v_1 + 0.1v_2, 1) \\ &P_2(V_3 | v_1, v_2) = P_2(V_3 | v_1) = \mathcal{N}(v_1, 1) \\ &P_3(V_3 | v_1, v_2) = P_3(V_3 | v_2) = \mathcal{N}(0.1v_2, 1) \end{align} $P_1$ corresponds to a correct model, $P_2$ represents an estimated model which omits $V_2 \to V_3$, and $P_3$ represents an estimated model which omits $V_1 \to V_3$. We computed $TV_{P_1, P_m, V_i=2}(V_3)$ for $m=2,3$ and $i=1,2$. We used adaptive quadrature to approximate integrals over $V_3$. Table~\ref{tbl:simple-example-tv} demonstrates that, consistent with the SID definition, $P_2$ and $P_3$ both mis-estimate one interventional distribution (there is one non-zero entry per column). The key advantage of TV lies in its ability to differentiate between the \emph{severity} of the mis-estimation. In this case, omitting $V_1 \to V_3$ (TV=0.68) is a more significant error than omitting $V_2 \to V_3$ (TV=0.08).
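The non-zero entries of Table~\ref{tbl:simple-example-tv} can be reproduced with simple numerical integration. Since the contribution of the non-intervened parent shifts both densities equally, it cancels, and each comparison reduces to the TV between two unit-variance Gaussians whose means differ by the omitted term. This is our own reconstruction (using a midpoint rule rather than the adaptive quadrature mentioned above):

```python
import math

def tv_gaussians(mu1, mu2, sigma=1.0, lo=-12.0, hi=14.0, steps=100_000):
    """TV = 0.5 * integral of |N(mu1, s^2) - N(mu2, s^2)|, midpoint rule."""
    def pdf(x, mu):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
    dx = (hi - lo) / steps
    return 0.5 * dx * sum(
        abs(pdf(lo + (i + 0.5) * dx, mu1) - pdf(lo + (i + 0.5) * dx, mu2))
        for i in range(steps))

# do(V1 = 2): P1 vs P3 differ by the omitted v1 term -> means 2.0 apart
# do(V2 = 2): P1 vs P2 differ by the omitted 0.1*v2 term -> means 0.2 apart
tv_omit_v1 = tv_gaussians(2.0, 0.0)   # approx 0.68
tv_omit_v2 = tv_gaussians(0.2, 0.0)   # approx 0.08

# Closed form for equal-variance Gaussians: TV = erf(|mu1 - mu2| / (2*sqrt(2)*sigma))
assert abs(tv_omit_v1 - math.erf(1.0 / math.sqrt(2))) < 1e-4
assert abs(tv_omit_v2 - math.erf(0.1 / math.sqrt(2))) < 1e-4
```

The closed-form check confirms the table: $\mathrm{erf}(1/\sqrt{2}) \approx 0.6827$ and $\mathrm{erf}(0.1/\sqrt{2}) \approx 0.0797$, matching the reported 0.68 and 0.08.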
\begin{table}[ht] \centering \begin{tabular}{lrr} \hline & $TV_{P_1, P_2}(V_3)$ & $TV_{P_1, P_3}(V_3)$ \\ \hline $do(V_1 = 2)$ & 0.00 & 0.68 \\ $do(V_2 = 2)$ & 0.08 & 0.00 \\ \hline \end{tabular} \caption{Total variation of $P_2$ and $P_3$ with respect to true model $P_1$ and hypothetical interventions on $V_1$ and $V_2$.} \label{tbl:simple-example-tv} \end{table} \section{EVALUATION METHODOLOGY} To explore the differences between total variation distance and structural distances in realistic situations, we instrumented and gathered data from three real domains and three commonly used synthetic data generation techniques. \begin{figure*}[ht!] \centering \includegraphics[width=\linewidth]{obs-transformation.pdf} \caption{Outline of the evaluation process. An interventional distribution with multiple observations per subject (identified by \texttt{ID}) is sub-sampled using Algorithm~\ref{alg:passive-treatment} to form a smaller dataset containing one observation per subject and observational bias. Causal discovery algorithms are employed to estimate a DAG from the observational data, and the resulting structure is parameterized with maximum likelihood estimation. Interventional distributions are estimated using the parameterized DAG, and compared to those implied by the interventional data.} \label{fig:evaluation-process} \end{figure*} \subsection{REAL DOMAINS} Each real domain is a large-scale computational system, consisting of many thousands of lines of source code and used for a diverse set of tasks. Large-scale software systems offer several desirable characteristics for the purposes of empirical evaluation. Specifically, such systems are: \begin{itemize}[leftmargin=5pt] \item[] \textbf{Empirical:} They are pre-existing systems created by individuals other than the researchers for purposes other than evaluating algorithms for causal discovery. This avoids implicit or explicit bias that can affect the structure and parameters of synthetic data generators.
\item[] \textbf{Stochastic:} They produce experimental results that are non-deterministic through some combination of epistemic factors (e.g., latent variables) and aleatory factors (inherently stochastic behavior in the data generating process). \item[] \textbf{Identifiable:} They are amenable to direct experimental investigation to estimate interventional distributions. In particular, these systems facilitate interventions on single variables in ways that largely avoid the ``fat hand'' effects that plague some physical systems. \item[] \textbf{Recoverable:} They lack memory or irreversible effects, which enables complete state recovery during experiments. Such state recovery is far from simple because many modern software systems have features such as caches that can create temporal dependence among runs, but state recovery is possible in principle. This enables factorial experiments in which every joint combination of interventions is run on every experimental unit. \item[] \textbf{Efficient:} They are capable of generating large amounts of data that can be recorded with relatively little effort. \item[] \textbf{Reproducible:} They allow future investigators to recreate nearly identical data sets with reasonable resources and without access to one-of-a-kind hardware or software. \end{itemize} Few, if any, other classes of systems offer a similar range and combination of advantages. Within each computational system, we measure three classes of variables: subject covariates, treatment settings, and outcomes. Outcomes are measurements of the result of a computational process. Treatments correspond to system configurations and are selected such that they could plausibly induce changes in outcomes. Subject covariates logically exist prior to treatment and are invariant with respect to treatment. With these variables defined, we conduct a factorial experiment. This dictates that each combination of treatment variables be applied to every subject. 
Thus, given a set of $n$ subjects and $k$ binary treatment variables, there are $n 2^k$ data instances, referred to as \emph{subject-treatment combinations}. This dataset can then be used to estimate interventional distributions for the treatment variables in a straightforward manner without confounding bias. \begin{algorithm} \DontPrintSemicolon \KwIn{Interventional dataset $I$, biasing strength $\beta \geq 0$, biasing covariate $C$} \KwOut{Biased dataset $O$, $|O| = nd$} $l \gets $ The number of distinct values of $C$ \; $O \gets \{\}$ \; \ForEach{Subject $e \in I$} { Let $C_e \in \{1..l\}$ represent the $C$ value of subject $e$ \; $Assign \gets \{\}$ \; \ForEach{Treatment $T_j$} { $s_{ej} \gets \begin{cases} 1 & \text{if $C_e \times j$ is even} \\ -1 & \text{if $C_e \times j$ is odd} \\ \end{cases}$ \; $p \gets \text{logit}^{-1}(s_{ej} \beta)$\; $t_j \gets $ Bernoulli$(p)$ \; $Assign \gets Assign \cup \{ T_j = t_j \}$ \; } $M \gets $ Record in $I$ corresponding to $(e, Assign)$ \; $O \gets O \cup M$ \; } \caption{Logistic Sampling of Passive Treatments} \label{alg:passive-treatment} \end{algorithm} We can also transform the dataset generated by the factorial experiment into a dataset that has properties consistent with observational data. This transformation induces a set of back-door paths between treatments and outcomes, yielding a dataset with a single treatment observation per subject. Using a logistic function, Algorithm~\ref{alg:passive-treatment} samples a value for each treatment $T_j$ with strength of dependence $\beta$ and sign $s_{ej}$ depending on subject $e$'s value of $C$. For each domain, we note which variable acts as $C$, the biasing covariate. When $\beta$ is large ($\geq 3$), some subject-treatment combinations (with $s_{ej} = -1$) are almost always in the control setting ($P(T_j = 0 | C = C_e) \approx 1$), and some subject-treatment combinations (with $s_{ej} = 1$) are almost always in the treated setting ($P(T_j = 1 | C = C_e) \approx 1$).
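The treatment-assignment step of Algorithm~\ref{alg:passive-treatment} can be rendered in a few lines of Python; this is a sketch, and the function and variable names are ours:

```python
import math
import random

def sample_treatments(c_value, n_treatments, beta, rng):
    """Biased treatment assignment for one subject whose biasing covariate
    takes the (integer) value c_value, per the logistic sampling scheme."""
    assignment = {}
    for j in range(1, n_treatments + 1):
        s = 1 if (c_value * j) % 2 == 0 else -1       # sign s_ej
        p = 1.0 / (1.0 + math.exp(-s * beta))         # inverse logit
        assignment[j] = 1 if rng.random() < p else 0  # Bernoulli(p)
    return assignment

rng = random.Random(0)

# beta = 0 recovers a uniformly randomized experiment (p = 0.5 for every arm)
t = sample_treatments(c_value=3, n_treatments=4, beta=0.0, rng=rng)
assert set(t) == {1, 2, 3, 4} and set(t.values()) <= {0, 1}

# Large beta makes some subject-treatment pairs near-deterministic:
# for c_value = 3, j = 1 the sign is -1, so treatment is rare
draws = [sample_treatments(3, 1, 6.0, rng)[1] for _ in range(200)]
assert sum(draws) <= 10
```

With $\beta = 6$ the treated probability for the $s_{ej} = -1$ arm is $\mathrm{logit}^{-1}(-6) \approx 0.0025$, illustrating the near-deterministic assignment the text describes for large $\beta$.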
When $\beta$ is zero, the dataset corresponds to a uniformly randomized experiment. In this case, conditional distributions $P(O|T=t)$ yield consistent estimates of the causal quantities $P(O|do(t))$. For $\beta > 0$, back-door paths exist and this conditional model is no longer appropriate for causal reasoning, requiring causal learning and reasoning techniques appropriate for observational data. Each domain is described below.\footnote{Datasets are available at \url{https://kdl.cs.umass.edu/display/public/Causal+Evaluation}} \subsubsection{Oracle Java Development Kit} The Java Development Kit (JDK) is a software library used to compile, run, and diagnose problems with Java programs. Each subject in this domain is an open-source Java project, and the computational process is compilation and execution of the unit tests for that project. As treatments, we selected four system settings that are of interest to developers: compiler optimization, use of debugging symbols, garbage collection method, and code obfuscation. For outcomes, we measured factors pertaining to the run time, memory usage, time to compile, and code size. To better approximate observational settings, we measured subject covariates which could confound treatments and outcomes if no controls were present. We measured the number of non-comment source statements in both the project source code and associated unit test source code, along with the number of functions and classes in the unit test source, correlating to some extent with unit test runtime. The number of ``Javadoc'' comments in the unit test source code was also measured, as it may be associated with code quality---this was selected as the biasing covariate.
Treatments on the Postgres domain are system settings that a database administrator may be interested in tuning through experimentation: the use of indexing, page access cost estimates, and working memory allocation. As outcomes, we recorded query runtime, the number of blocks read from shared memory, temporary memory, and a fast memory cache. As subject covariates, we measured aspects of the query itself such as the number of joins, number of grouping operations, length, and statistics of the referenced tables. We also recorded the number of rows retrieved by the query (which is logically prior to treatment), using this as the treatment-biasing covariate. \subsubsection{Hypertext Transfer Protocol} The Hypertext Transfer Protocol (HTTP) is the primary mechanism by which information is transferred across the Web. In this domain, a subject is a web request to a specific web site, and the computational process under study is the transmission of that request and the response it elicits. We selected several options of the HTTP request as treatments: use of a proxy server, compression specifications, and the HTTP user agent. Several response characteristics served as outcomes: the number of HTML attributes and tags, the elapsed time of the web request, the content length before decompression, and the size of the response after decompression. Few treatment-invariant subject covariates exist in this domain, since almost every aspect of a web page is subject to change based on request parameters. The host-reported web server (e.g., Apache) is the sole subject covariate which is highly unlikely to be influenced by any of the above treatments; we used this as the biasing covariate. \subsection{SYNTHETIC DATA} \subsubsection{Linear-Gaussian} In our literature review, we found that synthetic linear-Gaussian systems were the most commonly used structures for evaluation \citep{hyttinen2014constraint,ramsey2006adjacency,kalisch2007estimating,colombo2012learning}. 
The typical construction of such systems begins with generation of a random sparse DAG $G$ with an expected neighborhood size ($E[N]$) of 2, 3, or 5. The most common sparsity setting we found was $E[N] = 2$. Then, a weight matrix $\mathbf{W}$ is generated, with values sampled uniformly from $[0.1, 1]$ (the most commonly used interval in our review). A set of error terms $\vec{\epsilon}$ is generated from $\mathcal{N}(0, 1)$. Then, samples $X_i$ for each vertex $i$ are generated using the process: \begin{align} X_i \gets \sum_{j \in \mathbf{PA}_i^G} \mathbf{W}_{ji} X_j + \epsilon_i. \end{align} \subsubsection{Dirichlet} Some authors have constructed synthetic DAGs using discrete variables with relationships arising from a Dirichlet distribution. \cite{chickering2002finding} use a small number of gold-standard DAGs, for which $E[N] \leq 2$. \cite{eaton2007bayesian} use the structure of the CHILD network \citep{cowell2007probabilistic}, which has $E[N] \approx 1$. Conditional probability tables are generated for $k$-state node $i$ using a Dirichlet distribution. Specifically, let $\mu = (\frac{1}{1}, \frac{1}{2}, \ldots, \frac{1}{k})$ and $\alpha = \frac{1}{\sum \mu}$. Number the joint assignments to $\mathbf{PA}_i$ as $1..A$. Consider rotations $\mu_a$ of $\mu$ such that $\mu_1 = (\frac{1}{k}, \frac{1}{1}, \ldots, \frac{1}{k-1})$ and $\mu_2 = (\frac{1}{k-1}, \frac{1}{k}, \ldots, \frac{1}{k-2})$. Then, for each numbered assignment $a$ to $\mathbf{PA}_i$, draw $P(X_i|\mathbf{PA}_i = {\mathbf{PA}_i}_a)$ from $\text{Dirichlet}(S\alpha\mu_a)$. The $S$ factor, often called the equivalent sample size, can be viewed as a measure of confidence in the Dirichlet hyper-parameters \citep{heckerman1995learning}. Unless otherwise noted, $S=10$ for our experiments, consistent with usage in the literature. \subsubsection{Logistic} A third category of synthetic data generation uses a logistic function to generate binary data.
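Before turning to the logistic strategy, the linear-Gaussian recipe above can be summarized in a short sketch (in Python, with variable names of our choosing; an illustration, not the implementation used in our experiments):

```python
import numpy as np

def sample_linear_gaussian(n_vars=14, expected_neighbors=2, n_samples=5000, rng=None):
    """Generate a random sparse linear-Gaussian DAG and sample from it.

    Each pair of vertices is connected with probability p chosen so that
    the expected neighborhood size E[N] = p * (n_vars - 1) matches
    ``expected_neighbors``; edges are oriented by a random vertex order.
    """
    rng = np.random.default_rng(rng)
    p = expected_neighbors / (n_vars - 1)
    order = rng.permutation(n_vars)          # random topological order
    adj = np.zeros((n_vars, n_vars), dtype=bool)
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if rng.random() < p:
                adj[order[i], order[j]] = True   # edge order[i] -> order[j]
    # Edge weights sampled uniformly from [0.1, 1].
    weights = np.where(adj, rng.uniform(0.1, 1.0, size=adj.shape), 0.0)
    # Ancestral sampling: X_i = sum_{j in PA_i} W_ji X_j + eps_i, eps ~ N(0, 1).
    X = np.zeros((n_samples, n_vars))
    for i in order:
        X[:, i] = X @ weights[:, i] + rng.standard_normal(n_samples)
    return adj, weights, X

adj, weights, X = sample_linear_gaussian(rng=0)
```

Because samples are drawn in the same random order used to orient the edges, every parent of a vertex is filled in before the vertex itself, so the loop implements ordinary ancestral sampling.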
\cite{li2009controlling} largely follow the ``linear-Gaussian'' strategy. Random DAGs are generated with $E[N] \in \{2, 3\}$. Instead of weighting edges, the authors weight vertices, sampling $\vec{W}$ from $\{\delta, -\delta\}$. The strength of dependence parameter $\delta$ can be varied. In what follows, we use $\delta = 0.375$, the mean of the range explored in the original work. Then, values $X_i$ are sampled for vertex $i$ using: \begin{align} X_i \gets \text{Binomial}\left( \text{logit}^{-1} \left( \sum_{j \in \mathbf{PA}_i} X_j W_j \right) \right) \end{align} In some cases, it is useful to compare results on the synthetic datasets with those on our real datasets. In these cases, we use three ``look-alike'' configurations which each have the same number of variables and data points (subjects) as one of the real systems. In addition, the datasets agree on the number of treatments, outcomes, and subject covariates. The synthetic data generation strategy is adjusted to resemble a factorial experiment and the observational sampling process of Algorithm~\ref{alg:passive-treatment} is employed. This ensures that the synthetic domain and the real domain are comparable with respect to the back-door paths induced between treatments and outcomes. In what follows, we label these configurations with a J, P, or H depending on whether they were modeled after the JDK, Postgres, or HTTP domain, respectively. \subsection{ALGORITHMS} We selected three algorithms representative of constraint-based, score-based, and hybrid causal discovery for our evaluation; respectively PC \citep{spirtes2000causation}, GES \citep{chickering2002finding}, and MMHC \citep{tsamardinos2006maxmin}. In PC and MMHC, we used the $G$-test with $\alpha=0.05$ for conditional independence testing on discrete data, and the $z$-statistic with Fisher's partial correlation for testing linear-Gaussian data. For GES and score-based phases of MMHC, we used BIC as the scoring criterion. 
In all cases, conditional probabilities were modeled with tables. These choices are common, and are the default options in the \textsf{R} packages \texttt{pcalg} and \texttt{bnlearn} that we used in this study. PC and GES learn a completed partially directed acyclic graph (CPDAG), which may contain undirected edges as well as directed edges. A CPDAG represents a class of models with equivalent likelihood. When the equivalence class represented by a CPDAG has more than one member, the reported performance value is a mean of the performance of each DAG extension. When a CPDAG has more than 100 DAG extensions, we sample 100 uniformly at random. We then measure SID, SHD, and TV on each DAG extension, and compute the mean of each measure for a given CPDAG. \section{EMPIRICAL COMPARISON} Ultimately, the distinction between total variation distance and commonly-used structural distance measures is important only if the two categories of evaluation would lead to different conclusions. We sought to address the following questions to help make that distinction clear: \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item Is the relative performance of causal discovery algorithms systematically different when evaluating with TV, as compared to a structural evaluation? \item Does model over-specification or under-specification impact total variation evaluations differently than evaluations with structural measures? \item Do the parameters used in synthetic data generation elicit different behaviors in TV than in SHD/SID?
\end{itemize} \subsection{RELATIVE PERFORMANCE OF ALGORITHMS} \begin{figure} \centering \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{relative-performance-SID.png} \caption{Structural intervention distance} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{relative-performance-SHD.png} \caption{Structural Hamming distance} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{relative-performance-TV.png} \caption{Total variation distance} \end{subfigure} \caption{Relative Performance on Synthetic Datasets} \label{fig:relative-performance} \end{figure} One of the most basic questions about TV is whether it produces different conclusions than using SHD or SID in realistic evaluations. We found that TV implies a very different ordering of the relative performance of different learning algorithms than that implied by SHD and SID. We began by constructing 30 random DAGs with 14 variables and $E[N]=2$. We generated parameters on those DAGs using each of the synthetic data techniques and sampled 5,000 data points from each DAG. Then, we applied PC, MMHC, and GES to the resulting datasets and measured the SID, SHD, and sum of pairwise total variations as in equation~\ref{eq:sum-pairwise-variations}. As shown in Figure~\ref{fig:relative-performance}, some of the findings that would be reached with SID and SHD are not supported by a TV evaluation. The structural measures suggest that MMHC outperforms PC on the Dirichlet domain. However, the performance of the two algorithms is statistically indistinguishable as measured by TV. When measured with SID or SHD, GES does not outperform either MMHC or PC. However, GES is consistently the best-performing algorithm in terms of interventional distribution accuracy.
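To make the measure concrete: for discrete outcomes, each term in the sum of pairwise total variations is half the $L_1$ distance between the true and estimated interventional distributions. A minimal sketch (in Python, with hypothetical toy inputs; not our evaluation code):

```python
def total_variation(p, q):
    """TV distance between two discrete distributions, each given as a
    dict mapping outcome values to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

def sum_pairwise_tv(true_dists, est_dists):
    """Sum TV over all (treatment, outcome) pairs; both arguments map a
    (treatment, outcome) pair to its distribution P(O | do(T=1))."""
    return sum(total_variation(true_dists[k], est_dists[k]) for k in true_dists)

# Toy example: one treatment and one binary outcome.
true_d = {("T", "O"): {0: 0.3, 1: 0.7}}
est_d = {("T", "O"): {0: 0.5, 1: 0.5}}
tv = sum_pairwise_tv(true_d, est_d)  # 0.5 * (|0.3-0.5| + |0.7-0.5|), i.e. about 0.2
```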
\subsection{MODEL OVER-SPECIFICATION AND UNDER-SPECIFICATION} \begin{table*} \centering { \small \begin{tabular}{ lll | rrr | rrr | rrr } \hline Domain & Subjects & Model Type & \multicolumn{3}{c|}{SID: Min, Median, Max} & \multicolumn{3}{c|}{SHD: Min, Median, Max} & \multicolumn{3}{c}{TV: Min, Median, Max} \\ \hline \multirow{2}{*}{JDK} & \multirow{2}{*}{473} & Over-specify & 0 & \hspace{3em} 0 & 0 & 1 & \hspace{2.5em} 3 & 3 & 0.04 & \hspace{0.75em} 0.17 & 0.21 \\ & & Under-specify & 4 & 5 & 9 & 2 & 2 & 4 & 0.22 & 0.41 & 0.58 \\ \hline \multirow{2}{*}{Postgres} & \multirow{2}{*}{5,000} & Over-specify & 0 & 0 & 0 & 0 & 1 & 2 & 0.00 & 0.06 & 0.09 \\ & & Under-specify & 4 & 6 & 8 & 3 & 4 & 5 & 0.17 & 0.35 & 0.61 \\ \hline \multirow{2}{*}{HTTP} & \multirow{2}{*}{2,599} & Over-specify & 0 & 0 & 0 & 1 & 2 & 4 & 0.06 & 0.06 & 0.09 \\ & & Under-specify & 2 & 6 & 10 & 1 & 3 & 4 & 0.22 & 0.25 & 0.30 \\ \hline \end{tabular} } \caption{Evaluation Metric Comparison on Real Domains} \label{tbl:altered-models} \end{table*} Another important question about TV is how the measure responds to specific types of errors in learned structure. Specifically, we wanted to evaluate the effects of over-specification (extraneous edges) and under-specification (omitted edges) on model performance. Compared to TV, we found that neither SID nor SHD provides a good proxy for the effects of over- and under-specification. To characterize these effects, we turned to the real domains. In each case, treatment assignments are moderately biased ($\beta = 1$). For simplicity of illustration, we omit some subject covariates which we know cannot cause any of the treatments. From our experiments, we can identify which treatment-outcome pairs are causally related. We construct a true DAG by introducing an edge between each causally related treatment-outcome pair.
Since the biasing covariate necessarily blocks all back-door paths between each treatment and outcome, an edge is introduced between this covariate and all treatments. The resulting DAG model (illustrated for the JDK dataset in Figure~\ref{fig:jdk-true-consistent-model}) consistently estimates distributions $P(O|do(T=t))$ for all treatment-outcome pairs. \begin{figure} \centering \includegraphics[width=\linewidth]{jdk-consistent.pdf} \caption{Consistent Model for the JDK Dataset} \label{fig:jdk-true-consistent-model} \end{figure} We altered the consistent model of each dataset to induce over-specification and under-specification. To quantify the effects of over-specification, we produced models in which one of the treatment variables had a directed edge into every outcome, regardless of the causal relationships in the true model. To quantify the effects of under-specification, we produced models in which one of the treatment variables had no outgoing edges. This process was then repeated for each of our three domains and each treatment variable within that domain. For each model, a sum of pairwise total variations was computed as $\sum_{T,O} TV_{P,\hat{P},T=1}(O)$, where $P$ represents the reference distribution given by the consistent model (as in Figure~\ref{fig:jdk-true-consistent-model}) and $\hat{P}$ represents the distribution induced by the altered model. A comparison of TV, SHD, and SID on these experiments is shown in Table~\ref{tbl:altered-models}. Two properties are apparent: \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt] \item Model over-specification is not ignorable. For small datasets, such as that from the JDK domain, over-specified models have zero SID but significant TV values due to loss of statistical efficiency. \item Penalizing over-specification and under-specification with equal cost, as in SHD, is inconsistent with interventional distribution quality.
In these domains, model over-specification has 2-5 times less distributional impact than under-specification as measured by total variation. \end{itemize} \subsection{REACTION TO STRENGTH OF DEPENDENCE} \begin{figure}[t] \centering \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{dependence-strength-comparison-SID.png} \caption{Structural intervention distance} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{dependence-strength-comparison-SHD.png} \caption{Structural Hamming distance} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{dependence-strength-comparison-TV.png} \caption{Total variation distance} \end{subfigure} \caption{Performance measures with respect to strength of dependence. Each box contains 30 data points, with each data point representing performance on a CPDAG output by either PC, MMHC, or GES.} \label{fig:strength-of-dependence} \end{figure} As already noted, an advantage of TV is that it weights inferred causal dependencies based on the strength of dependence. However, a reasonable question is whether SID and SHD might still serve as reasonable proxies for TV as the strength of dependence varies. We found that they do not. Specifically, the presence of weak dependencies tends to increase structural measures (because learning weak dependencies is more difficult) but tends to decrease TV (because missing a weak dependence is less important). To examine this effect, we again used experiments with synthetic data. While many researchers use roughly similar parametric forms when generating data using the linear-Gaussian, logistic, and Dirichlet strategies, there is no accepted standard for the strength of dependence in each setting. For linear-Gaussian and logistic systems, strength of dependence can be controlled by adjusting the sampling distribution for the edge and vertex weights. 
In the case of the Dirichlet strategy, smaller $S$ values yield stronger dependencies (more skewed CPTs). We generated variants of each synthetic domain with dependencies 10 times stronger or weaker than the most common value used in existing work. For each of these configurations, we generated 10 networks. We ran PC, MMHC, and GES on each of these networks, and recorded the mean SID, SHD, and TV of members of the resulting CPDAG. From Figure~\ref{fig:strength-of-dependence}, we see that the structural measures have an inverse relationship with TV as dependence strength is varied. As the strength of dependence decreases, the detectability of an effect is reduced, making structure learning more difficult. However, structural inaccuracies impact interventional distributions in accordance with strength of dependence---weak dependencies imply lower TV. \section{DEVELOPING REALISTIC SYNTHETIC DATA} Now that we have established the value of TV for measuring the most important property of causal models (their ability to accurately estimate interventional distributions), we can deploy the measure to evaluate other properties of existing evaluation methods. One key property is the inherent difficulty of learning causal models for various real and synthetic data sets. We found that the real datasets were significantly more challenging than the synthetic datasets when measured with total variation. One example is the Postgres dataset in which the best-performing model (GES) learns a much less accurate model than the worst performing algorithm on any synthetic domain (see Figure~\ref{fig:real-synthetic-compare-postgres}). The difference in means between each real domain and its synthetic counterparts is shown in Table~\ref{tbl:difference-in-means}. In all but one case (HTTP/Dirichlet), the real domains are more challenging. 
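The role of the equivalent sample size $S$ can be made concrete with a sketch of the Dirichlet CPT construction described earlier (in Python, under our own naming; an illustration, not the generator used in the experiments). Smaller $S$ concentrates the Dirichlet draws, yielding more skewed rows and hence stronger dependence:

```python
import numpy as np

def dirichlet_cpt(k, n_parent_assignments, S=10.0, rng=None):
    """Draw a CPT for a k-state node: row a is sampled from
    Dirichlet(S * alpha * mu_a), where mu_a is the a-th rotation of
    mu = (1/1, 1/2, ..., 1/k) and alpha = 1 / sum(mu)."""
    rng = np.random.default_rng(rng)
    mu = 1.0 / np.arange(1, k + 1)
    alpha = 1.0 / mu.sum()
    cpt = np.empty((n_parent_assignments, k))
    for a in range(n_parent_assignments):
        cpt[a] = rng.dirichlet(S * alpha * np.roll(mu, a))
    return cpt

# Lower S (stronger dependence) tends to give more skewed rows than S = 10.
strong = dirichlet_cpt(k=3, n_parent_assignments=4, S=1.0, rng=0)
weak = dirichlet_cpt(k=3, n_parent_assignments=4, S=10.0, rng=0)
```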
\begin{table} \centering { \small \begin{tabular}{rrrr} \hline & Dirichlet & Linear-Gaussian & Logistic \\ \hline Postgres & 0.92 & 1.10 & 1.17 \\ JDK & 2.20 & 2.51 & 3.05 \\ HTTP & -0.31 & 0.45 & 0.54 \\ \hline \end{tabular} } \caption{Differences in means of total variation for all pairs of real domains and synthetic counterparts. All differences are strongly significant using Tukey's test.} \label{tbl:difference-in-means} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{real-synthetic-comparison-Postgres.png} \caption{Relative Performance on the Postgres dataset and synthetic look-alike configurations.} \label{fig:real-synthetic-compare-postgres} \end{figure} We sought to characterize what alterations could be made to synthetic data generation techniques to reach the same level of difficulty as the real domains. In the unbiased models for the three real datasets, it is common for treatment nodes to causally affect three or four outcomes. This stands in contrast to typical synthetic structures, for which nodes have one parent and one child in expectation. To characterize the effect of this property on learned causal structures, we adjusted the synthetic data generation techniques to include additional edges between treatments and outcomes, and also varied the strength of dependence. Focusing on the largest dataset, Postgres, we generated synthetic configurations with treatments varying in out-degree from 3 to 5 and dependence strength varying from 1 to 5. For the linear-Gaussian and logistic techniques, all settings remained significantly less challenging than the real data. While the linear-Gaussian and logistic techniques can encode only a specific (generalized) linear form of dependence, the Dirichlet model can encode arbitrary multinomial CPTs. For the Dirichlet technique, some settings of the dependence strength and sparsity parameters yield datasets which are at least as challenging as Postgres (Table~\ref{tbl:sparsity-sweep-dirichlet}).
This suggests that there are three key properties necessary for realistic synthetic evaluation: relatively dense connectivity, strong dependencies, and complex functional relationships. \begin{table} \setlength{\extrarowheight}{1pt} \centering { \small \begin{tabular}{ | r r | rrr | } \hline & & \multicolumn{3}{ c |}{Dependence Strength} \\ & & 1 & 3 & 5 \\ \hline \multirow{3}{*}{\rotatebox[origin=c]{90}{Degree}} & 3 & 0.73 & 0.37 & - \\ & 4 & 0.58 & - & - \\ & 5 & - & - & -0.93 \\ \hline \end{tabular} } \caption{Mean difference in pairwise TV between Postgres and synthetic Dirichlet models with varying parameters. Positive numbers indicate that the Postgres configuration was more difficult than the Dirichlet counterpart. Hyphens indicate that no significant difference was present.} \label{tbl:sparsity-sweep-dirichlet} \end{table} \section{CONCLUSIONS} In this work, we provided empirical demonstrations that structural distance measures do not correspond to the quality of interventional distributions. Structural Hamming distance and structural intervention distance penalize model over-specification in a way that is inconsistent with the parametric quality of estimated causal effects. Structural distances disagree with total variation distance as dependence strength varies, and can lead to different conclusions about relative algorithmic performance on commonly used synthetic datasets. Through a simple theoretical argument, we have shown that total variation distance captures a wider variety of modeling errors than structural intervention distance, and is more closely related to applications of causal discovery methods. The synthetic datasets we studied are typically less challenging than the real datasets gathered from computational systems---suggesting that commonly employed synthetic evaluations have been unrealistically simple.
We found that increasing network density, strength of dependence, and generating data with complex conditional models can yield synthetic models that are as challenging as real datasets. \bibliographystyle{named}
\section{Introduction} The forces induced on an isolated system by reaction to the emission of gravitational waves, and their impact on the motion of gravitationally bound binary systems, have been studied with great accuracy since the first derivation of the radiation reaction force in General Relativity by Burke and Thorne \cite{BurkeThorne}. Their phenomenological impact is linked to the forthcoming observation runs of the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo, see \cite{:2012dr} for the result of the latest compact binary coalescence search, and radiation reaction effects have already been observed to be at play in binary pulsar systems \cite{Hulse:1974eb,Taylor:1982zz}. The motion of coalescing binaries is imprinted in the shape of the emitted gravitational waves, and the output of gravitational detectors is particularly sensitive to the time-varying phase of the radiated wave, which has to be determined with high accuracy in order to ensure high efficiency of the detection algorithms and faithful source parameter reconstruction.\\ The standard approach to describe the motion of coalescing binaries lies within the post-Newtonian (PN) approximation to General Relativity, describing the binary system dynamics as a perturbative series in terms of the relative velocity of the binary constituents, see e.g. \cite{Blanchet_living} for a review. The leading effect of radiation reaction modifies the binary dynamics giving rise to a term non-invariant under time reversal which affects the dynamics of the system at 2.5PN order: this is the lowest order at which linear effects of the gravitational radiation enter.\\ In \cite{Blanchet:1987wq,Blanchet:1993ng} the leading non-linear radiation reaction effect has been derived, see also \cite{Blanchet:1993ec}, and it is shown to modify the binary dynamics at 4PN order (i.e. at 1.5PN order relative to the leading effect): it belongs to the species of terms dubbed \emph{hereditary}, as it depends on the entire history of the source.
In particular it originates from radiation emitted and then scattered back into the system by the background curvature generated by the total mass $M$ of the binary system, hence the name of \emph{tail} term. This non-linear 4PN tail term is in good agreement with computations performed within the framework of the gravitational self-force analysis of circular orbits in a Schwarzschild background, as found in \cite{Blanchet:2010cx,Blanchet:2010zd}, and it includes both a term non-invariant and a term \emph{invariant} under time reversal; the latter can be incorporated in the conservative dynamics of the binary system. Here we present the re-computation of the 4PN tail contribution to the dynamics of inspiraling binaries via the use of the Effective Field Theory (EFT) methods for gravity introduced in \cite{Goldberger:2004jt}. EFT methods turn out to be useful in problems admitting a clear scale separation: in the binary system case we have the size of the compact objects $r_s$, the orbital separation $r$ at which the system consists of point particles interacting via instantaneous potentials, and the gravitational wavelength $\lambda$ (with hierarchy $r_s < r \sim r_s/v^2 < \lambda \sim r/v$, where $v$ is the relative velocity between the two bodies) at which the binary system can be described as a particle of negligible size endowed with multipoles. Using this approach several different groups have reproduced results in the PN analysis which had been previously computed in the standard approach, both in the conservative \cite{EFToldcons,Foffa:2011ub} and in the dissipative \cite{Goldberger:2009qd,EFTolddis} sector, and EFT methods have also been applied within the extreme mass ratio limit approach to binary coalescence \cite{EFTextmass}.
Moreover, new results in the PN analysis have been made available by the use of the EFT method in both sectors \cite{EFTnewcons,EFTnewdis,Porto:2012as,Goldberger:2012kf,Foffa:2012rn}. The leading orders of the radiation reaction effects have been reproduced via effective field theory in \cite{Galley:2009px, Galley:2012qs}, extending the Lagrangian formalism to include time-asymmetric systems; see \cite{Galley:2012hx} for a rigorous extension of Hamilton's principle to generally dissipative systems. \begin{center} \begin{figure}[t] \includegraphics[width=.45\linewidth]{effectivea} \includegraphics[width=.45\linewidth]{effectiveb} \caption{Diagram describing the gravitational radiation reaction force at leading order (left) and leading non-linear order (right). The thick line represents the massive binary system, the curly line the gravitons emitted and absorbed by the system, the dashed line the potential graviton responsible for the Newtonian potential.} \label{fig:radReac} \end{figure} \end{center} \section{Radiation reaction logarithms from tail term} \label{se:QQ} In the following we use $c=1$ units and the mostly plus signature convention. Contractions of space indices are taken with the Kronecker delta. We work in generic $d$ space dimensions, as we adopt dimensional regularization to handle divergences. It will be convenient to use the $d+1$-dimensional Planck mass $\Lambda\equiv (32\pi G)^{-1/2}$, where $G$ is the $d+1$-dimensional gravitational constant. Following the EFT framework for non-relativistic General Relativity, after integrating out the ``potential'' gravitons, one is left with an effective action at the orbital scale $r$, describing radiation gravitons coupled to the multipole moments of the compact binary system.
In this limit, by adopting the decomposition of the metric suggested in \cite{Kol} \begin{eqnarray} \label{eq:metric} g_{\mu\nu}=e^{2\phi/\Lambda}\pa{ \begin{array}{cc} -1 & A_j/\Lambda\\ A_i/\Lambda & \quad e^{-c_d\phi/\Lambda}\pa{\delta_{ij}+\sigma_{ij}/\Lambda}-A_iA_j/\Lambda^2 \end{array}}\,, \end{eqnarray} with $c_d\equiv 2\frac{(d-1)}{(d-2)}$, the dynamics is described by an effective world-line Lagrangian coupled to gravity, whose relevant terms for the present work are \begin{eqnarray} \label{eq:smult} S_{mult}\supset -\int d\tau \pa{M+\frac{M\phi}{\Lambda}-\frac 12 Q_{ij}R^0_{\ i0j}} \,, \end{eqnarray} where $R^0_{\ i0j}$ denotes the appropriate component of the Riemann tensor, and $M$ and $Q_{ij}$ are, respectively, the mass monopole and quadrupole moments of the system; dependence on the time-like coordinate parametrizing the world-line is understood in all terms in $S_{mult}$. The bulk dynamics of the gravitational fields $\phi$, $A$ and $\sigma$ is given by the standard Einstein-Hilbert action plus gauge fixing, whose terms relevant for the present calculation are reported in the appendix. In order to perform the computation of the diagrams of fig.~\ref{fig:radReac}, boundary conditions asymmetric in time have to be imposed, as no incoming radiation at past null infinity is required. Technically this is implemented by adopting a generalization of Hamilton's variational principle similar to the closed-time-path, or in-in, formalism (first proposed in \cite{Schwinger:1960qe}, see \cite{deWitt} for a review), as described in \cite{Galley:2012hx}, which requires a doubling of the field variables.
For instance, for a free scalar field $\Psi$, the in-in effective action $\mathcal{S}_{eff}$, which generates connected correlation functions, has the path integral representation \renewcommand{\arraystretch}{1.5} \begin{eqnarray} \label{eq:doubleP} \begin{array}{rcl} \displaystyle e^{i\mathcal{S}_{eff}[J_1,J_2]}&=&\displaystyle\int \mathcal{D}\Psi_1\mathcal{D}\Psi_2 \exp\pag{\int d^{d+1}x \,\paq{-\frac i2(\partial\Psi_1)^2+\frac i2(\partial\Psi_2)^2+iJ_1\Psi_1-iJ_2\Psi_2}}\,. \end{array} \end{eqnarray} \renewcommand{\arraystretch}{1.} In this toy example the path integral can be performed exactly, and using the Keldysh representation \cite{Keldysh:1964ud} defined by $\Psi_-\equiv\Psi_1-\Psi_2$, $\Psi_+\equiv (\Psi_1+\Psi_2)/2$, one can write \begin{eqnarray} \mathcal{S}_{eff}[J_+,J_-]=\frac i2\int d^4x\,d^4y J_B(x)G^{BC}(x-y)J_C(y)\,, \end{eqnarray} where the $B,C$ indices take values $\{+,-\}$ and \begin{eqnarray} \label{eq:prop} G^{BC}(t,{\bf x})= \pa{\begin{array}{cc} 0 & -iG_A(t,{\bf x})\\ -iG_R(t,{\bf x}) & \frac 12 G_H(t,{\bf x}) \end{array}}\,, \end{eqnarray} where $G^{++}=0$ and $G_{A,R,H}$ are the usual advanced and retarded propagators and the Hadamard function, respectively; see sec.~\ref{se:apGreen} for more detailed formulae. In our case, the expression of the quadrupole in terms of the binary constituents' world-lines ${\bf x}_a$, i.e. \begin{eqnarray} Q_{ij}\equiv\sum_{a=1}^2 m_a\pa{{\bf x}_{ai}{\bf x}_{aj}-\frac{\delta_{ij}}d{\bf x}_{ak}{\bf x}_{ak}}\,, \end{eqnarray} is doubled to \renewcommand{\arraystretch}{.6} \begin{eqnarray} \begin{array}{rcl} \displaystyle Q_{-ij}&=&\displaystyle\sum_{a=1}^2m_a\pa{x_{-ai}x_{+aj}+x_{+ai}x_{-aj}-\frac 2d\delta_{ij}x_{+ak}x_{-ak}}\\ \displaystyle Q_{+ij}&=&\displaystyle\sum_{a=1}^2m_a\pa{x_{+ai}x_{+aj}-\frac 1d\delta_{ij}x_{+ak}x_{+ak}}+O(x_-^2)\,.
\end{array} \end{eqnarray} \renewcommand{\arraystretch}{1.} The world-line equations of motion that properly include radiation reaction effects are given by \begin{eqnarray} \label{eq:eqmoto} \left.0=\frac{\delta S_{eff}[{\bf x}_{1\pm},{\bf x}_{2\pm}]}{\delta {\bf x}_{a-}}\right|_{\substack{{\bf x}_{a-}=0\\ {\bf x}_{a+}={\bf x}_a}}\,. \end{eqnarray} At lowest order, by integrating out the radiation graviton, i.e. by computing the diagram on the left of fig.~\ref{fig:radReac}, the Burke-Thorne potential \cite{BurkeThorne} is obtained from the action \begin{eqnarray} \label{eq:BT} S_{eff}^{(Q^2)}=-\frac{G_N}5\int dt\,Q_{-ij}(t)Q^{(5)}_{+ij}(t)\,, \end{eqnarray} where $A^{(n)}(t)\equiv d^nA(t)/dt^n$ and $G_N$ is the standard Newton constant; this potential has been derived in the EFT framework in \cite{Galley:2009px}. Corrections to the leading effect appear at relative 1PN order due to the inclusion of higher multipoles and the 1PN modified dynamics of the quadrupole \cite{355730,Galley:2012qs}. The genuinely non-linear effect appears at relative 1.5PN order and is due to the rightmost diagram in fig.~\ref{fig:radReac}. In order to compute $S_{eff}$ we expand the metric as in eq.~(\ref{eq:metric}) and integrate out the fluctuations $\phi,A,\sigma$ according to the in-in prescription, obtaining an effective action \begin{eqnarray} \label{eq:Wick} e^{iS_{eff}[M,Q_\pm]}=\int D\phi_\pm D\sigma_\pm DA_\pm e^{iS(\phi_\pm,\sigma_\pm,A_\pm,M,Q_\pm)}\,, \end{eqnarray} for the multipole moments alone (we have denoted by $S(\phi_\pm,\sigma_\pm,A_\pm,M,Q_\pm)$ the action including both the standard Einstein-Hilbert action, plus gauge-fixing, and the $S_{mult}$ from eq.~(\ref{eq:smult})).
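As an illustration of how the prescription (\ref{eq:eqmoto}) operates, varying eq.~(\ref{eq:BT}) with respect to $x_{a-i}$, including the kinetic term in the variation and using the trace-free property of $Q_{ij}$, reproduces the familiar Burke-Thorne acceleration (a consistency check of the normalizations used here):
\begin{eqnarray}
\ddot x_{ai}=\frac 1{m_a}\left.\frac{\delta S_{eff}^{(Q^2)}}{\delta x_{a-i}}\right|_{\substack{{\bf x}_{a-}=0\\ {\bf x}_{a+}={\bf x}_a}}=-\frac{2G_N}5\,x_{aj}\,Q^{(5)}_{ij}(t)\,.
\end{eqnarray}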
The diagram on the right of fig.~\ref{fig:radReac}, see sec.~\ref{ss:coord} for computation details, gives the following logarithmic contribution to the effective action (by virtue of eq.~(\ref{eq:eqmoto}) only terms linear in $Q_-$ are kept) \begin{eqnarray} \label{eq:radReacT} S_{eff}^{(MQ^2)}=-\frac 45G_N^2M\int dt\, Q_{-ij}(t)\int_{-\infty}^t dt'Q_{+ij}^{(6)}\, \frac 1{(t-t')} \end{eqnarray} which exhibits a short-distance singularity for the gravitational wave being emitted and absorbed at the same space-time point (with Green functions used in their $d=3$ expression). Actually the complete result of the tail diagram in fig.~\ref{fig:radReac} in momentum space, dimensionally regularized, reads \renewcommand{\arraystretch}{1.4} \begin{eqnarray} \label{eq:radReacRes} \begin{array}{rcl} S_{eff}^{(MQ^2)}&=&\displaystyle -\frac 15G_N^2M\int_{-\infty}^\infty\frac{dk_0}{2\pi}\,k_0^6 \displaystyle\pa{\frac 1\epsilon-\frac{41}{30}+i\pi-\log\pi+\gamma+ \log(k_0^2/\mu^2)}\times\\ &&\displaystyle\paq{Q_{ij-}(k_0)Q_{ij+}(-k_0)+Q_{ij-}(-k_0)Q_{ij+}(k_0)}\,, \end{array} \end{eqnarray} \renewcommand{\arraystretch}{1.} where the $3+1$-dimensional gravitational constant $G_N$ is related to the one in generic space dimension $d$ by $G=1/(32\pi\Lambda^2)=G_N\mu^{-\epsilon}$. By performing the computation in $d=3+\epsilon$ the logarithmic divergence has been regularized and a spurious dependence on the arbitrary subtraction scale $\mu$ has been introduced \footnote{The logarithmic term is non-analytic in $k_0$-space and non-local (but causal) in direct space: after integrating out a massless propagating degree of freedom the effective action does not have to be local, and indeed it is not.}. A local counterterm $M_{ct}$ defined by \begin{eqnarray} M_{ct}=-\frac{2G_N^2}5M\pa{\frac 1\epsilon +\gamma -\log\pi}Q_{-ij}Q_{+ij}^{(6)} \end{eqnarray} can be straightforwardly added to the world-line effective action to get rid of the divergence appearing as $\epsilon\to 0$.
According to the standard renormalization procedure, one can define a renormalized mass $M^{(R)}(t,\mu)$ for the monopole term in the action (\ref{eq:smult}), depending on time (or frequency) and on the arbitrary scale $\mu$, in such a way that physical quantities (like the energy or the radiation reaction force) will be $\mu$-independent. Note that at the order required in the diagram in fig.~\ref{fig:radReac}, $M^{(R)}(t,\mu)$ can be safely treated as a constant $M$ in both its arguments $t$ and $\mu$. Also the renormalization of the quadrupole moment, which occurs at 3PN order with respect to its leading value, see \cite{Goldberger:2009qd}, can be neglected here. From eq.~(\ref{eq:radReacRes}), representing the contribution to the radiation reaction force of the tail diagram, the multipole effective action relevant for the tail process can be derived to be: \renewcommand{\arraystretch}{1.4} \begin{eqnarray} \label{eq:radReacRen} \begin{array}{rcl} \displaystyle S_{eff}&=&\displaystyle\int\frac{dk_0}{2\pi}\left\{M^{(R)}(k_0,\mu) -i\frac{G_N}{5}k_0^5Q_{-ij}(-k_0)Q_{+ij}(k_0)\right.\\ &&\displaystyle \!\!\!\!\!\! \left.-\frac{G_N^2}5M\,k_0^6 \paq{\log(k_0^2/\mu^2)-\frac{41}{30}+i\pi} \pa{Q_{ij-}(k_0)Q_{ij+}(-k_0)+Q_{ij-}(-k_0)Q_{ij+}(k_0)}\right\}\,, \end{array} \end{eqnarray} \renewcommand{\arraystretch}{1.} where the renormalized monopole term appears: we will now determine its explicit $\mu$ dependence, recovering the result found in \cite{Goldberger:2012kf}, by requiring that the physical energy does not depend on $\mu$.
\begin{figure} \begin{center} \includegraphics[width=.32\linewidth]{effectiveMa} \includegraphics[width=.32\linewidth]{effectiveMb} \includegraphics[width=.32\linewidth]{effectiveMc}\\ \includegraphics[width=.32\linewidth]{effectiveMd} \includegraphics[width=.32\linewidth]{effectiveMe} \end{center} \caption{Series of diagrams studied in \cite{Goldberger:2012kf} showing that the mass monopole undergoes a non-trivial renormalization group flow.} \label{fig:massRen} \end{figure} We start by deriving the contribution that the effective action (\ref{eq:radReacRen}) makes to the equations of motion of the binary constituents, restricting to the logarithmic term \begin{eqnarray} \label{eq:resreg} \left.\delta\ddot x_{ai}(t)\right|_{log}= -\frac 85 x_{aj}(t)G_N^2 M\int^t_{-\infty}dt'\,Q_{ij}^{(7)}(t')\log\paq{(t-t')\mu}\,, \end{eqnarray} which agrees with the result obtained in \cite{Blanchet:1993ng}. Note that the normalization of the time (i.e. the value of $\mu$) in the logarithm is arbitrary: changing the time normalization shifts the right-hand side of eq.~(\ref{eq:resreg}) by a quantity proportional to $x_{aj}(t)G_N^2 M Q_{ij}^{(6)}(t)$, see the next section for a discussion of such an analytic, local term. Following \cite{Blanchet:2010zd}, we can separate the logarithm argument into a $t$-dependent and a $t$-independent part via the trivial identity \begin{eqnarray} \log\paq{(t-t')\mu}=\log\paq{(t-t')/\lambda}+\log\pa{\lambda\mu}\,, \end{eqnarray} valid for any $\lambda$. The logarithmic term not involving time gives a \emph{conservative} contribution to the force in eq.~(\ref{eq:resreg}), which results in a logarithmic shift $\delta M$ of the mass of the binary system.
The mass-shift $\delta M$ can be determined by observing that \begin{eqnarray} \frac{d(\delta M^{(R)})}{dt}=-\sum_a m_a\delta\ddot x_{ai}\dot x_{ai}\,, \end{eqnarray} and thus the tail contribution to the conservative part of the energy $E$ is \cite{Blanchet:2010zd} \begin{eqnarray} \label{eq:dE} E=M^{(R)}+\sum_a m_a\delta\ddot x_{ai}x_{ai}=M^{(R)}+\frac{2G_N^2M}5\pa{2Q_{ij}^{(5)}Q_{ij}^{(1)}-2Q_{ij}^{(4)}Q_{ij}^{(2)}+Q_{ij}^{(3)}Q_{ij}^{(3)}}\log(\mu\lambda)\,, \end{eqnarray} where the renormalized monopole term has also been included, as it gives a contribution of the order $G_NMQ^2$: actually, by imposing that the physical energy $E$ does not depend on $\mu$, one finds the renormalization group flow equation \begin{eqnarray} \mu\frac{d}{d\mu}M(t,\mu)=-\frac{2G_N^2M}5\pa{2Q_{ij}^{(5)}Q_{ij}^{(1)} -2Q_{ij}^{(4)}Q_{ij}^{(2)}+Q_{ij}^{(3)}Q_{ij}^{(3)}}\,, \end{eqnarray} which agrees with the result found in \cite{Goldberger:2012kf}, where the monopole mass $M$ (identified with the Bondi mass of the binary system) is shown to undergo a non-trivial renormalization group flow by analyzing the diagrams in fig.~\ref{fig:massRen}. \section{Finite quantity from tail terms} What about the finite part? The divergence encountered in the previous section comes from the lack of UV-completeness of the effective model when treating the coalescing binary as a fundamental system endowed with multipole moments: the exact numerical result is sensitive to the short-distance physics, and the EFT in terms of source multipoles does not know about it. Such a numerical quantity can be fixed by performing the radiation reaction computation in the full theory of gravity coupled to individual (point-like) binary constituents. Within the traditional approach, the finite analytic term entering the radiation reaction force was actually computed in \cite{Blanchet:1993ng}, by relating the radiation reaction potential to the ``anti-symmetric'' (i.e.
non-time invariant) wave perturbation of the time-time component of the metric generated by the quadrupole, which was in turn fixed to the $ij$ component. The gravitational wave in the Transverse-Traceless gauge, including the tail effect, has been computed in \cite{Blanchet:1993ng,Blanchet:1993ec} to be: \begin{eqnarray} \label{eq:htail} h^{(TT)}_{ij}=-\Lambda_{ij,kl}\frac{2G_NM}r\int\frac{dk_0}{2\pi} e^{ik_0(t-r)}k_0^2Q^{(tail)}_{kl}(k_0)\,, \end{eqnarray} with \begin{eqnarray} \label{eq:qTail} Q^{(tail)}_{kl}\equiv Q_{kl}\pag{1+G_NMk_0 \paq{-2i\pa{\frac 1\epsilon+\log(k_0/\mu)+ \frac{\gamma}2-\frac{11}{12}}+\pi\,{\rm sgn}(k_0)}}, \end{eqnarray} showing a long-scale (IR) divergence due to the gravitational wave emitted by the quadrupole source and scattered off by the long-range Newtonian potential. The IR singularity in the phase of the emitted wave is un-physical, as it can be absorbed in a re-definition of time in eq.~(\ref{eq:htail}) by exponentiating the imaginary term in eq.~(\ref{eq:qTail}). Moreover any experiment, like LIGO and Virgo for instance, can only probe phase \emph{differences} (e.g. the gravitational wave phase difference between the instants when the wave enters and exits the experiment sensitive band), so the un-physical dependencies on the regulator $\epsilon$ and on the subtraction scale $\mu$ drop out of any observable. This result has been re-derived within EFT techniques in \cite{Porto:2012as} by computing the diagram in fig.~\ref{fig:htail}. Actually the diagram computed in the previous section is related to the one in fig.~\ref{fig:htail}. In order to recover the right diagram in fig.~\ref{fig:radReac} from fig.~\ref{fig:htail}, the emitted gravitational wave has to be absorbed via another quadrupole insertion. In this process, the IR singularity of fig.~\ref{fig:htail} is turned into the UV one of fig.~\ref{fig:radReac}, which occurs when the time difference between emission and absorption goes to zero.
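Schematically, the absorption of the divergent phase works as follows (a sketch keeping only the $k_0$-independent part of the bracket; the $\log(k_0/\mu)$ piece can be absorbed analogously into a logarithmically shifted retarded time): exponentiating the imaginary term in eq.~(\ref{eq:qTail}) one has \begin{eqnarray} e^{ik_0(t-r)}\paq{1-2iG_NMk_0\pa{\frac 1\epsilon+\frac{\gamma}2-\frac{11}{12}}}\simeq e^{ik_0\paq{t-r-2G_NM\pa{\frac 1\epsilon+\frac{\gamma}2-\frac{11}{12}}}}\,, \end{eqnarray} i.e. a constant ($k_0$-independent) shift of the retarded time, which cannot affect phase differences.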
In \cite{Blanchet:1993ng} the radiation reaction potential corrected for the tail term was computed by observing that the tail effect amounts to shifting the quadrupole as per eq.~(\ref{eq:qTail}), and the radiation reaction potential can then be inferred by evaluating the radiation-reaction Burke-Thorne term in eq.~(\ref{eq:BT}) on the ``shifted'' quadrupole moment given by eq.~(\ref{eq:qTail}). Such a procedure gives the correct logarithmic term for the radiation reaction force, enabling one to fix the finite term analytic in $k_0$. The finite piece in the tail term radiation reaction force is responsible for a conservative force at 4PN (as the leading radiation reaction acts at 2.5PN and the tail term is a 1.5PN correction to it), so it must be added to the conservative dynamics coming from the calculation of the effective action not involving gravitational radiation, see \cite{Foffa:2012rn,Jaranowski:2012eb} for partial results at 4PN. In particular, the conservative part in the radiation reaction force affects the (coordinate transformation invariant) energy of circular orbits $E(x,\nu)$, where $x$ is the PN expansion parameter $x\equiv (G M\omega)^{2/3}$ and $\omega$ the angular frequency of circular orbits. Such an energy function depends also on the symmetric mass ratio parameter $\nu\equiv m_1 m_2/M^2$ and linearly on the total mass $M$; the tail effect gives a contribution proportional to $\nu^2$ to the 4PN energy of circular orbits $E_{4PN}(x,\nu)$ (the $O(\nu)$ contribution is known from the Schwarzschild limit).
The complete $\nu^2$ term of $E_{4PN}(x,\nu)$ has been computed in \cite{LeTiec:2011ab} within the context of the extreme mass ratio inspiral approximation (where $\nu$ is the expansion parameter and the metric is expanded around the curved background created by the more massive object forming the binary system), with the result \begin{eqnarray} \left.E_{4PN}(x,\nu)\right|_{\nu^2}=-\frac 12\nu^2 M x^5 \pa{e_1+\frac{448}{15}\log(x)}\,, \end{eqnarray} with $e_1\simeq 153.8803$ \cite{LeTiec:2011ab}. The logarithmic term matches the one derived within the PN approximation, either with traditional methods as done in \cite{Blanchet:2010zd}, or with EFT methods: via the computation of the mass renormalization as done in \cite{Goldberger:2012kf}, or via the computation of the radiation reaction force as done here. Work is under way to derive the full $E_{4PN}(x,\nu)$ in a PN context. \begin{center} \begin{figure} \includegraphics[width=.6\linewidth]{effectivehij} \caption{Diagram describing the gravitational radiation emitted by a quadrupole source and scattered off the Newtonian potential before escaping to infinity.} \label{fig:htail} \end{figure} \end{center} \section{Conclusion} The conservative dynamics of gravitationally bound binary systems is completely known in the literature up to the third post-Newtonian order, the result having been derived both by traditional methods and within the context of effective field theory methods. At fourth post-Newtonian order the conservative dynamics receives a contribution from a process involving the emission and absorption of radiation (a so-called radiation reaction process), giving rise to logarithmic and analytic terms in the post-Newtonian expansion parameter. We have computed here the logarithmic part of the radiation reaction potential, affecting both the dissipative and the conservative dynamics, using effective field theory methods, and we have compared the result with related ones obtained with different methods within the post-Newtonian framework.
The fourth post-Newtonian order contribution to the conservative dynamics, at a specific order in the symmetric mass ratio, which includes the radiation reaction tail term, has been computed in the literature within the extreme mass ratio inspiral approximation, and its logarithmic part agrees with what has been computed within the framework of the post-Newtonian approximation to General Relativity. Work is under way within the post-Newtonian approximation to recover the full fourth-order energy function, including all the terms analytic in the post-Newtonian expansion parameter. \section*{Acknowledgments} The authors wish to thank G. Cella and R. Porto for useful discussions. SF wishes to thank the Dipartimento di Scienze di Base e Fondamenti of the University of Urbino for kind hospitality during the realization of part of this work; RS wishes to thank the D\'epartment de Physique Th\'eorique of the University of Geneva and the Instituto de F\'\i sica Te\'orica of the UNESP of Sao Paulo for kind hospitality and support during the realization of part of this work. The work of SF is supported by the Fonds National Suisse, the work of RS is supported by the EGO Consortium through the VESF fellowship EGO-DIR-41-2010.
\section*{Supplemental material} \subsection{Near-zone AdS$_2$ geometry} \noindent Here we elaborate on the geometry of the $(t,r)$ subspace of the near-zone metric~\eqref{near zone metric}. This metric describes ${\rm AdS}_2$ in de Sitter-slice coordinates. In order to see this, we make the coordinate transformation \begin{equation}\label{AdS coords} \tau = \frac{t}{2 r_s} \, , \qquad\quad \xi = \cosh^{-1} \left(\frac{2 r}{r_s} -1\right) \, , \end{equation} so that the two-dimensional metric ${\rm d} s^2= - \frac{\Delta}{r_s^2} {\rm d} t^2 + \frac{r_s^2 }{\Delta} {\rm d} r^2$ becomes \begin{align} \label{AdS2 metric} {\rm d} s^2 = r_s^2 \left( {\rm d} \xi^2 - \sinh^2 \! \xi \, {\rm d} \tau ^2 \right), \end{align} with $\tau \!\in\! (-\infty, +\infty), \, \xi \!\in\! [0, + \infty)$. Notice in particular that $\xi = 0$ corresponds to $r = r_s$. This is an AdS$_2$ metric. To see this explicitly, we note that these coordinates correspond to the embedding \begin{align} X_0 = r_s \cosh \xi, \qquad\qquad X_1 = r_s \sinh \xi \sinh \tau, \qquad\qquad X_2 = r_s \sinh \xi \cosh \tau, \end{align} which satisfy $-X_0^2 - X_1^2 +X^2_2 = -r_s^2$, and cover the region of this hyperboloid that satisfies $X_0 \geq r_s, X_2\geq 0$. This portion of AdS$_2$ is depicted in Figure~\ref{fig}. \begin{figure}[h!] 
\includegraphics[scale=.8]{ads2dsslice} \caption{\small Portion of ${\rm AdS}_2$ covered by the de Sitter slice coordinates $(\tau, \xi)$, along with lines of constant $\tau$ and $\xi$.}\label{fig} \end{figure} \subsection{(Conformal) Killing vectors of near-zone Kerr geometry} \noindent The (conformal) Killing vectors of the near-zone Kerr metric in eq.~\eqref{near zone metric Kerr} (or, equivalently, eq.~\eqref{near zone 2}) are: \begin{subequations} \label{app:genkerr} \begin{align} T &= \mathcal{R}\partial_t+\tfrac{2a}{r_\star}\partial_\varphi,\label{Tkerr}\\ J_{01} &= - \tfrac{2 \Delta}{r_\star} \cos \theta \, \partial_r - \tfrac{\partial_r \Delta}{r_\star} \sin \theta \,\partial_\theta, \label{J01kerr} \\ J_{02} &= - \cos \varphi' \left[ \tfrac{2 \Delta}{r_\star} \sin \theta \, \partial_r + \tfrac{\partial_r \Delta}{r_\star} \left( \tfrac{\tan\varphi'}{\sin\theta} \partial_\varphi - \cos \theta \partial_\theta \right) \right], \\ J_{03} &= - \sin \varphi' \left[ \tfrac{2 \Delta}{r_\star} \sin \theta \, \partial_r - \tfrac{\partial_r \Delta}{r_\star} \left( \tfrac{\cot\varphi'}{\sin\theta} \partial_\varphi + \cos \theta \partial_\theta \right) \right], \\ J_{12} &= \cos \varphi' \partial_\theta - \cot \theta \sin \varphi' \, \partial_\varphi , \\ J_{13} &= \sin \varphi' \partial_\theta + \cot \theta \cos \varphi' \, \partial_\varphi, \\ J_{23} &= \partial_\varphi,\\ L_\pm &= e^{\pm t/\mathcal{R}}\left[\mathcal{R}(\partial_r\sqrt\Delta)\partial_t\mp\sqrt\Delta\partial_r+\tfrac{2a}{r_\star}(\partial_r\sqrt\Delta)\partial_\varphi\right]\label{Lkerr}\\ K_{\pm} &= e^{\pm t/{\cal R}} \tfrac{\sqrt{\Delta}}{r_\star} \cos \theta \left( \tfrac{r_s r_+ r_\star}{\Delta}\partial_t \mp \partial_r \Delta \partial_r \mp 2 \tan \theta \partial_\theta +\tfrac{ar_\star}{\Delta}\partial_\varphi \right) , \\ M_{\pm} &= e^{\pm t/{\cal R}} \cos \varphi' \left[ \tfrac{r_s r_+}{\sqrt{\Delta}} \sin \theta \partial_t \mp \tfrac{\sqrt{\Delta} \partial_r \Delta \sin \theta}{r_\star } 
\partial_r \pm \tfrac{2 \sqrt{\Delta}}{r_\star} \cos \theta \partial_\theta +\left(\tfrac{a\sin\theta}{\sqrt\Delta}\mp \tfrac{2 \sqrt{\Delta}}{r_\star} \tfrac{\tan \varphi'}{\sin \theta}\right) \partial_\varphi \right] , \\ N_{\pm} &= e^{\pm t/{\cal R}} \sin \varphi' \left[ \tfrac{r_s r_+}{\sqrt{\Delta}} \sin \theta \partial_t \mp \tfrac{\sqrt{\Delta} \partial_r \Delta \sin \theta}{r_\star } \partial_r \pm \tfrac{2 \sqrt{\Delta}}{r_\star} \cos \theta \partial_\theta +\left(\tfrac{a\sin\theta}{\sqrt\Delta}\pm \tfrac{2 \sqrt{\Delta}}{r_\star} \tfrac{\cot \varphi'}{\sin \theta}\right) \partial_\varphi \right], \end{align} \end{subequations} where we have defined $\mathcal{R} = \frac{2r_s r_+}{r_\star}$, $\varphi'=\varphi-\frac{a}{r_sr_+}t$ and $r_\star\equiv r_+-r_-$. Note that these reduce to~\eqref{KVs} and~\eqref{CKVs} in the limit $a\to 0$. The generators \eqref{app:genkerr} satisfy the ${\rm so}(4,2)$ algebra. We can make this explicit by defining $J_{54} = T$ along with \begin{subequations} \label{Ji4} \begin{align} J_{04} &= \frac{L_+ - L_-}{2}, & J_{05} &= \frac{L_+ + L_-}{2} , \\ J_{14} &= \frac{K_+ - K_-}{2}, & J_{15} &= \frac{K_+ + K_-}{2} , \\ J_{24} &= \frac{M_+ - M_-}{2}, & J_{25} &= \frac{M_+ + M_-}{2} , \\ J_{34} &= \frac{N_+ - N_-}{2}, & J_{35} &= \frac{N_+ + N_-}{2}, \end{align} \end{subequations} which then have the ${\rm so}(4,2)$ commutation relations \begin{equation} [J_{AB}, J_{CD}] = \eta_{AD} J_{BC} + \eta_{BC} J_{AD} - \eta_{AC} J_{BD} - \eta_{BD} J_{AC} , \label{SO42cr} \end{equation} where $\eta_{AB} = \text{diag} (-1, 1, 1, 1, 1, -1)$. A few comments are in order. First, note that only $J_{01}$ and $J_{23}$ are time-independent (when expressing all quantities in Boyer--Lindquist coordinates) and are exact symmetries of the static sector~\cite{Hui:2021vcv}. The other generators depend explicitly on time and are not (C)KVs of the effective 3D Kerr metric of \cite{Hui:2021vcv}. 
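As an elementary check of the algebra on the purely angular generators (note that $\partial_\varphi\varphi'=1$), \begin{equation} [J_{23},J_{12}]=\left[\partial_\varphi,\,\cos\varphi'\,\partial_\theta-\cot\theta\sin\varphi'\,\partial_\varphi\right]=-\sin\varphi'\,\partial_\theta-\cot\theta\cos\varphi'\,\partial_\varphi=-J_{13}\,, \end{equation} in agreement with eq.~\eqref{SO42cr}, which gives $[J_{23},J_{12}]=\eta_{22}J_{31}=-J_{13}$.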
Second, the generators $L_\pm$ differ from the ones introduced in~\cite{Charalambous:2021kcz} for non-zero values of the spin parameter $a$. Interestingly, though, they coincide (up to a rescaling of the time coordinate) in the region close to the horizon defined by $(r - r_+)/r_+ \ll 1$. This is a manifestation of the fact that all the near-zone approximations of Kerr put forward in the literature actually coincide in this limit. Note also that some of the generators in \eqref{app:genkerr} are manifestly well defined in the extremal limit ($a \to r_s/2$, $r_\star\rightarrow0$), while others look singular in this limit. This is not a problem because one can consistently recover all the (C)KVs of the metric \eqref{near zone metric Kerr} at extremality by multiplying with suitable powers of $r_\star$ and taking linear combinations of the generators \eqref{app:genkerr}. For instance, in addition to $J_{ij}$ and $T$ (after extracting a $1/r_\star$ factor), the other two KVs of \eqref{near zone metric Kerr} in the extremal limit are obtained by expanding $J_{04}$ and the combination $(T -J_{05})/r_\star$ at leading order in $r_\star$. \subsection{Ladder in spin and finite frequency} \noindent A convenient near-zone approximation that describes the dynamics of particles of generic spin, $s$, in the limit $r_+ \leq r \ll1/\omega$ is \cite{Page:1976df,1974JETP...38....1S} \begin{equation} \label{near-zonTeuk} x (x+1) \partial_x^2R +(s+1) (2 x+1) \partial_xR + \left[- (\ell-s) (\ell+s+1) + \frac{q^2 +isq(2x+1)}{x(x+1)} \right]R=0, \end{equation} where $q\equiv r_sr_+(m\Omega_+-\omega)/(r_+-r_-)$ and \begin{equation} x\equiv \frac{r-r_+}{r_+-r_-} . 
\end{equation} It is straightforward to show that eq.~\eqref{near-zonTeuk} admits the following set of spin raising and lowering operators: \begin{equation} E^+_s = \left( \partial_r -\frac{ i r_sr_+ (\omega-m\Omega_+) }{\Delta } \right) R \, , \qquad\qquad E^-_s =\Delta^{-s+1} \left( \partial_r +\frac{ i r_sr_+ (\omega-m\Omega_+) }{\Delta } \right) \Delta^s R \, , \label{LaddersKerrPage} \end{equation} which generate solutions with spin $s+1$ and $s-1$ respectively, i.e., $R^{(s+1)}=E^+_sR^{(s)}$ and $R^{(s-1)}=E_s^-R^{(s)}$, where $R^{(s)}$ solves \eqref{near-zonTeuk} with spin $s$. The operators~\eqref{LaddersKerrPage} generalize the Teukolsky--Starobinsky identities~\cite{Press:1973zz,1974JETP...38....1S,Chandrasekhar:1985kt} in the near-zone regime by connecting solutions with consecutive spin $s$ and $s\pm1$. These spin raising and lowering operators provide a simple way of extending the results discussed above for spin-0 fields to spin-1 and spin-2 particles described by the Teukolsky equation~\cite{Hui:2021vcv}. \end{document}
\section{Introduction} An original goal of this paper was to extend the weak convergence methodology of Dupuis and Ellis \cite{dupuis-ellis} to the context of non-exponential (e.g., heavy-tailed) large deviations. While we claim only modest success in this regard, we do find some general-purpose large deviation upper bounds which can be seen as polynomial-rate analogs of the upper bounds in the classical theorems of Sanov and Cram\'er. At least as interesting, however, are the abstract principles behind these bounds, which have broad implications beyond the realm of large deviations. Let us first describe these abstract principles before specializing them in various ways. Let $E$ be a Polish space, and let ${\mathcal P}(E)$ denote the set of Borel probability measures on $E$ endowed with the topology of weak convergence. Let $B(E)$ (resp. $C_b(E)$) denote the set of measurable (resp. continuous) and bounded real-valued functions on $E$. For $n \ge 1$ and $\nu \in {\mathcal P}(E^n)$, define $\nu_{0,1} \in {\mathcal P}(E)$ and measurable maps $\nu_{k-1,k} : E^{k-1} \rightarrow {\mathcal P}(E)$ for $k=2,\ldots,n$ via the disintegration \[ \nu(dx_1,\ldots,dx_n) = \nu_{0,1}(dx_1)\prod_{k=2}^n\nu_{k-1,k}(x_1,\ldots,x_{k-1})(dx_k). \] In other words, if $(X_1,\ldots,X_n)$ is an $E^n$-valued random variable with law $\nu$, then $\nu_{0,1}$ is the law of $X_1$, and $\nu_{k-1,k}(X_1,\ldots,X_{k-1})$ is the conditional law of $X_k$ given $(X_1,\ldots,X_{k-1})$. Of course, $\nu_{k-1,k}$ are uniquely defined up to $\nu$-almost sure equality. The protagonist of the paper is a proper (i.e., not identically $\infty$) convex function $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$ with compact sub-level sets; that is, $\{\nu \in {\mathcal P}(E) : \alpha(\nu) \le c\}$ is compact for every $c \in {\mathbb R}$. 
For $n \ge 1$ define $\alpha_n : {\mathcal P}(E^n) \rightarrow (-\infty,\infty]$ by \[ \alpha_n(\nu) = \int_{E^n}\sum_{k=1}^n\alpha(\nu_{k-1,k}(x_1,\ldots,x_{k-1}))\,\nu(dx_1,\ldots,dx_n), \] and note that $\alpha_1 \equiv \alpha$. Define the convex conjugate $\rho_n : B(E^n) \rightarrow {\mathbb R}$ by \begin{align} \rho_n(f) = \sup_{\nu \in {\mathcal P}(E^n)}\left(\int_{E^n}f\,d\nu - \alpha_n(\nu)\right), \quad\quad \text{ and } \quad\quad \rho \equiv \rho_1. \label{intro:duality} \end{align} Our main interest is in evaluating $\rho_n$ at functions of the \emph{empirical measure} $L_n : E^n \rightarrow {\mathcal P}(E)$ defined by \[ L_n(x_1,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^n\delta_{x_i}. \] The main abstract result of the paper is the following extension of Sanov's theorem, proven in a more general form in Section \ref{se:sanovproof} by adapting the weak convergence techniques of Dupuis-Ellis \cite{dupuis-ellis}. \begin{theorem} \label{th:main-sanov} For $F \in C_b({\mathcal P}(E))$, \[ \lim_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ L_n) = \sup_{\nu \in {\mathcal P}(E)}(F(\nu) - \alpha(\nu)). \] \end{theorem} The guiding example is the relative entropy, $\alpha(\cdot) = H(\cdot | \mu)$, where $\mu \in {\mathcal P}(E)$ is a fixed reference measure, and $H$ is defined by \begin{align} H(\nu | \mu) = \int_E\log(d\nu/d\mu)\,d\nu, \text{ for } \nu \ll \mu, \quad\quad H(\nu | \mu) = \infty \text{ otherwise}. \label{def:relativeentropy} \end{align} It turns out that $\alpha_n(\cdot) = H(\cdot | \mu^n)$, by the so-called \emph{chain rule} of relative entropy \cite[Theorem B.2.1]{dupuis-ellis}. The dual $\rho_n$ is well known to be $\rho_n(f) = \log\int_{E^n}e^f\,d\mu^n$, and the formula relating $\rho_n$ and $\alpha_n$ is often known as the Gibbs variational principle or the Donsker-Varadhan formula.
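To see the chain rule at work in the simplest case $n=2$ (a standard computation): if $\nu \ll \mu\otimes\mu$, then $\nu$-almost surely \[ \frac{d\nu}{d(\mu\otimes\mu)}(x_1,x_2) = \frac{d\nu_{0,1}}{d\mu}(x_1)\,\frac{d\nu_{1,2}(x_1)}{d\mu}(x_2), \] so the logarithm in \eqref{def:relativeentropy} splits into two terms, and integrating against $\nu$ gives \[ H(\nu\,|\,\mu\otimes\mu) = H(\nu_{0,1}\,|\,\mu) + \int_E H(\nu_{1,2}(x_1)\,|\,\mu)\,\nu_{0,1}(dx_1) = \alpha_2(\nu). \]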
In this case Theorem \ref{th:main-sanov} reduces to the Laplace principle form of Sanov's theorem: \[ \lim_{n\rightarrow\infty}\frac{1}{n}\log\int_{E^n}e^{nF\circ L_n}\,d\mu^n = \sup_{\nu \in {\mathcal P}(E)}(F(\nu) - H(\nu | \mu)). \] Well-known theorems of Varadhan and Dupuis-Ellis (see \cite[Theorem 1.2.1 and 1.2.3]{dupuis-ellis}) assert the equivalence of this form of Sanov's theorem with the more common form: for every Borel set $A \subset {\mathcal P}(E)$ with closure $\overline{A}$ and interior $A^\circ$, \begin{align} -\inf_{\nu \in A^\circ}H(\nu | \mu) &\le \liminf_{n\rightarrow\infty}\frac{1}{n}\log\mu^n(L_n \in A) \\ &\le \limsup_{n\rightarrow\infty}\frac{1}{n}\log\mu^n(L_n \in A) \le -\inf_{\nu \in \overline{A}}H(\nu | \mu). \label{def:classicalsanov} \end{align} To derive this heuristically, apply Theorem \ref{th:main-sanov} to the function \begin{align} F(\nu) = \begin{cases} 0 &\text{if } \nu \in A \\ -\infty &\text{otherwise}. \end{cases} \label{def:convexindicator} \end{align} For general $\alpha$, Theorem \ref{th:main-sanov} does not permit an analogous \emph{equivalent} formulation in terms of deviation probabilities. In fact, for many $\alpha$, Theorem \ref{th:main-sanov} has nothing to do with large deviations (see Sections \ref{se:intro:lln} and \ref{se:intro:optimaltransport} below). Nonetheless, for certain $\alpha$, Theorem \ref{th:main-sanov} \emph{implies} interesting large deviations upper bounds, which we prove by formalizing the aforementioned heuristic. While many $\alpha$ admit fairly explicit known formulas for the dual $\rho$, the recurring challenge in applying Theorem \ref{th:main-sanov} is finding a useful expression for $\rho_n$, and herein lies but one of many instances of the wonderful tractability of relative entropy.
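In the relative-entropy case the heuristic is transparent: with $F$ as in \eqref{def:convexindicator}, $e^{nF\circ L_n}$ is (formally) the indicator function of the event $\{L_n \in A\}$, so \[ \frac{1}{n}\rho_n(nF\circ L_n) = \frac{1}{n}\log\int_{E^n}e^{nF\circ L_n}\,d\mu^n = \frac{1}{n}\log\mu^n(L_n \in A), \] while the right-hand side of Theorem \ref{th:main-sanov} becomes $\sup_{\nu \in A}(-H(\nu|\mu)) = -\inf_{\nu \in A}H(\nu | \mu)$; the discontinuity of this $F$ is what makes the argument heuristic rather than a proof.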
The examples to follow do admit good expressions for $\rho_n$, or at least workable one-sided bounds, but we also catalog in Section \ref{se:intro:alternatives} some natural alternative choices of $\alpha$ for which we did not find useful bounds or expressions for $\rho_n$. The functional $\rho$ is (up to a sign change) a \emph{convex risk measure}, in the language of F\"ollmer and Schied \cite{follmer-schied-book}. A rich duality theory for convex risk measures emerged over the past two decades, primarily geared toward applications in financial mathematics and optimization. We take advantage of this theory in Section \ref{se:riskmeasures} to demonstrate how $\alpha$ can be reconstructed from $\rho$ in many cases, which shows that $\rho$ could be taken as the starting point instead of $\alpha$. Additionally, the theory of risk measures provides insight on how to deal with the subtleties that arise in extending the domain of $\rho$ (and Theorem \ref{th:main-sanov}) to accommodate unbounded functions or stronger topologies on ${\mathcal P}(E)$. Section \ref{se:intro:riskmeasures} briefly reinterprets Theorem \ref{th:main-sanov} in a language more consistent with the risk measure literature. The reader familiar with risk measures may notice a \emph{time consistent dynamic risk measure} (see \cite{acciaio-penner-dynamic} for definitions and survey) hidden in the definition of $\rho_n$ above. We will make no use of the interpretation in terms of dynamic risk measures, but it did inspire a recursive formula for $\rho_n$ (similar to a result of \cite{cheridito2011composition}). To state it loosely, if $f \in B(E^n)$ then we may write \begin{align} \rho_n(f) = \rho_{n-1}(g), \quad \text{where} \quad g(x_1,\ldots,x_{n-1}) := \rho\left(f(x_1,\ldots,x_{n-1},\cdot)\right). 
\label{def:intro:rhon-recursive} \end{align} To make rigorous sense of this, we must note that $g : E^{n-1} \rightarrow {\mathbb R}$ is merely upper semianalytic and not Borel measurable in general, and that $\rho$ is well defined for such functions. We make this precise in Proposition \ref{pr:rhon-iterative}. This recursive formula is not essential for any of the arguments but is convenient for calculations. \subsection{Nonexponential large deviations} \label{se:intro:nonexpLDP} A first application of Theorem \ref{th:main-sanov} is to derive large deviation upper bounds in the absence of exponential rates or finite moment generating functions. While Cram\'er's theorem in full generality does not require any finite moments, the upper bound is often vacuous when the underlying random variables have heavy tails. This simple observation has driven a large and growing literature on large deviation asymptotics for sums of i.i.d. random variables, to be reviewed shortly. Our approach is well suited not to \emph{precise} asymptotics but rather to widely applicable upper bounds. In Section \ref{se:shortfall} we derive alternatives to the upper bounds of Sanov's and Cram\'er's theorems by applying (an extension of) Theorem \ref{th:main-sanov} with \begin{align} \alpha(\nu) = \|d\nu/d\mu\|_{L^p(\mu)}-1, \text{ for } \nu \ll \mu, \quad\quad \alpha(\nu) = \infty \text{ otherwise}, \label{intro:lpentropy} \end{align} where $\mu \in {\mathcal P}(E)$ is fixed. We state the results here: For a continuous function $\psi : E \rightarrow {\mathbb R}_+ := [0,\infty)$, let ${\mathcal P}_\psi(E)$ denote the set of $\nu \in {\mathcal P}(E)$ satisfying $\int\psi\,d\nu < \infty$, and equip ${\mathcal P}_\psi(E)$ with the topology induced by the linear maps $\nu \mapsto \int f\,d\nu$, where $f : E \rightarrow {\mathbb R}$ is continuous and $|f| \le 1+\psi$. \begin{theorem} \label{th:lpsanov} Let $p \in (1,\infty)$, and let $q = p/(p-1)$ denote the conjugate exponent.
Let $\mu \in {\mathcal P}(E)$, and suppose $\int\psi^q\,d\mu < \infty$ for some continuous $\psi : E \rightarrow {\mathbb R}_+$. Then, for every closed set $A \subset {\mathcal P}_\psi(E)$, \[ \limsup_{n\rightarrow\infty}\,n^{1/p}\mu^n(L_n \in A)^{1/q} \le \left(\inf_{\nu \in A}\|d\nu/d\mu\|_{L^p(\mu)}-1\right)^{-1}. \] \end{theorem} \begin{corollary} \label{co:lpcramer} Let $p \in (1,\infty)$ and $q = p/(p-1)$. Let $E$ be a separable Banach space. Let $(X_i)_{i=1}^\infty$ be i.i.d. $E$-valued random variables with ${\mathbb E}\|X_1\|^q < \infty$. Define $\Lambda : E^* \rightarrow {\mathbb R} \cup \{\infty\}$ by\footnote{In the following, $E^*$ denotes the continuous dual of $E$.} \[ \Lambda(x^*) = \inf\left\{m \in {\mathbb R} : {\mathbb E}\left[\left(\left(1+\langle x^*,X_1\rangle - m\right)^+\right)^q\right] \le 1\right\}, \] and define $\Lambda^*(x) = \sup_{x^* \in E^*}\left(\langle x^*,x\rangle - \Lambda(x^*)\right)$ for $x \in E$. Then, for every closed set $A \subset E$, \[ \limsup_{n\rightarrow\infty}\,n^{1/p}\,{\mathbb P}\left(\frac{1}{n}\sum_{i=1}^nX_i \in A\right)^{1/q} \le \left(\inf_{x \in A}\Lambda^*(x)\right)^{-1}. \] \end{corollary} In analogy with the classical Cram\'er theorem, the function $\Lambda$ in Corollary \ref{co:lpcramer} plays the role of the cumulant generating function. In both Theorem \ref{th:lpsanov} and Corollary \ref{co:lpcramer}, notice that as soon as the constant on the right-hand side is finite, we may conclude that the probabilities in question are $O(n^{-q/p}) = O(n^{1-q})$ (using $1/p = 1-1/q$, i.e. $q/p = q-1$). This is consistent with some now-standard results on one-dimensional heavy-tailed sums, for events of the form $A=[r,\infty)$, for $r > 0$. For instance, it is known \cite[Chapter IX, Theorem 28]{petrov2012sums} that if $(X_i)_{i=1}^\infty$ are i.i.d. real-valued random variables with mean zero and ${\mathbb E}|X_1|^q < \infty$, then ${\mathbb P}(X_1 + \cdots + X_n > nr) = o(n^{1-q})$.
For $q > 2$, the well-known inequality of Fuk-Nagaev provides a related non-asymptotic bound; see \cite[Corollary 1.8]{nagaev1979large}, or \cite{einmahl2008characterization} for a Banach space version. If instead a stronger assumption is made on $X_i$, such as regular variation, then there are corresponding lower bounds for certain sets $A$. Refer to the books \cite{borovkov,foss2011introduction} and the survey of Mikosch and Nagaev \cite{mikosch-nagaev} for detailed reviews of such results, as well as the more recent \cite{denisov2008large} and references therein. Indeed, \emph{precise} asymptotics require detailed assumptions on the shape of the tails of $X_i$, and this is especially true in multivariate and infinite-dimensional contexts. A recent line of interesting work extends the theory of regular variation to metric spaces \cite{de2001convergence,hult2005functional,hult2006regular,lindskog2014regularly}, but again the typical assumptions on the underlying law $\mu$ are substantially stronger than mere existence of a finite moment. The main advantage of our results is their broad applicability, requiring only finite moments, but two other strengths are worth emphasizing. First, our bounds apply to arbitrary closed sets $A$, which enables a natural contraction principle (i.e., continuous mapping). Section \ref{se:optimization} illustrates this by using Theorem \ref{th:lpsanov} to find error bounds for Monte Carlo schemes in stochastic optimization, essentially providing a heavy-tailed analog of the results of \cite{kaniovski-king-wets}. Second, while this discussion has focused on literature related to our analog of Cram\'er's upper bound (Corollary \ref{co:lpcramer}), our analog of Sanov's upper bound (Theorem \ref{th:lpsanov}) seems even more novel. No other results are known to the author on empirical measure large deviations in heavy-tailed contexts.
Of course, Sanov's theorem applies without any moment assumptions, but the upper bound provides no information in many heavy-tailed applications, such as in Section \ref{se:optimization}. \subsection{Uniform upper bounds and martingales} \label{se:intro:uniform} Certain classes of dependent sequences admit uniform upper bounds, which we derive from Theorem \ref{th:main-sanov} by working with \begin{align} \alpha(\nu) = \inf_{\mu \in M}H(\nu|\mu), \label{intro:robustentropic} \end{align} for a given convex weakly compact set $M \subset {\mathcal P}(E)$. The conjugate $\rho$, unsurprisingly, is $\rho(f) = \sup_{\mu \in M}\log\int e^f\,d\mu$, and $\rho_n$ turns out to be tractable as well: \begin{align} \rho_n(f) = \sup_{\mu \in M_n}\log\int_{E^n} e^f\,d\mu, \label{intro:robustentropic-rhon} \end{align} where $M_n$ is defined as the set of laws $\mu \in {\mathcal P}(E^n)$ with $\mu_{k-1,k} \in M$ for each $k=1,\ldots,n$, $\mu$-almost surely; in other words, $M_n$ is the set of laws of $E^n$-valued random variables $(X_1,\ldots,X_n)$, when the law of $X_1$ belongs to $M$ and so does the conditional law of $X_k$ given $(X_1,\ldots,X_{k-1})$, almost surely, for each $k=2,\ldots,n$. Theorem \ref{th:main-sanov} becomes \[ \lim_{n\rightarrow\infty}\frac{1}{n}\log\sup_{\mu \in M_n}\int_{E^n}e^{nF\circ L_n}\,d\mu = \sup_{\mu \in M, \ \nu \in {\mathcal P}(E)}(F(\nu) - H(\nu|\mu)), \text{ for } F \in C_b({\mathcal P}(E)). \] From this we derive a uniform large deviation upper bound, for closed sets $A \subset {\mathcal P}(E)$: \begin{align} \limsup_{n\rightarrow\infty}\frac{1}{n}\log\sup_{\mu \in M_n}\mu(L_n \in A) \le -\inf_{\mu \in M, \nu \in A}H(\nu | \mu). \label{intro:uniformLDP} \end{align} With a prudent choice of $M$, this specializes to an asymptotic relative of the Azuma-Hoeffding inequality. 
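To preview how this recovers the Azuma-Hoeffding exponent in one dimension (a formal sketch; we ignore the topological caveat that $\nu \mapsto \int x\,\nu(dx)$ is not weakly continuous on all of ${\mathcal P}({\mathbb R})$): take $M$ to be the (convex, weakly compact) set of laws supported on $[-c,c]$ with mean zero. Hoeffding's lemma gives $\log\int e^{yx}\,\mu(dx) \le c^2y^2/2$ for every $\mu \in M$ and $y \in {\mathbb R}$, so by the Donsker-Varadhan inequality $H(\nu|\mu) \ge y\int x\,\nu(dx) - c^2y^2/2$, and for $r > 0$, \[ \inf_{\mu \in M}\,\inf\left\{H(\nu|\mu) : \int x\,\nu(dx) \ge r\right\} \ge \sup_{y \ge 0}\left(yr - \frac{c^2y^2}{2}\right) = \frac{r^2}{2c^2}. \] The bound \eqref{intro:uniformLDP} then yields the familiar exponent $-r^2/(2c^2)$ for the event $\{\int x\,dL_n \ge r\}$, i.e. $\{\frac{1}{n}(X_1+\cdots+X_n) \ge r\}$.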
The surprising feature here is that we can work with arbitrary closed sets and in multiple dimensions: \begin{theorem} \label{th:azuma} Let $\varphi : {\mathbb R}^d \rightarrow {\mathbb R} \cup \{\infty\}$, and define $\mathcal{S}_{d,\varphi}$ to be the set of ${\mathbb R}^d$-valued martingales $(S_k)_{k=0}^n$, defined on a common but arbitrary probability space, satisfying $S_0=0$ and \[ {\mathbb E}\left[\left.\exp\left(\langle y, S_k-S_{k-1}\rangle\right)\right|S_0,\ldots,S_{k-1}\right] \le e^{\varphi(y)}, \ a.s., \text{ for } k=1,\ldots,n, \ y \in {\mathbb R}^d. \] Suppose the effective domain $\{y \in {\mathbb R}^d : \varphi(y) < \infty\}$ spans ${\mathbb R}^d$. Then, for closed sets $A \subset {\mathbb R}^d$, we have \[ \limsup_{n\rightarrow\infty}\sup_{(S_k)_{k=0}^n \in \mathcal{S}_{d,\varphi}}\frac{1}{n}\log{\mathbb P}\left(S_n/n \in A\right) \le -\inf_{x \in A}\varphi^*(x), \] where $\varphi^*(x) = \sup_{y \in {\mathbb R}^d}(\langle x,y\rangle - \varphi(y))$. \end{theorem} F\"ollmer and Knispel \cite{follmer-knispel} found some results which loosely resemble \eqref{intro:uniformLDP} (see Corollary 5.3 therein), based on an analysis of the same risk measure $\rho$. See also \cite{hu2010cramer,fuqing2012relative} for somewhat related results on large deviations for capacities. \subsection{Laws of large numbers} \label{se:intro:lln} Some specializations of Theorem \ref{th:main-sanov} appear to have nothing to do with large deviations. For example, suppose $M \subset {\mathcal P}(E)$ is convex and compact, and let \[ \alpha(\nu) = \begin{cases} 0 &\text{if } \nu \in M \\ \infty &\text{otherwise}. \end{cases} \] It can be shown that $\rho_n(f) = \sup_{\mu \in M_n}\int_{E^n}f\,d\mu$, where $M_n$ is defined as in Section \ref{se:intro:uniform}, for instance by a direct computation using \eqref{def:intro:rhon-recursive}.
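To see where this formula for $\rho_n$ comes from, here is a heuristic sketch in the case $n=2$, glossing over a measurable selection argument (the recursion \eqref{def:intro:rhon-recursive} makes this precise). Since $\rho(g) = \sup_{\mu \in M}\int_E g\,d\mu$ for this choice of $\alpha$, iterating one coordinate at a time suggests
\[
\rho_2(f) = \sup_{\mu_1 \in M}\int_E\left(\sup_{\mu_2 \in M}\int_E f(x_1,x_2)\,\mu_2(dx_2)\right)\mu_1(dx_1) = \sup_{\mu \in M_2}\int_{E^2}f\,d\mu,
\]
the second equality holding because an element of $M_2$ amounts to a choice of first marginal $\mu_1 \in M$ together with a kernel $x_1 \mapsto \mu_2(x_1,\cdot) \in M$.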
Theorem \ref{th:main-sanov} then becomes \[ \lim_{n\rightarrow\infty}\sup_{\mu \in M_n}\int_{E^n}F \circ L_n\,d\mu = \sup_{\mu \in M}F(\mu), \text{ for } F \in C_b({\mathcal P}(E)). \] When $M =\{\mu\}$ is a singleton, so is $M_n = \{\mu^n\}$, and this simply expresses the weak convergence $\mu^n \circ L_n^{-1} \rightarrow \delta_\mu$. The general case can be interpreted as a robust law of large numbers, where ``robust'' refers to perturbations of the joint law of an i.i.d. sequence. This is closely related to laws of large numbers under nonlinear expectations \cite{peng2007law}. \subsection{Optimal transport costs} \label{se:intro:optimaltransport} Another interesting consequence of Theorem \ref{th:main-sanov} comes from choosing $\alpha$ as an optimal transport cost. Fix $\mu \in {\mathcal P}(E)$ and a lower semicontinuous function $c : E^2 \rightarrow [0,\infty]$, and define \[ \alpha(\nu) = \inf_{\pi \in \Pi(\mu,\nu)}\int c\,d\pi, \] where $\Pi(\mu,\nu)$ is the set of probability measures on $E \times E$ with first marginal $\mu$ and second marginal $\nu$. Under a modest additional assumption on $c$ (stated shortly in Corollary \ref{co:transport1}, proven later in Lemma \ref{le:tightnessfunction}), $\alpha$ satisfies our standing assumptions. The dual $\rho$ can be identified using Kantorovich duality, and $\rho_n$ turns out to be the value of a stochastic optimal control problem. To illustrate this, it is convenient to work with probabilistic notation: Suppose $(X_i)_{i=1}^\infty$ is a sequence of i.i.d. $E$-valued random variables with common law $\mu$, defined on some fixed probability space. For each $n$, let ${\mathcal Y}_n$ denote the set of $E^n$-valued random variables $(Y_1,\ldots,Y_n)$ where $Y_k$ is $(X_1,\ldots,X_k)$-measurable for each $k=1,\ldots,n$. We think of elements of ${\mathcal Y}_n$ as adapted control processes. 
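Before computing $\rho_n$, it may help to keep a standard special case of this $\alpha$ in mind, purely for orientation: if $d$ is a compatible metric on $E$ and $c(x,y) = d(x,y)^p$ for some $p \ge 1$, then
\[
\alpha(\nu) = \inf_{\pi \in \Pi(\mu,\nu)}\int_{E \times E}d(x,y)^p\,\pi(dx,dy) = W_p^p(\mu,\nu)
\]
is the $p$-th power of the Wasserstein distance between $\mu$ and $\nu$ (possibly infinite).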
For each $n \ge 1$ and each $f \in B(E^n)$, we show in Proposition \ref{pr:rhon-optimaltransport} that \begin{align} \rho_n(f) = \sup_{(Y_1,\ldots,Y_n) \in {\mathcal Y}_n}{\mathbb E}\left[f(Y_1,\ldots,Y_n) - \sum_{i=1}^nc(X_i,Y_i)\right]. \label{def:optimaltransport-expression} \end{align} The expression \eqref{def:optimaltransport-expression} yields the following corollary of Theorem \ref{th:main-sanov}: \begin{corollary} \label{co:transport1} Suppose that for each compact set $K \subset E$, the function $h_K(y) := \inf_{x \in K}c(x,y)$ has pre-compact sub-level sets.\footnote{That is, the closure of $\{y \in E : h_K(y) \le m\}$ is compact for each $m \ge 0$. This assumption holds, for example, if $E$ is a subset of Euclidean space and there exists $y_0 \in E$ such that $c(x,y) \rightarrow \infty$ as $d(y,y_0) \rightarrow \infty$, uniformly for $x$ in compacts.} For each $F \in C_b({\mathcal P}(E))$, we have \begin{align} \lim_{n\rightarrow\infty}\sup_{(Y_k)_{k=1}^n \in {\mathcal Y}_n}&{\mathbb E}\left[F(L_n(Y_1,\ldots,Y_n)) - \frac{1}{n}\sum_{i=1}^nc(X_i,Y_i)\right] = \sup_{\nu \in {\mathcal P}(E)}\left(F(\nu) - \alpha(\nu)\right) \nonumber \\ &= \sup_{\pi \in \Pi(\mu)}\left(F(\pi(E \times \cdot)) - \int_{E \times E}c\,d\pi\right), \label{def:optimaltransport-mainlimit} \end{align} where $\Pi(\mu) = \cup_{\nu \in {\mathcal P}(E)}\Pi(\mu,\nu)$. \end{corollary} This can be seen as a long-time limit of the optimal value of the control problems. However, the renormalization in $n$ is a bit peculiar in that it enters inside of the terminal cost $F$, and there does not seem to be a direct connection with ergodic control. A direct proof of \eqref{def:optimaltransport-mainlimit} is possible but seems to be no simpler and potentially narrower in scope. The limiting object of Corollary \ref{co:transport1} encapsulates a wide variety of interesting variational problems involving optimal transport costs. 
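To get a feel for the limiting object in \eqref{def:optimaltransport-mainlimit}, consider the illustrative case of a linear $F$, say $F(\nu) = \int_E g\,d\nu$ for some $g \in C_b(E)$. Optimizing over the second-coordinate kernel of $\pi \in \Pi(\mu)$, and again glossing over measurable selection, the right-hand side becomes
\[
\sup_{\pi \in \Pi(\mu)}\int_{E \times E}\left(g(y) - c(x,y)\right)\pi(dx,dy) = \int_E\sup_{y \in E}\left(g(y) - c(x,y)\right)\mu(dx),
\]
the inner supremum being a $c$-transform of $g$, up to sign conventions.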
Variational problems of this form are surely more widespread than the author is aware, but two notable recent examples can be found in the study of Cournot-Nash equilibria in large-population games \cite{blanchet2015optimal} and in the theory of Wasserstein barycenters \cite{agueh2011barycenters}. \subsection{Alternative choices of $\alpha$} \label{se:intro:alternatives} There are many other natural choices of $\alpha$ for which the implications of Theorem \ref{th:main-sanov} remain unclear. For example, consider the $\varphi$-divergence \[ \alpha(\nu) = \int_E\varphi(d\nu/d\mu)\,d\mu, \text{ for } \nu \ll \mu, \quad\quad \alpha(\nu)=\infty \text{ otherwise}, \] where $\mu \in {\mathcal P}(E)$ and $\varphi : {\mathbb R}_+ \rightarrow {\mathbb R}$ is convex and satisfies $\varphi(x)/x \rightarrow \infty$ as $x \rightarrow \infty$. This $\alpha$ has weakly compact (actually $\sigma({\mathcal P}(E),B(E))$-compact) sub-level sets, according to \cite[Lemma 6.2.16]{dembozeitouni}, and it is clearly convex. The dual, known in the risk literature as the \emph{optimized certainty equivalent,} was computed by Ben-Tal and Teboulle \cite{bental-teboulle-1986,bental-teboulle-2007} to be \[ \rho(f) = \inf_{m \in {\mathbb R}}\left(\int_E\varphi^*\left(f(x)-m\right)\mu(dx) + m\right), \] where $\varphi^*(x) = \sup_{y \in {\mathbb R}}(xy - \varphi(y))$ is the convex conjugate. Unfortunately, we did not find any good expressions or estimates for $\rho_n$ or $\alpha_n$, so the interpretation of the main Theorem \ref{th:main-sanov} eludes us in this case. A related choice is the so-called \emph{shortfall risk measure} introduced by F\"ollmer and Schied \cite{follmer-schied-convex}: \begin{align} \rho(f) = \inf\left\{m \in {\mathbb R} : \int_E\ell(f(x)-m)\mu(dx) \le 1\right\}. \label{intro:shortfall} \end{align} This choice of $\rho$ and the corresponding (tractable!) $\alpha$ are discussed briefly in Section \ref{se:shortfall}. 
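Before turning to specific choices of $\ell$, we record a consistency check on the optimized certainty equivalent formula above. Taking the illustrative choice $\varphi(x) = x\log x$, for which $\alpha(\nu) = H(\nu|\mu)$ and $\varphi^*(y) = e^{y-1}$, we find
\[
\rho(f) = \inf_{m \in {\mathbb R}}\left(\int_E e^{f(x)-m-1}\,\mu(dx) + m\right) = \log\int_E e^f\,d\mu,
\]
the infimum being attained at $m = \log\int_E e^f\,d\mu - 1$, so the classical entropic case is recovered.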
The choice of $\ell(x) = [(1+x)^+]^{p/(p-1)}$ corresponds to \eqref{intro:lpentropy}, and we make extensive use of this in Section \ref{se:nonexpLDP}, as was discussed in Section \ref{se:intro:nonexpLDP}. The choice of $\ell(x) = e^x$ recovers the classical case $\rho(f) = \log\int_Ee^f\,d\mu$. Aside from these two examples, for general $\ell$, we found no useful expressions or estimates for $\rho_n$ or $\alpha_n$. In connection with tails of random variables, shortfall risk measures have an intuitive appeal stemming from the following simple analog of Chernoff's bound, observed in \cite[Proposition 3.3]{lacker-liquidity}: If $\gamma(\lambda) = \rho(\lambda f)$ for all $\lambda \ge 0$, where $f$ is some given measurable function, then $\mu(f > t) \le 1/\ell(\gamma^*(t))$ for all $t \ge 0$, where $\gamma^*(t) = \sup_{\lambda \ge 0}(\lambda t - \gamma(\lambda))$. It is worth pointing out the natural but ultimately fruitless idea of working with $\rho(f) = \varphi^{-1}(\int_E\varphi(f)\,d\mu)$, where $\varphi$ is increasing. Such functionals appear to have been first studied by Hardy, Littlewood, and P\'olya \cite[Chapter 3]{hardy-littlewood-polya}, who provided necessary and sufficient conditions for $\rho$ to be convex (conditions rediscovered in \cite{bental-teboulle-1986}). Using the formula \eqref{def:intro:rhon-recursive} to compute $\rho_n$, this choice would lead to the exceptionally pleasant formula $\rho_n(f) = \varphi^{-1}(\int_{E^n}\varphi(f)\,d\mu^n)$, which we observed already in the classical case $\varphi(x)=e^x$. Unfortunately, however, such a $\rho$ cannot come from a functional $\alpha$ on ${\mathcal P}(E)$, in the sense that \eqref{intro:duality} cannot hold unless $\varphi$ is affine or exponential. Another way of seeing this is that the convex conjugate of $\rho$ (with respect to the dual pairing of $C_b(E)$ with the space of bounded signed measures) fails to be infinite outside of the set ${\mathcal P}(E)$.
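One concrete symptom of this obstruction is the failure of the translation property $\rho(f+c)=\rho(f)+c$. The following is a minimal numerical sketch, assuming nothing beyond a hypothetical two-point measure $\mu$ and the illustrative choices $\varphi(x)=x^2$ (applied to nonnegative functions) and $\varphi(x)=e^x$:

```python
import math

# Hypothetical two-point measure mu = 0.5*delta_{x1} + 0.5*delta_{x2},
# represented by the values a function takes at the two atoms.
weights = [0.5, 0.5]
f = [0.0, 2.0]   # a non-constant nonnegative function on the two atoms
c = 1.0          # a cash amount to add

def rho(phi, phi_inv, g):
    """Quasi-arithmetic mean rho(g) = phi^{-1}( integral of phi(g) dmu )."""
    return phi_inv(sum(w * phi(x) for w, x in zip(weights, g)))

# phi(x) = x^2 (increasing on [0, infinity)): translation property fails.
lhs_sq = rho(lambda x: x * x, math.sqrt, [x + c for x in f])
rhs_sq = rho(lambda x: x * x, math.sqrt, f) + c
print(lhs_sq, rhs_sq)  # not equal: rho(f + c) < rho(f) + c here

# phi(x) = e^x (the entropic case): translation property holds exactly.
lhs_exp = rho(math.exp, math.log, [x + c for x in f])
lhs_rhs_gap = abs(lhs_exp - (rho(math.exp, math.log, f) + c))
print(lhs_rhs_gap)  # essentially zero
```

The quadratic case gives $\rho(f+c) < \rho(f)+c$ by Minkowski's inequality (strictly, since $f$ is non-constant), while the exponential case is exactly additive, in line with the affine-or-exponential dichotomy above.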
The problem, as is known in the risk measure literature, is that the additivity property $\rho(f+c)=\rho(f)+c$ for all $c \in {\mathbb R}$ and $f \in B(E)$ fails unless $\varphi$ is affine or exponential (c.f. \cite[Proposition 2.46]{follmer-schied-book}). The consequences of Theorem \ref{th:main-sanov} remain unexplored for several other potentially interesting choices of $\alpha$ with well understood duals: To name just a few, we mention the Schr\"odinger problem surveyed in \cite{leonard2013survey} and related functionals arising from stochastic optimal control problems \cite{mikami2006duality}, martingale optimal transport costs \cite{beiglboeck-juillet}, and functionals related to Orlicz norms studied in \cite{cheridito-li-orlicz}. \subsection{Connection to superhedging} Again, the challenge in working with Theorem \ref{th:main-sanov} is in computing or estimating $\rho_n$ or $\alpha_n$. With this in mind, we present an alternative expression for $\rho_n$ as the value of a particular type of optimal control problem, more specifically a \emph{superhedging} problem (see, e.g., \cite[Chapter 7]{follmer-schied-book}). To a given dual pair $(\alpha,\rho)$ we may associate the \emph{acceptance set} \begin{align} {\mathcal A} = \left\{f \in B(E) : \rho(f) \le 0\right\} = \left\{f \in B(E) : \int_Ef\,d\nu \le \alpha(\nu), \ \forall \nu \in {\mathcal P}(E)\right\}. \label{def:acceptanceset} \end{align} As is well known in the risk measure literature, we may express $\rho$ in terms of ${\mathcal A}$ by \begin{align} \rho(f) = \inf\{c \in {\mathbb R} : f - c \in {\mathcal A}\}. \label{intro:rho-acceptance} \end{align} Indeed, this follows easily from the fact that $\rho(f-c)=\rho(f)-c$ for constants $c \in {\mathbb R}$. In fact, $\alpha$ can also be reconstructed from ${\mathcal A}$, and this provides a third possible entry point to the $(\alpha,\rho)$ duality. To elaborate on this would take us too far afield, but see \cite{follmer-schied-book} for details. 
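For a concrete instance before proceeding: in the classical entropic case $\rho(f) = \log\int_E e^f\,d\mu$, the acceptance set \eqref{def:acceptanceset} is
\[
{\mathcal A} = \left\{f \in B(E) : \int_E e^f\,d\mu \le 1\right\},
\]
and \eqref{intro:rho-acceptance} reads $\rho(f) = \inf\{c \in {\mathbb R} : \int_E e^{f-c}\,d\mu \le 1\} = \log\int_E e^f\,d\mu$, as it should.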
Now, let us compute $\rho_n$ in terms of the acceptance set. Define ${\mathcal A}_n$ to be the set of tuples $(Y_1,\ldots,Y_n)$, where $Y_k$ is a measurable function from $E^k$ to ${\mathbb R}$ satisfying \begin{align} Y_k(x_1,\ldots,x_{k-1},\cdot) &\in {\mathcal A}, \text{ for all } (x_1,\ldots,x_{k-1}) \in E^{k-1}, \ k=1,\ldots,n. \label{intro:acceptanceset-admissible} \end{align} \begin{theorem} \label{th:superhedging} For $f \in B(E^n)$, \[ \rho_n(f) = \inf\left\{y \in {\mathbb R} : \exists (Y_k)_{k=1}^n \in {\mathcal A}_n \text{ s.t. } f \le y + \sum_{k=1}^nY_k\right\}, \] where the inequality is understood pointwise. Moreover, the infimum is attained. \end{theorem} To interpret this as a control problem, consider the partial sum process $S_k = y + \sum_{i=1}^kY_i$ as a state process, which we must ``steer'' to be larger than $f$ pointwise at the final time $n$. The control $Y_k$ at each time $k$ must be admissible in the sense of \eqref{intro:acceptanceset-admissible}, and notice that the dependence of $Y_k$ on only $(x_1,\ldots,x_k)$ is an expression of adaptedness or non-anticipativity. We seek the minimal starting point $y$ for which this steering can be done. The iterative form of $\rho_n$ in \eqref{def:intro:rhon-recursive} (more precisely stated in Proposition \ref{pr:rhon-iterative}) can be seen as an expression of the dynamic programming principle for the control problem of Theorem \ref{th:superhedging}. For a concrete example, if $\rho$ is the shortfall risk measure \eqref{intro:shortfall}, and if $(X_k)_{k=1}^n$ denote i.i.d.
$E$-valued random variables with common law $\mu$, then Theorem \ref{th:superhedging} expresses $\rho_n(f)$ as the infimum over all $y \in {\mathbb R}$ for which there exists an $X$-adapted process\footnote{We say $(Y_k)_{k=1}^n$ is $X$-adapted if $Y_k$ is $(X_1,\ldots,X_k)$-measurable for each $k=1,\ldots,n$.} $(Y_k)_{k=1}^n$ satisfying $f(X_1,\ldots,X_n) \le y + \sum_{k=1}^nY_k$ and ${\mathbb E}[\ell(Y_k) | X_1,\ldots,X_{k-1}] \le 1$ a.s., for each $k=1,\ldots,n$. \subsection{Interpreting Theorem \ref{th:main-sanov} in terms of risk measures} \label{se:intro:riskmeasures} It is straightforward to rewrite Theorem \ref{th:main-sanov} in a language more in line with the literature on convex risk measures, for which we again defer to \cite{follmer-schied-book} for background. Let $(\Omega,{\mathcal F})$ be a measurable space, and suppose $\varphi$ is a convex risk measure on the set $B(\Omega,{\mathcal F})$ of bounded measurable functions. That is, $\varphi : B(\Omega,{\mathcal F}) \rightarrow {\mathbb R}$ is convex, $\varphi(f + c) = \varphi(f)+c$ for all $f \in B(\Omega,{\mathcal F})$ and $c \in {\mathbb R}$, and $\varphi(f) \ge \varphi(g)$ whenever $f \ge g$ pointwise. Suppose we are given a sequence of $E$-valued random variables $(X_i)_{i=1}^\infty$, i.e., measurable maps $X_i : \Omega \rightarrow E$. Assume $X_i$ have the following independence property, identical to Peng's notion of independence under nonlinear expectations \cite{peng2010nonlinear}: for $n \ge 1$ and $f \in B(E^n)$ \begin{align} \varphi(f(X_1,\ldots,X_n)) = \varphi\left[\varphi(f(X_1,\ldots,X_{n-1},x))|_{x = X_n}\right]. \label{def:peng-independence} \end{align} In particular, $\varphi(f(X_i))=\varphi(f(X_1))$ for all $i$. Define $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$ by \[ \alpha(\nu) = \sup_{f \in B(E)}\left(\int_Ef\,d\nu - \varphi(f(X_1))\right). 
\] Additional assumptions on $\varphi$ (see, e.g., Theorem \ref{th:rho-to-alpha} below) can ensure that $\alpha$ has weakly compact sub-level sets, so that Theorem \ref{th:main-sanov} applies. Then, for $F \in C_b({\mathcal P}(E))$, \begin{align} \lim_{n\rightarrow\infty}\frac{1}{n}\varphi\left(nF(L_n(X_1,\ldots,X_n))\right) = \sup_{\nu \in {\mathcal P}(E)}(F(\nu)-\alpha(\nu)). \label{intro:def:riskmeasure-interpretation} \end{align} Indeed, in our previous notation, $\rho_n(f)=\varphi(f(X_1,\ldots,X_n))$ for $f \in B(E^n)$. In the risk measure literature, one thinks of $\varphi(f)$ as the risk associated to an uncertain financial loss $f \in B(\Omega,{\mathcal F})$. With this in mind, and with $Z_n=F(L_n(X_1,\ldots,X_n))$, the quantity $\frac{1}{n}\varphi(nZ_n)$ appearing in \eqref{intro:def:riskmeasure-interpretation} is the risk-per-unit of an investment in $n$ units of $Z_n$. One might interpret $Z_n$ as capturing the \emph{composition} of the investment, while the multiplicative factor $n$ represents the \emph{size} of the investment. As $n$ increases, say to $n+1$, the investment is ``rebalanced'' in the sense that one additional independent component, $X_{n+1}$, is incorporated and the size of the total investment is increased by one unit. The limit in \eqref{intro:def:riskmeasure-interpretation} is then an asymptotic evaluation of the risk-per-unit of this rebalancing scheme. \subsection{Extensions} Broadly speaking, the book of Dupuis and Ellis \cite{dupuis-ellis} and numerous subsequent works illustrate how the classical convex duality between relative entropy and cumulant-generating functions can serve as a foundation from which to derive an impressive range of large deviation principles. Similarly, each alternative dual pair $(\alpha,\rho)$ should provide an alternative foundation for a potentially equally wide range of limit theorems.
From this perspective, our work raises far more questions than it answers by restricting attention to analogs of the two large deviation principles of Sanov and Cram\'er. It is likely, for instance, that an analog of Mogulskii's theorem (see \cite{mogul1977large} or \cite[Section 3]{dupuis-ellis}) holds in our context. Moreover, our framework is not as restricted to i.i.d. samples as it may appear. While the definition of $\alpha_n$ reflects our focus on i.i.d. samples, we might accommodate Markov chains by redefining $\alpha_n$. For instance, we may try \[ \alpha_n(\nu) = \beta(\nu_{0,1},\mu) + \int_{E^n}\sum_{k=2}^n\beta(\nu_{k-1,k}(x_1,\ldots,x_{k-1}),\pi(x_{k-1},\cdot))\nu(dx_1,\ldots,dx_n), \] where $\mu$ is an initial law of a Markov chain, $\pi$ is its transition kernel, and $\beta : {\mathcal P}(E) \times {\mathcal P}(E) \rightarrow (-\infty,\infty]$ plays the role of $\alpha$. This again simplifies in the classical case $\beta(\nu,\eta) = H(\nu | \eta)$, leading to $\alpha_n(\cdot) = H(\cdot | \mu_n)$, where $\mu_n$ is the law of the path $(X_1,\ldots,X_n)$ of the Markov chain described above. These speculations are meant simply to convey the versatility of our framework but are pursued no further, with the paper instead focusing on exploring the implications of various choices of $\alpha$ in our analog of Sanov's theorem. \subsection{Outline of the paper} The remainder of the paper is organized as follows. Section \ref{se:riskmeasures} begins by clarifying the $(\alpha,\rho)$ duality, explaining some useful properties of $\rho$ and $\rho_n$ and extending their definitions to unbounded functions. Section \ref{se:sanovproof} is devoted to the statement and proof of Theorem \ref{th:main-sanov-extended}, which contains Theorem \ref{th:main-sanov} as a special case but is extended to stronger topologies and unbounded functions $F$. See also Section \ref{se:contraction} for abstract analogs of the contraction principle and Cram\'er's theorem.
These extensions are put to use in Section \ref{se:nonexpLDP}, which proves and elaborates on the non-exponential forms of Sanov's and Cram\'er's theorems discussed in Section \ref{se:intro:nonexpLDP}. Section \ref{se:optimization} applies these results to obtain error estimates for a common Monte Carlo approach to stochastic optimization. Sections \ref{se:uniform} and \ref{se:optimaltransport} respectively elaborate on the examples of Sections \ref{se:intro:uniform} and \ref{se:intro:optimaltransport}. Section \ref{se:superhedging} proves two different representations of $\rho_n$, namely those of \eqref{def:intro:rhon-recursive} and Theorem \ref{th:superhedging}. The short Appendix \ref{se:superadditivity} describes a natural but largely unsuccessful attempt to derive tractable large deviation upper bounds from Theorem \ref{th:main-sanov} by working with a class of functionals $\alpha$ of not one but two measures, such as $\varphi$-divergences. Finally, two minor technical results are relegated to Appendix \ref{se:technical-lemma}. \section{Convex duality preliminaries} \label{se:riskmeasures} This section outlines the key features of the $(\alpha,\rho)$ duality. The first three theorems, stated in this subsection, are borrowed from the literature on convex risk measures, for which an excellent reference is the book of F\"ollmer and Schied \cite{follmer-schied-book}. While we will make use of some of the properties listed in Theorem \ref{th:alpha-to-rho}, the goal of the first two theorems is more to illustrate how one can make $\rho$ the starting point rather than $\alpha$. In particular, Theorem \ref{th:rho-to-alpha} will not be needed in the sequel. For proofs of Theorems \ref{th:alpha-to-rho} and \ref{th:rho-to-alpha}, refer to Bartl \cite[Theorem 2.4]{bartl2016pointwise}. \begin{theorem} \label{th:alpha-to-rho} Suppose $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$ is convex and has weakly compact sub-level sets.
Define $\rho : B(E) \rightarrow {\mathbb R}$ as in \eqref{intro:duality}. Then the following hold: \begin{enumerate} \item[(R1)] If $f \ge g$ pointwise then $\rho(f) \ge \rho(g)$. \item[(R2)] If $f \in B(E)$ and $c \in {\mathbb R}$, then $\rho(f+c)=\rho(f)+c$. \item[(R3)] If $f,f_n \in B(E)$ with $f_n \uparrow f$ pointwise, then $\rho(f_n) \uparrow \rho(f)$. \item[(R4)] If $f_n \in C_b(E)$ and $f \in B(E)$ with $f_n \downarrow f$ pointwise, then $\rho(f_n) \downarrow \rho(f)$. \end{enumerate} Moreover, for $\nu \in {\mathcal P}(E)$ we have \begin{align} \alpha(\nu) = \sup_{f \in C_b(E)}\left(\int_Ef\,d\nu - \rho(f)\right). \label{def:rho-to-alpha} \end{align} \end{theorem} \begin{theorem} \label{th:rho-to-alpha} Suppose $\rho : B(E) \rightarrow {\mathbb R}$ is convex and satisfies properties (R1--4) of Theorem \ref{th:alpha-to-rho}. Define $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$ by \eqref{def:rho-to-alpha}. Then $\alpha$ is convex and has weakly compact sub-level sets. Moreover, the identity \eqref{intro:duality} holds. \end{theorem} We state also a useful theorem of F\"ollmer and Schied \cite{follmer-schied-book} which allows us to verify tightness of the sub-level sets of $\alpha$ by checking a property of $\rho$. \begin{theorem}[Proposition 4.30 of \cite{follmer-schied-book}] \label{th:tight} Suppose a functional $\rho : B(E) \rightarrow {\mathbb R}$ admits the representation \[ \rho(f) = \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right), \text{ for } f \in C_b(E), \] for some function $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$. Suppose also that there is a sequence $(K_n)$ of compact subsets of $E$ such that \[ \lim_{n\rightarrow\infty}\rho(\lambda 1_{K_n}) = \rho(\lambda), \ \forall \lambda \ge 1. \] Then $\alpha$ has tight sub-level sets. 
\end{theorem} The goal of the rest of the section is to extend the domain of $\rho$ to unbounded functions and study the compactness of the sub-level sets of $\alpha$ with respect to stronger topologies. From now on, we work at all times with the standing assumptions on $\alpha$ described in the introduction: \begin{assumption} The function $\alpha : {\mathcal P}(E) \rightarrow (-\infty,\infty]$ is convex, has weakly compact sub-level sets, and is not identically equal to $\infty$. Lastly, $\rho$ is defined as in \eqref{intro:duality}. \end{assumption} \subsection{Extending $\rho$ and $\rho_n$ to unbounded functions} \label{se:extension-unbounded} This section extends the domain of $\rho$ to unbounded functions. Let $\overline{{\mathbb R}} = {\mathbb R} \cup \{-\infty,\infty\}$. We adopt the convention that $\infty - \infty := -\infty$, although this will have few consequences aside from streamlined definitions. In particular, if $\nu \in {\mathcal P}(E^n)$ and a measurable function $f : E^n \rightarrow \overline{{\mathbb R}}$ satisfies $\int f^-\,d\nu = \int f^+\,d\nu = \infty$, we define $\int f\,d\nu = -\infty$. \begin{definition} \label{def:rho-unbounded} For $n \ge 1$ and measurable $f : E^n \rightarrow \overline{{\mathbb R}}$, define \[ \rho_n(f) = \sup_{\nu \in {\mathcal P}(E^n)}\left(\int_{E^n}f\,d\nu - \alpha_n(\nu)\right). \] As usual, abbreviate $\rho \equiv \rho_1$. \end{definition} It is worth emphasizing that while $\rho(f)$ is finite for bounded $f$, it can be either $+\infty$ or $-\infty$ when $f$ is unbounded. The following simple lemma will aid in some computations in Section \ref{se:nonexpLDP}. \begin{lemma} \label{le:extended-duality} If $f : E \rightarrow {\mathbb R} \cup \{\infty\}$ is measurable and bounded from below, then \[ \rho(f) = \lim_{m \rightarrow \infty}\rho(f \wedge m) = \sup_{m > 0}\rho(f \wedge m). \] \end{lemma} \begin{proof} Define $f_m = f \wedge m$.
Monotone convergence yields \[ \sup_{m > 0}\rho(f_m) = \sup_{m > 0}\sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef_m\,d\nu - \alpha(\nu)\right) = \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right) = \rho(f). \] One checks easily that this is consistent with the convention $\infty-\infty=-\infty$. \end{proof} \section{An extension of Theorem \ref{th:main-sanov}} \label{se:sanovproof} In this section we state and prove a useful generalization of Theorem \ref{th:main-sanov} for stronger topologies and unbounded functions, taking advantage of the results of the previous section. At all times in this section, the standing assumptions on $(\alpha,\rho)$ (stated just before Section \ref{se:extension-unbounded}) are in force. We prepare by defining a well known class of topologies on subsets of ${\mathcal P}(E)$. Given a continuous function $\psi : E \rightarrow {\mathbb R}_+ := [0,\infty)$, define \[ {\mathcal P}_\psi(E) = \left\{\mu \in {\mathcal P}(E) : \int_E\psi\,d\mu < \infty\right\}. \] Endow ${\mathcal P}_\psi(E)$ with the (Polish) topology generated by the maps $\nu \mapsto \int_Ef\,d\nu$, where $f : E \rightarrow {\mathbb R}$ is continuous and $|f| \le 1+\psi$; we call this the \emph{$\psi$-weak topology}. A useful fact about this topology is that a set $K \subset {\mathcal P}_\psi(E)$ is pre-compact if and only if for every $\epsilon > 0$ there exists a compact set $C \subset E$ such that \[ \sup_{\mu \in K}\int_{C^c}\psi\,d\mu \le \epsilon. \] This is easily proven directly, or refer to \cite[Corollary A.47]{follmer-schied-book}. In the following theorem, the extension of the upper bound to the $\psi$-weak topology requires the assumption that the sub-level sets of $\alpha$ are pre-compact in ${\mathcal P}_\psi(E)$. This rather opaque assumption is explored in more detail in the subsequent Section \ref{se:strongcramer}. \begin{theorem} \label{th:main-sanov-extended} Let $\psi : E \rightarrow {\mathbb R}_+$ be continuous.
If $F : {\mathcal P}_\psi(E) \rightarrow {\mathbb R} \cup \{\infty\}$ is lower semicontinuous (with respect to the $\psi$-weak topology) and bounded from below, then \[ \liminf_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ L_n) \ge \sup_{\nu \in {\mathcal P}_\psi(E)}(F(\nu) - \alpha(\nu)). \] Suppose also that the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. If $F : {\mathcal P}_\psi(E) \rightarrow {\mathbb R} \cup \{-\infty\}$ is upper semicontinuous and bounded from above, then \[ \limsup_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ L_n) \le \sup_{\nu \in {\mathcal P}_\psi(E)}(F(\nu) - \alpha(\nu)). \] \end{theorem} \begin{proof} {\ } \textbf{Lower bound:} Let us prove first the lower bound. It is immediate from the definition that $n^{-1}\alpha_n(\nu^n) = \alpha(\nu)$ for each $\nu \in {\mathcal P}(E)$, where $\nu^n$ denotes the $n$-fold product measure. Thus \begin{align} \frac{1}{n}\rho_n(nF(L_n)) &= \sup_{\nu \in {\mathcal P}(E^n)}\left\{\int_{E^n} F \circ L_n \,d\nu - \frac{1}{n}\alpha_n(\nu)\right\} \label{pf:sanov1} \\ &\ge \sup_{\nu \in {\mathcal P}(E)}\left\{\int_{E^n} F \circ L_n \,d\nu^n - \frac{1}{n}\alpha_n(\nu^n)\right\} \nonumber \\ &= \sup_{\nu \in {\mathcal P}(E)}\left\{\int_{E^n} F \circ L_n \,d\nu^n - \alpha(\nu)\right\}. \nonumber \end{align} For $\nu \in {\mathcal P}(E)$, the law of large numbers implies $\nu^n \circ L_n^{-1} \rightarrow \delta_\nu$ weakly, i.e. in ${\mathcal P}({\mathcal P}(E))$. For $\nu \in {\mathcal P}_{\psi}(E)$, the convergence takes place in ${\mathcal P}({\mathcal P}_{\psi}(E))$. Lower semicontinuity of $F$ on ${\mathcal P}_\psi(E)$ then implies, for each $\nu \in {\mathcal P}_\psi(E)$, \begin{align*} \liminf_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF(L_n)) &\ge \liminf_{n\rightarrow\infty}\int_{E^n} F \circ L_n \,d\nu^n - \alpha(\nu) \\ &\ge F(\nu) - \alpha(\nu). \end{align*} Take the supremum over $\nu$ to complete the proof of the lower bound. 
It is worth noting that if $d$ is a compatible metric on $E$ and $\psi(x)=d^p(x,x_0)$ for some fixed $x_0 \in E$ and $p \ge 1$, then the $\psi$-weak topology is nothing but the $p$-Wasserstein topology. \textbf{Upper bound, $F$ bounded:} The upper bound is more involved. First we prove it in four steps under the assumption that $F$ is bounded. \textbf{Step 1:} First we simplify the expression somewhat. For each $\nu \in {\mathcal P}(E^n)$ the definition of $\alpha_n$ and convexity of $\alpha$ imply \begin{align*} \frac{1}{n}\alpha_n(\nu) &= \frac{1}{n}\sum_{k=1}^n\int_{E^n}\alpha(\nu_{k-1,k}(x_1,\ldots,x_{k-1}) )\nu(dx_1,\ldots,dx_n) \\ &\ge \int_{E^n}\alpha\left(\frac{1}{n}\sum_{k=1}^n\nu_{k-1,k}(x_1,\ldots,x_{k-1})\right)\nu(dx_1,\ldots,dx_n). \end{align*} Combine this with \eqref{pf:sanov1} to get \begin{align} \frac{1}{n}\rho_n(nF(L_n)) &\le \sup_{\nu \in {\mathcal P}(E^n)}\int_{E^n}\left[F(L_n) - \alpha\left(\frac{1}{n}\sum_{k=1}^n\nu_{k-1,k}\right)\right]\,d\nu. \label{pf:sanov1.1} \end{align} Now choose arbitrarily some $\mu_f$ such that $\alpha(\mu_f) < \infty$. The choice $\nu = \mu_f^n$ and boundedness of $F$ show that the supremum in \eqref{pf:sanov1.1} is bounded below by $-\|F\|_{\infty} - \alpha(\mu_f)$, where $\|F\|_\infty := \sup_{\nu \in {\mathcal P}_\psi(E)}|F(\nu)|$. For each $n$, choose $\nu^{(n)} \in {\mathcal P}(E^n)$ attaining the supremum in \eqref{pf:sanov1.1} to within $1/n$. Then \begin{align} \int_{E^n}\alpha\left(\frac{1}{n}\sum_{k=1}^n\nu^{(n)}_{k-1,k}\right)\,d\nu^{(n)} \le 2\|F\|_{\infty} + \alpha(\mu_f) + \frac{1}{n}. \label{pf:sanov1.1.01} \end{align} It is convenient to switch now to a probabilistic notation: On some sufficiently rich probability space, find an $E^n$-valued random variable $(Y^n_1,\ldots,Y^n_n)$ with law $\nu^{(n)}$. Define the random measures \[ S_n := \frac{1}{n}\sum_{k=1}^n\nu^{(n)}_{k-1,k}(Y^n_1,\ldots,Y^n_{k-1}), \quad\quad \widetilde{S}_n := \frac{1}{n}\sum_{k=1}^n\delta_{Y^n_k}.
\] Use \eqref{pf:sanov1.1} and unwrap the definitions to find \begin{align} \frac{1}{n}\rho_n(nF(L_n)) &\le {\mathbb E}[F(\widetilde{S}_n) - \alpha(S_n)] + 1/n. \label{pf:sanov1.1.1} \end{align} Moreover, \eqref{pf:sanov1.1.01} implies \begin{align} \sup_n{\mathbb E}[\alpha(S_n)] \le 2\|F\|_\infty + \alpha(\mu_f) + 1 < \infty. \label{pf:sanov1.2.02} \end{align} \textbf{Step 2:} We next show that the sequence $(S_n,\widetilde{S}_n)$ is tight, viewed as ${\mathcal P}_\psi(E)\times{\mathcal P}_\psi(E)$-valued random variables. Here we use the assumption that the sub-level sets of $\alpha$ are $\psi$-weakly compact subsets of ${\mathcal P}_\psi(E)$. It then follows from \eqref{pf:sanov1.2.02} that $(S_n)$ is tight (see, e.g., \cite[Theorem A.3.17]{dupuis-ellis}). To see that the pair $(S_n,\widetilde{S}_n)$ is tight, it remains to check that $(\widetilde{S}_n)_n$ is tight. To this end, we first notice that $S_n$ and $\widetilde{S}_n$ have the same mean measure for each $n$, in the sense that for every $f \in B(E)$ we have \begin{align} {\mathbb E}\left[\int_Ef\,dS_n\right] &= {\mathbb E}\left[\frac{1}{n}\sum_{k=1}^n{\mathbb E}\left[\left. f(Y^n_k)\right|Y^n_1,\ldots,Y^n_{k-1}\right]\right] = {\mathbb E}\left[\frac{1}{n}\sum_{k=1}^nf(Y^n_k)\right] = {\mathbb E}\left[\int_Ef\,d\widetilde{S}_n\right]. \label{pf:meanmeasures} \end{align} To prove $(\widetilde{S}_n)$ is tight, it suffices (by Prohorov's theorem) to show that for all $\epsilon > 0$ there exists a $\psi$-weakly compact set $K \subset {\mathcal P}_\psi(E)$ such that $P(\widetilde{S}_n \notin K) \le \epsilon$ for all $n$. We will look for $K$ of the form $K = \cap_{k=1}^\infty\{\nu : \int_{C_k^c}\psi\,d\nu \le 1/k\}$, where $(C_k)_{k=1}^\infty$ is a sequence of compact subsets of $E$ to be specified later; indeed, sets $K$ of this form are pre-compact in ${\mathcal P}_\psi(E)$ according to a form of Prohorov's theorem discussed at the beginning of this section (see also \cite[Corollary A.47]{follmer-schied-book}).
For such a set $K$, use Markov's inequality and \eqref{pf:meanmeasures} to compute \begin{align} P\left(\widetilde{S}_n \notin K \right) &\le \sum_{k=1}^\infty P\left(\int_{C_k^c}\psi\,d\widetilde{S}_n> 1/k\right) \le \sum_{k=1}^\infty k{\mathbb E}\int_{C_k^c}\psi\,d\widetilde{S}_n = \sum_{k=1}^\infty k{\mathbb E}\int_{C_k^c}\psi\,dS_n. \label{pf:cramer1} \end{align} By a form of Jensen's inequality (see Lemma \ref{le:jensen}), \[ \sup_n\alpha({\mathbb E} S_n) \le \sup_n{\mathbb E}[\alpha(S_n)] < \infty, \] where ${\mathbb E} S_n$ is the probability measure on $E$ defined by $({\mathbb E} S_n)(A) = {\mathbb E}[S_n(A)]$. Hence, the sequence $({\mathbb E} S_n)$ is pre-compact in ${\mathcal P}_\psi(E)$, thanks to the assumption that sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. It follows that for every $\epsilon > 0$ there exists a compact set $C \subset E$ such that $\sup_n{\mathbb E}\int_{C^c}\psi\,dS_n \le \epsilon$. With this in mind, we may choose the sets $C_k$ to make \eqref{pf:cramer1} arbitrarily small, uniformly in $n$. This shows that $(\widetilde{S}_n)$ is tight, completing Step 2. \textbf{Step 3:} We next show that every limit in distribution of $(S_n,\widetilde{S}_n)$ is concentrated on the diagonal $\{(\nu,\nu) : \nu \in {\mathcal P}_\psi(E)\}$. By definition of $\nu^{(n)}_{k-1,k}$, we have \[ {\mathbb E}\left[\left. f(Y^n_k) - \int_E f\,d\nu^{(n)}_{k-1,k}(Y^n_1,\ldots,Y^n_{k-1})\right| Y^n_1,\ldots,Y^n_{k-1}\right] = 0, \text{ for } k=1,\ldots,n, \] for every $f \in B(E)$. That is, the terms inside the expectation form a martingale difference sequence.
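To see this orthogonality at work in a fully explicit case, the following sketch (a toy two-state Markov chain of our own choosing, not part of the proof) computes the second moment of $\int_E f\,dS_n - \int_E f\,d\widetilde{S}_n$ exactly as the normalized sum of conditional variances, and checks that it is at most $2\|f\|_\infty^2/n$:

```python
# Toy check (our own example, not from the text): for a two-state Markov
# chain, E[(int f dS_n - int f dS~_n)^2] equals (1/n^2) * sum over k of
# E[Var(f(Y_k) | F_{k-1})], by orthogonality of martingale differences.
f = [1.0, -0.5]                      # bounded test function, ||f||_inf = 1
P = [[0.9, 0.1], [0.4, 0.6]]         # transition kernel nu_{k-1,k}
pi = [1.0, 0.0]                      # initial law nu_{0,1}

def row_var(row):
    """Variance of f under the probability vector `row`."""
    mean = sum(p_i * v for p_i, v in zip(row, f))
    return sum(p_i * (v - mean) ** 2 for p_i, v in zip(row, f))

for n in (10, 100, 1000):
    marg, total = pi[:], row_var(pi)     # k = 1 term conditions on nothing
    for _ in range(n - 1):               # terms k = 2, ..., n
        total += sum(marg[i] * row_var(P[i]) for i in range(2))
        marg = [sum(marg[i] * P[i][j] for i in range(2)) for j in range(2)]
    lhs = total / n ** 2                 # exact second moment of the gap
    assert lhs <= 2 * max(abs(v) for v in f) ** 2 / n
```

The factor $2$ is not sharp here; the exact value is the average conditional variance divided by $n$, which already exhibits the $O(1/n)$ decay used in Step 3.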
Thus, for $f \in B(E)$, we have \begin{align} {\mathbb E}\left[\left(\int_E f\,dS_n - \int_E f\, d\widetilde{S}_n\right)^2 \right] &= {\mathbb E}\left[\left(\frac{1}{n}\sum_{k=1}^n\left( f(Y^n_k) - \int_E f\,d\nu^{(n)}_{k-1,k}(Y^n_1,\ldots,Y^n_{k-1})\right)\right)^2\right] \nonumber \\ &= \frac{1}{n^2}\sum_{k=1}^n{\mathbb E}\left[\left( f(Y^n_k) - \int_E f\,d\nu^{(n)}_{k-1,k}(Y^n_1,\ldots,Y^n_{k-1})\right)^2\right] \nonumber \\ &\le 2\|f\|_{\infty}^2/n, \label{pf:sanov1.2} \end{align} where $\|f\|_\infty := \sup_{x \in E}|f(x)|$, and where the second equality uses the orthogonality of the martingale differences. It is straightforward to check that \eqref{pf:sanov1.2} implies that every weak limit of $(S_n,\widetilde{S}_n)$ is concentrated on (i.e., almost surely belongs to) the diagonal $\{(\nu,\nu) : \nu \in {\mathcal P}_\psi(E)\}$ (cf. \cite[Lemma 2.5.1(b)]{dupuis-ellis}). \textbf{Step 4:} We can now complete the proof of the upper bound. With Step 3 in mind, fix a subsequence and a ${\mathcal P}_\psi(E)$-valued random variable $\eta$ such that $(S_n,\widetilde{S}_n) \rightarrow (\eta,\eta)$ in distribution (where we relabeled the subsequence). Recall that $\alpha$ is bounded from below and $\psi$-weakly lower semicontinuous, whereas $F$ is upper semicontinuous and bounded. Returning to \eqref{pf:sanov1.1.1}, we conclude now that \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF(L_n)) &\le \limsup_{n\rightarrow\infty}{\mathbb E}\left[F(\widetilde{S}_n) - \alpha(S_n)\right] \\ &\le {\mathbb E}[F(\eta) - \alpha(\eta)] \\ &\le \sup_{\nu \in {\mathcal P}_\psi(E)}\left\{F(\nu) - \alpha(\nu)\right\}. \end{align*} Of course, we abused notation by relabeling the subsequences, but we have argued that for every subsequence there exists a further subsequence for which this bound holds, which proves the upper bound for $F$ bounded. \textbf{Upper bound, unbounded $F$:} With the proof complete for bounded $F$, we now remove the boundedness assumption using a natural truncation procedure.
Let $F : {\mathcal P}_\psi(E) \rightarrow {\mathbb R} \cup \{-\infty\}$ be upper semicontinuous and bounded from above. For $m > 0$ let $F_m := F \vee (-m)$. Since $F_m$ is bounded and upper semicontinuous, the previous step yields \[ \limsup_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF_m(L_n)) \le \sup_{\nu \in {\mathcal P}_\psi(E)}\left\{F_m(\nu) - \alpha(\nu)\right\} =: S_m, \] for each $m > 0$. Since $F_m \ge F$, we have \[ \rho_n(nF_m(L_n)) \ge \rho_n(nF(L_n)) \] for each $m$, and it remains only to show that \begin{align} \lim_{m \rightarrow \infty}S_m = \sup_{\nu \in {\mathcal P}_\psi(E)}\left\{F(\nu) - \alpha(\nu)\right\} =: S. \label{pf:sanov2} \end{align} Clearly $S_m \ge S$, since $F_m \ge F$. Note that $S < \infty$, as $F$ and $\alpha$ are bounded from above and from below, respectively. If $S = -\infty$, then $F(\nu) = -\infty$ whenever $\alpha(\nu) < \infty$, and we conclude that, as $m\rightarrow\infty$, \[ S_m \le -m - \inf_{\nu \in {\mathcal P}(E)}\alpha(\nu) \ \rightarrow \ -\infty = S. \] Now suppose instead that $S$ is finite. Fix $\epsilon > 0$. For each $m > 0$, find $\nu_m \in {\mathcal P}(E)$ such that \begin{align} F_m(\nu_m) - \alpha(\nu_m) + \epsilon \ge S_m \ge S. \label{pf:sanov3} \end{align} Since $F$ is bounded from above and $S > -\infty$, it follows that $\sup_m\alpha(\nu_m) <\infty$. The sub-level sets of $\alpha$ are $\psi$-weakly compact, and thus the sequence $(\nu_m)$ has a limit point (in ${\mathcal P}_\psi(E)$). Let $\nu_\infty$ denote any limit point, and suppose $\nu_{m_k} \rightarrow \nu_\infty$. Then \begin{align*} \limsup_{k\rightarrow\infty}\left\{F_{m_k}(\nu_{m_k}) - \alpha(\nu_{m_k})\right\} &\le F(\nu_\infty) - \alpha(\nu_\infty)\le S, \end{align*} where the second inequality follows from upper semicontinuity of $F$ and lower semicontinuity of $\alpha$.
This holds for any limit point of the pre-compact sequence $(\nu_m)$, and it follows from \eqref{pf:sanov3} that \[ S \le \limsup_{m\rightarrow\infty}S_m \le \limsup_{m\rightarrow\infty}\left\{F_m(\nu_m) - \alpha(\nu_m)\right\} + \epsilon \le S + \epsilon. \] Since $\epsilon > 0$ was arbitrary, this proves \eqref{pf:sanov2}. \end{proof} \begin{remark} If $\alpha$ has $\sigma({\mathcal P}(E),B(E))$-compact sub-level sets, it is likely that the conclusion of Theorem \ref{th:main-sanov} will hold for bounded $\sigma({\mathcal P}(E),B(E))$-continuous functions $F$. This is known to be true in the classical case $\alpha(\cdot)=H(\cdot|\mu)$ (see, e.g. \cite[Section 6.2]{dembozeitouni}), where we recall the definition of relative entropy $H$ from \eqref{def:relativeentropy}. For the sake of brevity, we do not pursue this generalization. \end{remark} \subsection{Pre-compactness in ${\mathcal P}_\psi(E)$ and Cram\'er's condition} \label{se:strongcramer} This section identifies an important sufficient condition for the sub-level sets of $\alpha$ to be pre-compact subsets of ${\mathcal P}_\psi(E)$, which was required for the upper bound of Theorem \ref{th:main-sanov-extended}. A first useful result provides a condition under which the effective domain of $\alpha$ is contained in ${\mathcal P}_\psi(E)$. \begin{proposition} \label{pr:weakcramer} Fix a measurable function $\psi : E \rightarrow {\mathbb R}_+$. Suppose $\rho(\lambda \psi) < \infty$ for some $\lambda > 0$. Then, for each $\nu \in {\mathcal P}(E)$ satisfying $\alpha(\nu) < \infty$, we have $\int_E\psi\,d\nu < \infty$. \end{proposition} \begin{proof} By definition, for each $\nu \in {\mathcal P}(E)$, \[ \infty > \rho(\lambda\psi) \ge \lambda\int_E\psi\,d\nu - \alpha(\nu). \] If $\alpha(\nu) < \infty$ then certainly $\int\psi\,d\nu < \infty$. \end{proof} The next and more important proposition identifies a condition under which the sub-level sets of $\alpha$ are not only weakly compact but also $\psi$-weakly compact. 
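In the classical case $\rho(f) = \log\int_E e^f\,d\mu$ and $\alpha(\cdot) = H(\cdot|\mu)$, the inequality $\rho(\lambda\psi) \ge \lambda\int_E\psi\,d\nu - \alpha(\nu)$ behind Proposition \ref{pr:weakcramer} is exactly the Donsker--Varadhan bound, which can be checked numerically; the following sketch uses a four-point space of our own choosing (not from the text):

```python
# Finite-space sanity check (our own toy example) of the duality inequality
# rho(lam * psi) >= lam * \int psi d(nu) - alpha(nu), in the classical case
# rho(f) = log \int exp(f) d(mu) and alpha(.) = H(.|mu).
import math
import random

psi = [0.0, 1.0, 4.0, 9.0]          # psi >= 0 on a four-point space
mu = [0.4, 0.3, 0.2, 0.1]           # reference measure

def entropy(nu):
    """Relative entropy H(nu | mu)."""
    return sum(n * math.log(n / m) for n, m in zip(nu, mu) if n > 0)

def rho(lam):
    """rho(lam * psi) = log \int exp(lam * psi) d(mu)."""
    return math.log(sum(m * math.exp(lam * p) for m, p in zip(mu, psi)))

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in psi]
    nu = [x / sum(w) for x in w]    # random probability vector
    for lam in (0.5, 1.0, 2.0):
        mean_psi = sum(n * p for n, p in zip(nu, psi))
        assert lam * mean_psi - entropy(nu) <= rho(lam) + 1e-12
```

Here finiteness of $\rho(\lambda\psi)$ visibly forces $\int_E\psi\,d\nu < \infty$ whenever $H(\nu|\mu) < \infty$, which is the content of Proposition \ref{pr:weakcramer}.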
\begin{proposition} \label{pr:strongcramer} Fix a continuous function $\psi : E \rightarrow {\mathbb R}_+$. Suppose \begin{align} \lim_{m\rightarrow\infty}\rho(\lambda\psi 1_{\{\psi \ge m\}}) = \rho(0), \ \forall \lambda > 0. \label{def:strongcramer-condition} \end{align} Then, for each $c \in {\mathbb R}$, the weak and $\psi$-weak topologies coincide on $\{\nu \in {\mathcal P}(E) : \alpha(\nu) \le c\} \subset {\mathcal P}_\psi(E)$; in particular, the sub-level sets of $\alpha$ are $\psi$-weakly compact. \end{proposition} \begin{proof} Fix $c \in {\mathbb R}$, and abbreviate $S = \{\nu \in {\mathcal P}(E) : \alpha(\nu) \le c\}$. Assume $S \neq \emptyset$. Note that Proposition \ref{pr:weakcramer} implies $S \subset {\mathcal P}_\psi(E)$. It suffices to prove that the map $\nu \mapsto \int_Ef\,d\nu$ is weakly continuous on $S$ for every continuous $f$ with $|f| \le 1 + \psi$. For this it suffices to prove the uniform integrability condition \[ \lim_{m\rightarrow\infty}\sup_{\nu \in S}\int_{\{\psi \ge m\}}\psi\,d\nu = 0. \] By definition of $\rho$, for $m > 0$ and $\nu \in S$, \begin{align} \lambda\int_{\{\psi \ge m\}}\psi\,d\nu &\le \rho(\lambda\psi 1_{\{\psi \ge m\}}) + \alpha(\nu) \le \rho(\lambda\psi 1_{\{\psi \ge m\}}) + c. \label{pf:strongcramer1} \end{align} Given $\epsilon > 0$, choose $\lambda > 0$ large enough that $(\epsilon + \rho(0) + c)/\lambda \le \epsilon$. Then choose $m$ large enough that $\rho(\lambda\psi 1_{\{\psi \ge m\}}) \le \epsilon + \rho(0)$, which is possible because of assumption \eqref{def:strongcramer-condition}. It then follows from \eqref{pf:strongcramer1} that $\int_{\{\psi \ge m\}}\psi\,d\nu \le \epsilon$, and the proof is complete.
\end{proof} Several extensions of Sanov's theorem to stronger topologies rely on what might be called a ``strong Cram\'er condition.'' For instance, if $\psi : E \rightarrow {\mathbb R}_+$ is continuous, the results of Schied \cite{schied1998cramer} indicate that Sanov's theorem can be extended to the $\psi$-weak topology if (and essentially only if) $\log\int_Ee^{\lambda\psi}\,d\mu < \infty$ for every $\lambda \ge 0$; see also \cite{wang2010sanov,eichelsbacher1996large}. It may seem natural to guess that the analogous condition in our general setting is $\rho(\lambda\psi) < \infty$ for all $\lambda \ge 0$, but it turns out this is not enough. We refer to \eqref{def:strongcramer-condition} as the \emph{strong Cram\'er condition}, noting that this condition was heavily inspired by the work of Owari \cite{owari2014maximum} on continuous extensions of monotone convex functionals. The following simple lemma is worth stating for emphasis: \begin{lemma} \label{le:strongcramer-implies-weak} Fix a continuous function $\psi : E \rightarrow {\mathbb R}_+$. Suppose \eqref{def:strongcramer-condition} holds. Then $\rho(\lambda\psi) < \infty$ for every $\lambda \ge 0$. \end{lemma} \begin{proof} For $m,\lambda > 0$ we have $\lambda\psi \le \lambda m + \lambda\psi1_{\{\psi \ge m\}}$, and thus properties (R1) and (R2) of Theorem \ref{th:alpha-to-rho} imply \[ \rho(\lambda \psi) \le \lambda m + \rho(\lambda\psi 1_{\{\psi \ge m\}}). \] By \eqref{def:strongcramer-condition}, the right-hand side is finite for all sufficiently large $m$. \end{proof} In several cases of interest (namely, Propositions \ref{pr:shortfallcramercondition} and \ref{pr:robustcramercondition} below), it turns out that a converse to Lemma \ref{le:strongcramer-implies-weak} is true, i.e., the strong Cram\'er condition \eqref{def:strongcramer-condition} is equivalent to the statement that $\rho(\lambda\psi) < \infty$ for all $\lambda > 0$. In general, however, the strong Cram\'er condition is the strictly stronger statement.
Consider the following simple example, borrowed from \cite[Example 3.7]{owari2014maximum}: Let $E = \{0,1,2,\ldots\}$ be the natural numbers, and define $\mu_n \in {\mathcal P}(E)$ by $\mu_1\{0\}=1$ and, for $n \ge 2$, $\mu_n\{0\} = 1-1/n$ and $\mu_n\{n\} = 1/n$. Let $M$ denote the closed convex hull of $(\mu_n)$. Then $M$ is convex and weakly compact. Define $\alpha(\mu) = 0$ for $\mu \in M$ and $\alpha(\mu)=\infty$ otherwise. Then $\alpha$ satisfies our standing assumptions, and $\rho(f) = \sup_{\mu \in M}\int f\,d\mu = \sup_n\int f\,d\mu_n$. Finally, let $\psi(x)=x$ for $x \in E$. Then $\rho(\lambda\psi) = \lambda < \infty$ because $\int \psi\,d\mu_n = 1$ for all $n \ge 2$, and similarly $\rho(\lambda \psi1_{\{\psi \ge m\}}) = \lambda$ for every $m$ because $\int\psi1_{\{\psi \ge m\}}\,d\mu_n = 1_{\{n \ge m\}}$ for $n \ge 2$. In particular, $\rho(\lambda\psi) < \infty$ for all $\lambda > 0$, but the strong Cram\'er condition fails. We remark that it is conceivable that a converse to Proposition \ref{pr:strongcramer} might hold, i.e., that the strong Cram\'er condition \eqref{def:strongcramer-condition} may be \emph{equivalent} to the pre-compactness of the sub-level sets of $\alpha$ in ${\mathcal P}_\psi(E)$. Indeed, the results of Schied \cite[Theorem 2]{schied1998cramer} and Owari \cite[Theorem 3.8]{owari2014maximum} suggest that this may be the case. This remains an open problem. \subsection{Implications of $\psi$-weakly compact sub-level sets} This section contains two results to be used occasionally in the sequel. The first is a useful lemma that aids in the computation of $\rho(f)$ for certain unbounded $f$ in Section \ref{se:nonexpLDP}. \begin{lemma} \label{le:extended-duality2} Suppose $\psi : E \rightarrow {\mathbb R}_+$ is continuous, and suppose the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. Let $f : E \rightarrow {\mathbb R}$ be upper semicontinuous with $f \le c(1+\psi)$ pointwise for some $c \ge 0$.
Then \[ \rho(f) = \lim_{m\rightarrow \infty}\rho(f \vee (-m)) = \inf_{m > 0}\rho(f \vee (-m)). \] \end{lemma} \begin{proof} Write $f_m := f \vee (-m)$. Monotonicity of $\rho$ (see (R1) of Theorem \ref{th:alpha-to-rho}) implies $\inf_{m > 0}\rho(f_m) \ge \rho(f)$, so we need only prove the reverse inequality. Assume without loss of generality that $\inf_{m > 0}\rho(f_m) > -\infty$. For each $n$, we may find some $\nu_n \in {\mathcal P}_\psi(E)$ such that \begin{align} \inf_{m > 0}\rho(f_m) \le \rho(f_n) \le \int_Ef_n\,d\nu_n - \alpha(\nu_n) + 1/n. \label{pf:extended-duality1} \end{align} This implies $\sup_n\alpha(\nu_n) < \infty$, because $f$ is bounded from above and $\rho(f_n) \ge \inf_{m > 0}\rho(f_m) > -\infty$. The sub-level sets of $\alpha$ are $\psi$-weakly pre-compact, and thus we may extract a subsequence $(n_k)$ and $\nu_\infty \in {\mathcal P}_\psi(E)$ such that $\nu_{n_k} \rightarrow \nu_\infty$. Note that this convergence implies the uniform integrability of $\psi$, in the sense that \begin{align} \lim_{r \rightarrow\infty}\sup_k\int_{\{\psi \ge r\}}\psi\,d\nu_{n_k} = 0.\label{pf:extended-duality2} \end{align} By Skorohod's representation, we may find random variables $X_k$ and $X_\infty$ with respective laws $\nu_{n_k}$ and $\nu_\infty$ such that $X_k \rightarrow X_\infty$ a.s. Note that \eqref{pf:extended-duality2} implies \begin{align} \lim_{r \rightarrow\infty}\sup_k{\mathbb E}[\psi(X_k)1_{\{\psi(X_k) \ge r\}}] = 0.\label{pf:extended-duality3} \end{align} The upper semicontinuity assumption implies $\limsup_{k\rightarrow\infty}f_{n_k}(X_k) \le f(X_\infty)$ almost surely. The positive parts of $(f_{n_k}(X_k))_{k=1}^\infty$ are uniformly integrable thanks to \eqref{pf:extended-duality3} and the bound $f_m \le c(1+\psi)$. We then conclude from Fatou's lemma that \[ \limsup_{k\rightarrow\infty}\int_Ef_{n_k}\,d\nu_{n_k} = \limsup_{k\rightarrow\infty}{\mathbb E}[f_{n_k}(X_k)] \le {\mathbb E}[f(X_\infty)] = \int_Ef\,d\nu_\infty.
\] Since $\alpha$ is $\psi$-weakly lower semicontinuous, we conclude from \eqref{pf:extended-duality1} that \[ \inf_{m > 0}\rho(f_m) \le \int_Ef\,d\nu_\infty - \alpha(\nu_\infty) \le \sup_{\nu \in {\mathcal P}_\psi(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right) = \rho(f). \] \end{proof} The last result of this section will be useful in proving our analog of Cram\'er's upper bound, Corollary \ref{co:lpcramer}. Proposition \ref{pr:cramer-representation} below is a generalization of the well-known result that the functions \[ t \mapsto \log\int_{\mathbb R} e^{tx}\,\mu(dx), \quad\quad \text{ and } \quad\quad t \mapsto \inf\left\{H(\nu | \mu) : \nu \in {\mathcal P}({\mathbb R}), \ \int_{\mathbb R} x\,\nu(dx) = t\right\}, \] are convex conjugates of each other (see, e.g., \cite[Lemma 3.3.3]{dupuis-ellis}). This is used, for instance, in deriving Cram\'er's theorem from Sanov's theorem via the contraction principle. \begin{proposition} \label{pr:cramer-representation} Let $(E,\|\cdot\|)$ be a separable Banach space, and let $\psi(x) = \|x\|$. Suppose the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. Define $\Psi : E \rightarrow {\mathbb R} \cup \{\infty\}$ by \[ \Psi(x) = \inf\left\{\alpha(\nu) : \nu \in {\mathcal P}_\psi(E), \ \int_Ez\,\nu(dz) = x\right\}, \] where the integral is in the sense of Bochner. Define $\Psi^*$ on the continuous dual $E^*$ by \[ \Psi^*(x^*) = \sup_{x \in E}\left(\langle x^*,x\rangle - \Psi(x)\right). \] Then $\Psi$ is convex and lower semicontinuous, and $\Psi^*(x^*) = \rho(x^*)$ for every $x^* \in E^*$. In particular, \begin{align} \Psi(x) = \sup_{x^* \in E^*}\left(\langle x^*,x\rangle - \rho(x^*)\right). \label{def:cramer-representation} \end{align} \end{proposition} \begin{proof} We first show that $\Psi$ is convex. Let $t \in (0,1)$ and $x_1,x_2 \in E$. Fix $\epsilon > 0$, and find $\nu_1,\nu_2 \in {\mathcal P}_\psi(E)$ such that $\int_Ez\nu_i(dz)=x_i$ and $\alpha(\nu_i) \le \Psi(x_i) + \epsilon$.
Convexity of $\alpha$ yields \begin{align*} \Psi(tx_1 + (1-t)x_2) &\le \alpha(t\nu_1+(1-t)\nu_2) \le t\alpha(\nu_1) + (1-t)\alpha(\nu_2) \\ &\le t\Psi(x_1) + (1-t)\Psi(x_2) + \epsilon. \end{align*} Since $\epsilon > 0$ was arbitrary, $\Psi$ is convex. To prove that $\Psi$ is lower semicontinuous, first note that $\Psi$ is bounded from below since $\alpha$ is. Let $x_n \rightarrow x$ in $E$, and find $\nu_n \in {\mathcal P}_\psi(E)$ such that $\alpha(\nu_n) \le \Psi(x_n) + 1/n$ and $\int_Ez\nu_n(dz)=x_n$ for each $n$. Fix a subsequence $\{x_{n_k}\}$ such that $\Psi(x_{n_k}) < \infty$ for all $k$ and $\Psi(x_{n_k})$ converges to a finite value (if no such subsequence exists, then there is nothing to prove, as $\Psi(x_n) \rightarrow \infty$). Then $\sup_{k}\alpha(\nu_{n_k}) < \infty$, and because $\alpha$ has $\psi$-weakly compact sub-level sets there exists a further subsequence (again denoted $n_k$) and some $\nu_\infty \in {\mathcal P}_\psi(E)$ such that $\nu_{n_k}\rightarrow\nu_\infty$. The convergence $\nu_{n_k}\rightarrow\nu_\infty$ in the $\psi$-weak topology implies \[ x = \lim_{k\rightarrow\infty}x_{n_k} = \lim_{k\rightarrow\infty}\int_Ez\nu_{n_k}(dz) = \int_Ez\,\nu_\infty(dz). \] Using lower semicontinuity of $\alpha$ we conclude \begin{align} \Psi(x) \le \alpha(\nu_\infty) \le \liminf_{k\rightarrow\infty}\alpha(\nu_{n_k}) \le \liminf_{k\rightarrow\infty}\Psi(x_{n_k}). \label{pf:representation1} \end{align} For every sequence $(x_n)$ in $E$ and any subsequence thereof, this argument shows that there exists a further subsequence for which \eqref{pf:representation1} holds, and this proves that $\Psi$ is lower semicontinuous.
Next, compute $\Psi^*$ as follows: \begin{align*} \Psi^*(x^*) &= \sup_{x \in E}\left(\langle x^*,x\rangle - \Psi(x)\right) \\ &= \sup_{x \in E}\sup\left\{\langle x^*,x\rangle - \alpha(\nu) : \nu \in {\mathcal P}_\psi(E), \ \int_Ez\nu(dz)=x\right\} \\ &= \sup_{\nu \in {\mathcal P}_\psi(E)}\left(\left\langle x^*,\int_Ez\nu(dz)\right\rangle - \alpha(\nu)\right) \\ &= \sup_{\nu \in {\mathcal P}_\psi(E)}\left(\int_E\langle x^*,z\rangle\nu(dz) - \alpha(\nu)\right) \\ &= \rho(x^*). \end{align*} Indeed, we can take the supremum equivalently over ${\mathcal P}_\psi(E)$ or over ${\mathcal P}(E)$ in the last step, thanks to the assumption that $\alpha = \infty$ off of ${\mathcal P}_\psi(E)$ and our convention $\infty-\infty=-\infty$. Because $\Psi$ is lower semicontinuous and convex, we conclude from the Fenchel-Moreau theorem \cite[Theorem 2.3.3]{zalinescu-book} that it is equal to its biconjugate, which is precisely what \eqref{def:cramer-representation} says. \end{proof} \subsection{Contraction principles and an abstract form of Cram\'er's theorem} \label{se:contraction} Viewing Theorem \ref{th:main-sanov-extended} as an abstract form of Sanov's theorem, we may derive from it an abstract form of Cram\'er's theorem. The key tool is an analog of the contraction principle from classical large deviations (c.f. \cite[Theorem 4.2.1]{dembozeitouni}). In its simplest form, if $\varphi : {\mathcal P}(E) \rightarrow E'$ is continuous for some topological space $E'$, then for $F \in C_b(E')$ we may write \begin{align*} \lim_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ \varphi \circ L_n) &= \sup_{\nu \in {\mathcal P}(E)}\left(F(\varphi(\nu)) - \alpha(\nu)\right) \\ &= \sup_{x \in E'}\left(F(x) - \alpha_\varphi(x)\right), \end{align*} where we define $\alpha_\varphi : E' \rightarrow (-\infty,\infty]$ by \[ \alpha_\varphi(x) := \inf\left\{\alpha(\nu) : \nu \in {\mathcal P}(E), \ \varphi(\nu) = x\right\}. 
\] This line of reasoning leads to the following extension of Cram\'er's theorem: \begin{corollary} Let $(E,\|\cdot\|)$ be a separable Banach space with continuous dual $E^*$. Define $\Lambda^* : E \rightarrow {\mathbb R} \cup \{\infty\}$ by \begin{align*} \Lambda^*(x) = \sup_{x^* \in E^*}\left(\langle x^*,x\rangle - \rho(x^*)\right). \end{align*} Define $S_n : E^n \rightarrow E$ by $S_n(x_1,\ldots,x_n) = \frac{1}{n}\sum_{i=1}^nx_i$. If $F : E \rightarrow {\mathbb R} \cup \{\infty\}$ is lower semicontinuous and bounded from below, then \begin{align*} \liminf_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ S_n) \ge \sup_{x \in E}(F(x)-\Lambda^*(x)). \end{align*} Suppose also that the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$, where $\psi(x) = \|x\|$ for $x \in E$. If $F : E \rightarrow {\mathbb R} \cup \{-\infty\}$ is upper semicontinuous and bounded from above, then \begin{align*} \limsup_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ S_n) \le \sup_{x \in E}(F(x)-\Lambda^*(x)). \end{align*} \end{corollary} \begin{proof} The map \[ {\mathcal P}_\psi(E) \ni \mu \mapsto F\left(\int_Ez\,\mu(dz)\right) \] is upper (resp. lower) semicontinuous as soon as $F$ is upper (resp. lower) semicontinuous. The claims then follow from Theorem \ref{th:main-sanov-extended} and Proposition \ref{pr:cramer-representation}. \end{proof} \section{Non-exponential large deviations} \label{se:nonexpLDP} The goal of this section is to prove Theorem \ref{th:lpsanov} and Corollary \ref{co:lpcramer}, but along the way we will explore a particularly interesting class of $(\alpha,\rho)$ pairs. \subsection{Shortfall risk measures} \label{se:shortfall} Fix $\mu \in {\mathcal P}(E)$ and a nondecreasing, nonconstant, convex function $\ell : {\mathbb R} \rightarrow {\mathbb R}_+$ satisfying $\ell(x) < 1$ for all $x < 0$.
Let $\ell^*(y) = \sup_{x \in {\mathbb R}}(xy - \ell(x))$ denote the convex conjugate, and define $\alpha : {\mathcal P}(E) \rightarrow [0,\infty]$ by \[ \alpha(\nu) = \begin{cases} \inf_{t > 0}\frac{1}{t}\left(1 + \int_E\ell^*\left(t\frac{d\nu}{d\mu}\right)\,d\mu\right) &\text{if } \nu \ll \mu \\ \infty &\text{otherwise}. \end{cases} \] Note that $\ell^*(x) \ge - \ell(0) \ge -1$, by assumption and by continuity of $\ell$, so that $\alpha \ge 0$. Define $\rho$ as usual by \eqref{intro:duality}. It is known \cite[Proposition 4.115]{follmer-schied-book} that, for $f \in B(E)$, \begin{align} \rho(f) = \inf\left\{m \in {\mathbb R} : \int_E\ell(f(x)-m)\mu(dx) \le 1\right\}. \label{def:shortfall-rho} \end{align} Refer to the book of F\"ollmer and Schied \cite[Section 4.9]{follmer-schied-book} for a thorough study of the properties of $\rho$. Notably, they show that $\rho$ satisfies all of properties (R1--4) of Theorem \ref{th:alpha-to-rho}, and that both dual formulas hold: \begin{align*} \rho(f) &= \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right), \quad\quad \alpha(\nu) = \sup_{f \in B(E)}\left(\int_Ef\,d\nu - \rho(f)\right). \end{align*} If $\ell(x)=e^x$ we recover $\rho(f)=\log\int_Ee^f\,d\mu$ and $\alpha(\nu) = H(\nu | \mu)$. If $\ell(x) = [(1+x)^+]^q$ for some $q \ge 1$, then \begin{align} \alpha(\nu) = \|d\nu/d\mu\|_{L^p(\mu)}-1, \text{ for } \nu \ll \mu, \quad\quad \alpha(\nu) = \infty \text{ otherwise}, \label{def:lpentropy} \end{align} where $p=q/(q-1)$, and where of course $\|f\|_{L^p(\mu)} = \left(\int |f|^p\,d\mu\right)^{1/p}$; see \cite[Example 4.118]{follmer-schied-book} or \cite[Section 3.1]{lacker-liquidity} for this computation. The $-1$ is a convenient normalization, ensuring that $\alpha(\nu)=0$ if and only if $\nu=\mu$. Note that \eqref{def:shortfall-rho} is only valid, a priori, for bounded $f$, although the expression on the right-hand side certainly makes sense for unbounded $f$. 
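The identity \eqref{def:lpentropy} can also be verified numerically. The sketch below uses a three-point example of our own, together with the elementary closed form $\ell^*(y) = (q-1)q^{-p}y^p - y$ for $y \ge 0$ when $\ell(x) = [(1+x)^+]^q$, and approximates the infimum over $t$ by a grid search:

```python
# Numerical check (our own three-point example) that the variational formula
# for alpha with ell(x) = [(1+x)^+]^q reduces to the L^p norm minus one.
q = 3.0
p = q / (q - 1.0)                       # conjugate exponent, here p = 1.5
mu = [1 / 3, 1 / 3, 1 / 3]
nu = [0.5, 0.3, 0.2]
Z = [n / m for n, m in zip(nu, mu)]     # density d(nu)/d(mu)

def ell_star(y):
    """Conjugate of ell on y >= 0; elementary calculus gives
    ell*(y) = (q - 1) * q**(-p) * y**p - y."""
    return (q - 1.0) * q ** (-p) * y ** p - y

def objective(t):
    return (1.0 + sum(m * ell_star(t * z) for m, z in zip(mu, Z))) / t

alpha = min(objective(k / 1000.0) for k in range(1, 100_000))
lp_norm = sum(m * z ** p for m, z in zip(mu, Z)) ** (1.0 / p)
assert abs(alpha - (lp_norm - 1.0)) < 1e-4
```

The grid minimum agrees with $\|d\nu/d\mu\|_{L^p(\mu)} - 1$ to four decimal places; the exact minimizer is $t = q/\|d\nu/d\mu\|_{L^p(\mu)}$.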
The next results provide some useful cases for which the identity \eqref{def:shortfall-rho} carries over to unbounded functions, and these will be needed in the proof of Corollary \ref{co:lpcramer}. In the following, define $\ell(\pm \infty) = \lim_{x \rightarrow \pm\infty}\ell(x)$. \begin{lemma} \label{le:shortfall-Bb} The identity \eqref{def:shortfall-rho} holds whenever $f : E \rightarrow {\mathbb R}$ is measurable and bounded from below. \end{lemma} \begin{proof} Let $H(f)$ denote the right-hand side of \eqref{def:shortfall-rho}. Let $f_n = f \wedge n$. For each $n$, because $f_n$ is bounded, the identity $\rho(f_n)=H(f_n)$ holds by \cite[Proposition 4.115]{follmer-schied-book}. Monotone convergence yields \begin{align*} \lim_{n\rightarrow\infty}\rho(f_n) &= \sup_n\rho(f_n) = \sup_n\sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef_n\,d\nu - \alpha(\nu)\right) = \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right) = \rho(f). \end{align*} On the other hand, $H$ is clearly monotone, so $f_n \le f_{n+1} \le f$ implies $H(f_n) \le H(f_{n+1}) \le H(f)$ for all $n$. It remains to show that for all $\epsilon > 0$ there exists $n$ such that $H(f_n) > H(f) - \epsilon$. Letting $c = H(f)-\epsilon$, the definition of $H$ implies $\int_E\ell(f(x) - c)\mu(dx) > 1$. By monotone convergence, there exists $n$ such that $\int_E\ell(f_n(x) - c)\mu(dx) > 1$. The definition of $H$ now implies $H(f_n) > c = H(f) - \epsilon$. \end{proof} The following result shows how the strong Cram\'er condition \eqref{def:strongcramer-condition} simplifies in the present context. It is essentially contained in \cite[Proposition 7.3]{owari2014maximum}, but we include the short proof. \begin{proposition} \label{pr:shortfallcramercondition} Let $\psi : E \rightarrow {\mathbb R}_+$ be measurable. Suppose $\int_E\ell(\lambda\psi(x))\mu(dx) < \infty$ for all $\lambda > 0$. Then $\rho(\lambda\psi 1_{\{\psi \ge m\}})\rightarrow 0$ as $m\rightarrow\infty$, for all $\lambda > 0$.
In particular, the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. \end{proposition} \begin{proof} Fix $\epsilon > 0$ and $\lambda > 0$. The following two limits hold: \begin{align*} \lim_{m\rightarrow\infty}\mu(\psi < m) = 1, \quad\quad\quad \lim_{m\rightarrow\infty}\int_{\{\psi \ge m\}}\ell\left(\lambda\psi(x) - \epsilon\right)\mu(dx) = 0. \end{align*} Since $\ell(-\epsilon) < 1$, it follows that, for sufficiently large $m$, \begin{align*} 1 &\ge \ell(-\epsilon)\mu(\psi < m) + \int_{\{\psi \ge m\}}\ell\left(\lambda\psi(x) - \epsilon\right)\mu(dx) \\ &= \int_E\ell\left(\lambda\psi(x)1_{\{\psi \ge m\}}(x) - \epsilon\right)\mu(dx). \end{align*} Use Lemma \ref{le:shortfall-Bb} to conclude that, for sufficiently large $m$, \[ \rho(\lambda\psi 1_{\{\psi \ge m\}}) = \inf\left\{c \in {\mathbb R} : \int_E\ell\left(\lambda\psi(x)1_{\{\psi \ge m\}}(x) - c\right)\mu(dx) \le 1 \right\} \le \epsilon. \] \end{proof} Finally, we check that the identity \eqref{def:shortfall-rho} still holds for sufficiently integrable $f$. \begin{lemma} \label{le:shortfall-L1} Let $\psi : E \rightarrow {\mathbb R}_+$ be continuous, and suppose $\int_E\ell(\lambda\psi(x))\mu(dx) < \infty$ for all $\lambda > 0$. Suppose $f : E \rightarrow {\mathbb R}$ is upper semicontinuous with $f \le c(1+\psi)$ for some $c \ge 0$. Then \eqref{def:shortfall-rho} holds. \end{lemma} \begin{proof} Let $H(f)$ denote the right-hand side of \eqref{def:shortfall-rho}. The assumption on $\psi$ along with Proposition \ref{pr:shortfallcramercondition} imply that the strong Cram\'er condition \eqref{def:strongcramer-condition} holds. Now let $f_n = f \vee (-n)$. Because $f_n$ is bounded from below, Lemma \ref{le:shortfall-Bb} yields $\rho(f_n) = H(f_n)$ for each $n$. Thanks to the strong Cram\'er condition, Lemma \ref{le:extended-duality2} implies $\rho(f) = \lim_{n\rightarrow\infty}\rho(f_n)$, and it remains only to show that $H(f_n) \rightarrow H(f)$.
Clearly $H(f_n) \ge H(f_{n+1}) \ge H(f)$ for each $n$ since $f_n \ge f_{n+1} \ge f$ pointwise, so the sequence $(H(f_n))$ has a limit. As $\ell$ is continuous, note that $H(f)$ is the unique solution $c \in {\mathbb R}$ of the equation \[ \int_E\ell(f(x)-c)\mu(dx) = 1. \] Similarly, $H(f_n)$ solves $\int_E\ell(f_n(x)-H(f_n))\mu(dx) = 1$. Passing to the limit shows $H(f_n) \rightarrow H(f)$. \end{proof} \subsection{Proof of Theorem \ref{th:lpsanov} and Corollary \ref{co:lpcramer}} With these generalities in hand, we now turn toward the proof of Theorem \ref{th:lpsanov}. The idea is to apply Theorem \ref{th:main-sanov-extended} with $\alpha$ defined as in \eqref{def:lpentropy}. The following estimate is crucial: \begin{lemma} \label{le:shortfallbound} Let $p \in (1,\infty]$ and $q=p/(p-1)$. Let $\alpha$ be as in \eqref{def:lpentropy}. Then, for each $n \ge 1$ and $\nu \in {\mathcal P}(E^n)$ with $\nu \ll \mu^n$, \begin{align} \alpha_n(\nu) \le n^{1/q}\|d\nu/d\mu^n\|_{L^p(\mu^n)}. \label{pf:lpsanov3} \end{align} \end{lemma} \begin{proof} The case $p=\infty$ and $q=1$ follows by sending $p \rightarrow\infty$ in \eqref{pf:lpsanov3}, so we prove only the case $p < \infty$. As we will be working with conditional expectations, it is convenient to work with a more probabilistic notation: Fix $n$, and endow $\Omega = E^n$ with its Borel $\sigma$-field as well as the probability $P = \mu^n$. Let $X_i : E^n \rightarrow E$ denote the natural projections, and let ${\mathcal F}_k = \sigma(X_1,\ldots,X_k)$ denote the natural filtration, for $k=1,\ldots,n$, with ${\mathcal F}_0 := \{\emptyset,\Omega\}$. For $\nu \in {\mathcal P}(E^n)$ and $k=1,\ldots,n$, let $\nu_k$ denote a version of the regular conditional law of $X_k$ given ${\mathcal F}_{k-1}$ under $\nu$, or symbolically $\nu_k := \nu(X_k \in \cdot \, | \, {\mathcal F}_{k-1})$. Let ${\mathbb E}^\nu$ denote integration with respect to $\nu$. 
Since $P(X_k \in \cdot \, | \, {\mathcal F}_{k-1}) = \mu$ a.s., if $\nu \ll P$ then \[ \frac{d\nu_k}{d\mu} = \frac{{\mathbb E}^P[d\nu/dP | {\mathcal F}_k]}{{\mathbb E}^P[d\nu/dP | {\mathcal F}_{k-1}]} =: \frac{M_k}{M_{k-1}}, \text{ a.s., where } \frac{0}{0} := 0. \] Therefore \[ \alpha(\nu_k) = {\mathbb E}^P\left[\left.\left(\frac{M_k}{M_{k-1}}\right)^p\right|{\mathcal F}_{k-1}\right]^{1/p}-1. \] Note that $(M_k)_{k=0}^n$ is a nonnegative martingale, with $M_0 = 1$ and $M_n = d\nu/dP$. Then \begin{align*} \alpha_n(\nu) &= {\mathbb E}^\nu\left[\sum_{k=1}^n\alpha(\nu_k)\right] = {\mathbb E}^P\left[M_n\sum_{k=1}^n\left({\mathbb E}^P\left[\left.\left(\frac{M_k}{M_{k-1}}\right)^p\right|{\mathcal F}_{k-1}\right]^{1/p}-1\right)\right] \\ &= {\mathbb E}^P\left[\sum_{k=1}^n\left({\mathbb E}^P\left[\left.M_k^p \right|{\mathcal F}_{k-1}\right]^{1/p}-M_{k-1}\right)\right]. \end{align*} Subadditivity of $x \mapsto x^{1/p}$ implies \[ \left({\mathbb E}^P[M_k^p|{\mathcal F}_{k-1}]\right)^{1/p} \le \left({\mathbb E}^P[M_k^p - M_{k-1}^p|{\mathcal F}_{k-1}]\right)^{1/p} + M_{k-1}, \] where the right-hand side is well-defined because \[ {\mathbb E}^P[M_k^p | {\mathcal F}_{k-1}] \ge{\mathbb E}^P[M_k | {\mathcal F}_{k-1}]^p = M_{k-1}^p. \] Concavity of $x \mapsto x^{1/p}$ and Jensen's inequality yield \begin{align*} \alpha_n(\nu) &\le {\mathbb E}^P\left[\sum_{k=1}^n\left({\mathbb E}^P[M_k^p - M_{k-1}^p|{\mathcal F}_{k-1}]\right)^{1/p}\right] \\ &\le n^{1-\frac{1}{p}}\left({\mathbb E}^P\left[\sum_{k=1}^n{\mathbb E}^P[M_k^p - M_{k-1}^p|{\mathcal F}_{k-1}]\right]\right)^{1/p} \\ &= n^{1/q}\left({\mathbb E}^P\left[M_n^p - M_0^p\right]\right)^{1/p} \\ &\le n^{1/q}\left({\mathbb E}^P\left[M_n^p\right]\right)^{1/p}. \end{align*} \end{proof} \subsection*{Proof of Theorem \ref{th:lpsanov}} Again, let $\alpha$ be as in \eqref{def:lpentropy}, and note that it corresponds to the shortfall risk measure \eqref{def:shortfall-rho} with $\ell(x) = [(1+x)^+]^q$. 
Then Proposition \ref{pr:shortfallcramercondition} and the assumption that $\int\psi^q\,d\mu < \infty$ imply that the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. Hence, Theorem \ref{th:main-sanov-extended} applies to the $\psi$-weakly upper semicontinuous function $F : {\mathcal P}_\psi(E) \rightarrow [-\infty,0]$ defined by $F(\nu) = 0$ if $\nu \in A$ and $F(\nu) = -\infty$ otherwise. This yields \begin{align} \limsup_{n\rightarrow\infty}\frac{1}{n}\rho_n(nF \circ L_n) \le -\inf_{\nu \in A}\alpha(\nu). \label{pf:lpsanov1.1} \end{align} Now use Lemma \ref{le:shortfallbound} to get \begin{align*} \frac{1}{n}\rho_n(nF\circ L_n) &= \sup_{\nu \in {\mathcal P}(E^n)}\left(\int F\circ L_n\,d\nu- \frac{1}{n}\alpha_n(\nu)\right) \\ &= -\inf\left\{\frac{1}{n}\alpha_n(\nu) : \nu \in {\mathcal P}(E^n), \ \nu(L_n \in A) = 1\right\} \\ &\ge -\inf\left\{n^{-1/p}\|d\nu/d\mu^n\|_{L^p(\mu^n)} : \nu \ll \mu^n, \ \nu(L_n \in A) = 1\right\}. \end{align*} Set $B_n = \{x \in E^n : L_n(x) \in A\}$, assume without loss of generality that $\mu^n(B_n) > 0$, and define $\nu \ll \mu^n$ by $d\nu/d\mu^n = 1_{B_n}/\mu^n(B_n)$, so that $\nu(L_n \in A) = 1$. A quick computation yields \[ \|d\nu/d\mu^n\|_{L^p(\mu^n)} = \mu^n(B_n)^{(1-p)/p} = \mu^n(B_n)^{-1/q}. \] Thus \[ \frac{1}{n}\rho_n(nF \circ L_n) \ge -\left(n^{1/p}\mu^n(B_n)^{1/q}\right)^{-1}. \] Combine this with \eqref{pf:lpsanov1.1} to get \begin{align*} \limsup_{n\rightarrow\infty}-\left(n^{1/p}\mu^n(L_n \in A)^{1/q}\right)^{-1} \le -\inf_{\nu \in A}\alpha(\nu). \end{align*} Recalling the definition of $\alpha$ from \eqref{def:lpentropy}, the proof is complete. \hfill \qedsymbol \subsection*{Proof of Corollary \ref{co:lpcramer}} Let $\psi(x) = \|x\|$, and consider the ${\mathcal P}_\psi(E)$-closed set \[ B = \left\{\mu \in {\mathcal P}_\psi(E) : \int_Ez\,\mu(dz) \in A\right\}, \] where the integral is defined in the Bochner sense.
Proposition \ref{pr:shortfallcramercondition} and the assumption that $\int\psi^q\,d\mu = {\mathbb E}[\|X_1\|^q] < \infty$ imply that the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. We may then apply Theorem \ref{th:lpsanov} to get \[ \limsup_{n\rightarrow\infty}n^{1/p}{\mathbb P}\left(\frac{1}{n}\sum_{i=1}^nX_i \in A\right)^{1/q} \le \left(\inf_{\nu \in B}\alpha(\nu)\right)^{-1}, \] where again $\alpha$ is as in \eqref{def:lpentropy}. Proposition \ref{pr:cramer-representation} yields \[ \Lambda^*(x) = \inf\left\{\alpha(\nu) : \nu \in {\mathcal P}_\psi(E), \ \int_Ez\,\nu(dz)=x\right\}, \text{ for } x \in E. \] It follows that $\inf_{\nu \in B}\alpha(\nu) = \inf_{x \in A}\Lambda^*(x)$. \hfill \qedsymbol \subsection{A simple deviation bound} \label{se:deviationbounds} Before proceeding to the more involved application to stochastic optimization in the next subsection, we now show briefly that the quantities in Corollary \ref{co:lpcramer} are not entirely intractable. Suppose the set $A$ therein is the complement of the open ball centered at the origin with radius $r > 0$. Corollary \ref{co:lpcramer} then yields \begin{align} \limsup_{n\rightarrow\infty}\,n^{1/p}\,{\mathbb P}\left(\left\|\frac{1}{n}\sum_{i=1}^nX_i\right\| \ge r\right)^{1/q} \le \left(\inf_{\|x\| \ge r}\Lambda^*(x)\right)^{-1}. \label{pf:deviationbound1} \end{align} We wish to bound the right-hand side from above. For $x^* \in E^*$, notice that $\Lambda(x^*) \le 1$ if and only if \[ {\mathbb E}\left[(\langle x^*,X_1\rangle^+)^q\right] \le 1. \] The latter clearly holds if $\|x^*\| \le 1/M_q$, where $\|x^*\|$ is the usual dual norm and $M_q := {\mathbb E}[\|X_1\|^q]^{1/q}$. In particular, we find that $\Lambda \le H$ pointwise, where \[ H(x^*) = \begin{cases} 1 &\text{if } \|x^*\| \le M_q^{-1}, \\ \infty &\text{otherwise}. 
\end{cases} \] This is a proper convex lower semicontinuous function, and its conjugate is \begin{align*} H^*(x) &= \sup_{x^* \in E^*}\left(\langle x^*,x\rangle - H(x^*)\right) \\ &= -1 + \sup\left\{\langle x^*,x\rangle : x^* \in E^*, \ \|x^*\| \le M_q^{-1}\right\} \\ &= -1 + \|x\|M_q^{-1}. \end{align*} As $\Lambda \le H$ immediately implies $\Lambda^* \ge H^*$, we conclude that \begin{align*} \left(\inf_{\|x\| \ge r}\Lambda^*(x)\right)^{-1} &\le \left(-1 + rM_q^{-1}\right)^{-1} = \frac{M_q}{r-M_q}, \ \text{ for } r > M_q. \end{align*} Returning to \eqref{pf:deviationbound1} and recalling that $q/p=q-1$, we may write \begin{align*} {\mathbb P}\left(\left\|\frac{1}{n}\sum_{i=1}^nX_i\right\| \ge r\right) \le \left(\frac{M_q}{r-M_q}\right)^qn^{1-q} + o(n^{1-q}), \ \text{ for each } r > M_q. \end{align*} \subsection{Stochastic optimization with heavy tails} \label{se:optimization} This section applies Theorem \ref{th:lpsanov} to obtain rates of convergence of Monte-Carlo estimates for stochastic optimization problems in which the underlying random parameter has heavy tails. These results parallel and complement those of Kaniovski, King, and Wets \cite{kaniovski-king-wets}, who obtained exponential bounds assuming the existence of certain exponential moments. Let ${\mathcal X}$ and $E$ be Polish spaces. Consider a continuous function $h : {\mathcal X} \times E \rightarrow {\mathbb R}$ bounded from below, and define $V : {\mathcal P}(E) \rightarrow {\mathbb R}$ by \[ V(\nu) = \inf_{x \in {\mathcal X}}\int_Eh(x,w)\nu(dw). \] Fix $\mu \in {\mathcal P}(E)$ as a reference measure. The goal is to solve the optimization problem $V(\mu)$ numerically. The most common and natural approach is to sample from $\mu$ and replace $\mu$ with the empirical measure $L_n$. The two obvious questions are then: \begin{enumerate}[(A)] \item Does $V(L_n)$ converge to $V(\mu)$? \item Do the minimizers of $V(L_n)$ converge to those of $V(\mu)$ in some sense? 
\end{enumerate} The answers to these questions are known to be affirmative in very general settings, using a form of set-convergence for question (B); see \cite{dupacova1988asymptotic,kall1987approximations,king1991epi}. Given this, we then hope to quantify the rate of convergence for both of these questions. This is done in the language of large deviations in a paper of Kaniovski et al. \cite{kaniovski-king-wets}, under a strong assumption derived from Cram\'er's condition. In this section we complement their results by showing that under weaker integrability assumptions we can still obtain polynomial asymptotic rates of convergence. We make the following standing assumptions: \begin{assumption} The function $h$ is jointly continuous, and its sub-level sets are compact. We are given $q \in (1,\infty)$ and $\mu \in {\mathcal P}(E)$ such that, if \[ \psi(w) := \left(\sup_{x \in {\mathcal X}}h(x,w)\right)^+, \] then $\int_E\psi^q\,d\mu < \infty$. Moreover, ${\mathcal X}$ is compact. \end{assumption} The joint continuity and compactness assumptions could likely be weakened, but focusing on the more novel integrability issues will ease the exposition. Throughout this section, define $\alpha$ as in \eqref{def:lpentropy}. A simple lemma will be used in both of the following theorems: \begin{lemma} \label{le:alphastrictlypositive} Suppose $A \subset {\mathcal P}_\psi(E)$ is closed (in the $\psi$-weak topology), and suppose $\mu \notin A$. Then $\inf_{\nu \in A}\alpha(\nu) > 0$. \end{lemma} \begin{proof} If $\inf_{\nu \in A}\alpha(\nu) =0$, we may find $\nu_n \in A$ such that $\alpha(\nu_n) \rightarrow 0$. By Proposition \ref{pr:shortfallcramercondition}, the assumption $\int\psi^q\,d\mu < \infty$ implies that the sub-level sets of $\alpha$ are $\psi$-weakly compact, and the sequence $(\nu_n)$ admits a $\psi$-weak limit point $\nu^*$, which must of course belong to the $\psi$-weakly closed set $A$. Lower semicontinuity of $\alpha$ implies $\alpha(\nu^*) = 0$. 
This implies $\nu^*=\mu$, as $t \mapsto t^p$ is strictly convex, and this contradicts the assumption that $\mu \notin A$. \end{proof} \begin{theorem} \label{th:valueconvergence} For $\epsilon > 0$, \[ \limsup_{n\rightarrow\infty}n^{q-1}\mu^n(|V(L_n)-V(\mu)| \ge \epsilon) < \infty. \] \end{theorem} \begin{proof} Let $A = \{\nu \in {\mathcal P}_\psi(E) : |V(\nu)-V(\mu)| \ge \epsilon\}$. The map \[ {\mathcal X} \times {\mathcal P}_\psi(E) \ni (x,\nu) \mapsto \int_Eh(x,w)\nu(dw) \] is jointly continuous. By Berge's theorem \cite[Theorem 17.31]{aliprantisborder}, $V$ is continuous on ${\mathcal P}_\psi(E)$, and so $A$ is closed. Theorem \ref{th:lpsanov} implies \begin{align*} \limsup_{n\rightarrow\infty}n^{q/p}\mu^n(|V(L_n)-V(\mu)| \ge \epsilon) &= \limsup_{n\rightarrow\infty}n^{q/p}\mu^n(L_n \in A) \le \left(\inf_{\nu \in A}\alpha(\nu)\right)^{-q}. \end{align*} Note that $q/p=q-1$, and finally use Lemma \ref{le:alphastrictlypositive} to conclude $\inf_{\nu \in A}\alpha(\nu) > 0$. \end{proof} \begin{theorem} \label{th:optimizerconvergence} Let $\hat{x} : {\mathcal P}_\psi(E) \rightarrow {\mathcal X}$ be any measurable function satisfying\footnote{Such a function $\hat{x}$ exists because $(x,\nu) \mapsto \int_Eh(x,w)\nu(dw)$ is measurable in $\nu$ and continuous in $x$; see, e.g., \cite[Theorem 18.19]{aliprantisborder}.} \[ \hat{x}(\nu) \in \arg\min_{x \in {\mathcal X}}\int_Eh(x,w)\nu(dw), \text{ for each } \nu. \] Suppose there exist a measurable function $\varphi : {\mathbb R} \rightarrow {\mathbb R}$ and a compatible metric $d$ on ${\mathcal X}$ such that, for all $x \in {\mathcal X}$, \[ \varphi(d(\hat{x}(\mu),x)) \le \int_Eh(x,w)\mu(dw) - \int_Eh(\hat{x}(\mu),w)\mu(dw). \] Then, for any $\epsilon > 0$, \[ \limsup_{n\rightarrow\infty}n^{q-1}\mu^n(\varphi(d(\hat{x}(\mu),\hat{x}(L_n))) \ge \epsilon) < \infty. 
\] In particular, if $\varphi$ is strictly increasing with $\varphi(0)=0$, then for any $\epsilon > 0$, \[ \limsup_{n\rightarrow\infty}n^{q-1}\mu^n(d(\hat{x}(\mu),\hat{x}(L_n)) \ge \epsilon) < \infty. \] \end{theorem} \begin{proof} Note that for $\epsilon > 0$, on the event $\{\varphi(d(\hat{x}(\mu),\hat{x}(L_n))) \ge \epsilon\}$ we have \begin{align*} \epsilon &\le \varphi(d(\hat{x}(\mu),\hat{x}(L_n))) \le \int_Eh(\hat{x}(L_n),w)\mu(dw) - \int_Eh(\hat{x}(\mu),w)\mu(dw) \\ &\le |V(L_n)-V(\mu)| + \sup_{x \in {\mathcal X}}\int_Eh(x,w)[\mu-L_n](dw). \end{align*} The first term converges at the right rate thanks to Theorem \ref{th:valueconvergence}, so it remains to check that \[ \limsup_{n\rightarrow\infty}n^{q-1}\mu^n\left(\sup_{x \in {\mathcal X}}\int_Eh(x,w)[\mu-L_n](dw) \ge \epsilon\right) < \infty. \] The map $(x,\nu) \mapsto \int_Eh(x,w)\nu(dw)$ is continuous on ${\mathcal X} \times {\mathcal P}_\psi(E)$, and so the map \[ {\mathcal P}_\psi(E) \ni \nu \mapsto \sup_{x \in {\mathcal X}}\int_Eh(x,w)[\mu-\nu](dw) \] is continuous by Berge's theorem \cite[Theorem 17.31]{aliprantisborder}. Hence, the set \[ B := \left\{\nu \in {\mathcal P}_\psi(E) : \sup_{x \in {\mathcal X}}\int_Eh(x,w)[\mu-\nu](dw) \ge \epsilon\right\} \] is closed in ${\mathcal P}_\psi(E)$. Theorem \ref{th:lpsanov} then implies \begin{align*} \limsup_{n\rightarrow\infty}n^{q-1}\mu^n\left(\sup_{x \in {\mathcal X}}\int_Eh(x,w)[\mu-L_n](dw) \ge \epsilon\right) &\le \left(\inf_{\nu \in B}\alpha(\nu)\right)^{-q}. \end{align*} Finally, Lemma \ref{le:alphastrictlypositive} implies that $\inf_{\nu \in B}\alpha(\nu) > 0$. \end{proof} Under the assumption $\int_E\psi^q\,d\mu < \infty$, we see that the value $V(L_n)$ always converges to $V(\mu)$ with the polynomial rate $n^{1-q}$. 
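The polynomial rate above is easy to observe empirically. The following minimal Python sketch is purely illustrative and not from the paper: the cost $h(x,w)=(x-w)^2$ on ${\mathcal X}=[-1,1]$ and the Pareto-type sampler are our own choices. The minimizer of $x \mapsto \int h(x,w)\,\nu(dw)$ is then the mean of $\nu$ clipped to $[-1,1]$, so the sample-average value $V(L_n)$ is computable in closed form.

```python
import random
import statistics

# Minimal illustrative sketch (our own choices, not from the text):
# h(x, w) = (x - w)^2 with X = [-1, 1], so the minimizer of
# nu -> integral of h(x, .) d nu  is the mean of nu clipped to [-1, 1].

def V(samples):
    """V(L_n) for the empirical measure of `samples`."""
    x_star = max(-1.0, min(1.0, statistics.fmean(samples)))
    return statistics.fmean((x_star - w) ** 2 for w in samples)

def heavy_tailed(rng):
    """Symmetric Pareto-type sample with tail index 3 (finite variance,
    but no exponential moments, as in the heavy-tailed setting here)."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * (rng.random() ** (-1.0 / 3.0) - 1.0)

rng = random.Random(0)
# Stand-in for the exact value V(mu), estimated from one large sample.
V_mu = V([heavy_tailed(rng) for _ in range(100_000)])

for n in (100, 1_000, 10_000):
    v_n = V([heavy_tailed(rng) for _ in range(n)])
    print(n, abs(v_n - V_mu))
```

The printed errors shrink as $n$ grows, consistent with (though of course not proving) the polynomial convergence of $V(L_n)$ to $V(\mu)$.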
To see when Theorem \ref{th:optimizerconvergence} applies, notice that in many situations, ${\mathcal X}$ is a convex subset of a normed vector space, and we have uniform convexity in the following form: There exists a strictly increasing function $\varphi$ such that $\varphi(0)=0$ and, for all $t \in (0,1)$ and $x,y \in {\mathcal X}$, \begin{align*} \int_E&h(tx + (1-t)y,w)\mu(dw) \\ &\le t\int_Eh(x,w)\mu(dw) + (1-t)\int_Eh(y,w)\mu(dw) - t(1-t)\varphi(\|x-y\|). \end{align*} See \cite[pp. 202-203]{kaniovski-king-wets} for more on this. \section{Uniform large deviations and martingales} \label{se:uniform} This section returns to the example of Section \ref{se:intro:uniform}. Fix a convex weakly compact family of probability measures $M \subset {\mathcal P}(E)$. Define \begin{align} \alpha(\nu) = \inf_{\mu \in M}H(\nu | \mu), \label{def:uniformrelativeentropy} \end{align} where the relative entropy was defined in \eqref{def:relativeentropy}. The corresponding $\rho$ is then \begin{align*} \rho(f) &= \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right) = \sup_{\nu \in {\mathcal P}(E)}\sup_{\mu \in M}\left(\int_Ef\,d\nu - H(\nu | \mu)\right) \\ &= \sup_{\mu \in M}\log\int_Ee^f\,d\mu. \end{align*} \begin{lemma} \label{le:uniform-standingassumptions} The functional $\alpha$ defined in \eqref{def:uniformrelativeentropy} satisfies the standing assumptions. That is, it is convex and has weakly compact sub-level sets. \end{lemma} \begin{proof} Because $\mu \mapsto -\log\int_Ee^f\,d\mu$ is convex, Sion's minimax theorem \cite{sion1958general} yields \begin{align*} \alpha(\nu) &= \inf_{\mu \in M}\sup_{f \in C_b(E)}\left(\int_Ef\,d\nu - \log\int_Ee^f\,d\mu\right) \\ &= \sup_{f \in C_b(E)}\inf_{\mu \in M}\left(\int_Ef\,d\nu - \log\int_Ee^f\,d\mu\right) \\ &= \sup_{f \in C_b(E)}\left(\int_Ef\,d\nu - \rho(f)\right). \end{align*} This shows that $\alpha$ is convex and lower semicontinuous. 
It remains to prove that $\alpha$ has tight sub-level sets, which will follow from Theorem \ref{th:tight} once we check the second assumption therein. By Prohorov's theorem, there exist compact sets $K_1 \subset K_2 \subset \cdots$ such that $\sup_{\mu \in M}\mu(K_n^c) \le 1/n$. Then, for $\lambda \ge 0$, \begin{align*} \lambda &\ge \rho(\lambda 1_{K_n}) = \sup_{ \mu \in M}\log\int_E\exp(\lambda 1_{K_n})\,d\mu \\ &= \sup_{ \mu \in M}\log\left[(e^\lambda - 1)\mu(K_n) + 1\right] \\ &\ge \log\left[(e^\lambda - 1)(1-1/n) + 1\right]. \end{align*} As $n\rightarrow\infty$, the right-hand side converges to $\lambda$, which shows $\rho(\lambda 1_{K_n})\rightarrow \lambda = \rho(\lambda)$. \end{proof} To compute $\rho_n$, recall that for $M \subset {\mathcal P}(E)$ we define $M_n$ as the set of $\mu \in {\mathcal P}(E^n)$ satisfying $\mu_{0,1} \in M$ and $\mu_{k-1,k}(x_1,\ldots,x_{k-1}) \in M$ for all $k=2,\ldots,n$ and $x_1,\ldots,x_{k-1} \in E$. (Recall that the conditional measures $\mu_{k-1,k}$ were defined in the introduction.) Notice that $M_1=M$. \begin{proposition} \label{pr:robustentropic-rhon} For each $n \ge 1$, $\alpha_n(\nu) = \inf_{\mu \in M_n}H(\nu | \mu)$. Moreover, for each measurable $f :E^n \rightarrow {\mathbb R} \cup \{-\infty\}$ satisfying $\int_{E^n}e^f\,d\mu < \infty$ for every $\mu \in M_n$, \begin{align} \rho_n(f) = \sup_{\mu \in M_n}\log\int_{E^n}e^f\,d\mu. \label{def:uniform-rhon} \end{align} \end{proposition} \begin{proof} Given the first claim, the second follows from the well-known duality \[ \sup_{\nu \in {\mathcal P}(E^n)}\left(\int_{E^n}f\,d\nu - H(\nu | \mu)\right) = \log\int_{E^n}e^f\,d\mu, \] which holds for $\mu \in {\mathcal P}(E^n)$ as long as $e^f$ is $\mu$-integrable (see, e.g., the proof of \cite[1.4.2]{dupuis-ellis}). 
Indeed, this implies \begin{align*} \rho_n(f) &= \sup_{\nu \in {\mathcal P}(E^n)}\left(\int_{E^n}f\,d\nu - \alpha_n(\nu)\right) = \sup_{\mu \in M_n}\sup_{\nu \in {\mathcal P}(E^n)}\left(\int_{E^n}f\,d\nu - H(\nu | \mu)\right) \\ &= \sup_{\mu \in M_n}\log\int_{E^n}e^f\,d\mu. \end{align*} To prove the first claim, note that by definition \begin{align*} \alpha_n(\nu) &= \sum_{k=1}^n \int_{E^n}\inf_{\mu \in M}H(\nu_{k-1,k}(x_1,\ldots,x_{k-1}) \, | \, \mu) \, \nu(dx_1,\ldots,dx_n). \end{align*} For $k=2,\ldots,n$ let ${\mathcal Y}_k$ denote the set of measurable maps from $E^{k-1}$ to $M$, and let ${\mathcal Y}_1 = M$. Then the usual measurable selection argument \cite[Proposition 7.50]{bertsekasshreve} yields \begin{align*} \alpha_n(\nu) &= \sum_{k=1}^n \inf_{\eta_k \in {\mathcal Y}_k}\int_{E^n}H(\nu_{k-1,k}(x_1,\ldots,x_{k-1}) \, | \, \eta_k(x_1,\ldots,x_{k-1})) \, \nu(dx_1,\ldots,dx_n). \end{align*} Now, if $(\eta_1,\ldots,\eta_n) \in \prod_{k=1}^n{\mathcal Y}_k$, then the measure \[ \mu(dx_1,\ldots,dx_n) = \eta_1(dx_1)\prod_{k=2}^n\eta_k(x_1,\ldots,x_{k-1})(dx_k) \] is in $M_n$, and $\mu_{k-1,k} = \eta_k$ is a version of the conditional law. Thus \begin{align*} \alpha_n(\nu) &\ge \inf_{\mu \in M_n}\sum_{k=1}^n\int_{E^n}H(\nu_{k-1,k}(x_1,\ldots,x_{k-1}) \, | \, \mu_{k-1,k}(x_1,\ldots,x_{k-1})) \, \nu(dx_1,\ldots,dx_n). \end{align*} On the other hand, for every $\mu \in M_n$, the vector $(\mu_{0,1},\mu_{1,2},\ldots,\mu_{n-1,n})$ belongs to $\prod_{k=1}^n{\mathcal Y}_k$, and we deduce the opposite inequality. Hence \begin{align*} \alpha_n(\nu) &= \inf_{\mu \in M_n}\sum_{k=1}^n\int_{E^n}H(\nu_{k-1,k}(x_1,\ldots,x_{k-1}) \, | \, \mu_{k-1,k}(x_1,\ldots,x_{k-1})) \, \nu(dx_1,\ldots,dx_n) \\ &= \inf_{\mu \in M_n}H(\nu|\mu), \end{align*} where the last equality follows from the chain rule for relative entropy \cite[Theorem B.2.1]{dupuis-ellis}. 
\end{proof} Theorem \ref{th:main-sanov-extended} now leads to the following uniform large deviation bound: \begin{corollary} \label{co:uniform-sanov} For $F \in C_b({\mathcal P}(E))$, we have \begin{align*} \lim_{n\rightarrow\infty}\sup_{\mu \in M_n}\frac{1}{n}\log\int_{E^n}e^{nF \circ L_n}\,d\mu &= \sup_{\nu \in {\mathcal P}(E), \ \mu \in M}\left(F(\nu) - H(\nu | \mu)\right). \end{align*} For closed sets $A \subset {\mathcal P}(E)$, we have \begin{align*} \limsup_{n\rightarrow\infty}\sup_{\mu \in M_n}\frac{1}{n}\log\mu(L_n \in A) &\le -\inf\left\{H(\nu | \mu) : \nu \in A, \ \mu \in M\right\}. \end{align*} \end{corollary} \begin{proof} The first claim is an immediate consequence of Theorem \ref{th:main-sanov-extended} and the calculation of $\rho_n$ in Proposition \ref{pr:robustentropic-rhon}. To prove the second claim, define $F$ on ${\mathcal P}(E)$ by \begin{align*} F(\nu) = \begin{cases} 0 &\text{if } \nu \in A, \\ -\infty &\text{otherwise.} \end{cases} \end{align*} Then $F$ is upper semicontinuous and bounded from above. Use Proposition \ref{pr:robustentropic-rhon} to compute \begin{align*} \rho_n(nF) &= \sup_{\mu \in M_n}\log\int_{E^n}\exp(nF)\,d\mu = \sup_{\mu \in M_n}\log\mu(L_n \in A). \end{align*} The proof is completed by applying Theorem \ref{th:main-sanov-extended} with this function $F$. \end{proof} The following proposition simplifies the strong Cram\'er condition \eqref{def:strongcramer-condition} in the present context. \begin{proposition} \label{pr:robustcramercondition} Let $\psi : E \rightarrow {\mathbb R}_+$ be measurable. Suppose that for every $\lambda > 0$ we have \[ \sup_{\mu \in M}\int_E e^{\lambda\psi}\,d\mu < \infty. \] Then the strong Cram\'er condition holds, i.e., $\lim_{m\rightarrow\infty}\rho(\lambda\psi 1_{\{\psi \ge m\}}) = 0$ for all $\lambda > 0$. In particular, the sub-level sets of $\alpha$ are pre-compact subsets of ${\mathcal P}_\psi(E)$. 
\end{proposition} \begin{proof} Because $e^{\lambda\psi}$ is $\mu$-integrable for each $\mu\in M$ and $\lambda > 0$, Proposition \ref{pr:robustentropic-rhon} implies \begin{align*} \rho(\lambda\psi 1_{\{\psi \ge m\}}) &= \sup_{\mu \in M}\log\int_E\exp\left(\lambda\psi 1_{\{\psi \ge m\}}\right)\,d\mu \\ &\le \sup_{\mu \in M}\log\left( 1 + \int_{\{\psi \ge m\}}\exp\left(\lambda\psi\right)\,d\mu\right). \end{align*} It suffices now to show that $e^{\lambda\psi}$ is uniformly integrable with respect to $M$ for every $\lambda > 0$, meaning \[ \lim_{m\rightarrow\infty}\sup_{\mu \in M}\int_{\{\psi \ge m\}} e^{\lambda\psi}\,d\mu = 0. \] But this follows from the assumption, because if $\lambda \ge 0$ and $p > 1$ then \[ \sup_{\mu \in M}\int_E \left(e^{\lambda\psi}\right)^p\,d\mu < \infty. \] \end{proof} We are finally ready to specialize Corollary \ref{co:uniform-sanov} to prove Theorem \ref{th:azuma}, similarly to how we specialized Theorem \ref{th:lpsanov} to prove Corollary \ref{co:lpcramer} in Section \ref{se:nonexpLDP}. \begin{proof}[Proof of Theorem \ref{th:azuma}] Define \[ M = \left\{ \mu \in {\mathcal P}({\mathbb R}^d) : \log\int_{{\mathbb R}^d} e^{\langle y,x\rangle}\mu(dx) \le \varphi(y), \ \forall y \in {\mathbb R}^d\right\}. \] The assumption that $\varphi$ is finite everywhere ensures that $M$ is weakly compact: Indeed, if $e_1,\ldots,e_d$ denote the standard basis vectors in ${\mathbb R}^d$, then for each $\mu \in M$ \begin{align*} \mu\left(\max_{i=1,\ldots,d}X_i > t \right) &\le \sum_{i=1}^d\mu(X_i > t) \le \sum_{i=1}^de^{-t}\int e^{X_i}\,d\mu \le e^{-t}\sum_{i=1}^de^{\varphi(e_i)}. \end{align*} Together with the analogous bound for $\mu\left(\min_{i=1,\ldots,d}X_i < -t\right)$, obtained by replacing $e_i$ with $-e_i$, this shows that $M$ is tight, and it is easy to check that $M$ is closed and convex. Now define $\psi(x) = \sum_{i=1}^d|x_i|$ and notice that \begin{align*} \sup_{\mu \in M}\int_{{\mathbb R}^d}\exp(\lambda\psi)\,d\mu < \infty, \text{ for all } \lambda \ge 0. \end{align*} Proposition \ref{pr:robustcramercondition} then shows that the strong Cram\'er condition holds. 
Define a closed set $B \subset {\mathcal P}_\psi(E)$ by $B = \{\nu \in {\mathcal P}_\psi(E) : \int z\nu(dz) \in A\}$, where $A$ was the given closed subset of $E={\mathbb R}^d$. Corollary \ref{co:uniform-sanov} yields \begin{align*} \limsup_{n\rightarrow\infty}\sup_{\mu \in M_n}\frac{1}{n}\log\mu(L_n \in B) &\le -\inf\left\{\alpha(\nu) : \nu \in {\mathcal P}_\psi({\mathbb R}^d), \ \int x\,\nu(dx) \in A\right\}. \end{align*} Now let $(S_0,\ldots,S_n) \in \mathcal{S}_{d,\varphi}$. The law of $S_1$ belongs to $M$, and the conditional law of $S_k-S_{k-1}$ given $S_1,\ldots,S_{k-1}$ belongs almost surely to $M$, for each $k$, and so the law of $(S_1,S_2-S_1,\ldots,S_n-S_{n-1})$ belongs to $M_n$. Thus \[ {\mathbb P}\left(S_n/n \in A\right) \le \sup_{\mu \in M_n}\mu(L_n \in B), \] and all that remains is to prove that \begin{align*} \inf\left\{\alpha(\nu) : \nu \in {\mathcal P}_\psi({\mathbb R}^d), \ \int z\,\nu(dz) \in A\right\} \ge \inf_{x \in A}\varphi^*(x). \end{align*} To prove this, it suffices to show $\Psi(x) \ge \varphi^*(x)$ for every $x \in {\mathbb R}^d$, where \begin{align} \Psi(x) := \inf\left\{\alpha(\nu) : \nu \in {\mathcal P}_\psi({\mathbb R}^d), \ \int z\,\nu(dz) =x\right\}. \label{pf:azuma1} \end{align} To this end, note that for all $y \in {\mathbb R}^d$ \[ \rho(\langle \cdot,y\rangle) = \sup_{\mu \in M}\log\int_Ee^{\langle z,y\rangle}\mu(dz) \le \varphi(y), \] and then use the representation of Proposition \ref{pr:cramer-representation} to get \begin{align*} \Psi(x) &= \sup_{y \in {\mathbb R}^d}\left(\langle x,y\rangle - \rho(\langle \cdot,y\rangle)\right) \ge \sup_{y \in {\mathbb R}^d}\left(\langle x,y\rangle - \varphi(y)\right) = \varphi^*(x). \end{align*} \end{proof} \section{Optimal transport and control} \label{se:optimaltransport} This section discusses the example of Section \ref{se:intro:optimaltransport} in more detail. 
Again let $E$ be a Polish space, and fix a lower semicontinuous function $c : E^2 \rightarrow [0,\infty]$ which is not identically equal to $\infty$. Fix $\mu \in {\mathcal P}(E)$, and define \[ \alpha(\nu) = \inf_{\pi \in \Pi(\mu,\nu)}\int c\,d\pi, \] where $\Pi(\mu,\nu)$ is the set of probability measures on $E \times E$ with first marginal $\mu$ and second marginal $\nu$. Assume that $\int_Ec(x,x)\mu(dx) < \infty$; in many practical cases, $c(x,x)=0$ for all $x$, so this is not a restrictive assumption and merely ensures that $\alpha(\mu) < \infty$. Kantorovich duality \cite[Theorem 1.3]{villani-book} shows that \begin{align*} \alpha(\nu) &= \sup\left\{\int_Ef\,d\nu - \int_Eg\,d\mu : f,g \in B(E), \ f(y) - g(x) \le c(x,y) \ \forall x,y\right\} \\ &= \sup_{f \in B(E)}\left(\int_Ef\,d\nu - \rho(f)\right), \end{align*} and also that the supremum can be taken merely over $C_b(E)$ rather than $B(E)$ without changing the value. This immediately shows that $\alpha$ is convex and weakly lower semicontinuous. The next two lemmas identify, respectively, the dual $\rho$ and the modest conditions that ensure that $\alpha$ has compact sub-level sets. \begin{lemma} Given $\alpha$ as above, and defining $\rho$ as usual by \eqref{intro:duality}, we have \begin{align} \rho(f) = \int_ER_cf\,d\mu, \text{ for all } f \in B(E), \label{def:rho-optimaltransport} \end{align} where $R_cf : E \rightarrow {\mathbb R}$ is defined by \[ R_cf(x) = \sup_{y \in E}\left(f(y) - c(x,y)\right). \] \end{lemma} \begin{proof} Note that $R_cf$ is universally measurable (e.g., by \cite[Proposition 7.50]{bertsekasshreve}), so the integral in \eqref{def:rho-optimaltransport} makes sense. 
Now compute \begin{align*} \rho(f) &= \sup_{\nu \in {\mathcal P}(E)}\left(\int_Ef\,d\nu - \alpha(\nu)\right) \\ &= \sup_{\nu \in {\mathcal P}(E)}\sup_{\pi \in \Pi(\mu,\nu)}\left(\int_Ef\,d\nu - \int_{E^2}c\,d\pi\right) \\ &= \sup_{\pi \in \Pi(\mu)}\int_{E^2}\left(f(y) - c(x,y)\right)\pi(dx,dy), \end{align*} where $\Pi(\mu)$ is the set of $\pi \in {\mathcal P}(E \times E)$ with first marginal $\mu$. Use the standard measurable selection theorem \cite[Proposition 7.50]{bertsekasshreve} to find a measurable $Y : E \rightarrow E$ such that $R_cf(x) = f(Y(x)) - c(x,Y(x))$ for $\mu$-a.e. $x$. Then, choosing $\pi(dx,dy) = \mu(dx)\delta_{Y(x)}(dy)$ shows \[ \rho(f) \ge \int_E\left(f(Y(x))-c(x,Y(x))\right)\mu(dx) = \int_ER_cf\,d\mu. \] On the other hand, it is clear that for every $\pi \in \Pi(\mu)$ we have \[ \int_{E^2}\left(f(y) - c(x,y)\right)\pi(dx,dy) \le \int_{E}\sup_{y \in E}\left(f(y) - c(x,y)\right)\mu(dx) = \int_ER_cf\,d\mu. \] \end{proof} \begin{lemma} \label{le:tightnessfunction} Suppose that for each compact set $K \subset E$, the function $h_K(y) := \inf_{x \in K}c(x,y)$ has pre-compact sub-level sets.\footnote{In fact, since $c$ is lower semicontinuous, so is $h_K$ (see \cite[Lemma 17.30]{aliprantisborder}). Thus, our assumption is equivalent to requiring $\{y \in E : h_K(y) \le m\}$ to be compact for each $m \ge 0$.} Then $\alpha$ has compact sub-level sets. \end{lemma} \begin{proof} We already know that $\alpha$ has closed sub-level sets, so we must show only that they are tight. Fix $\nu \in {\mathcal P}(E)$ such that $\alpha(\nu) < \infty$ (noting that such $\nu$ certainly exist, as $\mu$ is one example). Fix $\epsilon > 0$, and find $\pi \in \Pi(\mu,\nu)$ such that \begin{align} \int c\,d\pi \le \alpha(\nu) + \epsilon < \infty. \label{pf:tightnessfunction1} \end{align} As finite measures on Polish spaces are tight, we may find a compact set $K \subset E$ such that $\mu(K^c) \le \epsilon$. 
Set $K_n := \{y \in E : h_K(y) < n\}$ for each $n$, and note that this set is pre-compact by assumption. Disintegrate $\pi$ by finding a measurable map $E \ni x \mapsto \pi_x \in {\mathcal P}(E)$ such that $\pi(dx,dy) = \mu(dx)\pi_x(dy)$. By Markov's inequality, for each $n > 0$ and each $x \in K$ we have \begin{align*} \pi_x(K_n^c) &\le \pi_x\{y \in E : c(x,y) \ge n\} \le \frac{1}{n}\int_E c(x,y)\pi_x(dy). \end{align*} Using this and the inequality \eqref{pf:tightnessfunction1} along with the assumption that $c$ is nonnegative, \begin{align*} \nu(K_n^c) &= \int_E\mu(dx)\pi_x(K_n^c) \\ &\le \mu(K^c) + \int_K\mu(dx)\pi_x(K_n^c) \\ &\le \epsilon + \frac{1}{n}\int_K\mu(dx)\int_E\pi_x(dy) c(x,y) \\ &\le \epsilon + \frac{1}{n}\int_{E \times E}c\,d\pi \\ &\le \left(1 + \frac{1}{n}\right)\epsilon + \frac{1}{n}\alpha(\nu). \end{align*} As $\epsilon$ was arbitrary, we have $\nu(K_n^c) \le \alpha(\nu)/n$. Thus, for each $m > 0$, the sub-level set $\{\nu \in {\mathcal P}(E) : \alpha(\nu) \le m\}$ is contained in the tight set \[ \bigcap_{n=1}^\infty\left\{\nu \in {\mathcal P}(E) : \nu(K_n^c) \le m/n\right\}. \] \end{proof} Let us now compute $\rho_n$. It is convenient to work with more probabilistic notation, so let us suppose $(X_i)_{i=1}^\infty$ is a sequence of i.i.d. $E$-valued random variables with common law $\mu$, defined on some fixed probability space. For each $n$, let ${\mathcal Y}_n$ denote the set of $E^n$-valued random variables $(Y_1,\ldots,Y_n)$, identified up to a.s. equality, such that $Y_k$ is $(X_1,\ldots,X_k)$-measurable for each $k=1,\ldots,n$. \begin{proposition} \label{pr:rhon-optimaltransport} For each $n \ge 1$ and each $f \in B(E^n)$, \[ \rho_n(f) = \sup_{(Y_1,\ldots,Y_n) \in {\mathcal Y}_n}{\mathbb E}\left[f(Y_1,\ldots,Y_n) - \sum_{i=1}^nc(X_i,Y_i)\right]. \] \end{proposition} \begin{proof} The proof is by induction. Let us first rewrite $\rho$ in our probabilistic notation: \[ \rho(f) = {\mathbb E}\left[\sup_{y \in E}[f(y)-c(X_1,y)]\right]. 
\] Using a standard measurable selection argument \cite[Proposition 7.50]{bertsekasshreve}, we deduce \[ \rho(f) = \sup_{Y_1 \in {\mathcal Y}_1}{\mathbb E}\left[f(Y_1)-c(X_1,Y_1)\right]. \] The inductive step proceeds as follows. Suppose we have proven the claim for a given $n$. Fix $f \in B(E^{n+1})$ and define $g \in B(E^n)$ by \[ g(x_1,\ldots,x_n) := \rho(f(x_1,\ldots,x_n,\cdot)). \] Since $X_1$ and $X_{n+1}$ have the same distribution, we may relabel to find \begin{align*} g(x_1,\ldots,x_n) &= \sup_{Y_1 \in {\mathcal Y}_1}{\mathbb E}\left[f(x_1,\ldots,x_n,Y_1)-c(X_1,Y_1)\right] \\ &= \sup_{Y_{n+1} \in {\mathcal Y}_{n+1}^1}{\mathbb E}\left[f(x_1,\ldots,x_n,Y_{n+1})-c(X_{n+1},Y_{n+1})\right], \end{align*} where we define ${\mathcal Y}^1_{n+1}$ to be the set of $X_{n+1}$-measurable $E$-valued random variables. Now note that any $(Y_1,\ldots,Y_n)$ in ${\mathcal Y}_n$ is $(X_1,\ldots,X_n)$-measurable, and independence of $(X_i)_{i=1}^\infty$ implies \[ g(Y_1,\ldots,Y_n) = \sup_{Y_{n+1} \in {\mathcal Y}_{n+1}^1}{\mathbb E}\left[\left.f(Y_1,\ldots,Y_n,Y_{n+1})-c(X_{n+1},Y_{n+1})\right| Y_1,\ldots,Y_n\right]. \] We claim that \begin{align} {\mathbb E}\left[g(Y_1,\ldots,Y_n)\right] = \sup_{Y_{n+1}}{\mathbb E}\left[f(Y_1,\ldots,Y_n,Y_{n+1}) - c(X_{n+1},Y_{n+1})\right], \label{pf:jointly-measurable0} \end{align} where the supremum is over $(X_1,\ldots,X_{n+1})$-measurable $E$-valued random variables $Y_{n+1}$. Indeed, once this is established, we conclude as desired that \begin{align*} \rho_{n+1}(f) &= \rho_n(g) = \sup_{(Y_1,\ldots,Y_n) \in {\mathcal Y}_n}{\mathbb E}\left[g(Y_1,\ldots,Y_n) - \sum_{i=1}^nc(X_i,Y_i)\right] \\ &= \sup_{(Y_1,\ldots,Y_n) \in {\mathcal Y}_n}\sup_{Y_{n+1}}{\mathbb E}\left[f(Y_1,\ldots,Y_n,Y_{n+1}) - \sum_{i=1}^{n+1}c(X_i,Y_i)\right]. \end{align*} Hence, the rest of the proof is devoted to justifying \eqref{pf:jointly-measurable0}, which is really an interchange of supremum and expectation. 
Note that ${\mathcal Y}_{n+1}^1$ is a Polish space when topologized by convergence in measure. The function $h : E^n \times {\mathcal Y}^1_{n+1} \rightarrow {\mathbb R}$ given by \[ h(x_1,\ldots,x_n;Y_{n+1}) := {\mathbb E}\left[f(x_1,\ldots,x_n,Y_{n+1})-c(X_{n+1},Y_{n+1})\right] \] is jointly measurable. Note as before that independence implies that for every $(Y_1,\ldots,Y_n) \in {\mathcal Y}_n$ and $Y_{n+1} \in {\mathcal Y}^1_{n+1}$ we have, for a.e. $\omega$, \begin{align} h(Y_1(\omega),&\ldots,Y_n(\omega);Y_{n+1}) = {\mathbb E}\left[\left. f(Y_1,\ldots,Y_n,Y_{n+1})-c(X_{n+1},Y_{n+1})\right| Y_1,\ldots,Y_n\right](\omega). \label{pf:jointly-measurable1} \end{align} Using the usual measurable selection theorem \cite[Proposition 7.50]{bertsekasshreve} we get \begin{align*} {\mathbb E}\left[g(Y_1,\ldots,Y_n)\right] &= {\mathbb E}\left[\sup_{Y_{n+1} \in {\mathcal Y}_{n+1}^1}h(Y_1(\cdot),\ldots,Y_n(\cdot);Y_{n+1})\right] \\ &= \sup_{H \in \widetilde{{\mathcal Y}}_{n+1}^1}{\mathbb E}\left[h(Y_1(\cdot),\ldots,Y_n(\cdot);H(Y_1,\ldots,Y_n))\right], \end{align*} where $\widetilde{{\mathcal Y}}_{n+1}^1$ denotes the set of measurable maps $H : E^n \rightarrow {\mathcal Y}^1_{n+1}$. But a measurable map $H : E^n \rightarrow {\mathcal Y}^1_{n+1}$ can be identified almost everywhere with an $(X_1,\ldots,X_{n+1})$-measurable random variable $Y_{n+1}$. Precisely, by Lemma \ref{le:jointmeasurability} (in the appendix) there exists a jointly measurable map $\varphi : E^{n+1} \rightarrow E$ such that, for $\mu^n$-a.e. $(x_1,\ldots,x_n) \in E^n$, we have \[ \varphi(x_1,\ldots,x_{n+1}) = H(x_1,\ldots,x_n)(x_{n+1}), \text{ for } \mu\text{-a.e. } x_{n+1} \in E. \] Define $Y_{n+1} = \varphi(X_1,\ldots,X_{n+1})$, and note that \eqref{pf:jointly-measurable1} implies, for a.e. $\omega$, \begin{align*} h(Y_1(\omega),&\ldots,Y_n(\omega);H(Y_1,\ldots,Y_n)) = {\mathbb E}\left[\left.f(Y_1,\ldots,Y_n,Y_{n+1})-c(X_{n+1},Y_{n+1})\right| Y_1,\ldots,Y_n\right](\omega). 
\end{align*} This identification of $\widetilde{{\mathcal Y}}_{n+1}^1$ and the tower property of conditional expectations leads to \eqref{pf:jointly-measurable0}. \end{proof}
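For intuition about the transport functional $\alpha$ defined at the start of this section, note that when $\mu$ and $\nu$ are uniform measures on $n$ atoms each, the infimum over couplings is attained at a permutation coupling (by the Birkhoff--von Neumann theorem). The following brute-force Python sketch is purely illustrative and not from the paper; the function names and the quadratic cost are our own choices.

```python
from itertools import permutations

def transport_cost(mu_atoms, nu_atoms, c):
    """alpha(nu) = inf over couplings pi in Pi(mu, nu) of the integral
    of c, for uniform measures on equally many atoms; the optimum is
    attained at a permutation coupling (Birkhoff-von Neumann)."""
    n = len(mu_atoms)
    assert len(nu_atoms) == n
    return min(
        sum(c(mu_atoms[i], nu_atoms[p[i]]) for i in range(n)) / n
        for p in permutations(range(n))
    )

quadratic = lambda x, y: (x - y) ** 2     # cost c on E = R

# alpha(mu) = 0 since c(x, x) = 0, and shifting every atom by 0.5
# costs 0.25 per unit mass under the monotone (optimal) matching.
print(transport_cost([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], quadratic))
print(transport_cost([0.0, 1.0, 2.0], [0.5, 1.5, 2.5], quadratic))
```

The factorial-time enumeration is of course only viable for tiny examples; it is meant to make the definition of $\alpha$ concrete, not to be a practical solver.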
\section{Introduction}\label{sec:introduction} \IEEEPARstart{T}{here} are a number of new memory technologies that are impacting, or likely to impact, computing architectures in the near future. One example of such a technology is so-called high bandwidth memory, already featured today on Intel's latest many-core processor, the Xeon Phi Knights Landing \cite{Sodani:hotchips}, and NVIDIA's latest GPU, Volta \cite{nvidia:volta}. These contain MCDRAM \cite{Sodani:hotchips} and HBM2 \cite{jun:hbm} respectively, memory technologies built with traditional DRAM hardware but connected with a very wide memory bus (or series of buses) directly to the processor to provide very high memory bandwidth when compared to traditional main memory (DDR channels). This has been enabled, in part, by the hardware trend for incorporating memory controllers and memory controller hubs directly onto processors, enabling memory to be attached to the processor itself rather than through the motherboard and associated chipset. However, the underlying memory hardware is the same as, or at least very similar to, the traditional volatile DRAM memory that is still used as main memory for computer architectures, and that remains attached to the motherboard rather than the processor. Non-volatile memory, i.e. memory that retains data even after power is turned off, has been exploited by consumer electronics and computer systems for many years. The flash memory cards used in cameras and mobile phones are an example of such hardware, used for data storage. More recently, flash memory has been used for high performance I/O in the form of Solid State Disk (SSD) drives, providing higher bandwidth and lower latency than traditional Hard Disk Drives (HDD). Whilst flash memory can provide fast input/output (I/O) performance for computer systems, there are some drawbacks. 
It has limited endurance when compared to HDD technology, restricted by the number of modifications a memory cell can undertake and thus the effective lifetime of the flash storage \cite{ssd:wear}. It is often also more expensive than other storage technologies. However, SSD storage, and enterprise level SSD drives, are heavily used for I/O intensive functionality in large scale computer systems because of their random read and write performance capabilities.

Byte-addressable random access persistent memory (B-APM), also known as storage class memory (SCM), NVRAM or NVDIMMs, exploits a new generation of non-volatile memory hardware that is directly accessible via CPU load/store operations, has much higher durability than standard flash memory, and much higher read and write performance.

High-performance computing (HPC) and high-performance data analytics (HPDA) systems currently have different hardware and configuration requirements. HPC systems generally require very high floating-point performance, high memory bandwidth, and high-performance networks, with a high-performance filesystem for production I/O and possibly a larger filesystem for long term data storage. However, HPC applications do not generally have large memory requirements (although there are some exceptions to this) \cite{turner:memory}. HPDA systems on the other hand often do have high memory capacity requirements, and also require a high-performance filesystem to enable very large amounts of data to be read, processed, and written. B-APM, with its very high performance I/O characteristics and vastly increased capacity (compared to volatile memory), offers a potential hardware solution to enable the construction of a compute platform that can support both types of use case, with high performance processors, very large amounts of B-APM in compute nodes, and a high-performance network, providing a scalable compute, memory, and I/O system.
In this paper, we outline the systemware and hardware required to provide such a system. We start by describing persistent memory, and the functionality it provides, in more detail in section \ref{sec:pm}. In section \ref{sec:opportunities} we discuss how B-APM could be exploited for scientific computation or data analytics. Following this we outline our proposed hardware and systemware architectures in sections \ref{sec:hardware} and \ref{sec:systemware}, and describe how applications could benefit from such a system in section \ref{sec:using}. We finish by discussing related work in section \ref{sec:related} and summarising the paper in the final section.

\section{Persistent Memory}\label{sec:pm}

B-APM takes new non-volatile memory technology and packages it in the same form factor (i.e. using the same connector and dimensions) as main memory (SDRAM DIMM form factor). This allows B-APM to be installed and used alongside DRAM based main memory, accessed through the same memory controller. As B-APM is installed in a processor's memory channels, applications running on the system can access B-APM directly as if it were main memory, including true random data access at byte or cache line granularity. Such an access mechanism is very different to the traditional block based approaches used for current HDD or SSD devices, which generally require I/O to be done using blocks of data (i.e. 4KB of data written or read in one operation), and rely on expensive kernel interrupts.

The B-APM technology that will be the first to market is Intel and Micron's 3D XPoint\texttrademark\ memory \cite{hady:3dxpoint}. The performance of this byte-addressable B-APM is projected to be lower than main memory (with a latency $\sim$5-10x that of DDR4 memory when connected to the same memory channels), but much faster than SSDs or HDDs. It is also projected to be of much larger capacity than DRAM, around 2-5x denser (i.e. 2-5x more capacity in the same form factor).
\subsection{Data Access}

This new class of memory offers very large memory capacity for servers, as well as long term very high performance persistent storage within the memory space of the servers, and the ability to undertake I/O (reading and writing storage data) in a new way. Direct access (DAX) from applications to individual bytes of data in the B-APM is very different from the block-oriented way I/O is currently implemented. B-APM has the potential to enable synchronous, byte level I/O, moving away from the asynchronous block-based file I/O applications currently rely on. In current asynchronous I/O, user applications pass data to the operating system (OS), which then uses driver software to issue an I/O command, putting the I/O request into a queue on a hardware controller. The hardware controller will process that command when ready, notifying the OS that the I/O operation has finished through an interrupt to the device driver.

B-APM can be accessed simply by using a load or store instruction, as with any other memory operation from an application or program. If the application requires persistence, it must flush the data from the volatile CPU caches, using cache flush and fence instructions, and ensure that the same data has also arrived on the non-volatile medium. To keep persistent writes performant, new, more efficient cache flush operations have been introduced. Additionally, write buffers in the memory controller may be protected by hardware through modified power supplies (such as those supporting asynchronous DRAM refresh \cite{nvdimm:snia}). With B-APM providing much lower latencies than external storage devices, the traditional I/O block access model, using interrupts, becomes inefficient because of the overhead of context switches between user and kernel mode (which can take thousands of CPU cycles \cite{contextswitch}).
Furthermore, with B-APM it becomes possible to implement remote persistent access to data stored in the memory using RDMA technology over a suitable interconnect. Using high performance networks can enable access to data stored in B-APM in remote nodes faster than accessing local high performance SSDs via traditional I/O interfaces and stacks inside a node. Therefore, it is possible to use B-APM to greatly improve I/O performance within a server, increase the memory capacity of a server, or provide a remote data store with high performance access for a group of servers to share. Such storage hardware can also be scaled up by adding more B-APM memory in a server, or adding more nodes to the remote data store, allowing the I/O performance of a system to scale as required.

However, if B-APM is provisioned in the servers, there must be software support for managing data within the B-APM. This includes moving data as required for the jobs running on the system, and providing the functionality to let applications run on any server and still utilise the B-APM for fast I/O and storage (i.e. applications should be able to access B-APM in remote nodes if the system is configured with B-APM only in a subset of all nodes). As B-APM is persistent, it also has the potential to be used for resiliency, providing backup for data from active applications, or providing long term storage for databases or data stores required by a range of applications. With support from the systemware, servers can be enabled to handle power loss without experiencing data loss, efficiently and transparently recovering from power failure and resuming applications from their latest running state, and maintaining data with little overhead in terms of performance.

\subsection{B-APM modes of operation}

Ongoing developments in memory hierarchies, such as the high bandwidth memory in Xeon Phi manycore processors or NVIDIA GPUs, have provided new memory models for programmers and system designers/implementers.
A common model that has been proposed includes the ability to configure main memory and B-APM in two different modes: Single-level and Dual-level memory \cite{multilevel:intel}.

Single-level memory, or SLM, has main memory (DRAM) and B-APM as two separate memory spaces, both accessible by applications, as outlined in Figure \ref{1lm_pic}. This is very similar to the Flat Mode \cite{mcdram:colfax} configuration of the high bandwidth, on-package, MCDRAM in the current Intel Knights Landing processor. The DRAM is allocated and managed via standard memory APIs such as {\it malloc} and represents the OS-visible main memory size. The B-APM will be managed by programming APIs and presents the non-volatile part of the system memory. Both will allow direct CPU load/store operations. In order to take advantage of B-APM in SLM mode, systemware or applications have to be adapted to use these two distinct address spaces.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{1lm.png}
\caption{Single-level memory (SLM) configuration using main memory and B-APM}
\label{1lm_pic}
\end{figure}

Dual-level memory, or DLM, configures DRAM as a cache in front of the B-APM, as shown in Figure \ref{2lm_pic}. Only the memory space of the B-APM is available to applications; data being used is stored in DRAM and moved to B-APM by the memory controller when no longer immediately required (as with standard CPU caches). This is very similar to the Cache Mode \cite{mcdram:colfax} configuration of MCDRAM on KNL processors.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{2lm.png}
\caption{Dual-level memory (DLM) configuration using main memory and B-APM}
\label{2lm_pic}
\end{figure}

This mode of operation does not require applications to be altered to exploit the capacity of B-APM, and aims to give memory access performance at main memory speeds whilst providing access to the large memory space of B-APM.
However, exactly how well the main memory cache performs will depend on the specific memory requirements and access pattern of a given application. Furthermore, persistence of the B-APM contents can no longer be guaranteed, due to the volatile DRAM cache in front of the B-APM, so the non-volatile characteristics of B-APM are not exploited.

\subsection{Non-volatile memory software ecosystem}

The Storage Networking Industry Association (SNIA) have produced a software architecture for B-APM with persistent load/store access, formalised in the Linux Persistent Memory Development Kit (PMDK) \cite{pmem} library. This approach re-uses the naming scheme of files as traditional persistent entities and maps the B-APM regions into the address space of a process (similar to memory mapped files in Linux). Once the mapping has been done, the file descriptor is no longer needed and can be closed. Figure \ref{pmem_pic} outlines the PMDK software architecture.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{pmempic.png}
\caption{PMDK software architecture}
\label{pmem_pic}
\end{figure}

\section{Opportunities for exploiting B-APM for computational simulations and data analytics}\label{sec:opportunities}

Reading data from and writing it to persistent storage is usually not the most time consuming part of computational simulation applications. Analysis of common applications from a range of different scientific areas shows that around 5-20\% of runtime for applications is involved in I/O operations \cite{layton:IO,Luu:IO}. It is evident that B-APM can be used to improve I/O performance for applications by replacing slower SSDs or HDDs in external filesystems. However, such a use of B-APM would be only an incremental improvement in I/O performance, and would neglect some of the significant features of B-APM that can provide performance benefits for applications.
Firstly, deploying B-APM as an external filesystem would require provisioning a filesystem on top of the B-APM hardware. Standard storage devices require a filesystem to enable data to be easily written to or read from the hardware. However, B-APM does not require such functionality, and data can be manipulated directly on B-APM hardware simply through load/store instructions. Adding the filesystem and associated interface guarantees (i.e. the POSIX interface \cite{POSIX}) adds performance overheads that will reduce I/O performance on B-APM. Secondly, an external B-APM based filesystem would require all I/O operations to be performed over a network connection, as current filesystems are external to compute nodes (see Figure \ref{external_pic}). This would limit the maximum performance of I/O to that of the network between compute nodes and the nodes the B-APM is hosted in.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{externalstorage.png}
\caption{Current external storage for HPC and HPDA systems}
\label{external_pic}
\end{figure}

Our vision \cite{weiland:scm} for exploiting B-APM for HPC and HPDA systems is to incorporate the B-APM into the compute nodes, as outlined in Figure \ref{internal_pic}. This architecture allows applications to exploit the full performance of B-APM within the compute nodes they are using, by giving them the ability to access B-APM through load/store at byte-level granularity, as opposed to block-based, asynchronous I/O in traditional storage devices. Incorporating B-APM into compute nodes also has the benefit that I/O capacity and bandwidth can scale with the number of compute nodes in the system. Adding more compute nodes will increase the amount of B-APM in the system and add more aggregate bandwidth for I/O operations on B-APM.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{internalstorage.png}
\caption{Internal storage using B-APM in compute nodes for HPC and HPDA systems}
\label{internal_pic}
\end{figure}

For example, the current memory bandwidth of an HPC system scales with the number of nodes used. If we assume an achievable memory bandwidth per node of 100GB/s, then it follows that a system with 10 nodes has the potential to provide 1TB/s of memory bandwidth for a distributed application, and a system with 10000 nodes can provide 1PB/s of memory bandwidth. If an application is memory bandwidth bound and can parallelise across nodes then scaling up nodes like this clearly has the potential to improve performance. For B-APM in nodes, and taking 3D XPoint\texttrademark\ as an example, if we assume 20GB/s of memory bandwidth per node (5x less than the volatile memory bandwidth), then scaling up to 10 nodes provides 200GB/s of (I/O) memory bandwidth and 10000 nodes provides 200TB/s of (I/O) memory bandwidth. For comparison, the Titan system at ORNL has a Lustre file system with 1.4TB/s of bandwidth \cite{ornl:hardware} and they are aiming for 50TB/s of burst buffer \cite{dayley:burst} I/O by 2024 \cite{ornl:req}.

Furthermore, there is the potential to optimise not only the performance of a single application, but rather the performance of a whole scientific workflow, from data preparation through simulations, to data analysis and visualisation. Optimising full workflows by sharing data between different stages or steps in the workflow has the potential to completely remove, or greatly reduce, data movement/storage costs for large parts of the workflow altogether. Leaving data in-situ on B-APM for other parts of the workflow can significantly improve the performance of analysis and visualisation steps at the same time as reducing I/O costs for the application when writing the data out.
The total runtime of an application can be seen as the sum of its compute time, plus the time spent in I/O. Greatly reduced I/O costs therefore also have the beneficial side effect of allowing applications to perform more I/O within the same total cost of the overall simulation. This enables applications to maintain I/O costs in line with current behaviour whilst being able to process significantly more data. Furthermore, for those applications for which I/O does take up a large portion of the run time, including data analytics applications, B-APM has the potential to significantly reduce runtime.

\subsection{Potential caveats}

However, utilising internal storage is not without drawbacks. Firstly, the benefit of external storage is that there is a single namespace and location for compute nodes to use for data storage and retrieval. This means that applications running on the compute nodes can access data trivially as it is stored externally to the compute nodes. With internal storage, this guarantee is not provided. Data written to B-APM is local to specific compute nodes. It is therefore necessary for applications to be able to manage and move data between compute nodes, as well as to external data storage, or for some systemware components to undertake this task.

Secondly, B-APM may be expensive to provision in all compute nodes. It may not be practical to add the same amount of B-APM to all compute nodes, and systems may be constructed with islands of nodes with B-APM, and islands of nodes without B-APM. Therefore, application or systemware functionality to enable access to remote B-APM and to exploit/manage asymmetric B-APM configurations will be required. Both these issues highlight the requirement for an integrated hardware and software (systemware) architecture to enable efficient and easy use of this new memory technology in large scale computational platforms.
\section{Hardware architecture}\label{sec:hardware}

As 3D XPoint\texttrademark\ memory, and other B-APM when it becomes available, is designed to fit into standard memory form factors and be utilised using the same memory controllers that main memory exploits, the hardware aspect of incorporating B-APM into a compute server or system is not onerous. Standard HPC and HPDA systems comprise a number of compute nodes, connected together with a high performance network, along with login nodes and an external filesystem. Inside a compute node there are generally two or more multicore processors, connected to a shared motherboard, with associated volatile main memory provided for each processor. One or more network connections are also required in each node, generally connected to the PCIe bus on the motherboard.

To construct a compute cluster that incorporates B-APM all that is required is a processor and associated memory controller that support such memory. Customised memory controllers are required to intelligently deal with the variation in performance between B-APM and traditional main memory (i.e. DDR). For instance, as B-APM has a higher access latency than DDR memory it would impact performance if B-APM accesses were blocking, i.e. if the memory controller could not progress DDR accesses whilst a B-APM access was outstanding. However, other than modifying the memory controller to support such variable access latencies, it should be possible to support B-APM in a standard hardware platform, provided that sufficient capacity for memory is provided.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{nodehardware2.png}
\caption{Compute node hardware architecture}
\label{node_pic}
\end{figure}

Given both DRAM and B-APM are connected through the same memory controller, and memory controllers have a number of memory channels, it is also important to consider the balance of DRAM and B-APM attached to a processor.
If we assume a processor has 6 memory channels, to get full DRAM bandwidth we require at least one DRAM DIMM per memory channel. Likewise, if we want full B-APM bandwidth we need a B-APM DIMM per memory channel. Assuming that a memory channel can support two DIMM slots, this leads us to a configuration with 6 DRAM DIMMs and 6 B-APM DIMMs per processor, and double that with two processors per node. This configuration is also desirable to enable the DLM configuration, as DLM requires DRAM available to act as a cache for B-APM, meaning at least one DRAM DIMM is required per memory controller.

Pairing DRAM and B-APM DIMMs on memory channels is not required for all systems, and it should be possible to have some memory channels with no B-APM installed, or some memory channels with no DRAM DIMMs installed. However, if DLM mode is required on a system, it is sensible to expect that at least one DRAM DIMM must be installed per memory controller in addition to B-APM. Future system design may consider providing more than two DIMM slots per memory channel to facilitate systems with different memory configurations (i.e. more B-APM than DRAM DIMMs or memory controllers enabling full B-APM population of memory channels).

The proposed memory configuration allows us to investigate possible system configurations using B-APM memory. Table \ref{tab_scale_nodes} outlines different systems, assuming 3TB of B-APM per node, with a node capable of 2TFlop/s compute performance. These projections are achievable with existing processor technology, and demonstrate that very significant I/O bandwidth can be provided to match the compute performance achieved when scaling to very large numbers of nodes.
\begin{table}[ht]
\caption{B-APM enabled system configurations}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Number of} & \textbf{Compute} & \textbf{B-APM Capacity} & \textbf{B-APM Storage I/O}\\
\textbf{nodes} & \textbf{(PFlop/s)} & \textbf{(PB)} & \textbf{Bandwidth (TB/s)}\\
\hline
1 & 0.002 & 0.003 & 0.02 \\
\hline
768 & 1.5 & 2.3 & 15 \\
\hline
3072 & 6 & 9 & 61 \\
\hline
24576 & 49 & 73 & 491 \\
\hline
196608 & 393 & 589 & 3932 \\
\hline
\end{tabular}
\label{tab_scale_nodes}
\end{center}
\end{table}

Integrating new memory technology in existing memory channels does mean that providing sufficient locations for both main memory and B-APM to be added is important. Depending on the size of B-APM and main memory technology available, sufficient memory slots must be provided per processor to allow a reasonable amount of both memory types to be added to a node. Therefore, we are designing our system around a standard compute node architecture with sufficient memory slot provision to support large amounts of main memory and B-APM as shown in Figure \ref{node_pic}.

Another aspect, which we are not focusing on in the hardware architecture, is data security. As B-APM enables data to be retained inside compute nodes, ensuring the security of that data, and that it cannot be accessed by users or applications that are not authorised to do so, is important. The reason that we are not focusing on this in the hardware architecture is because this requirement can be addressed in software, but it may also be sensible to integrate encryption directly in the memory hardware, memory controller, or processor managing the B-APM.

\section{Systemware architecture}\label{sec:systemware}

Systemware implements the software functionality necessary to enable users to easily and efficiently utilise the system.
We have designed a systemware architecture that provides a number of different types of functionality, related to different methods for exploiting B-APM for large scale computational simulation or data analytics. From the hardware features B-APM provides, our analysis of current HPC and HPDA applications and the functionality they utilise, and our investigation of future functionality that may benefit such applications, we have identified a number of different kinds of functionality that the systemware architecture should support:

\begin{enumerate}
\item Enable users to request that systemware components load/store data in B-APM prior to a job starting, or after a job has completed. This can be thought of as similar to current burst buffer technology. This will allow users to exploit B-APM without changing their applications.
\item Enable users to directly exploit B-APM by modifying their applications to implement direct memory access and management. This offers users the ability to access the best performance B-APM can provide, but requires application developers to undertake the task of programming for B-APM themselves, and ensure they are using it in an efficient manner.
\item Provide a filesystem built on the B-APM in compute nodes. This allows users to exploit B-APM for I/O operations without having to fundamentally change how I/O is implemented in their applications. However, it does not enable the benefit of moving away from file based I/O that B-APM can provide.
\item Provide an object, or key value, store that exploits the B-APM to enable users to explore different mechanisms for storing and accessing data from their applications.
\item Enable the sharing of data between applications through B-APM. For example, this may be sharing data between different components of the same computational workflow, or be the sharing of a common dataset between a group of users.
\item Ensure data access is restricted to those authorised to access that data, and enable deletion or encryption of data to make sure those access restrictions are maintained.
\item Provision of profiling and debugging tools to allow application developers to understand the performance characteristics of B-APM, investigate how their applications are utilising it, and identify any bugs that occur during development.
\item Efficient check-pointing for applications, if requested by users.
\item Provide different memory modes if they are supported by the B-APM hardware.
\item Enable or disable systemware components as required for a given user application to reduce the performance impact of the systemware on a running application, if that application is not using those systemware components.
\end{enumerate}

\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{systemware.png}
\caption{Systemware architecture to exploit B-APM hardware in Compute nodes}
\label{systemware_pic}
\end{figure*}

The systemware architecture we have defined is outlined in Figure \ref{systemware_pic}. Whilst this architecture may appear to have a large number of components and significant complexity, the number of systemware components that are specific to a system that contains B-APM is relatively small. The new or modified components we have identified as required to support B-APM in large scale, multi-user, multi-application compute platforms are as follows:

\begin{itemize}
\item Job Scheduler
\item Data Scheduler
\item Object Store
\item Filesystem
\item Programming Environment
\end{itemize}

We will describe these in more detail in the following subsections.

\subsection{Job scheduler}

As the innovation in our proposed system is the inclusion of B-APM within nodes, one of the key components that must support the new hardware resource is the job scheduler.
Job schedulers, or batch systems, are used to manage, schedule, and run user jobs on the shared resource that is the compute nodes. Standard job schedulers are configured with the number of nodes in a system, the number of cores per node, and possibly the amount of memory or whether there are accelerators (like GPUs) in compute nodes in a system. They then use this information, along with a scheduling algorithm and scheduling policies, to allocate user job requests to a set of compute nodes. Users submit job requests specifying the compute resources required (i.e. the number of nodes or compute cores a job will require) along with a maximum runtime for the job. This information is used by the job scheduler to accurately, efficiently, and fairly assign applications to resources.

Adding B-APM to compute nodes provides another layer of hardware resource that needs to be monitored and scheduled against by the job scheduler. As data can persist in B-APM, and one of our target use cases is the sharing of data between applications using B-APM, the job scheduler needs to be extended both to be aware of this new hardware resource, and to allow data to be retained in B-APM after an individual job has finished. This functionality is achieved through adding workflow awareness to the job scheduler, providing functionality to allow data to be retained and shared through jobs participating in the workflow, although not indefinitely \cite{farasarakis:PDSW}. The job scheduler also needs to be able to clean up the B-APM after a job has finished, ensuring no data is left behind or B-APM resources consumed, unless specifically as part of a workflow. Furthermore, as the memory system is likely to have different modes of operation, the job scheduler will need to be able to query the current configuration of the memory hardware, and be able to change configuration modes if required by the next job that will be using a particular set of compute nodes.
We are also investigating new scheduling algorithms, specifically data aware and energy aware scheduling algorithms, to optimise system efficiency or throughput using B-APM functionality. These will utilise the job scheduler's awareness of B-APM functionality and compute job data requirements.

\subsection{Data scheduler}

The data scheduler is an entirely new component, designed to run on each compute node and provide data movement and shepherding functionality. Much of the new functionality we are implementing to exploit B-APM for users involves moving data to and from B-APM asynchronously (i.e. pre-loading data before a job starts, or moving data from B-APM after a job finishes). Furthermore, we also require the ability to move data between different nodes (i.e. in the case that a job runs on a node without B-APM and requires B-APM functionality, or a job runs and needs to access data left on B-APM in a different node by another job). To provide such support without requiring users to modify their applications we implement such functionality in the data scheduler component. This component has interfaces for applications to interact with, and is also interfaced with the job scheduler component on each compute node. Through these interfaces the data scheduler can be instructed to move data as required by a given application or workflow.

\subsection{Object store}

We recognise the performance and functionality benefits that exploiting new storage technologies can bring to applications. We are therefore investigating the use of object stores, such as DAOS \cite{lofstead:DAOS} and dataClay \cite{marti:dataclay}, and are porting them to the hardware architecture we are proposing, i.e. systems with distributed B-APM as the main storage hardware.

\subsection{Filesystems}

As previously discussed, there is a very large pre-existing code base currently exploiting HPC and HPDA systems. The majority of these will undertake I/O using files through an external filesystem.
Therefore, the easiest mechanism for supporting such applications, in the first instance, is to provide filesystems hosted on the B-APM hardware. Our architecture provides the functionality for two different types of filesystems using B-APM:

\begin{itemize}
\item Local, on-node
\item Distributed, cross-node
\end{itemize}

The local filesystem will provide applications with a space for reading or writing data from/to a filesystem that is separate for each compute node, i.e. a scratch or \texttt{/tmp} filesystem on each node. This will enable very high performance file I/O, but require applications (or the data scheduler) to manage these files. It will also provide a storage space for files to be loaded prior to a job starting (i.e. similar to burst buffer functionality), or for files that should be moved to an external filesystem when a job has finished. The distributed filesystem will provide functionality similar to current parallel filesystems (e.g. Lustre), except it will be hosted directly on the B-APM hardware and not require external I/O servers or nodes.

\subsection{Programming environment}

Finally, the programming environment, i.e. the libraries, compilers, programming languages, and communication libraries used by applications, needs to be modified to support B-APM. An obvious example of such requirements is ensuring that common I/O libraries support using B-APM for storage. For instance, many computational simulation applications use MPI-I/O, HDF5, or NetCDF to undertake I/O operations. Ensuring these libraries can utilise B-APM in some way to undertake high performance I/O will ensure a wide range of existing applications can exploit the system effectively. Further modifications, or new functionality, may also be beneficial. For instance, we will deploy a task based programming environment, PyCOMPs \cite{tejedor:pycomps}, which can interact directly with object storage.
This will enable us to evaluate whether new parallel programming approaches will enable exploitation of B-APM and the functionality it provides more easily than adapting existing applications to functionality such as object stores or byte-based data accesses to B-APM. \section{Using B-APM}\label{sec:using} To allow a fuller understanding of how a system developed from the architectures we have designed could be used, we discuss some of the possible usage scenarios in the following text. We outline the systemware and hardware components used by a given use case, and the lifecycle of the data in those components. \subsection{Filesystems on B-APM} For this use case we assume an application undertaking standard file based I/O operations, using either parallel or serial I/O functionality. We assume the distributed, cross-node, B-APM filesystem is used for I/O whilst the application is running, and the external high performance filesystem is used for data storage after an application has finished. The job scheduling request includes a request for files to be pre-loaded onto the distributed B-APM filesystem prior to the job starting. In this use case we expect the following systemware interaction to occur (also outlined in Figure \ref{filesystem_seq_pic}): \begin{enumerate} \item User submits a job scheduling request. \begin{enumerate}[label=\alph*.] \item At this point the application to be run is either stored on the external high performance filesystem or on the local filesystem on the login nodes, and the data for the application is stored on the external high performance filesystem. \end{enumerate} \item Job scheduler allocates resources for an application. \begin{enumerate}[label=\alph*.] \item The job scheduler ensures that nodes are in SLM mode and that the multi-node B-APM filesystem component is operational on the nodes being used by this application.
\end{enumerate} \item Once the nodes that the job has been allocated to are available, the data scheduler (triggered by the job scheduler) copies input data from the external high performance filesystem to the multi-node B-APM filesystem. \begin{enumerate}[label=\alph*.] \item This step is optional; an application could read input data directly from the external high performance filesystem, but using the multi-node B-APM filesystem will deliver better performance. \end{enumerate} \item Job Launcher starts the user application on the allocated compute nodes. \item Application reads data from the multi-node B-APM filesystem. \item Application writes data to the multi-node B-APM filesystem. \item Application finishes. \item Data Scheduler is triggered and moves data from the multi-node B-APM filesystem to the external high performance filesystem. \end{enumerate} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{filesystem_sequence.png} \caption{Sequence diagram for systemware component use by application requesting data be loaded into distributed B-APM-based filesystem prior to starting} \label{filesystem_seq_pic} \end{figure} \section{Related work}\label{sec:related} There are existing technological solutions that offer similar functionality to B-APM and that can also be exploited for high performance I/O. One example is NVMe devices: SSDs that are attached to the PCIe bus and support the NVM Express interface. Indeed, Intel already has a line of NVMe devices on the market that use 3D XPoint\texttrademark memory technology, called Intel Optane. Other vendors have a large range of NVMe devices on the market, most of them based on different variations of Flash technology. NVMe devices have the potential to provide byte-level storage access, using the {\it PMDK} libraries. A file can be opened and presented as a memory space for an application, and then can be used directly as memory by that application, removing the overhead of file access (i.e.
data access through file reads and writes) when performing I/O and enabling the development of applications that exploit B-APM functionality. However, given that NVMe devices are connected via the PCIe bus, and have a disk controller on the device through which access is managed, NVMe devices do not provide the same level of performance that B-APM offers. Indeed, as these devices still use block-based data access, fine grained memory operations can require whole blocks of data to be loaded or stored to the device, rather than individual bytes. There is a wide range of parallel and high performance filesystems designed to enable high performance I/O from large scale compute clusters\cite{schwan:Lustre,schmuck:GPFS,beegfs}. However, these provide POSIX compliant block based I/O interfaces, which do not offer byte level data access, requiring conversion of data from program data structures to a flat file format. Furthermore, whilst it is advantageous that such filesystems are external resources, and therefore can be accessed from any compute node in a cluster, this means that filesystem performance does not necessarily scale with compute nodes. Such filesystems are specified and provisioned separately from the compute resource in an HPC or HPDA system. Work has been done to optimise the I/O performance of such high performance filesystems\cite{jian:lustre,choi:storage,lin:lustre,carns:storage}, but they do not address B-APM or new mechanisms for storing or accessing data without the overhead of a POSIX-compliant (or weakly-compliant) filesystem. Another technology that is being widely investigated for improving performance and changing I/O functionality for applications is some form of object or key-value store\cite{kim:keystore,lofstead:DAOS,marti:dataclay}. These provide alternatives to file-based data storage, enabling data to be stored in similar formats or structures as those used in the application itself.
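The byte-level access model described above for PMDK, opening a file and then using it directly as memory, can be illustrated in a language-neutral way with ordinary memory-mapped files. The sketch below uses Python's standard \texttt{mmap} module purely as an analogy: it does not use PMDK itself, and it does not model the cache-flush and fencing semantics that real B-APM persistence requires.

```python
import mmap
import os
import tempfile

# A plain file stands in for a persistent-memory region (illustration only).
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file into the address space: the application updates individual
# bytes directly, with no read()/write() block I/O on the access path.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        region[100:105] = b"hello"      # byte-granularity store
        region.flush()                  # loosely analogous to persisting stores

# The update is visible through the ordinary file interface afterwards.
with open(path, "rb") as f:
    data = f.read()
assert data[100:105] == b"hello"
```

The point of the analogy is the removal of explicit read/write calls from the access path; real persistent-memory programming additionally needs explicit flushes of CPU caches to the media, which PMDK provides.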
Object stores can start to approach byte level access granularity; however, they require applications to be significantly re-engineered to exploit such functionality. We are proposing hardware and systemware architectures in this work that will integrate B-APM into large scale compute clusters, providing significant I/O performance benefits and introducing new I/O and data storage/manipulation features to applications. Our key goal is to create systems that can both exploit the performance of the hardware and support applications whilst they port to these new I/O or data storage paradigms. Indeed, we recognise that there is a very large body of existing applications and data analysis workflows that cannot immediately be ported to new storage hardware (for time and resource constraint reasons). Therefore, our aim in this work is to provide a system that enables applications to obtain the best performance if porting work is undertaken to exploit B-APM hardware features, whilst still allowing applications to exploit B-APM and significantly improve performance without major software changes. \section{Summary}\label{sec:summary} This paper outlines a hardware and systemware architecture designed to enable the exploitation of B-APM hardware directly by applications, or indirectly by applications using systemware functionality that can exploit B-APM for applications. This dual nature of the system provides support for existing applications to exploit this emerging new memory hardware whilst enabling developers to modify applications to best exploit the hardware over time. The system outlined provides a range of different functionality. Not all functionality will be utilised by all applications, but providing a wide range of functionality, from filesystems to object stores to data schedulers, will enable the widest possible use of such systems. We are aiming for hardware and systemware that enables HPC and HPDA applications to co-exist on the same platform.
Whilst the hardware is novel and interesting in its own right, we predict that the biggest benefit in such technology will be realised through changes in application structure and data storage approaches facilitated by the byte-addressable persistent memory that will become routinely available in computing systems. In time it could be possible to completely remove the external filesystem from HPC and HPDA systems, removing hardware complexity and the energy/cost associated with such functionality. There is also the potential for volatile memory to disappear from the memory stack everywhere except on the processor itself, removing further energy costs from compute nodes. However, further work is required to evaluate the impact of the costs of the active systemware environment we have outlined in this paper, and the memory usage patterns of applications. Moving data asynchronously to support applications can potentially bring large performance benefits, but the impact such functionality has on applications running on those compute nodes needs to be investigated. This is especially important as, with distributed filesystems or object stores hosted on node-distributed B-APM, such in-node asynchronous data movements will be ubiquitous, even with intelligent scheduling algorithms. \appendices \section*{Acknowledgements} The NEXTGenIO project\footnote{www.nextgenio.eu} and the work presented in this paper were funded by the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement no. 671951. All the NEXTGenIO Consortium members (EPCC, Allinea, Arm, ECMWF, Barcelona Supercomputing Centre, Fujitsu Technology Solutions, Intel Deutschland, Arctur and Technische Universit\"at Dresden) contributed to the design of the architectures. \ifCLASSOPTIONcaptionsoff \newpage \fi \IEEEtriggeratref{24}
\section{Introduction} \label{sect:introduction} Polar codes and Reed-Muller (RM) codes are two closely related code families in the sense that their generator matrices are formed of rows from the same square matrix. Although RM codes were discovered several decades earlier than polar codes, the capacity-achieving property of RM codes was established very recently. Specifically, polar codes were proposed by Ar{\i}kan in \cite{Arikan09} and were shown to achieve capacity on all binary memoryless symmetric (BMS) channels in the same paper. In contrast, RM codes were proposed back in the 1950s \cite{Reed54,Muller54}, but the question of whether RM codes achieve capacity remained open for more than 60 years until the recent breakthroughs. It was shown in \cite{Kudekar17} that RM codes achieve capacity on binary erasure channels (BEC) under the block-MAP decoder. More recently, Reeves and Pfister proved that RM codes achieve capacity on all BMS channels under the bit-MAP decoder \cite{Reeves21}. While both code families achieve capacity of BMS channels, simulation results \cite{Mondelli14,Ye20} and theoretical analysis \cite{Hassani14,Hassani18} suggest that RM codes have better finite-length performance than polar codes. It was conjectured in \cite{AY20} that this is because RM codes polarize even faster than polar codes. More precisely, in the polar coding framework, we multiply a message vector consisting of $n=2^m$ message bits with the matrix $\mathbf{G}_n^{\polar}=(\mathbf{G}_2^{\polar})^{\otimes m}$ and transmit the resulting codeword vector through a BMS channel. Here $\mathbf{G}_2^{\polar}=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, and $\otimes$ is the Kronecker product. The message bits are divided into information bits and frozen bits according to their reliability under the successive decoder. This polar coding framework can also be used to analyze RM codes.
To that end, we replace the recursive relation $\mathbf{G}_{n}^{\polar}=\mathbf{G}_{n/2}^{\polar} \otimes \mathbf{G}_2^{\polar}$ in the standard polar code construction with $\mathbf{G}_{n}^{\RM}= \mathbf{P}_n^{\RM} (\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar}).$ Here $\mathbf{P}_n^{\RM}$ is an $n\times n$ permutation matrix which reorders the rows of $\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar}$ according to their Hamming weights. It was conjectured in \cite{AY20} that for the matrix $\mathbf{G}_{n}^{\RM}$, the reliability of each message bit under the successive decoder becomes completely ordered, i.e., each message bit is always more reliable than its previous bit. If this conjecture were true, then one could show that RM codes polarize faster than polar codes, which would lead to better finite-length performance. Inspired by the recursive relation $\mathbf{G}_{n}^{\RM}= \mathbf{P}_n^{\RM} (\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar})$ of RM codes, we propose a new family of codes called the Adjacent-Bits-Swapped (ABS) polar codes. In the construction of ABS polar codes, we use a similar recursive relation $ \mathbf{G}_{n}^{\ABS}= \mathbf{P}_n^{\ABS} (\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar}). $ The matrix $\mathbf{P}_n^{\ABS}$ is an $n\times n$ permutation matrix which swaps two adjacent rows if the two corresponding message bits are ``unordered'', i.e., if the previous bit is more reliable than its next bit under the successive decoder. Swapping two such adjacent rows always accelerates polarization because it makes the reliable bit even more reliable and the noisy bit even noisier. While the permutation matrix $\mathbf{P}_n^{\RM}$ for RM codes involves a large number of swaps of adjacent rows, we limit the number of swaps in $\mathbf{P}_n^{\ABS}$ so that the overall structure of ABS polar codes is still close to standard polar codes.
In this way, we are able to devise a modified successive cancellation list (SCL) decoder to efficiently decode ABS polar codes. Recall that both the code construction and the decoding algorithm of standard polar codes rely on a recursive relation between the bit-channels, which are the channels mapping from a message bit to the previous message bits and all the channel outputs. Since we swap certain pairs of adjacent bits in the ABS polar code construction, there is no explicit recursive relation between bit-channels. Instead, we introduce the notion of adjacent-bits-channels, which are $4$-ary-input channels mapping from two adjacent message bits to the previous message bits and all the channel outputs. As the main technical contribution of this paper, we derive a recursive relation between the adjacent-bits-channels. This recursive relation serves as the foundation of efficient code construction and decoding algorithms for ABS polar codes. We provide two sets of simulation results to compare the performance of ABS polar codes and standard polar codes. First, we empirically calculate the scaling exponents of ABS polar codes and standard polar codes over a binary erasure channel with erasure probability $0.5$. Our calculations show that the scaling exponent of ABS polar codes is $3.37$ while the scaling exponent of standard polar codes is $3.65$, confirming that the polarization of ABS polar codes is indeed faster than that of standard polar codes. Second, we conduct extensive simulations over the binary-input AWGN channels for various choices of parameters. In particular, we have tested the performance for code lengths $256, 512, 1024$, and $2048$. For each choice of code length, we test $3$ code rates: $0.3$, $0.5$, and $0.7$.
When we set the list size to be $32$ for the CRC-aided SCL decoders of both code families, ABS polar codes consistently outperform standard polar codes by $0.15\dB$--$0.3\dB$, but the decoding time of the ABS polar decoder is longer than that of standard polar codes by roughly $60\%$. If we use list size $20$ for ABS polar codes and keep the list size to be $32$ for standard polar codes, then the decoding time is more or less the same for these two codes, and ABS polar codes still outperform standard polar codes for most choices of parameters. In this case, the improvement over standard polar codes is up to $0.15\dB$. The organization of this paper is as follows: In Section~\ref{sect:main_idea}, we describe the main idea behind the ABS polar code construction and explain why ABS polar codes polarize faster than standard polar codes. In Section~\ref{sect:construction}, we derive the new recursive relation between the adjacent-bits-channels and use this recursive relation to construct ABS polar codes. In Section~\ref{sect:encoding}, we present an efficient encoding algorithm for ABS polar codes. In Section~\ref{sect:SCL}, we present the new SCL decoder for ABS polar codes. Finally, in Section~\ref{sect:simu}, we provide the simulation results. \section{Main idea of the new code construction} \label{sect:main_idea} \subsection{The polarization framework} \label{sect:polarization_framework} Let $U_1,U_2,\dots,U_n$ be $n$ i.i.d. Bernoulli-$1/2$ random variables. Let $\mathbf{G}_n$ be an $n\times n$ invertible matrix over the binary field. Define $(X_1,X_2,\dots,X_n)=(U_1,U_2,\dots,U_n) \mathbf{G}_n$. We transmit each $X_i$ through a BMS channel $W$ and denote the channel output vector as $(Y_1,Y_2,\dots,Y_n)$. In this framework, $(U_1,U_2,\dots,U_n)$ is the message vector, $\mathbf{G}_n$ is the encoding matrix, and $(X_1,X_2,\dots,X_n)$ is the codeword vector. We use a successive decoder to recover the message vector from the channel output vector.
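As a concrete anchor for this framework, the encoding step is nothing more than a vector-matrix product over GF(2). A minimal sketch (illustration only, not an optimized encoder):

```python
def encode(u, G):
    """Compute X = U * G over GF(2), with u a bit list and G a list of rows."""
    n = len(G)
    return [sum(u[k] * G[k][j] for k in range(n)) % 2 for j in range(n)]

# With the 2x2 kernel, X = (U1 + U2, U2) over GF(2).
G2 = [[1, 0], [1, 1]]
assert encode([1, 1], G2) == [0, 1]
assert encode([1, 0], G2) == [1, 0]
```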
More precisely, we decode the coordinates in the message vector one by one from $U_1$ to $U_n$. When decoding $U_i$, the successive decoder knows the values of all the previous message bits $U_1,\dots,U_{i-1}$ and all the channel outputs $Y_1,\dots,Y_n$. Note that the codeword vector $(X_1,X_2,\dots,X_n)$ depends on the matrix $\mathbf{G}_n$, and the channel output vector $(Y_1,Y_2,\dots,Y_n)$ depends on both the matrix $\mathbf{G}_n$ and the BMS channel $W$, although we omit the dependence from their notation. Next we define \begin{equation} \label{eq:conditional_entropy} H_i(\mathbf{G}_n,W) := H(U_i | U_1,\dots,U_{i-1},Y_1,\dots,Y_n) \text{~for~} 1\le i\le n, \end{equation} where $H(\cdot | \cdot)$ is the conditional entropy. $H_i(\mathbf{G}_n,W)$ measures the reliability of the $i$th message bit under the successive decoder when we use the encoding matrix $\mathbf{G}_n$ and transmit the codeword vector through the BMS channel $W$. Since $\mathbf{G}_n$ is an invertible matrix, we have \begin{equation} \label{eq:chain_rule} H_1(\mathbf{G}_n,W)+H_2(\mathbf{G}_n,W)+\dots+H_n(\mathbf{G}_n,W)=n(1-I(W)), \end{equation} where $I(W)$ is the channel capacity of $W$. We say that a family of matrices $\{\mathbf{G}_n\}$ is polarizing over a BMS channel $W$ if $H_i(\mathbf{G}_n,W)$ is close to either $0$ or $1$ for almost all $i\in\{1,2,\dots,n\}$ as $n\to\infty$. In order to quantify the polarization level of a given encoding matrix $\mathbf{G}_n$ over a BMS channel $W$, we define $$ \Gamma(\mathbf{G}_n, W)= \frac{1}{n}\sum_{i=1}^n H_i(\mathbf{G}_n,W) (1-H_i(\mathbf{G}_n,W) ). $$ According to the definition above, a family of matrices $\{\mathbf{G}_n\}$ is polarizing over $W$ if and only if $\Gamma(\mathbf{G}_n, W)\to 0$ as $n\to\infty$. 
A family of polarizing matrices $\{\mathbf{G}_n\}$ over a BMS channel $W$ allows us to construct capacity-achieving codes as follows: We include the $i$th row of $\mathbf{G}_{n}$ in the generator matrix if and only if $H_i(\mathbf{G}_n,W)$ is very close to $0$. The condition $H_i(\mathbf{G}_n,W)\approx 0$ guarantees that the decoding error of the constructed codes approaches $0$ under the successive decoder. We can further use \eqref{eq:chain_rule} to show that the code rate of the constructed codes approaches $I(W)$. To see this, we first assume the extreme case where $\Gamma(\mathbf{G}_n, W) = 0$, i.e., $H_i(\mathbf{G}_n,W)$ is either $0$ or $1$ for all $1\le i\le n$. Then by \eqref{eq:chain_rule} we know that the dimension of the constructed polar code is precisely $nI(W)$, i.e., the code rate is $R=I(W)$. For the realistic case of $\Gamma(\mathbf{G}_n, W) \to 0$ as $n \to \infty$, one can show that the gap to capacity $I(W)-R$ also decreases to $0$ as $n\to\infty$. Moreover, the smaller $\Gamma(\mathbf{G}_n, W)$ is, the smaller the gap to capacity. In the standard polar code construction \cite{Arikan09}, we construct the family of matrices $\{\mathbf{G}_{2^m}^{\polar}\}_{m=1}^\infty$ recursively using the following relation: $$ \mathbf{G}_2^{\polar} := \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \text{~and~~} \mathbf{G}_n^{\polar}= \mathbf{G}_{n/2}^{\polar} \otimes \mathbf{G}_2^{\polar} \text{~for~} n=2^m \ge 4, $$ where $\otimes$ is the Kronecker product and $m>1$ is a positive integer. It was shown in \cite{Arikan09} that $\{\mathbf{G}_{2^m}^{\polar}\}_{m=1}^\infty$ is polarizing over every BMS channel $W$, and the codes constructed from these matrices can be efficiently decoded.
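For the BEC, this polarization can be tracked in closed form: one application of $\mathbf{G}_2^{\polar}$ maps an erasure-type entropy $h$ to the pair $(2h-h^2,\, h^2)$, so the entropies $H_i(\mathbf{G}_{2^m}^{\polar}, W)$ for $W=\text{BEC}(\epsilon)$ follow by repeated expansion. The sketch below uses this standard BEC recursion (it applies to the BEC only) to show $\Gamma$ shrinking with every polar transform layer:

```python
def polar_bec_entropies(m, eps):
    """H_i(G_{2^m}^polar, BEC(eps)): one polar step maps h to (2h - h^2, h^2)."""
    H = [eps]
    for _ in range(m):
        H = [t for h in H for t in (2 * h - h * h, h * h)]
    return H

def gamma(H):
    # Polarization level: (1/n) * sum_i H_i (1 - H_i).
    return sum(h * (1 - h) for h in H) / len(H)

# Each extra polar transform layer deepens polarization on BEC(0.5).
levels = [gamma(polar_bec_entropies(m, 0.5)) for m in range(1, 11)]
assert all(a > b for a, b in zip(levels, levels[1:]))
```

Note that the sum of the entropies is preserved at every step, since $(2h-h^2)+h^2 = 2h$, which is the chain rule \eqref{eq:chain_rule} for the BEC.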
In this paper, our objective is to construct another family of polarizing matrices $\{\mathbf{G}_{2^m}^{\ABS}\}_{m=1}^\infty$ satisfying the following two conditions: (1) $\Gamma(\mathbf{G}_{2^m}^{\ABS}, W) < \Gamma(\mathbf{G}_{2^m}^{\polar}, W)$, i.e., the matrices $\mathbf{G}_{2^m}^{\ABS}$ polarize even faster than $\mathbf{G}_{2^m}^{\polar}$; (2) the codes constructed from $\{\mathbf{G}_{2^m}^{\ABS}\}_{m=1}^\infty$ can also be efficiently decoded. The first condition allows us to construct a new family of codes with smaller gap to capacity and better finite-length performance than standard polar codes. \subsection{Swapping unordered adjacent bits accelerates polarization}\label{sect:reason} The key observation in the standard polar code construction is that $\Gamma(\mathbf{G}_n, W)$ decreases as we perform the Kronecker product $\mathbf{G}_{2n} = \mathbf{G}_{n} \otimes \mathbf{G}_2^{\polar}$. More precisely, we always have $$ \Gamma(\mathbf{G}_{n} \otimes \mathbf{G}_2^{\polar}, W) < \Gamma(\mathbf{G}_n, W) $$ for every invertible matrix $\mathbf{G}_n$ as long as $I(W)$ is not equal to $0$ or $1$. Therefore, the Kronecker product $\mathbf{G}_{2n}=\mathbf{G}_n \otimes \mathbf{G}_2^{\polar}$ deepens the polarization at the cost of increasing the code length by a factor of $2$. In this paper, we observe that there is another method to deepen the polarization without increasing the code length, and this simple observation forms the foundation of our new code construction. Given a matrix $\mathbf{G}_{n}$ and a BMS channel $W$, we say that two adjacent message bits $U_i$ and $U_{i+1}$ are unordered if $H_i(\mathbf{G}_n,W)\le H_{i+1}(\mathbf{G}_n,W)$. This inequality means that $U_i$ is more reliable than $U_{i+1}$ under the successive decoder although $U_i$ is decoded before $U_{i+1}$. Our key observation is that in this case, switching the decoding order of $U_i$ and $U_{i+1}$ deepens the polarization. 
Intuitively, this is because switching the decoding order of these two bits makes the reliable bit even more reliable and the noisy bit even noisier. Note that switching the decoding order of $U_i$ and $U_{i+1}$ is equivalent to swapping the $i$th row and the $(i+1)$th row of $\mathbf{G}_{n}$. More specifically, let us define a new matrix $\overline{\mathbf{G}}_{n}$ as the matrix obtained from swapping the $i$th row and the $(i+1)$th row of $\mathbf{G}_{n}$ and keeping all the other rows the same as $\mathbf{G}_{n}$. Following the framework in Section~\ref{sect:polarization_framework}, let $(\overline{U}_1,\overline{U}_2,\dots,\overline{U}_n)$ be the message vector associated with the new matrix $\overline{\mathbf{G}}_{n}$, where $\overline{U}_1,\dots,\overline{U}_n$ are $n$ i.i.d. Bernoulli-$1/2$ random variables. Let $(\overline{X}_1,\dots,\overline{X}_n)=(\overline{U}_1,\dots,\overline{U}_n) \overline{\mathbf{G}}_{n}$ be the codeword vector transmitted through the BMS channel $W$ and let $(\overline{Y}_1,\dots,\overline{Y}_n)$ be the corresponding channel output vector. By definition \eqref{eq:conditional_entropy}, we have $$ H_j(\overline{\mathbf{G}}_n,W)= H(\overline{U}_j | \overline{U}_1,\dots,\overline{U}_{j-1},\overline{Y}_1,\dots,\overline{Y}_n) \text{~for~} 1\le j\le n. 
$$ By the relation between the matrices $\overline{\mathbf{G}}_{n}$ and $\mathbf{G}_{n}$, we have \begin{align} & H_j(\mathbf{G}_n,W)=H_j(\overline{\mathbf{G}}_n,W) \text{~for all~} j\in\{1,2,\dots,n\}\setminus\{i,i+1\}, \label{eq:d1} \\ & H_i(\mathbf{G}_n,W) = H(\overline{U}_{i+1} | \overline{U}_1,\dots,\overline{U}_{i-1},\overline{Y}_1,\dots,\overline{Y}_n) \nonumber \\ \ge & H(\overline{U}_{i+1} | \overline{U}_1,\dots,\overline{U}_{i-1},\overline{U}_i,\overline{Y}_1,\dots,\overline{Y}_n) = H_{i+1}(\overline{\mathbf{G}}_n,W), \label{eq:d2} \\ & H_{i+1}(\mathbf{G}_n,W) = H(\overline{U}_i | \overline{U}_1,\dots,\overline{U}_{i-1},\overline{U}_{i+1},\overline{Y}_1,\dots,\overline{Y}_n) \nonumber \\ \le & H(\overline{U}_i | \overline{U}_1,\dots,\overline{U}_{i-1},\overline{Y}_1,\dots,\overline{Y}_n) = H_i(\overline{\mathbf{G}}_n,W). \label{eq:d3} \end{align} Now suppose that $U_i$ and $U_{i+1}$ are unordered, i.e., $H_i(\mathbf{G}_n,W)\le H_{i+1}(\mathbf{G}_n,W)$. Combining this inequality with \eqref{eq:d2}--\eqref{eq:d3}, we obtain \begin{equation} \label{eq:3inq} H_{i+1}(\overline{\mathbf{G}}_n,W) \le H_i(\mathbf{G}_n,W)\le H_{i+1}(\mathbf{G}_n,W) \le H_i(\overline{\mathbf{G}}_n,W). \end{equation} Moreover, $$ H_i(\mathbf{G}_n,W) + H_{i+1}(\mathbf{G}_n,W) = H_i(\overline{\mathbf{G}}_n,W) + H_{i+1}(\overline{\mathbf{G}}_n,W) = H(\overline{U}_i, \overline{U}_{i+1} | \overline{U}_1,\dots,\overline{U}_{i-1},\overline{Y}_1,\dots,\overline{Y}_n). 
$$ This equality together with \eqref{eq:3inq} implies that \begin{align*} & (H_i(\mathbf{G}_n,W))^2 + (H_{i+1}(\mathbf{G}_n,W))^2 \\ = & \frac{1}{2} \Big( \big(H_i(\mathbf{G}_n,W) + H_{i+1}(\mathbf{G}_n,W) \big)^2 + \big(H_i(\mathbf{G}_n,W) - H_{i+1}(\mathbf{G}_n,W) \big)^2 \Big) \\ \le & \frac{1}{2} \Big( \big(H_i(\overline{\mathbf{G}}_n,W) + H_{i+1}(\overline{\mathbf{G}}_n,W) \big)^2 + \big(H_i(\overline{\mathbf{G}}_n,W) - H_{i+1}(\overline{\mathbf{G}}_n,W) \big)^2 \Big) \\ = & (H_i(\overline{\mathbf{G}}_n,W))^2 + (H_{i+1}(\overline{\mathbf{G}}_n,W))^2. \end{align*} Therefore, \begin{align*} & H_i(\overline{\mathbf{G}}_n,W) \big(1-H_i(\overline{\mathbf{G}}_n,W) \big) + H_{i+1}(\overline{\mathbf{G}}_n,W) \big(1- H_{i+1}(\overline{\mathbf{G}}_n,W) \big) \\ \le & H_i(\mathbf{G}_n,W) \big(1- H_i(\mathbf{G}_n,W) \big) + H_{i+1}(\mathbf{G}_n,W) \big(1- H_{i+1}(\mathbf{G}_n,W) \big). \end{align*} Combining this with \eqref{eq:d1}, we conclude that $\Gamma(\overline{\mathbf{G}}_n, W) \le \Gamma(\mathbf{G}_n, W)$. This formally justifies that switching the decoding order of two unordered adjacent bits deepens polarization. \subsection{Our new code construction and its connection to RM codes} \label{sect:connection_to_RM} We view the operation of taking the Kronecker product $\mathbf{G}_{n}^{\polar}=\mathbf{G}_{n/2}^{\polar} \otimes \mathbf{G}_2^{\polar}$ in the standard polar code construction as one layer of polar transform. Then the construction of a standard polar code with code length $n=2^m$ consists of $m$ consecutive layers of polar transforms. In light of the discussion in Section~\ref{sect:reason}, we add a permutation layer after each polar transform layer in our ABS polar code construction. 
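The polarization-deepening effect of such swaps can be checked numerically on a small example. Over the BEC with erasure probability $0.5$, the eight bits of $\mathbf{G}_8^{\polar}$ have entropies $(0.996, 0.879, 0.809, 0.316, 0.684, 0.191, 0.121, 0.004)$, so $U_4$ and $U_5$ are unordered. The sketch below verifies by exact enumeration over erasure patterns that swapping rows $4$ and $5$ does not increase $\Gamma$. It uses the fact that on the BEC each $U_i$ is either fully determined or completely unknown, so $H_i$ is the probability that $e_i$ falls outside the GF(2) span of $e_1,\dots,e_{i-1}$ and the unerased columns of $\mathbf{G}_n$ (brute force, small $n$ only):

```python
from itertools import product

def kron(A, B):
    # Kronecker product of binary matrices given as lists of rows.
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def bec_entropies(G, eps):
    """Exact H_i(G, BEC(eps)) for a small invertible binary matrix G."""
    n = len(G)
    # Encode each linear functional of (U_1,...,U_n) as an n-bit integer;
    # the j-th code bit is X_j = <column j of G, U>.
    cols = [sum(G[k][j] << k for k in range(n)) for j in range(n)]

    def reduce(v, piv):
        # Gaussian reduction of v against pivots {msb position: basis vector}.
        while v:
            p = v.bit_length() - 1
            if p not in piv:
                return v
            v ^= piv[p]
        return 0

    H = [0.0] * n
    for pat in product([0, 1], repeat=n):       # pat[j] = 1 means Y_j erased
        prob = 1.0
        for e in pat:
            prob *= eps if e else 1.0 - eps
        piv = {}
        for j, c in enumerate(cols):            # span of the unerased columns
            if not pat[j]:
                c = reduce(c, piv)
                if c:
                    piv[c.bit_length() - 1] = c
        for i in range(n):                      # then add e_1, e_2, ... in turn
            r = reduce(1 << i, piv)
            if r:                               # e_i not in span: U_i unknown
                H[i] += prob
                piv[r.bit_length() - 1] = r
    return H

def gamma(H):
    return sum(h * (1 - h) for h in H) / len(H)

G2 = [[1, 0], [1, 1]]
G8 = kron(kron(G2, G2), G2)
H = bec_entropies(G8, 0.5)
assert H[3] < H[4]                # U_4 and U_5 are unordered on BEC(0.5)

# Swapping rows 4 and 5 (switching the decoding order) deepens polarization.
G8_swapped = [row[:] for row in G8]
G8_swapped[3], G8_swapped[4] = G8_swapped[4], G8_swapped[3]
H_swapped = bec_entropies(G8_swapped, 0.5)
assert gamma(H_swapped) <= gamma(H)
```

This enumeration is exponential in $n$ and is intended only to illustrate the inequality $\Gamma(\overline{\mathbf{G}}_n, W) \le \Gamma(\mathbf{G}_n, W)$ on a toy instance.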
More precisely, we replace the recursive relation $\mathbf{G}_{n}^{\polar}=\mathbf{G}_{n/2}^{\polar} \otimes \mathbf{G}_2^{\polar}$ in the standard polar code construction with \begin{equation} \label{eq:GnABS} \mathbf{G}_{n}^{\ABS}= \mathbf{P}_n^{\ABS} (\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar}), \end{equation} where the matrix $\mathbf{P}_n^{\ABS}$ is an $n\times n$ permutation matrix. In this case, $\mathbf{G}_{n}^{\ABS}$ is a row permutation of the Kronecker product $\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar}$. The permutation associated with $\mathbf{P}_n^{\ABS}$ is a composition of multiple swaps of unordered adjacent bits. The starting point of the recursive relation \eqref{eq:GnABS} is $\mathbf{G}_1^{\ABS}=[1]$, the identity matrix of size $1\times 1$. Before we present how to choose $\mathbf{P}_n^{\ABS}$ in \eqref{eq:GnABS}, let us point out an interesting connection between our new code and RM codes. In fact, RM codes can also be constructed using a similar recursive relation: \begin{equation} \label{eq:GnRM} \mathbf{G}_{n}^{\RM}= \mathbf{P}_n^{\RM} (\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar}). \end{equation} Here $\mathbf{P}_n^{\RM}$ is an $n\times n$ permutation matrix which reorders the rows of $\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar}$ according to their Hamming weights. In other words, $\mathbf{G}_{n}^{\RM}$ is a row permutation of $\mathbf{G}_{n/2}^{\RM} \otimes \mathbf{G}_2^{\polar}$, and the Hamming weights of the rows of $\mathbf{G}_{n}^{\RM}$ are monotonically increasing from the first row to the last row. It was shown in \cite{AY20} that the family of matrices $\{\mathbf{G}_{n}^{\RM}\}$ is polarizing over every BMS channel $W$, i.e., $H_i(\mathbf{G}_n^{\RM},W)$ is close to either $0$ or $1$ for almost all $i\in\{1,2,\dots,n\}$ as $n\to\infty$. 
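The recursion \eqref{eq:GnRM} is straightforward to realise concretely. The sketch below uses a stable sort by Hamming weight as one concrete choice of $\mathbf{P}_n^{\RM}$ (the recursion fixes only the weight ordering, not how ties among equal-weight rows are broken):

```python
def kron(A, B):
    # Kronecker product of binary matrices given as lists of rows.
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def rm_matrix(m):
    """G_{2^m}^RM: after each Kronecker step, reorder rows by Hamming weight."""
    G2 = [[1, 0], [1, 1]]
    G = [[1]]
    for _ in range(m):
        G = kron(G, G2)
        G.sort(key=sum)          # stable sort: one choice of P_n^RM
    return G

G8 = rm_matrix(3)
weights = [sum(row) for row in G8]
assert weights == sorted(weights)    # weights increase from first to last row
```

Choosing the rows of largest weight (equivalently, the largest row indices) from this matrix recovers the usual RM generator matrices.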
It was further conjectured\footnote{The authors of \cite{AY20} provided some theoretical analysis and simulation results to support this conjecture.} in \cite{AY20} that $\{H_i(\mathbf{G}_n^{\RM},W)\}_{i=1}^n$ is decreasing for every BMS channel $W$, i.e., \begin{equation} \label{eq:conjecture} H_1(\mathbf{G}_n^{\RM},W) \ge H_2(\mathbf{G}_n^{\RM},W) \ge \dots \ge H_n(\mathbf{G}_n^{\RM},W). \end{equation} If this conjecture were true, then we could immediately conclude that RM codes achieve capacity of BMS channels. Indeed, RM codes choose the rows with the heaviest Hamming weights in $\mathbf{G}_{n}^{\RM}$ to form the generator matrices. Since the rows of $\mathbf{G}_{n}^{\RM}$ are sorted according to their Hamming weights, RM codes simply pick the rows with large row indices. By \eqref{eq:conjecture}, these rows correspond to the most reliable bits under the successive decoder. Moreover, since almost all the conditional entropies in \eqref{eq:conjecture} are close to either $0$ or $1$, the conditional entropies of the most reliable bits must be close to $0$, and the number of such bits is close to $nI(W)$ as $n\to\infty$. Furthermore, the conjecture \eqref{eq:conjecture} indicates that RM codes do not have any unordered adjacent bits. According to the discussion in Section~\ref{sect:reason}, this suggests that RM codes have fast polarization. In fact, it is widely believed that RM codes have a smaller gap to capacity than polar codes with the same parameters, which was suggested to be the case by both theoretical analysis \cite{Hassani14,Hassani18} and simulation results \cite{Mondelli14,Ye20}. Although RM codes are believed to have better performance than polar codes under the Maximum Likelihood (ML) decoder, the problem of designing an efficient decoder whose performance is almost the same as the ML decoder still remains open for RM codes, except for a certain range of parameters.
In particular, the performance of currently known decoding algorithms for RM codes \cite{Dumer06,Ye20,Lian20,Geiselhart21} is close to the ML decoder only in the short code length or low code rate regimes. In contrast, the performance of the successive cancellation list (SCL) decoder with list size $32$ is almost the same as the ML decoder for polar codes. Our new code construction is an intermediate point between RM codes and polar codes. On the one hand, the recursive relation \eqref{eq:GnABS} of our new code is similar to the recursion \eqref{eq:GnRM} of RM codes in the sense that both codes add a permutation layer after each polar transform layer to accelerate polarization. On the other hand, we only use a relatively small number of swaps in the permutation matrix $\mathbf{P}_n^{\ABS}$ while the permutation matrix $\mathbf{P}_n^{\RM}$ for RM codes involves a large number of swaps. As a consequence, the overall structure of our new code is still close to standard polar codes, and it allows efficient decoding by a modified SCL decoder. In order to explain how to choose $\mathbf{P}_n^{\ABS}$ in \eqref{eq:GnABS}, we introduce a sequence of permutation matrices. For $1\le i\le n-1$, we use $\mathbf{S}_n^{(i)}$ to denote the $n\times n$ permutation matrix that swaps $i$ and $i+1$ while mapping all the other elements to themselves. More precisely, only $4$ entries of $\mathbf{S}_n^{(i)}$ are different from the identity matrix. These $4$ entries are $\mathbf{S}_n^{(i)}(i,i)=\mathbf{S}_n^{(i)}(i+1,i+1)=0$ and $\mathbf{S}_n^{(i)}(i,i+1)=\mathbf{S}_n^{(i)}(i+1,i)=1$, where $\mathbf{S}_n^{(i)}(a,b)$ is the entry of $\mathbf{S}_n^{(i)}$ located at the intersection of the $a$th row and the $b$th column. The permutation matrix $\mathbf{P}_n^{\ABS}$ can be written as \begin{equation} \label{eq:pi} \mathbf{P}_n^{\ABS} = \prod_{i\in\mathcal{I}^{(n)}} \mathbf{S}_n^{(i)} , \end{equation} where $\mathcal{I}^{(n)}$ is a subset of $\{1,2,\dots,n-1\}$.
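As a concrete illustration of \eqref{eq:pi}, the sketch below builds the matrices $\mathbf{S}_n^{(i)}$ and multiplies them into $\mathbf{P}_n^{\ABS}$ for $n=8$ and a purely hypothetical swap set $\mathcal{I}^{(8)}=\{2,6\}$ (in the actual construction $\mathcal{I}^{(n)}$ is chosen from the channel statistics):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def swap_matrix(n, i):
    """S_n^(i): permutation matrix swapping positions i and i+1 (1-indexed)."""
    S = identity(n)
    S[i - 1][i - 1] = S[i][i] = 0
    S[i - 1][i] = S[i][i - 1] = 1
    return S

def mat_mult(A, B):
    # Matrix product over GF(2); for disjoint swaps this composes them.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

n = 8
swaps = [2, 6]                       # hypothetical I^(8); example only
P = identity(n)
for i in swaps:
    P = mat_mult(P, swap_matrix(n, i))

# Applying P to a column of labels swaps each chosen pair of adjacent entries.
labels = list(range(1, n + 1))
permuted = [sum(P[i][j] * labels[j] for j in range(n)) for i in range(n)]
assert permuted == [1, 3, 2, 4, 5, 7, 6, 8]
```

Because the chosen swap positions are disjoint, the factors in the product commute and the order of multiplication does not matter.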
Let us write $\mathcal{I}^{(n)}=\{i_1,i_2,\dots,i_s\}$, where $s$ is the size of $\mathcal{I}^{(n)}$. In the ABS polar code construction, we require that \begin{equation} \label{eq:separated} i_2\ge i_1+4, \quad i_3\ge i_2+4, \quad i_4\ge i_3+4, \quad \dots, \quad i_s\ge i_{s-1}+4. \end{equation} This condition guarantees that the swapped elements are fully separated, and it is the foundation of efficient code construction and efficient decoding for ABS polar codes. More specifically, the condition \eqref{eq:separated} allows us to efficiently track the evolution of every pair of adjacent bits through different layers of polar transforms in a recursive way. We will explain the details in Section~\ref{sect:construction}. As a final remark, we note that one needs to choose $m$ permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in the construction of an ABS polar code with code length $n=2^m$. \subsection{Comparison with the large kernel method} \label{sect:cmp_lkm} The finite-length scaling of polar codes is an important research topic in the polar coding literature \cite{Hassani14, Guruswami15, Mondelli15, Mondelli16}. The ABS polar code construction proposed in this paper is one way to improve the scaling exponent of polar codes. Another extensively studied method is to use large kernels instead of the Ar{\i}kan kernel $\mathbf{G}_2^{\polar}$ in the polar code construction \cite{Buz2017ISIT, Buz2017, Ye2015, Fazeli21, Wang21, Guruswami22}. In particular, it was shown in \cite{Fazeli21, Wang21, Guruswami22} that when the kernel size goes to infinity, the scaling exponent of polar codes approaches the optimal value $2$. Compared to the ABS polar code construction, the large kernel method has the following three disadvantages: (i) The choice of code length is more restrictive.
The code length of ABS polar codes can be any power of $2$, but the code length of polar codes with large kernels must be a power of the kernel size $\ell$, where $\ell$ is larger than $2$. Some typical choices of $\ell$ are $4,8,16$. (ii) The code construction is also more restrictive. In the original large kernel method, the same kernel is used repetitively throughout the whole code construction. In contrast, we use different permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in different layers. (iii) The decoding complexity is much larger. For $\ell\times\ell$ kernels, the decoding time increases by a factor of $2^\ell$ compared to standard polar codes. In contrast, the decoding time of ABS polar codes only increases by $60\%$ compared to standard polar codes, as indicated by the simulation results in Section~\ref{sect:simu}. Among the research on polar codes with large kernels, the permuted kernels and the permuted successive cancellation (PSC) decoder proposed in \cite{Buz2017ISIT, Buz2017} are particularly relevant to our paper. More specifically, \cite{Buz2017ISIT, Buz2017} proposed to use permuted kernels, whose size $\ell$ is a power of $2$. As suggested by its name, the permuted kernel is a row permutation of $\mathbf{G}_{\ell}^{\polar}$. This is similar in nature to the ABS polar code construction because the encoding matrix $\mathbf{G}_n^{\ABS}$ of ABS polar codes is also a row permutation of $\mathbf{G}_n^{\polar}$. Moreover, \cite{Buz2017ISIT, Buz2017} further proposed the PSC decoder to efficiently decode polar codes with permuted kernels. The PSC decoder together with the permuted kernels significantly reduces the decoding time compared to the standard SC decoder for polar codes with large kernels. In other words, the PSC decoder and permuted kernels mitigate the third disadvantage above. 
However, the first two disadvantages still remain, i.e., the choice of code length and the code construction are still more restrictive than for ABS polar codes. \section{Code construction of ABS polar codes} \label{sect:construction} The construction of ABS polar codes with code length $n=2^m$ consists of two main steps. The first step is to pick the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in the recursive relation \eqref{eq:GnABS}, as mentioned at the end of the previous section. After picking these permutation matrices, the second step is to find which bits are information bits and which bits are frozen bits. Although the second step is also needed in the construction of standard polar codes \cite{Arikan09,Tal13}, the techniques used in this paper are quite different. In the standard polar code construction, we can directly track the evolution of bit-channels in a recursive way. However, in the ABS polar code construction, it is not possible to identify a recursive relation between bit-channels directly because we swap certain pairs of adjacent bits in the code construction. Instead, we find a recursive relation between pairs of adjacent bits across different layers of polar transforms. After obtaining the joint distribution of every pair of adjacent bits, we are able to calculate the transition probabilities of the bit-channels and locate the information bits and the frozen bits. The organization of this section is as follows: In Section~\ref{sect:2x2transform}, we first recall how to track the evolution of bit-channels in standard polar codes using the basic $2\times 2$ transform. In Section~\ref{sect:DB}, we introduce a new transform and use it to establish a recursive relation between pairs of adjacent bits for standard polar codes. The purpose of Section~\ref{sect:DB} is to illustrate the application of the new transform in a familiar setting.
In Section~\ref{sect:SDB}, we use the new transform to track the evolution of adjacent bits in the ABS polar codes. The result in Section~\ref{sect:SDB} accomplishes the second step of the ABS polar code construction, i.e., it allows us to locate the information bits and the frozen bits when the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in the recursive relation \eqref{eq:GnABS} are known. Next, in Section~\ref{sect:cons_PnABS}, we explain how to pick these permutation matrices in the ABS polar code construction. Recall that the quantization operation is needed in the standard polar code construction \cite{Tal13} because the output alphabet size of the bit-channels grows exponentially with $n$. The same issue also arises in the ABS polar code construction, and we will discuss this in Section~\ref{sect:quantization}. Finally, we put everything together and summarize the code construction algorithm for ABS polar codes in Section~\ref{sect:summary_cons}. \subsection{Tracking the evolution of bit-channels in standard polar codes using the $2\times 2$ transform} \label{sect:2x2transform} Let us first recall the $2\times 2$ transform in the standard polar code construction. 
\begin{figure}[ht] \centering \begin{subfigure}{.45\linewidth} \centering \begin{tikzpicture} \draw node at (0,10.5) [] (u1) {$U_1$} node at (0,9) [] (u2) {$U_2$} node at (1.5,10.5) [XOR,scale=1.2] (x1) {} node at (2.5,10.5) [] (xx1) {$X_1$} node at (2.5,9) [] (xx2) {$X_2$} node at (3.8,10.5) [block] (v1) {$W$} node at (3.8,9) [block] (v2) {$W$} node at (5.5,10.5) [] (y1) {$Y_1$} node at (5.5,9) [] (y2) {$Y_2$}; \draw[fill] (1.5, 9) circle (.6ex); \draw[very thick,->](u1) -- node {}(x1); \draw[very thick,->](u2) -| node {}(x1); \draw[very thick,->](x1) -- (xx1); \draw[very thick,->](u2) -- (xx2); \draw[very thick,->](xx1) -- (v1); \draw[very thick,->](xx2) -- (v2); \draw[very thick,->](v1) -- node {}(y1); \draw[very thick,->](v2) -- node {}(y2); \end{tikzpicture} \caption{Multiply i.i.d. Bernoulli-$1/2$ random variables $(U_1,U_2)$ with the matrix $\mathbf{G}_2^{\polar}$, and then transmit the results through two copies of $W$. Under the successive decoder, this transforms two copies of $W$ into $W^-:U_1\to Y_1,Y_2$ and $W^+:U_2\to U_1,Y_1,Y_2$.} \label{fig:polar_transform_a} \end{subfigure} \hspace*{0.2in} \begin{subfigure}{.45\linewidth} \centering \begin{tikzpicture} \draw node at (0,1.5) [] (u1) {$W$} node at (0,0) [] (u2) {$W$} node at (1.2,1.5) [] (v1) {} node at (1.2,0) [] (v2) {} node at (3.7,1.5) [] (x1) {} node at (3.7,0) [] (x2) {} node at (5.2,1.5) [] (y1) {$W^-$} node at (5.2,0) [] (y2) {$W^+$} node at (2.45,0.75) [text width=2cm,align=center] {$2\times 2$\\Transform}; \draw[thick] (1.2,-0.5) rectangle (3.7,2); \draw[very thick,->](u1) -- node {}(v1); \draw[very thick,->](u2) -- node {}(v2); \draw[very thick,->](x1) -- node[above] {``$-$"}(y1); \draw[very thick,->](x2) -- node [above] {``$+$"} (y2); \end{tikzpicture} \caption{We take two independent copies of $W$ as inputs.
After the transform, we obtain a ``worse" channel $W^-:U_1\to Y_1,Y_2$ and a ``better" channel $W^+:U_2\to U_1,Y_1,Y_2$.} \end{subfigure} \caption{The $2\times 2$ basic polar transform} \label{fig:polar_transform} \end{figure} Given a BMS channel $W:\{0,1\}\to\mathcal{Y}$, the transition probabilities of $W^-:\{0,1\}\to\mathcal{Y}^2$ and $W^+:\{0,1\}\to\{0,1\}\times\mathcal{Y}^2$ in Fig.~\ref{fig:polar_transform} are given by \begin{equation} \label{eq:polar_transform} \begin{aligned} & W^-(y_1,y_2|u_1) = \frac{1}{2} \sum_{u_2\in\{0,1\}} W(y_1|u_1+u_2) W(y_2|u_2) \quad \text{for~} u_1\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}, \\ & W^+(u_1,y_1,y_2|u_2) = \frac{1}{2} W(y_1|u_1+u_2) W(y_2|u_2) \quad \text{for~} u_1,u_2\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}. \end{aligned} \end{equation} The basic $2\times 2$ transform plays a fundamental role in the standard polar code construction because it allows us to efficiently track the evolution of bit-channels in a recursive way. More specifically, the bit-channels induced by the matrix $\mathbf{G}_n^{\polar}$ are defined in Fig.~\ref{fig:bit_channels_polar} below. It is well known that the bit-channels associated with $\mathbf{G}_n^{\polar}$ and the bit-channels associated with $\mathbf{G}_{n/2}^{\polar}$ satisfy the following recursive relation: \begin{equation} \label{eq:recur_bit_channels} W_{2i-1}^{(n)}=(W_i^{(n/2)})^- \text{~~and~~} W_{2i}^{(n)}=(W_i^{(n/2)})^+ \text{~~for~} 1\le i\le n/2. \end{equation} Both the code construction and the decoding algorithm of standard polar codes rely on this recursive relation. 
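To make \eqref{eq:polar_transform} concrete, here is a short self-contained Python sketch (our own illustration; channels are stored as dictionaries with $W[u][y]=W(y|u)$). It applies one $2\times 2$ transform to a binary symmetric channel and checks the standard facts $I(W^-)\le I(W)\le I(W^+)$ and $I(W^-)+I(W^+)=2I(W)$:

```python
from itertools import product
from math import log2

def polar_transform(W):
    """One step of the 2x2 transform: W is a dict with W[u][y] = W(y|u).
    Returns (W_minus, W_plus); output symbols are tuples."""
    ys = list(W[0])
    Wm = {0: {}, 1: {}}
    Wp = {0: {}, 1: {}}
    for u1, y1, y2 in product((0, 1), ys, ys):
        # W^-(y1,y2|u1) = 1/2 * sum_{u2} W(y1|u1+u2) W(y2|u2)
        Wm[u1][(y1, y2)] = 0.5 * sum(W[(u1 + u2) % 2][y1] * W[u2][y2]
                                     for u2 in (0, 1))
        for u2 in (0, 1):
            # W^+(u1,y1,y2|u2) = 1/2 * W(y1|u1+u2) W(y2|u2)
            Wp[u2][(u1, y1, y2)] = 0.5 * W[(u1 + u2) % 2][y1] * W[u2][y2]
    return Wm, Wp

def capacity(W):
    """Symmetric capacity I(W) under the uniform input distribution."""
    return sum(0.5 * W[u][y] * log2(W[u][y] / (0.5 * (W[0][y] + W[1][y])))
               for u in (0, 1) for y in W[0] if W[u][y] > 0)

# BSC with crossover probability 0.1: polarization splits I(W) into a worse
# and a better channel while conserving the total capacity.
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
Wm, Wp = polar_transform(W)
assert capacity(Wm) <= capacity(W) <= capacity(Wp)
assert abs(capacity(Wm) + capacity(Wp) - 2 * capacity(W)) < 1e-9
```

Iterating this transform and recording the branch taken at each level follows the recursion \eqref{eq:recur_bit_channels}; in practice the growing output alphabets must be quantized, as discussed later in the paper.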
\begin{figure}[ht] \centering \begin{tikzpicture} \node [block, align=center] at (3,1.6) (y1) { $U_1$ \\[0.5em] $U_2$ \\[0.5em] \vdots \\[0.5em] $U_n$ }; \node [sblock, align=center] at (5,1.6) (y2) {$\mathbf{G}_n^{\polar}$}; \node [block, align=center] at (7,1.6) (y3) { $X_1$ \\[0.5em] $X_2$ \\[0.5em] \vdots \\[0.5em] $X_n$ }; \node [block] at (8.5, 2.7) (w1) {$W$}; \node [block] at (8.5, 2) (w2) {$W$}; \node at (8.5, 1.3) {\vdots}; \node [block] at (8.5, 0.6) (w3) {$W$}; \node at (10, 2.7) (z1) {$Y_1$}; \node at (10, 2) (z2) {$Y_2$}; \node at (10, 1.3) {\vdots}; \node at (10, 0.6) (z3) {$Y_n$}; \draw[->,thick] (y1)--(y2); \draw[->,thick] (y2)--(y3); \draw[->,thick] (w1)--(z1); \draw[->,thick] (w2)--(z2); \draw[->,thick] (w3)--(z3); \draw[->,thick] (7.4, 2.7)--(w1); \draw[->,thick] (7.4, 2)--(w2); \draw[->,thick] (7.4, 0.6)--(w3); \node at (5, -0.4) (p1) {$(X_1,\dots,X_n)=(U_1,\dots,U_n) \mathbf{G}_n^{\polar}$}; \node at (15,1.6) [align=left] {$W_1^{(n)}:U_1\to Y_1,\dots,Y_n$ \\[3pt] $W_2^{(n)}:U_2\to U_1,Y_1,\dots,Y_n$ \\[3pt] $W_3^{(n)}:U_3\to U_1,U_2,Y_1,\dots,Y_n$\\ \hspace*{1in} \vdots\\ $W_n^{(n)}:U_n\to U_1,\dots,U_{n-1},Y_1,\dots,Y_n$}; \node at (15, -0.4) {$n$ bit-channels induced by $\mathbf{G}_n^{\polar}$}; \end{tikzpicture} \caption{$U_1,\dots,U_n$ are $n=2^m$ i.i.d. Bernoulli-$1/2$ random variables. $(X_1,\dots,X_n)=(U_1,\dots,U_n) \mathbf{G}_n^{\polar}$ is the codeword vector, and $(Y_1,\dots,Y_n)$ is the channel output vector. The $n$ bit-channels induced by $\mathbf{G}_n^{\polar}$ are listed on the right side of the figure. 
$W_i^{(n)}$ is the bit-channel mapping from $U_i$ to $U_1,\dots,U_{i-1},Y_1,\dots,Y_n$.} \label{fig:bit_channels_polar} \end{figure} \subsection{Tracking the evolution of adjacent bits in standard polar codes using a new transform} \label{sect:DB} In the construction of ABS polar codes, we need to track the joint distribution of every pair of adjacent bits, not just the distribution of every single bit given the previous bits and channel outputs. To that end, we introduce a new transform, called the Double-Bits (DB) polar transform. All the channels involved in the DB polar transform have $4$-ary inputs. To distinguish between binary-input channels and $4$-ary-input channels, we use $W$ to denote the former channels and use $V$ to denote the latter channels\footnote{More precisely, $W$ and its variations such as $W^+,W^-,W_i^{(n)}$ are used for binary-input channels; $V$ and its variations such as $V^{\triangledown},V^{\lozenge},V^{\vartriangle},V_i^{(n)}$ are used for channels with $4$-ary inputs.}. The details of the DB polar transform are illustrated in Fig.~\ref{fig:DBpt}.
Given a $4$-ary-input channel $V:\{0,1\}^2\to\mathcal{Y}$, the transition probabilities of $V^{\triangledown}:\{0,1\}^2\to\mathcal{Y}^2,V^{\lozenge}:\{0,1\}^2\to\{0,1\}\times\mathcal{Y}^2$, and $V^{\vartriangle}:\{0,1\}^2\to\{0,1\}^2\times\mathcal{Y}^2$ in Fig.~\ref{fig:DBpt} are given by \begin{equation} \label{eq:DBpt} \begin{aligned} & V^{\triangledown}(y_1,y_2|u_1,u_2)=\frac{1}{4}\sum_{u_3,u_4\in\{0,1\}} V(y_1|u_1+u_2,u_3+u_4) V(y_2|u_2,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}, \\ & V^{\lozenge}(u_1,y_1,y_2|u_2,u_3)=\frac{1}{4}\sum_{u_4\in\{0,1\}} V(y_1|u_1+u_2,u_3+u_4) V(y_2|u_2,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2,u_3\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}, \\ & V^{\vartriangle}(u_1,u_2,y_1,y_2|u_3,u_4)=\frac{1}{4} V(y_1|u_1+u_2,u_3+u_4) V(y_2|u_2,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2,u_3,u_4\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}. \end{aligned} \end{equation} \begin{figure}[ht] \centering \begin{subfigure}{.53\linewidth} \centering \begin{tikzpicture} \draw node at (0,10.5) [] (u1) {$U_1$} node at (0,9.5) [] (u2) {$U_2$} node at (1.5,10.5) [XOR,scale=1.2] (x1) {} node at (4.1,10.5) (v1) {} node at (4.1,9.5) (v2) {} node at (4.5,10) (t1) {} node at (4.3,10) [vblock] (76) {$V$} node at (6,10) [] (y1) {$Y_1$}; \draw[fill] (1.5, 9.5) circle (.6ex); \draw node at (0,8.5) [] (u3) {$U_3$} node at (0,7.5) [] (u4) {$U_4$} node at (1.5,8.5) [XOR,scale=1.2] (x3) {} node at (4.1,8.5) (v3) {} node at (4.1,7.5) (v4) {} node at (4.5,8) (t3) {} node at (4.3,8) [vblock] (83) {$V$} node at (6,8) [] (y3) {$Y_2$}; \draw[fill] (1.5, 7.5) circle (.6ex); \draw[very thick,->](u1) -- node {}(x1); \draw[very thick,->](u2) -| node {}(x1); \draw[very thick,->](x1) -- (v1); \draw[very thick,->](u2) -- (2.2,9.5) -- (3.3,8.5) -- (v3); \draw[very thick,->](t1) -- node {}(y1); \draw[very thick,->](x3) -- (2.2,8.5) -- (3.3,9.5) -- (v2); \draw[very thick,->](u4) -| node {}(x3); \draw[very thick,->](u3) -- node 
{}(x3); \draw[very thick,->](u4) -- node {}(v4); \draw[very thick,->](t3) -- node {}(y3); \end{tikzpicture} \caption{$U_1,U_2,U_3,U_4$ are i.i.d. Bernoulli-$1/2$ random variables. The channel $V:\{0,1\}^2\to \mathcal{Y}$ takes two bits as its inputs, i.e., $V$ has $4$-ary inputs. Under the successive decoder, we have the following three channels: (1) $V^{\triangledown}:U_1,U_2\to Y_1,Y_2$; (2) $V^{\lozenge}:U_2,U_3\to U_1,Y_1,Y_2$; (3) $V^{\vartriangle}:U_3,U_4\to U_1,U_2,Y_1,Y_2$.} \label{fig:DBpt_a} \end{subfigure} \hfill \begin{subfigure}{.43\linewidth} \centering \begin{tikzpicture} \draw node at (0,1.5) [] (u1) {$V$} node at (0,0) [] (u2) {$V$} node at (1.4,1.5) [] (v1) {} node at (1.4,0) [] (v2) {} node at (3.5,2.2) [] (x1) {} node at (3.5,0.75) [] (x2) {} node at (3.5,-0.7) [] (x3) {} node at (5.2,2.2) [] (y1) {$V^{\triangledown}$} node at (5.2,0.75) [] (y2) {$V^{\lozenge}$} node at (5.2,-0.7) [] (y3) {$V^{\vartriangle}$} node at (2.45,0.75) [lgblock, align=center] {DB polar\\Transform}; \draw[very thick,->](u1) -- node {}(v1); \draw[very thick,->](u2) -- node {}(v2); \draw[very thick,->](x1) -- node[above] {``$\triangledown$"}(y1); \draw[very thick,->](x2) -- node [above] {``$\lozenge$"} (y2); \draw[very thick,->](x3) -- node [above] {``$\vartriangle$"} (y3); \end{tikzpicture} \caption{Two independent copies of $V$ are transformed into three channels $V^{\triangledown},V^{\lozenge},V^{\vartriangle}$. These three channels also have $4$-ary inputs. Note that the inputs of $V^{\triangledown}$ and $V^{\lozenge}$ have one-bit overlap, and the inputs of $V^{\lozenge}$ and $V^{\vartriangle}$ also have one-bit overlap.} \end{subfigure} \caption{The Double-Bits (DB) polar transform} \label{fig:DBpt} \end{figure} The role of the DB polar transform in the construction of ABS polar codes is the same as the role of the $2\times 2$ basic polar transform in the standard polar code construction. 
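The three formulas in \eqref{eq:DBpt} translate directly into code. The sketch below (our own illustration, with channels stored as dictionaries $V[(a,b)][y]=V(y|a,b)$) applies the DB polar transform and verifies the chain-rule identity $I(V^{\triangledown})+I(V^{\vartriangle})=2I(V)$, which holds because $(U_1,U_2,U_3,U_4)$ is in bijection with the two channel input pairs in Fig.~\ref{fig:DBpt_a}:

```python
from math import log2

BITS = (0, 1)

def db_transform(V):
    """DB polar transform applied to a 4-ary-input channel V, stored as
    V[(a, b)][y] = V(y | a, b).  Returns (V_down, V_mid, V_up), corresponding
    to the triangle-down, lozenge, and triangle-up channels of eq. (DBpt)."""
    Vd = {(a, b): {} for a in BITS for b in BITS}
    Vm = {(a, b): {} for a in BITS for b in BITS}
    Vu = {(a, b): {} for a in BITS for b in BITS}
    ys = list(V[(0, 0)])
    for y1 in ys:
        for y2 in ys:
            for u1 in BITS:
                for u2 in BITS:
                    for u3 in BITS:
                        for u4 in BITS:
                            p = 0.25 * V[((u1 + u2) % 2, (u3 + u4) % 2)][y1] * V[(u2, u4)][y2]
                            # V^down sums over (u3, u4); V^mid sums over u4 only.
                            Vd[(u1, u2)][(y1, y2)] = Vd[(u1, u2)].get((y1, y2), 0.0) + p
                            Vm[(u2, u3)][(u1, y1, y2)] = Vm[(u2, u3)].get((u1, y1, y2), 0.0) + p
                            Vu[(u3, u4)][(u1, u2, y1, y2)] = p
    return Vd, Vm, Vu

def mutual_info(V):
    """I(input; output) of a channel dict under the uniform input distribution."""
    xs = list(V)
    p = 1.0 / len(xs)
    out = {}
    for x in xs:
        for y, q in V[x].items():
            out[y] = out.get(y, 0.0) + p * q
    return sum(p * q * log2(q / out[y])
               for x in xs for y, q in V[x].items() if q > 0)

# Start from the pair-channel of a BSC(0.1): V(y1,y2 | u1,u2) = W(y1|u1+u2) W(y2|u2).
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
V = {(a, b): {(y1, y2): W[(a + b) % 2][y1] * W[b][y2]
              for y1 in BITS for y2 in BITS} for a in BITS for b in BITS}
Vd, Vm, Vu = db_transform(V)
assert abs(mutual_info(Vd) + mutual_info(Vu) - 2 * mutual_info(V)) < 1e-9
```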
Instead of jumping directly into the ABS polar code construction, let us first use standard polar codes to illustrate how to track the evolution of adjacent bits recursively using the DB polar transform. In order to calculate the joint distribution of adjacent bits, we introduce the notion of adjacent-bits-channels, which is the counterpart of the bit-channels used for tracking the distribution of every single bit. We still use the setting in Fig.~\ref{fig:bit_channels_polar}, where we defined the bit-channels. For the matrix $\mathbf{G}_n^{\polar}$ and a BMS channel $W$, we define $n-1$ adjacent-bits-channels $V_1^{(n)},V_2^{(n)},\dots,V_{n-1}^{(n)}$ as follows: \begin{equation} \label{eq:st_adc} V_i^{(n)}: U_i,U_{i+1} \to U_1,\dots,U_{i-1},Y_1,\dots,Y_n \text{~~for~} 1\le i\le n-1, \end{equation} where $U_1,\dots,U_n,Y_1,\dots,Y_n$ are defined in Fig.~\ref{fig:bit_channels_polar}. By definition, $V_1^{(n)},V_2^{(n)},\dots,V_{n-1}^{(n)}$ take two bits as their inputs, i.e., all of them have $4$-ary inputs. Moreover, these adjacent-bits-channels depend on the BMS channel $W$, although we omit this dependence in the notation. The following lemma allows us to calculate $V_1^{(n)},V_2^{(n)},\dots,V_{n-1}^{(n)}$ recursively from $V_1^{(n/2)},V_2^{(n/2)},\dots, \linebreak[4] V_{n/2-1}^{(n/2)}$. \begin{lemma} \label{lemma:recur_ST_DB} Let $n\ge 4$. We have \begin{equation} \label{eq:abc} V_{2i-1}^{(n)} = (V_i^{(n/2)})^\triangledown, \quad V_{2i}^{(n)} = (V_i^{(n/2)})^\lozenge, \quad V_{2i+1}^{(n)} = (V_i^{(n/2)})^\vartriangle \quad \text{for~} 1\le i\le n/2-1. \end{equation} \end{lemma} The proof of Lemma~\ref{lemma:recur_ST_DB} is given in Appendix~\ref{ap:lm1}. The relation \eqref{eq:abc} is similar in nature to the relation \eqref{eq:recur_bit_channels}, and the proof of \eqref{eq:abc} also uses the same method as the proof of \eqref{eq:recur_bit_channels}. 
There is, however, one difference between these two recursive relations: The ``$+$" and ``$-$" transforms of different bit-channels are distinct while the ``$\triangledown$", ``$\lozenge$" and ``$\vartriangle$" transforms of different adjacent-bits-channels may overlap. More precisely, the $n/2$ sets $\{(W_i^{(n/2)})^-,(W_i^{(n/2)})^+\}_{i=1}^{n/2}$ are disjoint while the two sets $\{(V_i^{(n/2)})^\triangledown,(V_i^{(n/2)})^\lozenge,(V_i^{(n/2)})^\vartriangle\}$ and $\{(V_{i+1}^{(n/2)})^\triangledown,(V_{i+1}^{(n/2)})^\lozenge,(V_{i+1}^{(n/2)})^\vartriangle\}$ have the following element in their intersection for every $1\le i\le n/2-2$: \begin{equation} \label{eq:2w} V_{2i+1}^{(n)} = (V_i^{(n/2)})^\vartriangle = (V_{i+1}^{(n/2)})^\triangledown. \end{equation} This gives us two methods of calculating $V_{2i+1}^{(n)}$ recursively for $1\le i\le n/2-2$. Lemma~\ref{lemma:recur_ST_DB} tells us how to calculate $\{V_i^{(n)}\}_{i=1}^{n-1}$ from $\{V_i^{(n/2)}\}_{i=1}^{n/2-1}$ recursively for $n\ge 4$. The last question we need to answer is how to calculate the adjacent-bits-channel $V_1^{(2)}$ from the BMS channel $W$, because $V_1^{(2)}$ is the starting point of the recursive relation in Lemma~\ref{lemma:recur_ST_DB}. Fortunately, this is an easy task. Let us go back to the setting in Fig.~\ref{fig:polar_transform}. Given a BMS channel $W$, the adjacent-bits-channel $V_1^{(2)}$ is simply the channel mapping from $U_1,U_2$ to $Y_1,Y_2$. More precisely, we have \begin{equation} \label{eq:v_init} V_1^{(2)}(y_1,y_2|u_1,u_2)=W(y_1|u_1+u_2) W(y_2|u_2). \end{equation} After obtaining the transition probabilities of the adjacent-bits-channels $\{V_i^{(n)}\}_{i=1}^{n-1}$, it is straightforward to calculate the transition probabilities of the bit-channels $\{W_i^{(n)}\}_{i=1}^n$. 
More precisely, we have \begin{equation} \label{eq:v_to_w} \begin{aligned} & W_i^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{i-1}|u_i) = \frac{1}{2} \sum_{u_{i+1}\in\{0,1\}} V_i^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{i-1}|u_i,u_{i+1}), \\ & W_{i+1}^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_i|u_{i+1}) = \frac{1}{2} V_i^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{i-1}|u_i,u_{i+1}) \end{aligned} \end{equation} for $1\le i\le n-1$. As a final remark, we note that the output alphabet size of the adjacent-bits-channels $\{V_i^{(n)}\}_{i=1}^{n-1}$ grows exponentially with $n$. Therefore, accurate calculations of $\{V_i^{(n)}\}_{i=1}^{n-1}$ are intractable. We need to quantize the output alphabets by merging output symbols with similar posterior distributions. Recall that in the standard polar code construction \cite{Tal13}, we also need the quantization operation to calculate an approximation of the bit-channels $\{W_i^{(n)}\}_{i=1}^n$. Our quantization method is different from the one used in \cite{Tal13} because the adjacent-bits-channels have $4$-ary inputs while the bit-channels have binary inputs. We will present our quantization method later in Section~\ref{sect:quantization}. \subsection{Tracking the evolution of adjacent bits in ABS polar codes} \label{sect:SDB} As discussed at the beginning of this section, the construction of ABS polar codes consists of two main steps. The first step is to pick the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in the recursive relation \eqref{eq:GnABS}, and the second step is to find which bits are information bits and which bits are frozen bits after picking these permutation matrices. In this subsection, we explain how to accomplish the second step. More precisely, we define the bit-channels and the adjacent-bits-channels for ABS polar codes in Fig.~\ref{fig:abc_in_ABS}. 
The task of this subsection is to show how to calculate the capacity of the bit-channels $\{W_i^{(n),\ABS}\}_{i=1}^n$ when the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in \eqref{eq:GnABS} are known. Then the information bits are simply the $U_i$'s satisfying that $I(W_i^{(n),\ABS})\approx 1$, where $I(\cdot)$ is the channel capacity. Unlike the standard polar codes, there does not exist a recursive relation between the bit-channels $\{W_i^{(n),\ABS}\}_{i=1}^n$ and $\{W_i^{(n/2),\ABS}\}_{i=1}^{n/2}$ for ABS polar codes. Instead, we derive a recursive relation between the adjacent-bits-channels $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ and $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$. After that, the transition probabilities of $\{W_i^{(n),\ABS}\}_{i=1}^n$ can be calculated from the transition probabilities of $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$. \begin{figure}[ht] \centering \begin{tikzpicture} \node [block, align=center] at (-1,1.6) (y-1) { $U_1$ \\[0.5em] $U_2$ \\[0.5em] \vdots \\[0.5em] $U_n$ }; \node [sblock, align=center] at (1,1.6) (y0) {$\mathbf{P}_n^{\ABS}$}; \node [block, align=center] at (3,1.6) (y1) { $\widehat{U}_1$ \\[0.5em] $\widehat{U}_2$ \\[0.5em] \vdots \\[0.5em] $\widehat{U}_n$ }; \node [sblock, align=center] at (5,1.6) (y2) {$\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar}$}; \node [block, align=center] at (7,1.6) (y3) { $X_1$ \\[0.5em] $X_2$ \\[0.5em] \vdots \\[0.5em] $X_n$ }; \node [block] at (8.5, 2.7) (w1) {$W$}; \node [block] at (8.5, 2) (w2) {$W$}; \node at (8.5, 1.3) {\vdots}; \node [block] at (8.5, 0.6) (w3) {$W$}; \node at (10, 2.7) (z1) {$Y_1$}; \node at (10, 2) (z2) {$Y_2$}; \node at (10, 1.3) {\vdots}; \node at (10, 0.6) (z3) {$Y_n$}; \draw[->,thick] (y-1)--(y0); \draw[->,thick] (y0)--(y1); \draw[->,thick] (y1)--(y2); \draw[->,thick] (y2)--(y3); \draw[->,thick] (w1)--(z1); \draw[->,thick] (w2)--(z2); \draw[->,thick] (w3)--(z3); \draw[->,thick] (7.4, 2.7)--(w1); \draw[->,thick] (7.4, 
2)--(w2); \draw[->,thick] (7.4, 0.6)--(w3); \node at (4.5, -0.7) [align=center] (p1) {$(\widehat{U}_1,\dots,\widehat{U}_n)=(U_1,\dots,U_n)\mathbf{P}_n^{\ABS}, \quad (X_1,\dots,X_n)=(\widehat{U}_1,\dots,\widehat{U}_n) (\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar})$ \\[0.4em] $(X_1,\dots,X_n)=(U_1,\dots,U_n)\mathbf{P}_n^{\ABS}(\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar})=(U_1,\dots,U_n)\mathbf{G}_n^{\ABS}$}; \node at (4, -3.2) [align=center] (nbc1) {Two sets of bit-channels\\ $\{W_i^{(n),\ABS}:U_i\to U_1,\dots,U_{i-1},Y_1,\dots,Y_n\}_{i=1}^n$\\[0.1em] $\{\widehat{W}_i^{(n),\ABS}:\widehat{U}_i\to \widehat{U}_1,\dots,\widehat{U}_{i-1},Y_1,\dots,Y_n\}_{i=1}^n$ \\[0.6em] Two sets of adjacent-bits-channels\\ $\{V_i^{(n),\ABS}:U_i,U_{i+1}\to U_1,\dots,U_{i-1},Y_1,\dots,Y_n\}_{i=1}^{n-1}$\\[0.1em] $\{\widehat{V}_i^{(n),\ABS}:\widehat{U}_i,\widehat{U}_{i+1}\to \widehat{U}_1,\dots,\widehat{U}_{i-1},Y_1,\dots,Y_n\}_{i=1}^{n-1}$}; \end{tikzpicture} \caption{$U_1,\dots,U_n$ are $n=2^m$ i.i.d. Bernoulli-$1/2$ random variables. $(X_1,\dots,X_n)=(U_1,\dots,U_n) \mathbf{G}_n^{\ABS}$ is the codeword vector, and $(Y_1,\dots,Y_n)$ is the channel output vector. We view each Kronecker product with $\mathbf{G}_2^{\polar}$ as one layer of polar transform and view each multiplication with a permutation matrix as one layer of permutation. Then $\mathbf{G}_n^{\ABS}$ is obtained from $m$ layers of polar transforms and $m$ layers of permutations while $\mathbf{G}_{n/2}^{\ABS} \otimes \mathbf{G}_2^{\polar}$ is obtained from $m$ layers of polar transforms and $m-1$ layers of permutations. Therefore, $\{W_i^{(n),\ABS}\}_{i=1}^n$ and $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ are the bit-channels and adjacent-bits-channels seen by the successive decoder after $m$ layers of polar transforms and $m$ layers of permutations. 
Similarly, $\{\widehat{W}_i^{(n),\ABS}\}_{i=1}^n$ and $\{\widehat{V}_i^{(n),\ABS}\}_{i=1}^{n-1}$ are the bit-channels and adjacent-bits-channels seen by the successive decoder after $m$ layers of polar transforms and $m-1$ layers of permutations.} \label{fig:abc_in_ABS} \end{figure} In order to derive the recursive relation between the adjacent-bits-channels for ABS polar codes, we need another new transform, called the Swapped-Double-Bits (SDB) polar transform, in addition to the DB polar transform defined in \eqref{eq:DBpt}. The details of the SDB polar transform are illustrated in Fig.~\ref{fig:SDBpt}. In fact, the SDB polar transform is very similar to the DB polar transform. By comparing Fig.~\ref{fig:DBpt_a} and Fig.~\ref{fig:SDBpt_a}, we can see that the only difference between these two transforms is the order of $U_2$ and $U_3$. Given a $4$-ary-input channel $V:\{0,1\}^2\to\mathcal{Y}$, the transition probabilities of $V^{\blacktriangledown}:\{0,1\}^2\to\mathcal{Y}^2,V^{\blacklozenge}:\{0,1\}^2\to\{0,1\}\times\mathcal{Y}^2$, and $V^{\blacktriangle}:\{0,1\}^2\to\{0,1\}^2\times\mathcal{Y}^2$ in Fig.~\ref{fig:SDBpt} are given by \begin{equation} \begin{aligned} & V^{\blacktriangledown}(y_1,y_2|u_1,u_2)=\frac{1}{4}\sum_{u_3,u_4\in\{0,1\}} V(y_1|u_1+u_3,u_2+u_4) V(y_2|u_3,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}, \\ & V^{\blacklozenge}(u_1,y_1,y_2|u_2,u_3)=\frac{1}{4}\sum_{u_4\in\{0,1\}} V(y_1|u_1+u_3,u_2+u_4) V(y_2|u_3,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2,u_3\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}, \\ & V^{\blacktriangle}(u_1,u_2,y_1,y_2|u_3,u_4)=\frac{1}{4} V(y_1|u_1+u_3,u_2+u_4) V(y_2|u_3,u_4) \\ & \hspace*{2.7in} \text{~for~} u_1,u_2,u_3,u_4\in\{0,1\} \text{~and~} y_1,y_2\in\mathcal{Y}.
\end{aligned} \end{equation} \begin{figure}[ht] \centering \begin{subfigure}{.53\linewidth} \centering \begin{tikzpicture} \draw node at (-1.5,10.5) [] (u1) {$U_1$} node at (-1.5,9.5) [] (u2) {$U_2$} node at (1.5,10.5) [XOR,scale=1.2] (x1) {} node at (4.1,10.5) (v1) {} node at (4.1,9.5) (v2) {} node at (4.5,10) (t1) {} node at (4.3,10) [vblock] (76) {$V$} node at (6,10) [] (y1) {$Y_1$}; \draw[fill] (1.5, 9.5) circle (.6ex); \draw node at (-1.5,8.5) [] (u3) {$U_3$} node at (-1.5,7.5) [] (u4) {$U_4$} node at (1.5,8.5) [XOR,scale=1.2] (x3) {} node at (4.1,8.5) (v3) {} node at (4.1,7.5) (v4) {} node at (4.5,8) (t3) {} node at (4.3,8) [vblock] (83) {$V$} node at (6,8) [] (y3) {$Y_2$}; \draw[fill] (1.5, 7.5) circle (.6ex); \draw[very thick,->](u1) -- node {}(x1); \draw[very thick,->](0.6, 9.5) -| node {}(x1); \draw[very thick,->](x1) -- (v1); \draw[very thick,->](u3) -- (-0.4, 8.5) -- (0.6, 9.5) -- (2.2,9.5) -- (3.3,8.5) -- (v3); \draw[very thick,->](t1) -- node {}(y1); \draw[very thick,->](x3) -- (2.2,8.5) -- (3.3,9.5) -- (v2); \draw[very thick,->](u4) -| node {}(x3); \draw[very thick,->](u2) -- (-0.4, 9.5) -- (0.6, 8.5) -- (x3); \draw[very thick,->](u4) -- (v4); \draw[very thick,->](t3) -- (y3); \end{tikzpicture} \caption{$U_1,U_2,U_3,U_4$ are i.i.d. Bernoulli-$1/2$ random variables. The channel $V:\{0,1\}^2\to \mathcal{Y}$ takes two bits as its inputs, i.e., $V$ has $4$-ary inputs. 
Under the successive decoder, we have the following three channels: (1) $V^{\blacktriangledown}:U_1,U_2\to Y_1,Y_2$; (2) $V^{\blacklozenge}:U_2,U_3\to U_1,Y_1,Y_2$; (3) $V^{\blacktriangle}:U_3,U_4\to U_1,U_2,Y_1,Y_2$.} \label{fig:SDBpt_a} \end{subfigure} \hfill \begin{subfigure}{.43\linewidth} \centering \begin{tikzpicture} \draw node at (0,1.5) [] (u1) {$V$} node at (0,0) [] (u2) {$V$} node at (1.4,1.5) [] (v1) {} node at (1.4,0) [] (v2) {} node at (3.5,2.2) [] (x1) {} node at (3.5,0.75) [] (x2) {} node at (3.5,-0.7) [] (x3) {} node at (5.2,2.2) [] (y1) {$V^{\blacktriangledown}$} node at (5.2,0.75) [] (y2) {$V^{\blacklozenge}$} node at (5.2,-0.7) [] (y3) {$V^{\blacktriangle}$} node at (2.45,0.75) [lgblock, align=center] {SDB polar\\Transform}; \draw[very thick,->](u1) -- node {}(v1); \draw[very thick,->](u2) -- node {}(v2); \draw[very thick,->](x1) -- node[above] {``$\blacktriangledown$"}(y1); \draw[very thick,->](x2) -- node [above] {``$\blacklozenge$"} (y2); \draw[very thick,->](x3) -- node [above] {``$\blacktriangle$"} (y3); \end{tikzpicture} \caption{Two independent copies of $V$ are transformed into three channels $V^{\blacktriangledown},V^{\blacklozenge},V^{\blacktriangle}$. These three channels also have $4$-ary inputs. Note that the inputs of $V^{\blacktriangledown}$ and $V^{\blacklozenge}$ have one-bit overlap, and the inputs of $V^{\blacklozenge}$ and $V^{\blacktriangle}$ also have one-bit overlap.} \end{subfigure} \caption{The Swapped-Double-Bits (SDB) polar transform} \label{fig:SDBpt} \end{figure} Recall that we use the set $\mathcal{I}^{(n)}=\{i_1,i_2,\dots,i_s\}$ to represent the permutation matrix $\mathbf{P}_n^{\ABS}$ in \eqref{eq:pi}. Moreover, we require that $i_1,i_2,\dots,i_s$ in the set $\mathcal{I}^{(n)}$ satisfy the condition \eqref{eq:separated} because otherwise there does not exist a recursive relation between the adjacent-bits-channels $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ and $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$. 
We will give a detailed explanation about this later in Section~\ref{sect:necess}. Here we point out another property of the elements $i_1,i_2,\dots,i_s$ in $\mathcal{I}^{(n)}$: they must all be even numbers. To see this, let us go back to the setting in Fig.~\ref{fig:abc_in_ABS}. The role of $\mathcal{I}^{(n)}$ is to decide which pairs of adjacent bits to swap in the vector $(\widehat{U}_1,\widehat{U}_2,\dots,\widehat{U}_n)$ defined in Fig.~\ref{fig:abc_in_ABS}. According to the discussion in Section~\ref{sect:reason}, we swap the adjacent bits $\widehat{U}_i$ and $\widehat{U}_{i+1}$ only if they are unordered, i.e., if $\widehat{U}_i$ is more reliable than $\widehat{U}_{i+1}$ under the successive decoder. In other words, we swap the adjacent bits $\widehat{U}_i$ and $\widehat{U}_{i+1}$ only if $I(\widehat{W}_i^{(n),\ABS})\ge I(\widehat{W}_{i+1}^{(n),\ABS})$, where the bit-channels $\widehat{W}_i^{(n),\ABS}$ and $\widehat{W}_{i+1}^{(n),\ABS}$ are also defined in Fig.~\ref{fig:abc_in_ABS}. Since $\{\widehat{W}_i^{(n),\ABS}\}_{i=1}^n$ are obtained from the $2\times 2$ basic polar transform of $\{W_i^{(n/2),\ABS}\}_{i=1}^{n/2}$, they satisfy the following relation: $$ \widehat{W}_{2i-1}^{(n),\ABS} = (W_i^{(n/2),\ABS})^- \text{~~and~~} \widehat{W}_{2i}^{(n),\ABS} = (W_i^{(n/2),\ABS})^+ \text{~~for~} 1\le i\le n/2. $$ Therefore, $$ I(\widehat{W}_{2i-1}^{(n),\ABS}) \le I(W_i^{(n/2),\ABS}) \le I(\widehat{W}_{2i}^{(n),\ABS}), $$ so we should not swap $\widehat{U}_{2i-1}$ and $\widehat{U}_{2i}$ for any $1\le i\le n/2$. Thus we conclude that the set $\mathcal{I}^{(n)}$ in \eqref{eq:pi} only contains even numbers. Therefore, the elements of $\mathcal{I}^{(n)}$ can be written as $\mathcal{I}^{(n)}=\{2j_1,2j_2,\dots,2j_s\}$, and the condition \eqref{eq:separated} becomes \begin{equation} \label{eq:separated_even} j_2\ge j_1+2, \quad j_3\ge j_2+2, \quad j_4\ge j_3+2, \quad \dots, \quad j_s\ge j_{s-1}+2. 
\end{equation} Now we are ready to state the recursive relation between $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ and $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$. \begin{lemma} \label{lemma:recur_ABS} Let $n\ge 4$. We write $\mathbf{P}_n^{\ABS}$ in the form of \eqref{eq:pi} and require that $\mathcal{I}^{(n)}=\{2j_1,2j_2,\dots,2j_s\}$ satisfies \eqref{eq:separated_even}. For $1\le i\le n/2 -1$, we have the following results: \noindent {\em \bf Case i)} If $2i\in\mathcal{I}^{(n)}$, then $$ V_{2i-1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\blacktriangledown, \quad V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\blacklozenge, \quad V_{2i+1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\blacktriangle. $$ \noindent {\em \bf Case ii)} If $2(i-1)\in\mathcal{I}^{(n)}$ and $2(i+1)\in\mathcal{I}^{(n)}$, then $$ V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\lozenge. $$ \noindent {\em \bf Case iii)} If $2(i-1)\in\mathcal{I}^{(n)}$ and $2(i+1)\notin\mathcal{I}^{(n)}$, then $$ V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\lozenge, \quad V_{2i+1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\vartriangle. $$ \noindent {\em \bf Case iv)} If $2(i-1)\notin\mathcal{I}^{(n)}$ and $2(i+1)\in\mathcal{I}^{(n)}$, then $$ V_{2i-1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\triangledown, \quad V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\lozenge. $$ \noindent {\em \bf Case v)} If $2(i-1)\notin\mathcal{I}^{(n)}$, $2i\notin\mathcal{I}^{(n)}$ and $2(i+1)\notin\mathcal{I}^{(n)}$, then $$ V_{2i-1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\triangledown, \quad V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\lozenge, \quad V_{2i+1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\vartriangle. $$ \end{lemma} Note that in a previous arXiv version and the ISIT version \cite{Li2022ISIT} of this paper, the statement of this lemma was not complete. In the previous versions, {\em \bf Case ii)} was missing, and the conditions in {\em \bf Case iii)} and {\em \bf Case iv)} were incomplete. The proof of Lemma~\ref{lemma:recur_ABS} is omitted because it is essentially the same as the proof of Lemma~\ref{lemma:recur_ST_DB}. 
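The case analysis in Lemma~\ref{lemma:recur_ABS} is straightforward to mechanize. The following Python sketch (our own illustrative code, not part of the formal development) returns, for each index $k$, which transform of which channel $V_i^{(n/2),\ABS}$ produces $V_k^{(n),\ABS}$. The string labels \texttt{B\_down}, \texttt{B\_loz}, \texttt{B\_up} stand for the filled symbols $\blacktriangledown,\blacklozenge,\blacktriangle$, and \texttt{down}, \texttt{loz}, \texttt{up} stand for the hollow symbols $\triangledown,\lozenge,\vartriangle$; where two equivalent recursive computations exist, the sketch simply keeps the one assigned last.

```python
def transforms_for_level(n, swapped):
    """Map each adjacent-bits-channel index k (1 <= k <= n-1) to the pair
    (i, label) such that V_k^(n) is obtained from V_i^(n/2) via the transform
    named by `label`, following Cases i)-v) of the lemma. `swapped` is the
    set I^(n) of (even) swap positions, assumed to satisfy the spacing
    condition (consecutive elements differ by at least 4)."""
    out = {}
    for i in range(1, n // 2):
        left = 2 * (i - 1) in swapped
        here = 2 * i in swapped
        right = 2 * (i + 1) in swapped
        if here:                                    # Case i): position 2i is swapped
            out[2*i - 1] = (i, "B_down")
            out[2*i]     = (i, "B_loz")
            out[2*i + 1] = (i, "B_up")
        elif left and right:                        # Case ii)
            out[2*i] = (i, "loz")
        elif left:                                  # Case iii)
            out[2*i]     = (i, "loz")
            out[2*i + 1] = (i, "up")
        elif right:                                 # Case iv)
            out[2*i - 1] = (i, "down")
            out[2*i]     = (i, "loz")
        else:                                       # Case v): no neighboring swap
            out[2*i - 1] = (i, "down")
            out[2*i]     = (i, "loz")
            out[2*i + 1] = (i, "up")
    return out
```

For instance, with $n=16$ and $\mathcal{I}^{(16)}=\{6,10\}$, the sketch reports that $V_5^{(16)}$ comes from $(V_3^{(8)})^\blacktriangledown$ and that $V_8^{(16)}$ falls under Case ii) and comes from $(V_4^{(8)})^\lozenge$.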
Here we point out one difference between Lemma~\ref{lemma:recur_ST_DB} and Lemma~\ref{lemma:recur_ABS}. Lemma~\ref{lemma:recur_ST_DB} tells us that $V_{2i+1}^{(n)}$ can be recursively calculated in two different ways for every $1\le i\le n/2-2$; see \eqref{eq:2w}. However, for $i\in\{j_1-1,j_2-1,\dots,j_s-1\}\cup\{j_1,j_2,\dots,j_s\}$, there is only one way to calculate $V_{2i+1}^{(n),\ABS}$ recursively. More precisely, if $i\in\{j_1-1,j_2-1,\dots,j_s-1\}$, then $V_{2i+1}^{(n),\ABS}$ can only be calculated from $V_{2i+1}^{(n),\ABS}=(V_{i+1}^{(n/2),\ABS})^\blacktriangledown$, and the relation $V_{2i+1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\vartriangle$ does {\em not} hold. Similarly, if $i\in\{j_1,j_2,\dots,j_s\}$, then $V_{2i+1}^{(n),\ABS}$ can only be calculated from $V_{2i+1}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\blacktriangle$, and the relation $V_{2i+1}^{(n),\ABS}=(V_{i+1}^{(n/2),\ABS})^\triangledown$ does {\em not} hold. Since we require $n\ge 4$ in Lemma~\ref{lemma:recur_ABS}, the starting point of the recursive relation in Lemma~\ref{lemma:recur_ABS} is $V_1^{(2),\ABS}$. It is easy to see that the permutation matrix $\mathbf{P}_2^{\ABS}$ is the identity matrix. Therefore, given a BMS channel $W$, the transition probability of $V_1^{(2),\ABS}$ is given by \begin{equation} \label{eq:v_abs_init} V_1^{(2),\ABS}(y_1,y_2|u_1,u_2)=W(y_1|u_1+u_2) W(y_2|u_2). \end{equation} Note that this is the same as \eqref{eq:v_init} for standard polar codes. After obtaining the transition probabilities of the adjacent-bits-channels $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$, we can use \eqref{eq:v_to_w} to calculate the transition probabilities of the bit-channels $\{W_i^{(n),\ABS}\}_{i=1}^n$. We only need to replace $W_i^{(n)},W_{i+1}^{(n)},V_i^{(n)}$ in \eqref{eq:v_to_w} with $W_i^{(n),\ABS},W_{i+1}^{(n),\ABS},V_i^{(n),\ABS}$. Once the transition probabilities of $\{W_i^{(n),\ABS}\}_{i=1}^n$ are known, we are able to determine which bits are information bits and which bits are frozen bits. 
\subsection{Constructing the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in \eqref{eq:GnABS}} \label{sect:cons_PnABS} We construct the permutation matrices in \eqref{eq:GnABS} one by one, starting from $\mathbf{P}_2^{\ABS}$. Therefore, the matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_{n/2}^{\ABS}$ are already known when we construct $\mathbf{P}_n^{\ABS}$. The method described in Section~\ref{sect:SDB} allows us to calculate the transition probabilities of the adjacent-bits-channels $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$ from $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_{n/2}^{\ABS}$. As a consequence, we know the transition probabilities of $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$ when constructing $\mathbf{P}_n^{\ABS}$. Since the set $\mathcal{I}^{(n)}=\{2j_1,2j_2,\dots,2j_s\}$ in \eqref{eq:pi} uniquely determines $\mathbf{P}_n^{\ABS}$, constructing $\mathbf{P}_n^{\ABS}$ is further equivalent to constructing the set $\mathcal{S}^*=\{j_1,j_2,\dots,j_s\}$, where the elements $j_1,j_2,\dots,j_s$ satisfy the condition \eqref{eq:separated_even}. Before presenting how to construct the set $\mathcal{S}^*$, let us introduce some notation. Suppose that $V:\{0,1\}^2\to \mathcal{Y}$ is an adjacent-bits-channel with $4$-ary inputs. Define two bit-channels $V_{\first}:\{0,1\}\to\mathcal{Y}$ and $V_{\second}:\{0,1\}\to \{0,1\}\times\mathcal{Y}$ as \begin{align*} V_{\first}(y|u_1)=\frac{1}{2}\sum_{u_2\in\{0,1\}} V(y|u_1,u_2) \text{~~and~~} V_{\second}(y,u_1|u_2)= \frac{1}{2} V(y|u_1,u_2). \end{align*} Comparing this with \eqref{eq:v_to_w}, we can see that if $V$ is $V_i^{(n)}$, then $V_{\first}$ is simply $W_i^{(n)}$, and $V_{\second}$ is $W_{i+1}^{(n)}$. Similarly, if $V$ is $V_i^{(n),\ABS}$, then $V_{\first}$ is simply $W_i^{(n),\ABS}$, and $V_{\second}$ is $W_{i+1}^{(n),\ABS}$. 
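To make these two marginalizations concrete, here is a small Python sketch with an illustrative data layout of our own choosing (a dict mapping each input pair $(u_1,u_2)$ to a list of transition probabilities over the outputs). It computes $V_{\first}$ and $V_{\second}$ from a table for $V$ and lets one check that both marginals are properly normalized channels.

```python
def marginals(V):
    """Form V_first and V_second from an adjacent-bits-channel V, given as a
    dict (u1, u2) -> [P(y | u1, u2) for each output index y]:
        V_first(y | u1)      = 1/2 * sum over u2 of V(y | u1, u2)
        V_second(y, u1 | u2) = 1/2 * V(y | u1, u2)
    (illustrative encoding; not the paper's implementation)."""
    M = len(V[(0, 0)])  # output alphabet size
    V_first = {u1: [0.5 * (V[(u1, 0)][y] + V[(u1, 1)][y]) for y in range(M)]
               for u1 in (0, 1)}
    # outputs of V_second are the pairs (y, u1)
    V_second = {u2: {(y, u1): 0.5 * V[(u1, u2)][y]
                     for u1 in (0, 1) for y in range(M)}
                for u2 in (0, 1)}
    return V_first, V_second
```

For instance, applying this to the channel $V_1^{(2),\ABS}$ of a binary symmetric channel, built as in \eqref{eq:v_abs_init}, one can verify that $V_{\first}(\cdot|u_1)$ and $V_{\second}(\cdot|u_2)$ each sum to $1$.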
Next we define \begin{align*} & I_{\first}(V):= I(V_{\first}) \text{~~and~~} I_{\second}(V) := I(V_{\second}), \\ & g(V):= I_{\first}(V) (1-I_{\first}(V)) + I_{\second}(V) (1-I_{\second}(V)). \end{align*} The function $g(V)$ measures the polarization level of the two bit-channels induced by $V$. In particular, $g(V)\approx 0$ means that the capacities of both bit-channels are very close to either $0$ or $1$. Finally, for $1\le i\le n/2-1$, we define $$ \texttt{Score}(i) := g\big( (V_i^{(n/2),\ABS})^\lozenge \big) - g\big( (V_i^{(n/2),\ABS})^\blacklozenge \big). $$ The interpretation of $\texttt{Score}(i)$ is as follows: According to Lemma~\ref{lemma:recur_ABS}, if $i\in\mathcal{S}^*$, then $V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\blacklozenge$; if $i\notin\mathcal{S}^*$, then $V_{2i}^{(n),\ABS} = (V_i^{(n/2),\ABS})^\lozenge$. Therefore, $g\big( (V_i^{(n/2),\ABS})^\blacklozenge \big)$ measures the polarization level of the two bit-channels $W_{2i}^{(n),\ABS}$ and $W_{2i+1}^{(n),\ABS}$ when we include $i$ in the set $\mathcal{S}^*$. Similarly, $g\big( (V_i^{(n/2),\ABS})^\lozenge \big)$ measures the polarization level of the two bit-channels $W_{2i}^{(n),\ABS}$ and $W_{2i+1}^{(n),\ABS}$ when we do not include $i$ in the set $\mathcal{S}^*$. If $\texttt{Score}(i)>0$, then including $i$ in the set $\mathcal{S}^*$ accelerates polarization. If $\texttt{Score}(i)<0$, then including $i$ in the set $\mathcal{S}^*$ slows down polarization, and in this case we should not include $i$ in $\mathcal{S}^*$. If we ignore the condition \eqref{eq:separated_even}, then we can simply choose the set $\mathcal{S}^*$ to be $\mathcal{S}^*=\{i:\texttt{Score}(i)>0\}$. However, as we will see in Section~\ref{sect:necess}, the condition \eqref{eq:separated_even} is crucial for us to calculate the transition probabilities of the adjacent-bits-channels, so it must be satisfied.
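Under the spacing constraint, choosing $\mathcal{S}^*$ is a maximum-weight selection problem in which no two chosen indices may be adjacent, and the dynamic program described next solves it exactly. A compact Python sketch (an illustrative rendering with our own function names, not the reference implementation):

```python
def best_swap_set(score):
    """Maximum-weight subset of {1, ..., len(score)} with pairwise distance
    at least 2, where score[i-1] plays the role of Score(i).
    Returns (max_total, chosen_set). Illustrative sketch only."""
    n = len(score)
    M = [0.0] * (n + 1)           # M[j] = best achievable total using indices 1..j
    S = [frozenset()] * (n + 1)   # S[j] = a set achieving M[j]
    for j in range(1, n + 1):
        skip = M[j - 1]                                   # do not take j
        take = score[j - 1] + (M[j - 2] if j >= 2 else 0.0)  # take j, forbid j-1
        if take > skip:
            M[j] = take
            S[j] = S[j - 2].union({j}) if j >= 2 else frozenset({j})
        else:
            M[j] = skip
            S[j] = S[j - 1]
    return M[n], set(S[n])
```

For example, with scores $(1.0, 2.0, 1.5)$ the best admissible choice is $\{1,3\}$ with total $2.5$, beating the single index $\{2\}$ with total $2.0$.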
As a consequence, we need to find a set $\mathcal{S}^*\subseteq\{1,2,\dots,n/2-1\}$ to maximize $\sum_{i\in\mathcal{S}^*}\texttt{Score}(i)$ under the constraint that the distance between any two distinct elements of $\mathcal{S}^*$ must be at least $2$. In other words, we need to solve the following optimization problem: \begin{equation} \label{eq:argmaxS} \begin{aligned} \mathcal{S}^*= & \argmax_{\mathcal{S}\subseteq\{1,2,\dots,n/2-1\}} \sum_{i\in\mathcal{S}}\texttt{Score}(i) \\ & \text{subject to: } |i_1-i_2|\ge 2 \text{~for all~} i_1,i_2\in\mathcal{S} \text{~such that~} i_1\neq i_2. \end{aligned} \end{equation} This problem can be solved using a dynamic programming method. For $1\le j\le n/2-1$, define \begin{align*} \mathcal{S}_j^*= & \argmax_{\mathcal{S}\subseteq\{1,2,\dots,j\}} \sum_{i\in\mathcal{S}}\texttt{Score}(i) \\ & \text{subject to: } |i_1-i_2|\ge 2 \text{~for all~} i_1,i_2\in\mathcal{S} \text{~such that~} i_1\neq i_2, \\ M_j= & \max_{\mathcal{S}\subseteq\{1,2,\dots,j\}} \sum_{i\in\mathcal{S}}\texttt{Score}(i) \\ & \text{subject to: } |i_1-i_2|\ge 2 \text{~for all~} i_1,i_2\in\mathcal{S} \text{~such that~} i_1\neq i_2. \end{align*} By definition, we can see that $M_1\le M_2\le M_3\le \dots \le M_{n/2-1}$. The sets $\mathcal{S}_1^*,\mathcal{S}_2^*$ and the maximum values $M_1,M_2$ can be calculated as follows: If $\texttt{Score}(1)>0$, then $\mathcal{S}_1^*=\{1\}$ and $M_1=\texttt{Score}(1)$. If $\texttt{Score}(1)\le 0$, then $\mathcal{S}_1^*=\emptyset$ and $M_1=0$. If $\texttt{Score}(2)>M_1$, then $\mathcal{S}_2^*=\{2\}$ and $M_2=\texttt{Score}(2)$. If $\texttt{Score}(2)\le M_1$, then $\mathcal{S}_2^*=\mathcal{S}_1^*$ and $M_2=M_1$. For $j\ge 3$, the set $\mathcal{S}_j^*$ and the maximum value $M_j$ can be calculated recursively as follows: If $\texttt{Score}(j)+M_{j-2}>M_{j-1}$, then $\mathcal{S}_j^*=\mathcal{S}_{j-2}^*\cup\{j\}$ and $M_j=\texttt{Score}(j)+M_{j-2}$. 
If $\texttt{Score}(j)+M_{j-2}\le M_{j-1}$, then $\mathcal{S}_j^*=\mathcal{S}_{j-1}^*$ and $M_j=M_{j-1}$. This dynamic programming algorithm allows us to calculate $\mathcal{S}_j^*$ for every $1\le j\le n/2-1$. In particular, we are able to calculate $\mathcal{S}_{n/2-1}^*=\mathcal{S}^*$, which is the set we want to construct. Once we know the set $\mathcal{S}^*=\{j_1,j_2,\dots,j_s\}$, we can immediately write out the set $\mathcal{I}^{(n)}=\{2j_1,2j_2,\dots,2j_s\}$ and obtain the corresponding permutation matrix $\mathbf{P}_n^{\ABS}$ according to \eqref{eq:pi}. As a final remark, we note that $\mathbf{P}_2^{\ABS}$ is always the identity matrix. However, for $n\ge 4$, the permutation matrix $\mathbf{P}_n^{\ABS}$ depends on the underlying BMS channel $W$. \subsection{Quantization of the output alphabet} \label{sect:quantization} \begin{algorithm}[H] \DontPrintSemicolon \caption{\texttt{QuantizeChannel}$(\mu,V)$} \label{algo:quantization} \KwIn{an upper bound $\mu$ on the output alphabet size after quantization; an adjacent-bits-channel $V$ with outputs $y_1,y_2,\dots,y_M$} \vspace*{0.05in} \KwOut{quantized channel $\widetilde{V}$ with outputs $\{\tilde{y}_{i_1,i_2,i_3}: 0\le i_1,i_2,i_3\le b\}$} \vspace*{0.05in} \If{$M\le \mu$} { \vspace*{0.05in} Set $\widetilde{V}$ to be the same as $V$ } \Else { $b\gets \lfloor\mu^{1/3}\rfloor - 1$ \vspace*{0.05in} Set $\widetilde{V}(\tilde{y}_{i_1,i_2,i_3}|(u_1,u_2))=0$ for all $0\le i_1,i_2,i_3\le b$ and all $u_1,u_2\in\{0,1\}$ \Comment{Initialize all the transition probabilities of $\widetilde{V}$ as $0$} \For{$j=1,2,\dots,M$} { $sum\gets V(y_j|(0,0))+V(y_j|(0,1))+V(y_j|(1,0))+V(y_j|(1,1))$ \vspace*{0.05in} $p_1\gets \frac{V(y_j|(0,0))}{sum}$, \quad $p_2\gets \frac{V(y_j|(0,1))}{sum}$, \quad $p_3\gets \frac{V(y_j|(1,0))}{sum}$ \Comment{Calculate the posterior probability of $y_j$} $i_1\gets \lfloor b p_1 \rfloor$, \quad $i_2\gets \lfloor b p_2 \rfloor$, \quad $i_3\gets \lfloor b p_3 \rfloor$ \vspace*{0.05in} 
$\widetilde{V}(\tilde{y}_{i_1,i_2,i_3}|(u_1,u_2))\gets \widetilde{V}(\tilde{y}_{i_1,i_2,i_3}|(u_1,u_2)) + V(y_j|(u_1,u_2))$ for all $u_1,u_2\in\{0,1\}$ \Comment{Merge $y_j$ into $\tilde{y}_{i_1,i_2,i_3}$} } } \Return $\widetilde{V}$ \end{algorithm} An important step in the construction of standard polar codes is to quantize the output alphabets of the bit-channels $\{W_i^{(n)}\}_{i=1}^n$ because the output alphabet size grows exponentially with the code length $n$. The most widely used quantization method for binary-input standard polar codes was given in \cite{Tal13}, where the main idea is to merge output symbols with similar posterior distributions using a greedy algorithm. This greedy algorithm was later generalized to construct polar codes with non-binary input alphabets \cite{TSV12,Pereg17,Gulcu18}. The time complexity of the greedy quantization algorithm is $O(\mu^2\log \mu)$, where $\mu$ is the maximum size of the output alphabet after quantization. Since there are $2n-1$ bit-channels we need to quantize in the code construction procedure, the overall time complexity of standard polar code construction is $O(n\mu^2\log \mu)$. In the ABS polar code construction, the output alphabet size of the adjacent-bits-channels $\{V_i^{(n)}\}_{i=1}^{n-1}$ also grows exponentially with $n$, and quantization is also needed. Since the adjacent-bits-channels have $4$-ary inputs, we can simply use the greedy quantization algorithms proposed in \cite{TSV12,Pereg17,Gulcu18} for polar codes with non-binary inputs. However, in practical implementations, we found that these greedy algorithms for non-binary inputs usually involve large implicit constants in their time complexity. Therefore, we propose a new quantization algorithm to merge the output symbols of the adjacent-bits-channels $\{V_i^{(n)}\}_{i=1}^{n-1}$. The time complexity of our new quantization algorithm is $O(\mu^2)$.
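The binning idea behind Algorithm~\ref{algo:quantization} can be summarized in a few lines of Python (a simplified sketch with our own data layout: one row of four transition probabilities per output symbol). Outputs whose posterior vectors fall into the same cell of a $(b+1)^3$ grid are merged, so the quantized alphabet never exceeds $\mu$ symbols; unlike the pseudocode, the sketch only materializes nonempty bins.

```python
from math import floor

def quantize_channel(mu, V):
    """Posterior-binning quantizer in the spirit of Algorithm QuantizeChannel.
    V is a list of rows [P(y|00), P(y|01), P(y|10), P(y|11)], one row per
    output symbol y. Returns the merged rows (illustrative sketch only)."""
    if len(V) <= mu:          # small alphabet: nothing to do
        return list(V)
    b = floor(mu ** (1 / 3)) - 1
    bins = {}                 # (i1, i2, i3) -> accumulated transition-probability row
    for row in V:
        s = sum(row)
        # posterior of y under uniform inputs (the fourth coordinate is implied)
        p1, p2, p3 = row[0] / s, row[1] / s, row[2] / s
        key = (floor(b * p1), floor(b * p2), floor(b * p3))
        acc = bins.setdefault(key, [0.0, 0.0, 0.0, 0.0])
        for u in range(4):    # merge y into the bin's representative symbol
            acc[u] += row[u]
    return list(bins.values())
```

Since each bin key lies in $\{0,1,\dots,b\}^3$, at most $(b+1)^3\le\mu$ merged outputs remain, and the total probability mass of each input row is preserved by the merging.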
Since there are $\Theta(n)$ adjacent-bits-channels we need to quantize in the ABS polar code construction, the overall time complexity of the ABS polar code construction is $O(n\mu^2)$. Our new quantization algorithm works as follows. Given an upper bound $\mu$ on the output alphabet size after quantization, we define $b=\lfloor\mu^{1/3}\rfloor - 1$. For an adjacent-bits-channel $V$, we write its $4$ inputs as $(0,0),(0,1),(1,0),(1,1)$, and we write its outputs as $y_1,y_2,\dots,y_M$, where $M$ is the output alphabet size of $V$. We use $\widetilde{V}$ to denote the channel after output quantization. The $4$ inputs of $\widetilde{V}$ are the same as those of the original channel $V$, and the outputs of $\widetilde{V}$ are written as $\{\tilde{y}_{i_1,i_2,i_3}: 0\le i_1,i_2,i_3\le b\}$. Clearly, the output alphabet size of $\widetilde{V}$ is no larger than $\mu$. With the above notation in mind, we present our quantization algorithm in Algorithm~\ref{algo:quantization}. In our implementation, we pick $\mu=250000$. \subsection{Summary of the ABS polar code construction} \label{sect:summary_cons} In Section~\ref{sect:SDB}, we showed how to calculate the transition probabilities of the adjacent-bits-channels $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ when the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ in \eqref{eq:GnABS} are known. In Section~\ref{sect:cons_PnABS}, we showed how to construct the permutation matrix $\mathbf{P}_n^{\ABS}$ when the transition probabilities of $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$ are available. In Section~\ref{sect:quantization}, we proposed Algorithm~\ref{algo:quantization} to quantize the output alphabets of the adjacent-bits-channels. Now we are in a position to put everything together and present the code construction algorithm for ABS polar codes in Algorithm~\ref{algo:ABS_Construction}.
\begin{algorithm}[H] \DontPrintSemicolon \caption{\texttt{ABSConstruct}$(n,k,W)$} \label{algo:ABS_Construction} \KwIn{code length $n=2^m\ge 4$, code dimension $k$, and the BMS channel $W$} \KwOut{the permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$, and the index set $\mathcal{A}$ of the information bits} Quantize the output alphabet of $W$ using the method in \cite{Tal13} \Comment{This step is needed when the output alphabet size of $W$ is very large, e.g., when $W$ has a continuous output alphabet.} Set $\mathbf{P}_2^{\ABS}$ to be the identity matrix Calculate the transition probability of $V_1^{(2),\ABS}$ from $W$ using \eqref{eq:v_abs_init} Quantize the output alphabet of $V_1^{(2),\ABS}$ using Algorithm~\ref{algo:quantization} \For{$n_0=4,8,16,\dots,n$} { Construct $\mathbf{P}_{n_0}^{\ABS}$ from $\{V_i^{(n_0/2),\ABS}\}_{i=1}^{n_0/2-1}$ using the method in Section~\ref{sect:cons_PnABS} Calculate the transition probabilities of $\{V_i^{(n_0),\ABS}\}_{i=1}^{n_0-1}$ from $\mathbf{P}_{n_0}^{\ABS}$ and $\{V_i^{(n_0/2),\ABS}\}_{i=1}^{n_0/2-1}$ using Lemma~\ref{lemma:recur_ABS} Quantize the output alphabets of $\{V_i^{(n_0),\ABS}\}_{i=1}^{n_0-1}$ using Algorithm~\ref{algo:quantization} } Calculate the transition probabilities of $\{W_i^{(n),\ABS}\}_{i=1}^n$ from the transition probabilities of $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$. 
Sort the capacities of the bit-channels $\{W_i^{(n),\ABS}\}_{i=1}^n$ to obtain $I(W_{i_1}^{(n),\ABS})\ge I(W_{i_2}^{(n),\ABS})\ge \dots\ge I(W_{i_n}^{(n),\ABS})$, where $\{i_1,i_2,\dots,i_n\}$ is a permutation of $\{1,2,\dots,n\}$ $\mathcal{A}\gets\{i_1,i_2,\dots,i_k\}$ \Return $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS},\mathcal{A}$ \end{algorithm} \subsection{Necessity of the condition \eqref{eq:separated_even}} \label{sect:necess} The condition \eqref{eq:separated_even} is necessary for us to derive a recursive relation between $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ and $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$. In order to prove this claim, we introduce some notation. Instead of $(U_1,U_2,\dots,U_n)$, now we use $(U_1^{(n)},U_2^{(n)},\dots,U_n^{(n)})$ to denote the message vector. We add the superscript $(n)$ in the notation to distinguish between random variables in different layers. Define $$ (\widehat{U}_1^{(n)},\widehat{U}_2^{(n)},\dots,\widehat{U}_n^{(n)}) = (U_1^{(n)},U_2^{(n)},\dots,U_n^{(n)}) \mathbf{P}_n^{\ABS} . $$ We further define random vectors $(U_{1,1}^{(n/2)},U_{2,1}^{(n/2)},\dots,U_{n/2,1}^{(n/2)})$ and $(U_{1,2}^{(n/2)},U_{2,2}^{(n/2)},\dots,U_{n/2,2}^{(n/2)})$ as follows: $$ U_{i,1}^{(n/2)}=\widehat{U}_{2i-1}^{(n)} + \widehat{U}_{2i}^{(n)} , \quad U_{i,2}^{(n/2)}=\widehat{U}_{2i}^{(n)} , $$ i.e., the vectors $(U_{1,1}^{(n/2)},U_{2,1}^{(n/2)},\dots,U_{n/2,1}^{(n/2)})$ and $(U_{1,2}^{(n/2)},U_{2,2}^{(n/2)},\dots,U_{n/2,2}^{(n/2)})$ are obtained by applying one layer of polar transform to $(\widehat{U}_1^{(n)},\widehat{U}_2^{(n)},\dots,\widehat{U}_n^{(n)})$.
By definition, $V_i^{(n),\ABS}$ gives us the conditional distribution of $(U_i^{(n)},U_{i+1}^{(n)})$ given the channel outputs and the previous message bits; $V_i^{(n/2),\ABS}$ gives us the conditional distribution of $(U_{i,1}^{(n/2)},U_{i+1,1}^{(n/2)})$ and the conditional distribution of $(U_{i,2}^{(n/2)},U_{i+1,2}^{(n/2)})$ given the channel outputs and the previous message bits. Therefore, deriving a recursive relation between $\{V_i^{(n),\ABS}\}_{i=1}^{n-1}$ and $\{V_i^{(n/2),\ABS}\}_{i=1}^{n/2-1}$ is equivalent to the following task: Suppose that we know the joint distribution\footnote{More precisely, this should be the conditional distribution of $(U_{i,j}^{(n/2)},U_{i+1,j}^{(n/2)})$ given the channel outputs and the previous message bits. Similarly, the joint distribution of $(U_i^{(n)},U_{i+1}^{(n)})$ in the next sentence also refers to the conditional distribution.} of $(U_{i,j}^{(n/2)},U_{i+1,j}^{(n/2)})$ for all $1\le i\le n/2-1$ and $j\in\{1,2\}$. The task is to calculate the joint distribution of $(U_i^{(n)},U_{i+1}^{(n)})$ for all $1\le i\le n-1$. We will show that it is not possible to accomplish this task without the condition \eqref{eq:separated_even}. Suppose that the condition \eqref{eq:separated_even} does not hold. Then there exists an integer $i$ such that we swap the adjacent bits $\widehat{U}_{2i}^{(n)}$ and $\widehat{U}_{2i+1}^{(n)}$, and we also swap $\widehat{U}_{2i+2}^{(n)}$ and $\widehat{U}_{2i+3}^{(n)}$; see Fig.~\ref{fig:nosepar} for an illustration. According to our assumption, we know the joint distribution of $(U_{i,1}^{(n/2)},U_{i+1,1}^{(n/2)})$ and the joint distribution of $(U_{i,2}^{(n/2)},U_{i+1,2}^{(n/2)})$. Moreover, $(U_{i,1}^{(n/2)},U_{i+1,1}^{(n/2)})$ and $(U_{i,2}^{(n/2)},U_{i+1,2}^{(n/2)})$ are independent. Therefore, we know the joint distribution of $(U_{i,1}^{(n/2)},U_{i,2}^{(n/2)},U_{i+1,1}^{(n/2)},U_{i+1,2}^{(n/2)})$. 
Since there is a one-to-one mapping between $(\widehat{U}_{2i-1}^{(n)},\widehat{U}_{2i}^{(n)},\widehat{U}_{2i+1}^{(n)},\widehat{U}_{2i+2}^{(n)})$ and $(U_{i,1}^{(n/2)},U_{i,2}^{(n/2)},U_{i+1,1}^{(n/2)},U_{i+1,2}^{(n/2)})$, we also know the distribution of $(\widehat{U}_{2i-1}^{(n)},\widehat{U}_{2i}^{(n)},\widehat{U}_{2i+1}^{(n)},\widehat{U}_{2i+2}^{(n)})$. Since $(U_{2i-1}^{(n)},U_{2i}^{(n)},U_{2i+1}^{(n)})$ is a function of $(\widehat{U}_{2i-1}^{(n)},\widehat{U}_{2i}^{(n)},\widehat{U}_{2i+1}^{(n)})$, we are able to calculate the joint distribution of $(U_{2i-1}^{(n)},U_{2i}^{(n)})$ and the joint distribution of $(U_{2i}^{(n)},U_{2i+1}^{(n)})$. Using a similar argument, we can show that we are able to calculate the joint distribution of $(U_{2i+2}^{(n)},U_{2i+3}^{(n)})$ and the joint distribution of $(U_{2i+3}^{(n)},U_{2i+4}^{(n)})$. The only problem is that we are not able to calculate the joint distribution of $(U_{2i+1}^{(n)},U_{2i+2}^{(n)})$. By definition, $$ U_{2i+1}^{(n)}=\widehat{U}_{2i}^{(n)}=U_{i,2}^{(n/2)}, \qquad U_{2i+2}^{(n)}=\widehat{U}_{2i+3}^{(n)}=U_{i+2,1}^{(n/2)}+U_{i+2,2}^{(n/2)} . $$ Therefore, our task is to calculate the joint distribution of $(U_{i,2}^{(n/2)}, U_{i+2,1}^{(n/2)}+U_{i+2,2}^{(n/2)})$. Since the two random vectors $(U_{1,1}^{(n/2)},U_{2,1}^{(n/2)},\dots,U_{n/2,1}^{(n/2)})$ and $(U_{1,2}^{(n/2)},U_{2,2}^{(n/2)},\dots,U_{n/2,2}^{(n/2)})$ are independent, this further requires us to know the joint distribution of $(U_{i,2}^{(n/2)}, U_{i+2,2}^{(n/2)})$, which is not available. Therefore, we are not able to calculate the joint distribution of $(U_{2i+1}^{(n)},U_{2i+2}^{(n)})$. This proves the necessity of \eqref{eq:separated_even}. \begin{figure}[ht] \centering \includegraphics[scale = 1.0]{separated.pdf} \caption{Swap the adjacent bits $\widehat{U}_{2i}$ and $\widehat{U}_{2i+1}$. 
Also swap $\widehat{U}_{2i+2}$ and $\widehat{U}_{2i+3}$.} \label{fig:nosepar} \end{figure} \section{The encoding algorithm for ABS polar codes} \label{sect:encoding} In this section, we present the encoding algorithm of ABS polar codes. Suppose that we have constructed an $(n,k)$ ABS polar code with permutation matrices $\mathbf{P}_2^{\ABS},\mathbf{P}_4^{\ABS},\mathbf{P}_8^{\ABS},\dots,\mathbf{P}_n^{\ABS}$ and the index set $\mathcal{A}=\{i_1,i_2,\dots,i_k\}$ of the information bits. We present the encoding algorithm of this code in Algorithm~\ref{algo:ABS_encoding} below. \begin{algorithm}[H] \DontPrintSemicolon \caption{\texttt{Encode}$((m_1,m_2,\dots,m_k))$} \label{algo:ABS_encoding} \KwIn{the message vector $(m_1,m_2,\dots,m_k)\in\{0,1\}^k$} \KwOut{the codeword $(c_1,c_2,\dots,c_n)\in\{0,1\}^n$, where $n=2^m$ is the code length} Initialize $(c_1,c_2,\dots,c_n)$ as the all-zero vector $(c_{i_1},c_{i_2},\dots,c_{i_k})\gets (m_1,m_2,\dots,m_k)$ \Comment{Recall that $i_1,i_2,\dots,i_k$ are the indices of the information bits.} \For{$i=0,1,2,3,\dots,m-1$} { $t\gets 2^i$ $n_0\gets 2^{m-i}$ \For{$h=1,2,3,\dots,t$} { $(c_h,c_{h+t},c_{h+2t},c_{h+3t},\dots,c_{h+(n_0-1)t})\gets (c_h,c_{h+t},c_{h+2t},c_{h+3t},\dots,c_{h+(n_0-1)t}) \mathbf{P}_{n_0}^{\ABS}$ \Comment{Line 8 is the only difference between the encoding algorithms for ABS polar codes and standard polar codes} \For{$j=0,1,2,3,\dots,n_0/2-1$} { $c_{h+2jt} \gets c_{h+2jt}+c_{h+2jt+t}$ \Comment{The addition between $c_{h+2jt}$ and $c_{h+2jt+t}$ is over the binary field} } } } \Return $(c_1,c_2,\dots,c_n)$ \end{algorithm} \begin{figure*} \centering \includegraphics[scale = 0.75]{example_enc.pdf} \caption{Encoding circuit of an $(n=16, k=8)$ ABS polar code defined by the sets in \eqref{eq:enc_example}.} \label{fig:example_enc} \end{figure*} Without Line 8, Algorithm~\ref{algo:ABS_encoding} is the same as the encoding algorithm of standard polar codes, whose time complexity is $O(n\log(n))$. 
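For concreteness, the loops of Algorithm~\ref{algo:ABS_encoding} can be rendered in Python as follows (an illustrative sketch in which each $\mathbf{P}_{n_0}^{\ABS}$ is represented by its swap set $\mathcal{I}^{(n_0)}$ rather than by an explicit matrix; function and variable names are ours):

```python
def abs_encode(msg_bits, info_indices, swap_sets, n):
    """Sketch of the ABS polar encoder. `info_indices` lists the (1-indexed)
    positions of the information bits; `swap_sets[n0]` is the set I^(n0) of
    (even, 1-indexed) positions i such that P_{n0}^{ABS} swaps entries i, i+1.
    Frozen bits are fixed to 0."""
    m = n.bit_length() - 1
    c = [0] * n
    for bit, idx in zip(msg_bits, info_indices):
        c[idx - 1] = bit
    for i in range(m):                       # i = 0, 1, ..., m-1
        t, n0 = 1 << i, 1 << (m - i)
        for h in range(1, t + 1):            # h = 1, ..., t
            # the strided block (c_h, c_{h+t}, ..., c_{h+(n0-1)t}), 0-indexed
            pos = [h - 1 + j * t for j in range(n0)]
            for s in swap_sets.get(n0, set()):   # apply P_{n0}^{ABS} (Line 8)
                c[pos[s - 1]], c[pos[s]] = c[pos[s]], c[pos[s - 1]]
            for j in range(0, n0, 2):            # 2x2 polar kernel over GF(2)
                c[pos[j]] ^= c[pos[j + 1]]
    return c
```

Running the sketch on the $(n=16,k=8)$ code defined by \eqref{eq:enc_example} can serve as a sanity check; for example, the all-zero message must map to the all-zero codeword.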
In Line 8, we perform a permutation on $n_0$ elements. According to our code construction, each of these $n_0$ elements is swapped at most once, so the number of operations involved in this permutation is no more than $n_0=2^{m-i}$. From the for loop in Line 7, we can see that Line 8 is executed $t=2^i$ times for each $i\in\{0,1,\dots,m-1\}$. In other words, for each fixed value of $i$, Line 8 induces at most $n_0\cdot t=2^m=n$ operations. Therefore, the total number of operations induced by Line 8 is upper bounded by $n\cdot m=n\log(n)$. Thus we conclude that the encoding complexity of ABS polar codes is still $O(n\log(n))$. \begin{proposition} The encoding time complexity of ABS polar codes is $O(n\log(n))$. \end{proposition} Note that the set $\mathcal{I}^{(n)}$ in \eqref{eq:pi} uniquely determines the permutation matrix $\mathbf{P}_n^{\ABS}$. In Fig.~\ref{fig:example_enc}, we present the encoding circuit of an $(n=16, k=8)$ ABS polar code defined by the following sets: \begin{equation} \label{eq:enc_example} \begin{aligned} & \mathcal{I}^{(2)} = \emptyset,\quad \mathcal{I}^{(4)} = \emptyset,\quad \mathcal{I}^{(8)} = \{4\},\quad \mathcal{I}^{(16)} = \{6,10\} , \\ & \mathcal{A}=\{9,10,11,12,13,14,15,16\} . \end{aligned} \end{equation} \section{The SCL decoder for ABS polar codes}\label{sect:SCL} In this section, we present a new SCL decoder for ABS polar codes. The organization of this section is as follows: In Section~\ref{sect:polar_decoding}, we recap the classic SCL decoder for standard polar codes based on the $2\times 2$ polar transform. The purpose of doing so is to familiarize ourselves with the recursive structure, which is shared by both the classic SCL decoder and our new SCL decoder. The SCL decoder presented in Section~\ref{sect:polar_decoding} is based on the one proposed in \cite{Tal15}.
While the classic SCL decoder is based on the $2\times 2$ polar transform, our new SCL decoder is based on the DB polar transform and the SDB polar transform; see Fig.~\ref{fig:DBpt} and Fig.~\ref{fig:SDBpt} for the definitions of these two transforms. Instead of jumping directly into the decoding of ABS polar codes, we first present a new SCL decoder for standard polar codes based on the DB polar transform in Section~\ref{sect:ST_decoder_DB}. This new SCL decoder for standard polar codes already contains most of the new ingredients in the SCL decoder for ABS polar codes, and it helps us learn these new ingredients in a familiar setting. Finally, in Section~\ref{sect:ABS_decoder}, we present our new SCL decoder for ABS polar codes. \subsection{SCL decoder for standard polar codes based on the $2\times 2$ polar transform}\label{sect:polar_decoding} In this subsection, we recap the classic SCL decoder proposed in \cite{Tal15} for standard polar codes. Suppose that the code length is $n=2^m$, and the upper bound of the list size in the SCL decoder is $L$. We use $L_c\in \{1, 2, \dots, L\}$ to denote the current list size. $\mathcal{A}$ is the index set of the information bits. Before describing the decoding algorithms, let us introduce some notation and intermediate variables. Following the notation in Fig.~\ref{fig:bit_channels_polar}, $(U_1, U_2, \dots, U_n)$ is the message vector, and we use $(X_1,\dots,X_n)$ and $(Y_1,\dots,Y_n)$ to denote the random codeword vector and the random channel output vector, respectively. We use $(y_1,\dots,y_n)$ to denote a realization of the random vector $(Y_1,\dots,Y_n)$. For each $0\le \lambda \le m$, we introduce an intermediate vector $(X_{1}^{(2^\lambda)}, X_{2}^{(2^\lambda)},\dots, X_{n}^{(2^\lambda)})$. For $\lambda=m$, we define the intermediate vector as \begin{equation} \label{eq:xnn} (X_1^{(n)}, X_2^{(n)}, \dots, X_n^{(n)})=(U_1, U_2, \dots, U_n). 
\end{equation} For $0\le \lambda \le m-1$, the intermediate vectors are defined recursively using the following relation: \begin{equation}\label{eq:def_intermediate} \begin{aligned} & (X_{1}^{(2^\lambda)}, X_{2}^{(2^\lambda)},\dots, X_{n}^{(2^\lambda)}) \\ = & (X_{1}^{(2^{\lambda+1})}, X_{2}^{(2^{\lambda+1})},\dots, X_{n}^{(2^{\lambda+1})})(\mathbf{I}_{2^\lambda}\otimes\mathbf{G}_2^{\polar}\otimes\mathbf{I}_{2^{m-\lambda-1}}), \end{aligned} \end{equation} where $\mathbf{I}_n$ is the $n\times n$ identity matrix. By definition, $(X_1^{(1)}, X_2^{(1)}, \dots, X_n^{(1)})=(X_1, X_2, \dots, X_n)$ is the codeword vector. Intuitively, the intermediate vector $(X_{1}^{(2^\lambda)}, X_{2}^{(2^\lambda)},\dots, X_{n}^{(2^\lambda)})$ is obtained from performing $(m-\lambda)$ layers of polar transform on the message vector $(U_1, U_2, \dots, U_n)$. Fig.~\ref{fig:example_enc} gives a concrete example of the intermediate vectors in an ABS polar code, which are similar to the ones in standard polar codes. For each $0\le \lambda \le m$, $1\le i\le 2^\lambda$ and $1\le \beta \le 2^{m-\lambda}$, we introduce the shorthand notation \begin{equation}\label{eq:subscript} \begin{aligned} & X_{i,\beta}^{(\lambda)} = X_{\beta + (i-1)2^{m-\lambda}}^{(2^\lambda)}, \quad Y_{i,\beta}^{(\lambda)} = Y_{\beta + (i-1)2^{m-\lambda}} , \\ & \mathbi{O}_{i,\beta}^{(\lambda)} =(X_{1, \beta}^{(\lambda)}, X_{2, \beta}^{(\lambda)}, \dots, X_{i-1, \beta}^{(\lambda)}, Y_{1, \beta}^{(\lambda)}, Y_{2, \beta}^{(\lambda)}, \dots, Y_{2^\lambda, \beta}^{(\lambda)}) . \end{aligned} \end{equation} According to the standard polar code construction, the $2^{m-\lambda}$ random vectors \begin{equation*} \begin{aligned} \left\{ (X_{1, \beta}^{(\lambda)}, X_{2, \beta}^{(\lambda)}, \dots, X_{2^\lambda, \beta}^{(\lambda)}, Y_{1, \beta}^{(\lambda)}, Y_{2, \beta}^{(\lambda)}, \dots, Y_{2^\lambda, \beta}^{(\lambda)}) \right\}_{\beta = 1}^{2^{m-\lambda}} \end{aligned} \end{equation*} are independent and identically distributed. 
Moreover, the channel mapping from $X_{i,\beta}^{(\lambda)}$ to $\mathbi{O}_{i,\beta}^{(\lambda)}$ is the bit-channel $W_{i}^{(2^\lambda)}$ for every $1\le \beta \le 2^{m-\lambda}$, where $W_{i}^{(2^\lambda)}$ is defined recursively using the relation \eqref{eq:recur_bit_channels}. Recall that $(y_1, \dots, y_n)$ is a realization of the random vector $(Y_1,\dots,Y_n)$. For each $0\le \lambda \le m$, $1\le i\le 2^\lambda$ and $1\le \beta \le 2^{m-\lambda}$, we introduce the shorthand notation $y_{i,\beta}^{(\lambda)} = y_{\beta + (i-1)2^{m-\lambda}}$, and we use $\hat x_{i,\beta}^{(\lambda)}$ to denote the decoded value of $X_{i,\beta}^{(\lambda)}$. Moreover, we define a vector \begin{equation}\label{eq:decoded_output_vector} \begin{aligned} \hat{\mathbi{o}}_{i,\beta}^{(\lambda)} = (\hat x_{1,\beta}^{(\lambda)}, \hat x_{2,\beta}^{(\lambda)}, \dots, \hat x_{i-1,\beta}^{(\lambda)}, y_{1, \beta}^{(\lambda)}, y_{2, \beta}^{(\lambda)}, \dots, y_{2^\lambda, \beta}^{(\lambda)}) . \end{aligned} \end{equation} By the analysis above, we have \begin{equation} \label{eq:PWI} \mathbb{P} \big( \mathbi{O}_{i,\beta}^{(\lambda)}=\hat{\mathbi{o}}_{i,\beta}^{(\lambda)} \big| X_{i,\beta}^{(\lambda)}=b \big) = W_{i}^{(2^\lambda)} \big( \hat{\mathbi{o}}_{i,\beta}^{(\lambda)} \big| b \big) \qquad \text{for~} b\in\{0,1\} . \end{equation} Now we are ready to introduce the data structures used in the SCL decoder for standard polar codes. Most of the data structures below are also used in the SCL decoder for ABS polar codes. \begin{enumerate}[(i)] \item 4-dimensional \emph{probability array $\mathtt{D}$.} The entries in the array $\mathtt{D}$ are indexed as \begin{align*} \mathtt{D}[\lambda, s, \beta, b],\quad & 0\le \lambda \le m,\qquad~ 1\le s\le L, \\ & 1\le \beta \le 2^{m-\lambda},\quad 0\le b \le 1. 
\end{align*} For each $0\le \lambda \le m, 1\le s \le L$, we define a subarray of $\mathtt{D}$ as \begin{equation*} \mathtt{D}[\lambda, s]= (\mathtt{D}[\lambda, s, \beta, b],\quad 1\le \beta\le 2^{m-\lambda}, \quad b\in\{0,1\}), \end{equation*} and we use $\vec{\mathtt{D}}[\lambda, s]$ to denote the pointer to the head address of $\mathtt{D}[\lambda, s]$. In the algorithms below, we will write $\mathtt{D}[\lambda, s, \beta, b]$ and $\vec{\mathtt{D}}[\lambda, s][\beta, b]$ interchangeably. Each array $\mathtt{D}[\lambda, s]$ is used to store a set of transition probabilities in \eqref{eq:PWI}. \item 1-dimensional \emph{integer array $\mathtt{N_D}$.} The entries of $\mathtt{N_D}$ are $\mathtt{N_D}[\lambda], 0\le \lambda \le m$. The entry $\mathtt{N_D}[\lambda]$ takes values in the set $\{0,1,2,\dots,L\}$ for every $0\le \lambda \le m$. The value of $\mathtt{N_D}[\lambda]$ has the following meaning: The arrays $\mathtt{D}[\lambda, 1],\mathtt{D}[\lambda, 2],\dots,\mathtt{D}[\lambda, \mathtt{N_D}[\lambda]]$ are currently occupied in the decoding procedure, while the arrays $\mathtt{D}[\lambda, \mathtt{N_D}[\lambda]+1],\mathtt{D}[\lambda, \mathtt{N_D}[\lambda]+2],\dots,\mathtt{D}[\lambda, L]$ are free to use. See Fig.~\ref{fig:ds_n_nd} for an illustration. \begin{figure} \centering \includegraphics[scale = 1.2]{ds.pdf} \caption{An illustration of $\mathtt{D}$ and $\mathtt{N_D}$ for code length $n=64$ and list size $L = 8$. We put $\mathtt{D}[\lambda, s]$ in a shaded cell if it is currently occupied; otherwise, we put it in a white cell.
For example, $\mathtt{N_D}[5] = 4$ means that $\mathtt{D}[5,1], \mathtt{D}[5,2], \mathtt{D}[5,3], \mathtt{D}[5,4]$ have already been allocated to store some transition probabilities while $\mathtt{D}[5,5], \mathtt{D}[5,6], \mathtt{D}[5,7], \mathtt{D}[5,8]$ are free to use.} \label{fig:ds_n_nd} \end{figure} \item 3-dimensional \emph{bit array $\mathtt{B}$.} The entries in the array $\mathtt{B}$ are indexed as $$ \mathtt{B}[\lambda, s, \beta],\quad 0\le \lambda \le m, \quad 1\le s\le 2L, \quad 1\le \beta \le 2^{m-\lambda}. $$ For each $0\le \lambda \le m, 1\le s \le 2L$, we define a subarray of $\mathtt{B}$ as \begin{equation*} \mathtt{B}[\lambda, s]= (\mathtt{B}[\lambda, s, \beta],\quad 1\le \beta\le 2^{m-\lambda}), \end{equation*} and we use $\vec{\mathtt{B}}[\lambda, s]$ to denote the pointer to the head address of $\mathtt{B}[\lambda, s]$. In the algorithms below, we will write $\mathtt{B}[\lambda, s, \beta]$ and $\vec{\mathtt{B}}[\lambda, s][\beta]$ interchangeably. Each array $\mathtt{B}[\lambda, s]$ is used to store a set of decoding results of the intermediate vectors. \item 1-dimensional \emph{integer array $\mathtt{N_B}$.} The entries of $\mathtt{N_B}$ are $\mathtt{N_B}[\lambda], 0\le \lambda \le m$. The entry $\mathtt{N_B}[\lambda]$ takes value in the set $\{0,1,2,\dots,2L\}$ for every $0\le \lambda \le m$. The value of $\mathtt{N_B}[\lambda]$ has the following meaning: The arrays $\mathtt{B}[\lambda, 1],\mathtt{B}[\lambda, 2],\dots,\mathtt{B}[\lambda, \mathtt{N_B}[\lambda]]$ are currently occupied in the decoding procedure while the arrays $\mathtt{B}[\lambda, \mathtt{N_B}[\lambda]+1],\mathtt{B}[\lambda, \mathtt{N_B}[\lambda]+2],\dots,\mathtt{B}[\lambda, 2L]$ are free to use. \item 1-dimensional \emph{probability array $\mathtt{score}$.} The entries of $\mathtt{score}$ are $\mathtt{score}[\ell], 1\le \ell \le L_c$, where $L_c\in\{1,2,\dots,L\}$ is the current list size. 
Each $\mathtt{score}[\ell]$ records the current transition probability of the $\ell$th candidate in the decoding list. When the current list size is larger than the prescribed upper bound $L$, we prune the list according to the value of $\mathtt{score}[\ell]$. \item 2-dimensional \emph{pointer arrays $\mathtt{P}, \mathtt{\bar{P}}$.} Their entries are $$ \mathtt{P}=(\mathtt{P}[\ell,\lambda],~ 1\le \ell\le L,~ 0\le \lambda \le m), \qquad \mathtt{\bar{P}}=(\mathtt{\bar{P}}[\ell,\lambda],~ 1\le \ell\le L,~ 0\le \lambda \le m) . $$ We use $\mathtt{P}[\ell,\lambda]$ to store the pointer $\vec{\mathtt{D}}[\lambda, \mathtt{N_D}[\lambda]+1]$, so that we can store the transition probabilities in the array $\mathtt{D}[\lambda, \mathtt{N_D}[\lambda]+1]$ and access them in the future. We usually assign values (i.e., pointers) to $\mathtt{P}[\ell,\lambda]$ through the function $\texttt{allocate\_prob}$ in Algorithm~\ref{algo:st_allocate_prob}. The function $\texttt{allocate\_prob}$ is called in Line~4 of Algorithm~\ref{algo:ST_Decode}, Line~3 of Algorithm~\ref{algo:calcu_-_proba}, and Line~3 of Algorithm~\ref{algo:calcu_+_proba}. The array $\mathtt{\bar{P}}$ is a supplement to $\mathtt{P}$. We use $\mathtt{\bar{P}}$ when the entries in $\mathtt{P}$ are occupied. \item 2-dimensional \emph{pointer arrays $\mathtt{R}, \mathtt{\bar{R}}$.} Their entries are $$ \mathtt{R}=(\mathtt{R}[\ell,\lambda],~ 1\le \ell\le L,~ 0\le \lambda \le m), \qquad \mathtt{\bar{R}}=(\mathtt{\bar{R}}[\ell,\lambda],~ 1\le \ell\le L,~ 0\le \lambda \le m) . $$ We use $\mathtt{R}[\ell,\lambda]$ to store the pointer $\vec{\mathtt{B}}[\lambda, \mathtt{N_B}[\lambda]+1]$, so that we can store the decoding results of intermediate vectors in the array $\mathtt{B}[\lambda, \mathtt{N_B}[\lambda]+1]$ and access them in the future. We usually assign values (i.e., pointers) to $\mathtt{R}[\ell,\lambda]$ through the function $\texttt{allocate\_bit}$ in Algorithm~\ref{algo:st_allocate_bit}. 
The function $\texttt{allocate\_bit}$ is called in Line~13 of Algorithm~\ref{algo:st_decode_channel} and Lines~10,16 of Algorithm~\ref{algo:st_decode_boundary_channel}. The array $\mathtt{\bar{R}}$ is a supplement to $\mathtt{R}$. We use $\mathtt{\bar{R}}$ when the entries in $\mathtt{R}$ are occupied. \item \emph{priority queue $\texttt{PriQue}$.} $\texttt{PriQue}$ is a maximum priority queue with size $2L$ such that the element with the maximum value is always removed first from the queue. We use $\texttt{PriQue}$ to record and prune candidate decoding paths. Each element in the queue is a triple $(\ell, b, \prob)$ with the following meaning: When we decode $U_i$ in the last layer $\lambda = m$, the (posterior) probability of $U_i = b$ in the $\ell$th decoding path is $\prob$. The queue $\texttt{PriQue}$ has 4 interfaces: i) $\texttt{PriQue.push}(\ell, b, \prob)$ pushes the element $(\ell, b, \prob)$ to the queue; ii) $\texttt{PriQue.pop}()$ removes the element $(\ell, b, \prob)$ with the maximum $\prob$ in the queue; iii) $\texttt{PriQue.clear}()$ removes all the remaining elements in the queue; iv) $\texttt{PriQue.size}()$ returns the current number of elements in the queue. \end{enumerate} We associate each candidate in the decoding list with a list element. There are at most $L$ list elements in total. For $1\le \ell\le L$, the $\ell$th list element has the following fields: \begin{equation}\label{eq:st_list_field} \begin{aligned} & (\mathtt{P}[\ell,0], \mathtt{P}[\ell,1], \dots, \mathtt{P}[\ell, m]), \\ & (\mathtt{R}[\ell,0], \mathtt{R}[\ell,1], \dots, \mathtt{R}[\ell, m]), \\ & \mathtt{score}[\ell]. 
\end{aligned} \end{equation} \begin{algorithm}[ht] \DontPrintSemicolon \caption{$\texttt{allocate\_prob}(\lambda)$} \label{algo:st_allocate_prob} \KwIn{layer $\lambda\in\{0, 1, 2, \dots, m\}$} \KwOut{a pointer to the allocated memory} $\mathtt{N_D}[\lambda]\gets \mathtt{N_D}[\lambda] + 1$ \Return $\vec{\mathtt{D}}[\lambda, \mathtt{N_D}[\lambda]]$ \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{$\texttt{allocate\_bit}(\lambda)$} \label{algo:st_allocate_bit} \KwIn{layer $\lambda\in\{0, 1, 2, \dots, m\}$} \KwOut{a pointer to the allocated memory} $\mathtt{N_B}[\lambda]\gets \mathtt{N_B}[\lambda] + 1$ \Return $\vec{\mathtt{B}}[\lambda, \mathtt{N_B}[\lambda]]$ \end{algorithm} The function $\texttt{allocate\_prob}$ in Algorithm~\ref{algo:st_allocate_prob} and the function $\texttt{allocate\_bit}$ in Algorithm~\ref{algo:st_allocate_bit} are used to allocate memory throughout the decoding procedure. $\texttt{allocate\_prob}(\lambda)$ returns the pointer to the next usable array among $\mathtt{D}[\lambda, 1],\mathtt{D}[\lambda, 2],\dots,\mathtt{D}[\lambda, L]$ and updates the value of $\mathtt{N_D}[\lambda]$. Similarly, $\texttt{allocate\_bit}(\lambda)$ returns the pointer to the next usable array among $\mathtt{B}[\lambda, 1],\mathtt{B}[\lambda, 2],\dots,\mathtt{B}[\lambda, 2L]$ and updates the value of $\mathtt{N_B}[\lambda]$. We present the main function $\texttt{ST\_Decode}((y_1, y_2,\dots, y_n))$ in Algorithm~\ref{algo:ST_Decode}. Note that we only update the value of the current list size in the last layer $\lambda = m$, and we have only one list element in the beginning. The first 3 lines initialize the parameters. In Line~4, we assign the pointer $\vec{\mathtt{D}}[0,1]$ to $\mathtt{P}[1, 0]$ and update the value of $\mathtt{N_D}[0]$ to be $1$. In Lines~5--7, we store the transition probabilities of the whole channel output vector in the array $\mathtt{D}[0,1]$. Line~8 executes recursive decoding, which we will explain later.
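As a side illustration, the bump-allocation pattern behind $\texttt{allocate\_prob}$ and $\texttt{allocate\_bit}$ can be sketched in Python as follows. This is a hypothetical rendering: the class name \texttt{BumpPool} and all concrete sizes are illustrative, not part of the decoder specification.

```python
# Sketch of the bump allocators allocate_prob / allocate_bit (hypothetical
# Python; the class and variable names are illustrative, not from the decoder).

class BumpPool:
    """Pre-allocated buffers pool[lam][0..cap-1]; n[lam] counts the occupied
    slots, playing the role of N_D (or N_B)."""

    def __init__(self, m, cap, width):
        # width(lam) = number of entries per buffer at layer lam
        self.pool = [[[0.0] * width(lam) for _ in range(cap)]
                     for lam in range(m + 1)]
        self.n = [0] * (m + 1)

    def allocate(self, lam):
        """Return the next free buffer at layer lam, like allocate_prob(lam)."""
        self.n[lam] += 1
        return self.pool[lam][self.n[lam] - 1]

    def reset(self, lam):
        """Mark every buffer at layer lam as free again (N[lam] <- 0)."""
        self.n[lam] = 0


m, L = 3, 4                                   # n = 2**m = 8, list size L = 4
D = BumpPool(m, L, lambda lam: 2 ** (m - lam) * 2)  # 2^{m-lam} positions, 2 bits
p = D.allocate(0)                              # like P[1,0] <- allocate_prob(0)
assert D.n[0] == 1 and len(p) == 16            # 8 positions x 2 bit values
D.reset(0)                                     # like N_D[0] <- 0
assert D.n[0] == 0
```

Note that "freeing" a layer only rewinds the counter, matching the way the decoder sets $\mathtt{N_D}[\lambda]\gets 0$ instead of releasing memory.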
After recursive decoding, we obtain $L_c$ list elements. In the $\ell$th list element, $\mathtt{score}[\ell]$ is the transition probability which measures the likelihood of this list element, and the decoding result is stored in the array $(\mathtt{R}[\ell,0][1],\mathtt{R}[\ell,0][2],\dots,\mathtt{R}[\ell,0][n])$. In Lines~9--17, we pick the list element with the maximum $\mathtt{score}[\ell]$ and return the corresponding decoding result. \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{ST\_Decode}$((y_1,y_2,\dots,y_n))$} \label{algo:ST_Decode} \KwIn{the received vector $(y_1,y_2,\dots,y_n)\in\mathcal{Y}^n$} \KwOut{the decoded codeword $(\hat x_1,\hat x_2,\dots,\hat x_n)\in\{0,1\}^n$} \For{\em $\lambda \in \{0, 1, 2, \dots, m\}$}{ $\mathtt{N_D}[\lambda] \gets 0, \quad \mathtt{N_B}[\lambda] \gets 0$ } $L_c\gets 1$ $\mathtt{P}[1, 0]\gets \texttt{allocate\_prob}(0)$ \For{\em $\beta \in \{1, 2, \dots, n\}$}{ \For{\em $b\in \{0,1\}$}{ $\mathtt{P}[1,0][\beta,b]\gets W(y_\beta|b)$ } } \texttt{decode\_channel$(0, 1)$} \Comment{Algorithm~\ref{algo:st_decode_channel}, recursive decoding} $\max\_\mathtt{score}\gets 0$ $\max\_\ell\gets 0$ \For{$\ell\in\{1,2,\dots, L_c\}$}{ \If{$\mathtt{score}[\ell]\ge \max\_\mathtt{score}$}{ $\max\_\mathtt{score}\gets \mathtt{score}[\ell]$ $\max\_\ell\gets \ell$ } } \For{$\beta=1,2,\dots,n$} { $\hat x_\beta \gets \mathtt{R}[\max\_\ell,0][\beta]$ } \Return $(\hat x_1,\hat x_2,\dots,\hat x_n)$ \end{algorithm} Before explaining the recursive decoding function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:st_decode_channel}, let us introduce some additional notation. Recall that we defined a vector $\hat{\mathbi{o}}_{i,\beta}^{(\lambda)}$ in \eqref{eq:decoded_output_vector} which consists of both the decoding results of intermediate vectors and the channel outputs. This notation is designed for the SC decoder because we only have a single decoding result in the whole SC decoding procedure.
However, we have multiple decoding results in the SCL decoder, so we need the following modification of the notation $\hat{\mathbi{o}}_{i,\beta}^{(\lambda)}$. For each $1 \le \ell \le L_c$, we use $\hat x_{i,\beta}^{(\ell,\lambda)}$ to denote the decoded value of $X_{i,\beta}^{(\lambda)}$ in the $\ell$th list element, and we define a vector \begin{equation} \label{eq:listdecoded} \hat{\mathbi{o}}_{i,\beta}^{(\ell,\lambda)} = (\hat x_{1,\beta}^{(\ell,\lambda)}, \hat x_{2,\beta}^{(\ell,\lambda)}, \dots, \hat x_{i-1,\beta}^{(\ell,\lambda)}, y_{1, \beta}^{(\lambda)}, y_{2, \beta}^{(\lambda)}, \dots, y_{2^\lambda, \beta}^{(\lambda)}) . \end{equation} Then \eqref{eq:PWI} becomes $$ \mathbb{P} \big( \mathbi{O}_{i,\beta}^{(\lambda)}=\hat{\mathbi{o}}_{i,\beta}^{(\ell,\lambda)} \big| X_{i,\beta}^{(\lambda)}=b \big) = W_{i}^{(2^\lambda)} \big( \hat{\mathbi{o}}_{i,\beta}^{(\ell,\lambda)} \big| b \big) \qquad \text{for~} b\in\{0,1\} . $$ \begin{lemma} Suppose that $0\le \lambda \le m$ and $1\le i\le 2^\lambda$. Before we call the function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:st_decode_channel} with input parameters $(\lambda, i)$, the pointer $\mathtt{P}[\ell, \lambda]$ satisfies that \begin{equation}\label{eq:prob} \mathtt{P}[\ell, \lambda][\beta, b] = W_{i}^{(2^\lambda)}(\hat{\mathbi o}_{i,\beta}^{(\ell,\lambda)} | b) \quad \text{for all~} 1\le \ell\le L_c,~ 1\le \beta \le 2^{m-\lambda} \text{~and~} b\in\{0,1\} . \end{equation} After the function $\texttt{decode\_channel}(\lambda, i)$ in Algorithm~\ref{algo:st_decode_channel} returns, the pointer $\mathtt{R}[\ell, \lambda]$ satisfies that \begin{equation} \label{eq:bit} \mathtt{R}[\ell, \lambda][\beta] = \hat{x}_{i,\beta}^{(\ell, \lambda)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda}. \end{equation} \end{lemma} \begin{proof} We prove \eqref{eq:bit} first, and we prove it by induction. 
Lines~1--2 of Algorithm~\ref{algo:st_decode_channel} deal with the base case $\lambda=m$, where we decode $U_i$ in the message vector $(U_1,U_2,\dots,U_n)$ by calling the function $\texttt{decode\_boundary\_channel}(i)$ in Algorithm~\ref{algo:st_decode_boundary_channel}. By \eqref{eq:subscript}, when $\lambda=m$, we have $X_{i,1}^{(m)}=X_i^{(n)}$. By \eqref{eq:xnn}, we further obtain that $X_{i,1}^{(m)}=U_i$. If $U_i$ is a frozen bit, then Line~17 of Algorithm~\ref{algo:st_decode_boundary_channel} immediately implies \eqref{eq:bit}. If $U_i$ is an information bit, we first use $\mathtt{\bar{R}}[\ell, m][1]$ to store the decoding result of $U_i$ in the $\ell$th list element\footnote{The variable $b$ in Line~11 of Algorithm~\ref{algo:st_decode_boundary_channel} is the decoding result of $U_i$ in the $\ell$th list element. We will explain Algorithm~\ref{algo:st_decode_boundary_channel} later.}; see Line~11 of Algorithm~\ref{algo:st_decode_boundary_channel}. Next we swap $\mathtt{\bar{R}}$ and $\mathtt{R}$ in Line~13, so \eqref{eq:bit} is satisfied. For the inductive step, we assume that \eqref{eq:bit} holds for $\lambda+1$ and prove it for $\lambda$. By this induction hypothesis, after executing Line~6 of Algorithm~\ref{algo:st_decode_channel}, we have $$ \mathtt{R}[\ell, \lambda+1][\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ After executing Line~8 and Line~12 of Algorithm~\ref{algo:st_decode_channel}, we have $$ \temppointer[\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ Again by the induction hypothesis, after executing Line~10, we have $$ \mathtt{R}[\ell, \lambda+1][\beta] = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ Since we set $n_c=2^\lambda$ in Line~4, we have $n/(2n_c)=2^{m-\lambda-1}$. 
Therefore, Lines~15--16 become \begin{equation} \label{eq:R+-} \begin{aligned} \mathtt{R}[\ell, \lambda][\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} , \quad \mathtt{R}[\ell, \lambda][\beta + 2^{m-\lambda-1}] = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{aligned} \end{equation} \eqref{eq:def_intermediate}--\eqref{eq:subscript} together imply that $$ X_{i,\beta}^{(\lambda)} = X_{2i-1,\beta}^{(\lambda+1)} + X_{2i,\beta}^{(\lambda+1)}, \quad X_{i,\beta+2^{m-\lambda-1}}^{(\lambda)} = X_{2i,\beta}^{(\lambda+1)} \quad \text{for all~} 1\le \beta\le 2^{m-\lambda-1}. $$ This further implies that \begin{equation} \label{eq:fthat} \begin{aligned} \hat{x}_{i,\beta}^{(\ell, \lambda)} = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i,\beta}^{(\ell, \lambda+1)}, \quad \hat{x}_{i,\beta+2^{m-\lambda-1}}^{(\ell, \lambda)} = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{aligned} \end{equation} Combining this with \eqref{eq:R+-}, we complete the proof of \eqref{eq:bit}. Next we prove \eqref{eq:prob} by induction. This time the base case is $\lambda=0$, and this case only occurs once in Line~8 of Algorithm~\ref{algo:ST_Decode} during the whole decoding procedure. Note that the channel $W_1^{(1)}$ is $W$ itself. Therefore, Lines~5--7 of Algorithm~\ref{algo:ST_Decode} immediately imply \eqref{eq:prob} for $\lambda=0$. For the inductive step, we assume that \eqref{eq:prob} holds for $\lambda$ and prove it for $\lambda+1$. By this induction hypothesis, \eqref{eq:prob} holds for $\lambda$ when we execute Line~5 of Algorithm~\ref{algo:st_decode_channel}. In other words, the array associated with the pointer $\mathtt{P}[\ell, \lambda]$ stores the transition probabilities of $W_i^{(2^\lambda)}$. 
By \eqref{eq:recur_bit_channels}, $W_{2i-1}^{(2^{\lambda+1})}$ is the ``$-$" transform of $W_i^{(2^\lambda)}$. The function \texttt{calculate\_$-$\_transform}$(\lambda+1)$ calculates the ``$-$" transform of $W_i^{(2^\lambda)}$ and stores the results in the array associated with the pointer $\mathtt{P}[\ell, \lambda+1]$, so \eqref{eq:prob} holds before we call $\texttt{decode\_channel}$ in Line~6 of Algorithm~\ref{algo:st_decode_channel}. Again by \eqref{eq:recur_bit_channels}, $W_{2i}^{(2^{\lambda+1})}$ is the ``$+$" transform of $W_i^{(2^\lambda)}$. The function \texttt{calculate\_$+$\_transform}$(\lambda+1)$ in Line~9 of Algorithm~\ref{algo:st_decode_channel} calculates the ``$+$" transform of $W_i^{(2^\lambda)}$ and stores the results in the array associated with the pointer $\mathtt{P}[\ell, \lambda+1]$, so \eqref{eq:prob} holds before we call $\texttt{decode\_channel}$ in Line~10 of Algorithm~\ref{algo:st_decode_channel}. During the whole decoding procedure, the function $\texttt{decode\_channel}$ is only called in Line~8 of Algorithm~\ref{algo:ST_Decode} and Lines~6,10 of Algorithm~\ref{algo:st_decode_channel}. We have proved that \eqref{eq:prob} holds for all three places. This completes the proof of the lemma. \end{proof} Now let us explain how Algorithm~\ref{algo:st_decode_boundary_channel} works when $U_i$ is an information bit. First, we explore both cases $U_i=0$ and $U_i=1$ for every list element; see Lines~2--4. The variable $b$ in Lines~3--4 represents the (possible) value of $U_i$. Since we explore two possible paths for each existing list element, we have expanded the list size by a factor of $2$ after executing Lines~2--4. If the current list size is larger than $L$, then we need to prune the list, and this is done in Lines~5--13. In Line~5, we update the current list size $L_c$ to be the smaller value among $L$ and the size of $\texttt{PriQue}$. 
Then in Lines~6--11, we execute \texttt{PriQue.pop()} $L_c$ times to obtain the $L_c$ elements in the queue with the largest value of $\mathtt{score}[\ell]$. By Line~4, $\mathtt{score}[\ell]$ stores the transition probability $\mathtt{P}[\ell, m][1,b]$, which measures the likelihood of the $\ell$th list element. Therefore, we obtain the $L_c$ list elements with the largest likelihood after executing Lines~6--11. \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_channel$(\lambda, i)$}} \label{algo:st_decode_channel} \KwIn{layer $\lambda\in \{0,1,2,\dots, m\}$ and index $i\in \{1, 2, \dots, 2^\lambda\}$} \uIf{$\lambda = m$}{ \texttt{decode\_boundary\_channel}$(i)$ \Comment{Algorithm~\ref{algo:st_decode_boundary_channel}} }\Else{ $n_c\gets 2^\lambda$ \Comment{$W_{2i-1}^{(2n_c)} = (W_{i}^{(n_c)})^-$} \texttt{calculate\_$-$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i-1)$ \For{$\ell \in\{1,2,\dots, L_c\}$}{ $\mathtt{R}[\ell, \lambda]\gets\mathtt{R}[\ell, \lambda+1]$ } \texttt{calculate\_$+$\_transform}$(\lambda+1)$ \Comment{$W_{2i}^{(2n_c)} = (W_{i}^{(n_c)})^+$} \texttt{decode\_channel}$(\lambda+1, 2i)$ \For{$\ell\in\{1,2,\dots, L_c\}$}{ $\temppointer\gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda]\gets \texttt{allocate\_bit}(\lambda)$ \For{$\beta \in\{1, 2, \dots, n/(2n_c)\}$}{ $\mathtt{R}[\ell, \lambda][\beta]\gets \temppointer[\beta] + \mathtt{R}[\ell,\lambda+1][\beta]$ $\mathtt{R}[\ell, \lambda][\beta + n/(2n_c)]\gets \mathtt{R}[\ell,\lambda+1][\beta]$ } } $\mathtt{N_B}[\lambda+1]\gets 0$ } $\mathtt{N_D}[\lambda]\gets 0$ \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_boundary\_channel$(i)$}} \label{algo:st_decode_boundary_channel} \KwIn{index $i$ in the last layer $(\lambda = m)$} \eIf(\Comment{$U_i$ is an information bit}){$i\in\mathcal{A}$}{ \For{$\ell \in \{1,2,\dots, L_c\}$}{ \For{$b\in\{0,1\}$}{ $\texttt{PriQue.push}(\ell, b,\mathtt{P}[\ell, m][1,b])$ } } $L_c\gets \min\{L,
\texttt{PriQue.size()}\}$ \For{$\ell \in\{1,2,\dots, L_c\}$}{ $(\ell', b, \mathtt{score}[\ell])\gets \texttt{PriQue.pop()}$ \For{$\lambda \in\{0,1,2,\dots, m-1\}$}{ $(\mathtt{\bar{P}}[\ell,\lambda], \mathtt{\bar{R}}[\ell,\lambda]) \gets (\mathtt{P}[\ell',\lambda], \mathtt{R}[\ell',\lambda])$ } $\mathtt{\bar{R}}[\ell, m]\gets \texttt{allocate\_bit}(m)$ $\mathtt{\bar{R}}[\ell, m][1]\gets b$ } $\texttt{PriQue.clear()}$ \Comment{Remove all the remaining elements} $\texttt{swap}(\mathtt{\bar{P}}, \mathtt{P})$,\quad $\texttt{swap}(\mathtt{\bar{R}}, \mathtt{R})$ }(\Comment{$U_i$ is a frozen bit}){ \For{$\ell \in \{1,2,\dots, L_c\}$}{ $\mathtt{R}[\ell, m]\gets \texttt{allocate\_bit}(m)$ $\mathtt{R}[\ell, m][1]\gets$ frozen value of $U_i$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$-$\_transform}$(\lambda)$} \label{algo:calcu_-_proba} \KwIn{layer $1\le \lambda \le m$} \KwOut{Update the entries pointed by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c $ } $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, a \in\{0,1\}$}{ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,a] \gets$ $\frac12\sum_{b\in\{0,1\}} \mathtt{P}[\ell, \lambda-1][\beta, a+b]\mathtt{P}[\ell, \lambda-1][\beta',b]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$+$\_transform}$(\lambda)$} \label{algo:calcu_+_proba} \KwIn{layer $1\le \lambda \le m$} \KwOut{Update the entries pointed by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c $ } $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\},b\in\{0,1\}$}{ $a \gets \mathtt{R}[\ell, \lambda-1][\beta]$ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,b] \gets$ $\frac12 
\mathtt{P}[\ell, \lambda-1][\beta, a+b]\mathtt{P}[\ell, \lambda-1][\beta',b]$ } } \Return \end{algorithm} The next lemma shows that the data structures $\mathtt{D}$ and $\mathtt{B}$ are large enough to store the transition probabilities and the decoding results of the intermediate vectors throughout the decoding procedure. \begin{lemma}\label{lemma:st_space} Throughout the whole decoding procedure, we have $\mathtt{N_D}[\lambda]\le L$ and $\mathtt{N_B}[\lambda]\le 2L$ for all $0\le \lambda \le m$. The space complexity of the SCL decoder is $O(Ln)$. \end{lemma} \begin{proof} For every $0 \le \lambda\le m$ and every $1\le i\le 2^{\lambda}$, the function $\texttt{decode\_channel}(\lambda, i)$ is called only once. Moreover, the function $\texttt{decode\_channel}(\lambda, i+1)$ is always called after the function $\texttt{decode\_channel}(\lambda, i)$ returns. Each time we call the function $\texttt{decode\_channel}(\lambda, i)$, we only need to store the transition probabilities for $L_c\le L$ different decoding paths, and we always reset $\mathtt{N_D}[\lambda]$ to $0$ before the function $\texttt{decode\_channel}(\lambda, i)$ returns, so $\mathtt{N_D}[\lambda]\le L$. We need to store the decoding results of intermediate vectors for $L_c\le L$ list elements when we call the function $\texttt{decode\_channel}(\lambda+1, 2i-1)$ in Line~6 of Algorithm~\ref{algo:st_decode_channel}. Similarly, we need to store the decoding results of intermediate vectors for another $L'_c\le L$ list elements\footnote{We use $L'_c$ here because the current list size may change during the decoding procedure.} when we call the function $\texttt{decode\_channel}(\lambda+1, 2i)$ in Line~10 of Algorithm~\ref{algo:st_decode_channel}. Therefore, before we reset $\mathtt{N_B}[\lambda+1]$ to $0$ in Line~17, we have $\mathtt{N_B}[\lambda+1]=L_c+L'_c\le 2L$. This proves that $\mathtt{N_B}[\lambda]$ cannot exceed $2L$ for all $0\le \lambda \le m$.
Next we prove the $O(Ln)$ space complexity of the SCL decoder. The number of entries in the array $\mathtt{D}$ is upper bounded by $$ 2L \sum_{\lambda=0}^m 2^{m-\lambda}= 2L(1+2+4+\dots+2^m) < 2L\cdot 2^{m+1}=4Ln . $$ Similarly, the number of entries in $\mathtt{B}$ is upper bounded by $$ 2L \sum_{\lambda=0}^m 2^{m-\lambda} < 4Ln . $$ The number of entries in both $\mathtt{N_D}$ and $\mathtt{N_B}$ is $O(\log(n))$. The number of entries in both $\mathtt{score}$ and $\texttt{PriQue}$ is $O(L)$. The number of entries in the pointer arrays $\mathtt{P},\mathtt{\bar{P}},\mathtt{R},\mathtt{\bar{R}}$ is $O(L\log(n))$. Adding these up gives us the $O(Ln)$ space complexity. \end{proof} \begin{proposition} The time complexity of the SCL decoder for standard polar codes is $O(Ln\log(n))$. \end{proposition} \subsection{SCL decoder for standard polar codes based on the Double-Bits polar transform}\label{sect:ST_decoder_DB} In this subsection, we present a new SCL decoder for standard polar codes based on the Double-Bits polar transform in Fig.~\ref{fig:DBpt}. We still use the notation in \eqref{eq:xnn}--\eqref{eq:subscript} and \eqref{eq:listdecoded}. By \eqref{eq:st_adc}, the channel mapping from $(X_{i,\beta}^{(\lambda)}, X_{i+1,\beta}^{(\lambda)})$ to $\mathbi{O}_{i,\beta}^{(\lambda)}$ is the adjacent-bits-channel $V_{i}^{(2^\lambda)}$ for every $1\le \beta \le 2^{m-\lambda}$, i.e., \begin{equation} \label{eq:PVI} \mathbb{P} \big( \mathbi{O}_{i,\beta}^{(\lambda)}=\hat{\mathbi{o}}_{i,\beta}^{(\ell,\lambda)} \big| X_{i,\beta}^{(\lambda)}=a, X_{i+1,\beta}^{(\lambda)}=b \big) = V_{i}^{(2^\lambda)} \big( \hat{\mathbi{o}}_{i,\beta}^{(\ell,\lambda)} \big| a,b \big) \qquad \text{for~} a,b\in\{0,1\} . \end{equation} Below we list the data structures of the new SCL decoder for standard polar codes based on the DB polar transform.
\begin{enumerate}[(i)] \item 5-dimensional \emph{probability array $\mathtt{D}$.} The entries in the array $\mathtt{D}$ are indexed as \begin{align*} \mathtt{D}[\lambda, s, \beta, a, b],\quad & 1\le \lambda \le m,\qquad~ 1\le s\le L, \\ & 1\le \beta \le 2^{m-\lambda},\quad 0\le a, b \le 1 . \end{align*} For each $1\le \lambda \le m, 1\le s \le L$, we define a subarray of $\mathtt{D}$ as $$ \mathtt{D}[\lambda, s] = (\mathtt{D}[\lambda, s, \beta, a, b],\quad 1\le \beta\le 2^{m-\lambda}, \quad 0\le a,b\le 1), $$ and we use $\vec{\mathtt{D}}[\lambda, s]$ to denote the pointer to the head address of $\mathtt{D}[\lambda, s]$. In the algorithms below, we will write $\mathtt{D}[\lambda, s, \beta, a, b]$ and $\vec{\mathtt{D}}[\lambda, s][\beta, a, b]$ interchangeably. Each array $\mathtt{D}[\lambda, s]$ is used to store a set of transition probabilities in \eqref{eq:PVI}. \item 1-dimensional \emph{integer array $\mathtt{N_D}$.} The entries of $\mathtt{N_D}$ are $\mathtt{N_D}[\lambda], 1\le \lambda \le m$. This array is defined in the same way as the previous subsection. \item 3-dimensional \emph{bit array $\mathtt{B}$.} The entries in the array $\mathtt{B}$ are indexed as \begin{equation} \label{eq:B_index} \mathtt{B}[\lambda, s, \beta],\quad 1\le \lambda \le m, \quad 1\le s\le 4L, \quad 1\le \beta \le 2^{m-\lambda}. \end{equation} For each $1\le \lambda \le m, 1\le s \le 4L$, we define a subarray of $\mathtt{B}$ as \begin{equation*} \mathtt{B}[\lambda, s]= (\mathtt{B}[\lambda, s, \beta],\quad 1\le \beta\le 2^{m-\lambda}), \end{equation*} and we use $\vec{\mathtt{B}}[\lambda, s]$ to denote the pointer to the head address of $\mathtt{B}[\lambda, s]$. In the algorithms below, we will write $\mathtt{B}[\lambda, s, \beta]$ and $\vec{\mathtt{B}}[\lambda, s][\beta]$ interchangeably. Each array $\mathtt{B}[\lambda, s]$ is used to store a set of decoding results of the intermediate vectors. 
\item 1-dimensional \emph{integer array $\mathtt{N_B}$.} The entries of $\mathtt{N_B}$ are $\mathtt{N_B}[\lambda], 1\le \lambda \le m$. The entry $\mathtt{N_B}[\lambda]$ takes value in the set $\{0,1,2,\dots,4L\}$ for every $1\le \lambda \le m$. The meaning of $\mathtt{N_B}[\lambda]$ is the same as the previous subsection. \item 1-dimensional \emph{probability array $\mathtt{score}$,} defined in the same way as the previous subsection. \item 2-dimensional \emph{pointer arrays $\mathtt{P}, \mathtt{\bar{P}}$.} Their entries are $$ \mathtt{P}=(\mathtt{P}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m), \qquad \mathtt{\bar{P}}=(\mathtt{\bar{P}}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m) . $$ They are used in the same way as the previous subsection. \item 2-dimensional \emph{pointer arrays $\mathtt{R}, \mathtt{\bar{R}}$.} Their entries are $$ \mathtt{R}=(\mathtt{R}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m), \qquad \mathtt{\bar{R}}=(\mathtt{\bar{R}}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m) . $$ They are used in the same way as the previous subsection. \item 2-dimensional \emph{pointer arrays $\mathtt{H}, \mathtt{\bar{H}}$.} Their entries are $$ \mathtt{H}=(\mathtt{H}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m), \qquad \mathtt{\bar{H}}=(\mathtt{\bar{H}}[\ell,\lambda],~ 1\le \ell\le L,~ 1\le \lambda \le m) . $$ These two pointer arrays serve as backups of $\mathtt{R}$ and $\mathtt{\bar{R}}$. We use $\mathtt{H}, \mathtt{\bar{H}}$ when all the entries in $\mathtt{R}$ and $\mathtt{\bar{R}}$ are occupied. \item \emph{priority queue $\texttt{PriQue}$.} $\texttt{PriQue}$ is defined essentially in the same way as the previous subsection. The only difference is that each element in the queue changes from a triple $(\ell, b, \prob)$ to a quadruple $(\ell, a, b, \prob)$. 
The quadruple $(\ell, a, b, \prob)$ has the following meaning: When we decode $U_i$ and $U_{i+1}$ in the last layer $\lambda = m$, the (posterior) probability of $(U_i = a, U_{i+1}=b)$ in the $\ell$th decoding path is $\prob$. \end{enumerate} Below we list the main differences between the data structures in this subsection and the previous subsection. \begin{enumerate}[(1)] \item The range of $\lambda$ in all the data structures changes from $0\le \lambda \le m$ (previous subsection) to $1\le \lambda \le m$ (this subsection). \item The dimension of the probability array $\mathtt{D}$ changes from $4$ (previous subsection) to $5$ (this subsection). \item The range of the index $s$ in the array $\mathtt{B}$ changes from $1\le s\le 2L$ (previous subsection) to $1\le s\le 4L$ (this subsection). \item We have two more pointer arrays $\mathtt{H}, \mathtt{\bar{H}}$ in this subsection. \item Each element in the priority queue $\texttt{PriQue}$ changes from a triple $(\ell, b, \prob)$ to a quadruple $(\ell, a, b, \prob)$. \end{enumerate} For the SCL decoder presented in this subsection, the $\ell$th list element has the following fields: \begin{equation}\label{eq:stDB_list_field} \begin{aligned} & (\mathtt{P}[\ell,1], \mathtt{P}[\ell,2], \dots, \mathtt{P}[\ell, m]), \\ & (\mathtt{R}[\ell,1], \mathtt{R}[\ell,2], \dots, \mathtt{R}[\ell, m]), \\ & (\mathtt{H}[\ell,1], \mathtt{H}[\ell,2], \dots, \mathtt{H}[\ell, m]), \\ & \mathtt{score}[\ell] . \end{aligned} \end{equation} We still use the function $\texttt{allocate\_prob}(\lambda)$ in Algorithm~\ref{algo:st_allocate_prob} although the range of $\lambda$ is $\{1,2,\dots, m\}$ in this subsection. However, we will use the function $\texttt{allocate\_bit}$ in Algorithm~\ref{algo:allocate_bit} for the new decoder in this subsection, which is different from the function with the same name in Algorithm~\ref{algo:st_allocate_bit}. 
The main difference is that the function $\texttt{allocate\_bit}$ in Algorithm~\ref{algo:allocate_bit} has an extra input parameter $k$, which takes value in $\{1,2\}$. In this subsection, the decoder makes decisions according to the transition probabilities of adjacent-bits-channels. Each adjacent-bits-channel has two input bits. In some cases we only decode one bit while in other cases we need to decode both bits. The input parameter $k$ in Algorithm~\ref{algo:allocate_bit} corresponds to the number of input bits we need to decode for each adjacent-bits-channel. We do not have the parameter $k$ in Algorithm~\ref{algo:st_allocate_bit} because each bit-channel only has one input bit. \begin{algorithm}[ht] \DontPrintSemicolon \caption{$\texttt{allocate\_bit}(\lambda, k)$} \label{algo:allocate_bit} \KwIn{layer $\lambda\in\{1, 2, \dots, m\}$ and an integer $k\in\{1,2\}$} \KwOut{a pointer to the allocated memory} $s \gets \mathtt{N_B}[\lambda] + 1$ $\mathtt{N_B}[\lambda]\gets \mathtt{N_B}[\lambda] + k$ \Return $\vec{\mathtt{B}}[\lambda, s]$ \end{algorithm} We present the main function $\texttt{decode}$ in Algorithm~\ref{algo:ABS_Decoding}. The first 3 lines initialize the parameters. In Line~4, we assign the pointer $\vec{\mathtt{D}}[1,1]$ to $\mathtt{P}[1, 1]$ and update the value of $\mathtt{N_D}[1]$ to be $1$. In Lines~5--7, we calculate the transition probabilities $V_1^{(2)}(y_\beta,y_{\beta+n/2}|a,b)$ for $1\le\beta\le n/2$ and $a,b\in\{0,1\}$ using \eqref{eq:v_init} and store $V_1^{(2)}(y_\beta,y_{\beta+n/2}|a,b)$ in $\mathtt{P}[1,1][\beta,a,b]$. Line~8 executes recursive decoding which we will explain later. After recursive decoding, we obtain $L_c$ list elements. In the $\ell$th list element, $\mathtt{score}[\ell]$ is the transition probability which measures the likelihood of this list element. In Lines~9--14, we pick the list element with the maximum $\mathtt{score}[\ell]$. 
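As a quick illustration, the counter update performed by $\texttt{allocate\_bit}(\lambda, k)$ in Algorithm~\ref{algo:allocate_bit} can be sketched in Python as follows. This is a hypothetical rendering that models only the counter logic; the returned index stands in for the pointer $\vec{\mathtt{B}}[\lambda, s]$.

```python
# Hypothetical sketch of allocate_bit(lam, k): reserve k in {1, 2} consecutive
# bit buffers at layer lam and return (the index of) the first one.

def allocate_bit(N_B, lam, k):
    """Advance N_B[lam] by k and return the first reserved slot (1-indexed)."""
    s = N_B[lam] + 1
    N_B[lam] += k
    return s

m = 3
N_B = [0] * (m + 1)
s1 = allocate_bit(N_B, 2, 2)   # decode both input bits of an adjacent-bits-channel
s2 = allocate_bit(N_B, 2, 1)   # decode only the first input bit
assert (s1, s2, N_B[2]) == (1, 3, 3)
```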
Recall that $\hat x_{i,\beta}^{(\ell,\lambda)}$ is the decoded value of $X_{i,\beta}^{(\lambda)}$ in the $\ell$th list element. As we will prove in Lemma~\ref{lm:566} below, after recursive decoding, we have $$ \mathtt{R}[\ell,1][\beta] = \hat x_{1,\beta}^{(\ell,1)} , \quad \mathtt{R}[\ell,1][\beta+n/2] = \hat x_{2,\beta}^{(\ell,1)} \quad \text{for~} 1\le\ell\le L_c \text{~and~} 1\le\beta\le n/2 . $$ Since the codeword vector $(X_1,\dots,X_n)$ and the intermediate vectors $(X_{1,1}^{(1)},\dots,X_{1,n/2}^{(1)}), (X_{2,1}^{(1)},\dots,X_{2,n/2}^{(1)})$ satisfy $$ X_{\beta} = X_{1,\beta}^{(1)} + X_{2,\beta}^{(1)} , \quad X_{\beta+n/2} = X_{2,\beta}^{(1)} \quad \text{for~} 1\le\beta\le n/2 , $$ we further have $$ \hat x_\beta^{(\ell)} = \mathtt{R}[\ell,1][\beta] + \mathtt{R}[\ell,1][\beta + n/2] , \quad \hat x_{\beta+n/2}^{(\ell)} = \mathtt{R}[\ell,1][\beta + n/2] \quad \text{for~} 1\le\beta\le n/2 , $$ where $(\hat x_1^{(\ell)},\hat x_2^{(\ell)},\dots,\hat x_n^{(\ell)})$ is the decoding result of the codeword vector in the $\ell$th list element. This is how we calculate the final decoding result in Lines~15--17. 
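The mapping from the decoded layer-$1$ intermediate vectors back to a codeword (Lines~15--17 of the main function) can be sketched as follows. The additions above are over the binary field, so ``$+$'' is XOR; the function name \texttt{reconstruct\_codeword} and the sample input are ours, and the list \texttt{R1} plays the role of $\mathtt{R}[\max\_\ell,1]$ with $0$-based indices.

```python
def reconstruct_codeword(R1, n):
    """Recover (x_1,...,x_n) from the decoded layer-1 intermediate vector.

    R1[0:n//2] holds the decoded bits of the first intermediate vector and
    R1[n//2:n] those of the second, mirroring R[max_l, 1] in the paper.
    """
    x = [0] * n
    for beta in range(n // 2):
        # x_beta = x1_beta + x2_beta over GF(2), i.e. XOR
        x[beta] = R1[beta] ^ R1[beta + n // 2]
        # x_{beta + n/2} = x2_beta
        x[beta + n // 2] = R1[beta + n // 2]
    return x


# Example with n = 4 and decoded intermediate bits (1, 0, 1, 1):
print(reconstruct_codeword([1, 0, 1, 1], 4))  # -> [0, 1, 1, 1]
```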
\begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{Decode}$((y_1,y_2,\dots,y_n)) \quad$} \label{algo:ABS_Decoding} \KwIn{the received vector $(y_1,y_2,\dots,y_n)\in\mathcal{Y}^n$} \KwOut{the decoded codeword $(\hat x_1,\hat x_2,\dots,\hat x_n)\in\{0,1\}^n$} \For{\em $\lambda \in \{1, 2, \dots, m\}$}{ $\mathtt{N_D}[\lambda] \gets \mathtt{N_B}[\lambda] \gets 0$ } $L_c\gets 1$ $\mathtt{P}[1, 1]\gets \texttt{allocate\_prob}(1)$ \For{\em $\beta \in \{1, 2, \dots, n/2\}$}{ \For{\em $a\in \{0,1\}$, $b\in \{0,1\}$}{ $\mathtt{P}[1,1][\beta,a,b]\gets W(y_\beta|a+b)\cdot W(y_{\beta+n/2}|b)$ } } \texttt{decode\_channel$(1, 1)$} \Comment{Recursive decoding} $\max\_\mathtt{score}\gets 0$ $\max\_\ell\gets 0$ \For{$\ell\in\{1,2,\dots, L_c\}$}{ \If{$\mathtt{score}[\ell]\ge \max\_\mathtt{score}$}{ $\max\_\mathtt{score}\gets \mathtt{score}[\ell]$ $\max\_\ell\gets \ell$ } } \For{$\beta=1,2,\dots,n/2$} { $\hat x_\beta \gets \mathtt{R}[\max\_\ell,1][\beta] + \mathtt{R}[\max\_\ell,1][\beta + n/2]$ $\hat x_{\beta+n/2} \gets \mathtt{R}[\max\_\ell,1][\beta + n/2]$ } \Return $(\hat x_1,\hat x_2,\dots,\hat x_n)$ \end{algorithm} The recursive decoding function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:STDB_decode_channel} has two branches. If $\lambda=m$, we call the function $\texttt{decode\_boundary\_channel}$ in Algorithm~\ref{algo:decode_boundary_channel}. If $\lambda < m$, we call the function $\texttt{decode\_original\_channel}$ in Algorithm~\ref{algo:st_decode_ori_channel}. In Algorithms~\ref{algo:decode_boundary_channel}--\ref{algo:st_decode_ori_channel}, we only decode $(X_{i,\beta}^{(\lambda)}, 1\le \beta\le 2^{m-\lambda})$ if $i\le 2^{\lambda}-2$; we decode both $(X_{i,\beta}^{(\lambda)}, 1\le \beta\le 2^{m-\lambda})$ and $(X_{i+1,\beta}^{(\lambda)}, 1\le \beta\le 2^{m-\lambda})$ if $i=2^\lambda-1$. The following lemma further explains how Algorithms~\ref{algo:STDB_decode_channel},\ref{algo:decode_boundary_channel}--\ref{algo:st_decode_ori_channel} work. 
\begin{lemma} \label{lm:566} Suppose that $1\le \lambda \le m$ and $1\le i\le 2^\lambda-1$. Before we call the function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:STDB_decode_channel} with input parameters $(\lambda, i)$, the pointer $\mathtt{P}[\ell, \lambda]$ satisfies that \begin{equation}\label{eq:1prob} \mathtt{P}[\ell, \lambda][\beta, a, b] = V_{i}^{(2^\lambda)}(\hat{\mathbi o}_{i,\beta}^{(\ell,\lambda)} | a, b) \quad \text{for all~} 1\le \ell\le L_c,~ 1\le \beta \le 2^{m-\lambda} \text{~and~} a,b\in\{0,1\} . \end{equation} After the function $\texttt{decode\_channel}(\lambda, i)$ in Algorithm~\ref{algo:STDB_decode_channel} returns, the pointer $\mathtt{R}[\ell, \lambda]$ satisfies that \begin{equation} \label{eq:1bit} \mathtt{R}[\ell, \lambda][\beta] = \hat{x}_{i,\beta}^{(\ell, \lambda)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda}. \end{equation} Moreover, if $i=2^\lambda-1$, then the pointer $\mathtt{R}[\ell, \lambda]$ further satisfies that \begin{equation} \label{eq:2bit} \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda}] = \hat{x}_{i+1,\beta}^{(\ell, \lambda)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda}. \end{equation} \end{lemma} \begin{proof} We first prove \eqref{eq:1bit}--\eqref{eq:2bit} by induction. Algorithm~\ref{algo:decode_boundary_channel} deals with the base case $\lambda=m$. Recall from \eqref{eq:subscript} that $X_{i,1}^{(m)}=X_i^{(n)}$ and $X_{i+1,1}^{(m)}=X_{i+1}^{(n)}$ when $\lambda=m$. By \eqref{eq:xnn}, we further obtain $X_{i,1}^{(m)}=U_i$ and $X_{i+1,1}^{(m)}=U_{i+1}$. The discussion below is divided into two cases. {\bf Case (1) $i\le n-2$:} If $U_i$ is a frozen bit, then Line~11 of Algorithm~\ref{algo:decode_boundary_channel} immediately implies \eqref{eq:1bit}. If $U_i$ is an information bit, then we explore both decoding paths $U_i=0$ and $U_i=1$ for every list element, where the variable $a$ in Lines~4--6 represents the (possible) value of $U_i$. 
The question mark ``?'' in Line~6 means that we do not need to decode $U_{i+1}$ when $i\le n-2$. Since we expand the current list size by a factor of $2$ in Lines~4--6, the current list size might exceed the prescribed upper bound $L$. In this case, we prune the list according to $\mathtt{score}[\ell]$ in Lines~29--44. The variables $a$ and $b$ in Lines~29--44 represent the decoded values of $U_i$ and $U_{i+1}$ in each list element, respectively. In Line~42, we use $\mathtt{\bar{R}}[\ell, m][1]$ to temporarily store the decoding result of $U_i$ in the $\ell$th list element, and we use $\mathtt{\bar{R}}[\ell, m][2]$ to temporarily store the decoding result of $U_{i+1}$ in the $\ell$th list element. Next we swap $\mathtt{\bar{R}}$ and $\mathtt{R}$ in Line~44, so \eqref{eq:1bit}--\eqref{eq:2bit} are satisfied. {\bf Case (2) $i= n-1$:} This case is handled in Lines~12--28. Note that $U_{n-1}$ and $U_n$ in Lines~17, 21, and 25 refer to their frozen values (or true values). The argument for Case (2) is similar to Case (1), and we do not repeat it here. For the inductive step, we assume that \eqref{eq:1bit}--\eqref{eq:2bit} hold for $\lambda+1$ and prove them for $\lambda$. By this induction hypothesis, after executing Lines~1--5 of Algorithm~\ref{algo:st_decode_ori_channel}, we have $$ \mathtt{H}[\ell, \lambda+1][\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ In Line~1, we set $n_c=2^\lambda$. We again divide the discussion into two cases. {\bf Case (1) $i\le n_c-2$:} In this case, we only need to prove \eqref{eq:1bit}. After executing Lines~7--10 and Line~14, we have $$ \temppointer[\beta] = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ Since $n_c=2^\lambda$, we have $n/(2n_c)=2^{m-\lambda-1}$. Therefore, Lines~18--19 of Algorithm~\ref{algo:st_decode_ori_channel} become \eqref{eq:R+-}. 
Combining \eqref{eq:R+-} with \eqref{eq:fthat}, we finish the proof of \eqref{eq:1bit} for Case (1). {\bf Case (2) $i= n_c-1$:} In this case, we need to prove both \eqref{eq:1bit} and \eqref{eq:2bit}. The proof of \eqref{eq:1bit} is exactly the same as Case (1). To prove \eqref{eq:2bit}, we observe that if $i= n_c-1$, then $2i+1=2n_c-1=2^{\lambda+1}-1$. Then by the induction hypothesis, after executing Line~24, we have \begin{align*} \mathtt{R}[\ell, \lambda+1][\beta] = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} , \quad \mathtt{R}[\ell, \lambda+1][\beta+2^{m-\lambda-1}] = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{align*} Therefore, Lines~32--34 become \begin{equation} \label{eq:lmd+1} \begin{aligned} \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda}] = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)}, \quad \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda-1}+2^{m-\lambda}] = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{aligned} \end{equation} Replacing $i$ with $i+1$ in \eqref{eq:fthat} we obtain \begin{align*} \hat{x}_{i+1,\beta}^{(\ell, \lambda)} = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)}, \quad \hat{x}_{i+1,\beta+2^{m-\lambda-1}}^{(\ell, \lambda)} = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{align*} Combining this with \eqref{eq:lmd+1}, we complete the proof of \eqref{eq:2bit}. Next we prove \eqref{eq:1prob} by induction. This time the base case is $\lambda=1$, and this case only occurs once in Line~8 of Algorithm~\ref{algo:ABS_Decoding} during the whole decoding procedure. By \eqref{eq:v_init}, we have $V_1^{(2)}(y_\beta, y_{\beta+n/2}| a,b) = W(y_\beta|a+b)\cdot W(y_{\beta+n/2}|b)$. 
Therefore, Lines~5--7 of Algorithm~\ref{algo:ABS_Decoding} immediately imply \eqref{eq:1prob} for $\lambda=1$. For the inductive step, we assume that \eqref{eq:1prob} holds for $\lambda$ and prove it for $\lambda+1$. By this induction hypothesis, \eqref{eq:1prob} holds for $\lambda$ when we execute Line~2 of Algorithm~\ref{algo:st_decode_ori_channel}. In other words, the array associated with the pointer $\mathtt{P}[\ell, \lambda]$ stores the transition probabilities of $V_i^{(2^\lambda)}$. By Lemma~\ref{lemma:recur_ST_DB}, $V_{2i-1}^{(2^{\lambda+1})}$ is the ``$\triangledown$'' transform of $V_i^{(2^\lambda)}$. The function \texttt{calculate\_$\triangledown$\_transform}$(\lambda+1)$ calculates the ``$\triangledown$'' transform of $V_i^{(2^\lambda)}$ and stores the results in the array associated with the pointer $\mathtt{P}[\ell, \lambda+1]$, so \eqref{eq:1prob} holds before we call $\texttt{decode\_channel}$ in Line~3 of Algorithm~\ref{algo:st_decode_ori_channel}. Again by Lemma~\ref{lemma:recur_ST_DB}, $V_{2i}^{(2^{\lambda+1})}$ is the ``$\lozenge$'' transform of $V_i^{(2^\lambda)}$. The function \texttt{calculate\_$\lozenge$\_transform}$(\lambda+1)$ in Line~7 of Algorithm~\ref{algo:st_decode_ori_channel} calculates the ``$\lozenge$'' transform of $V_i^{(2^\lambda)}$ and stores the results in the array associated with the pointer $\mathtt{P}[\ell, \lambda+1]$, so \eqref{eq:1prob} holds before we call $\texttt{decode\_channel}$ in Line~8 of Algorithm~\ref{algo:st_decode_ori_channel}. Using exactly the same method, we can show that \eqref{eq:1prob} also holds before we call $\texttt{decode\_channel}$ in Line~24 of Algorithm~\ref{algo:st_decode_ori_channel}. During the whole decoding procedure, the function $\texttt{decode\_channel}$ is only called in Line~8 of Algorithm~\ref{algo:ABS_Decoding} and Lines~3, 8, and 24 of Algorithm~\ref{algo:st_decode_ori_channel}. We have proved that \eqref{eq:1prob} holds in all four places. This completes the proof of the lemma. 
\end{proof} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_channel$(\lambda, i)$}} \label{algo:STDB_decode_channel} \KwIn{layer $\lambda\in \{1,2,\dots, m\}$ and index $i\in \{1, 2, \dots, 2^\lambda-1\}$} \uIf{$\lambda = m$}{ \texttt{decode\_boundary\_channel}$(i)$ \Comment{Algorithm~\ref{algo:decode_boundary_channel}} } \Else{ \texttt{decode\_original\_channel}$(\lambda, i)$ \Comment{Algorithm~\ref{algo:st_decode_ori_channel}} } $\mathtt{N_D}[\lambda]\gets 0$ \Return \end{algorithm} Before proceeding further, let us explain the meaning of the boolean variable ``flag'' in Algorithm~\ref{algo:decode_boundary_channel}. The variable flag takes value $0$ if we do not expand the decoding list in the decoding procedure, and it takes value $1$ otherwise. In Algorithm~\ref{algo:decode_boundary_channel}, we do not expand the decoding list if and only if we only decode frozen bits. There are two such cases, one in Lines~7--11 and the other in Lines~13--17. We set the variable flag to be $0$ in both cases. In all the other cases, we need to decode at least one information bit and to expand the list size by a factor of at least $2$, so we set the variable flag to be $1$. If flag $= 0$, then the list size does not change, and we do not need to prune the list. Therefore, we only prune the list when flag $= 1$; see Line~29. \begin{remark} The calculations in Line~6 of Algorithm~\ref{algo:calcu_oria_proba} correspond to the ``$\triangledown$'' transform in Fig.~\ref{fig:DBpt} and the first equation in \eqref{eq:DBpt}. The calculations in Line~7 of Algorithm~\ref{algo:calcu_orib_proba} correspond to the ``$\lozenge$'' transform in Fig.~\ref{fig:DBpt} and the second equation in \eqref{eq:DBpt}. The calculations in Lines~7--8 of Algorithm~\ref{algo:calcu_oric_proba} correspond to the ``$\vartriangle$'' transform in Fig.~\ref{fig:DBpt} and the third equation in \eqref{eq:DBpt}. 
This is why we say that the SCL decoder presented in this subsection is based on the DB polar transform. \end{remark} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\triangledown$\_transform}$(\lambda)$} \label{algo:calcu_oria_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed to by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c$} $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, r_1,r_2\in\{0,1\}$}{ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_1,r_2] \gets$ $\frac14\sum_{r_3, r_4\in\{0,1\}}$ $\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_2, r_3+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_2, r_4]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\lozenge$\_transform}$(\lambda)$} \label{algo:calcu_orib_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed to by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c$} $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, r_2,r_3\in\{0,1\}$}{ $r_1\gets \mathtt{H}[\ell, \lambda][\beta]$ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_2,r_3] \gets$ $\frac14\sum_{r_4\in\{0,1\}}$ $\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_2, r_3+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_2, r_4]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\vartriangle$\_transform}$(\lambda)$} \label{algo:calcu_oric_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed to by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c$} $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in 
\{1,2,\dots,\bar{n}_c\}, r_3,r_4\in\{0,1\}$}{ $r_1\gets \mathtt{H}[\ell, \lambda][\beta],\quad r_2\gets \mathtt{R}[\ell, \lambda-1][\beta]$ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_3,r_4] \gets$\\ $\frac14\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_2, r_3+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_2, r_4]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_boundary\_channel$(i)$}} \label{algo:decode_boundary_channel} \KwIn{index $i$ in the last layer $(\lambda = m)$} flag $\gets 1$ \eIf(\Comment{Only decode $U_i$}){$i\le n-2$}{ \eIf(\Comment{$U_i$ is an information bit}){$i\in\mathcal{A}$}{ \For{$\ell \in \{1,2,\dots, L_c\}$, $a\in\{0,1\}$}{ $\prob \gets\frac12\sum_{b\in\{0,1\}}\mathtt{P}[\ell, m][1,a,b]$ $\texttt{PriQue.push}(\ell, a, ``?",\prob)$ } }(\Comment{$U_i$ is a frozen bit}){ flag $\gets 0$ \For{$\ell \in \{1,2,\dots, L_c\}$}{ $\mathtt{R}[\ell, m]\gets \texttt{allocate\_bit}(m, 1)$ $\mathtt{R}[\ell, m][1]\gets$ frozen value of $U_i$ } } }(\Comment{Decode both $U_{n-1}$ and $U_n$.}){ \uIf{$n-1, n\notin \mathcal{A}$}{ flag $\gets 0$ \Comment{$U_{n-1}$ and $U_{n}$ are both frozen bits} \For{$\ell\in\{1,2,\dots, L_c\}$}{ $\mathtt{R}[\ell,m]\gets \texttt{allocate\_bit}(m,2)$ $(\mathtt{R}[\ell,m][1],\mathtt{R}[\ell, m][2])\gets(U_{n-1}, U_{n})$ } }\ElseIf{$n-1\in \mathcal{A}$, $n\notin \mathcal{A}$}{ \Comment{information bit $U_{n-1}$, frozen bit $U_{n}$ } \For{$\ell\in\{1,2,\dots, L_c\}$, $a\in\{0,1\}$}{ $\texttt{PriQue.push}(\ell, a, U_{n},\mathtt{P}[\ell,m][1, a, U_{n}])$ } }\ElseIf{$n-1\notin \mathcal{A}$, $n\in \mathcal{A}$}{ \Comment{frozen bit $U_{n-1}$, information bit $U_{n}$} \For{$\ell\in\{1,2,\dots, L_c\}$, $b\in\{0,1\}$}{ $\texttt{PriQue.push}(\ell, U_{n-1}, b,\mathtt{P}[\ell,m][1, U_{n-1}, b])$ } }\Else(\Comment{$U_{n-1}$ and $U_{n}$ are both information bits}){ \For{$\ell\in\{1,2,\dots, L_c\}$, $a,b\in\{0,1\}$}{ $\texttt{PriQue.push}(\ell, a, b,\mathtt{P}[\ell,m][1, a, b])$ } } } 
\If{\em flag $ = 1$}{ $L_c\gets \min\{L, \texttt{PriQue.size()}\}$ \For{$\ell \in\{1,2,\dots, L_c\}$}{ $(\ell', a, b, \mathtt{score}[\ell])\gets \texttt{PriQue.pop()}$ \For{$\lambda \in\{1,2,\dots, m-1\}$}{ $\mathtt{\bar{P}}[\ell,\lambda]\gets \mathtt{P}[\ell',\lambda]$ $\mathtt{\bar{R}}[\ell,\lambda]\gets \mathtt{R}[\ell',\lambda]$ $\mathtt{\bar{H}}[\ell,\lambda]\gets \mathtt{H}[\ell',\lambda]$ } \eIf{$i < n-1$}{ $\mathtt{\bar{R}}[\ell, m]\gets \texttt{allocate\_bit}(m, 1)$ $\mathtt{\bar{R}}[\ell, m][1]\gets a$ }{ $\mathtt{\bar{R}}[\ell, m]\gets \texttt{allocate\_bit}(m, 2)$ $(\mathtt{\bar{R}}[\ell,m][1],\mathtt{\bar{R}}[\ell, m][2]) \gets (a,b)$ } } $\texttt{PriQue.clear()}$ $\texttt{swap}(\mathtt{\bar{P}}, \mathtt{P})$,\quad $\texttt{swap}(\mathtt{\bar{R}}, \mathtt{R})$,\quad $\texttt{swap}(\mathtt{\bar{H}}, \mathtt{H})$ } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_original\_channel$(\lambda, i)$}} \label{algo:st_decode_ori_channel} \KwIn{$\lambda \in \{1,2, \dots, m\}$ and index $i$ satisfying $1\le i \le 2^{\lambda}-1$} $n_c\gets 2^{\lambda}$ \Comment{$V_{2i-1}^{(2n_c)} = (V_{i}^{(n_c)})^\triangledown$} \texttt{calculate\_$\triangledown$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i-1)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{H}[\ell, \lambda + 1]\gets \mathtt{R}[\ell, \lambda+1]$ } \Comment{$V_{2i}^{(2n_c)} = (V_{i}^{(n_c)})^\lozenge$} \texttt{calculate\_$\lozenge$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{R}[\ell, \lambda]\gets \mathtt{R}[\ell, \lambda+1]$ } \vspace*{.1in} \eIf{$i \le n_c-2$}{ \Comment{Only decode one bit ${X}_{i,\beta}^{(\lambda)}$ for each $\beta$} \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\temppointer \gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 1)$ \For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{ $\beta' \gets \beta + n/(2n_c)$ $\mathtt{R}[\ell, 
\lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \temppointer[\beta]$ $\mathtt{R}[\ell, \lambda][\beta'] \gets \temppointer[\beta]$ } } }{ \Comment{Decode two bits ${X}_{n_c-1,\beta}^{(\lambda)}, {X}_{n_c,\beta}^{(\lambda)}$ for each $\beta$} \Comment{$V_{2i+1}^{(2n_c)} = (V_{i}^{(n_c)})^\vartriangle$} \texttt{calculate\_$\vartriangle$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i+1)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\temppointer \gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 2)$ \For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{ $\beta' \gets \beta + n/(2n_c)$ $\mathtt{R}[\ell, \lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \temppointer[\beta]$ $\mathtt{R}[\ell, \lambda][\beta'] \gets \temppointer[\beta]$ $\mathtt{R}[\ell, \lambda][\beta + n/(n_c)] \gets$\\ $\qquad\mathtt{R}[\ell, \lambda+1][\beta] + \mathtt{R}[\ell, \lambda+1][\beta']$ $\mathtt{R}[\ell, \lambda][\beta' + n/(n_c)] \gets \mathtt{R}[\ell, \lambda+1][\beta']$ } } } $\mathtt{N_B}[\lambda+1]\gets 0$ \Return \end{algorithm} The next lemma shows that the data structures $\mathtt{D}$ and $\mathtt{B}$ are large enough to store the transition probabilities and the decoding results of the intermediate vectors throughout the decoding procedure. \begin{lemma}\label{lemma:stdb_space} Throughout the whole decoding procedure, we have $\mathtt{N_D}[\lambda]\le L$ and $\mathtt{N_B}[\lambda]\le 4L$ for all $1\le \lambda\le m$. The space complexity of the SCL decoder is $O(Ln)$. \end{lemma} \begin{proof} The proof of $\mathtt{N_D}[\lambda]\le L$ is the same as Lemma~\ref{lemma:st_space}, and we do not repeat it. Now we prove $\mathtt{N_B}[\lambda]\le 4L$. 
As we can see from Algorithms~\ref{algo:STDB_decode_channel},\ref{algo:decode_boundary_channel}--\ref{algo:st_decode_ori_channel}, each time we call the function \texttt{decode\_channel$(\lambda, i)$}, the value of $\mathtt{N_B}[\lambda]$ increases by $L_c$ if $i\le 2^\lambda-2$, and it increases by $2L_c$ if $i=2^\lambda-1$. Since the input $i$ in Algorithm~\ref{algo:st_decode_ori_channel} satisfies $i\le 2^\lambda-1$, we have $2i-1<2i\le 2^{\lambda+1}-2$. Therefore, after executing Line~3 of Algorithm~\ref{algo:st_decode_ori_channel}, the value of $\mathtt{N_B}[\lambda+1]$ increases by at most $L$. Similarly, after executing Line~8 of Algorithm~\ref{algo:st_decode_ori_channel}, the value of $\mathtt{N_B}[\lambda+1]$ also increases by at most $L$. If $i=2^\lambda-1$, we will execute Line~24 of Algorithm~\ref{algo:st_decode_ori_channel}. In this case, $2i+1=2^{\lambda+1}-1$, so the value of $\mathtt{N_B}[\lambda+1]$ increases by at most $2L$. Therefore, before we reset $\mathtt{N_B}[\lambda+1]$ to $0$ in Line~35 of Algorithm~\ref{algo:st_decode_ori_channel}, its value is at most $4L$. This proves $\mathtt{N_B}[\lambda]\le 4L$. The proof of the space complexity is the same as Lemma~\ref{lemma:st_space}. \end{proof} In TABLE~\ref{tb:stdb_space}, we list the upper bound of $\mathtt{N_B}[\lambda+1]$ at the starting point and the end of the function $\texttt{decode\_channel}(\lambda+1, j)$. The starting point refers to the moment we call $\texttt{decode\_channel}(\lambda+1, j)$, and the end refers to the moment this function returns. These upper bounds come from the proof of Lemma~\ref{lemma:stdb_space}. 
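The counter accounting behind these bounds can be traced with a short sketch. The function name \texttt{trace\_NB} and the sample list sizes are ours; the three increments mirror the worst case $i = 2^\lambda-1$, where \texttt{decode\_channel} is called with $j = 2i-1$, $j = 2i$, and $j = 2i+1$, bumping $\mathtt{N_B}[\lambda+1]$ by $L_c$, $L_c$, and $2L_c$ respectively.

```python
def trace_NB(L, L_c):
    """Trace N_B[lambda+1] across the three recursive calls (worst case i = 2^lambda - 1).

    Each call invokes allocate_bit(lambda+1, k) once per list element, where k = 1
    for j = 2i-1 and j = 2i (one decoded bit) and k = 2 for j = 2i+1 (two decoded bits).
    """
    N_B = 0
    trace = []
    for label, k in (("j=2i-1", 1), ("j=2i", 1), ("j=2i+1", 2)):
        N_B += k * L_c          # one allocation of size k per list element
        trace.append((label, N_B))
    assert N_B <= 4 * L         # the bound N_B[lambda+1] <= 4L proved in the lemma
    return trace


# With the maximum list size L_c = L = 8, the counter reaches exactly 4L = 32:
print(trace_NB(L=8, L_c=8))   # -> [('j=2i-1', 8), ('j=2i', 16), ('j=2i+1', 32)]
```

Since $L_c \le L$ always holds, the final value never exceeds $4L$, matching the last row of the table.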
\begin{table}[H] \centering \begin{tabular}{|r|l|c|c|} \hline \multicolumn{2}{|c|}{cases} & start & end \\ \hline \multirow{2}*{$1\le i\le 2^{\lambda}-1$} & $j = 2i-1$ & $0$ & $L$ \\ \cline{2-4} ~ & $j = 2i$ & $L$ & $2L$ \\ \hline $i=2^{\lambda}-1$ & $j = 2i+1$ & $2L$ & $4L$ \\ \hline \end{tabular} \caption{The upper bound of $\mathtt{N_B}[\lambda+1]$ at the starting point and the end of the function $\texttt{decode\_channel}(\lambda+1, j)$} \label{tb:stdb_space} \end{table} \begin{proposition} The time complexity of the SCL decoder for standard polar codes based on the DB polar transform is $O(Ln\log(n))$. \end{proposition} \subsection{SCL decoder for ABS polar codes}\label{sect:ABS_decoder} \begin{figure*} \centering \includegraphics[scale = 0.6]{example_dec.pdf} \caption{Recursive decoding of the ABS polar code defined in Fig.~\ref{fig:example_enc}. We put $V_{i}^{(2^\lambda),\ABS}$ in a black block (e.g., $V_1^{(2),\ABS}$) if $2i\notin\mathcal{I}^{(2^{\lambda+1})}$. In this case, \texttt{decode\_channel}$(\lambda, i)$ in Algorithm~\ref{algo:decode_channel} calls $\texttt{decode\_original\_channel}(\lambda,i)$. We put $V_{i}^{(2^\lambda),\ABS}$ in a red block (e.g., $V_2^{(4),\ABS}$) if $2i\in\mathcal{I}^{(2^{\lambda+1})}$. In this case, \texttt{decode\_channel}$(\lambda, i)$ calls $\texttt{decode\_swapped\_channel}(\lambda,i)$. An arrow from $V_{i}^{(2^\lambda),\ABS}$ to $V_{i_1}^{(2^{\lambda_1}),\ABS}$ means that \texttt{decode\_channel}$(\lambda_1, i_1)$ is called in the execution of \texttt{decode\_channel}$(\lambda, i)$. For example, we call \texttt{decode\_channel} with input parameters $(2,1)$, $(2,2)$, and $(2,3)$ in the execution of \texttt{decode\_channel}$(1,1)$. 
} \label{fig:example_dec} \end{figure*} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_channel$(\lambda, i)$} } \label{algo:decode_channel} \KwIn{layer $\lambda\in \{1,2,\dots, m\}$ and index $i\in \{1, 2, \dots, 2^\lambda-1\}$} \uIf{$\lambda = m$}{ \texttt{decode\_boundary\_channel}$(i)$ \Comment{Algorithm~\ref{algo:decode_boundary_channel}} } \ElseIf{$2i\notin \mathcal{I}^{(2^{\lambda + 1})}$}{ \texttt{decode\_original\_channel}$(\lambda, i)$ \Comment{Algorithm~\ref{algo:decode_ori_channel}} }\ElseIf{$2i\in \mathcal{I}^{(2^{\lambda + 1})}$}{ \texttt{decode\_swapped\_channel}$(\lambda, i)$ \Comment{Algorithm~\ref{algo:decode_swp_channel}} } $\mathtt{N_D}[\lambda]\gets 0$ \Return \end{algorithm} In this subsection, we present the new SCL decoder for ABS polar codes. This decoder is based on the DB polar transform in Fig.~\ref{fig:DBpt} and the SDB polar transform in Fig.~\ref{fig:SDBpt}. Since we have the permutation matrices $\mathbf{P}_{2}^{\ABS}, \mathbf{P}_{4}^{\ABS}, \dots, \mathbf{P}_{n}^{\ABS}$ in the ABS polar code construction, we need to replace the recursive relation \eqref{eq:def_intermediate} with \begin{equation}\label{eq:abs_intermediate} \begin{aligned} (X_{1}^{(2^\lambda)}, X_{2}^{(2^\lambda)},\dots, X_{n}^{(2^\lambda)}) = & (X_{1}^{(2^{\lambda+1})}, X_{2}^{(2^{\lambda+1})},\dots, X_{n}^{(2^{\lambda+1})}) \\ & \cdot \big((\mathbf{P}_{2^{\lambda+1}}^{\ABS}(\mathbf{I}_{2^\lambda}\otimes\mathbf{G}_2^{\polar}))\otimes\mathbf{I}_{2^{m-\lambda-1}}\big) \end{aligned} \end{equation} in order to define the intermediate vectors in ABS polar codes. We still use the notation in \eqref{eq:subscript} and \eqref{eq:listdecoded}. The data structures in this subsection are essentially the same as the ones in the previous subsection. There are only two minor differences: \begin{enumerate}[(i)] \item We change the range of the index $s$ in \eqref{eq:B_index} from $1\le s\le 4L$ to $1\le s\le 6L$. 
\item In the integer array $\mathtt{N_B}$, each entry $\mathtt{N_B}[\lambda]$ takes value in $\{0,1,2,\dots,6L\}$ instead of $\{0,1,2,\dots,4L\}$. \end{enumerate} For the SCL decoder presented in this subsection, the fields of the $\ell$th list element are the same as the ones listed in \eqref{eq:stDB_list_field}. The following functions are shared by the decoder in this subsection and the decoders in previous subsections: \begin{enumerate}[(1)] \item \texttt{allocate\_prob} in Algorithm~\ref{algo:st_allocate_prob} \item \texttt{allocate\_bit} in Algorithm~\ref{algo:allocate_bit} \item $\texttt{decode}$ in Algorithm~\ref{algo:ABS_Decoding}. This is the main function of the decoder. \item \texttt{calculate\_$\triangledown$\_transform} in Algorithm~\ref{algo:calcu_oria_proba} \item \texttt{calculate\_$\lozenge$\_transform} in Algorithm~\ref{algo:calcu_orib_proba} \item \texttt{calculate\_$\vartriangle$\_transform} in Algorithm~\ref{algo:calcu_oric_proba} \item \texttt{decode\_boundary\_channel} in Algorithm~\ref{algo:decode_boundary_channel} \end{enumerate} The following functions are specific to this subsection: either they appeared in previous subsections with the same name but with different implementations, or they did not appear in previous subsections at all. \begin{enumerate}[(1)] \item \texttt{decode\_channel} in Algorithm~\ref{algo:decode_channel}. In Section~\ref{sect:ST_decoder_DB}, we also have the function \texttt{decode\_channel} in Algorithm~\ref{algo:STDB_decode_channel}, but the implementations in these two algorithms are different. \item \texttt{decode\_original\_channel} in Algorithm~\ref{algo:decode_ori_channel}. In Section~\ref{sect:ST_decoder_DB}, we also have the function \texttt{decode\_original\_channel} in Algorithm~\ref{algo:st_decode_ori_channel}, but the implementations in these two algorithms are different. \item \texttt{decode\_swapped\_channel} in Algorithm~\ref{algo:decode_swp_channel}. 
This function did not appear in previous subsections. \item \texttt{calculate\_$\blacktriangledown$\_transform} in Algorithm~\ref{algo:calcu_swpa_proba}. This function did not appear in previous subsections. \item \texttt{calculate\_$\blacklozenge$\_transform} in Algorithm~\ref{algo:calcu_swpb_proba}. This function did not appear in previous subsections. \item \texttt{calculate\_$\blacktriangle$\_transform} in Algorithm~\ref{algo:calcu_swpc_proba}. This function did not appear in previous subsections. \end{enumerate} Although this subsection and the previous subsection share the same main function $\texttt{decode}$ in Algorithm~\ref{algo:ABS_Decoding}, the function \texttt{decode\_channel} in Line~8 of Algorithm~\ref{algo:ABS_Decoding} has different implementations in these two subsections. More specifically, the function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:decode_channel} has one more branch than $\texttt{decode\_channel}$ in Algorithm~\ref{algo:STDB_decode_channel}. The additional branch decodes swapped adjacent bits. Algorithm~\ref{algo:decode_ori_channel} and Algorithm~\ref{algo:st_decode_ori_channel} are the implementations of $\texttt{decode\_original\_channel}$ for this subsection and the previous subsection, respectively. The difference between Algorithm~\ref{algo:decode_ori_channel} and Algorithm~\ref{algo:st_decode_ori_channel} is that we calculate the $\triangledown$ transform only when $2(i-1)\notin\mathcal{I}^{(2^{\lambda+1})}$ in Algorithm~\ref{algo:decode_ori_channel}; see Lines~2--6. In contrast, we always calculate the $\triangledown$ transform in Algorithm~\ref{algo:st_decode_ori_channel}; see Lines~2--5. 
The reason behind this difference is given in Lemma~\ref{lemma:recur_ABS}: When $2(i-1)\in\mathcal{I}^{(2^{\lambda+1})}$, we only have $V_{2i-1}^{(2^{\lambda+1}), \ABS} = \big(V_{i-1}^{(2^\lambda),\ABS}\big)^{\blacktriangle}$, but $V_{2i-1}^{(2^{\lambda+1}), \ABS} = \big(V_{i}^{(2^\lambda),\ABS}\big)^{\triangledown}$ does not hold, so we do not calculate the $\triangledown$ transform in this case. In Fig.~\ref{fig:example_dec}, we use the ABS polar code defined in Fig.~\ref{fig:example_enc} as a concrete example to illustrate the recursive structure of the function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:decode_channel}. \begin{lemma} \label{lm:swp6} Suppose that $1\le \lambda \le m$ and $1\le i\le 2^\lambda-1$. Before we call the function $\texttt{decode\_channel}$ in Algorithm~\ref{algo:decode_channel} with input parameters $(\lambda, i)$, the pointer $\mathtt{P}[\ell, \lambda]$ satisfies that \begin{equation}\label{eq:swp_prob} \mathtt{P}[\ell, \lambda][\beta, a, b] = V_{i}^{(2^\lambda)}(\hat{\mathbi o}_{i,\beta}^{(\ell,\lambda)} | a, b) \quad \text{for all~} 1\le \ell\le L_c,~ 1\le \beta \le 2^{m-\lambda} \text{~and~} a,b\in\{0,1\} . \end{equation} After the function $\texttt{decode\_channel}(\lambda, i)$ in Algorithm~\ref{algo:decode_channel} returns, the pointer $\mathtt{R}[\ell, \lambda]$ satisfies that \begin{equation} \label{eq:swp_1bit} \mathtt{R}[\ell, \lambda][\beta] = \hat{x}_{i,\beta}^{(\ell, \lambda)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda}. \end{equation} Moreover, if $i=2^\lambda-1$, then the pointer $\mathtt{R}[\ell, \lambda]$ further satisfies that \begin{equation} \label{eq:swp_2bit} \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda}] = \hat{x}_{i+1,\beta}^{(\ell, \lambda)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda}. \end{equation} \end{lemma} \begin{proof} The proof of \eqref{eq:swp_prob} is the same as that of \eqref{eq:1prob}. 
Here we only prove \eqref{eq:swp_1bit}--\eqref{eq:swp_2bit} by induction. The proof of the base case $\lambda=m$ relies on the analysis of Algorithm~\ref{algo:decode_boundary_channel}, which was already done in the proof of Lemma~\ref{lm:566}. For the inductive step, we assume that \eqref{eq:swp_1bit}--\eqref{eq:swp_2bit} hold for $\lambda+1$ and prove them for $\lambda$. This requires us to analyze Algorithm~\ref{algo:decode_ori_channel} for $2i\notin \mathcal{I}^{(2^{\lambda + 1})}$ and analyze Algorithm~\ref{algo:decode_swp_channel} for $2i\in \mathcal{I}^{(2^{\lambda + 1})}$. Algorithm~\ref{algo:decode_ori_channel} and Algorithm~\ref{algo:st_decode_ori_channel} are essentially the same. Since we have already analyzed Algorithm~\ref{algo:st_decode_ori_channel} in the proof of Lemma~\ref{lm:566}, we omit the analysis of Algorithm~\ref{algo:decode_ori_channel} here. We will focus on the analysis of Algorithm~\ref{algo:decode_swp_channel} for the rest of this proof. By the induction hypothesis, after executing Lines~1--5 of Algorithm~\ref{algo:decode_swp_channel}, we have $$ \mathtt{H}[\ell, \lambda+1][\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ After executing Lines~7--10, 17, and 27, we have $$ \temppointer[\beta] = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. $$ After executing Lines~12--13, we have \begin{equation} \label{eq:part1} \mathtt{R}[\ell, \lambda+1][\beta] = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{equation} In Line~1, we set $n_c=2^\lambda$. We again divide the discussion into two cases. {\bf Case (1) $i\le n_c-2$:} In this case, we only need to prove \eqref{eq:swp_1bit}. Since $n_c=2^\lambda$, we have $n/(2n_c)=2^{m-\lambda-1}$. 
Therefore, Lines~21--22 of Algorithm~\ref{algo:decode_swp_channel} become \begin{equation} \label{eq:swp_R1} \begin{aligned} \mathtt{R}[\ell, \lambda][\beta] = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} , \quad \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda-1}] = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{aligned} \end{equation} Equations \eqref{eq:abs_intermediate} and \eqref{eq:subscript} together imply that if $2i\in \mathcal{I}^{(2^{\lambda + 1})}$, then \begin{align} & X_{i,\beta}^{(\lambda)} = X_{2i-1,\beta}^{(\lambda+1)} + X_{2i+1,\beta}^{(\lambda+1)}, \quad X_{i,\beta+2^{m-\lambda-1}}^{(\lambda)} = X_{2i+1,\beta}^{(\lambda+1)} \quad \text{for all~} 1\le \beta\le 2^{m-\lambda-1} , \label{eq:rdX1} \\ & X_{i+1,\beta}^{(\lambda)} = X_{2i,\beta}^{(\lambda+1)} + X_{2i+2,\beta}^{(\lambda+1)}, \quad X_{i+1,\beta+2^{m-\lambda-1}}^{(\lambda)} = X_{2i+2,\beta}^{(\lambda+1)} \quad \text{for all~} 1\le \beta\le 2^{m-\lambda-1}. \label{eq:rdX2} \end{align} \eqref{eq:rdX1} further implies that \begin{align*} \hat{x}_{i,\beta}^{(\ell, \lambda)} = \hat{x}_{2i-1,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)}, \quad \hat{x}_{i,\beta+2^{m-\lambda-1}}^{(\ell, \lambda)} = \hat{x}_{2i+1,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{align*} Combining this with \eqref{eq:swp_R1}, we complete the proof of \eqref{eq:swp_1bit} for Case (1). {\bf Case (2) $i= n_c-1$:} In this case, we need to prove both \eqref{eq:swp_1bit} and \eqref{eq:swp_2bit}. The proof of \eqref{eq:swp_1bit} is exactly the same as Case (1). To prove \eqref{eq:swp_2bit}, we observe that if $i= n_c-1$, then $2i+1=2n_c-1=2^{\lambda+1}-1$. 
Then by the induction hypothesis, after executing Lines~12--13, we have not only \eqref{eq:part1} but also \begin{align*} \mathtt{R}[\ell, \lambda+1][\beta+2^{m-\lambda-1}] = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \quad \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{align*} Therefore, Lines~33--35 become \begin{equation} \label{eq:swp_R2} \begin{aligned} \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda}] = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)}, \quad \mathtt{R}[\ell, \lambda][\beta+2^{m-\lambda-1}+2^{m-\lambda}] = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{aligned} \end{equation} Equation \eqref{eq:rdX2} implies that \begin{align*} \hat{x}_{i+1,\beta}^{(\ell, \lambda)} = \hat{x}_{2i,\beta}^{(\ell, \lambda+1)} + \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)}, \quad \hat{x}_{i+1,\beta+2^{m-\lambda-1}}^{(\ell, \lambda)} = \hat{x}_{2i+2,\beta}^{(\ell, \lambda+1)} \\ \text{for all~} 1\le \ell\le L_c \text{~and~} 1\le \beta\le 2^{m-\lambda-1}. \end{align*} Combining this with \eqref{eq:swp_R2}, we complete the proof of \eqref{eq:swp_2bit}. \end{proof} The next lemma shows that the data structures $\mathtt{D}$ and $\mathtt{B}$ are large enough to store the transition probabilities and the decoding results of the intermediate vectors throughout the decoding procedure. \begin{lemma}\label{lemma:abs_space} Throughout the whole decoding procedure, we have $\mathtt{N_D}[\lambda]\le L$ and $\mathtt{N_B}[\lambda]\le 6L$ for all $1\le \lambda\le m$. The space complexity of the SCL decoder is $O(Ln)$. \end{lemma} \begin{proof} In TABLE~\ref{tb:abs_space}, we use the method in the proof of Lemma~\ref{lemma:stdb_space} to obtain the upper bound of $\mathtt{N_B}[\lambda+1]$ at the starting point and the end of the function $\texttt{decode\_channel}(\lambda+1, j)$. 
The upper bounds in TABLE~\ref{tb:abs_space} immediately imply $\mathtt{N_B}[\lambda]\le 6L$. The proofs of $\mathtt{N_D}[\lambda]\le L$ and of the space complexity bound are the same as those in Lemma~\ref{lemma:st_space}, and we do not repeat them here.
\end{proof}
\begin{table}[H]
\centering
\begin{tabular}{|c|r|l|c|c|}
\hline
\multicolumn{3}{|c|}{cases} & start & end \\
\hline
\multirow{4}*{$2i\in\mathcal{I}^{(2^{\lambda+1})}$} & \multirow{2}*{$1\le i\le 2^{\lambda}-1$} & $j = 2i-1$ & $0$ & $L$ \\
\cline{3-5}
~ & ~ & $j = 2i$ & $L$ & $2L$ \\
\cline{2-5}
~ & $1\le i< 2^{\lambda}-1$ & \multirow{2}*{$j = 2i+1$} & $2L$ & $3L$ \\
\cline{2-2}\cline{4-5}
~ & $i = 2^{\lambda}-1$ & ~ & $2L$ & $4L$ \\
\hline
\multirow{2}*{$2(i-1)\in\mathcal{I}^{(2^{\lambda+1})}$} & $1 \le i\le 2^{\lambda}-1$ & $j = 2i$ & $3L$ & $4L$ \\
\cline{2-5}
~ & $i = 2^{\lambda}-1$ & $j = 2i+1$ & $4L$ & $6L$ \\
\hline
\multirow{2}*{$2(i-1)\notin\mathcal{I}^{(2^{\lambda+1})}, 2i\notin\mathcal{I}^{(2^{\lambda+1})}$} & \multirow{2}*{$1 \le i\le 2^{\lambda}-1$} & $j = 2i-1$ & $0$ & $L$ \\
\cline{3-5}
~ & ~ & $j = 2i$ & $L$ & $2L$ \\
\cline{2-5}
~ & $i=2^{\lambda}-1$ & $j = 2i+1$ & $2L$ & $4L$ \\
\hline
\end{tabular}
\caption{The upper bounds of $\mathtt{N_B}[\lambda+1]$ at the starting point and at the end of the function $\texttt{decode\_channel}(\lambda+1, j)$.}
\label{tb:abs_space}
\end{table}
\begin{proposition}
The decoding time complexity of ABS polar codes is $O(Ln\log(n))$.
\end{proposition} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{decode\_original\_channel$(\lambda, i)$}} \label{algo:decode_ori_channel} \KwIn{$\lambda \in \{1,2, \dots, m\}$ and index $i$ satisfying $1\le i \le 2^{\lambda}-1$ and $2i\notin \mathcal{I}^{(2^{\lambda + 1})}$} $n_c\gets 2^{\lambda}$ \If(\Comment{$V_{2i-1}^{(2n_c)} = (V_{i}^{(n_c)})^\triangledown$}){\em $2(i-1)\notin \mathcal{I}^{(2^{\lambda + 1})}$}{ \texttt{calculate\_$\triangledown$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i-1)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{H}[\ell, \lambda + 1]\gets \mathtt{R}[\ell, \lambda+1]$ } } \Comment{$V_{2i}^{(2n_c)} = (V_{i}^{(n_c)})^\lozenge$} \texttt{calculate\_$\lozenge$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{R}[\ell, \lambda]\gets \mathtt{R}[\ell, \lambda+1]$ } \vspace*{.1in} \eIf{$i \le n_c-2$}{ \Comment{Only decode one bit ${X}_{i,\beta}^{(\lambda)}$ for each $\beta$} \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\temppointer \gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 1)$ \For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{ $\beta' \gets \beta + n/(2n_c)$ $\mathtt{R}[\ell, \lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \temppointer[\beta]$ $\mathtt{R}[\ell, \lambda][\beta'] \gets \temppointer[\beta]$ } } }{ \Comment{Decode two bits ${X}_{n_c-1,\beta}^{(\lambda)}, {X}_{n_c,\beta}^{(\lambda)}$ for each $\beta$} \Comment{$V_{2i+1}^{(2n_c)} = (V_{i}^{(n_c)})^\vartriangle$} \texttt{calculate\_$\vartriangle$\_transform}$(\lambda+1)$ \texttt{decode\_channel}$(\lambda+1, 2i+1)$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\temppointer \gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 2)$ \For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{ $\beta' \gets \beta + n/(2n_c)$ $\mathtt{R}[\ell, \lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \temppointer[\beta]$ 
$\mathtt{R}[\ell, \lambda][\beta'] \gets \temppointer[\beta]$
$\mathtt{R}[\ell, \lambda][\beta + n/(n_c)] \gets$\\
$\qquad\mathtt{R}[\ell, \lambda+1][\beta] + \mathtt{R}[\ell, \lambda+1][\beta']$
$\mathtt{R}[\ell, \lambda][\beta' + n/(n_c)] \gets \mathtt{R}[\ell, \lambda+1][\beta']$
}
}
}
$\mathtt{N_B}[\lambda+1]\gets 0$
\Return
\end{algorithm}
\begin{algorithm}[ht]
\DontPrintSemicolon
\caption{\texttt{decode\_swapped\_channel$(\lambda, i)$}}
\label{algo:decode_swp_channel}
\KwIn{$\lambda \in \{1,2, \dots, m\}$ and index $i$ satisfying $1\le i \le 2^{\lambda}-1$ and $2i\in \mathcal{I}^{(2^{\lambda + 1})}$}
$n_c\gets 2^{\lambda}$
\Comment{$V_{2i-1}^{(2n_c)} = (V_{i}^{(n_c)})^\blacktriangledown$}
\texttt{calculate\_$\blacktriangledown$\_transform}$(\lambda+1)$
\texttt{decode\_channel}$(\lambda+1, 2i-1)$
\For{$\ell\in\{1, 2, \dots, L_c\}$}{
$\mathtt{H}[\ell, \lambda + 1]\gets \mathtt{R}[\ell, \lambda+1]$
}
\vspace*{.1in}
\Comment{$V_{2i}^{(2n_c)} = (V_{i}^{(n_c)})^\blacklozenge$}
\texttt{calculate\_$\blacklozenge$\_transform}$(\lambda+1)$
\texttt{decode\_channel}$(\lambda+1, 2i)$
\For{$\ell\in\{1, 2, \dots, L_c\}$}{
$\mathtt{R}[\ell, \lambda]\gets \mathtt{R}[\ell, \lambda+1]$
}
\vspace*{.1in}
\Comment{$V_{2i+1}^{(2n_c)} = (V_{i}^{(n_c)})^\blacktriangle$}
\texttt{calculate\_$\blacktriangle$\_transform}$(\lambda+1)$
\texttt{decode\_channel}$(\lambda+1, 2i+1)$
\eIf{$i\le n_c-2$}{
\Comment{Only decode one bit ${X}_{i,\beta}^{(\lambda)}$ for each $\beta$}
\For{$\ell\in\{1, 2, \dots, L_c\}$}{
$\temppointer \gets \mathtt{R}[\ell, \lambda]$
$\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 1)$
\For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{
$\beta' \gets \beta + n/(2n_c)$
$\mathtt{R}[\ell, \lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \mathtt{R}[\ell, \lambda+1][\beta]$
$\mathtt{R}[\ell, \lambda][\beta'] \gets \mathtt{R}[\ell, \lambda+1][\beta]$
}
$\mathtt{H}[\ell, \lambda+1] \gets \temppointer$
}
}{
\Comment{Decode two bits ${X}_{n_c-1,
\beta}^{(\lambda)}, {X}_{n_c,\beta}^{(\lambda)}$ for each $\beta$} \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\temppointer \gets \mathtt{R}[\ell, \lambda]$ $\mathtt{R}[\ell, \lambda] \gets \texttt{allocate\_bit}(\lambda, 2)$ \For{\em $\beta \in \{1,2,\dots,n/(2n_c)\}$}{ $\beta' \gets \beta + n/(2n_c)$ $\mathtt{R}[\ell, \lambda][\beta] \gets \mathtt{H}[\ell, \lambda+1][\beta] + \mathtt{R}[\ell,\lambda+1][\beta]$ $\mathtt{R}[\ell, \lambda][\beta'] \gets \mathtt{R}[\ell,\lambda+1][\beta]$ $\mathtt{R}[\ell, \lambda][\beta + n/(n_c)] \gets$\\ $\qquad\temppointer[\beta] + \mathtt{R}[\ell, \lambda+1][\beta']$ $\mathtt{R}[\ell, \lambda][\beta' + n/(n_c)] \gets \mathtt{R}[\ell, \lambda+1][\beta']$ } } $\mathtt{N_B}[\lambda+1]\gets 0$ } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\blacktriangledown$\_transform}$(\lambda)$} \label{algo:calcu_swpa_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c $ } $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, r_1,r_2\in\{0,1\}$}{ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_1, r_2] \gets$ $\frac14\sum_{r_3, r_4\in\{0,1\}}$ $\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_3, r_2+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_3, r_4]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\blacklozenge$\_transform}$(\lambda)$} \label{algo:calcu_swpb_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c $ } $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, r_2,r_3\in\{0,1\}$}{ $r_1\gets \mathtt{H}[\ell, \lambda][\beta]$ $\beta' \gets \beta + 
\bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_2,r_3] \gets$ $\frac14\sum_{r_4\in\{0,1\}}$ $\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_3, r_2+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_3, r_4]$ } } \Return \end{algorithm} \begin{algorithm}[ht] \DontPrintSemicolon \caption{\texttt{calculate\_$\blacktriangle$\_transform}$(\lambda)$} \label{algo:calcu_swpc_proba} \KwIn{layer $2\le \lambda \le m$} \KwOut{Update the entries pointed by $\mathtt{P}[\ell, \lambda]$, $1\le \ell\le L_c $ } $\bar{n}_c \gets 2^{m-\lambda}$ \For{$\ell\in\{1, 2, \dots, L_c\}$}{ $\mathtt{P}[\ell,\lambda]\gets \texttt{allocate\_prob}(\lambda)$ \For{\em $\beta \in \{1,2,\dots,\bar{n}_c\}, r_3,r_4\in\{0,1\}$}{ $r_1\gets \mathtt{H}[\ell, \lambda][\beta],\quad r_2\gets \mathtt{R}[\ell, \lambda-1][\beta]$ $\beta' \gets \beta + \bar{n}_c$ $\mathtt{P}[\ell, \lambda][\beta,r_3,r_4] \gets$\\ $\frac14\mathtt{P}[\ell, \lambda-1][\beta, r_1+r_3, r_2+r_4]\mathtt{P}[\ell, \lambda-1][\beta', r_3, r_4]$ } } \Return \end{algorithm} \clearpage \section{Simulation results} \label{sect:simu} \subsection{Scaling exponent over binary erasure channels} \label{sect:scaling_BEC} In this subsection, we empirically calculate the scaling exponents of ABS polar codes and standard polar codes over a BEC with erasure probability $0.5$. When the original channel $W$ is a general BMS channel, we can only obtain an approximation of the transition probabilities of the adjacent-bits-channels through quantization, as discussed in Section~\ref{sect:quantization}. However, when the original channel $W$ is a BEC, we are able to calculate the exact parameters of the adjacent-bits-channels. To that end, we introduce a class of channels called double-bits-erasure-channels (DBEC). The input alphabet of a DBEC is $\{0,1\}^2$, and the output alphabet is $\{0,1,?\}^3$. 
For a given input $(u_1,u_2)\in\{0,1\}^2$, the output of the DBEC can take only the following five values:
\begin{itemize}
\item $(u_1,u_1+u_2,u_2)$ with probability $p$,
\item $(u_1,?,?)$ with probability $q$,
\item $(?,u_1+u_2,?)$ with probability $r$,
\item $(?,?,u_2)$ with probability $s$,
\item $(?,?,?)$ with probability $t$.
\end{itemize}
Here $p$ is the probability of preserving all the information in the input, $q,r,s$ are the probabilities of preserving exactly one bit of information, and $t$ is the probability of erasing all the information. Such a channel is denoted by DBEC$(p,q,r,s,t)$, where the parameters satisfy $p+q+r+s+t=1$. Note that the DBEC has been studied in the literature under other names; for example, the authors of \cite{Duursma22} call it the tetrahedral erasure channel.

One can show that if the original channel $W$ is a BEC, then all the adjacent-bits-channels in the ABS polar code construction are DBECs. More precisely, using \eqref{eq:v_abs_init}, we can show that if $W$ is a BEC with erasure probability $\epsilon$, then
$$
V_{1}^{(2),\ABS} = \text{DBEC}( (1-\epsilon)^2,0,(1-\epsilon)\epsilon,(1-\epsilon)\epsilon,\epsilon^2 ) .
$$
Moreover, if an adjacent-bits-channel satisfies $V=\text{DBEC}(p,q,r,s,t)$, then
\begin{align*}
V^{\triangledown}= & \text{DBEC}((p+q)^2, 0, (p+q)(r+s+t), (r+s+t)(p+q), (r+s+t)^2) , \\
V^{\lozenge}= & \text{DBEC}(p^2+2rp+2sp, 2q-q^2+2pt, 2rs, r^2+s^2, 2t(r+s)+t^2) , \\
V^{\vartriangle}= & \text{DBEC}((p+r+s)^2, 0, (p+r+s)(q+t), (q+t)(p+r+s), (q+t)^2) , \\
V^{\blacktriangledown}= & \text{DBEC}(p^2, q^2+2pq, r^2+2pr, s^2+2ps, 2t-t^2+2qr+2qs+2rs) , \\
V^{\blacklozenge}= & \text{DBEC}(p^2+2pr+2ps, r^2+s^2, 2rs, 2q-q^2+2pt, t^2+2rt+2st) , \\
V^{\blacktriangle}= & \text{DBEC}(2p-p^2+2qr+2qs+2rs, q^2+2tq, r^2+2rt, s^2+2st, t^2) .
\end{align*}
One can check that each parameter vector on the right-hand side again sums to $1$. Combining this with Lemma~\ref{lemma:recur_ABS}, we can explicitly calculate the parameters of all the adjacent-bits-channels in the ABS polar code construction when the original channel $W$ is a BEC.
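The six DBEC parameter updates above are simple polynomial maps on the tuple $(p,q,r,s,t)$, so the channel evolution can be tracked exactly in floating point. The following Python sketch is our illustration only (the function names are ours and do not come from the authors' released code); the last parameter of the $\blacklozenge$ transform is written as $t^2+2rt+2st$, which makes the five parameters sum to one.

```python
# A DBEC(p, q, r, s, t) is represented as a 5-tuple of probabilities.

def dbec_init(eps):
    """V_1^{(2),ABS} when the underlying channel W is BEC(eps)."""
    return ((1 - eps) ** 2, 0.0, (1 - eps) * eps, (1 - eps) * eps, eps ** 2)

def tri_down(ch):        # the \triangledown transform
    p, q, r, s, t = ch
    a, b = p + q, r + s + t
    return (a * a, 0.0, a * b, a * b, b * b)

def lozenge(ch):         # the \lozenge transform
    p, q, r, s, t = ch
    return (p * p + 2 * r * p + 2 * s * p, 2 * q - q * q + 2 * p * t,
            2 * r * s, r * r + s * s, 2 * t * (r + s) + t * t)

def tri_up(ch):          # the \vartriangle transform
    p, q, r, s, t = ch
    a, b = p + r + s, q + t
    return (a * a, 0.0, a * b, a * b, b * b)

def black_tri_down(ch):  # the \blacktriangledown transform
    p, q, r, s, t = ch
    return (p * p, q * q + 2 * p * q, r * r + 2 * p * r, s * s + 2 * p * s,
            2 * t - t * t + 2 * q * r + 2 * q * s + 2 * r * s)

def black_lozenge(ch):   # the \blacklozenge transform
    p, q, r, s, t = ch
    # last parameter written as t^2 + 2rt + 2st so the tuple sums to one
    return (p * p + 2 * p * r + 2 * p * s, r * r + s * s, 2 * r * s,
            2 * q - q * q + 2 * p * t, t * t + 2 * r * t + 2 * s * t)

def black_tri_up(ch):    # the \blacktriangle transform
    p, q, r, s, t = ch
    return (2 * p - p * p + 2 * q * r + 2 * q * s + 2 * r * s,
            q * q + 2 * t * q, r * r + 2 * r * t, s * s + 2 * s * t, t * t)
```

Starting from \texttt{dbec\_init(0.5)} and applying any sequence of these maps yields nonnegative parameters summing to one, which is a convenient sanity check on the formulas above.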
After that, we use \eqref{eq:v_to_w} to calculate the erasure probability of each bit-channel: if $V_{i}^{(n),\ABS}=\text{DBEC}(p,q,r,s,t)$, then $W_{i}^{(n),\ABS}$ is an erasure channel with erasure probability $r+s+t$, and $W_{i+1}^{(n),\ABS}$ is an erasure channel with erasure probability $q+t$.

Let $W$ be a BEC with erasure probability $0.5$. For $n\in\{2^6, 2^7, \dots, 2^{20}\}$, we define
\begin{align*}
& f_{\polar}(n):=\frac{1}{n} |\{i:1\le i\le n,~~ 0.01\le I(W_i^{(n)})\le 0.99 \}| , \\
& f_{\ABS}(n):=\frac{1}{n} |\{i:1\le i\le n,~~ 0.01\le I(W_i^{(n),\ABS})\le 0.99 \}| .
\end{align*}
By definition, $f_{\polar}(n)$ is the fraction of ``unpolarized'' bit-channels in the length-$n$ standard polar code constructed for the BEC $W$, and $f_{\ABS}(n)$ is the fraction of ``unpolarized'' bit-channels in the length-$n$ ABS polar code constructed for the BEC $W$. A bit-channel is said to be unpolarized if its capacity is between $0.01$ and $0.99$. The values of $f_{\polar}(n)$ and $f_{\ABS}(n)$ for $n\in\{2^6, 2^7, \dots, 2^{20}\}$ are listed in TABLE~\ref{fract}.
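For the standard polar code column, $f_{\polar}(n)$ can be recomputed from scratch: over a BEC, a bit-channel with erasure probability $\epsilon$ polarizes into two bit-channels with erasure probabilities $2\epsilon-\epsilon^2$ and $\epsilon^2$. The short Python sketch below is ours, for illustration; reproducing the $f_{\ABS}$ column would additionally require tracking the DBEC parameters and the swapping set $\mathcal{I}^{(n)}$.

```python
def bec_bit_channel_erasures(eps, m):
    """Erasure probabilities of the n = 2^m bit-channels of a standard
    polar code over BEC(eps), via the recursion
    eps -> (2*eps - eps**2, eps**2)."""
    probs = [eps]
    for _ in range(m):
        probs = [e2 for e in probs for e2 in (2 * e - e * e, e * e)]
    return probs

def f_polar(eps, m, lo=0.01, hi=0.99):
    """Fraction of bit-channels whose capacity 1 - e lies in [lo, hi]."""
    probs = bec_bit_channel_erasures(eps, m)
    return sum(lo <= 1 - e <= hi for e in probs) / len(probs)
```

For instance, \texttt{f\_polar(0.5, 6)} should match the $n=64$ entry of TABLE~\ref{fract}.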
\begin{table}
\centering
\begin{tabular}{r|cc}
\hline
$n$ & $f_{\polar}(n)$ & $f_{\ABS}(n)$ \\
\hline
64 & 0.53125000 & 0.50000000 \\
128 & 0.43750000 & 0.42187500 \\
256 & 0.37500000 & 0.34375000 \\
512 & 0.30078125 & 0.27343750 \\
1024 & 0.25390625 & 0.22070312 \\
2048 & 0.20605469 & 0.18164062 \\
4096 & 0.17041016 & 0.15136719 \\
8192 & 0.14208984 & 0.12329102 \\
16384 & 0.11755371 & 0.09936523 \\
32768 & 0.09674072 & 0.08087158 \\
65536 & 0.07995605 & 0.06542969 \\
131072 & 0.06613159 & 0.05333710 \\
262144 & 0.05499268 & 0.04324722 \\
524288 & 0.04529572 & 0.03502846 \\
1048576 & 0.03742218 & 0.02853012 \\
\hline
\end{tabular}
\caption{The fractions of ``unpolarized'' bit-channels in standard polar codes and ABS polar codes constructed for a BEC with erasure probability $\epsilon = 0.5$.}
\label{fract}
\end{table}

In order to estimate the scaling exponents, we approximate $f_{\polar}(n)$ as $f_{\polar}(n)\approx c_1 n^{-\gamma_1}$, and we approximate $f_{\ABS}(n)$ as $f_{\ABS}(n)\approx c_2 n^{-\gamma_2}$. Taking logarithms on both sides and running linear regression, we obtain that $f_{\polar}(n) \approx 1.67 n^{-0.274}$ and $f_{\ABS}(n) \approx 1.76 n^{-0.297}$. Therefore, the scaling exponent for standard polar codes is $\mu_{\polar}\approx 1/0.274=3.65$, and the scaling exponent for ABS polar codes is $\mu_{\ABS}\approx 1/0.297=3.37$. The above empirical calculations of scaling exponents confirm that the polarization of ABS polar codes is indeed faster than that of standard polar codes. An interesting problem for future research is to obtain provable and tight upper bounds on the scaling exponent of ABS polar codes. Another related question is to analyze the code distance of ABS polar codes and compare it with that of standard polar codes.
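The regression step can be reproduced directly from the numbers in TABLE~\ref{fract}. The Python sketch below is ours; for brevity it uses only every other row of the $f_{\polar}$ column, fits $\log f = \log c - \gamma \log n$ by ordinary least squares, and returns the scaling-exponent estimate $\mu = 1/\gamma$.

```python
import math

# (n, f_polar(n)) pairs copied from TABLE "fract" (BEC with erasure prob. 0.5).
data = [(64, 0.53125), (256, 0.375), (1024, 0.25390625),
        (4096, 0.17041016), (16384, 0.11755371), (65536, 0.07995605),
        (262144, 0.05499268), (1048576, 0.03742218)]

def fit_scaling_exponent(points):
    """Least-squares fit of log f = log c - gamma*log n; returns mu = 1/gamma."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(f) for _, f in points]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    gamma = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return 1.0 / gamma

mu_polar = fit_scaling_exponent(data)  # close to the reported 3.65
```

Fitting the $f_{\ABS}$ column in the same way gives the smaller exponent reported above, reflecting the faster polarization of ABS polar codes.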
\subsection{Simulation results over binary-input AWGN channels} \begin{figure} \centering \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{256_77.pdf} \caption{length 256, dimension 77} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{256_128.pdf} \caption{length 256, dimension 128} \end{subfigure} \vspace*{0.1in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{256_179.pdf} \caption{length 256, dimension 179} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{512_154.pdf} \caption{length 512, dimension 154} \end{subfigure} \vspace*{0.1in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{512_256.pdf} \caption{length 512, dimension 256} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{512_358.pdf} \caption{length 512, dimension 358} \end{subfigure} \caption{Comparison between ABS polar codes and standard polar codes over the binary-input AWGN channel. The legend ``ABS" refers to ABS polar codes, and ``ST" refers to standard polar codes. ``CRC-0" means that we do not use CRC. The nonzero CRC length is chosen from the set $\{4,8,12,16,20,24\}$ to minimize the decoding error probability. The parameter $L$ is the list size. For standard polar codes, we always choose $L=32$. 
For ABS polar codes, we test two different list sizes $L=20$ and $L=32$.} \label{fig:cp1} \end{figure} \begin{figure} \centering \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{1024_307.pdf} \caption{length 1024, dimension 307} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{1024_512.pdf} \caption{length 1024, dimension 512} \end{subfigure} \vspace*{0.1in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{1024_717.pdf} \caption{length 1024, dimension 717} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{2048_614.pdf} \caption{length 2048, dimension 614} \end{subfigure} \vspace*{0.1in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{2048_1024.pdf} \caption{length 2048, dimension 1024} \end{subfigure} ~\hspace*{0.2in} \begin{subfigure}{0.41\linewidth} \centering \includegraphics[width=\linewidth]{2048_1434.pdf} \caption{length 2048, dimension 1434} \end{subfigure} \caption{Comparison between ABS polar codes and standard polar codes over the binary-input AWGN channel. The legend ``ABS" refers to ABS polar codes, and ``ST" refers to standard polar codes. ``CRC-0" means that we do not use CRC. The nonzero CRC length is chosen from the set $\{4,8,12,16,20,24\}$ to minimize the decoding error probability. The parameter $L$ is the list size. For standard polar codes, we always choose $L=32$. 
For ABS polar codes, we test two different list sizes $L=20$ and $L=32$.}
\label{fig:cp2}
\end{figure}

\begin{table}
\centering
\begin{tabular}{c|cccccc}
\hline
$(n,k)$ & $(256,77)$ & $(256,128)$ & $(256,179)$ & $(512,154)$ & $(512,256)$ & $(512,358)$ \\
\hline
ST, $L=32$ & 0.963ms & 1.41ms & 1.73ms & 1.94ms & 2.80ms & 3.54ms \\
ABS, $L=20$ & 0.816ms & 1.24ms & 1.47ms & 1.86ms & 2.66ms & 3.10ms \\
ABS, $L=32$ & 1.29ms & 1.99ms & 2.37ms & 2.93ms & 4.36ms & 5.13ms \\
\hline
$(n,k)$ & $(1024,307)$ & $(1024,512)$ & $(1024,717)$ & $(2048,614)$ & $(2048,1024)$ & $(2048,1434)$ \\
\hline
ST, $L=32$ & 4.21ms & 5.75ms & 7.15ms & 9.05ms & 11.7ms & 14.6ms \\
ABS, $L=20$ & 4.32ms & 5.90ms & 6.67ms & 10.6ms & 12.6ms & 14.0ms \\
ABS, $L=32$ & 6.63ms & 9.41ms & 10.8ms & 16.7ms & 20.1ms & 23.2ms \\
\hline
\end{tabular}
\caption{Comparison of the decoding time over the binary-input AWGN channel with $E_b/N_0=2\dB$. The rows starting with $(n,k)$ list the code lengths and code dimensions we have tested. The row starting with ``ST, $L=32$'' lists the decoding time of the CRC-aided SCL decoder for standard polar codes with list size $32$. The row starting with ``ABS, $L=20$'' lists the decoding time of the CRC-aided SCL decoder for ABS polar codes with list size $20$. The row starting with ``ABS, $L=32$'' lists the decoding time of the CRC-aided SCL decoder for ABS polar codes with list size $32$. The time unit ``ms'' is $10^{-3}$s.}
\label{tb:time}
\end{table}

We conduct extensive simulations to compare the performance of the ABS polar codes and the standard polar codes over the binary-input AWGN channel with various choices of parameters. We have tested the performance for $4$ different choices of code length: $256,512,1024,2048$. For each choice of code length, we test $3$ different code rates: $0.3$, $0.5$ and $0.7$. The comparison of decoding error probability is given in Fig.~\ref{fig:cp1} and Fig.~\ref{fig:cp2}.
Specifically, Fig.~\ref{fig:cp1} contains the plots for code lengths $256$ and $512$; Fig.~\ref{fig:cp2} contains the plots for code lengths $1024$ and $2048$. The comparison of decoding time is given in Table~\ref{tb:time}.

In Fig.~\ref{fig:cp1} and Fig.~\ref{fig:cp2}, for each choice of code length and code dimension, we compare the decoding error probability of the following $6$ decoders: (1) SCL decoder for standard polar codes with list size $32$ and no CRC; (2) SCL decoder for ABS polar codes with list size $20$ and no CRC; (3) SCL decoder for ABS polar codes with list size $32$ and no CRC; (4) SCL decoder for standard polar codes with list size $32$ and optimal CRC length; (5) SCL decoder for ABS polar codes with list size $20$ and optimal CRC length; (6) SCL decoder for ABS polar codes with list size $32$ and optimal CRC length. The optimal CRC length is chosen from the set $\{4,8,12,16,20,24\}$ to minimize the decoding error probability. For standard polar codes, we use the classic SCL decoder presented in Section~\ref{sect:polar_decoding}, {\bf not} the new SCL decoder presented in Section~\ref{sect:ST_decoder_DB}. For ABS polar codes, we use the SCL decoder presented in Section~\ref{sect:ABS_decoder}. Note that in a previous arXiv version and the ISIT version \cite{Li2022ISIT} of this paper, we used a different choice of CRC length. More specifically, for cases (4)--(6) above, we used CRC length $8$ for all choices of code length and code dimension. In contrast, in this version we use the optimal CRC length, which varies with the code length and the code dimension.

From Fig.~\ref{fig:cp1} and Fig.~\ref{fig:cp2}, we can see that the performance of ABS polar codes is consistently better than that of standard polar codes if we set the list size to $32$ for the CRC-aided SCL decoders of both codes.
More specifically, for all $12$ choices of $(n,k)$, the improvement of ABS polar codes over standard polar codes ranges from $0.15\dB$ to $0.3\dB$. Even if we reduce the list size of ABS polar codes to $20$ and maintain the list size of standard polar codes at $32$, ABS polar codes still demonstrate better performance for most choices of parameters, and the improvement over standard polar codes is up to $0.15\dB$ in this case.

Next, let us compare the performance of ABS polar codes and standard polar codes when neither of them uses CRC. When there is no CRC, the performance of ABS polar codes with list size $20$ is more or less the same as that of ABS polar codes with list size $32$. Again, ABS polar codes consistently outperform standard polar codes for all $12$ choices of $(n,k)$. This time the improvement over standard polar codes is up to $1.1\dB$.

In Table~\ref{tb:time}, we only compare the decoding time of the SCL decoders with CRC length $8$. From Table~\ref{tb:time}, we can see that the decoding time of the SCL decoder for ABS polar codes with list size $20$ is more or less the same as that of the SCL decoder for standard polar codes with list size $32$. More precisely, for $8$ out of $12$ choices of $(n,k)$, the SCL decoder for ABS polar codes with list size $20$ runs faster. For the other $4$ choices of $(n,k)$, the SCL decoder for standard polar codes with list size $32$ runs faster. If we set the list size to $32$ for both the standard polar codes and the ABS polar codes, then Table~\ref{tb:time} tells us that the decoding time of ABS polar codes is longer than that of standard polar codes by roughly $60\%$.

In conclusion, when we use list size $32$ for the CRC-aided SCL decoders of both codes, ABS polar codes consistently outperform standard polar codes by $0.15\dB$--$0.3\dB$, but the decoding time of the ABS polar decoder is longer than that of the standard polar decoder by roughly $60\%$.
If we use list size $20$ for ABS polar codes and keep the list size at $32$ for standard polar codes, then the decoding time is more or less the same for these two codes, and ABS polar codes still outperform standard polar codes for most choices of parameters. In this case, the improvement over standard polar codes is up to $0.15\dB$. As a final remark, the implementations of all the algorithms in this paper are available at \texttt{https://github.com/PlumJelly/ABS-Polar}.

\section*{Acknowledgement}
In the implementation of our decoding algorithm, we have learned a lot from the GitHub project \texttt{https://github.com/kshabunov/ecclab} maintained by Kirill Shabunov. Shabunov's GitHub project mainly presents the implementation of the Reed-Muller decoder proposed in \cite{Dumer06}. Due to the similarity between (ABS) polar codes and Reed-Muller codes, some of the accelerating techniques for Reed-Muller decoders can also be used to speed up (ABS) polar decoders.

\appendices
\section{The proof of Lemma~\ref{lemma:recur_ST_DB}}
\label{ap:lm1}
Let $(U_1,\dots,U_n),(X_1,\dots,X_n)$ and $(Y_1,\dots,Y_n)$ be the random vectors defined in Fig.~\ref{fig:bit_channels_polar}. Define a new vector $(\widetilde{U}_1,\dots,\widetilde{U}_n)$ as follows:
$$
\widetilde{U}_{2i-1} = U_{2i-1}+U_{2i} \text{~and~} \widetilde{U}_{2i}=U_{2i} \text{~for all~} 1\le i\le n/2.
$$
Since $\mathbf{G}_n^{\polar}= \mathbf{G}_{n/2}^{\polar} \otimes \mathbf{G}_2^{\polar}$, we have
\begin{align*}
& (X_1,X_3,X_5,\dots,X_{n-1})=(\widetilde{U}_1,\widetilde{U}_3,\widetilde{U}_5,\dots,\widetilde{U}_{n-1}) \mathbf{G}_{n/2}^{\polar}, \\
& (X_2,X_4,X_6,\dots,X_{n})=(\widetilde{U}_2,\widetilde{U}_4,\widetilde{U}_6,\dots,\widetilde{U}_{n}) \mathbf{G}_{n/2}^{\polar}.
\end{align*} Therefore, the mapping from $\widetilde{U}_{2i-1},\widetilde{U}_{2i+1}$ to $\widetilde{U}_1,\widetilde{U}_3,\dots,\widetilde{U}_{2i-3}, Y_1,Y_3,\dots,Y_{n-1}$ is $V_i^{(n/2)}$, and the channel mapping from $\widetilde{U}_{2i},\widetilde{U}_{2i+2}$ to $\widetilde{U}_2,\widetilde{U}_4,\dots,\widetilde{U}_{2i-2}, Y_2,Y_4,\dots,Y_{n}$ is also $V_i^{(n/2)}$. Moreover, the two random vectors $(\widetilde{U}_1,\widetilde{U}_3,\dots,\widetilde{U}_{n-1},Y_1,Y_3,\dots,Y_{n-1})$ and $(\widetilde{U}_2,\widetilde{U}_4,\dots,\widetilde{U}_{n},Y_2,Y_4,\dots,Y_{n})$ are independent. As a consequence, \begin{align*} & V_{2i-1}^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i-2}|u_{2i-1},u_{2i}) \\ = & \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i-2}|U_{2i-1},U_{2i}}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i-2}|u_{2i-1},u_{2i}) \\ = & \frac{1}{4} \sum_{u_{2i+1},u_{2i+2}\in\{0,1\}} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i-2}|U_{2i-1},U_{2i},U_{2i+1},U_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} u_1,u_2,\dots,u_{2i-2}|u_{2i-1},u_{2i},u_{2i+1},u_{2i+2}) \\ \overset{(a)}{=} & \frac{1}{4} \sum_{u_{2i+1},u_{2i+2}\in\{0,1\}} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,\widetilde{U}_1,\widetilde{U}_2,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i-1},\widetilde{U}_{2i},\widetilde{U}_{2i+1},\widetilde{U}_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} \widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i-1},\widetilde{u}_{2i},\widetilde{u}_{2i+1},\widetilde{u}_{2i+2}) \\ = & \frac{1}{4} \sum_{u_{2i+1},u_{2i+2}\in\{0,1\}} \Big( \mathbb{P}_{Y_1,Y_3,\dots,Y_{n-1},\widetilde{U}_1,\widetilde{U}_3,\dots,\widetilde{U}_{2i-3}|\widetilde{U}_{2i-1},\widetilde{U}_{2i+1}}(y_1,y_3,\dots,y_{n-1}, \\ & \hspace*{3.2in} \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot 
\mathbb{P}_{Y_2,Y_4,\dots,Y_n,\widetilde{U}_2,\widetilde{U}_4,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i},\widetilde{U}_{2i+2}}(y_2,y_4,\dots,y_n, \\ & \hspace*{3.2in} \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \Big) \\ = & \frac{1}{4} \sum_{u_{2i+1},u_{2i+2}\in\{0,1\}} \Big( V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \Big) \\ = & \frac{1}{4} \sum_{u_{2i+1},u_{2i+2}\in\{0,1\}} \Big( V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|u_{2i-1}+u_{2i},u_{2i+1}+u_{2i+2}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|u_{2i},u_{2i+2}) \Big) \\ = & (V_i^{(n/2)})^\triangledown (y_1,y_2,\dots,y_n,\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2}|u_{2i-1},u_{2i}), \end{align*} where $\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i+2}$ in equality $(a)$ are defined as $\widetilde{u}_{2j-1}=u_{2j-1}+u_{2j}$ and $\widetilde{u}_{2j}=u_{2j}$ for $1\le j\le i+1$. Finally, by noting that there is a one-to-one mapping between $y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i-2}$ in the first line and $y_1,y_2,\dots,y_n,\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2}$ in the last line, we conclude that $V_{2i-1}^{(n)} = (V_i^{(n/2)})^\triangledown$. The proofs of $V_{2i}^{(n)} = (V_i^{(n/2)})^\lozenge$ and $V_{2i+1}^{(n)} = (V_i^{(n/2)})^\vartriangle$ are similar. We include them here for the sake of completeness. 
\begin{align*} & V_{2i}^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i-1}|u_{2i},u_{2i+1}) \\ = & \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i-1}|U_{2i},U_{2i+1}}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i-1}|u_{2i},u_{2i+1}) \\ = & \frac{1}{4} \sum_{u_{2i+2}\in\{0,1\}} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i-2}|U_{2i-1},U_{2i},U_{2i+1},U_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} u_1,u_2,\dots,u_{2i-2}|u_{2i-1},u_{2i},u_{2i+1},u_{2i+2}) \\ \overset{(a)}{=} & \frac{1}{4} \sum_{u_{2i+2}\in\{0,1\}} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,\widetilde{U}_1,\widetilde{U}_2,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i-1},\widetilde{U}_{2i},\widetilde{U}_{2i+1},\widetilde{U}_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} \widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i-1},\widetilde{u}_{2i},\widetilde{u}_{2i+1},\widetilde{u}_{2i+2}) \\ = & \frac{1}{4} \sum_{u_{2i+2}\in\{0,1\}} \Big( \mathbb{P}_{Y_1,Y_3,\dots,Y_{n-1},\widetilde{U}_1,\widetilde{U}_3,\dots,\widetilde{U}_{2i-3}|\widetilde{U}_{2i-1},\widetilde{U}_{2i+1}}(y_1,y_3,\dots,y_{n-1}, \\ & \hspace*{3.2in} \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot \mathbb{P}_{Y_2,Y_4,\dots,Y_n,\widetilde{U}_2,\widetilde{U}_4,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i},\widetilde{U}_{2i+2}}(y_2,y_4,\dots,y_n, \\ & \hspace*{3.2in} \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \Big) \\ = & \frac{1}{4} \sum_{u_{2i+2}\in\{0,1\}} \Big( V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \Big) \\ = & \frac{1}{4} \sum_{u_{2i+2}\in\{0,1\}} \Big( V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, 
\widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|u_{2i-1}+u_{2i},u_{2i+1}+u_{2i+2}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|u_{2i},u_{2i+2}) \Big) \\ = & (V_i^{(n/2)})^\lozenge (y_1,y_2,\dots,y_n,\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2},u_{2i-1}|u_{2i},u_{2i+1}), \end{align*} where $\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i+2}$ in equality $(a)$ are defined the same way as above. This proves $V_{2i}^{(n)} = (V_i^{(n/2)})^\lozenge$. \begin{align*} & V_{2i+1}^{(n)}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i}|u_{2i+1},u_{2i+2}) \\ = & \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i}|U_{2i+1},U_{2i+2}}(y_1,y_2,\dots,y_n,u_1,u_2,\dots,u_{2i}|u_{2i+1},u_{2i+2}) \\ = & \frac{1}{4} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,U_1,U_2,\dots,U_{2i-2}|U_{2i-1},U_{2i},U_{2i+1},U_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} u_1,u_2,\dots,u_{2i-2}|u_{2i-1},u_{2i},u_{2i+1},u_{2i+2}) \\ \overset{(a)}{=} & \frac{1}{4} \mathbb{P}_{Y_1,Y_2,\dots,Y_n,\widetilde{U}_1,\widetilde{U}_2,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i-1},\widetilde{U}_{2i},\widetilde{U}_{2i+1},\widetilde{U}_{2i+2}}(y_1,y_2,\dots,y_n, \\ & \hspace*{2.8in} \widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i-1},\widetilde{u}_{2i},\widetilde{u}_{2i+1},\widetilde{u}_{2i+2}) \\ = & \frac{1}{4} \mathbb{P}_{Y_1,Y_3,\dots,Y_{n-1},\widetilde{U}_1,\widetilde{U}_3,\dots,\widetilde{U}_{2i-3}|\widetilde{U}_{2i-1},\widetilde{U}_{2i+1}}(y_1,y_3,\dots,y_{n-1}, \\ & \hspace*{3.2in} \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot \mathbb{P}_{Y_2,Y_4,\dots,Y_n,\widetilde{U}_2,\widetilde{U}_4,\dots,\widetilde{U}_{2i-2}|\widetilde{U}_{2i},\widetilde{U}_{2i+2}}(y_2,y_4,\dots,y_n, \\ & \hspace*{3.2in} \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \\ = & \frac{1}{4} 
V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|\widetilde{u}_{2i-1},\widetilde{u}_{2i+1}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|\widetilde{u}_{2i},\widetilde{u}_{2i+2}) \\ = & \frac{1}{4} V_i^{(n/2)}(y_1,y_3,\dots,y_{n-1}, \widetilde{u}_1,\widetilde{u}_3,\dots,\widetilde{u}_{2i-3}|u_{2i-1}+u_{2i},u_{2i+1}+u_{2i+2}) \\ & \hspace*{1.2in} \cdot V_i^{(n/2)} (y_2,y_4,\dots,y_n, \widetilde{u}_2,\widetilde{u}_4,\dots,\widetilde{u}_{2i-2}|u_{2i},u_{2i+2}) \\ = & (V_i^{(n/2)})^\vartriangle (y_1,y_2,\dots,y_n,\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i-2},u_{2i-1},u_{2i}|u_{2i+1},u_{2i+2}), \end{align*} where $\widetilde{u}_1,\widetilde{u}_2,\dots,\widetilde{u}_{2i+2}$ in equality $(a)$ are defined the same way as above. This proves $V_{2i+1}^{(n)} = (V_i^{(n/2)})^\vartriangle$ and completes the proof of Lemma~\ref{lemma:recur_ST_DB}. \qed \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} Maria Reguera \cite{MR}, and then Maria Reguera and Christoph Thiele \cite{RT}, disproved the Muckenhoupt--Wheeden conjecture, which required that the Hilbert transform map $L^1(Mw)$ into $L^{1,\infty}(w)$. It was suggested in P\'erez's paper \cite{P} that such a counterexample should exist; \cite{P} also contains several very interesting positive results, where $Mw$ is replaced by a slightly bigger maximal function, in particular by $M^2w$ (which is equivalent to a certain Orlicz maximal function). Here we strengthen the Reguera and Reguera--Thiele results by disproving the so-called $A_1$ conjecture (which also seems to be rather old and due to Muckenhoupt). The reader can get acquainted with the best positive result to date on the $A_1$ conjecture in the paper \cite{LOP}. The $A_1$ conjecture stated that the Hilbert transform maps $L^1(w)$ to $L^{1,\infty}(w)$ with norm bounded by a constant times $[w]_{A_1}$ (the $A_1$ ``norm'' of $w$). Recall that $[w]_{A_1}:=\sup_x\frac{Mw(x)}{w(x)}$. Therefore, the $A_1$ conjecture is weaker than the Muckenhoupt--Wheeden conjecture, and, hence, it is more difficult to disprove. In fact, in \cite{MR}, \cite{RT} the $A_1$ norm of the weight is uncontrolled, while we need to construct a rather ``smooth'' $w$ to build our counterexample. The $A_1$ conjecture is also called the weak Muckenhoupt--Wheeden conjecture. We prove that the linear estimate in the weak Muckenhoupt--Wheeden conjecture is impossible, and, moreover, that the growth of the weak norm of the martingale transform and of the Hilbert transform from $L^1(w)$ into $L^{1,\infty}(w)$ is at least $c\, [w]_{A_1}\log^{\frac15 -\epsilon} [w]_{A_1}$. The paper \cite{LOP} gives an estimate from above for such a norm: it is $\le C\, [w]_{A_1}\log [w]_{A_1}$. We believe that this latter estimate might be sharp and that our estimates from below can be improved.
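In symbols: writing $\|T\|_{L^1(w)\rightarrow L^{1,\infty}(w)}$ for the weak norm in question, our lower bound and the upper bound of \cite{LOP} combine into
$$
c\, Q\,\log^{\frac15-\epsilon} Q\ \le\ \sup_{[w]_{A_1}\le Q}\|T\|_{L^1(w)\rightarrow L^{1,\infty}(w)}\ \le\ C\, Q\,\log Q
$$
for large $Q$, where $T$ is the martingale transform or the Hilbert transform.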
The plan of the paper: first we repeat the result of \cite{RVaVo}, where the exact Bellman function for the unweighted weak estimate of the martingale transform has been constructed. Then we show the logarithmic blow-up for the weighted estimate of the martingale transform in the end-point case $w\in A_1$. Then we adapt this result to obtain the same speed of blow-up for the Hilbert transform. \medskip \section{Unweighted weak type of $0$ shift} \label{unweighted} Here we review the work \cite{RVaVo}, where the Bellman function and the extremizers were constructed for the unweighted martingale transform. The unweighted problem is much easier than the weighted problem that we consider in the current article. However, a glance at the simpler problem helps us to set up the more difficult one and to understand the difficulties. So we start with the unweighted martingale transform, and briefly recall the setup and some of the results of \cite{RVaVo}. \bigskip We are on $I_0:=[0,1]$. As always, $D$ denotes the dyadic lattice. We consider the operator $$ \varphi\rightarrow \sum_{I\subseteq I_0, I\in D} \epsilon_I (\varphi, h_I) h_I\,, $$ where $-1\le \epsilon_I\le 1$. Notice that the sum does not contain the constant term. Put $$ F:=\langle |\varphi|\rangle_{I}\,,\,f:=\langle \varphi\rangle_{I}\,, $$ and introduce the following function: $$ B(F,f, \lambda):= \sup\,\frac1{|I|}|\{x\in I: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}|\,, $$ where the $\sup$ is taken over all $-1\le\epsilon_J\le 1, J\in D,\, J\subseteq I$, and over all $\varphi\in L^1(I)$ such that $\langle |\varphi|\rangle_{I}=F\,,\,\langle \varphi\rangle_{I}=f$. Here $h_I$ is the $L^2(\mathbb{R})$-normalized Haar function of the cube (interval) $I$, and $|\cdot |$ denotes Lebesgue measure.
Recall that $$ h_I(x):=\begin{cases} \frac1{\sqrt{|I|}}\,,\, x\in I_+\\ -\frac1{\sqrt{|I|}}\,,\, x\in I_-\,.\end{cases} $$ The function $B$ is defined in a convex domain $\Omega\subset \mathbb{R}^3$: $\Omega:=\{(F,f,\lambda)\in \mathbb{R}^3: |f| \le F\}$. \medskip \noindent {\bf Remark.} The function $B$ should not be indexed by $I$ because it does not depend on $I$. We will use this soon. \subsection{The main inequality} \label{MI} \begin{thm} \label{tuda} Let $P, P_+,P_-\in \Omega$, $P=(F,f,\lambda)$, $P_+=(F+\alpha, f+\beta, \lambda +\beta)$, $P_-=(F-\alpha, f-\beta, \lambda -\beta)$. Then \begin{equation} \label{mi1} B(P)-\frac12(B(P_+)+B(P_-))\ge 0\,. \end{equation} At the same time, if $P, P_+,P_-\in \Omega$, $P=(F,f,\lambda)$, $P_+=(F+\alpha, f+\beta, \lambda -\beta)$, $P_-=(F-\alpha, f-\beta, \lambda +\beta)$, then \begin{equation} \label{mi2} B(P)-\frac12(B(P_+)+B(P_-))\ge 0\,. \end{equation} \end{thm} \begin{proof} Fix $P, P_+,P_-\in \Omega$, $P=(F,f,\lambda)$, $P_+=(F+\alpha, f+\beta, \lambda +\beta)$, $P_-=(F-\alpha, f-\beta, \lambda -\beta)$. Let $\varphi_+, \varphi_-$ be functions giving the supremum in $B(P_+), B(P_-)$ respectively up to a small number $\eta>0$. Using the remark above, we may assume that $\varphi_+$ lives on $I_+$ and $\varphi_-$ on $I_-$. Consider $$ \varphi(x):=\begin{cases} \varphi_+(x)\,,\, x\in I_+\\ \varphi_-(x)\,,\, x\in I_-\,.\end{cases} $$ Notice that then \begin{equation} \label{beta1} (\varphi, h_I)\cdot\frac1{\sqrt{|I|}} = \beta\,. \end{equation} Then it is easy to see that \begin{equation} \label{av1} \langle | \varphi| \rangle_I = F=P_1, \,\,\,\langle \varphi \rangle_I =f=P_2\,.
\end{equation} Notice that for $x\in I_+$, using \eqref{beta1}, we get for $\epsilon_I =-1$ $$ \frac1{|I|}|\{x\in I_+: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}|= \frac1{|I|}|\{x\in I_+: \sum_{J\subseteq I_+, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda+\beta\}| $$ $$ =\frac1{2|I_+|}|\{x\in I_+: \sum_{J\subseteq I_+, J\in D} \epsilon_J(\varphi_+, h_J) h_J(x)>P_{+,3}\}|\ge \frac12 B(P_+)-\eta\,. $$ Similarly, for $x\in I_-$, using \eqref{beta1}, we get for $\epsilon_I =-1$ $$ \frac1{|I|}|\{x\in I_-: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}|= \frac1{|I|}|\{x\in I_-: \sum_{J\subseteq I_-, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda-\beta\}| $$ $$ =\frac1{2|I_-|}|\{x\in I_-: \sum_{J\subseteq I_-, J\in D} \epsilon_J(\varphi_-, h_J) h_J(x)>P_{-,3}\}|\ge \frac12 B(P_-)-\eta\,. $$ Combining the two left hand sides, we obtain for $\epsilon_I=-1$ $$ \frac1{|I|}|\{x\in I: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}|\ge \frac 12 (B(P_+)+B(P_-))-2\eta\,. $$ Let us now use the simple information \eqref{av1}: if we take the supremum in the left hand side over all functions $\varphi$ such that $\langle |\varphi | \rangle_I =F, \langle \varphi \rangle_I =f $, and the supremum over all $\epsilon_J\in [-1,1]$ (only $\epsilon_I=-1$ stays fixed), we get a quantity smaller than or equal to the one where we have the supremum over all functions $\varphi$ such that $\langle |\varphi | \rangle_I =F, \langle \varphi \rangle_I =f $, and an unrestricted supremum over all $\epsilon_J\in [-1,1]$. The latter quantity is of course $B(F, f, \lambda)$. So we proved \eqref{mi1}. To prove \eqref{mi2} we repeat verbatim the same reasoning, only keeping now $\epsilon_I=1$. We are done. \end{proof} Denote $$ T\varphi(x):=\sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)\,. $$ It is a dyadic singular operator (actually, it is a family of operators enumerated by sequences of $\epsilon_I\in [-1, 1]$).
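For example, if $\epsilon_J\equiv 1$ for all $J$, then for $\varphi\in L^2(I_0)$ the Haar expansion $\varphi=\langle\varphi\rangle_{I_0}+\sum_{J\subseteq I_0, J\in D}(\varphi, h_J)h_J$ gives
$$
T\varphi = \varphi-\langle\varphi\rangle_{I_0}\,,
$$
i.e., $T$ is the projection complementary to averaging; general choices of $\epsilon_J\in[-1,1]$ produce the whole family of martingale transforms (with the constant term always dropped).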
To prove that it is of weak type is the same as to prove \begin{equation} \label{wt1} B(F,f,\lambda)\le \frac{C\,F}{\lambda}, \,\lambda >0\,. \end{equation} Our $B$ satisfies \eqref{mi1}, \eqref{mi2}. We consider these as concavity conditions. Let us make the change of variables, $(F,f, \lambda)\rightarrow (F, y_1, y_2)$: $$ y_1:=\frac12 (\lambda+ f)\,,\,\, y_2:=\frac12 (\lambda-f)\,. $$ Denote $$ M(F, y_1, y_2) := B(F, y_1-y_2, y_1+y_2)= B(F, f, \lambda)\,. $$ In terms of the function $M$, Theorem \ref{tuda} reads as follows: \begin{thm} \label{tudaM} The function $M$ is defined in the domain $G:=\{(F,y_1,y_2): |y_1 -y_2|\le F\}$; for each fixed $y_2$ the function $(F, y_1)\mapsto M(F, y_1, y_2)$ is concave, and for each fixed $y_1$ the function $(F, y_2)\mapsto M(F, y_1, y_2)$ is concave. \end{thm} Abusing notation, we will denote by the same letter $B$ (correspondingly, $M$) {\it any} function satisfying \eqref{mi1}, \eqref{mi2} (correspondingly, satisfying Theorem \ref{tudaM}). \bigskip It is not difficult to obtain one more condition, the so-called {\it obstacle condition}: \begin{lm} \label{obstLemma} \begin{equation} \label{obst} \text{If}\,\, \lambda < F\,\,\text{then}\,\, B(F,f, \lambda)=1. \end{equation} \end{lm} \begin{proof} Let us first consider the case $f=F$, which can be viewed as the case of non-negative functions $\varphi$. Fix $\lambda_0$ and $\epsilon>0$, and let $\varphi$ be a non-negative function on $I=[0,1]$ such that it looks like $(\lambda_0 +\epsilon)\delta_{0}$, and $F=f:=\int_0^1 \varphi \, dx = \lambda_0 +\epsilon>\lambda_0$. Namely, $\varphi$ is zero on a set of measure $1-\tau$, and an almost $\delta$ function times $\lambda_0+\epsilon$ on a small interval of measure $\tau$.
As $\varphi$ looks like a multiple of the delta function, it can be written as $\lambda_0 +\epsilon +H$, where $H$ is a combination of Haar functions; moreover, $-H$ is a martingale transform of $\varphi$, and $-H=\lambda_0+\epsilon>\lambda_0$ on a set of measure $1-\tau$ with an arbitrarily small $\tau$ (the smallness is independent of $\lambda_0$ and $\epsilon$). Then the example of $\varphi$ shows that $$ B(\lambda_0+\epsilon, \lambda_0+\epsilon, \lambda_0)\ge 1-\tau\,. $$ We have to consider the case of $f<F$ as well. If $f>\lambda_0$, the construction is the same. Namely, consider $\Phi:=\varphi +aS$, where $S$ is a Haar function with very small support in a small dyadic interval $\ell$ (say, of measure smaller than $\tau$), normalized in $L^1$; let $\ell$ be contained in the set where $\varphi$ is small ($\varphi$ is small essentially on almost the whole interval, because it looks like a positive multiple of the delta function), and notice that $\int S\,dx=0$ and $\int |S|\,dx=1$. Then the example of $\Phi$ shows that $$ B(\int_0^1 |\Phi|\,dx, \lambda_0+\epsilon, \lambda_0)\ge 1-2\tau\,. $$ By varying $a$ from $0$ to $\infty$ we can reach $\int_0^1 |\Phi|\,dx = F$ for any $F\ge \lambda_0+\epsilon$. Therefore, letting first $\tau\rightarrow 0$ and then $\epsilon\rightarrow 0$, we prove \eqref{obst}. We are left to consider the case $F>\lambda_0\ge f$. Choose $\epsilon$ and $\tau$ much smaller than, say, $\frac1{10}(F-f)$. Consider the same function $\varphi$ as above. Let $H$ be the first Haar function, namely $H= -1$ on $I_-=[0, 1/2]$ and $H= 1$ on $I_+=[1/2,1]$. Let us now consider $\psi:= \varphi+c_1\cdot H-c_2$, $c_1>c_2>0$. Then $$ \langle \psi\rangle = \lambda_0+\epsilon -c_2,\,\,\, \langle|\psi|\rangle = \lambda_0+\epsilon+ c_1 +O(\tau)\,. $$ It is now easy to choose $c_1, c_2$ such that the first average above is equal to a given number $f$, and the second one is equal to a given $F$, $F>f$. Now on the set $E$ of measure $1-\tau$ we have $\psi= c_1H -c_2$.
On the other hand, $\psi= \lambda_0+\epsilon +c_1 H- c_2 + H_1$, where $H_1$ is a combination of Haar functions, each of which is orthogonal to $H$. Hence, $-H_1 = \lambda_0 +\epsilon >\lambda_0$ on the set $E$ of measure $1-\tau$. But $-H_1$ is a martingale transform of $\psi$ in our sense. In fact, we just consider the Haar decomposition of $\psi$, forget the constant term, and multiply all Haar coefficients by $-1$, except the first one, which gets multiplied by $0$. We obtain that $B(F, f, \lambda_0) \ge 1-\tau$. We are done. \end{proof} \bigskip \begin{thm} \label{obratno} Let $B\ge 0$ satisfy \eqref{mi1}, \eqref{mi2}. (Equivalently, let the corresponding $M\ge 0$ be concave in $(F, y_1)$ and in $(F, y_2)$.) Let $B$ satisfy \eqref{wt1}, or, equivalently, \begin{equation} \label{wt2} M(F,y_1,y_2)\le \frac{C\,F}{y_1+y_2}, \,y_1 +y_2>0\,. \end{equation} Let $B(F, f, \lambda)=1$ if $\lambda<0$. Then we have the weak type estimate with constant at most $C$ for all $T$ uniformly in $\epsilon_I\in [-1,1]$. \end{thm} \begin{proof} Just reverse the argument of Theorem \ref{tuda}. \end{proof} \bigskip \noindent{\bf Remark.} Notice that the Bellman function $B$ defined above satisfies by definition $B(F, f, \lambda)= B(F, -f, \lambda)$. Therefore, Lemma \ref{obstLemma} claims in particular that $B(F, f, \lambda)=1$ if $\lambda<0$ (and we saw that it also satisfies \eqref{mi1}, \eqref{mi2}). \medskip Here is the Bellman function for the unweighted weak type inequality for the martingale transform, see \cite{RVaVo}. \begin{thm} \begin{equation} \label{full} B(F,f,\lambda)= \begin{cases} 1\,, \,\, \text{if} \,\, \lambda\le F\,, \\ 1-\frac{(\lambda-F)^2}{\lambda^2-f^2}\,,\,\,\text{if}\,\, \lambda > F\,.\end{cases} \end{equation} \end{thm} \medskip In \cite{RVaVo} this formula was found by the use of the Monge--Amp\`ere equation.
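As a sanity check, the formula \eqref{full} indeed gives the weak type bound \eqref{wt1} with $C=2$: for $\lambda>F$, using $|f|\le F<\lambda$,
$$
B(F,f,\lambda)= 1-\frac{(\lambda-F)^2}{\lambda^2-f^2}\le 1-\frac{(\lambda-F)^2}{\lambda^2-F^2}= 1-\frac{\lambda-F}{\lambda+F}=\frac{2F}{\lambda+F}\le \frac{2F}{\lambda}\,,
$$
while for $0<\lambda\le F$ one trivially has $B(F,f,\lambda)=1\le \frac{F}{\lambda}$.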
As always in stochastic optimal control related problems (and this is one of them, see the explanation in \cite{NTVot}), one needs to prove that the solution of the Bellman equation is actually the Bellman function. This is called the ``verification theorem'', and it is proved in \cite{RVaVo} as well. \section{Weighted estimate. $A_1$ case} \label{a1} We keep the notation almost unchanged. Now $w$ will not be an arbitrary weight but a dyadic $A_1$ weight, meaning that $$ \forall I \in D\,\, \langle w\rangle_I \le Q \inf_I w\,. $$ The best $Q$ is called $[w]_{A_1}$. Now $$ F=\langle |\varphi|w\rangle_I\,,\,\, f=\langle \varphi \rangle_I\,,\,\, \lambda=\lambda\,,\,\, w=\langle w\rangle_I\,,\,\, m=\inf_I w \,. $$ We are in the domain \begin{equation} \label{O5} \Omega:=\{(F, w, m, f, \lambda): F\ge |f|\,m,\,\,\, m\le w\le Q\,m\}\,. \end{equation} Introduce \begin{equation} \label{B5} \mathbb{B}(F,w,m,f, \lambda):= \sup\,\frac1{|I|}w\{x\in I: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}\,, \end{equation} where the $\sup$ is taken over all $\epsilon_J, |\epsilon_J|\le 1, J\in D,\, J\subseteq I$, over all $\varphi\in L^1(I,w\,dx)$ such that $\langle |\varphi|\,w\rangle_{I}=F\,,\,\langle \varphi\rangle_{I}=f$, and over all dyadic $A_1$ weights $w$ with $\langle w\rangle_I=w$, $m\le \inf_I w$, and $\forall I \in D\,\, \langle w\rangle_I \le Q \inf_I w$, $Q$ being the best such constant. In other words, $Q:=[w]_{A_1}^{dyadic}$. Recall that $h_I$ is the $L^2(\mathbb{R})$-normalized Haar function of the cube (interval) $I$, and $|\cdot |$ denotes Lebesgue measure. \subsection{Homogeneity} \label{hom} By definition, it is clear that $$ s\mathbb{B}(F/s,w/s,m/s,f, \lambda)= \mathbb{B}(F,w,m,f, \lambda)\,, $$ $$ \mathbb{B}(tF,w,m,tf, t\lambda)=\mathbb{B}(F,w,m,f, \lambda)\,. $$ Choosing $s=m$ and $t=\lambda^{-1}$, we can see that \begin{equation} \label{Bn} \mathbb{B}(F,w,m,f, \lambda) = m B(\frac{F}{m\lambda}, \frac{w}{m}, \frac{f}{\lambda}) \end{equation} for a certain function $B$.
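For the reader's convenience, here is the substitution chain behind \eqref{Bn}: applying the first homogeneity with $s=m$ and then the second one with $t=\lambda^{-1}$ gives
$$
\mathbb{B}(F,w,m,f,\lambda)= m\,\mathbb{B}\Big(\frac{F}{m},\frac{w}{m},1,f,\lambda\Big)= m\,\mathbb{B}\Big(\frac{F}{m\lambda},\frac{w}{m},1,\frac{f}{\lambda},1\Big)\,,
$$
so \eqref{Bn} holds with $B(\alpha,\beta,\gamma):=\mathbb{B}(\alpha,\beta,1,\gamma,1)$.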
Introducing new variables $\alpha= \frac{F}{m\lambda}, \beta=\frac{w}{m}, \gamma=\frac{f}{\lambda}$, we see that $B$ is defined in \begin{equation} \label{G3} G:=\{(\alpha, \beta, \gamma): |\gamma|\le \alpha, 1\le \beta\le Q\}\,. \end{equation} \subsection{The main inequality} \begin{thm} \label{tudaweight} Let $P, P_+,P_-\in \Omega$, $P=(F,w,\min(m_+, m_-), f,\lambda)$, $P_+=(F+\alpha, w+\gamma, m_+, f+\beta, \lambda +\beta)$, $P_-=(F-\alpha, w-\gamma, m_-, f-\beta, \lambda -\beta)$. Then \begin{equation} \label{mi11} \mathbb{B}(P)-\frac12(\mathbb{B}(P_+)+\mathbb{B}(P_-))\ge 0\,. \end{equation} At the same time, if $P, P_+,P_-\in \Omega$, $P=(F,w, \min(m_+, m_-),f, \lambda)$, $P_+=(F+\alpha, w+\gamma, m_+,f+\beta, \lambda -\beta)$, $P_-=(F-\alpha, w-\gamma, m_-,f-\beta, \lambda +\beta)$, then \begin{equation} \label{mi21} \mathbb{B}(P)-\frac12(\mathbb{B}(P_+)+\mathbb{B}(P_-))\ge 0\,. \end{equation} In particular, with fixed $m$, and with all points being inside $\Omega$, we get $$ \mathbb{B}(F, w, m, f, \lambda) -\frac14 (\mathbb{B}(F-dF, w-dw, m, f-d\lambda, \lambda-d\lambda) +\mathbb{B}(F-dF, w-dw, m, f+d\lambda, \lambda-d\lambda) + $$ \begin{equation} \label{4conc} \mathbb{B}(F+dF, w+dw, m, f-d\lambda, \lambda+d\lambda) +\mathbb{B}(F+dF, w+dw, m, f+d\lambda, \lambda+d\lambda) )\ge 0\,. \end{equation} \end{thm} \bigskip \noindent{\bf Remark.} 1) The differential notations $dF, dw, d\lambda$ just mean small numbers. 2) In \eqref{4conc} we lose a bit of information (in comparison to \eqref{mi11}, \eqref{mi21}), but it is exactly \eqref{4conc} that we are going to use in the future. \bigskip \begin{proof} Fix $P, P_+,P_-\in \Omega$. Let $\varphi_+, \varphi_-$, $w_+, w_-$ be functions and weights giving the supremum in $\mathbb{B}(P_+), \mathbb{B}(P_-)$ respectively up to a small number $\eta>0$. Using the fact that $\mathbb{B}$ does not depend on $I$, we may assume that $\varphi_+, w_+$ live on $I_+$ and $\varphi_-, w_-$ on $I_-$.
Consider $$ \varphi(x):=\begin{cases} \varphi_+(x)\,,\, x\in I_+\\ \varphi_-(x)\,,\, x\in I_-\,,\end{cases} $$ $$ \omega(x):=\begin{cases} w_+(x)\,,\, x\in I_+\\ w_-(x)\,,\, x\in I_-\,.\end{cases} $$ Notice that then \begin{equation} \label{beta11} (\varphi, h_I)\cdot\frac1{\sqrt{|I|}} = \beta\,. \end{equation} Then it is easy to see that \begin{equation} \label{av11} \langle | \varphi| \omega\rangle_I = F=P_1, \,\,\,\langle \varphi \rangle_I =f=P_4\,. \end{equation} Notice that for $x\in I_+$, using \eqref{beta11}, we get for $\epsilon_I =-1$ $$ \frac1{|I|}w_+\{x\in I_+: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}= \frac1{|I|}w_+\{x\in I_+: \sum_{J\subseteq I_+, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda+\beta\} $$ $$ =\frac1{2|I_+|}w_+\{x\in I_+: \sum_{J\subseteq I_+, J\in D} \epsilon_J(\varphi_+, h_J) h_J(x)>P_{+,5}\}\ge \frac12 \mathbb{B}(P_+)-\eta\,. $$ Similarly, for $x\in I_-$, using \eqref{beta11}, we get for $\epsilon_I =-1$ $$ \frac1{|I|}w_-\{x\in I_-: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}= \frac1{|I|}w_-\{x\in I_-: \sum_{J\subseteq I_-, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda-\beta\} $$ $$ =\frac1{2|I_-|}w_-\{x\in I_-: \sum_{J\subseteq I_-, J\in D} \epsilon_J(\varphi_-, h_J) h_J(x)>P_{-,5}\}\ge \frac12 \mathbb{B}(P_-)-\eta\,. $$ Combining the two left hand sides, we obtain for $\epsilon_I=-1$ $$ \frac1{|I|}\omega\{x\in I: \sum_{J\subseteq I, J\in D} \epsilon_J(\varphi, h_J) h_J(x)>\lambda\}\ge \frac 12 (\mathbb{B}(P_+)+\mathbb{B}(P_-))-2\eta\,.
$$ Let us now use the simple information \eqref{av11}: if we take the supremum in the left hand side over all functions $\varphi$ such that $\langle |\varphi |\,\omega\rangle_I =F$, $\langle \varphi \rangle_I =f$, over all weights $\omega$ with $\langle \omega\rangle_I =w$ and $\inf_I\omega\ge m$, in dyadic $A_1$ with $A_1$-norm at most $Q$, and the supremum over all $\epsilon_J\in[-1,1]$ (only $\epsilon_I=-1$ stays fixed), we get a quantity smaller than or equal to the one where we have the supremum over all such functions $\varphi$ and weights $\omega$, and an unrestricted supremum over all $\epsilon_J\in[-1,1]$, including $\epsilon_I$. The latter quantity is of course $\mathbb{B}(F, w, m,f, \lambda)$. So we proved \eqref{mi11}. To prove \eqref{mi21} we repeat verbatim the same reasoning, only keeping now $\epsilon_I=1$. We are done. \end{proof} \medskip \noindent{\bf Remark.} This theorem is a sort of ``fancy'' concavity property; the attentive reader will see that \eqref{mi11}, \eqref{mi21} represent a bi-concavity not unlike the one demonstrated by the celebrated Burkholder function. We will use the consequence of bi-concavity encompassed by \eqref{4conc}. There is yet another concavity coming from the fact that we allow $|\epsilon_J|\le 1$. \bigskip \begin{thm} \label{tudaVYP} Recall that in the definition of $\mathbb{B}$ the supremum is taken over all $|\epsilon_J|\le 1$. Let $P, P_+,P_-\in \Omega$, $P=(F,w,m, f,\lambda)$, $P_+=(F+\alpha, w+\gamma, m, f+\beta, \lambda )$, $P_-=(F-\alpha, w-\gamma, m, f-\beta, \lambda )$. Then \begin{equation} \label{3conc} \mathbb{B}(P)-\frac12(\mathbb{B}(P_+)+\mathbb{B}(P_-))\ge 0\,. \end{equation} \end{thm} \begin{proof} We repeat the proof of \eqref{mi11} but with $\epsilon_I=0$. \end{proof} \begin{thm} \label{tudaDM} For fixed $F, w, f,\lambda$, the function $\mathbb{B}$ is decreasing in $m$.
\end{thm} \begin{proof} Let $m=\min(m_-,m_+)= m_-$, and let $m_+>m$. Then \eqref{mi11} (with $\alpha=\beta=\gamma=0$) becomes $$ \mathbb{B}(F, w, m, f, \lambda)- \mathbb{B}(F, w, m_+, f, \lambda) \ge 0\,. $$ This is what we want. \end{proof} \subsection{Differential properties of $\mathbb{B}$ translated to differential properties of $B$} \label{diff} It is convenient to introduce auxiliary functions of $4$ and $3$ variables: $$ \widetilde B(x, y,f, \lambda):= B(\frac{x}{\lambda}, y, \frac{f}{\lambda})\,. $$ Of course, \begin{equation} \label{53} \mathbb{B}(F,w,m, f, \lambda)= m\widetilde B(\frac{F}{m}, \frac{w}{m}, f, \lambda)=m B(\frac{F}{m\lambda}, \frac{w}{m}, \frac{f}{\lambda})\,. \end{equation} \begin{lm} \label{incr} The function $B$ increases in the first and in the second variable. \end{lm} \begin{proof} We know that by definition the RHS of \eqref{53} gets bigger if $\lambda$ gets smaller. So let us consider $\lambda_1>\lambda_2$, $\lambda_1=\lambda_2+\delta$, keep the variables $F, w, m, f$ fixed, and choose $\phi_1$ (and a weight $\omega$), $\langle \phi_1\rangle=f+\epsilon, \langle |\phi_1|\omega\rangle =F$, which almost realizes the supremum $\mathbb{B}(F, w, m, f+\epsilon, \lambda_1)$. Consider $\phi_2$ such that $\phi_2= \phi_1 -h$. The function $h$ will be chosen later; however, we say now that $h$ is equal to a certain constant $a$ on a small dyadic interval $\ell$ and is zero otherwise. The constant $a$ and the interval $\ell$ will be chosen later, but $\epsilon:=\langle h\rangle$ will be chosen very soon. The function $\phi_2$ competes for the supremum $\mathbb{B}(\langle|\phi_2|\omega\rangle, w, m, f, \lambda_2)$. We choose $\epsilon$ in such a way that \begin{equation} \label{epsdelta} \frac{\langle \phi_1\rangle}{\lambda_1} = \frac{f+\epsilon}{\lambda_1} =\frac{f}{\lambda_1-\delta}=\frac{\langle \phi_2\rangle}{\lambda_2}\,.
\end{equation} Let us prove that \eqref{epsdelta} implies \begin{equation} \label{F1F2} \frac{\langle |\phi_1|\omega\rangle}{\lambda_1} \le \frac{\langle |\phi_2|\omega\rangle}{\lambda_2} \,. \end{equation} By \eqref{epsdelta} this is the same as $$ \frac{\langle |\phi_2 +h|\omega\rangle}{\langle |\phi_2|\omega\rangle} \le \frac{\langle \phi_1\rangle}{\langle \phi_2\rangle}= \frac{\langle \phi_2\rangle+\epsilon}{\langle \phi_2\rangle}\,. $$ The previous inequality becomes $$ \frac{\langle |\phi_2 +h|\omega\rangle}{\langle |\phi_2|\omega\rangle} \le 1+\frac{\langle h\rangle}{\langle\phi_2\rangle} \,. $$ By the triangle inequality the latter inequality would follow from the following one: $$ \langle|\phi_2|\,\omega\rangle \ge \langle \phi_2\rangle\frac{\langle |h|\,\omega\rangle}{\langle h\rangle}\,. $$ We can think that the minimum $m$ of $\omega$ is attained on a whole tiny dyadic interval $\ell$ (we are talking about {\it almost} supremums). Put $h$ to be a certain $a>0$ on this interval and zero otherwise. Of course we choose $a$ to have $\langle h\rangle = \epsilon$, where $\epsilon$ was chosen before. Now the previous display inequality becomes $$ \langle |\phi_2|\,\omega\rangle \ge \langle \phi_2\rangle \cdot m\,, $$ which is obvious. Notice that $\mathbb{B}(\langle|\phi_2|\,\omega\rangle, w, m, f, \lambda_2)$ as a supremum is larger than the $\omega$-measure of the level set $>\lambda_2$ of the martingale transform of $\phi_2$. But this is also the martingale transform of $\phi_1$. The $\lambda_1$-level set for any martingale transform of $\phi_1$ is smaller, as $\lambda_1>\lambda_2$. But recall that we already said that $\phi_1$ (and the weight $\omega$) almost realizes its own supremum $\mathbb{B}(F, w, m, f+\epsilon, \lambda_1)= \mathbb{B}(\langle |\phi_1|\,\omega\rangle, w, m, \langle\phi_1\rangle, \lambda_1)$. So $$ \mathbb{B}(\langle |\phi_1|\,\omega\rangle, w, m, \langle\phi_1\rangle, \lambda_1) \le \mathbb{B}(\langle |\phi_2|\,\omega\rangle, w, m, \langle\phi_2\rangle, \lambda_2)\,.
$$ In other notations we get $$ B (\frac{\langle |\phi_1|\,\omega\rangle}{m\lambda_1}, \frac{w}{m}, \frac{\langle \phi_1\rangle}{\lambda_1}) \le B (\frac{\langle |\phi_2|\,\omega\rangle}{m\lambda_2}, \frac{w}{m}, \frac{\langle \phi_2\rangle}{\lambda_2}) \,. $$ Let us denote the argument on the LHS by $(x_1, y_1, z_1)$, and on the RHS by $(x_2, y_2, z_2)$. Notice that $y_1= y_2=:y$ trivially and $z_1=z_2=:z$ by \eqref{epsdelta}. Notice also that $x_1\le x_2$ by \eqref{F1F2}. Moreover, by choosing $\delta$ very small we can realize any $x_1<x_2$ as close to $x_2$ as we want. Then the last display inequality reads as $$ B(x_1, y, z) \le B(x_2, y, z)\,. $$ So we proved that the function $B$ increases in the first variable. The increase in the second variable is easy. Choose a dyadic interval $I$ on which $\inf_I \omega>m$, but $\langle \omega\rangle_I/\inf_I\omega <Q=[\omega]_{A_1}$. For a non-constant $\omega$ this is always possible: just take a small interval containing a point $x_0$ where $\omega(x_0) >m$. Then augment $\omega$ on $I$ slightly to get $\omega_1$ with $\langle \omega_1\rangle=w+\epsilon$. It is easy to see that as a result we have a new weight with $A_1$ norm at most $Q$, the same global infimum $m$, but a larger global average $\langle \omega_1\rangle$. The $\omega_1$ measure of the level set of the martingale transform will be bigger than the $\omega$ measure of the same level set of the same martingale transform, and $w/m$ also grows to $(w+\epsilon)/m$. All other variables stay the same. So if the original $\omega$ (and some $\varphi$) were (almost) realizing the supremum, we would get $$ B(x, y_1, z) \le B(x, y_2, z) $$ for $y_1=w/m, y_2=(w+\epsilon)/m$. \end{proof} \begin{thm} \label{bigform} The function $B$ from \eqref{Bn} satisfies \begin{equation} \label{Bt} t\rightarrow t^{-1} B(\alpha t, \beta t, \gamma)\,\,\text{is increasing for}\,\,\frac{|\gamma|}{\alpha}\le t \le \frac{Q}{\beta}\,. \end{equation} \begin{equation} \label{BVYP} B\,\, \text{is concave}\,.
\end{equation} $$ B(\frac{x}{\lambda}, y, \frac{f}{\lambda}) -\frac14 \bigg[ B(\frac{x-dx}{\lambda-d\lambda}, y-dy, \frac{f-d\lambda}{\lambda-d\lambda}) +B(\frac{x-dx}{\lambda-d\lambda}, y-dy, \frac{f+d\lambda}{\lambda-d\lambda})+ $$ \begin{equation} \label{Bform} B(\frac{x+dx}{\lambda+d\lambda}, y+dy, \frac{f-d\lambda}{\lambda+d\lambda}) +B(\frac{x+dx}{\lambda+d\lambda}, y+dy, \frac{f+d\lambda}{\lambda+d\lambda})\bigg]\ge 0 \,. \end{equation} \end{thm} \begin{proof} These relations follow from Theorem \ref{tudaDM}, Theorem \ref{tudaVYP}, and Theorem \ref{tudaweight} (actually from \eqref{4conc}), respectively. \end{proof} We can choose an extremely small $\varepsilon_0$ and, inside the domain $\Omega$, mollify $\mathbb{B}$ by a convolution with an $\varepsilon_0$-bell function $\psi$ supported in a ball of radius $\varepsilon_0/10$. The multiplicative convolution can be viewed as the integration with $\frac{1}{\delta^5}\psi(\frac{x-x_0}{\delta})$, where $\delta=\varepsilon_0/10$. Here $x_0$ is a point inside the domain of definition $\Omega$ of the function $\mathbb{B}$. This new function we call $\mathbb{B}$ again. It behaves exactly like the initial function $\mathbb{B}$, and it obviously satisfies all the same relationships; in particular it satisfies Theorems \ref{tudaweight}, \ref{tudaVYP}, \ref{tudaDM}. Only its domain of definition $\Omega_{\varepsilon_0}$ is (slightly) smaller than $\Omega$. The advantage, however, is that the new $\mathbb{B}$ is smooth. We build $B$ from this new $\mathbb{B}$. A new function $B$ defined by the new $\mathbb{B}$ as in \eqref{53} will be smooth. Actually, the new $B$ should be denoted $B^{\varepsilon_0}$, where the superscript denotes our operation of mollification, but we drop the superscript for the sake of brevity. In fact, all these mollifications are for the sake of convenience: the new functions satisfy the old inequalities in a uniform way, independently of $\varepsilon_0$.
Property \eqref{Bform} can now be rewritten using Taylor's formula: \begin{thm} \label{Bdiffform} $$ -\alpha^2 B_{\alpha\al}\bigg(\frac{dx}{x}-\frac{d\lambda}{\lambda}\bigg)^2 -\beta^2 B_{\beta\beta} \bigg(\frac{dy}{y}\bigg)^2 -(1+\gamma^2)B_{\gamma\gamma} \bigg(\frac{d\lambda}{\lambda}\bigg)^2- $$ $$ -2\alpha\beta B_{\alpha\beta}\bigg(\frac{dx}{x}-\frac{d\lambda}{\lambda}\bigg)\frac{dy}{y} + 2\beta\gamma B_{\beta\gamma} \frac{dy}{y}\frac{d\lambda}{\lambda} + 2\alpha\gamma B_{\alpha\gamma}\bigg(\frac{dx}{x}-\frac{d\lambda}{\lambda}\bigg)\frac{d\lambda}{\lambda}+ $$ $$ +2\alpha B_{\alpha}\bigg(\frac{dx}{x}-\frac{d\lambda}{\lambda}\bigg)\frac{d\lambda}{\lambda} -2\gamma B_{\gamma}\bigg(\frac{d\lambda}{\lambda}\bigg)^2 \ge 0\,. $$ \end{thm} \begin{proof} This is just Taylor's formula applied to \eqref{Bform}. \end{proof} Denoting $$ \xi=\frac{dx}{x}=\frac{dy}{y}\,,\,\,\eta=\frac{d\lambda}{\lambda} $$ we obtain the following quadratic form inequality. \begin{thm} \label{xieta} $$ -\xi^2 \,[\alpha^2 B_{\alpha\al} +\beta^2 B_{\beta\beta} + 2\alpha\beta B_{\alpha\beta}] -\eta^2\, [\alpha^2 B_{\alpha\al} + (1+\gamma^2) B_{\gamma\gamma} + 2\alpha\gamma B_{\alpha\gamma} +2\alpha B_{\alpha} +2\gamma B_{\gamma}] + $$ $$ +2\xi\eta \,[\alpha^2 B_{\alpha\al} +\alpha\beta B_{\alpha\beta}+ \beta\gamma B_{\beta\gamma} +\alpha\gamma B_{\alpha\gamma} +\alpha B_{\alpha}] \ge 0\,. $$ \end{thm} Now let us combine Theorem \ref{xieta} and Theorem \ref{tudaVYP}. In fact, Theorem \ref{tudaVYP} implies $$ -2\alpha\gamma B_{\alpha\gamma} \eta^2 \le -\alpha^2\gamma B_{\alpha\al}\eta^2 -\gamma B_{\gamma\gamma} \eta^2\,. $$ We plug it into the second term above. Also, Theorem \ref{tudaVYP} implies $$ 2\alpha\gamma B_{\alpha\gamma}\xi\eta \le -\alpha^2\gamma B_{\alpha\al} \xi^2 - \gamma B_{\gamma\gamma} \eta^2\,, $$ $$ 2\beta\gamma B_{\beta\gamma}\xi\eta \le -\beta^2\gamma B_{\beta\beta} \xi^2 - \gamma B_{\gamma\gamma} \eta^2\,. $$ We will plug these into the third term above.
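In the next step we will also use the following elementary fact about quadratic forms, recorded here for the reader's convenience: if $K\xi^2+L\xi\eta+N\eta^2\ge 0$ for all real $\xi, \eta$, then $K\ge 0$, $N\ge 0$, and the discriminant is non-positive: $$ L^2-4KN\le 0\,,\qquad \text{that is,}\qquad N\ge \frac{L^2}{4K} \quad\text{whenever}\quad K>0\,. $$ Indeed, taking $\eta=1$ and minimizing over $\xi$ (the minimum is attained at $\xi=-L/(2K)$) gives $N-\frac{L^2}{4K}\ge 0$.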
Then, using the notation $$ \psi(\alpha, \beta,\gamma) := -\alpha^2 B_{\alpha\al}-2\alpha\beta B_{\alpha\beta} -\beta^2 B_{\beta\beta} $$ (which is non-negative by the concavity of $B$ in its first two variables), we introduce the notations $$ K:= \psi(\alpha,\beta,\gamma) + (-\alpha^2 B_{\alpha\al}-\beta^2B_{\beta\beta})\gamma\,, $$ $$ L:=-\psi(\alpha,\beta,\gamma) +(\alpha^2B_{\alpha})_{\alpha} -\beta^2B_{\beta\beta}\,, $$ $$ N:=-(1+3\gamma+\gamma^2) B_{\gamma\gamma} - 2\gamma B_{\gamma} -(\alpha^2B_{\alpha})_{\alpha} -\alpha^2B_{\alpha\al}\gamma\,. $$ And we get that the following quadratic form is non-negative: $$ \xi^2\, K+\xi\eta\,L +\eta^2\,N:= $$ $$ \xi^2\,[ \psi(\alpha,\beta,\gamma) + (-\alpha^2 B_{\alpha\al}-\beta^2B_{\beta\beta})\gamma]+ $$ $$ \xi\eta\, [ -\psi(\alpha,\beta,\gamma) +(\alpha^2B_{\alpha})_{\alpha} -\beta^2B_{\beta\beta}]+ $$ $$ \eta^2\, [-(1+3\gamma+\gamma^2) B_{\gamma\gamma} - 2\gamma B_{\gamma} -(\alpha^2B_{\alpha})_{\alpha} -\alpha^2B_{\alpha\al}\gamma] \ge 0\,. $$ Therefore, $K\ge 0$, and \begin{equation} \label{KLN} N\ge \frac{L^2}{4K}\,. \end{equation} Now we will estimate $L$ from below and $K$ from above, and as a result we will obtain an estimate of $N$ from below, which will complete the proof. But first we need some a priori estimates, and for that we will need to mollify $B=B^{\varepsilon_0}$ in the variables $\alpha, \beta$. Again we make a multiplicative convolution with a bell-type function. Let us explain why we need it. Let $$ \hat{Q}:=\sup_G B/\alpha\,. $$ We want to prove that \begin{equation} \label{hatQ} \hat{Q}/Q\rightarrow \infty\,. \end{equation} First we need to notice that \begin{equation} \label{inavpsi} \int_{1/2}^1 \psi(\alpha t,\beta t, \gamma)\,dt \le C\,(\hat{Q}\gamma +\frac{\hat{Q}}{Q}\alpha), \,\, \psi(\alpha, \beta,\gamma) := -\alpha^2 B_{\alpha\al}-2\alpha\beta B_{\alpha\beta} -\beta^2 B_{\beta\beta}\,.
\end{equation} In fact, consider $\beta\in [Q/4, Q/2]$ and $b(t):= B(\alpha t, \beta t, \gamma)$ on the interval $\frac{|\gamma|}{\alpha}=:t_0 \le t\le 1$. Let $\ell(t)=b(1)t\le \hat{Q}t\alpha$. We saw that $b(t)/t$ is increasing and $b$ is concave, so $b$ lies under $\ell$; hence, by the elementary picture of a concave function $b(\cdot)$ with $b(\cdot)/\cdot$ increasing on the interval $[t_0',1]$, we get that the maximum of $\ell(\cdot)- b(\cdot)$ is attained at the left end-point. The left end-point $t_0'$ is the maximum of $t_0=|\gamma |/\alpha$ and $1/\beta$, which is $c/Q$. Therefore, $$ (\ell-b)\Big|_{t=\max(\frac{|\gamma|}{\alpha},\frac{c}{Q})} \le\ell\Big(\max\Big( \frac{|\gamma|}{\alpha},\frac{c}{Q}\Big)\Big)\le C\hat{Q}\alpha\max\Big(\frac{|\gamma|}{\alpha}, \frac{1}{Q}\Big)\le C\Big(\hat{Q}\gamma +\frac{\hat{Q}}{Q}\alpha\Big)\,, $$ and the above value bounds the maximum of $g(t):=\ell(t)-b(t)$ on $[t_0',1]$. By the same property that $b(t)/t$ is increasing we get that $$ g'(1)=\ell'(1)-b'(1)= b(1) - b'(1)\le 0\,. $$ Combining this with Taylor's formula on $[t_0,1]$, we get for $g:=\ell-b$ ($g$ is convex, of course): \begin{equation} \label{d2g} -(1-t_0)g'(1) +\int_{t_0}^1 dt \int_{t}^1 g''(s) ds = \text{positive} + \int_{t_0}^1 (s-t_0) g''(s) ds \le \sup g \le C\Big(\hat{Q}\gamma +\frac{\hat{Q}}{Q}\alpha\Big)\,. \end{equation} This implies \eqref{inavpsi} because $g''(t) = \frac1{t^2} \psi(\alpha t, \beta t, \gamma), \, t\in [1/2,1]$. Consider now the function $ a(t):= B(\alpha t, \beta, \gamma)$. The same type of consideration applied to the convex function $\hat{Q}\alpha-a(t)$ brings us \begin{equation} \label{inaval} \int_{1/2}^1 -\alpha^2 B_{\alpha\al} (\alpha t,\beta,\gamma)\,dt \le C\hat{Q}\alpha\,. \end{equation} Similarly, \begin{equation} \label{inavbeta} \int_{1/2}^1 -\beta^2 B_{\beta\beta} (\alpha ,\beta t,\gamma)\,dt \le C\hat{Q}\alpha\,. \end{equation} We used here that $B_{\alpha}\ge 0$, $B_{\beta}\ge 0$, which is not difficult to see.
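Let us also record the elementary fact behind the ``picture'' used above: if $g$ is convex and non-negative on $[t_0',1]$ with $g(1)=0$, then the maximum of $g$ over $[t_0',1]$ is attained at an end-point (a convex function attains its maximum on the boundary of an interval), and since $g(1)=0$ it is attained at the left end-point: $$ \max_{[t_0',1]} g = g(t_0')\,. $$ In our applications $g=\ell-b$ is convex since $b$ is concave, non-negative since $b(t)/t$ is increasing (so that $b(t)\le b(1)\,t=\ell(t)$), and vanishes at $t=1$.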
For the future estimates we want \eqref{inavpsi}, \eqref{inaval}, \eqref{inavbeta} to hold not in average but pointwise. To achieve the replacement of the ``in-average'' estimates \eqref{inavpsi}, \eqref{inaval}, \eqref{inavbeta} by their pointwise analogs, let us consider yet another mollification, now of $B$ itself: $$ B_{new}(\alpha, \beta, \gamma):= 2\int_{1/2}^1 B(\alpha t, \beta t, \gamma)\, dt. $$ The domain of definition of $B_{new}$ differs only slightly from the domain of definition of $B$. In fact, the latter is $\{(\alpha, \beta, \gamma):\, |\gamma|\le \alpha, 1\le \beta\le Q\}$, and the former is just $G:=\{(\alpha, \beta, \gamma):\, |\gamma|\le \frac12\alpha, 2\le \beta\le Q\}$. If we replace $(\alpha, \beta, \gamma)$ by $(\alpha t, \beta t, \gamma), 1/2\le t \le 1,$ everywhere in the inequality of Theorem \ref{xieta}, and then integrate the inequality with $2\int_{1/2}^1\dots \,dt$, we will get Theorem \ref{xieta} but for $B_{new}$. \bigskip It is not difficult to see that \eqref{inavpsi} becomes a pointwise estimate for $B_{new}$ (just differentiate the formula for $B_{new}$ in $\alpha, \beta, \gamma$ and multiply by $\alpha, \beta,\gamma$ appropriately): \begin{equation} \label{navpsiPT} -\alpha^2 (B_{new})_{\alpha\al} - 2\alpha\beta (B_{new})_{\alpha\beta} - \beta^2 (B_{new})_{\beta\beta} \le C(\hat Q \gamma+ \frac{\hat Q}{Q}\alpha). \end{equation} This pointwise estimate automatically implies a new ``average'' estimate: $$ 2\int_{1/2}^1 \big(-\alpha^2s^2 (B_{new})_{\alpha\al}(\alpha s, \beta, \gamma) - 2\alpha s\beta (B_{new})_{\alpha\beta}(\alpha s, \beta, \gamma) - \beta^2 (B_{new})_{\beta\beta}(\alpha s, \beta, \gamma)\big)\,ds \le C(\hat Q \gamma+ \frac{\hat Q}{Q}\alpha). $$ This means exactly that the function $$ \tilde B:=(B_{new})_{new} := 2\int_{1/2}^1 B_{new}(\alpha s, \beta, \gamma)\, ds $$ still satisfies \eqref{navpsiPT}. It also clearly satisfies the inequality of Theorem \ref{xieta} because (as we noticed above) $B_{new}$ satisfies this inequality.
To see this fact, just replace all $\alpha$'s in the inequality of Theorem \ref{xieta} applied to $B_{new}$ by $\alpha s$ and integrate $2\int_{1/2}^1\dots\,ds$. Now let us see that $\tilde B=(B_{new})_{new} $ also satisfies a pointwise analog of \eqref{inaval}, namely, that \begin{equation} \label{navalPT} -\alpha^2 \tilde B_{\alpha\al}(\alpha, \beta, \gamma) \le C\hat Q\alpha\,. \end{equation} To show \eqref{navalPT} we just repeat what has been done above. Let $\tilde g(t):= \hat Q\alpha - B_{new}(\alpha t, \beta, \gamma)$. Then we have: 1) $0\le \tilde g \le \hat Q\alpha$ on $[t_0, 1]$, 2) $\tilde g'(1) \le 0$ (we saw that $B$, and hence $B_{new}$, are increasing in the first argument), 3) $\tilde g$ is convex. Then we saw in \eqref{d2g} that $$ \int_{1/2}^1 s^2\,\tilde g''(s) \,ds \le\int_{1/2}^1 \tilde g''(s) \,ds \le C\hat Q\alpha. $$ But this is exactly \eqref{navalPT}. So far we have constructed a function $\tilde B= (B_{new})_{new}$ that satisfies the pointwise inequalities \eqref{navpsiPT}, \eqref{navalPT} and the inequality of Theorem \ref{xieta}. We are left to see that by introducing $$ \hat B:= 2\int_{1/2}^1 \tilde B(\alpha , \beta s, \gamma)\, ds $$ we keep \eqref{navpsiPT}, \eqref{navalPT} and the inequality of Theorem \ref{xieta} valid and also ensure \begin{equation} \label{navbetaPT} -\beta^2 \hat B_{\beta\beta}(\alpha, \beta, \gamma) \le C\hat Q\alpha\,. \end{equation} We have already seen that \eqref{navpsiPT}, \eqref{navalPT} and the inequality of Theorem \ref{xieta} are valid for $\hat B$ just by averaging the same inequalities for $\tilde B$. We can see that \eqref{navbetaPT} holds by repeating what has just been done. Namely, consider $\hat g(t):= \hat Q\alpha - \tilde B(\alpha, \beta t, \gamma)$. Then we have: 1) $0\le \hat g \le \hat Q\alpha$ on $[t_0, 1]$, 2) $\hat g'(1) \le 0$ (we saw that $B$, and hence $B_{new}$ and $\tilde B$, are increasing in the second argument as well), 3) $\hat g$ is convex.
Using \eqref{d2g} again in exactly the same manner as when proving \eqref{navalPT}, we get $$ \int_{1/2}^1 s^2\,\hat g''(s) \,ds \le\int_{1/2}^1 \hat g''(s) \,ds \le C\hat Q\alpha. $$ But this is exactly \eqref{navbetaPT}. \bigskip We drop the ``hat'', and from now on $\hat B$ is just denoted by $B$. We can summarize its properties as follows. \begin{equation} \label{psi} 0\le \psi(\alpha ,\beta , \gamma) \le C(\hat{Q}\gamma +\frac{\hat{Q}}{Q}\alpha)\,. \end{equation} \begin{equation} \label{al} 0\le - \alpha^2 B_{\alpha\al} (\alpha ,\beta,\gamma)\le C\hat{Q}\alpha\,. \end{equation} \begin{equation} \label{beta} 0\le - \beta^2 B_{\beta\beta} (\alpha ,\beta ,\gamma) \le C\hat{Q}\alpha\,. \end{equation} Recall that (now with this mollified $B$): $$ \xi^2\, K+\xi\eta\,L +\eta^2\,N:= $$ $$ \xi^2\,[ \psi(\alpha,\beta,\gamma) + (-\alpha^2 B_{\alpha\al}-\beta^2B_{\beta\beta})\gamma]+ $$ $$ \xi\eta\, [ -\psi(\alpha,\beta,\gamma) +(\alpha^2B_{\alpha})_{\alpha} -\beta^2B_{\beta\beta}]+ $$ $$ \eta^2\, [-(1+3\gamma+\gamma^2) B_{\gamma\gamma} - 2\gamma B_{\gamma} -(\alpha^2B_{\alpha})_{\alpha} -\alpha^2B_{\alpha\al}\gamma] \ge 0\,. $$ We will soon choose appropriate $\alpha_0$, $\alpha_1\le \frac1{100}\alpha_0$, and $\gamma\le \tau\alpha_0$ with some small $\tau$. Let us introduce $$ k:=\int_{\alpha_1}^{\alpha_0} K\, d\alpha = \int_{\alpha_1}^{\alpha_0}[ \psi(\alpha,\beta,\gamma) + (-\alpha^2 B_{\alpha\al}-\beta^2B_{\beta\beta})\gamma]\,d\alpha\,, $$ $$ n:=\int_{\alpha_1}^{\alpha_0} N\,d\alpha = \int_{\alpha_1}^{\alpha_0} [-(1+3\gamma+\gamma^2) B_{\gamma\gamma} - 2\gamma B_{\gamma} -(\alpha^2B_{\alpha})_{\alpha} -\alpha^2B_{\alpha\al}\gamma]\,d\alpha\,, $$ $$ \ell:=\int_{\alpha_1}^{\alpha_0} [ -\psi(\alpha,\beta,\gamma) +(\alpha^2B_{\alpha})_{\alpha} -\beta^2B_{\beta\beta}]\,d\alpha\,.
$$ \noindent{\bf Estimate of $k$ from above.} The integrand of $k$ is obviously positive, and the $\psi$ term dominates the other terms (by \eqref{psi}, \eqref{al}, \eqref{beta} and the smallness of $\gamma$). Therefore, \begin{equation} \label{k} 0\le k\le C_1\, (\hat{Q}\gamma\alpha_0 + C\frac{\hat{Q}}{Q}\alpha_0^2) + C_2\,\hat{Q}\gamma\alpha_0^2\le C\, (\hat{Q}\gamma\alpha_0 + C\frac{\hat{Q}}{Q}\alpha_0^2)\,, \end{equation} if $Q$ is very large. We choose (we apologize for the strange way of writing $\alpha_0$; the reason for it will be seen in the next section) \begin{equation} \label{chooseal} \alpha_0= c\,\bigg(\frac{Q}{\hat{Q}}\bigg)^{\rho}\,,\,\rho=1\,,\,\alpha_1= \frac1{100}\sqrt{ \frac{Q}{\hat{Q}}}\alpha_0\,. \end{equation} Here $c$ is a small positive constant. We also choose to have $\gamma$ running only over the following interval \begin{equation} \label{g} \gamma\in [0,\gamma_0]\,,\,\, \gamma_0:= \tau \bigg(\frac{Q}{\hat{Q}}\bigg)^{\rho} \alpha_0\,,\,\rho=1\,, \end{equation} where $\tau$ is a small positive constant. \medskip \noindent{\bf Estimate of $\ell$ from below.} Estimating from below, we can skip the non-negative term $ -\beta^2B_{\beta\beta}$. Also, $$ \int_{\alpha_1}^{\alpha_0} -\psi(\alpha,\beta,\gamma)\,d\alpha\ge -C\hat{Q}\gamma\alpha_0 - C\frac{\hat{Q}}{Q}\alpha_0^2\,. $$ On the other hand, $$ \int_{\alpha_1}^{\alpha_0}(\alpha^2B_{\alpha})_{\alpha} \,d\alpha \ge \alpha_0^2 B_{\alpha}(\alpha_0,\beta, \gamma) -\alpha_1^2 \hat{Q} \,, $$ since mollification gives the pointwise estimate \begin{equation} \label{Bal} B_{\alpha} \le C\hat{Q}\,. \end{equation} Recall that $\beta\in [Q/4, Q/2]$. We will also soon prove the obstacle condition \eqref{againobst}, which says that \begin{equation} \label{boundary} B(1,\beta, \gamma)\ge \frac{\beta}{8}\,.
\end{equation} If $B_{\alpha}(\alpha_0, \beta, \gamma)$ were smaller than $Q/40$ (and then $B_{\alpha}(s, \beta, \gamma)\le Q/40$ for all $s\in [\alpha_0,1]$ by concavity of $B$ in its first variable), we would not be able to reach the value $\frac{Q}{4\cdot 8}$. In fact, by our choice of $\alpha_0$ in \eqref{chooseal} we have \begin{equation} \label{Bal0} B(\alpha_0, \beta, \gamma) \le \hat{Q}\alpha_0\le c \,Q\,. \end{equation} If $B_{\alpha}(\alpha_0, \beta, \gamma) \le \frac{Q}{40}$, and so $B_{\alpha}(s, \beta, \gamma) \le \frac{Q}{40}$ for $s\in [\alpha_0,1]$ (concavity), we cannot reach $Q/(4\cdot 8)$ at $s=1$ if we start with the value of $B$ given in \eqref{Bal0} at $s=\alpha_0$. But the fact that we cannot reach $Q/(4\cdot 8)$ contradicts \eqref{boundary}. Therefore, \begin{equation} \label{Bal0b} B_{\alpha}(\alpha_0, \beta, \gamma) \ge \frac{Q}{40}\,, \end{equation} and \begin{equation} \label{elldominate} \ell\ge \frac{\alpha_0^2}{40} Q -\alpha_1^2 \hat{Q} -C\,\hat{Q} \gamma\alpha_0-C\frac{\hat{Q}}{Q}\alpha_0^2\,. \end{equation} As $\alpha_1=\frac1{100}\alpha_0 \sqrt{\frac{Q}{\hat{Q}}}$ (see \eqref{chooseal}), the second term is dominated by the first; the third term is dominated by the first because of the choice of $\gamma_0$ in \eqref{g}; and the fourth term is dominated by the first because $Q^2\gg\hat{Q}$ (see \cite{P} for a much better estimate). \bigskip Finally, \begin{equation} \label{ellbelow} \ell\ge \frac{\alpha_0^2}{80} Q\ge c\,\alpha_0^2\,Q\,. \end{equation} And for $k$ we have $$ 0\le k \le C\, (\hat{Q}\gamma\alpha_0 + C\frac{\hat{Q}}{Q}\alpha_0^2)\le C\,\alpha_0 \hat{Q} \,(\gamma +\frac1Q \alpha_0)\,. $$ We get \begin{equation} \label{nbelow} n \ge \frac{\ell^2}{4k}\ge c\,\frac{\alpha_0^4 Q^2}{\alpha_0 \hat{Q} \,(\gamma +\frac1Q \alpha_0)}\,.
\end{equation} \medskip \noindent{\bf Estimate of $n$ from above.} By \eqref{Bal0b}, \eqref{Bal} and \eqref{al} we get $$ \int_{\alpha_1}^{\alpha_0} -(\alpha^2 B_{\alpha})_{\alpha}\, d \alpha -\gamma\,\int_{\alpha_1}^{\alpha_0} \alpha^2 B_{\alpha\al}\, d\alpha \le - c Q\alpha_0^2 + C\hat{Q}\alpha_1^2+ c\hat{Q}\alpha_0^2\gamma \le 0\,. $$ The negativity follows from the choice of $\alpha_1$ in \eqref{chooseal} and from the fact that \begin{equation} \label{glesq} \gamma \le c\, \sqrt{\frac{Q}{\hat{Q}}}\,, \end{equation} which is amply ensured by \eqref{g}. Therefore, combining with \eqref{nbelow}, we get (here $\eta>0$ is an absolute constant, at least the maximum of all our $3\gamma+\gamma^2$) $$ c\,\frac{\alpha_0^3 Q^2}{ \hat{Q} \,(\gamma +\frac1Q \alpha_0)} \le n\le -(1+\eta)\int_{\alpha_1}^{\alpha_0} (e^{\frac1{1+\eta}\gamma^2} B_{\gamma})_{\gamma}\,d\alpha\,, $$ or \begin{equation} \label{Bgamma} \int_{\alpha_1}^{\alpha_0} (-e^{\frac1{1+\eta}\gamma^2} B_{\gamma})_{\gamma}\,d\alpha \ge C\,\frac{\alpha_0^3 Q^3}{ \hat{Q} \,(Q\gamma +\alpha_0)} \,. \end{equation} The function $B$ is smooth, concave in $\gamma$, and symmetric in $\gamma$ (the latter is by definition). In particular, $B_\gamma(\alpha, \beta, 0) =0$. So after integrating in $\gamma$ over $[0, \gamma]$, $\gamma<\gamma_0$, we get \begin{equation} \label{BgammaInt1} \int_{\alpha_1}^{\alpha_0} ( -B_{\gamma})\,d\alpha \ge C\,\alpha_0^3 \frac{Q^2}{\hat{Q}}[\log (\alpha_0+ Q\gamma) -\log \alpha_0]= C\,\alpha_0^3 \frac{Q^2}{\hat{Q}}\,\log (1+ \frac{Q}{\alpha_0}\gamma) \,. \end{equation} Integrating once more in $\gamma$ over $[0, \gamma_0]$, we get the integral over $[\alpha_1,\alpha_0]$ of the oscillation of $B$, which is $$ \int_{\alpha_1}^{\alpha_0} [B(\alpha,\beta, 0) - B(\alpha,\beta, \gamma_0)]\,d\alpha \ge C\, \alpha_0^3 \frac{Q^2}{\hat{Q} } \cdot \frac{\alpha_0}{Q} (1+ Q\frac{\gamma_0}{\alpha_0})\log (1+ Q\frac{\gamma_0}{\alpha_0})\,. $$ But this oscillation is smaller than $C\hat{Q}\alpha_0^2$.
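The two integrations in $\gamma$ just performed rest on the elementary antiderivatives (recorded here for convenience; $s$ denotes the integration variable): $$ \int_0^{\gamma}\frac{ds}{Qs+\alpha_0}=\frac1Q\,\log \Big(1+ \frac{Q}{\alpha_0}\gamma\Big)\,,\qquad \int_0^{\gamma_0}\log\Big(1+\frac{Q}{\alpha_0}s\Big)\,ds=\frac{\alpha_0}{Q}\,\Big[\Big(1+\frac{Q\gamma_0}{\alpha_0}\Big)\log\Big(1+\frac{Q\gamma_0}{\alpha_0}\Big)-\frac{Q\gamma_0}{\alpha_0}\Big]\,, $$ which produce, up to constants, the logarithm in \eqref{BgammaInt1} and the factor $\frac{\alpha_0}{Q} (1+ Q\frac{\gamma_0}{\alpha_0})\log (1+ Q\frac{\gamma_0}{\alpha_0})$ in the last display.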
We get the inequality \begin{equation} \label{QhatQ} C\, \alpha_0^4 \frac{Q}{\hat{Q} } \,(1+ Q\frac{\gamma_0}{\alpha_0})\log (1+ Q\frac{\gamma_0}{\alpha_0}) \le \alpha_0^2\hat{Q}\,. \end{equation} \bigskip Notice that $\alpha_0$, $\gamma_0$, $\gamma_0/\alpha_0$ are all powers of $\frac{Q}{\hat{Q}}$, which we expect to be a sort of $\frac1{(\log Q)^{p}}$. Then we get the estimate in terms of {\it powers} of $\frac{Q}{\hat{Q}}$: \begin{equation} \label{QhatQ1} C\, \alpha_0^2 \frac{Q^2}{\hat{Q}^2 } \frac{\gamma_0}{\alpha_0}\log (1+ Q\frac{\gamma_0}{\alpha_0}) \le 1\,. \end{equation} \bigskip Let us count the powers of $\frac{Q}{\hat{Q}}$: $\alpha_0^2$ brings power $2$ by \eqref{chooseal}, $\frac{\gamma_0}{\alpha_0}$ brings power $1$ by \eqref{g}, and the explicit factor $\frac{Q^2}{\hat{Q}^2}$ brings power $2$; so in total we have $\frac1{(\log Q)^{5p}}\log \frac{Q}{(\log Q)^{\dots}}$ on the left-hand side. \bigskip We can see that if $\hat{Q}\le Q\log^{p} Q$ with $p <\frac1{5}$, then \eqref{QhatQ1} leads to a contradiction. So we have proved \begin{thm} \label{onethird} The weighted weak norm of the martingale transform for weights $w\in A_1^{dyadic}$ can reach $c\, [w]_{A_1}\log^{p} [w]_{A_1}$ for any positive $p<1/5$. \end{thm} \subsection{ A small improvement: from $1/5$ to $2/7$} \label{5} Suppose that we are allowed to transform the martingale not just by $\varepsilon_J=\pm 1$ but by any $|\varepsilon_J|\le 1$ (it is not clear whether this is the same for the weak norm estimate; probably yes). The change will give us that $|d\lambda|\le |df|$, and this will mean, in particular, that the function $B(\alpha, \beta, \gamma)$ is automatically concave in $\gamma$. We observed that it is symmetric in $\gamma$. Together this gives us \begin{equation} \label{maxg} B(\alpha, \beta, \gamma)\le B(\alpha, \beta, 0),\, |\gamma|\le \alpha;\,\,B(\alpha, \beta, \gamma) \ge B(\alpha, \beta, 0)/2,\, |\gamma|\le \alpha/2\,. \end{equation} Now, to improve the exponent $1/5$, we consider $Q^+:= \sqrt{Q\hat Q}$.
We put $$ a_0:= c_1 \frac{Q}{Q^+}\gg \alpha_0. $$ Two cases appear. \noindent Case 1. $B(a_0, \beta, 0) \le Q^+ a_0$. Then we replace $\alpha_0$ by $a_0$ in \eqref{chooseal}, we replace $\gamma_0$ by $\tilde\gamma_0=a_0 \big(\frac{Q}{Q^+}\big)^{\rho}$ in \eqref{g}, and we get \eqref{Bal0} with $B(a_0, \beta, \gamma) \le cQ$ (we use \eqref{maxg} here). As a result, we have (by exactly the same reasoning as above) $B_\alpha(a_0, \beta, \gamma) \ge \frac{Q}{40}$. This is the same as \eqref{Bal0b} but with $a_0$ instead of $\alpha_0$. Now the main bookkeeping inequality \eqref{QhatQ1}, with $a_0$ replacing $\alpha_0$ and $\tilde\gamma_0$ replacing $\gamma_0$, gives us the new exponent $p= 2/7$. \bigskip \noindent Case 2. $B(a_0, \beta, 0) \ge Q^+ a_0$. Then $B(a_0, \beta, 0) \ge c_1Q$. And by \eqref{maxg}, $B(a_0, \beta, \gamma) \ge c_1'Q$ if $|\gamma|\le a_0/2$. But we saw that $B(\alpha_0, \beta, \gamma) \le cQ$. Then between $\alpha_0$ and $a_0$ there is a point $\tilde\alpha_0$ such that $B_\alpha(\tilde\alpha_0, \beta, \gamma) \ge c_2Q/(c_1Q/Q^+- cQ/\hat Q) \ge c_3 Q^+$. Then by concavity $B_\alpha(\alpha_0, \beta, \gamma) \ge c_3 Q^+$. This is exactly \eqref{Bal0b}, but with a bigger constant on the right-hand side ($Q^+$ in place of $Q$). Therefore, we can repeat verbatim the whole body of estimates after \eqref{Bal0b} up to the main bookkeeping inequality \eqref{QhatQ1}. However, in \eqref{QhatQ1} the $Q^2$ in the numerator should be replaced by $(Q^+)^2$. Calculating $p$, we again arrive at $2/7$. \subsection{Obstacle conditions for $B$.} \label{obstacle} Now we want to show the following obstacle condition for $B$, which we have already used: \begin{equation} \label{againobst} \text{if}\,\,|\gamma|<\frac14\,,\,\,\text{then}\,\,B(1, \beta, \gamma)\ge \frac{\beta}{8}\,. \end{equation} Let $I:=[0,1]$.
Given numbers $|f|<\lambda/4$, $\frac{F}{m}=\lambda$, it is enough to construct suitable functions $\varphi, \psi, w$ on $I$. Here $I_-$, $I_+$ denote the left and right halves of $I$, and $I_{--},\dots,I_{++}$ its quarters, so that, e.g., $I_{+-}$ is the left half of $I_+$. Put $\varphi=-a$ on $I_{--}$, $\varphi=b$ on $I_{++}$, and zero otherwise. Put $w=1$ on $I_{--}\cup I_{++}$, and $w=Q$ otherwise. Then put $$ \psi:= (\varphi, h_{I_-}) h_{I_-} - (\varphi, h_{I_+}) h_{I_+}\,. $$ Let $0<a<b$ with $a$ close to $b$. Put $\lambda=(a+b)/4$. Then the average of $\varphi$ is small with respect to $\lambda$, and we can prescribe it. Here $F=(a+b)/4$, $m=1$. On the other hand, the function $\psi$ (which is a martingale transform of $\varphi-\langle\varphi\rangle$) is at least $-(\varphi, h_{I_+}) h_{I_+}\ge \frac12 b\ge \lambda$ on $I_{+-}$, whose $w$-measure is more than $\frac13 w(I)$. So \begin{equation} \label{4betagamma} B(1, \beta,\gamma)\ge \frac13 \beta\,, \end{equation} for all small $\gamma$ and $\beta\asymp Q$. This is what we wanted to prove. \section{Bellman function and the estimate of the weighted weak norm from above in the $A_1$ case} \label{above} Let us denote by $N_k$ the quantity (here $w$ is constant on the $k$-th generation and $w\in A_1^{dyadic}$) $$ N_k(V):=\sup\frac{1}{|I|}w\{x\in I: \sum_{J\subset I, J\in D, |J|\ge 2^{-k}|I|} \epsilon_J(f, h_J) h_J>\lambda\}\,. $$ Then, practically by the definition of $N_k$, we have (let $V$ temporarily denote the vector $(F,f, \lambda, w, m)$, and $y_1:=\lambda +f$, $y_2:=\lambda-f$) \begin{equation} \label{Nk} N_{k+1} (V) \le \sup_{V_+, V_-, V=\frac{V_++V_-}{2}, |y_{1+}-y_{1-}|=|y_{2+}-y_{2-}|}\frac{N_k(V_-)+N_k(V_+)}{2}\,. \end{equation} In this language we need to prove that \begin{equation} \label{anykey} N_{k}(V) \le B(V)\,\,\text{for any}\,\, k\,\,\text{and any}\,\, V \in \Omega_k\,. \end{equation} By the bi-concavity of $B$ and by \eqref{Nk} we immediately see the induction step from $k$ to $k+1$. We are left to check that \begin{equation} \label{N0} N_{0} (V) \le B(V)\,. \end{equation} Let us check \eqref{N0}.
If $ \lambda > \frac{F}{m}\ge |f|$, we just use $B(V)\ge 0$, because for such parameters $$ |(f, h_{[0,1]})|\le |f|<\lambda $$ and the subset of $[0,1]$ where $\epsilon_{[0,1]}(f, h_{[0,1]})h_{[0,1]}(x)$ is greater than $\lambda$ is empty. On the other hand, if $ \lambda \le \frac{F}{m}$, what can be the largest $w$-measure of $E\subset [0,1]$ on which $\epsilon_{[0,1]}(f, h_{[0,1]})h_{[0,1]}(x)\ge \lambda$? Here is the extremal situation: $w$ is $2Q-1$ on $[0, 1/2]$ and $1$ on $[1/2,1]$. The function $\varphi$ is zero on $[0, 1/2]$ and the constant $2f$ on $[1/2,1]$. Then $F=f$, $m=1$ (these are the data on $[0,1]$). On the other hand, $$ \epsilon_{[0,1]}(\varphi, h_{[0,1]})h_{[0,1]}(x)=\epsilon_{[0,1]}\,f\,h_{[0,1]}(x) =\frac{F}{m} \ge \lambda $$ on the whole of $[0,1/2]$ if $\epsilon_{[0,1]}=\pm 1$ is chosen in the right way. But in this case again, $B(V)\ge 2Q-1\ge w([0,1/2])$. Hence, $B(V)\ge N_0(V)$ is proved, and we can start the induction procedure. It remains to find our $ \mathbb{B}$ in order to have a sharp estimate from above in the $A_1$ problem. \section{Martingales} \label{Mart} We will use the four-adic lattice ${\mathcal F}$. For a four-adic interval $I$, let $H_I= 1$ on its right half and $H_I =-1$ on its left half; let also $G_I=1$ on its leftmost and rightmost quarters and $G_I =-1$ on the two middle quarters. We will call a martingale difference a function of the type $$ f_n=\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}} a_I H_I $$ or $$ g_n=\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}} b_I G_I\,, $$ where $a_I, b_I$ are numbers. A martingale for us is any function on $I_0:=[0,1]$ of the type ${\bf f}=f+\sum_{n=0}^N f_n\,,$ or $ {\bf g}=g+\sum_{n=0}^N g_n\,,$ where $f,g$ are two constants. We distinguish $H$- and $G$-martingales. In the previous sections the following theorem was proved.
\begin{thm} \label{mart1} Given $Q>1$, there exist three $H$-martingales $\bf{F},{\bf f},\bf{w}$, $\bf{F}\ge 0, \bf{w}\ge 0$, and one $G$-martingale ${\bf g} = g+\sum_{n=0}^N g_n$ with a large positive $g$ such that the following holds: 1) For any $I\in {\mathcal F}$, $\langle \bf{F}\rangle_I \ge \langle |{\bf f}| \rangle_I\min _I\bf{w}$. 2) For any $I\in {\mathcal F}$, $\langle \bf{w}\rangle_I \le Q\min_I\bf{w}$. 3) For any $I\in {\mathcal F}$, $a_I=b_I$, where these are the martingale difference coefficients for ${\bf f}$ and ${\bf g}$. 4) $g\cdot\int_{x\in I_0: {\bf g}(x) \le 0}\bf{w}\, dx \ge c\, Q \log^{p}Q\,\int_{I_0}\bf{F}\,dx$, $p<\frac15$. \end{thm} \section{Controlled doubling martingales} \label{DoubMart} We are going to make a small modification in the proof to get the following. \begin{thm} \label{mart2} Given $Q>1$, there exist three $H$-martingales $\bf{F},{\bf f},\bf{w}$, $\bf{F}\ge 0, \bf{w}\ge 0$, and one $G$-martingale ${\bf g}=\sum_{n=0}^N g_n$ such that the following holds: 1) For any $I\in {\mathcal F}$, $\langle \bf{F}\rangle_I \ge \langle |{\bf f}| \rangle_I\min _I\bf{w}$. 2) For any $I\in {\mathcal F}$, $\langle \bf{w}\rangle_I \le Q\min_I\bf{w}$. 3) For $I\in {\mathcal F}$, $b_I=-a_I$, where these are the martingale difference coefficients for ${\bf f}$ and ${\bf g}$. 4) For a large positive number $g$, $g\cdot\int_{x\in I_0: {\bf g}(x) \ge g}\bf{w}\, dx \ge c\, Q \log^{p}Q\,\int_{I_0}\bf{F}\,dx$, $p<\frac15$. 5) For any two four-adic neighbors (neighbors in the tree) $I\in {\mathcal F}$ and $\hat{I}$, $\langle \bf{w}\rangle_{\hat{I}}\le 4\langle\bf{w}\rangle_I$. \end{thm} In other words, we can always control the four-adic doubling property of $\bf{w}$. \section{Remodeling by proliferation. The amplification of martingale differences} \label{remod} Now we are going to repeat the procedure from \cite{NV}. We say that $I_0$ supervises itself.
Take a very large $n_1$, consider the division of $I_0$ into $4^{n_1}$ equal small intervals, and let the leftmost quarter of the supervisor (which is $I_0$) supervise the first, fifth, etc.\ small subdivision intervals of the supervisee (which is still $I_0$ for now, so these are the intervals of the subdivision just made). Let the second quarter supervise the second, the sixth, etc.; the third quarter the third, the seventh, etc.; and the fourth quarter the fourth, the eighth, etc. Now we have new pairs of (supervisor, supervisee). Subdivide each supervisor into its $4$ sons and its supervisee into $4^{n_2}$ sons, where $n_2\gg n_1$. Repeat the supervisor/supervisee allocation procedure as before, and continue with the new pairs of (supervisor, supervisee). Repeat $N$ times. Now, as in \cite{NV}, we are going to ``remodel'' the martingales $\bf{w}, \bf{F}, {\bf f}, {\bf g}$ into new functions with basically the same distributions. We first define the ``square sine'' and ``square cosine'' functions for any supervisee interval $I$. Let $$ sqsin_{I_0} (x) := H_{I_0}(4^{n_1} x)\,,\,\, sqcos_{I_0} (x):= G_{I_0}(4^{n_1} x)\,. $$ The next supervisors will be the quarters of $I_0$. Take one such quarter, say $I$, and put $$ sqsin_{I} (x) := H_{I}(4^{n_2} x)\,,\,\, sqcos_{I} (x):= G_{I}(4^{n_2} x)\,. $$ We continue doing that for the next generations of supervisors. Let $I, J$ be a supervisor/supervisee pair. Now let $\ell_{IJ}$ be the natural linear map $J\rightarrow I$. We put $sqs_J:= sqsin_I \circ \ell_{IJ}$, $sqc_J:= sqcos_I\circ \ell_{IJ}$. Now we basically want to put $$ W:= w+ \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} c_I sqs_J\,, $$ where $c_I$ are the coefficients of $\bf{w}$, and $$ \Phi:= F+ \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} d_I sqs_J\,, $$ where $d_I$ are the coefficients of $\bf{F}$.
$$ \phi:= f+ \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} a_I sqs_J\,, $$ where $a_I$ are the coefficients of ${\bf f}$, and $$ \rho:= \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} b_I sqc_J\,, $$ where $b_I=-a_I$ are the coefficients of ${\bf g}$. Notice that the last formula involves the square cosines $sqc$, and this will be important. We do exactly that, but to ensure the doubling property of $W$, for every pair $(I,J)$ (supervisor/supervisee) we replace $sqs_J$ by basically the same function, but with its first $4$ steps from the left and its first $4$ steps from the right replaced by zero. Call it $sqsm_J$. So $$ W:= w+ \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} c_I sqsm_J\,, $$ where $c_I$ are the coefficients of $\bf{w}$. The doubling property of such a $W$ has been checked in \cite{NV}. We notice that if $1\ll n_1\ll n_2\ll\dots\ll n_N$, then the distribution functions of these new functions are basically the same as those of their model martingales. So we can repeat Theorem \ref{mart2}. Let us consider the periodic extension of $W, \Phi, \phi, \rho$ to the whole line (or we could consider everything just on the unit circle, identifying it with $[0,1)$). \begin{thm} \label{mart3} Given $Q>1$, the above functions $W, \Phi, \phi, \rho$ are such that the following holds: 1) For any $J\in {\mathcal F}$, $\langle \Phi\rangle_J\ge \langle |\phi| \rangle_J\min _J W$. 2) For any $J\in {\mathcal F}$, $\langle W\rangle_J \le Q\min_JW$. 3) For any $J\in {\mathcal F}$, $b_J=-a_J$, where these are the martingale difference coefficients for $\phi$ and $\rho$. 4) For a large positive number $g$, $g\cdot\int_{x\in I_0: \rho(x) \ge g}W\, dx \ge c_0\, Q \log^{p}Q\,\int_{I_0}\Phi\,dx$, $p<\frac15$. 5) $W$ is doubling with an absolute constant. \end{thm} Now, what happens with the Hilbert transform $H\phi$ of $\phi$?
It is immediate that if we extend $sqsin_{I_0}$ periodically to the real line, do the same with $sqcos_{I_0}$, and call the extensions $sqsin$, $sqcos$, then \begin{equation} \label{xi} H(sqsin)(x)=\xi(x)\, sqcos(x)\,, \end{equation} where $\xi$ is a non-negative $1$-periodic function that looks as follows. It goes logarithmically to $+\infty$ at $0$, at $\frac12 -$, at $\frac12 +$, and at $1$. It has two zeros: at $\frac14$ and at $\frac34$. It is continued $1$-periodically. Let $I$ be one of the supervisors of the $k$-th generation. Put $$ \xi_I(x):= \xi(4^{n_k}x), \, x\in I\,. $$ Let now $I,J$ be a supervisor/supervisee pair. Recall that $\ell_{IJ}$ is the linear map from $J$ to $I$ sending the left (right) end-point to the left (right) end-point. We put $$ \xi_J := \xi_I\circ \ell_{IJ}. $$ It is now tempting (looking at the definition of $\phi$) to write that (recall that $b_I=-a_I$) $$ H\phi (x)= \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} b_I \xi_J(x) sqc_J(x)\,. $$ Unfortunately, in \eqref{xi} we have the $1$-periodic $sqsin, sqcos$ and not their versions localized to $I_0$. But $H$ of any bounded, highly oscillating function on an interval $J$ goes to zero uniformly outside a neighborhood of the end-points of $J$. Therefore, we can compensate for the problem with the localized functions $sqs_J, sqc_J$ by writing \begin{equation} \label{Hphi} H\phi (x)= \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} b_I \xi_J(x) sqc_J(x)+\Theta(x)\,, \end{equation} where $\Theta(x)$ is as close to zero as we wish outside a set of as small Lebesgue measure as we wish, by choosing $n_1\ll n_2\ll\dots$ large enough. We may assume that in all our constructions all sums are finite. In particular, the coefficients $a_I$, $b_I=-a_I$ of ${\bf f}, {\bf g}$ can be assumed to vanish from some generation on. So let $m_0$ be the last generation where these coefficients are non-zero.
Then the set \begin{equation} \label{distrFom} \omega:=\{x\in [0,1]:\, {\bf g}(x) \ge g\} \end{equation} consists of whole intervals of the next generation $m_0+1$; that is, it consists of certain sons of a certain collection of $4$-adic intervals of generation $m_0$, which we will call $\mathcal{\hat I}$. The set of their sons forming $\omega$ will be called $\mathcal{I}$. Intervals $\hat I$ from $\mathcal{\hat I}$ are the last intervals that act as supervisors in the remodeling construction above. The intervals they supervise will be called the collection $\mathcal{\hat J}$. Let $I\in \mathcal{\hat I}$, $J\in \mathcal{\hat J}$ be a supervisor/supervised pair. We do the remodeling one last time: divide $I$ into its sons, divide $J$ into $4^{n_{m_0}}$ equal intervals, and make a correspondence between the sons of $I$ and some small intervals of this subdivision of $J$. If a son of $I$ happens to be in $\mathcal{I}$, then we mark the corresponding small intervals $J'$ of this subdivision of $J$ red. All red intervals form the collection $\mathcal{J}$; call their union $\Omega:= \cup_{J'\in\mathcal{J}} J'$. Now we can see that \begin{equation} \label{distrF} \Omega = \cup_{J'\in\mathcal{J}} J'=\{x\in [0,1]:\, \rho(x) \ge g\}. \end{equation} In fact, let $I'$ be an element of $\mathcal{I}$, and $J'$ be a corresponding red interval (from $\mathcal{J}$). Fix any point $x\in I'$ and consider ${\bf g}=\sum_{I} b_I G_I$ at $x$. Consider the sequence of the terms of this sum. The sum has a term $b_{\hat I'} G_{\hat{I'}}(x)$ from the father $\hat I'$ of $I'$, then a term from the grandfather, et cetera. For any $x\in I'$ the sequence we just described is the same. If we now take any $y\in J'$ and consider the sequence of terms in the sum $\rho(y)=\sum_{J:\, J \,supervised\, by\, I} b_I sqc_J(y)$, we will see that it is exactly the same sequence as for $x\in I'$.
This was ensured by the remodeling construction, because each $J''$ that contributes to the sum at $y$ has a supervisor $I''$ that gives {\it the same} contribution to the sum at $x$. This proves \eqref{distrF}. This proves that ${\bf g}$ and $\rho$ are distributed in the same way (with respect to Lebesgue measure, and also with respect to the pair $\bf{w}$, $W$ correspondingly). However, we need a subtler thing. The distribution of $\rho$ is not enough for us; we also need the distribution of $$ \tilde H\phi (y) := \sum_{n=1}^N\sum_{I\in {\mathcal F}, \ell(I)=4^{-n}}\sum_{J\, supervised\, by \, I} b_I \xi_J(y) sqc_J(y)\,. $$ The problem is of course that we have all these $\xi_{J}(y)$. In fact, if $I'\in \mathcal{I}$, then for any red interval $J'$ supervised by $I'$ (and any point $y$ in any such $J'$) we have one and the same sequence of numbers $\{b_I sqc_J(y)\}_{I'\subset I, \, J\, is\, supervised\, by \, I}$. Call this sequence of numbers $d(x)$. It is a finite sequence $\{d_1,\dots, d_{m_0}\}$, and if $x\in I'\subset \omega$, then (see \eqref{distrFom}) $$ d_1+\dots+ d_{m_0} =: g_1\ge g\,. $$ We normalize by $\theta_i:= d_i/g_1$. Then at the corresponding $y\in J'$ the sum for $\tilde H\phi (y)$ looks like $$ \sum_{i=1}^{m_0} \theta_i x_i(y), $$ where $x_i(y)$ is the corresponding $\xi_J(y)$; for example $x_1(y)= \xi_J(y)$, where $J$ is the father of $J'$ and a supervisee of the father $I$ of $I'$. We have to notice that the sequence $d(x)$ does not depend on $x$, it depends only on $I'\in\mathcal{I}$, and hence the sequence $\{\theta_1,\dots, \theta_{m_0}\}$ does not depend on $y$ as long as $y\in J'$ and $J'$ is a red interval corresponding to $I'$. But unfortunately the $x_i(y)$ depend on $y$ very much. In different red intervals $J_1', J_2',\dots$ corresponding to the same $I'$ the sequence $\{x_1,\dots, x_{m_0}\}$ is completely different. Fix our $I'\in \mathcal{I}$, and let $\mathcal{J}(I')$ be the collection of all red $J'$ corresponding to $I'$.
On $Y:=\cup_{J'\in \mathcal{J}(I')} J'$ we introduce the probability measure as follows: choose any such $J'$ with equal probability $\mathcal{P}'$, and then put the normalized Lebesgue measure on it. Notice that the joint distribution of $\{x_1(y),\dots, x_{m_0}(y)\}$, $y\in Y$, with respect to this $\mathcal{P}'$ is almost the same as the joint distribution of independent random variables $\{\xi_1,\dots, \xi_{m_0}\}$ having the same distribution as our function $\xi$ on $[0,1]$. We can make the closeness in joint distribution apparent by choosing very large $n_1\ll n_2\ll\dots$. Consider now two cases: 1) $\sum \theta_k^2 < c_0$, 2) $\sum \theta_k^2 \ge c_0$, where $c_0$ is a certain absolute constant. Let $\xi = \sum\theta_k \xi_k$, $\zeta_k= \xi_k-\mathbb{E}\xi_k$, $\zeta= \sum\theta_k\zeta_k$. Let us think that $\int\xi=1$. Notice that then, by the normalization of the $\theta_i$, we have $\mathbb{E}(\sum\theta_k\xi_k) = 1$. Case 1). By Chebyshev's inequality, $\mathcal{P}\{ \xi<1/2\} \le \mathcal{P}\{|\xi-1|\ge 1/2\} \le 4\,Var(\zeta) \le 4 c_0$. So if $c_0$ happens to be $1/8$ we get that $$ \mathcal{P}\{ \xi\ge 1/2\} \ge 1/2\,. $$ Then by the closeness in joint distribution we would conclude that \begin{equation} \label{sumcoef1} \mathcal{P}'\{\sum_{i=1}^{m_0} \theta_i x_i(y)>1/2 \} \ge 1/4\,. \end{equation} \bigskip Case 2). In this case the sum of the variances of the $\theta_k\xi_k$ is sufficiently large. We will then use the following lemma: \begin{lm}\label{le83} Let $\theta_k>0,\ k=0,\dots,m-1$. Let $\tilde\xi_k$ be $\mathbb{R}$-valued independent random variables with variance proportional to $\theta_k^2$ satisfying \begin{equation} \label{moments} \mathbb{E}|\tilde\xi_k|^p\le Cp\,\theta_k^{p/2}, \, p=3, 4,\dots \end{equation} Then there exists $\delta=\delta(C,c)>0$ such that \begin{equation} \label{sumcoef2} \mathcal{P}\bigg\{\bigg|\sum_{k=0}^{m-1}\tilde\xi_k+a\bigg|\ge\delta\bigg(\sum_{k=0}^{m-1}\theta_k^2\bigg)^{1/2}\bigg\}\ge\delta \ \text{ for all }\ a\in\mathbb{R}.
\end{equation} \end{lm} \bigskip \noindent{\bf Remark.} Notice that the function $\xi\in BMO$, so by the John--Nirenberg inequality the requirement \eqref{moments} holds for our $$ \tilde\xi_k := \theta_k\xi_k. $$ We will apply this Lemma to such $\tilde\xi_k$ and to $a=0$. Notice that our $\tilde\xi_k$ will be non-negative. \bigskip \begin{proof} Denote $$ \sigma=\sum_{k=0}^{m-1}\tilde\xi_k+a,\quad \zeta_k=\tilde\xi_k-\mathbb{E}\tilde\xi_k. $$ Take $\lambda>0$ and consider $$ |\mathbb{E} e^{i\lambda\sigma}|=\bigg|e^{i\lambda a}\prod_{k=0}^{m-1}\mathbb{E} e^{i\lambda\tilde\xi_k}\bigg|\\ =\prod_{k=0}^{m-1}|\mathbb{E} e^{i\lambda\tilde\xi_k}|=\prod_{k=0}^{m-1}|\mathbb{E} e^{i\lambda\zeta_k}|. $$ Note now that for $\lambda\le \theta_k^{-1}$ (our $\lambda$ below will be such) we have by \eqref{moments} $$ |\mathbb{E} e^{i\lambda\zeta_k}|=\bigg|1-\frac{\lambda^2}2Var\,\zeta_k+O(\lambda^3\theta_k^3)\bigg|\le \exp\bigg(-\frac{\lambda^2}2Var\,\zeta_k+C\lambda^3\theta_k^3\bigg), $$ and $$\aligned \prod_{k=0}^{m-1}|\mathbb{E} e^{i\lambda\zeta_k}|&\le \exp\left(-c\lambda^2\sum\theta_k^2+C\lambda^3\sum\theta_k^3\right)\\ &\le\exp\bigg(-c'\left[\lambda\left(\sum\theta_k^2\right)^{1/2}\right]^2+ C'\left[\lambda\left(\sum\theta_k^2\right)^{1/2}\right]^3\bigg). \endaligned$$ Now choose $$ \lambda=\frac{c'}{2C'}\left(\sum\theta_k^2\right)^{-1/2}. $$ Then $$ |\mathbb{E} e^{i\lambda\sigma}|\le\exp\bigg(-\frac{(c')^3}{8(C')^2}\bigg). $$ On the other hand, for every $\delta>0$, one has $$ \begin{aligned} |\mathbb{E} e^{i\lambda\sigma}-1|&\le\lambda\delta\left(\sum\theta_k^2\right)^{1/2}+ 2\mathcal{P}\bigg\{|\sigma|>\delta\left(\sum\theta_k^2\right)^{1/2}\bigg\}\\ &\le\frac{c'}{2C'}\delta+2\mathcal{P}\bigg\{|\sigma|>\delta\left(\sum\theta_k^2\right)^{1/2}\bigg\}. \end{aligned} $$ Hence, $$ \mathcal{P}\bigg\{|\sigma|>\delta\left(\sum\theta_k^2\right)^{1/2}\bigg\}\ge \frac12\bigg[1-\exp\bigg(-\frac{(c')^3}{8(C')^2}\bigg)-\frac{c'}{2C'}\delta\bigg]>\delta, $$ if $\delta$ is chosen small enough.
\end{proof} The terms of the sum $\sum_i\theta_i x_i(y)$ are almost constant functions on each red interval $J'\in \mathcal{J}(I')$. We already proved in \eqref{sumcoef1}, \eqref{sumcoef2} that the probability $\mathcal{P}$ that the sum $\sum_k\theta_k\xi_k$ is larger than a certain fixed absolute $\delta$ is at least $\delta$. Therefore we may think that at least a $\delta/2$ portion of the red intervals $J'\in \mathcal{J}(I')$ is such that for the sum $\sum_i\theta_i x_i$ we have $$ \min_{J'}\sum_i\theta_i x_i\ge \delta/2. $$ Denote this collection of $J'$ by the symbol $\mathcal{C}(I')$. Now the previous inequality translates into $$ \tilde H\phi(x) \ge \frac{\delta}{4} g, $$ on all $J'$ from the portions $\mathcal{C}(I')$ described above, for all intervals $I'\in \mathcal{I}$. Now use 4) of Theorem \ref{mart3}. The estimate in 4), $\rho(x)\ge g$, holds on {\it all} red $J'$ corresponding to any $I'\in \mathcal{I}$ (see \eqref{distrF}). The $W$-measure of the union of them is large as indicated in 4), namely, $\ge \frac{c}{g}\, Q\log ^p Q\int |\phi(x)|\, W(x)\, dx$. Notice that all red intervals $J'$ from $\mathcal{J}(I')$ have the same $W$-measure (by the construction of $W$). Therefore, the $W$-measure of all these portions of red intervals described above (the portions are enumerated by $I'\in \mathcal{I}$) is at least $\delta/4$ times $\frac{c}{g} \,Q\log ^pQ \int |\phi(x)| W(x)\, dx$. So on a set of such $W$-measure we have $ \tilde H\phi(x) \ge \frac{\delta}{4} g$. This is exactly what we need if we take into consideration that $\Theta(x)$ in \eqref{Hphi} can be made as small as we wish outside a set of Lebesgue measure (and then obviously of $W\,dx$ measure as well) as small as we wish.
\section{Introduction} Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease, commonly characterized by loss of movement due to the degeneration of motor neurons in the brain and spinal cord \cite{Brown2017}. Its severe nature and fast progression result in short survival times, averaging 3 to 5 years \cite{Brown2017}. Death in ALS patients is usually associated with respiratory failure \cite{Lechtzin2018}. Therefore, treatments designed to manage respiratory system-related symptoms, such as the administration of Non-Invasive Ventilation (NIV), have been shown to improve prognosis and extend survival time \cite{Bourke2006}. In this context, an early prediction of patients' need for assisted ventilation would have significant implications for patients' quality of life and health costs. Machine Learning-based approaches have been followed to tackle this problem \cite{Carreiro2015, Pires2018}. Carreiro et al. \cite{Carreiro2015} developed a supervised learning approach to predict the need for NIV in ALS patients within specific time windows. Furthermore, Pires et al. \cite{Pires2018} attempted to improve these predictive models by tackling the heterogeneity in ALS patients, using stratified disease progression groups. Despite the promising results of these prognostic models, their translation into clinical practice is hampered by the lack of insight into the risk of error for predictions at instance level (in this case, at patient level). In other words, for prognostic models to become actionable in the clinicians' decision-making process, they must provide not only the most likely prediction for a given patient but also an indication of how reliable that prediction is.
Conformal Prediction (CP) is a machine learning framework built on top of standard classifiers that, for a given test instance, computes a p-value for each possible class, which can be used as a confidence measure (indicating the likelihood of that prediction being correct), or to produce a prediction set guaranteed to include the true label at that confidence level. CP has been successfully applied in health-related domains \cite{Papadopoulos2009, Lambrou2010}. In this work, we evaluate the feasibility of the CP framework to target the reliability of predictions at patient level when predicting NIV for ALS patients. In particular, we propose a prognostic model using a mixture of experts (i.e., models learned with different time windows) which not only predicts whether a given patient will suffer from respiratory insufficiency but also outputs the most likely time window of occurrence, at a given reliability level. To the best of our knowledge, there are no previous studies on CP for ALS prediction, making it a relevant case study. Furthermore, we use a different way of building learning examples using time windows when compared with previous literature \cite{Carreiro2015, Pires2018}, by narrowing the time width of prediction. \vspace{-1mm} \section{Dataset and Methods} \subsection{Data} \label{datasect} We used a cohort of 1360 ALS patients, followed in the ALS clinic of the Translational Clinical Physiology Unit, Hospital de Santa Maria, Lisbon, from 1992 until March 2019. It comprises 27 variables, covering demographic, clinical (including respiratory tests and neurophysiological data), and genetic data. These data are collected from every participant at the baseline assessment, as well as on their quarterly follow-up consultations. For a detailed description of these variables we refer to \cite{Pires2018}.
\vspace{-1.5mm} \subsubsection{Creating learning examples} \label{createLE} Since data may not all be collected on the same day, a preprocessing step is required to merge all features into a single observation, called here a snapshot, reflecting the summary of the patient's condition around that appointment's time. We followed the approach proposed in \cite{Carreiro2015}, a bottom-up hierarchical clustering with constraints strategy to cluster temporally-related tests. In the end, we can have multiple snapshots per patient regarding different appointments. Then, we stratified the created patients' snapshots regarding their time of progression to respiratory insufficiency. These correspond to the learning examples used to train the prognostic models. This learning approach using time windows was already used in \cite{Carreiro2015, Pires2018}, and allows us to answer the question "Will an ALS patient require NIV within \textit{k} days after the medical assessment?". However, these time windows are inclusive, in the sense that prognostic models built for a 365-day time window, for instance, might include cases requiring NIV either at 90 or 180 days after the assessment. For clinicians, it would be more informative if we could narrow the time window of prediction, for instance, to the temporal distance between appointments (in this case, three-month intervals), thus moving to the question "Will an ALS patient require NIV between \textit{k} and \textit{k+90} days after the medical assessment?". In previous work \cite{Carreiro2015, Pires2018}, time windows of 90, 180, and 365 days (3, 6, and 12 months respectively) were used, as recommended by clinicians. In this context, we predict the need for NIV at up to 90 days, from 90 days to 180 days, and from 180 days to 365 days.
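As a sketch, the narrowed time windows above induce the following labelling of a snapshot by its time to NIV (plain Python; the function and label names are illustrative, not from this work, and each base model then solves the binary Evol/No Evol task for its own window):

```python
# Assign a snapshot to one of the narrowed prediction windows, given the
# number of days from the assessment until NIV was needed (None if the
# patient did not progress within the observation period).
def window_label(days_to_niv):
    if days_to_niv is None or days_to_niv > 365:
        return "NoEvol"
    if days_to_niv <= 90:
        return "0-90d"
    if days_to_niv <= 180:
        return "90-180d"
    return "180-365d"

labels = [window_label(d) for d in (30, 120, 300, 400, None)]
# ["0-90d", "90-180d", "180-365d", "NoEvol", "NoEvol"]
```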
\subsection{Cross Conformal Prediction} Let us assume that we are given a training set $\{(x_{1}, y_{1}),\dots,(x_{n-1}, y_{n-1})\}$, where $x_{i} \in X$ is a vector of attributes and $y_{i} \in Y$ is the class label (binary classification problem). Given a new test example $x_{n}$, we aim to predict its class and assess the level of uncertainty of such a prediction. Intuitively, we assign each class $y_{n} \in Y$ to $x_{n}$, one at a time, and then evaluate how (dis)similar the example $(x_{n}, y_{n})$ is in comparison with the training data, using a (non-)conformity measure. This (non-)conformity measure is a function that assesses the (dis-)similarity between examples by means of a numerical (non-)conformity score ($\alpha_{n}$), and is generally based on the underlying classifier. To evaluate how different $x_{n}$ is from the training set, we compare its non-conformity score with those of the remaining training examples $x_{j}, j=1, ..., n-1$, using the \textit{p}-value function: \begin{equation} \label{eq:eq1} p(\alpha_{n})=\dfrac{\vert \{ j=1,...,n: \alpha_{j} \geq \alpha_{n} \} \vert }{n}, \end{equation} \vspace{-1mm} \noindent where $\alpha_{n}$ is the non-conformity score of $x_{n}$, assuming it is assigned to the class label $y_{n}$. If the \textit{p}-value is small, then the test example $(x_{n},y_{n})$ is non-conforming, since few examples $(x_{i},y_{i})$ had a higher non-conformity score when compared with $\alpha_{n}$. If the \textit{p}-value is large, $x_{n}$ is very conforming, since most examples $(x_{i},y_{i})$ had a higher non-conformity score when compared with $\alpha_{n}$. CP is valid under the randomness (i.i.d.) assumption \cite{Vovk2005}.
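The \textit{p}-value function above can be sketched in a few lines (plain Python; the scores are illustrative, and in practice the $\alpha_j$ would come from the underlying classifier):

```python
# Conformal p-value: the fraction of examples (the test example included)
# whose non-conformity score is at least as large as the test score.
def conformal_p_value(train_scores, test_score):
    scores = list(train_scores) + [test_score]
    return sum(1 for a in scores if a >= test_score) / len(scores)

# A test example stranger than all training examples gets a small p-value,
p_strange = conformal_p_value([0.1, 0.2, 0.3, 0.4], 0.9)     # 1/5 = 0.2
# while a very conforming one gets a large p-value.
p_conforming = conformal_p_value([0.5, 0.5, 0.5, 0.5], 0.1)  # 5/5 = 1.0
```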
Once \textit{p}-values are computed, CP can be used in one of the following ways: \textit{i) Using forced predictions (FP):} predict the class with the highest p-value and output its credibility (the largest p-value) and confidence (complement to 1 of the second highest p-value); or \textit{ii) Using prediction regions (PR):} for a given confidence level ($1-\epsilon$), output the prediction region $T^{\epsilon}$: the set of all classes with $p(\alpha_{n})> \epsilon$. For each test example, and each possible class label, the classifier is rerun using all training examples along with the new test example, which constrains its applicability to large datasets. To overcome this computational inefficiency problem, an inductive version of this framework has emerged. In this case, the training set $\{(x_{1}, y_{1}),\dots,(x_{n-1}, y_{n-1})\}$ is divided into the proper training set $\{(x_{1}, y_{1}),\dots,(x_{m}, y_{m})\}$ and the calibration set $\{(x_{m+1}, y_{m+1}),\dots,(x_{n-1}, y_{n-1})\}$, where $m<n-1$. The proper training set is used to train the underlying classifier, whereas the p-values are computed using only examples in the calibration set. Later on, a new approach named Cross Conformal Prediction (CCP) \cite{Vovk2015} was proposed to cope with the loss of informational efficiency of inductive CP. It consists of splitting the training set into \textit{k} folds, where one of the \textit{k} folds is used as a calibration set while the remaining \textit{k-1} folds are merged to form the proper training data. \subsection{Prognostic model using a mixture of experts} Figure \ref{fig:workflow} depicts the supervised learning approach proposed in this work: a prognostic model in ALS using a mixture of experts.
Given a new ALS patient, we predict whether he (or she) will progress to respiratory insufficiency in one of three time windows ($M_{90d}$: up to 90 days, $M_{90-180d}$: from 90 to 180 days, or $M_{180-365d}$: from 180 to 365 days) or remain stable up to the limit of each time window (these are the \textit{base models}). Therefore, we end up with three predictions and the respective reliability measures. In the next step, one must define an aggregation rule to combine those predictions into a final one. In this work, we decided to predict the class with the highest reliability (in this case, measured by the CP credibility). As such, the proposed prognostic model not only predicts whether a given ALS patient will require NIV, but also when it is more likely to happen. Moreover, a measure reflecting the reliability of the predicted class is outputted. If all predictions are below the predefined reliability threshold, we consider that case as unpredictable (No prediction). In a broader sense, we can say our prognostic model addresses a multi-class problem. \begin{figure} \centering \includegraphics[width=46mm]{workflow.pdf} \caption{Workflow of the proposed prognostic model using a mixture of experts to predict NIV in ALS patients.} \label{fig:workflow} \end{figure} \vspace{-2.5mm} \subsection{Classification settings} In the evaluation of the proposed prognostic model, an \textit{outer} 5-fold cross-validation (CV) was performed on top of the \textit{inner} 5 folds of the CCP. The folds were picked randomly, maintaining class proportions. For each \textit{outer} fold, we created an overall validation set by merging the testing sets from each base model (regarding different time windows). We tested three reliability thresholds ($\tau$=0.80, 0.90, 0.95) considered useful in clinical practice.
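The forced-prediction outputs described in the previous subsection and the aggregation rule above can be sketched as follows (plain Python; the window names and p-values are illustrative, not results from this work):

```python
# Forced prediction for one base model: the class with the highest p-value,
# its credibility (highest p-value) and confidence (1 - second highest).
def forced_prediction(p_values):
    ranked = sorted(p_values.items(), key=lambda kv: kv[1], reverse=True)
    label, credibility = ranked[0]
    _, second = ranked[1]
    return label, credibility, 1.0 - second

# Mixture of experts: keep the expert (time window) whose forced prediction
# has the highest credibility; abstain if none reaches the threshold tau.
def aggregate(expert_outputs, tau):
    window, (label, credibility) = max(expert_outputs.items(),
                                       key=lambda kv: kv[1][1])
    return (window, label, credibility) if credibility >= tau else "No prediction"

outputs = {"90d": ("NoEvol", 0.55),
           "90-180d": ("Evol", 0.93),
           "180-365d": ("Evol", 0.70)}
aggregate(outputs, tau=0.90)   # ("90-180d", "Evol", 0.93)
aggregate(outputs, tau=0.95)   # "No prediction"
```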
To evaluate the classification performance of the base models $M_{90d}$, $M_{90-180d}$, and $M_{180-365d}$, we assessed the Area under the ROC curve (AUC), sensitivity, and specificity, using the testing sets created per time window (from the \textit{outer} fold). Then, to evaluate the prognostic model using a mixture of experts, we output the number of correct and incorrect predictions made, using the validation set. We tested different classifiers within the CCP framework, using the credibility to reflect the uncertainty of predictions (Forced Predictions), and the standard classifiers alone, without prediction uncertainty, for the sake of comparability. Namely, we tested the following classifiers: Na\"ive Bayes, Support Vector Machines with the polynomial (SVM Poly) and RBF kernel (SVM RBF), Logistic Regression, and Decision Tree with the J48 algorithm. Class imbalance was tackled by Random Undersampling and SMOTE, as suggested in \cite{Pires2018}. In particular, we first randomly undersample the majority class until a balance of 60/40\% is achieved, and then SMOTE is used to reach a 50/50\% class proportion. Moreover, the most relevant set of features was selected by the feature selection ensemble algorithm proposed in \cite{Pereira2018}. This feature selection approach starts by ranking features according to their relevance, as assessed by a consensus of different feature selection algorithms, and then selects the top-ranked features which maximize both predictability and stability performances. The proposed prognostic model was implemented in Java using WEKA's functionalities (version 3.8.0). \section{Results and Discussion} The data used in this work is described in Section \ref{datasect} and summarized in Table \ref{tab:freq}. Patients who required assisted ventilation within a given time window are labelled as "Evol", while those who did not need NIV are labelled as "No Evol" in this study. The number of "No Evol" patients decreases with the time width.
This is justifiable by the fast-declining nature of ALS. Most "Evol" patients require NIV within either the first 3 months or between the first 6 and 12 months. \vspace{-4mm} \begin{table}[H] \caption{Details on the ALS dataset for time windows of 90 to 365 days. Class imbalance (per time window) is shown as \% within brackets.} \label{tab:freq} \begin{tabular}{lcc} \toprule \textbf{} & \textbf{Evol (E=1)} & \textbf{No Evol (E=0)} \\ \midrule up to 90 days & 594 (18\%) & 2750 (82\%) \\ 90 to 180 days & 373 (15\%) & 2186 (85\%) \\ 180 to 365 days & 469 (24\%) & 1456 (76\%) \\ \bottomrule \end{tabular} \end{table} \vspace{-2mm} From all the tested classifiers, the best results for the prognostic model using a mixture of experts were achieved with SVM Poly. Therefore, and due to space constraints, we report the results using this classifier. \subsection{Learning the base models} Tables \ref{tab:resultStandard} and \ref{resultsInner} show the classification performance of the base models ($M_{90d}$, $M_{90-180d}$, and $M_{180-365d}$) built with the standard SVM Poly and with Conformal Predictors (CPs) coupled with the SVM Poly, respectively. Both the standard classifier and the CPs (with $\tau$=0) struggle to accurately identify patients who need NIV, as shown by the low sensitivity values obtained across all time windows. This was already the trickiest class to predict in previous works \cite{Carreiro2015, Pires2018}. Notwithstanding, the sensitivity greatly improves when considering predictions made at high reliability levels (in this case, predictions with high credibility values), at the cost of giving a prognosis for a limited number of patients. Using CCPs proved to be useful when predicting NIV in ALS patients, with promising AUC, sensitivity, and specificity values for high credibility values, mainly when predicting short-term ($M_{90d}$) and long-term ($M_{180-365d}$) progressions to respiratory failure.
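Given the class proportions in Table \ref{tab:freq}, the two-step re-balancing described in the previous section (random undersampling to 60/40, then SMOTE to 50/50) amounts to the following target counts (a sketch; the helper only computes the counts, not the actual resampling, and is illustrative rather than the implementation used in this work):

```python
# Step 1: undersample the majority ("No Evol") class to a 60/40 split,
# i.e. majority = minority * 60/40.  Step 2: SMOTE the minority ("Evol")
# class up to the undersampled majority count, giving 50/50.
def rebalance_targets(n_minority, n_majority):
    majority_after_undersampling = round(n_minority * 60 / 40)
    minority_after_smote = majority_after_undersampling
    return majority_after_undersampling, minority_after_smote

# 90-day window counts from Table 1: 594 "Evol" vs 2750 "No Evol".
rebalance_targets(594, 2750)   # (891, 891)
```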
\subsection{Prognostic model using a mixture of experts} Table \ref{OverallResults} reports the predictive performance of the proposed prognostic model using a mixture of experts, per reliability threshold. We aim at simulating a real-world situation in which, given a new ALS patient, we predict whether he (or she) will need NIV, and the most likely time window of occurrence. The fact that patients are not exclusive to one time window (i.e., an ALS patient may be "No Evol" in a shorter time window and become "Evol" in a broader time window) hinders the computation of confusion matrix-based evaluation metrics (such as sensitivity or specificity), since the gold standard is not unique. Nevertheless, we evaluate the number of 1) patients correctly identified as needing NIV in the corresponding time window (labelled as "Evol as Evol"), 2) patients correctly identified as not requiring NIV in the corresponding time window (labelled as "noEvol as noEvol"), and 3) the number of misclassifications due to either a wrong prediction or an incorrect time window. Comparing with Table \ref{resultsInner}, we notice a significant increase in the number of predictions made by the prognostic model using a mixture of experts. This suggests that the base models complement each other, and patients who may be hard to classify in one model (and time window) are more similar to the training examples of another model (and time window). The percentage of correct predictions is near 80\% across all time windows. While this performance must be enhanced to be translated to clinical settings, we recall that this model is handling a hard classification task, similar to a multi-task problem.
\vspace{-2.5mm} \begin{table}[H] \caption{Classification performance of the base models trained with SVM with polynomial kernel within a randomized 5-fold CV scheme, per time window.} \label{tab:resultStandard} \begin{tabular}{lccc} \toprule & \textbf{AUC} & \textbf{Sensitivity} & \textbf{Specificity} \\* \midrule \textit{$M_{90d}$} & 0.794$\pm$0.02 & 0.682$\pm$0.09 & 0.767$\pm$0.02 \\ \textit{$M_{90-180d}$} & 0.684$\pm$0.02 & 0.560$\pm$0.04 & 0.735$\pm$0.02 \\ \textit{$M_{180-365d}$} & 0.752$\pm$0.03 & 0.657$\pm$0.05 & 0.712$\pm$0.03 \\* \bottomrule \end{tabular} \end{table} \vspace{-1.5mm} \begin{table} \caption{Classification performance of the base models trained with Cross Conformal Predictors coupled with SVM with polynomial kernel within a randomized 5-fold CV scheme, per time window and reliability threshold.} \label{resultsInner} \begin{tabular}{cccc} \toprule \textbf{$M_{90d}$} & \textbf{AUC} & \textbf{Sensitivity} & \textbf{Specificity} \\ \midrule \textit{All} & 0.801$\pm$0.02 & 0.665$\pm$0.06 & 0.788$\pm$0.02 \\ \textit{0.80} & 0.878$\pm$0.03 (34\%) & 0.863$\pm$0.07 (34\%) & 0.869$\pm$0.01 (34\%) \\ \textit{0.90} & 0.878$\pm$0.02 (19\%) & 0.901$\pm$0.04 (19\%) & 0.845$\pm$0.01 (19\%) \\ \textit{0.95} & 0.899$\pm$0.03 (9\%) & 0.929$\pm$0.04 (9\%) & 0.871$\pm$0.03 (9\%) \\ \midrule \textbf{$M_{90-180d}$} & \textbf{AUC} & \textbf{Sensitivity} & \textbf{Specificity} \\ \midrule \textit{All} & 0.706$\pm$0.004 & 0.587$\pm$0.07 & 0.751$\pm$0.02 \\ \textit{0.80} & 0.806$\pm$0.05 (29\%) & 0.759$\pm$0.15 (29\%) & 0.833$\pm$0.03 (29\%) \\ \textit{0.90} & 0.818$\pm$0.04 (18\%) & 0.846$\pm$0.07 (18\%) & 0.808$\pm$0.07 (18\%) \\ \textit{0.95} & 0.842$\pm$0.07 (9\%) & 0.869$\pm$0.09 (9\%) & 0.794$\pm$0.04 (9\%) \\ \midrule \textbf{$M_{180-365d}$} & \textbf{AUC} & \textbf{Sensitivity} & \textbf{Specificity} \\ \midrule \textit{All} & 0.766$\pm$0.04 & 0.669$\pm$0.05 & 0.741$\pm$0.03 \\ \textit{0.80} & 0.882$\pm$0.04 (33\%) & 0.842$\pm$0.06 (33\%) & 0.867$\pm$0.03
(33\%) \\ \textit{0.90} & 0.875$\pm$0.03 (20\%) & 0.973$\pm$0.04 (20\%) & 0.847$\pm$0.04 (20\%) \\ \textit{0.95} & 0.886$\pm$0.03 (9\%) & 1.0$\pm$0.0 (9\%) & 0.836$\pm$0.03 (9\%) \\ \bottomrule \end{tabular} \end{table} \vspace{-1.5mm} \begin{table}[H] \caption{Classification performance of the prognostic model using a mixture of experts trained with Cross Conformal Predictors coupled with SVM with polynomial kernel within a randomized 5-fold CV scheme, per reliability threshold.} \label{OverallResults} \begin{tabular}{@{}ccccc@{}} \toprule \textbf{} & \textbf{Evol} & \textbf{noEvol} & \textbf{No. (\%) of} & \textbf{\% of} \\ \textbf{$\tau$} & \textbf{as Evol} & \textbf{as noEvol} & \textbf{misclassifications} & \textbf{predictions} \\\midrule \textit{All} & 561 & 5478 & 1789 (23\%) & 100\% \\ \textit{0.80} & 561 & 5478 & 1789 (23\%) & 100\% \\ \textit{0.90} & 542 & 5429 & 1714 (22\%) & 98\% \\ \textit{0.95} & 410 & 4892 & 1266 (19\%) & 84\% \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} The early administration of NIV in ALS patients leads to a better prognosis and extended survival times. In this context, we propose a prognostic model that predicts whether, and when, ALS patients will need assisted ventilation, at a given reliability threshold. Such models can be useful to support clinical decisions. Highly credible predictions can reinforce the decision of prescribing NIV or not, and eventually help select patients for clinical trials. \begin{acks} This work was supported by FCT through funding of Neuroclinomics2 (PTDC/EEI-SII/1937/2014) and Predict (PTDC/CCI-CIF/ 29877/2017) projects, research grant (SFRH/ BD/95846/2013) to TP and LASIGE Research Unit (UID/CEC/00408/2019). \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Miniaturized Fabry-Perot cavities are based on mirrors that are directly fabricated onto the end facets of optical fibers. They have emerged as a versatile optical resonator platform during the last decade \cite{hunger2010fiber, gallego2016high}. They can provide small mode volumes for high field concentrations in order to enhance light-matter interaction, while at the same time being inherently fiber coupled. Hence, they have quickly evolved into a standard platform to optically interface quantum emitters like atoms \cite{gallego2018strong, colombe2007strong, Macha2020,brekenfeld2020quantum}, ions \cite{steiner2013single,Takahashi2020}, molecules \cite{toninelli2010scanning}, and solid state systems like quantum dots \cite{miguel2013cavity} or NV-centers \cite{albrecht2013coupling}. Furthermore, they have been successfully used in cavity-optomechanical experiments involving a membrane inside the fiber Fabry-Perot cavity (FFPC) \cite{flowers2012fiber}, for sensing of strain \cite{jiang2001simple} and vibration \cite{garcia2010vibration}, for cavity-enhanced microscopy \cite{Mader2015}, and they have been proposed as a fiber-coupled system for optical filtering \cite{ott2016millimeter}. The fiber mirrors that constitute an FFPC are created by laser ablation and a subsequent highly reflective coating of the end facets of an optical fiber \cite{hunger2010fiber,Uphoff_2015}. Assembling an FFPC typically requires an iterative adjustment of the two opposing fiber mirrors, using three translational and two angular degrees of freedom, to achieve optimal cavity alignment \cite{Brandstatter2013, hunger2010fiber}. In order to control cavity birefringence, rotary adjustment of the fibers is also needed \cite{Uphoff_2015,Garcia18}. After alignment, the fibers are glued to their respective holders, e.g. v-grooves.
These are mounted on piezo-electric elements, attached to a common base, to enable tuning of the cavity resonance \cite{gallego2016high,Brandstatter2013}. Such conventional FFPC realizations easily pick up low-frequency acoustic noise due to the large distance of the fibers from the common base. Moreover, fiber tips protruding beyond their holders into free space introduce additional noise due to bending modes. In order to stabilize these cavity systems, an electronic locking scheme with feedback bandwidths of the order of several tens of kHz is required \cite{gallego2016high,janitz2017high,brachmann2016photothermal}. An alternative implementation, where the fiber mirrors are fixed inside a common glass ferrule, was demonstrated in \cite{gallego2016high}. This reduces the complexity of the assembly process and at the same time increases the passive stability. The monolithic FFPC in \cite{gallego2016high}, however, had a small scan range via slow thermal tuning only, and hence was limited in its applicability. Here we present the fabrication, the characterization of the optical properties, and the locking characteristics of three monolithic FFPCs. The FFPC devices use slotted glass ferrules glued to a piezoelectric element for fast scanning over the entire free spectral range of the resonators. The fiber mirrors are inherently aligned by the guide provided by the glass ferrule. Due to the high passive stability, feedback bandwidths as low as $\SI{20}{\milli\hertz}$ are sufficient to lock the cavity resonance to an external laser under laboratory conditions. Fast piezo tuning allows for feedback bandwidths up to $\SI{27}{\kilo\hertz}$ for tight stabilization of the cavity resonance. \section{FFPC design and fabrication} In Fig.~\ref{fig:generalGeometry} we show the geometry of three FFPC devices based on glass ferrules.
These commercially available ferrules ($\SI{8}{\milli \meter}\times \SI{1.25}{\milli \meter} \times\SI{1.25}{\milli \meter}$) are made out of fused silica with a nominal bore diameter of $\SI{131}{\micro\meter}$. The different slotting patterns of the ferrules are cut using a diamond-plated wire saw. Complete slots are finished only after gluing the ferrule onto the piezo-element to maintain the precise alignment of the bores of the segmented blocks (for details see Suppl. Sec.\,1). The electrodes of the piezo-element are connected to copper wires by means of a conductive glue. \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure01_geometry.pdf} \caption{Designs and components of the three FFPCs. The optical cavities are formed by concave dielectric mirrors on the opposing end-facets of the optical fibers (diameters exaggerated by a factor of 2) at the center of the structures. The fibers are glued into the glass ferrules, which are in turn glued to piezoelectric elements for tuning the cavity resonances. } \label{fig:generalGeometry} \end{figure} In the next step, the optical cavity is formed by inserting the fiber mirrors into the ferrule. The opposing end facets are placed at the center area of the ferrule to form a cavity with length $L_\text{cavity}$ according to the desired free spectral range $\nu_\text{FSR} = c/(2\, L_\text{cavity})$ with $c$ the speed of light. In the half and triple slot FFPCs the central slot does not fully cut through the bore of the ferrule, such that the fiber tips rest at the base of the bore as shown in the inset of Fig.~\ref{fig:generalGeometry}. In the full slot design, the slotting gap is kept narrow ($\approx\SI{250}{\micro\meter}$) such that there is only a small protrusion of the fiber tips into free space. This constrains the bending motion of the fiber tips.
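The relation between cavity length and free spectral range used above is easily checked numerically. A minimal sketch (function names are illustrative, not from the paper):

```python
# Free spectral range of a Fabry-Perot cavity: nu_FSR = c / (2 L_cavity).
C = 299_792_458.0  # speed of light (m/s)

def fsr_hz(cavity_length_m: float) -> float:
    """Free spectral range in Hz for a given cavity length."""
    return C / (2.0 * cavity_length_m)

def length_for_fsr(fsr_target_hz: float) -> float:
    """Cavity length needed to reach a target free spectral range."""
    return C / (2.0 * fsr_target_hz)

# The ~93 um half slot cavity discussed later in the text:
print(fsr_hz(93e-6) / 1e12)  # ~1.61 (THz)
```

The round trip through `length_for_fsr(fsr_hz(L))` recovers the input length, which is a convenient self-check when planning a cavity for a desired FSR.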
To find the optical resonance of the cavity, one of the fiber mirrors is scanned using a piezo-driven translation stage while observing the cavity reflection of an incident probe laser. The birefringence of the cavity, due to ellipticity of the mirrors, is reduced by rotating one of the fibers about the cavity axis \cite{Garcia18}. The guide provided by the ferrule significantly simplifies the cavity alignment by requiring only one translation and one rotation degree of freedom, as long as the fiber mirrors show only small decentrations from the fiber axis. In the last step, the fibers are glued into the ferrule, while keeping the cavity approximately resonant with a laser at the target wavelength. A small amount of UV-curable glue is applied to the fiber where it enters the ferrule. Capillary forces let the glue flow into the space between fiber and ferrule wall, where it is subsequently hardened by UV-light illumination. The piezo-element is a rectangular ceramic block of dimensions $\SI{10}{\milli\meter}\times\SI{1}{\milli\meter}\times\SI{1}{\milli\meter}$ such that it fits the ferrule (Fig.~\ref{fig:generalGeometry}). Applying a voltage to the piezo-element causes a longitudinal displacement of the fiber end facets with respect to each other. The resulting length change $\Delta L_\text{cavity}$ tunes the optical resonance frequencies ($\Delta\nu_\text{scan} = 2\, \nu_\text{FSR} \, \Delta L_\text{cavity} / \lambda$). For fully slotted designs, the tuning range depends on the spacing of the glue points on the piezo element, and the length tuning is approximately given by the expansion of the unloaded piezo-element between the glue points. The distance between the relevant glue contact points for the full (triple) slot design is $\approx\SI{2.7}{\milli\meter}$ ($\SI{7.4}{\milli\meter}$), translating into a length variation of $\Delta L_\text{cavity}\approx\SI{0.5}{\micro\meter}$ ($\SI{1.5}{\micro\meter}$) for a $\SI{1}{\kilo\volt_{pp}}$ piezo voltage range.
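The tuning relation $\Delta\nu_\text{scan} = 2\,\nu_\text{FSR}\,\Delta L_\text{cavity}/\lambda$ quoted above maps the piezo-induced length change onto a frequency scan range. A short sketch using the approximate full slot numbers from the text (these are the quoted estimates, not new measurements):

```python
# Frequency scan range from a cavity length change:
# delta_nu = 2 * nu_FSR * delta_L / lambda.
LAMBDA = 780e-9  # probe wavelength (m)

def scan_range_hz(nu_fsr_hz: float, delta_l_m: float,
                  wavelength_m: float = LAMBDA) -> float:
    """Optical frequency scan range for a given cavity length change."""
    return 2.0 * nu_fsr_hz * delta_l_m / wavelength_m

# Full slot design: ~0.5 um length change at 1 kVpp, FSR ~1.62 THz.
print(scan_range_hz(1.62e12, 0.5e-6) / 1e12)  # ~2.08 (THz)
```

Note that a length change of $\lambda/2$ shifts the resonance by exactly one FSR, so the estimated $\approx 2.1\,\mathrm{THz}$ is consistent in magnitude with the measured $2.27\,\mathrm{THz}$ scan range in Tab.~\ref{tab:ffpcOverview}.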
The half slot design shows the smallest tuning range since the length change of the cavity is mediated by the elastic deformation of the stiff ferrule. The measured tuning ranges for the three FFPCs are listed in Tab.~\ref{tab:ffpcOverview}. \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure02_measSetup.pdf} \caption{Setup for characterizing FFPCs. The reflected and transmitted laser light power from the FFPC is monitored by the two photodiodes (PD) while the FFPC length $L_\text{cavity}$ is scanned. The waveplates ($\lambda/2$ and $\lambda/4$) before the input fiber are used to investigate the polarization mode splitting of the cavity. The calibration of scan time $t_\text{scan}$ to frequency is achieved by modulating sidebands onto the laser tone using an electro-optic modulator (EOM). BS represents a non-polarizing beam splitter. (sketch uses \cite{componenentLibraryInkscape})} \label{fig:measSetup} \end{figure} \section{Optical characterization} For the characterization of the completed FFPCs, the light from a $\SI{780}{\nano\meter}$ narrow linewidth ($\sim \SI{200}{\kilo \hertz}$) laser is sent through an electro-optic modulator (EOM) that is driven at $\SI{250}{\mega\hertz}$ to add sidebands to the laser carrier frequency (see Fig.~\ref{fig:measSetup}). A combination of half- and quarter-wave plates is used to control the polarization state of the light before being coupled into the optical fiber that leads to the FFPC. The reflected (transmitted) light from the cavity is directed onto a photodiode PD$_\text{reflected}$ (PD$_\text{transmitted}$). The cavity length is scanned by driving the piezo-element. To measure the cavity linewidth, the EOM-generated sidebands are used as frequency markers (see inset in Fig.~\ref{fig:measSetup}). 
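The time-to-frequency calibration mentioned in the caption works because the carrier and its sidebands appear in the scan trace at a known frequency separation, the $\SI{250}{\mega\hertz}$ EOM drive. A hedged sketch with hypothetical peak times extracted from such a trace:

```python
# Calibrating a piezo scan from time to frequency: the carrier and an
# EOM sideband are separated by the known modulation frequency, which
# fixes the Hz-per-second conversion factor of the scan.
F_MOD = 250e6  # EOM drive frequency (Hz)

def hz_per_second(t_carrier: float, t_sideband: float,
                  f_mod: float = F_MOD) -> float:
    """Scan-speed calibration from carrier/sideband time separation."""
    return f_mod / abs(t_sideband - t_carrier)

# Hypothetical peak times (seconds) read off a scan trace:
cal = hz_per_second(t_carrier=1.00e-3, t_sideband=1.25e-3)
print(cal)  # ~1e12 Hz of detuning per second of scan, for these example times
```

With this factor, any time interval in the scan (e.g. the FWHM of the reflection dip) converts directly to a frequency interval.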
\begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure03_optChar.pdf} \caption{(a) Reflected and transmitted power fraction in an exemplary cavity scan of the half slot FFPC with Lorentzian and dispersive fit \cite{gallego2016high}. (b) Scan of the full free spectral range (FSR) for the full slot and triple slot FFPCs. Detailed FFPC properties are listed in Tab.~\ref{tab:ffpcOverview}.} \label{fig:opticalScans} \end{figure} The fiber mirrors used here have a transmission of $T\approx\SI{15}{ppm}$ and absorption of $A\approx\SI{25}{ppm}$ of the dielectric mirror coating. The concave mirror shape on the fiber end facet was created via \chem{CO_2}-laser ablation with a radius of curvature of $\approx \SI{180}{\micro\meter}$. To experimentally determine the linewidth and finesse of the FFPCs, the measured reflection signal is fitted using a sum of a Lorentzian and its corresponding dispersive function \cite{gallego2016high,Bick2016} as shown in Fig.~\ref{fig:opticalScans}~(a). In this example of the half slot FFPC, the free spectral range of the $\SI{93}{\micro\meter}$ long cavity is $\nu_\text{FSR} = c/(2 L_\text{cavity}) \sim \SI{1.61}{\tera\hertz}$, yielding a maximum finesse of $\mathcal{F}\approx \SI{93e3}{}$ from the extracted linewidth of $\kappa/2\pi = (17.34 \pm 0.014)\, \SI{}{\mega\hertz}$. The finesse, the FSR and the linewidths for the three FFPCs are listed in Tab.~\ref{tab:ffpcOverview}. While the half slot FFPC shows a moderate tunability of $\SI{0.13}{\giga\hertz/\volt}$ due to the stiff ferrule geometry, the full and triple slot FFPCs exhibit large tunabilities of $\SI{2.27}{\giga\hertz/\volt}$ and $\SI{6.1}{\giga\hertz/\volt}$, respectively, enabling full FSR scans at moderate voltages as shown in Fig.~\ref{fig:opticalScans}~(b).
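The finesse follows from the free spectral range and the fitted linewidth as $\mathcal{F} = \nu_\text{FSR}/\kappa$. A quick consistency check with the half slot numbers quoted above:

```python
def finesse(nu_fsr_hz: float, linewidth_fwhm_hz: float) -> float:
    """Cavity finesse from free spectral range and FWHM linewidth."""
    return nu_fsr_hz / linewidth_fwhm_hz

# Half slot FFPC: FSR ~1.61 THz, linewidth kappa/2pi ~17.34 MHz.
print(finesse(1.61e12, 17.34e6))  # ~9.3e4, matching the quoted finesse
```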
Apart from the piezo-electric tuning, the optical cavity resonance is also sensitive to the ambient temperature \cite{gallego2016high}, which can be used for tuning by $\approx\SI{8}{\giga\hertz/\kelvin}$ (Suppl. Sec.\,5). \begin{table}[ht] \caption{Overview of the optical and locking characteristics of the three FFPCs} \begin{center} \begin{threeparttable} \begin{tabular}{ |p{3.9cm}|p{2.5cm}|p{2.5cm}|p{2.5cm}|} \hline Property & Half slot FFPC & Full slot FFPC & Triple slot FFPC \\ \hline Finesse & 93000 &61000 &99000\\ Linewidth (FWHM) & $\SI{17}{\mega\hertz}$ & $\SI{27}{\mega\hertz}$ & $\SI{16}{\mega\hertz}$\\ FSR & $\SI{1.61}{\tera\hertz}$ & $\SI{1.62}{\tera\hertz}$ & $\SI{1.61}{\tera\hertz}$\\ Scan range$^*$ & $\SI{0.13}{\tera\hertz}$ & $\SI{2.27}{\tera\hertz}$ & $\SI{6.10}{\tera\hertz}$ \\ Max. locking bandwidth & $\SI{20}{\kilo\hertz}$ & $\SI{25}{\kilo\hertz}$ & $\SI{27}{\kilo\hertz}$ \\ Mechanical resonance $^{\#}$ & $\SI{56}{\kilo\hertz}$ & $\SI{34}{\kilo\hertz}$ & $\SI{32}{\kilo\hertz}$ \\ Min. locking bandwidth & $\SI{20}{\milli\hertz}$ & $\SI{65}{\milli\hertz}$ & $\SI{110}{\milli\hertz}$ \\ Locked freq. noise (rms)$^{\S}$ & $\SI{0.37}{\mega\hertz}$ & $\SI{0.40}{\mega\hertz}$ & $\SI{0.64}{\mega\hertz}$ \\ \hline \end{tabular} \begin{tablenotes} \item[$^*$] $\SI{1}{\kilo\volt_{pp}}$\,;\,\,\,\, $^{\#}$ Lowest frequency mode\,;\,\,\,\,$^{\S}$ Integrated for $\SI{10}{\hertz}-\SI{1}{\mega\hertz}$ \end{tablenotes} \end{threeparttable} \end{center} \label{tab:ffpcOverview} \end{table} \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure04_maxLBW.pdf} \caption{(a) Schematic of the PDH-locking setup for investigating the feedback bandwidth and stability of monolithic FFPCs. The closed feedback system consists of the FFPC device $(S)$, the PDH mixer setup $(M)$, and the feedback controller $(C)$. The input to the PI-controller is the PDH error signal $e$. The output voltage $u$ is applied to the piezo. 
The gain of the PI-controller can be adjusted to explore different locking bandwidths. To measure the frequency response of the closed-loop circuit, a frequency-sweep signal $d$ from the electronic network analyzer (ENA) can be added to $e$. (b) The plots show the magnitude and phase of the full system ($CSM$) transfer function for the maximum achieved bandwidths (dash-dotted vertical lines) of the three designs listed in Tab.~\ref{tab:ffpcOverview}. Sketch uses \cite{componenentLibraryInkscape}.} \label{fig:lockingBandwidth} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure05_minLBW3.pdf} \caption{(a) Magnitude of the closed-loop gain ($CSM(\nu)$) for the three FFPCs with small locking bandwidths (LBW). The intersections of the measured transfer functions with the unity gain line are the values corresponding to the low LBW. (b) Measurements of the frequency noise spectral density $S_\nu$ for the triple slot FFPC for three different LBWs. The off-resonance noise curve corresponds to the detection noise limit, measured when the cavity is unlocked and far-off resonance. } \label{fig:lowlocking_bandwidth} \end{figure} \section{Cavity locking} The passive and active stability of the three FFPCs is analysed by investigating the closed-loop locking characteristics, where locking refers to an active stabilization of a cavity resonance to a narrow-linewidth ($\sim \SI{200}{\kilo\hertz}$) laser. We use the standard Pound-Drever-Hall (PDH) locking technique \cite{drever1983laser} with the EOM-generated sidebands. The schematic of the locking setup is shown in Fig.~\ref{fig:lockingBandwidth} (a). The PDH error-signal $e$ is retrieved from the rf mixer by down-converting the photodiode signal $r$ with the $\SI{250}{\mega\hertz}$ reference signal. The error signal is fed into a PI-controller which drives the piezo attached to the cavity assembly.
The PI-controller consists of a variable-gain proportional (P) and an integrator (I) control system. The output of the PI-controller can optionally be low-pass filtered by adding a resistor in series with the piezo. We analyse the locking performance of the FFPCs based on techniques used in \cite{Reinhardt:17,janitz2017high}. The frequency-dependent gains (transfer functions) of the components in the servo-loop, as depicted in Fig.~\ref{fig:lockingBandwidth} (a), are: $C$ -- the PI-controller including the optional low-pass filter, $S$ -- the cavity assembly, and $M$ -- the measurement setup consisting of photodiode and mixer. To retrieve the loop gain $CSM(\nu)$, an electronic network analyzer (ENA) is used to add a frequency-swept external disturbance $d$ to the input of the PI-controller, while the error signal is monitored. The loop gain $CSM(\nu)$ is deduced from the measured closed-loop response of the error signal as follows \begin{equation} CSM(\nu) = - \frac{A}{A+1}\,,\,\, A = \frac{e}{d}. \end{equation} We directly obtain $A$ from the network analyzer by monitoring the spectrum of the error signal $e$ (Fig.~\ref{fig:lockingBandwidth} (a)). The closed-loop locking bandwidth (LBW) is given by the unity-gain frequency, i.e.\ the lowest frequency for which $\left|CSM(\nu)\right|=1$. This frequency is the upper limit up to which the PI-controller can exert effective feedback against cavity resonance drifts. By changing the gain settings of the PI-controller, this frequency can be adjusted in order to realize different locking bandwidths. \subsection{Maximum locking bandwidth} To find the maximum achievable LBW of the FFPC devices, the I-gain of the PI-controller is set to the highest value which allows for stable locking. As shown in Fig.~\ref{fig:lockingBandwidth} (b), the maximum LBWs for all three FFPCs are a few tens of kHz, limited by mechanical resonances of the FFPC assembly (see Sect.\,4.4).
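The loop-gain reconstruction $CSM = -A/(A+1)$ and the unity-gain criterion for the LBW can be sketched as follows; the example is exercised on a synthetic integrator-type loop rather than on measured ENA data:

```python
import numpy as np

def loop_gain(a_closed):
    """Open-loop gain CSM(nu) from the measured closed-loop response
    A = e/d, via CSM = -A / (A + 1)."""
    a = np.asarray(a_closed, dtype=complex)
    return -a / (a + 1.0)

def unity_gain_freq(freqs, csm):
    """Lowest frequency with |CSM| < 1, i.e. the locking bandwidth,
    assuming the gain falls monotonically with frequency."""
    return freqs[np.argmax(np.abs(csm) < 1.0)]

# Synthetic check: pure-integrator loop with |CSM| = f0/f, f0 = 20 kHz.
f = np.logspace(2, 6, 400)             # 100 Hz ... 1 MHz
csm_true = 20e3 / (1j * f)
a_meas = -csm_true / (csm_true + 1.0)  # what the ENA would record
csm_rec = loop_gain(a_meas)            # recovers csm_true exactly
print(unity_gain_freq(f, csm_rec))     # close to 20 kHz
```

Inverting the relation ($A = -CSM/(CSM+1)$) and feeding it back through `loop_gain` reproduces the assumed open-loop gain, which confirms the algebra used in the equation above.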
The rms frequency fluctuation for the maximum LBW is similar to the one given in Tab.~\ref{tab:ffpcOverview}. \subsection{Stability of the FFPCs} For quantitatively characterizing the high passive stability, we extract a value for the minimum locking bandwidth of the FFPCs. For this purpose, the I-gain of the PI-controller is set to the lowest possible value which still allows stable cavity locking. We found that all three cavities can be locked for many hours under laboratory conditions without any acoustic isolation, even for LBWs at the sub-hertz level. Fig.~\ref{fig:lowlocking_bandwidth} (a) shows the plots of $\left|CSM(\nu)\right|$ vs frequency at the lowest gain setting. The LBW is extracted from the intersection of the extrapolated loop gain with the unity gain line. The extracted minimum LBWs for the three FFPCs are given in Tab.~\ref{tab:ffpcOverview}, with the lowest measured value of $\SI{20}{\milli\hertz}$ for the half slot FFPC. As more slots are introduced for achieving larger tunability, the value of the lowest LBW increases successively for full slot and triple slot FFPCs. Therefore, the larger tunability of the FFPC is obtained at the cost of a slight reduction in stability. This is further characterised by the frequency noise spectrum for a locked FFPC as described below. \subsection{Noise-spectral-density analysis} To compare the noise characteristics of the FFPCs under different lock conditions, we measure their frequency noise spectral densities ($S_\nu$), which are derived from the measured noise spectral densities of the error signals. Fig.~\ref{fig:lowlocking_bandwidth} (b) shows $S_\nu$ for the triple slot FFPC at three different locking bandwidths (see Suppl. Sec.\,3 for the other two FFPCs).
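The integrated rms values quoted in Tab.~\ref{tab:ffpcOverview} follow from the frequency noise spectral density as $\nu_\text{rms}=\sqrt{\smash{\int} S_\nu\,\mathrm{d}\nu}$ over the stated band. A minimal numerical sketch; the white-noise input is only a sanity check, not measured data:

```python
import numpy as np

def rms_from_psd(freqs_hz, s_nu, f_lo=10.0, f_hi=1e6):
    """rms frequency noise sqrt(integral of S_nu) over [f_lo, f_hi],
    with S_nu a one-sided spectral density in Hz^2/Hz."""
    f = np.asarray(freqs_hz, dtype=float)
    s = np.asarray(s_nu, dtype=float)
    m = (f >= f_lo) & (f <= f_hi)
    fm, sm = f[m], s[m]
    # trapezoidal integration, written out to stay NumPy-version agnostic
    return float(np.sqrt(np.sum(0.5 * (sm[1:] + sm[:-1]) * np.diff(fm))))

# Sanity check: white noise of 0.4 Hz^2/Hz over the whole 10 Hz - 1 MHz band.
f = np.linspace(10.0, 1e6, 100_000)
print(rms_from_psd(f, np.full_like(f, 0.4)))  # ~632 Hz = sqrt(0.4*(1e6-10))
```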
The integrated rms frequency noise from these measurements amounts to $\nu_\text{rms}=(1.19,0.64,1.28,0.24)\, \SI{}{\mega\hertz}$ for the $\SI{800}{\milli\hertz}$, $\SI{1.7}{\kilo\hertz}$, $\SI{16}{\kilo\hertz}$ locking bandwidths and the off-resonance noise, respectively. The off-resonance noise is measured with an unlocked cavity far away from resonance. Although the locking bandwidth is changed by almost four orders of magnitude, the value of the integrated rms frequency noise remains similar, demonstrating the inherent high passive stability of the FFPCs. The slightly higher noise for the sub-hertz locking bandwidth is due to the uncompensated environmental acoustic noise below $\SI{1}{\kilo\hertz}$, while at an LBW of $\SI{1.7}{\kilo\hertz}$ acoustic noise is suppressed. We have also observed that when the locking bandwidth approaches the mechanical resonances, the rms frequency noise increases again due to the excitation of these resonances. \subsection{Mechanical resonances} The most prominent contribution to the measured frequency noise at higher frequencies is caused by the mechanical resonances of the assemblies starting around $\SI{30}{\kilo\hertz}$, see Fig.~\ref{fig:lowlocking_bandwidth} and Tab.~\ref{tab:ffpcOverview}. Without active damping of these resonances, the minimum achievable noise corresponds to the thermal excitation of these modes at ambient temperature. In order to quantify the thermal noise limit, we performed finite element simulations \cite{COMSOL} of the assembly geometries (see Suppl. Sec.\,4). These are used to extract the displacement fields of the mechanical eigenmodes, their effective masses and optomechanical coupling strengths. Damping was not included in the simulations. The resonances in the recorded noise spectral densities are then attributed to eigenmodes found in the simulation appearing at close-by frequencies.
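An order-of-magnitude feel for the thermal excitation of such a mode can be obtained from equipartition, a simplified stand-in for the full fluctuation-dissipation treatment; all parameter values below (effective mass, mode frequency) are hypothetical illustration numbers, not the simulated ones:

```python
# Equipartition estimate of a thermally excited mechanical mode:
# <x^2> = kB*T / (m_eff * omega_m^2), and a cavity-length fluctuation x
# maps to a resonance-frequency fluctuation delta_nu = nu_opt * x / L.
import math

KB = 1.380649e-23  # Boltzmann constant (J/K)

def thermal_freq_noise_rms(f_mech_hz, m_eff_kg, temp_k,
                           nu_opt_hz, l_cavity_m):
    """rms optical frequency noise from one thermally excited mode."""
    omega = 2.0 * math.pi * f_mech_hz
    x_rms = math.sqrt(KB * temp_k / (m_eff_kg * omega**2))
    return nu_opt_hz * x_rms / l_cavity_m

# Hypothetical numbers: 34 kHz mode, 10 mg effective mass, room
# temperature, 384 THz optical frequency (780 nm), 93 um cavity:
print(thermal_freq_noise_rms(34e3, 10e-6, 295.0, 3.84e14, 93e-6))
```

For these illustrative parameters the estimate lands at a few hundred kHz rms, i.e. the same order as the measured locked-cavity noise, which is why the thermal limit matters here.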
The thermal excitation of the mechanical resonances is calculated from the fluctuation-dissipation theorem using the fitted frequencies and linewidths that parametrize the damping of the modes. Together with the simulated parameters, the expected signal in the frequency noise spectrum \cite{gorodetksy2010determination} can be compared to the measured frequency noise spectral density. An example of this is shown in Fig.~\ref{fig:mechanicalmodes}~(a) for the full slot FFPC. Here, we find five mechanical modes (I-V in Fig.~\ref{fig:mechanicalmodes}~(b)) with substantial coupling to the optical resonance for frequencies up to $\SI{150}{\kilo\hertz}$. The two most prominent modes in the experiment (blue traces), I and V, correspond to a bending and longitudinal stretching motion of the piezo. The calculated thermally induced frequency noise is shown in green in Fig.~\ref{fig:mechanicalmodes}~(a). Since the stretching of the piezo is coupled to the driving voltage by the piezoelectric effect, excess electrical noise from the controller is resonantly coupled to the system at the stretching mode frequency (dark blue curve in Fig.~\ref{fig:mechanicalmodes}~(a)). The excess electrical noise can, however, be removed by inserting a suitable low-pass filter before the piezo (medium blue curve). After inserting the filter, the measured noise is compatible with the theoretically achievable thermal noise limit and the off-resonance detection noise (dashed light blue curve). The frequency noise of the laser does not play a role since it is found to be more than an order of magnitude lower than the detection noise (see Suppl. Sec.\,2). \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure06_modellingMech2.pdf} \caption{Analysis of the measured FFPC frequency noise induced by mechanical resonances of the system for the full slot FFPC.
(a) The measured $S_\nu$ are compared with the laser frequency noise and with the expected noise from thermally excited mechanical resonances of the system. The $S_\nu$ approaches the expected thermal noise limit near the mechanical resonances. Without a low-pass filter, excess electrical noise is coupled to the cavity through the piezoelectric element for some of the modes. (b) Displacement fields of the mechanical resonances included in the model. The two most prominent resonances, I and V, correspond to a bending mode (no piezoelectric coupling) and a longitudinal stretching mode (piezoelectrically coupled) of the structure.} \label{fig:mechanicalmodes} \end{figure} \section{Conclusion} In this paper we have demonstrated a versatile, robust and simple approach for building stable fiber Fabry-Perot cavities with wide frequency tunability and simultaneous high passive stability. The demonstrated high locking bandwidth is possible because the lowest mechanical resonance frequencies of the compact devices lie at a few tens of kHz. Achieving stable locking at tens of mHz feedback bandwidth, which implies that the cavity resonance is stable on a minute time scale for a free-running cavity, and reaching the thermal noise limit at higher frequencies, proves the high passive stability of these devices against slow and fast environmental disturbances. These compact and inherently fiber-coupled cavities can be readily implemented in various applications like cavity-based spectroscopy of gases, tunable optical filters, and cavity quantum electrodynamics experiments, which all benefit from highly stable optical resonators. By incorporating mode-matched \cite{gulati2017fiber} and millimeter-long FFPCs \cite{ott2016millimeter} in our compact devices, stable resonators with linewidths below $\SI{1}{\mega\hertz}$ can be achieved.
This will enable the realization of compact gas sensors for airborne applications as well as miniaturized cavity-enhanced vapor-based light-matter interfaces. Using ferrules made of ultra-low-expansion glass and cryogenically cooled FFPCs, one can envision miniaturized and portable optical oscillators with extremely high short-term stability. \section*{Acknowledgements} The authors thank Stephan Kucera for valuable discussions. This work was supported by the Bundesministerium f\"ur Bildung und Forschung (BMBF), projects Q.Link.X and FaResQ. C.S. is supported by a national scholarship from CONACYT, México. W.A. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769.
Furthermore, they have been successfully used in cavity-optomechanical experiments involving a membrane inside the fiber Fabry-Perot cavity (FFPC) \cite{flowers2012fiber}, for sensing of strain \cite{jiang2001simple} and vibration \cite{garcia2010vibration}, for cavity-enhanced microscopy \cite{Mader2015}, and they have been proposed as a fiber-coupled system for optical filtering \cite{ott2016millimeter}. The fiber mirrors that constitute an FFPC are created by laser ablation and a subsequent high-reflective coating of the end-facets of an optical fiber\cite{hunger2010fiber,Uphoff_2015}. Assembling an FFPC typically requires an iterative adjustment of the two opposing fiber mirrors, using three translation and two angular degrees of freedom, to achieve optimal cavity alignment \cite{Brandstatter2013, hunger2010fiber}. In order to control cavity birefringence also rotary adjustment of the fibers is needed\cite{Uphoff_2015,Garcia18}. After alignment, the fibers are glued to their respective holders, e.g. v-grooves. These are mounted on piezo-electric elements, attached to a common base, to enable tuning of the cavity resonance \cite{gallego2016high,Brandstatter2013}. Such conventional FFPC realizations easily pick up low frequency acoustic noise due to the large distance of the fibers from the common base. Moreover, fiber tips protruding beyond their holders into the free space introduce additional noise due to bending modes. In order to stabilize these cavity systems an electronic locking scheme with feedback bandwidths of the order of several tens of kHz is required \cite{gallego2016high,janitz2017high,brachmann2016photothermal}. An alternative implementation, where the fiber mirrors are fixed inside a common glass ferrule, was demonstrated in \cite{gallego2016high}. This reduces the complexity of the assembling process and at the same time increases the passive stability. 
The monolithic FFPC in \cite{gallego2016high}, however, had a small scan range via slow thermal tuning only, and hence was limited in its applicability. Here we present the fabrication, characterization of optical properties, and locking characteristics of three monolithic FFPCs. The FFPC devices use slotted glass ferrules glued to a piezoelectric element for fast scanning over the entire free-spectral range of the resonators. The fiber mirrors are inherently aligned by the guide provided from the glass ferrule. Due to the high passive stability, feedback bandwidths as low as $\SI{20}{m\hertz}$ are sufficient to lock the cavity resonance to an external laser under laboratory conditions. Fast piezo tuning allows for feedback bandwidths up to $\SI{27}{\kilo\hertz}$ for tight stabilization of the cavity resonance. \section{FFPC design and fabrication} In Fig.~\ref{fig:generalGeometry} we show the geometry of three FFPC devices based on glass ferrules. These commercially available ferrules ($\SI{8}{\milli \meter}\times \SI{1.25}{\milli \meter} \times\SI{1.25}{\milli \meter}$) are made out of fused silica with $\SI{131}{\micro\meter}$ nominal inner diameter of the bore. The different slotting patterns of the ferrules are cut using a diamond-plated wire saw. Complete slots are finished only after gluing the ferrule onto the piezo-element to maintain the precise alignment of the bores of the segmented blocks (details see Suppl. Sec.\,1). The electrodes of the piezo-element are connected to copper wires by means of a conductive glue. \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure01_geometry.pdf} \caption{Designs and components of the three FFPCs. The optical cavities are formed by concave dielectric mirrors on the opposing end-facets of the optical fibers (diameters exaggerated by a factor of 2) at the center of the structures. 
The fibers are glued into the glass ferrules, which are in turn glued to piezoelectric elements for tuning the cavity resonances. } \label{fig:generalGeometry} \end{figure} In the next step, the optical cavity is formed by inserting the fiber mirrors into the ferrule. The opposing end facets are placed at the center area of the ferrule to form a cavity with length $L_\text{cavity}$ according to the desired free spectral range $\nu_\text{FSR} = c/2\, L_\text{cavity}$ with $c$ the speed of light. In the half and triple slot FFPC the central slot does not fully cut through the bore of the ferrule, such that the fiber tips rest at the base of the bore as shown in the inset of Fig.~\ref{fig:generalGeometry}. In the full slot design, the slotting gap is kept narrow ($\approx\SI{250}{\micro\meter}$) such that there is only a small protrusion of the fiber tips into the free space. This constrains the bending motion of the fiber tips. To find the optical resonance of the cavity, one of the fiber mirrors is scanned using a piezo-driven translation stage while observing the cavity reflection of an incident probe laser. The birefringence of the cavity, due to ellipticity of the mirrors, is reduced by rotating one of the fibers about the cavity axis \cite{Garcia18}. The guide provided by the ferrule significantly simplifies the cavity alignment by requiring only one translation and one rotation degree of freedom, as long as the used fiber mirrors only show small decentrations from the fiber axis. In the last step, the fibers are glued into the ferrule, while keeping the cavity approximately resonant with a laser at the target wavelength. A small amount of UV-curable glue is applied to the fiber where it enters the ferrule. Capillary forces let the glue flow into the space between fiber and ferrule wall, where it is subsequently hardened by UV-light illumination. 
The piezo-element is a rectangular ceramic block of dimensions $\SI{10}{\milli\meter}\times\SI{1}{\milli\meter}\times\SI{1}{\milli\meter}$ such that it fits the ferrule (Fig.~\ref{fig:generalGeometry}). Applying a voltage to the piezo-element causes a longitudinal displacement of the fiber end facets with respect to each other. The resulting length change $\Delta L_\text{cavity}$, tunes the optical resonance frequencies ($\Delta\nu_\text{scan} = 2\, \nu_\text{FSR} \, \Delta L_\text{cavity} / \lambda$). For fully slotted designs, the tuning range depends on the spacing of the glue points on the piezo element and the length tuning is approximately given by the expansion of the unloaded piezo-element between the glue points. The distance between relevant glue contact points for the full (triple) slot design is $\approx\SI{2.7}{\milli\meter}$ ($\SI{7.4}{\milli\meter}$), translating into a length variation of $\Delta L_\text{cavity}$ $\approx\SI{0.5}{\micro\meter}$ ($\SI{1.5}{\micro\meter}$) for a $\SI{1}{\kilo\volt_{pp}}$ piezo voltage range. The half slot design shows the smallest tuning range since the length change of the cavity is mediated by the elastic deformation of the stiff ferrule. The measured tuning ranges for the three FFPCs are listed in Tab.~\ref{tab:ffpcOverview}. \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure02_measSetup.pdf} \caption{Setup for characterizing FFPCs. The reflected and transmitted laser light power from the FFPC is monitored by the two photodiodes (PD) while the FFPC length $L_\text{cavity}$ is scanned. The waveplates ($\lambda/2$ and $\lambda/4$) before the input fiber are used to investigate the polarization mode splitting of the cavity. The calibration of scan time $t_\text{scan}$ to frequency is achieved by modulating sidebands onto the laser tone using an electro-optic modulator (EOM). BS represents a non-polarizing beam splitter. 
(sketch uses \cite{componenentLibraryInkscape})} \label{fig:measSetup} \end{figure} \section{Optical characterization} For the characterization of the completed FFPCs, the light from a $\SI{780}{\nano\meter}$ narrow linewidth ($\sim \SI{200}{\kilo \hertz}$) laser is sent through an electro-optic modulator (EOM) that is driven at $\SI{250}{\mega\hertz}$ to add sidebands to the laser carrier frequency (see Fig.~\ref{fig:measSetup}). A combination of half- and quarter-wave plates is used to control the polarization state of the light before being coupled into the optical fiber that leads to the FFPC. The reflected (transmitted) light from the cavity is directed onto a photodiode PD$_\text{reflected}$ (PD$_\text{transmitted}$). The cavity length is scanned by driving the piezo-element. To measure the cavity linewidth, the EOM-generated sidebands are used as frequency markers (see inset in Fig.~\ref{fig:measSetup}). \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure03_optChar.pdf} \caption{(a) Reflected and transmitted power fraction in an exemplary cavity scan of the half slot FFPC with Lorentzian and dispersive fit \cite{gallego2016high}. (b) Scan of the full free spectral range (FSR) for full slot and triple slots FFPCs. Detailed FFPC properties are listed in Tab.~\ref{tab:ffpcOverview}.} \label{fig:opticalScans} \end{figure} The fiber mirrors used here have a transmission of $T\approx\SI{15}{ppm}$ and absorption of $A\approx\SI{25}{ppm}$ of the dielectric mirror coating. The concave mirror shape on the fiber end facet was created via \chem{CO_2}-laser ablation with a radius of curvature of $\approx \SI{180}{\micro\meter}$. To experimentally determine the linewith and finesse of the FFPCs, the measured reflection signal is fitted using a sum of a Lorentzian and its corresponding dispersive function \cite{gallego2016high,Bick2016} as shown in Fig.~\ref{fig:opticalScans}~(a). 
In this example of the half slot FFPC, the free spectral range of the $\SI{93}{\micro\meter}$ long cavity is $\nu_\text{FSR} = c/2L_\text{cavity} \sim \SI{1.61}{\tera\hertz}$ yielding a maximum Finesse of $\mathcal{F}\approx \SI{93e3}{}$ from the extracted linewidth of $\kappa/2\pi = (17.34 \pm 0.014)\, \SI{}{\mega\hertz}$. The finesse, the FSR and the linewidths for the three FFPCs are listed in Tab.~\ref{tab:ffpcOverview}. While the half slot FFPC shows a moderate tunability of $\SI{0.13}{\giga\hertz/\volt}$ due to the stiff ferrule geometry, the full and triple slot FFPCs exhibit large tunabilities of $\SI{2.27}{\giga\hertz/\volt}$ and $\SI{6.1}{\giga\hertz/\volt}$ respectively, enabling full FSR scans at moderate voltages as shown in Fig.~\ref{fig:opticalScans} b. Apart from the piezo-electric tuning, the optical cavity resonance is also sensitive to the ambient temperature \cite{gallego2016high}, which can be used for tuning by $\approx\SI{8}{\giga\hertz/\kelvin}$ (Suppl. Sec.\,5). \begin{table}[ht] \caption{Overview of the optical and locking characteristics of the three FFPCs} \begin{center} \begin{threeparttable} \begin{tabular}{ |p{3.9cm}|p{2.5cm}|p{2.5cm}|p{2.5cm}|} \hline Property & Half slot FFPC & Full slot FFPC & Triple slot FFPC \\ \hline Finesse & 93000 &61000 &99000\\ Linewidth (FWHM) & $\SI{17}{\mega\hertz}$ & $\SI{27}{\mega\hertz}$ & $\SI{16}{\mega\hertz}$\\ FSR & $\SI{1.61}{\tera\hertz}$ & $\SI{1.62}{\tera\hertz}$ & $\SI{1.61}{\tera\hertz}$\\ Scan range$^*$ & $\SI{0.13}{\tera\hertz}$ & $\SI{2.27}{\tera\hertz}$ & $\SI{6.10}{\tera\hertz}$ \\ Max. locking bandwidth & $\SI{20}{\kilo\hertz}$ & $\SI{25}{\kilo\hertz}$ & $\SI{27}{\kilo\hertz}$ \\ Mechanical resonance $^{\#}$ & $\SI{56}{\kilo\hertz}$ & $\SI{34}{\kilo\hertz}$ & $\SI{32}{\kilo\hertz}$ \\ Min. locking bandwidth & $\SI{20}{\milli\hertz}$ & $\SI{65}{\milli\hertz}$ & $\SI{110}{\milli\hertz}$ \\ Locked freq. 
noise (rms)$^{\S}$ & $\SI{0.37}{\mega\hertz}$ & $\SI{0.40}{\mega\hertz}$ & $\SI{0.64}{\mega\hertz}$ \\ \hline \end{tabular} \begin{tablenotes} \item[$^*$] $\SI{1}{\kilo\volt_{pp}}$\,;\,\,\,\, $^{\#}$ Lowest frequency mode\,;\,\,\,\,$^{\S}$ Integrated for $\SI{10}{\hertz}-\SI{1}{\mega\hertz}$ \end{tablenotes} \end{threeparttable} \end{center} \label{tab:ffpcOverview} \end{table} \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure04_maxLBW.pdf} \caption{(a) Schematic of the PDH-locking setup for investigating the feedback bandwidth and stability of monolithic FFPCs. The closed feedback system consists of the FFPC device $(S)$, the PDH mixer setup $(M)$, and the feedback controller $(C)$. The input to the PI-controller is the PDH error signal $e$. The output voltage $u$ is applied to the piezo. The gain of the PI-controller can be adjusted to explore different locking bandwidths. To measure the frequency response of the closed-loop circuit a frequency sweep signal $d$ from the electrical network analyser (ENA) can be added to $e$. (b) The plots show the magnitude and phase of the full system ($CSM$) transfer function for the maximum achieved bandwidths (dash-dotted vertical lines) of the three designs listed in Tab.~\ref{tab:ffpcOverview}. Sketch uses \cite{componenentLibraryInkscape}.} \label{fig:lockingBandwidth} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure05_minLBW3.pdf} \caption{(a) Magnitude of the closed-loop-gain ($CSM(\nu)$) for the three FFPCs with small locking bandwidths (LBW). The intersections of the measured transfer functions with the unity gain line are the values corresponding to the low LBW. (b) Measurements of the frequency noise spectral density $S_\nu$ for the triple slot FFPC for three different LBWs. The off resonance noise curve corresponds to the detection noise limit, measured when the cavity is unlocked and far-off resonance. 
} \label{fig:lowlocking_bandwidth} \end{figure} \section{Cavity locking} The passive and active stability of the three FFPCs are analysed by investigating the closed-loop locking characteristics, where locking refers to an active stabilization of a cavity resonance to a narrow linewidth ($\sim 200$ kHz) laser. We use the standard Pound-Drever-Hall (PDH) locking technique \cite{drever1983laser} with the EOM-generated sidebands. The schematic of the locking setup is shown in Fig.~\ref{fig:lockingBandwidth} (a). The PDH error-signal $e$ is retrieved from the rf mixer by down-converting the photodiode signal $r$ with the $\SI{250}{\mega\hertz}$ reference signal. The error signal is fed into a PI-controller which drives the piezo attached to the cavity assembly. The PI-controller consists of a variable-gain proportional (P) and an integrator (I) control system. The output of the PI-controller can optionally be low-pass filtered by adding a resistor in series with the piezo. We analyse the locking performance of the FFPCs based on techniques used in \cite{Reinhardt:17,janitz2017high}. The frequency-dependent gains (transfer functions) of the components in the servo-loop, as depicted in Fig.~\ref{fig:lockingBandwidth} (a), are: $C$ - the PI-controller including the optional low pass filter, $S$ - the cavity-assembly, and $M$ - the measurement setup consisting of photodiode and mixer. To retrieve the loop gain $CSM(\nu)$, an electrical network analyser (ENA) is used to add a frequency-swept external disturbance $d$ to the input of the PI-controller, while the error signal is monitored. The loop gain $CSM(\nu)$ is deduced from the measured closed-loop response of the error signal as follows \begin{equation} CSM(\nu) = - \frac{A}{A+1}\,,\,\, A = \frac{e}{d}. \end{equation} We directly obtain $A$ from the network analyser by monitoring the spectrum of the error signal $e$ (Fig.~\ref{fig:lockingBandwidth} (a)).
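The conversion from the measured closed-loop response $A$ to the loop gain is a single complex-arithmetic step per frequency point; a minimal sketch (function name is ours):

```python
def loop_gain(A: complex) -> complex:
    """Loop gain CSM(nu) deduced from the measured closed-loop
    response A = e/d at one frequency: CSM = -A/(A+1)."""
    return -A / (A + 1.0)

# Solving -A/(A+1) = 1 gives A = -1/2, so a measured response of
# e/d = -1/2 marks the unity-gain (locking bandwidth) frequency:
g = loop_gain(-0.5 + 0j)   # g == (1+0j)
```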
The closed-loop locking bandwidth (LBW) is given by the unity-gain frequency, i.e.\ the lowest frequency for which $\left|CSM(\nu)\right|=1$. This frequency is the upper limit at which the PI-controller can exert an effective feedback against cavity resonance drifts. By changing the gain settings of the PI-controller, this frequency can be adjusted in order to realize different locking bandwidths. \subsection{Maximum locking bandwidth} To find the maximum achievable LBW of the FFPC devices, the I-gain of the PI-controller is set to the highest value which allows for stable locking. As shown in Fig.~\ref{fig:lockingBandwidth} (b), the maximum LBWs for all three FFPCs are a few tens of kHz, limited by mechanical resonances of the FFPC assembly (details in Sect.~4.4). The rms frequency fluctuation for the maximum LBW is similar to the one given in Tab.~\ref{tab:ffpcOverview}. \subsection{Stability of the FFPCs} To quantitatively characterize the high passive stability, we extract a value for the minimum locking bandwidth of the FFPCs. For this purpose, the I-gain of the PI-controller is set to the lowest possible value which still allows a stable cavity locking. We found that all three cavities can be locked for many hours under laboratory conditions without any acoustic isolation, even for LBWs at the sub-Hertz level. Fig.~\ref{fig:lowlocking_bandwidth} (a) shows the plots of $\left|CSM(\nu)\right|$ vs frequency at the lowest gain setting. The LBW is extracted from the intersection of the extrapolated loop gain with the unity gain line. The extracted minimum LBWs for the three FFPCs are given in Tab.~\ref{tab:ffpcOverview}, with the lowest measured value of $\SI{20}{\milli\hertz}$ for the half slot FFPC. As more slots are introduced for achieving larger tunability, the value of the lowest LBW increases successively for the full slot and triple slot FFPCs. Therefore, the larger tunability of the FFPC is obtained at the cost of a slight reduction of the stability.
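Extracting the LBW as the unity-gain crossing of $|CSM(\nu)|$ can be sketched as a log-log interpolation between measured samples (an illustration with our own naming, not the actual analysis code):

```python
import math

def locking_bandwidth(freqs, gains):
    """Lowest frequency where |CSM| crosses unity, found by
    log-log interpolation between adjacent samples (sketch).
    Returns None if no crossing is bracketed by the data."""
    for (f1, g1), (f2, g2) in zip(zip(freqs, gains), zip(freqs[1:], gains[1:])):
        if (g1 - 1.0) * (g2 - 1.0) <= 0.0:    # unity crossing bracketed
            t = (0.0 - math.log(g1)) / (math.log(g2) - math.log(g1))
            return math.exp(math.log(f1) + t * (math.log(f2) - math.log(f1)))
    return None

# Toy data: a 1/f loop gain that crosses unity at 10 Hz.
f = [1.0, 5.0, 20.0, 100.0]
g = [10.0 / fi for fi in f]
lbw = locking_bandwidth(f, g)   # ~10 Hz
```

The log-log interpolation is exact for power-law loop gains, which is why the toy $1/f$ example recovers the crossing frequency.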
This is further characterised by the frequency noise spectrum for a locked FFPC as described below. \subsection{Noise-spectral-density analysis} To compare the noise characteristics of the FFPCs under different lock conditions, we measure their frequency noise spectral densities ($S_\nu$), which are derived from the measured noise spectral densities of the error signals. Fig.~\ref{fig:lowlocking_bandwidth} (b) shows $S_\nu$ for the triple slot FFPC at three different locking bandwidths (see Suppl. Sec.\,3 for the other two FFPCs). The integrated rms frequency noise from these measurements amounts to $\nu_\text{rms}=(1.19,0.64,1.28,0.24)\, \SI{}{\mega\hertz}$ for the $\SI{800}{\milli\hertz}$, $\SI{1.7}{\kilo\hertz}$, $\SI{16}{\kilo\hertz}$ locking bandwidths and the off resonance noise respectively. The off resonance noise is measured with an unlocked cavity far away from resonance. Although the locking bandwidth is changed by almost four orders of magnitude, the value of the integrated rms frequency noise remains similar, demonstrating the inherent high passive stability of the FFPCs. The slightly higher noise for sub-Hertz locking bandwidth is due to the uncompensated environmental acoustic noise below $\SI{1}{\kilo\hertz}$, while at a LBW of $\SI{1.7}{\kilo\hertz}$ acoustic noise is suppressed. We have also observed that when the locking bandwidth approaches the mechanical resonances, the rms frequency noise increases again due to the excitation of these resonances. \subsection{Mechanical resonances} The most prominent contribution to the measured frequency noise at higher frequencies is caused by the mechanical resonances of the assemblies starting around $\SI{30}{\kilo\hertz}$, see Fig.~\ref{fig:lowlocking_bandwidth} and Tab.~\ref{tab:ffpcOverview}. Without active damping of these resonances, the minimum achievable noise corresponds to the thermal excitation of these modes at ambient temperature. 
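The integrated rms values quoted above follow from the measured spectral density via $\nu_\text{rms} = \sqrt{\int S_\nu \, \mathrm{d}\nu}$; a minimal numerical sketch (trapezoidal integration, our own naming):

```python
def rms_from_psd(freqs, S_nu):
    """Integrated rms frequency noise from a one-sided frequency
    noise spectral density, via trapezoidal integration (sketch)."""
    area = 0.0
    for (f1, s1), (f2, s2) in zip(zip(freqs, S_nu), zip(freqs[1:], S_nu[1:])):
        area += 0.5 * (s1 + s2) * (f2 - f1)
    return area ** 0.5

# Sanity check: white noise of 1 Hz^2/Hz over a 1 MHz band
# integrates to nu_rms = 1 kHz.
nu_rms = rms_from_psd([0.0, 1e6], [1.0, 1.0])
```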
In order to quantify the thermal noise limit, we performed finite element simulations \cite{COMSOL} of the assembly geometries (see Suppl. Sec.\,4). These are used to extract the displacement fields of the mechanical eigenmodes, their effective masses and optomechanical coupling strengths. Damping was not included in the simulations. The resonances in the recorded noise spectral densities are then attributed to eigenmodes found in the simulation appearing at nearby frequencies. The thermal excitation of the mechanical resonances is calculated from the fluctuation-dissipation theorem using the fitted frequencies and linewidths that parametrize the damping of the modes. Together with the simulated parameters, the expected signal in the frequency noise spectrum \cite{gorodetksy2010determination} can be compared to the measured frequency noise spectral density. An example of this is shown in Fig.~\ref{fig:mechanicalmodes}~(a) for the full slot FFPC. Here, we find five mechanical modes (I-V in Fig.~\ref{fig:mechanicalmodes}~(b)) with substantial coupling to the optical resonance for frequencies up to $\SI{150}{\kilo\hertz}$. The two most prominent modes in the experiment (blue traces), I and V, correspond to a bending and longitudinal stretching motion of the piezo. The calculated thermally induced frequency noise is shown in green in Fig.~\ref{fig:mechanicalmodes}~(a). Since the stretching of the piezo is coupled to the driving voltage by the piezoelectric effect, excess electrical noise from the controller is resonantly coupled to the system at the stretching mode frequency (dark blue curve in Fig.~\ref{fig:mechanicalmodes}~(a)). The excess electrical noise can however be removed by inserting a suitable low pass filter before the piezo (medium blue curve). After inserting the filter, the measured noise is compatible with the theoretically achievable thermal noise limit and the off-resonance detection noise (dashed light blue curve).
The frequency noise of the laser does not play a role since it is found to be more than an order of magnitude lower than the detection noise (see Suppl. Sec.\,2). \begin{figure}[htbp] \centering \includegraphics[scale=\figureScaling]{figures/figure06_modellingMech2.pdf} \caption{Analysis of the measured FFPC frequency noise induced by mechanical resonances of the system for the full slot FFPC. (a) The measured $S_\nu$ are compared with the laser frequency noise and with the expected noise from thermally excited mechanical resonances of the system. The $S_\nu$ approaches the expected thermal noise limit near the mechanical resonances. Without a low-pass filter, excess electrical noise is coupled to the cavity through the piezoelectric element for some of the modes. (b) Displacement fields of the mechanical resonances included in the model. The two most prominent resonances, I and V, correspond to a bending mode (no piezoelectric coupling) and a longitudinal stretching mode (piezoelectrically coupled) of the structure.} \label{fig:mechanicalmodes} \end{figure} \section{Conclusion} In this paper we have demonstrated a versatile, robust and simple approach for building stable fiber Fabry-Perot cavities with wide frequency tunability and simultaneous high passive stability. The demonstrated high locking bandwidth is made possible by the lowest mechanical resonance frequencies of the compact devices lying at a few tens of kHz. Achieving stable locking at tens of mHz feedback bandwidth, which implies that the cavity resonance is stable on a minute time scale for a free running cavity, and reaching the thermal noise limit at higher frequencies, proves the high passive stability of these devices against slow and fast environmental disturbances.
These compact and inherently fiber coupled cavities can be readily implemented in various applications like cavity-based spectroscopy of gases, tunable optical filters, and cavity quantum electrodynamics experiments, which all benefit from highly stable optical resonators. By incorporating mode-matched \cite{gulati2017fiber} and millimeter-long FFPCs \cite{ott2016millimeter} in our compact devices, stable resonators with linewidths below $\SI{1}{\mega\hertz}$ can be achieved. This will enable the realization of compact gas sensors for airborne applications as well as miniaturized cavity enhanced vapor based light-matter interfaces. Using ferrules made of ultra-low expansion glass and cryogenically cooling the FFPCs, one can envision miniaturized and portable optical oscillators with extremely high short term stability. \section*{Acknowledgements} The authors thank Stephan Kucera for valuable discussions. This work was supported by the Bundesministerium f\"ur Bildung und Forschung (BMBF), projects Q.Link.X and FaResQ. C.S. is supported by a national scholarship from CONACYT, México. W.A. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769.
\section{Introduction} \label{sec: intro} The aim of the pure exploration, stochastic, multi-armed bandit (MAB) problem is to identify, via exploration, the optimal arm among a given basket of arms. Here, each arm is associated with an a priori unknown probability distribution, and the optimal arm is classically defined as one that optimizes a certain attribute associated with its distribution (for example, the mean). However, in practical applications, there is rarely just a single arm attribute that is of interest. For example, in clinical trials, one might be interested in not just the efficacy of a treatment protocol, but also its cost and the severity of its side effects. In portfolio optimization, one is interested in not just the expected return of a candidate portfolio, but also the associated variability/risk. The classical approach in the MAB literature for handling multiple constraints is to combine them into a single objective, often via a linear combination \citep{vakili2016,kagrecha2019}. For example, in portfolio optimization, the optimization of a linear combination of expected return and its variance is often recommended \citep{sani2012}. However, the main drawback of this approach is that there is typically no sound way of determining the weights for this linear combination. After all, can one equate the `value' of a unit decrease in expected return of a portfolio to the `value' of a unit decrease in the return variance in a scale-free manner? Given that the mean-variance landscape across the arms is a priori unknown, a certain choice of arm objective might result in the `optimal' arm having either an unacceptably low expected return, or an unacceptably high variability. An alternative approach for handling multiple arm attributes is to pose the choice of optimal arm as a constrained optimization problem.
Specifically, the optimal arm is defined as the one that optimizes a certain attribute, subject to constraints on other attributes of interest. This avoids the `apples to oranges' translation required in order to combine multiple attributes into a single objective. Returning to our portfolio optimization example, this approach would (potentially) define the optimal arm/portfolio as the one that optimizes expected return subject to a prescribed risk appetite. In this paper, we analyse such a constrained stochastic MAB formulation, in the fixed budget pure exploration setting. Specifically, each arm is associated with a (potentially multi-dimensional) probability distribution. We consider two attributes, both of which are functions of the arm distribution. The optimal arm is then defined as one that minimizes one attribute (henceforth referred to as the \emph{objective attribute}), subject to a prescribed constraint on the other attribute (henceforth referred to as the \emph{constraint attribute}).\footnote{We consider only a single constraint attribute in this paper. The generalization to multiple constraint attributes is straightforward, but cumbersome.} Crucially, we make no limiting assumptions on the class of arm distributions, or on the specific attributes considered. Instead, we simply assume that the arm attributes can be estimated from samples obtained from arm pulls, with reasonable concentration guarantees (details in Section~\ref{sec: prob}). While the unconstrained (single attribute) pure exploration MAB formulation is well studied in the fixed budget setting, the algorithms and lower bounds for this case do not generalize easily to the constrained formulation described above.
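For concreteness, the definition of the optimal arm (and of an infeasible instance) can be sketched in a few lines (illustrative; the function `optimal_arm` and its signature are ours):

```python
def optimal_arm(mu1, mu2, tau):
    """Constrained-optimal arm: minimize the objective attribute mu1
    over arms whose constraint attribute mu2 meets the threshold tau.
    Returns (arm_index, feasible_flag). Illustrative sketch only."""
    feasible = [i for i in range(len(mu1)) if mu2[i] <= tau]
    if feasible:
        return min(feasible, key=lambda i: mu1[i]), True
    # No arm meets the constraint: the instance itself is infeasible;
    # as a natural fallback, report the least-infeasible arm.
    return min(range(len(mu1)), key=lambda i: mu2[i]), False

# Toy instance: arms 0 and 2 are feasible, arm 0 has the lower objective.
arm, ok = optimal_arm(mu1=[0.3, 0.1, 0.5], mu2=[0.4, 0.9, 0.2], tau=0.5)
```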
For example, the best known algorithms for the unconstrained case divide the exploration budget into phases, and eliminate/reject one or more arms at the end of each phase (for example, the \emph{Successive Rejects} algorithm by~\cite{audibert-bubeck}, and the \emph{Sequential Halving} algorithm by \cite{Karnin2013}). The last surviving arm is then flagged as optimal. While the exact `rejection schedule' differs across state of the art algorithms, the decision on which arm(s) to reject is itself straightforward, given a single, scalar arm attribute. However, in the presence of multiple attributes, the decision on which arm(s) to reject is non-trivial, given that estimates of both attributes of each surviving arm must be taken into consideration. A naive strategy is to focus on first rejecting the arms that appear `infeasible' (i.e., arms whose constraint attribute estimates violate the prescribed threshold), and then reject those arms that appear `feasible but suboptimal.' However, this strategy can be far from optimal (see Section~\ref{sec: numerics}). Instead, the approach we propose exploits an information theoretic lower bound for two-armed instances. Specifically, this lower bound motivates the definition of certain suboptimality gaps between pairs of arms. We then reject arms sequentially based on empirical estimates of these pairwise gaps, along with a specific tie-breaking rule. This novel approach, which we formalize as the \textsc{Constrained-SR}\ algorithm, is the main contribution of this paper. This paper is organized as follows. After a brief survey of the related literature, we formally describe the constrained pure exploration MAB formulation in Section~\ref{sec: prob}. In Section~\ref{sec: lb}, we derive an information theoretic lower bound for two-armed instances, which leads to a conjecture on the lower bound for the general $K$-armed case. The \textsc{Constrained-SR}\ algorithm is described and analyzed in Section~\ref{sec: algo}.
Finally, we provide a numerical case study in Section~\ref{sec: numerics}, and conclude in Section~\ref{sec:conclusion}. Throughout the paper, references to the appendix (mainly for proofs of certain technical results) point to the appendix in the supplementary materials document. \noindent {\bf Related literature:} There is a substantial literature on the multi-armed bandit problem. We refer the reader to excellent textbook treatments \cite{bubeck2012,lattimore} for an overview. In this review, we restrict attention to (the few) papers that consider MABs with multiple attributes. \cite{drugan2013,yahyaa2015} consider the \emph{Pareto frontier} in the attribute space; the goal in these papers is to play all Pareto-optimal arms equally often. Another useful notion is \emph{lexicographic optimality}, where the attributes are `ranked' with `less important' attributes used to break ties in values of `more important' attributes (see \cite{ehrgott2005}). \cite{tekin2018,tekin2019} apply the notion of lexicographic optimality to contextual MABs. The paper closest to the present paper is~\cite{kagrecha2020constrained}, which analyses a similar constrained MAB formulation, but in the \emph{regret minimization} setting. This paper proposes a UCB-style algorithm for this problem, and establishes information theoretic lower bounds. The follow-up paper~\cite{chang2020risk} proposes a Thompson Sampling based variant. Special cases of the constrained MAB problem (with a risk constraint) are considered in the pure exploration \emph{fixed confidence} setting in~\cite{david2018,hou2022almost}.~\cite{chang2020} considers an average cost constraint (each arm has a cost distribution that is independent of its reward distribution), pursuing the weaker goal of \emph{asymptotic optimality}. A linear bandit setting is considered in~\cite{pacchiano2021} under the assumption that there is at least one arm which satisfies the constraints. 
Finally,~\cite{amani2019, moradipari2019} consider the problem of maximizing the reward subject to satisfying a linear `safety' constraint with high probability. None of the above mentioned papers considers the \emph{fixed budget} pure exploration setting considered here. Additionally, all the papers above (with the exception of \cite{kagrecha2020constrained}) implicitly assume that the instance is feasible; the present paper explicitly addresses the practically relevant possibility that the learning agent may encounter an instance where no arm meets the prescribed constraint(s). \section{The \textsc{Constrained-SR} algorithm} \label{sec: algo} In this section, we propose the \textsc{Constrained-SR}\ algorithm for the constrained MAB problem posed in Section~\ref{sec: prob}, and provide a performance guarantee via an upper bound on the probability of error under this algorithm. This upper bound compares favourably with the information theoretic lower bound conjectured in Section~\ref{sec: lb} (see Conjecture~\ref{thm: K arms lb}), suggesting that the \textsc{Constrained-SR}\ algorithm is nearly optimal. Importantly, the design of the \textsc{Constrained-SR}\ algorithm is motivated by our information theoretic lower bound for the two-armed case (see Theorem~\ref{thm: 2 arms lb}); \textsc{Constrained-SR}\ rejects arms sequentially based on \emph{estimates} of the same pairwise suboptimality gaps that appear in the lower bound. \noindent {\bf Algorithm description:} The \textsc{Constrained-SR}\ algorithm is based on the well-known Successive Rejects (SR) framework proposed by \citet{audibert-bubeck}. Informally, SR runs over $K-1$ phases; at the end of each phase, one arm (the one that looks empirically `worst') is rejected from consideration. Specifically, SR defines positive integers $n_1,n_2,\ldots,n_{K-1},$ such that $n_1 < n_2 < \cdots < n_{K-1}$ and $n_1 + n_2 + \cdots + n_{K-2} + 2 n_{K-1} \leq T$ (see Algorithm~\ref{algo: pairwise} for the details). 
In phase~$k,$ each of the surviving $K-k+1$ arms is pulled $n_k - n_{k-1}$ times. (This means that by the end of phase~$k,$ each surviving arm has been pulled~$n_k$ times.) The sole arm that survives at the end of phase~$K-1$ is declared to be optimal, and the instance is flagged as feasible (respectively, infeasible) if this surviving arm `appears' feasible (respectively, infeasible). \textsc{Constrained-SR}\ (formal description as Algorithm~\ref{algo: pairwise}) differs from SR in the criterion used to reject an arm at the end of each phase. Note that the classical SR algorithm is designed for a single attribute; this makes the choice of the empirically `worst' arm obvious. In contrast, the elimination criterion for our constrained MAB problem should depend on estimates of \emph{both} attributes for each surviving arm. The \textsc{Constrained-SR}\ algorithm does this as follows: Let $\hat{J}(A_k)$ denote the arm that `appears' optimal at the end of phase~$k,$ where $A_k$ denotes the set of surviving arms at the beginning of phase~$k.$ Formally, letting~$\hat{\mu}^k_j(i)$ denote the estimate of attribute~$j$ for arm~$i$ at the end of phase~$k,$ \begin{equation} \label{eq:emp_opt_arm} \displaystyle \hat{J}(A_k) = \left\{ \begin{array}{ll} \displaystyle \argmin_{i \in A_k \colon \hat{\mu}^k_2(i) \leq \tau} \hat{\mu}^k_1(i), & \{i \in A_k \colon \hat{\mu}^k_2(i) \leq \tau\} \neq \emptyset \\ \displaystyle \argmin_{i \in A_k} \hat{\mu}^k_2(i), & \{i \in A_k \colon \hat{\mu}^k_2(i) \leq \tau\} = \emptyset. \end{array} \right. \end{equation} Then, the gaps $\delta(\hat{J}(A_k),i)$ are estimated for all arms~$i \in A_k$ as follows. 
\begin{equation} \hat\delta\left(\hat{J}(A_k),i\right) := \delta\left(\hat{\mu}^k(\hat{J}(A_k)),\hat{\mu}^k(i)\right), \label{eq:delta_hat} \end{equation} where $\hat{\mu}^k(i) := \left(\hat{\mu}^k_1(i),\hat{\mu}^k_2(i)\right).$ In other words, the gaps relative to the `seemingly optimal' arm are estimated by replacing the (unknown) arm attributes by their available estimates. \ignore{ If $\hat{\mu}^k_{2}(\hat{J}(A_k)) \leq \tau,$ i.e., arm~$\hat{J}(A_k)$ `appears' feasible, then \begin{equation*} \displaystyle \hat\delta(\hat{J}(A_k),i) = \left\{ \begin{array}{ll} \sqrt{a_1} \left ( \hat{\mu}^k_{1}(i)- \hat{\mu}^k_{1}(\hat{J}(A_k)) \right) & \text{ if } \hat{\mu}^k_{2}(i) \leq \tau \text{ (i.e., $i$ appears feasible)} \\ \sqrt{a_2} \left (\hat{\mu}^k_2(i)-\tau \right) & \text{ if } \hat{\mu}^k_{2}(i) > \tau,\ \hat{\mu}^k_1(i) \leq \hat{\mu}^k_{1}(\hat{J}(A_k)) \\ & \text{\quad (i.e., $i$ appears to be a deceiver)} \\ \max \{ \sqrt{a_2} \left (\mu^k_2(i) - \tau \right ) , \sqrt{a_1} \left (\mu_1^k(i)-\mu_1^k(\hat{J}(A_k)) \right ) \} & \text{ if } \hat{\mu}^k_{2}(i) > \tau,\ \hat{\mu}^k_1(i) > \hat{\mu}^k_{1}(\hat{J}(A_k))\\ \end{array} \right.. \end{equation*} Finally, the arm $\argmax_{i \in A_k} \hat{\delta}(\hat{J}(A_k),i),$ i.e., the arm with the largest estimated gap relative to $\hat{J}(A_k),$ is rejected, with the following rule used to break ties.\footnote{This tie-breaking rule plays a key role in the performance of \textsc{Constrained-SR}; in contrast, the tie-breaking rule is \emph{inconsequential} in the original SR algorithm for single attribute MABs.} Let \begin{equation} \hat{D}(A_k) = \{ \underset{i \in A_k, i \neq \hat{J}(A_k) }{\argmax} \hat{\Delta}(\hat{J}(A_k),i) \} \end{equation} denote the set of arms in $A_k$ that achieve the same maximizing $\hat\Delta(\hat{J}(A_k),\cdot)$. We denote by $\mathcal{K}(\hat{D}(A_k))$ the set of arms in $\hat{D}(A_k)$ that appear empirically feasible (i.e., satisfying~$\hat{\mu}^k_2(\cdot) \leq \tau$).
\noindent $\bullet$ If $\mathcal{K}(\hat{D}(A_k))$ is a strict subset of $\hat{D}(A_k),$ the arm that appears the most infeasible (i.e., the arm in $\hat{D}(A_k) \setminus \mathcal{K}(\hat{D}(A_k))$ with the largest value of $\hat\mu_2^k(\cdot)$) is rejected. \noindent $\bullet$ Else, the arm that appears feasible, but most suboptimal (i.e., the arm in $\mathcal{K}(\hat{D}(A_k))$ with the largest value of $\hat\mu_1^k(\cdot)$) is rejected. \emph{Remark:} We motivate the rationale behind the tie-breaking rule of the \textsc{Constrained-SR}\ algorithm via the scenario shown in Figure~\ref{fig: algo_tie_breaking} at the end of a generic phase. Here, arm~1 appears optimal, with $\hat\Delta(1,2) = \hat\Delta(1,3),$ both gaps being equal to the (small) feasibility gap of arm~1 (i.e., $\tau - \hat\mu_2(1)$). However, arms~2 and~3 are not `symmetric' from the standpoint of the algorithm. Since the feasibility/infeasibility status of arm~1 is `uncertain' (given how close it is to the $\tau$ boundary), eliminating arm~3 is riskier, since it might be the optimal arm in case arm~1 is subsequently found to be infeasible. On the other hand, eliminating arm~2 first is `safer,' since it is less likely to be the optimal arm. \ignore{Unlike the classical SR algorithm, these two arms having the same value of $\Delta(1,\cdot)$ is not a zero probability event. Moreover, these two arms are not `symmetric' for a random tie-breaking rule to make sense; arm 2 is infeasible while arm 3 is feasible. Recommending an infeasible arm at the end would result in the algorithm predicting both the optimal arm and the feasibility flag wrong, while recommending a feasible arm at the end would result in an error in predicting only the optimal arm.
Thus, it makes sense to `hoard' feasible arms and remove infeasible arms first, which is the motivation behind the Infeasible First algorithm (see Appendix~\ref{app: IF} for a formal description of the algorithm).} \begin{figure}[t] \centering \begin{subfigure}{0.49\linewidth} \centering \begin{tikzpicture}[scale=0.55, dot/.style = {circle, draw, fill=#1, inner sep=2pt},every node/.style={scale=0.8} ] \draw[draw,latex-] (0,5) +(0,0.5cm) node[above right] {$\hat\mu_2$} -- (0,0); \draw[draw,-latex] (0,0) -- (5,0) -- +(0.5cm,0) node[below right] {$\hat\mu_1$}; \draw[dashed] (0,1.8) node[left] {$\tau$} -- (5,1.8); \node[dot=black, label=below:{1}] at (2, 1.4) {}; \node[dot=blue, label=2] at (1, 3.8) {}; \node[dot=red, label=below:{3}] at (4, 1) {}; \end{tikzpicture} \caption{} \label{fig: algo_tie_breaking} \end{subfigure} \hfill \begin{subfigure}{0.49\linewidth} \centering \begin{tikzpicture}[scale=0.55, dot/.style = {circle, draw, fill=#1, inner sep=2pt},every node/.style={scale=0.8} ] \draw[draw,latex-] (0,5) +(0,0.5cm) node[above right] {$\hat\mu_2$} -- (0,0); \draw[draw,-latex] (0,0) -- (5,0) -- +(0.5cm,0) node[below right] {$\hat\mu_1$}; \draw[dashed] (0,1.8) node[left] {$\tau$} -- (5,1.8); \node[dot=black, label=1] at (0.5, 2.1) {}; \node[dot=blue, label=2] at (3, 0.5) {}; \node[dot=red, label=3] at (4, 0.5) {}; \end{tikzpicture} \caption{} \label{fig: algo_if} \end{subfigure} \caption{Panel~(a) shows a feasible instance that motivates our tie-breaking rule. Panel~(b) shows a feasible instance that motivates the use of estimates of our information theoretic suboptimality gaps to guide arm elimination.} \label{fig: algo_motivation} \end{figure} \emph{Remark:} While the above example might suggest that it is sound to blindly eliminate seemingly infeasible arms first, the scenario shown in Figure~\ref{fig: algo_if} (again, at the end of a generic phase) demonstrates that this is not always the case. 
Here, arm~2 appears optimal, but arm~1, placed slightly above the $\tau$ boundary, might be optimal if $\hat\mu_2(1)$ is a (small) overestimation of $\mu_2(1)$. It is therefore `safer' in this scenario to eliminate arm~3; this is exactly what \textsc{Constrained-SR}\ would do, since $\hat\Delta(2,1) < \hat\Delta(2,3).$ This highlights the importance of the sophisticated elimination criterion employed by \textsc{Constrained-SR}, that captures the relative likelihoods of different arms being optimal (via estimates of information theoretic suboptimality gaps). \ignore{the optimal arm (arm 1) is just below the $\tau$ boundary, while arms 2 and 3 are feasible suboptimal and farther away from the $\tau$ boundary than arm 1. In this instance, it is likely (especially in the initial phases of exploration) that arm~1 appears infeasible (i.e., $\hat{\mu}_2(1) > \tau$). However, eliminating arm~1 based on this observation is clearly risky. Instead, the \textsc{Constrained-SR}\ algorithm rejects an arm based on a more sophisticated criterion that captures the likelihood that it is actually the optimal arm (via estimates of information theoretic suboptimality gaps).} \ignore{In \cite{audibert-bubeck}, the arms are ordered based on their means and the arm with the highest empirical mean is dismissed at the end of each round. Here, we dismiss the arm with the highest empirical gap with respect to the optimal arm. For this purpose, we henceforth assume that the arms are ordered in increasing order of their gaps with respect to the optimal arm, i.e., for $i,j \in [K]$, if $i>j$, then $\Delta(1,i) \geq \Delta(1,j)$. In this ordering, ties are broken on the basis of the corresponding gap defined by $\delta(\cdot,\cdot)$. $A_k$ denotes the set of arms that have survived till phase $k$. At the beginning of phase $k$, the algorithm pulls each surviving arm $n_k$ number of times. 
Define $\hat{\delta}_{k}(\hat{J}(A),i)$ to be the empirical gap of arm $i$ after $k$ phases with respect to $\hat{J}(A)$, an empirically optimal arm from the set $A \subseteq [K]$. Let $\hat{J}_{T}([K])$ be the arm that survives at the end of $K-1$ phases. If this arm is empirically feasible, the algorithm recommends that arm. Else, the algorithm outputs 0, indicating that it considers the instance to be infeasible. The precise pseudocode is given in Algorithm \ref{algo: pairwise}. } \begin{algorithm}[tb] \caption{\textsc{Constrained-SR}\ algorithm} \label{algo: pairwise} \begin{algorithmic}[1] \Procedure{C-SR}{$T,K,\tau$} \State Let $A_1=\{1,\ldots,K\}$ \State $\overline{\log}(K) := \frac{1}{2} + \sum_{i=2}^{K}\frac{1}{i}$ \State $n_0 = 0,$ $n_k = \lceil \frac{1}{\overline{\log}(K) } \frac{T-K}{K+1-k} \rceil$ for $1 \leq k \leq K-1$ \For{$k=1,\ldots,K-1$} \State For each $i \in A_k$, pull arm $i$ $(n_k-n_{k-1})$ times \State Compute~$\hat{J}(A_k)$ (using~\eqref{eq:emp_opt_arm}) \State Compute~$\hat\Delta\left(\hat{J}(A_k),i\right)$ for $i \in A_k$ (using~\eqref{eq:delta_hat}) \State $ \hat{D}(A_k) = \{ \underset{i \in A_k, i \neq \hat{J}(A_k) }{\argmax} \hat{\Delta}(\hat{J}(A_k),i) \}$ \If{$| \hat{D}(A_k) | = 1$} \State $A_{k+1}=A_k \setminus \hat{D}(A_k)$ \Else \State Compute $\mathcal{K}(\hat{D}(A_k))$ \If{$\mathcal{K}(\hat{D}(A_k))^{\mathsf{c}} = \emptyset $} \State $A_{k+1}=A_k \setminus \{ \underset{i \in\mathcal{K}(\hat{D}(A_k)) }{\argmax} \hat\mu_1^k(i) \}$ \Else \State $A_{k+1}=A_k \setminus \{ \underset{i \in\mathcal{K}(\hat{D}(A_k))^{\mathsf{c}} }{\argmax} \hat\mu_2^k(i) \}$ \EndIf \EndIf \EndFor \State Let $\hat{J}([K])$ be the unique element of $A_K$ \If{$\hat{\mu}^{K-1}_{2}(\hat{J}([K]))> \tau$} \State $\hat{F}([K])=$\texttt{False} \Else \State $\hat{F}([K])=$\texttt{True} \EndIf \State \textbf{return} ($\hat{J}([K]),\hat{F}([K])$) \EndProcedure \end{algorithmic} \end{algorithm} \noindent {\bf Performance evaluation:} We now characterize the performance
of \textsc{Constrained-SR}. For the purpose of expressing our performance guarantee, we order the arm labels as follows (without loss of generality). Arm~1 is the optimal arm, and arms $2,\ldots,K$ are labelled in increasing order of $\Delta(1,\cdot),$ with ties broken in a manner that is consistent with the \textsc{Constrained-SR}\ algorithm. Formally, for any $1 < i < j \leq K,$ if $\Delta(1,i) = \Delta(1,j),$ then either\\
\noindent $\bullet$ $i,j \in \mathcal{K}(\nu)$ and $\mu_1(i) \leq \mu_1(j),$ or \\
\noindent $\bullet$ $i \in \mathcal{K}(\nu)$ and $j \notin \mathcal{K}(\nu),$ or \\
\noindent $\bullet$ $i,j \notin \mathcal{K}(\nu)$ and $\mu_2(i) \leq \mu_2(j).$\\
\begin{theorem} \label{thm: ub} Under the \textsc{Constrained-SR}\ algorithm, the probability of error is upper bounded as: $$e_T \leq c(K) \exp \left ( -\frac{\beta T}{H_2\ \overline{\log}(K)} \right ),$$ where $H_2=\underset{i \in [K], i \neq 1}{\max} \frac{i}{\Delta^2(1,i)},$ $c(K)$ is a function of~$K,$ and $\beta$ is a positive universal constant. \end{theorem} The main takeaways from Theorem~\ref{thm: ub} are as follows. \noindent $\bullet$ Theorem~\ref{thm: ub} provides an upper bound on the probability of error under \textsc{Constrained-SR}\ that decays exponentially with the budget~$T.$ The associated decay rate is given by $\frac{\beta}{H_2\ \overline{\log}(K)},$ suggesting that the instance-dependent parameter $H_2$ captures the hardness of the instance (under the \textsc{Constrained-SR}\ algorithm); a larger value of $H_2$ implies a `harder' instance, since the probability of error decays more slowly with the budget.
\noindent $\bullet$ The `hardness index' $H_2$ agrees with the hardness index obtained for the classical SR algorithm in~\cite{audibert-bubeck} (also denoted $H_2$) for the unconstrained MAB problem when~$\tau \rightarrow \infty.$ \noindent $\bullet$ The decay rate from the upper bound for \textsc{Constrained-SR}\ can be compared with that in the information theoretic lower bound conjectured in Section~\ref{sec: lb} (see Conjecture~\ref{thm: K arms lb}). Indeed, it can be proved that $\frac{H_2}{2} \leq H_1 \leq \overline{\log}(K) H_2$ (see \cite{audibert-bubeck}). This suggests that the decay rate under \textsc{Constrained-SR}\ is optimal up to a factor that is logarithmic in the number of arms. In other words, this suggests \textsc{Constrained-SR}\ is nearly optimal.\footnote{The same logarithmic (in the number of arms) `gap' between the decay rate in the information theoretic lower bounds and that of the best known upper bound also exists in the (unconstrained, fixed budget) pure exploration MAB problem (see~\cite{audibert-bubeck,kaufmann16}).} \noindent {\bf Sketch of the proof of Theorem~\ref{thm: ub}:} In the remainder of this section, we sketch the proof of Theorem~\ref{thm: ub}. The complete proof can be found in Appendix~\ref{app: ub}. 
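Before diving into the proof, the quantities appearing in the bound can be made concrete. The following Python sketch computes the phase lengths $n_k$ from Algorithm~\ref{algo: pairwise} and the hardness index $H_2$ (the budget, number of arms, and gap values below are hypothetical, not taken from the paper):

```python
import math

def phase_lengths(T, K):
    """Phase lengths n_k used by Constrained-SR (Successive-Rejects budget split)."""
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))  # \overline{log}(K)
    return [math.ceil((T - K) / (log_bar * (K + 1 - k))) for k in range(1, K)]

def hardness_H2(gaps):
    """H_2 = max_{i >= 2} i / Delta(1, i)^2, where gaps[0] = Delta(1, 2),
    gaps[1] = Delta(1, 3), ... are sorted in increasing order."""
    return max((i + 2) / g ** 2 for i, g in enumerate(gaps))

# Hypothetical 3-armed instance with budget T = 100:
n = phase_lengths(100, 3)       # phase lengths [n_1, n_2]
H2 = hardness_H2([0.5, 1.0])    # max(2 / 0.25, 3 / 1.0) = 8.0
```

Note that the total number of pulls, $\sum_{k=1}^{K-1}|A_k|(n_k - n_{k-1}),$ never exceeds the budget $T$ by construction of $\overline{\log}(K).$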
Note that \begin{align*} e_T &=\mathbb{P} \left ( \left \{ J([K]) \neq \hat{J}([K]) \right \} \cup \left \{ \hat{F}([K]) \neq F([K]) \right \} \right ) \\ &= \sum_{k=1}^{K-1} \prob{\textrm{Arm 1 is dismissed in round }k} \\ &+ \prob{\hat{J}([K])=J([K]),\hat{F}([K])\neq F([K])} \end{align*} Let $\mathcal{A}_k$ denote the event that arm~1 is rejected at the end of round~$k.$ Noting that the event in the last term above implies that the feasibility status of~arm~1 is estimated incorrectly at the end of phase~$K-1,$ \eqref{eqn: conc_inequality} implies \begin{align} e_T &\leq \sum_{k=1}^{K-1} \prob{\mathcal{A}_k} + 2 \exp\left(-a_2 n_{K-1} ( | \tau - \mu_2(1) |)^2 \right) \\ &\leq \sum_{k=1}^{K-1} \prob{\mathcal{A}_k} + 2 \exp\left(-n_{K-1} \Delta^2(1,2) \right). \label{eq:error1-CSR} \end{align} We now bound $\prob{\mathcal{A}_k}.$ In round $k$, at least one of the $k$ `worst' arms (according to the ordering defined on the arms) survives (i.e., belongs to $A_k$). Thus, for arm 1 to be dismissed at the end of round $k,$ it must appear empirically `worse' than this arm. Formally, we have \begin{align*} \prob{\mathcal{A}_k} &\leq \sum_{j=K-k+1}^{K} \prob{\hat{J}(A_k) = j} \\ &+ \sum_{i=2}^{K-k}\sum_{j=K-k+1}^{K} \prob{\hat{J}(A_k) = i, \hat{\delta}_k(i,j) \leq \hat{\delta}_k(i,1)} \\ &=: S_1 + S_2. \end{align*} The summation~$S_1$ above corresponds to the event that one of the worst $k$ arms looks empirically optimal at the end of phase~$k.$ On the other hand, the summation~$S_2$ corresponds to the event that some other arm~$i$ (not among the worst~$k$ arms) looks empirically optimal at the end of phase~$k,$ and further that arm~1 has a greater (estimated) gap (relative to~$i$) than an arm~$j,$ which is among the worst~$k$ arms (this is necessary for the elimination of arm~1).
Crucially, the terms in~$S_1$ can be bounded by analysing a \emph{two-armed} instance consisting only of arms~1 and~$j.$ Similarly, the terms in~$S_2$ can be bounded by analysing a \emph{three-armed} instance consisting only of arms~1,~$i$ and~$j.$ The relevant bounds are summarized below. \begin{lemma} \label{lemma:2arm-upper_bound} Consider a two-armed instance where the arms are labelled (without loss of generality) as per the convention described before. Under \textsc{Constrained-SR}, the probability that arm~2 is empirically optimal after phase~1 is at most $c_2 \exp\left(-\beta_2 n_1 \Delta^2(1,2)\right),$ where $c_2,$ $\beta_2$ are universal positive constants. \end{lemma} \begin{lemma} \label{lemma:3arm-upper_bound} Consider a three-armed instance where the arms are labelled (without loss of generality) as per the convention described before. Under \textsc{Constrained-SR}, the probability that after phase~1, arm 2 is empirically optimal and arm~1 is rejected is at most $c_3 \exp\left(-\beta_3 n_1 \Delta^2(1,3)\right),$ where $c_3,$ $\beta_3$ are universal positive constants.
\end{lemma} Using Lemmas~\ref{lemma:2arm-upper_bound} and~\ref{lemma:3arm-upper_bound} (proofs in Appendix~\ref{app: ub}), $\prob{\mathcal{A}_k}$ can be upper bounded as follows: \begin{align*} \prob{\mathcal{A}_k} &\leq k c_2 \exp\left(-\beta_2 n_k \Delta^2(1,K-k+1) \right) \\&+ k(K-k-1) c_3 \exp\left(-\beta_3 n_k \Delta^2(1,K-k+1) \right) \\ &\leq K^2 \ \tilde{c} \exp\left(-\tilde{\beta} n_k \Delta^2(1,K-k+1) \right), \end{align*} where $\tilde{c} = \max(c_2,c_3)$ and $\tilde\beta = \min(\beta_2,\beta_3).$ Finally, substituting the above bound into~\eqref{eq:error1-CSR}, we get \begin{align*} e_T &\leq \sum_{k=1}^{K-1} K^2 \ \tilde{c} \exp\left(-\tilde{\beta} n_k \Delta^2(1,K-k+1) \right) \\ &+ 2 \exp\left(- n_{K-1} \Delta^2(1,2) \right) \\ & \leq (K^3 \tilde{c} + 2) \exp\left(-\hat\beta \min_{1 \leq k \leq K-1} \left(n_k \Delta^2(1,K-k+1) \right)\right), \end{align*} where $\hat\beta = \min(\tilde\beta,1).$ Now, using the definition of $n_k,$ \begin{align*} &\min_{1 \leq k \leq K-1} \left(n_k \Delta^2(1,K-k+1) \right) \\ &\quad \geq \min_{1 \leq k \leq K-1} \left( \frac{T-K}{\overline{\log}(K)} \frac{\Delta^2(1,K-k+1)}{K-k+1}\right) \geq \frac{T-K}{\overline{\log}(K)} \frac{1}{H_2}, \end{align*} which implies the statement of the theorem. \ignore{ Note that to bound each term in \ref{eqn: A_k decomposition}, it is enough to characterize this for a three-armed instance consisting of arm 1, arm $\hat{J}(A_k)$ and arm $j$. Such an instance would have arm 1 as the optimal arm and arm $j$ as the worst arm. The probability of the event described by each term in \ref{eqn: A_k decomposition} is equal to the probability of dismissing arm 1 at the end of round 1 using the \textsc{Constrained-SR}\ algorithm on the three-armed instance consisting of arm 1, arm $\hat{J}(A_k)$ and arm $j$, and arm 1 is empirically feasible at the end of round 1, the only difference being that the arms would have been sampled $n_k$ times in the former case.
Let $e_{i,j,n}$ denote the probability of dismissing arm $i$ at the end of round $j$ when each surviving arm has been sampled $n$ times. Let $\mathcal{E}_{1}, \mathcal{E}_{2}, \mathcal{E}_{3}$ denote the set of instances where the ``worst" arm, i.e., the arm with the largest gap, is feasible suboptimal, deceiver and infeasible suboptimal respectively. The following lemma characterizes $e_{1,1,n}$ for a three-armed instance. \begin{lemma} \label{lem: K3} When $K=3$, to characterize $e_{T,1,1,n}$ using the \textsc{Constrained-SR}\ algorithm on any instance, it is enough to characterize it over each of $\mathcal{E}_1, \mathcal{E}_2$,and $\mathcal{E}_3$. Moreover, on $\mathcal{E}_1, \mathcal{E}_2$ and $\mathcal{E}_3$, there are universal positive constants $c_3, c_4$ such that \begin{align} e_{1,1,n} \leq c_3 \exp \left (-c_4 n \delta(1,3)^2 \right ). \label{feas_final_K3} \end{align} \end{lemma} \begin{proof} Consider an instance $\nu \in \mathcal{E}_1$. The analysis for $\nu \in \mathcal{E}_2$ and $\nu \in \mathcal{E}_3$ are similar and can be found in <\texttt{REF TO SECTION}>. Note that for arm 1 to be empirically feasible and to be dismissed at the end of round 1, $\hat{\mu}_{1,n}(1)$ has to be greater than $\hat{\mu}_{1,n}(3)$. We take 2 cases: arm 3 being empirically feasible or infeasible. If arm 3 is empirically feasible, the bound follows from \eqref{eqn: conc_inequality}. If arm 3 is empirically infeasible and if arm 1 is to be dismissed, arm 3 becomes an empirically infeasible suboptimal arm and thus $\hat{\mu}_{1,n}(3)$ would again have to be smaller than $\hat{\mu}_{1,n}(1)$ as, \begin{align*} \left \{ \hat{\Delta}(\hat{J}(A_1),1)>\hat{\Delta}(\hat{J}(A_1),3) \right \} \subseteq \left \{ \hat{\delta}(\hat{J}(A_1),1)>\hat{\delta}(\hat{J}(A_1),3) \right \} \subseteq \left \{ \hat{\mu}_{1,n}(1) > \hat{\mu}_{1,n}(3)\right \}, \end{align*} where the above events are for the case where arm 3 is empirically infeasible suboptimal. 
Thus, we have: \begin{align*} \mathbb{P} \left ( \left \{ \hat{\delta}_{1}(\hat{J}(A_1),1) > \hat{\delta}_{1}(\hat{J}(A_1),j) \right \} \cap \mathcal{B}_1 \right ) &\leq \mathbb{P}(\hat{\mu}_{1,n}(1) > \hat{\mu}_{1,n}(3) ). \end{align*} Using the appropriate concentration inequality from \eqref{eqn: conc_inequality}, we get \eqref{feas_final_K3}. \end{proof} \begin{comment} Consider an instance $\nu \in \mathcal{E}_2$, i.e., the ``worst" arm is a deceiver arm. Similar to the last case, we have \eqref{K3_E1_case1}. The first term in \eqref{K3_E1_case1} can be bounded using an appropriate concentration inequality. To bound the second term, we take two cases: arm 3 is empirically infeasible and arm 3 is empirically feasible. The latter case can easily be bounded using an appropriate concentration inequality. In the former case, arm 3 is empirically infeasible, arm 1 is empirically feasible and arm 1 is removed. Note that this constrains arm 2 to be empirically feasible. Arm 1 can be removed only if its empirical gap is greater than that of arm 3. Thus, we can bound \eqref{K3_E1_case1} as: \begin{align*} \mathbb{P} \left ( \mathcal{A}_1 \right ) &\leq \mathbb{P}(\hat{\mu}_1(1)-\hat{\mu}_2(1) > \hat{\mu}_1(K)-\tau ). \end{align*} Using the appropriate concentration inequality from \eqref, we get \eqref{feas_final_K3}. Consider an instance $\nu \in \mathcal{E}_3$, i.e., the ``worst" arm is a suboptimal deceiver arm. Similar to the last two cases, we have \eqref{K3_E1_case1}. The first term in \eqref{K3_E1_case1} can be bounded using an appropriate concentration inequality. To bound the second term, note that both the approaches used in the last two sections work, i.e., in each dimension we get a bound in terms of arm 3's means in that dimension. Hence, we can bound the second term by a minimum of those two bounds, thus giving \eqref{feas_final_K3}. 
\end{comment} Using Lemma \ref{lem: K3} in \eqref{eqn: A_k decomposition}, for some positive constants $c_5, c_6$, we get the following upper bound for \eqref{eqn: error_prob_main}: \begin{align*} \sum_{k=1}^{K-1} \mathbb{P}( \mathcal{A}_k ) & \leq \sum_{k=1}^{K-1} \sum_{j=K-k+1}^{K} c_5 \exp \left (-c_6 n_k \Delta(1,j)^2 \right ) \\ & \leq \sum_{k=1}^{K-1} c_5 k \exp \left ( -c_6 n_k \Delta(1,K-k+1)^2 \right ). \numberthis \label{eqn: arm_dismiss} \end{align*} From the definition of $n_k$ and $H$, we have \begin{align*} n_k \Delta(1,K-k+1)^2 \geq \frac{n-K}{\overline{\log (K)}} \frac{1}{(K+1-k) \Delta(1,K-k+1)^{-2} } \geq \frac{n-K}{\overline{\log (K) H}}. \end{align*} Substituting this back in \eqref{eqn: arm_dismiss} gives a bound of the form: \begin{align*} \mathbb{P}(\text{Arm 1 is dismissed in an intermediate round}) \leq c_7 \exp \left ( -c_8 (T-K)/H \right), \end{align*} for some positive universal constants $c_7, c_8$. Using this bound in \eqref{eqn: ub overall} gives the result. Note that the bound given by the concentration inequality for the second term in \eqref{eqn: ub overall} is smaller than the bound for the first term for a suitably large value for $c_7$ as the rate of decay of the former contains only $(\tau-\mu_2(i))$ while the rate of decay of the latter containes $\Delta(\cdot,\cdot)$. } \section{Concluding Remarks} \label{sec:conclusion} This work motivates follow-ups in several directions. On the theoretical front, the main gap in this work pertains to the information theoretic lower bound. Proving Conjecture~\ref{thm: K arms lb} would not only establish the `near' optimality of the \textsc{Constrained-SR}\ algorithm, but also, quite likely, introduce a novel approach for deriving lower bounds in the fixed budget pure exploration setting. On the application front, the present work motivates an extensive case study applying the proposed algorithm in various application scenarios. 
This work also motivates generalizations to constrained reinforcement learning, where the goal is to identify the optimal policy that fulfills additional constraints. \section{Lower bound} \label{sec: lb} In this section, we provide an information theoretic lower bound on the probability of error under any algorithm, for a class of two-armed Gaussian bandit instances. We then extrapolate this bound to conjecture a lower bound for the general $K$-armed case. Crucially, the lower bound for the two-armed case forms the basis for our algorithm design (see Section~\ref{sec: algo}). First, we define some sub-optimality gaps that will be used to state our lower bounds, and also later when we discuss algorithms. Given two arms $i$ and $j,$ we say $i \succ j$ if~$i$ is an optimal arm in a two-armed instance consisting only of arms~$i$ and~$j.$ For $i \succ j,$ define $\delta(i,j) = \delta(\mu(i),\mu(j))$ as follows.\footnote{We abuse notation and use~$\delta(\mu(i),\mu(j))$ in place of $\delta(i,j)$ when we need to emphasize the dependence of~$\delta$ on the attribute values of arms~$i$ and~$j.$} $\delta(i,j) :=$ \begin{equation*} \displaystyle \left\{ \begin{array}{ll} \sqrt{a_1} \left ( \mu_{1}(j)- \mu_{1}(i) \right ), & \text{ if } i, j \in \mathcal{K}(\nu),\\ \sqrt{a_2} \left (\mu_2(j)-\tau \right), & \text{ if } i \in \mathcal{K}(\nu) \text{ and} \\ & \quad j \notin \mathcal{K}(\nu) \text{ is a deceiver,} \\ \max \bigl\{ \sqrt{a_2} \left (\mu_2(j) - \tau \right ), & \text{ if } i \in \mathcal{K}(\nu) \text{ and} \\ \quad \sqrt{a_1} \left (\mu_1(j)-\mu_1(i) \right ) \bigr\}, & \quad j \notin \mathcal{K}(\nu) \text{ is suboptimal,} \\ \sqrt{a_2} (\mu_2(j)-\mu_2(i)), & \text{ if } i,j \notin \mathcal{K}(\nu) \end{array} \right.. \label{eq:delta_def} \end{equation*} Next, for $i \succ j,$ define $\Delta(i,j) = \Delta(\mu(i),\mu(j))$ as follows. \begin{equation*} \displaystyle \Delta(i,j) := \min \{ \sqrt{a_2}\ |\tau - \mu_2(i)|, \delta(i,j) \}.
\end{equation*} \ignore{ We define the following gap function $\Delta(\cdot, \cdot)$ between any two arms that are a measure of how hard it is for any algorithm to give the correct output on that two-armed instance. \begin{definition} $\Delta(\cdot, \cdot)$ and $\delta(\cdot,\cdot)$ are functions from $[K] \times [K]$ to $\mathbb{R}$ defined for a given instance. The first argument is always $J([K])$ and the gaps are defined with respect to this arm. Let $j \in [K]$ be any arm.\\ Let the instance be feasible. If $j$ is a feasible suboptimal arm, we define \begin{align*} \delta(J([K]),j)=\sqrt{a_1} \left ( \mu_{1}(j)- \mu_{1}(J([K])) \right ). \end{align*} If $j$ is a deceiver arm, we define \begin{align*} \delta(J([K]),j)=\sqrt{a_2} \left (\mu_2(j)-\tau \right ). \end{align*} If $j$ is an infeasible suboptimal arm, we define \begin{align*} \delta(J([K]),j)=\max \{ \sqrt{a_2} \left ((\mu_2(j) - \tau \right ) , \sqrt{a_1} \left (\mu_1(j)-\mu_1(J([K])) \right ) \}. \end{align*} Let the instance be infeasible. Then, we define \begin{align*} \delta(J([K]),j)=\sqrt{a_2} \left ( \mu_{2}(j)- \mu_{2}(J([K])) \right ). \end{align*} Let $j$ be any arm. $\Delta(\cdot, \cdot)$ is defined as: \begin{align*} \Delta(J([K]),j) &=\min \left \{ \sqrt{a_1} \left ( \tau - \mu_2(J([K])) \right ), \delta(J([K]),j) \right \}, \shortintertext{if the instance is feasible, else:} \Delta(J([K]),j) &=\delta(J([K]),j). 
\end{align*} \end{definition} } As we will see, the smaller the value of $\Delta(i,j),$ the harder it is for a learner to identify the optimal arm~$i$ in a two-armed instance consisting of arms~$i$ and~$j.$ Thus, one may interpret~$\Delta(i,j)$ as the `suboptimality gap' of arm~$j$ (relative to arm~$i$); note that this gap depends on the values of the objective attributes, the constraint attributes, the threshold~$\tau,$ and also the concentration parameters~$a_1,a_2.$ For example, if $i$ is feasible and $j$ is feasible and suboptimal, $\Delta(i,j) = \min\{\sqrt{a_2}\left (\tau - \mu_2(i) \right), \sqrt{a_1} \left ( \mu_{1}(j)- \mu_{1}(i) \right )\}.$ Thus, the closer~$i$ is to the constraint boundary, and the smaller the gap between~$i$ and~$j$ in the objective attribute, the harder it is to identify~$i$ as the optimal arm in this pair. The gaps for the other cases can be interpreted in a similar manner. \ignore{For instance, for a feasible suboptimal arm $j$ in a feasible instance, a larger gap $\delta(\cdot, \cdot)$ with respect to the optimal arm would imply that their objective attributes are ``farther away", thus making it easier for any algorithm to differentiate between the optimal arm and arm $j$. A larger gap $\Delta(\cdot, \cdot)$ with respect to the optimal arm would imply that the constraint attribute of the optimal arm lies sufficiently ``far away" from $\tau$, thus reducing the likelihood of a good learner concluding that $J([K])$ is an infeasible arm.} We are now ready to state our information theoretic lower bound. Consider the class of arm distributions $\mathcal{D}$, which consists of 2-dimensional Gaussian distributions with covariance matrix $\Sigma = \textrm{diag}\left(\frac{1}{2a_1},\frac{1}{2a_2}\right).$ Attribute $\mu_1$ is the mean of the first dimension, while attribute $\mu_2$ is the mean of the second dimension. Note that the empirical average estimator satisfies~\eqref{eqn: conc_inequality} for both attributes. 
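For concreteness, the case analysis defining $\delta$ and $\Delta$ can be sketched in Python as below. This is an illustrative helper, not code from the paper; arms are $(\mu_1,\mu_2)$ pairs, $\mu_1$ is minimized subject to $\mu_2 \leq \tau,$ and the deceiver and infeasible-suboptimal cases are folded into a single $\max,$ which is valid under the assumption that a deceiver is an infeasible arm whose objective attribute is at least as good as that of arm~$i$ (the objective term is then non-positive, so the $\max$ reduces to the constraint term):

```python
import math

def delta_gap(mu_i, mu_j, tau, a1=1.0, a2=1.0):
    """delta(i, j) for i > j; arms are (mu1, mu2), mu1 minimized s.t. mu2 <= tau."""
    feas_i, feas_j = mu_i[1] <= tau, mu_j[1] <= tau
    if feas_i and feas_j:                       # both feasible: objective gap
        return math.sqrt(a1) * (mu_j[0] - mu_i[0])
    if feas_i and not feas_j:                   # j infeasible (deceiver or suboptimal)
        return max(math.sqrt(a2) * (mu_j[1] - tau),
                   math.sqrt(a1) * (mu_j[0] - mu_i[0]))
    return math.sqrt(a2) * (mu_j[1] - mu_i[1])  # both infeasible: constraint gap

def Delta_gap(mu_i, mu_j, tau, a1=1.0, a2=1.0):
    """Delta(i, j) = min{ sqrt(a2) * |tau - mu2(i)|, delta(i, j) }."""
    return min(math.sqrt(a2) * abs(tau - mu_i[1]),
               delta_gap(mu_i, mu_j, tau, a1, a2))
```

For instance, with $\tau = 0.5,$ $a_1 = a_2 = 1,$ arm $i = (0.3, 0.45)$ and a deceiver $j = (0.2, 0.8),$ one gets $\delta(i,j) = 0.3$ and $\Delta(i,j) = \min\{0.05, 0.3\} = 0.05.$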
\begin{theorem} \label{thm: 2 arms lb} Let $\nu$ be a two-armed bandit instance where $\nu(i) \in \mathcal{D}$ for $i \in \{1,2\},$ with attribute~$\mu_1$ being the mean of the first dimension of the arm distribution, and attribute~$\mu_2$ being the mean of the second dimension of the arm distribution. Under any consistent algorithm, \begin{align*} \limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq (\Delta(1,2))^2, \end{align*} where arm~1 is taken to be the optimal arm (without loss of generality). \end{theorem} The proof of Theorem~\ref{thm: 2 arms lb} can be found in Appendix~\ref{app: 2_arm_lb_proof}. Note that Theorem~\ref{thm: 2 arms lb} provides an upper bound on the (asymptotic) exponential rate of decay of the probability of error as $T \rightarrow \infty.$ Specifically, the decay rate can be at most $\Delta^2(1,2).$ This formalizes the interpretation of $\Delta(1,2)$ as a `suboptimality gap' between arm~2 and arm~1. It is instructive to see which aspects of the arm attributes influence this suboptimality gap. For example, if both arms~1 and~2 are feasible, then $\Delta(1,2)$ depends on the \emph{optimality gap} (i.e., $\mu_1(2) - \mu_1(1)$) and the \emph{feasibility gap} of arm 1 (i.e., $\tau - \mu_2(1)$) but not on the \emph{feasibility gap} of arm~2 (i.e., $\tau - \mu_2(2)$). On the other hand, if arm~1 is feasible and arm~2 is a deceiver, then $\Delta(1,2)$ depends on the \emph{feasibility gap} of arm~1 (i.e., $\tau - \mu_2(1)$) and the \emph{infeasibility gap} of arm~2 (i.e., $\mu_2(2) - \tau$), but not on the gap between the objective attributes. In Section~\ref{sec: algo}, we design an algorithm that eliminates arms from consideration sequentially based on estimates of these (pairwise) suboptimality gaps. \ignore{ Thus, the rate of decay of the probability of error depends on the gap between the optimal and the non-optimal arm.
Note that if $\tau$ tends to infinity in \eqref{thm: 2 arms lb}, the lower bound obtained is the same as that in Theorem 12 of \citet{kaufmann16}. For instance, consider a feasible instance where the non-optimal arm is feasible suboptimal. Theorem \ref{thm: 2 arms lb} says that the probability of error for any consistent algorithm on this instance does not depend upon the constraint attribute of $J([2])^{\mathsf{c}}$. Similarly, if $J([2])^{\mathsf{c}}$ is a deceiver arm, then the probability of error for any consistent algorithm on this instance does not depend upon the objective dimension of $J([2])^{\mathsf{c}}$. } Based on Theorem \ref{thm: 2 arms lb}, and results from~\cite{audibert-bubeck} on the classical (unconstrained) MAB problem, we conjecture the following extension of Theorem~\ref{thm: 2 arms lb} to the case of $K$ arms. Taking arm~1 to be the optimal arm without loss of generality, define $H_1:=\sum_{i=2}^{K} \frac{1}{\Delta^2(1,i)}$. \begin{conjecture} \label{thm: K arms lb} Let $\nu$ be a $K$-armed bandit instance where $\nu(i) \in \mathcal{D}, i \in [K],$ with attribute $\mu_1$ being the mean of the first dimension of the arm distribution, and attribute $\mu_2$ being the mean of the second dimension of the arm distribution. Under any consistent algorithm, \begin{align} \limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \frac{d}{H_1}, \end{align} where~$d$ is a universal positive constant. \end{conjecture} The main challenge in proving this conjecture for a general $K$-armed instance is that existing lower bound approaches for the unconstrained setting \citep{audibert-bubeck,kaufmann16,carpentier2016tight} do not generalize to the constrained setting. As per Conjecture~\ref{thm: K arms lb}, $H_1$ can be interpreted as a measure of the hardness of the instance under consideration.
Indeed, this definition of~$H_1$ agrees with the hardness measure that appears in lower bounds for the classical (unconstrained) MAB problem (also denoted~$H_1;$ see \cite{audibert-bubeck,kaufmann16}) when~$\tau \rightarrow \infty.$ \section{The Infeasible First algorithm} \label{app: IF} Informally, the algorithm removes the most (empirically) infeasible arm that has survived so far. If there are no infeasible arms, it removes the most (empirically) suboptimal arm. The formal version of the algorithm is given in Algorithm \ref{algo: if}.
\begin{algorithm}[t]
\caption{Infeasible First algorithm}
\label{algo: if}
\begin{algorithmic}[1]
\Procedure{IF}{$T,K,\tau$}
\State Let $A_1=\{1,\ldots,K\}$
\State $\overline{\log}(K) := \frac{1}{2} + \sum_{i=2}^{K}\frac{1}{i}$
\State $n_0 = 0,$ $n_k = \lceil \frac{1}{\overline{\log}(K) } \frac{T-K}{K+1-k} \rceil$ for $1 \leq k \leq K-1$
\For{$k=1,\ldots,K-1$}
\State For each $i \in A_k$, pull arm $i$ $(n_k-n_{k-1})$ times
\State Compute $\hat{\mathcal{K}}(A_k)=\{i \in A_k : \hat{\mu}_{2}^{k}(i) \leq \tau \}$
\State Compute $\hat{\mathcal{K}}^{\mathsf{c}}(A_k)=A_k \setminus \hat{\mathcal{K}}(A_k)$
\If{ $\hat{\mathcal{K}}^{\mathsf{c}}(A_k) \neq \emptyset$ }
\State $A_{k+1}=A_k \setminus \{ \underset{i \in \hat{\mathcal{K}}^{\mathsf{c}}(A_k) }{\argmax} \hat{\mu}_2^{k}(i) \}$ (if there is a tie, choose randomly)
\Else
\State $A_{k+1}=A_k \setminus \{ \underset{i \in \hat{\mathcal{K}}(A_k) }{\argmax} \hat{\mu}_1^{k}(i) \}$ (if there is a tie, choose randomly)
\EndIf
\EndFor
\State Let $\hat{J}_{T}$ be the unique element of $A_K$
\If{$\hat{\mu}^{K-1}_{2}(\hat{J}_{T})> \tau$}
\State $\hat{O}([K])=0$
\Else
\State $\hat{O}([K])=\hat{J}_{T}$
\EndIf
\State \textbf{return} $\hat{O}([K])$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\section{Some more numerical experiments} \label{app: num_exps} \section{Numerical experiments} \label{sec: numerics} \begin{figure}[t] \centering \begin{subfigure}{0.49\linewidth} \centering
\scalebox{0.85}{\input{pics/inst1.tex}}
\caption{}
\label{fig: csr_better}
\end{subfigure}
\hfill%
\begin{subfigure}{0.49\linewidth}
\centering
\scalebox{0.85}{\input{pics/inst3.tex}}
\caption{}
\label{fig: csr_factor}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\scalebox{0.85}{\input{pics/inst4.tex}}
\caption{}
\label{fig: both_same}
\end{subfigure}
\hfill%
\begin{subfigure}{0.49\linewidth}
\centering
\scalebox{0.85}{\input{pics/inst2.tex}}
\caption{}
\label{fig: sim_infeasible}
\end{subfigure}
\caption{The numerical performance of IF and \textsc{Constrained-SR}\ is shown on three different feasible instances in panels~(a), (b), (c) and on an infeasible instance in panel~(d). Note that the probability of error decays exponentially with the horizon in all four cases. Panels (b) and (c) show that IF and \textsc{Constrained-SR}\ have similar performance on those instances, while in panel (a), the decay rate of \textsc{Constrained-SR}\ is higher. }
\label{fig: sims}
\end{figure}
In this section, we present the results of simulations that show the performance of the \textsc{Constrained-SR}\ algorithm. We consider 2-dimensional jointly Gaussian arms with the covariance matrix $\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}$ and attributes as defined in Section~\ref{sec: lb}. We compare the performance of \textsc{Constrained-SR}\ with that of Infeasible First (IF), which also follows a Successive Rejects based framework but differs from \textsc{Constrained-SR}\ in the way arms are rejected. In round~$k$, IF removes the arm with the highest empirical constraint attribute (i.e., the most infeasible looking arm) if $A_k$ contains infeasible looking arms, and otherwise removes the arm with the highest empirical objective attribute (i.e., the arm that looks like the most suboptimal feasible arm). See Appendix~\ref{app: IF} for a formal description of this algorithm.
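One elimination round of the IF rule can be sketched in a few lines of Python (a hypothetical helper, not the paper's implementation; `mu1_hat` and `mu2_hat` stand for the empirical attribute estimates):

```python
def if_reject(survivors, mu1_hat, mu2_hat, tau):
    """Return the arm index that the IF rule removes from `survivors`.

    mu1_hat / mu2_hat map arm index -> empirical objective / constraint attribute;
    lower mu1 is better, and an arm looks infeasible when mu2_hat > tau.
    """
    infeasible = [i for i in survivors if mu2_hat[i] > tau]
    if infeasible:                                   # most infeasible-looking arm
        return max(infeasible, key=lambda i: mu2_hat[i])
    return max(survivors, key=lambda i: mu1_hat[i])  # worst feasible-looking arm
```

Running this rule for $K-1$ rounds, with the same phase lengths $n_k$ as \textsc{Constrained-SR}, reproduces the IF baseline used in the comparisons below.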
In the first instance, the mean vectors of the arms are $[1\ 0.95]^T,$ $[5\ 0.001]^T,$ and $[10\ 0.001]^T$. The threshold $\tau$, which is the upper bound for the mean of the second dimension, is fixed at~1. Thus, arm~1 is optimal, and arms~2 and~3 are feasible suboptimal. This instance is motivated by the scenario described in Figure~\ref{fig: algo_if}. The second instance that we consider is feasible and has three arms with the mean vectors $[1\ 0.995]^T$, $[2\ 1.005]^T$ and $[12\ 0.001]^T$ with $\tau=1$. Thus, arm 1 is optimal, arm 2 is a deceiver and arm 3 is feasible suboptimal. The third instance that we consider is also feasible and has four arms with the mean vectors $[0.3\ 0.45]^T$, $[0.35\ 0.45]^T$, $[0.2\ 0.8]^T$ and $[0.5\ 0.8]^T$ and $\tau=0.5$. Thus, arm 1 is optimal, arm 2 is feasible suboptimal, arm 3 is a deceiver and arm 4 is infeasible suboptimal. The fourth instance that we consider is infeasible and has four arms with the mean vectors $[0.3\ 1.6]^T,$ $[0.4\ 1.7]^T,$ $[0.2\ 1.1]^T,$ and $[0.5\ 1.2]^T$ and $\tau=1$. Thus, arm 3 is the optimal arm for this instance. The results of the simulations for each of these instances can be found in Figures~\ref{fig: csr_better}, \ref{fig: csr_factor}, \ref{fig: both_same} and \ref{fig: sim_infeasible} respectively. The algorithms were run for horizons up to 10000 and averaged over 100000 runs for the feasible instances and over 10000 runs for the infeasible instance. Empirical averages were used as the attribute estimators. Figure~\ref{fig: sims} shows the variation of $\log_{e}(e_T)$ with the horizon $T$ for these four instances. Note that the slope of this curve captures the (exponential) decay rate of the probability of error. In the case of the infeasible instance (Figure~\ref{fig: sim_infeasible}), the performance is nearly the same. 
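The sampling model underlying these experiments can be sketched as follows (illustrative Python; the seed, sample count, and arm mean below are arbitrary): each pull returns a 2-dimensional jointly Gaussian sample with unit variances and correlation $0.5$, generated via the Cholesky factor of the covariance matrix, and empirical averages of the two coordinates estimate $(\mu_1,\mu_2)$.

```python
import math
import random

def pull_arm(mu, n, rng):
    """Average of n pulls of an arm with mean vector mu = (mu1, mu2) and
    covariance [[1, 0.5], [0.5, 1]] (Cholesky: x2 = 0.5*z1 + sqrt(0.75)*z2)."""
    s1 = s2 = 0.0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        s1 += mu[0] + z1
        s2 += mu[1] + 0.5 * z1 + math.sqrt(0.75) * z2
    return s1 / n, s2 / n

rng = random.Random(0)
m1, m2 = pull_arm((1.0, 0.95), 5000, rng)  # estimates close to (1.0, 0.95)
```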
In Figures~\ref{fig: csr_factor} and~\ref{fig: both_same}, we once again observe that the decay rates of \textsc{Constrained-SR}\ and IF are identical; the probability of error under \textsc{Constrained-SR}\ appears to be smaller than that under IF by a constant factor in Figure~\ref{fig: csr_factor}. However, in Figure~\ref{fig: csr_better}, \textsc{Constrained-SR}\ demonstrates a superior decay rate, since it employs a more sophisticated elimination criterion using gaps inspired by the two-armed lower bound (as noted in Section~\ref{sec: algo}). \section{Proof of Conjecture \ref{thm: K arms lb} for special cases} \label{app: K arms lb} \subsection{All arms are infeasible} We first prove Conjecture \ref{thm: K arms lb} for the case when all arms are infeasible. Here, the error event corresponds to the algorithm incorrectly identifying the instance as being feasible. Informally, we would expect this to depend upon the minimum infeasibility gap among all arms, and this is indeed the case. This is similar to the Thresholding Bandit Problem (refer to \cite{tbp-locatelli16}) in the sense that one only has to identify whether each arm is above or below a given threshold, but it is a different problem, since here the algorithm only needs to determine whether \emph{some} arm lies below the threshold, rather than classify every arm. \begin{proof} Consider any alternative bandit model $\nu^{\prime}$ with at least one feasible arm. Extending \eqref{eqn: kaufmann_lemma1} for $K$ arms on the event $\mathcal{D}= \{ \hat{O}([K]) = 0 \}$, we get: \begin{align*} \sum_{i=1}^{K}\mathbb{E}_{\nu^{\prime}}\left[N_{i}(T)\right] \textrm{KL}\left(\nu^{\prime}(i), \nu(i)\right) \geq d\left(\mathbb{P}_{\nu^{\prime}}(\mathcal{D}), \mathbb{P}_{\nu}(\mathcal{D})\right), \end{align*} where $\mathbb{E}_{\nu}(\cdot)$ and $\mathbb{P}_{\nu}(\cdot)$ denote the expectation and the probability respectively with respect to the randomness introduced by the interaction of the algorithm with the bandit instance $\nu$, and $d(\cdot, \cdot)$ denotes the binary relative entropy.
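Both quantities in this inequality have simple closed forms for the Gaussian class $\mathcal{D}$: the binary relative entropy $d(p,q)$, and the per-arm KL divergence, which (as used in \eqref{eqn: gaps_lb_K} below) reduces to $a_1(\mu_1(i)-\mu_1^{\prime}(i))^2 + a_2(\mu_2(i)-\mu_2^{\prime}(i))^2$ for two arms sharing the covariance $\textrm{diag}(\frac{1}{2a_1},\frac{1}{2a_2})$. A minimal Python sketch (illustrative, with $a_1=a_2=1$ as defaults):

```python
import math

def binary_kl(p, q):
    """Binary relative entropy d(p, q), with endpoints clipped for stability."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def arm_kl(mu, mu_prime, a1=1.0, a2=1.0):
    """KL between two arms of class D, i.e. Gaussians sharing the covariance
    diag(1/(2*a1), 1/(2*a2)): KL = a1*(d mu1)^2 + a2*(d mu2)^2."""
    return a1 * (mu[0] - mu_prime[0]) ** 2 + a2 * (mu[1] - mu_prime[1]) ** 2
```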
Denote by $e_T(\nu)$ the probability of error of the algorithm on the instance $\nu$. We have that $e_{T}(\nu)=1-\mathbb{P}_{\nu}(\mathcal{D})$ and $e_{T}\left(\nu^{\prime}\right) \geq \mathbb{P}_{\nu^{\prime}}(\mathcal{D})$. As algorithm $\mathcal{L}$ is consistent, we have that for every $\epsilon>0, \exists T_{0}(\epsilon)$ such that for all $T \geq T_{0}(\epsilon), \mathbb{P}_{\nu^{\prime}}(\mathcal{D}) \leq \epsilon \leq \mathbb{P}_{\nu}(\mathcal{D})$. For $T \geq T_{0}(\epsilon)$, we have: \begin{align*} \sum_{i=1}^{K}\mathbb{E}_{\nu^{\prime}}\left[N_{i}(T)\right] \textrm{KL}\left(\nu^{\prime}(i), \nu(i)\right) &\geq d\left(\epsilon, 1-e_{T}(\nu)\right) \\& \geq(1-\epsilon) \log \frac{1-\epsilon}{p_{T}(\nu)}+\epsilon \log \epsilon. \end{align*} In the limit where $\epsilon$ goes to zero, we have, \begin{align*} \limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) &\leq \limsup_{T \rightarrow \infty} \sum_{j=1}^{K} \frac{\mathbb{E}_{\nu^{\prime}}\left[N_{j}(T)\right]}{T} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right) \\ &\leq \max_{j \in [K]} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right). \end{align*} Denote by $\mathcal{M}$ the set of two-armed bandit instances whose arms belong to $\mathcal{G}$. Minimizing the RHS over all feasible instances in $\mathcal{M}$ gives us: \begin{align*} \limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \underset{\nu^{\prime} \in \mathcal{M}: O(\nu^{\prime} ) \neq 0 }{\inf} \max_{j \in [K]} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right). \numberthis \label{eqn: kaufmann_K} \end{align*} Using the formula for the KL divergence between two multivariate distributions in \eqref{eqn: kaufmann_K} gives: \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \\ &\leq \underset{\nu^{\prime} \in \mathcal{M}: O(\nu^{\prime} ) \in \{0,1,2\} \setminus \{O(\nu)\}}{\inf} \underset{i \in [K]}{\max} a_1\left(\mu_{1}(i)-\mu_{1}^{\prime}(i)\right)^2 \\ &+ a_2\left(\mu_{2}(i)-\mu_{2}^{\prime}(i)\right)^2 . 
\numberthis \label{eqn: gaps_lb_K} \end{align*} The infimum is obtained in \eqref{eqn: gaps_lb_K} when $\nu^{\prime}$ is such that only arm 1 is feasible with $\mu_1(1)=\mu_1^{\prime}(1)$ and the other infeasible arms coincide with the corresponding infeasible arms of $\nu$ (i.e., both attributes are the same). Thus, we get \begin{align*} \limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq a_2 \left ( \mu_2(1)-\tau\right )^2. \end{align*} Note that when two arms are infeasible, we have $\Delta(i,j)=\sqrt{a_2} \min \left \{ \mu_2(i)-\tau, \mu_2(j)-\tau \right \}$. As arm 1 has the least infeasibility gap by definition, $H_1=\frac{K}{a_2\left ( \mu_2(1)-\tau\right )^2}$ and thus we get the form in Conjecture \ref{thm: K arms lb} with $d=K$. \end{proof} \subsection{All arms are feasible} When all arms are feasible, the error event corresponds to the algorithm incorrectly identifying the instance as being infeasible or recommending an arm other than the optimal arm. \section{Complete proof of Theorem \ref{thm: ub}} \label{app: ub} In this section, we complete the proof of Theorem~\ref{thm: ub} by proving Lemmas~\ref{lemma:2arm-upper_bound} and~\ref{lemma:3arm-upper_bound} for feasible and infeasible instances separately. \subsection{Underlying instance is feasible} \begin{proof}[Proof of Lemma \ref{lemma:2arm-upper_bound}] Here, the feasible instance consists of two arms and each arm has been drawn $n_1$ times. Let $\mathcal{B}_k$ denote the event that arm 1 is empirically feasible at the end of round $k$. Thus, \begin{align*} \prob{\mathcal{A}_1} &= \prob{\mathcal{A}_1 \cap \mathcal{B}_1} + \prob{\mathcal{A}_1 \cap \mathcal{B}_1^{\mathsf{c}}} \\ &\leq \prob{\mathcal{A}_1 \cap \mathcal{B}_1} + \prob{ \hat{\mu}_2^{1}(1) > \tau }. 
\shortintertext{Using \eqref{eqn: conc_inequality} to bound the last term from above, we get:} \prob{\mathcal{A}_1} &\leq \prob{\mathcal{A}_1 \cap \mathcal{B}_1} + 2 \exp \left ( -a_2 n_{1} (\tau-\mu_2(1))^2\right ) \\ &\leq \prob{\mathcal{A}_1 \cap \mathcal{B}_1} + 2 \exp \left ( -n_{1} \Delta(1,2)^2 \right ). \numberthis \label{eqn: 2 arm ub main} \end{align*} The event $\{{\mathcal{A}_1 \cap \mathcal{B}_1}\}$ corresponds to the set of outcomes where arm 1 is empirically feasible at the end of round 1 and is still rejected. This would require that arm 2 be empirically feasible and also be the empirically optimal arm. Thus, we get \begin{equation} \prob{\mathcal{A}_1 \cap \mathcal{B}_1} \leq \prob{\hat{\mu}_2^1(2) \leq \tau, \hat{\mu}_1^1(1) > \hat{\mu}_1^1(2)}. \label{eqn: consr_2arm_1_feasible} \end{equation} This can be bounded using \eqref{eqn: conc_inequality} depending upon the nature of arm 2. \noindent {\bf Case 1: Arm 2 is a feasible suboptimal arm.} Using \eqref{eqn: consr_2arm_1_feasible}, \begin{align*} \prob{\mathcal{A}_1 \cap \mathcal{B}_1} &\leq \prob{\hat{\mu}_1^1(1) > \hat{\mu}_1^1(2)} \\ &= \mathbb{P} \Bigl ( (\hat{\mu}_1^1(1) - \mu_1(1))- (\hat{\mu}_1^1(2) - \mu_1(2)) \\ &\qquad\qquad > (\mu_1(2)-\mu_1(1)) \Bigr ) \\ &\leq \prob{ (\hat{\mu}_1^1(1) - \mu_1(1))> \frac{(\mu_1(2)-\mu_1(1))}{2} } \\ &+ \prob{ (\hat{\mu}_1^1(2) - \mu_1(2)) < - \frac{(\mu_1(2)-\mu_1(1))}{2} }, \end{align*} where the last step follows from the fact that, on this event, at least one of $(\hat{\mu}_1^1(1) - \mu_1(1))$ and $( \mu_1(2) - \hat{\mu}_1^1(2))$ must exceed $\frac{(\mu_1(2)-\mu_1(1))}{2}$, together with a union bound. Thus, using \eqref{eqn: conc_inequality}: \begin{align*} \prob{\mathcal{A}_1 \cap \mathcal{B}_1} &\leq 4 \exp(-\frac{n_1}{4} a_1(\mu_1(2)-\mu_1(1))^2) \\ &\leq 4 \exp(- \frac{n_1}{4} \Delta(1,2)^2). 
\numberthis \label{eqn: 3 arm ub feasible subopt} \end{align*} \noindent {\bf Case 2: Arm 2 is a deceiver arm.} Similarly, using \eqref{eqn: consr_2arm_1_feasible}, \begin{align*} \prob{\mathcal{A}_1 \cap \mathcal{B}_1} &\leq \prob{\hat{\mu}_2^1(2) \leq \tau} \\ &\leq 2 \exp \left ( -n_{1} a_2 (\mu_2(2)-\tau)^2 \right ) \\ &\leq 2 \exp \left ( -n_{1} \Delta(1,2)^2 \right ). \end{align*} \noindent {\bf Case 3: Arm 2 is an infeasible suboptimal arm.} This case follows from Case~1 if $\delta(1,2)$ is dictated by the suboptimality gap of arm~2, and from Case~2 if $\delta(1,2)$ is dictated by the infeasibility gap of arm~2. \ignore{ From~\eqref{eqn: consr_2arm_1_feasible}, \begin{align*} & \prob{\mathcal{A}_1 \cap \mathcal{B}_1} \leq \min \left \{ \prob{ \hat{\mu}_2^1(2) \leq \tau}, \prob{ \hat{\mu}_1^1(1) > \hat{\mu}_1^1(2)} \right \} \\ &\leq 4 \exp(- \frac{n_1}{4} \max \left \{ a_1(\mu_1(2)-\mu_1(1))^2, a_2 (\mu_2(2)-\tau)^2 \right \} ) \\ &= 4 \exp(-\frac{n_1}{4} \delta(1,2)^2). \end{align*} } The statement of the lemma now follows, combining these three cases with~\eqref{eqn: 2 arm ub main}. \ignore{Substituting the above in \eqref{eqn: 2 arm ub main} gives us: \begin{align*} \prob{\mathcal{A}_1} &\leq 4 \exp(-\frac{n_1}{4} \delta(1,2)^2) + 2 \exp \left ( -n_{1} \Delta(1,2)^2\right ) \\ &\leq 6 \exp(-\frac{n_1}{4} \Delta(1,2)^2). \end{align*} } \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:3arm-upper_bound}] The feasible instance consists of three arms, each of which has been drawn $n_1$ times. Let $\mathcal{B}_k$ denote the event that arm 1 is empirically feasible at the end of round $k$. Proceeding similarly as in \eqref{eqn: 2 arm ub main}, \begin{align*} &\prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \\ &\leq \prob{ \mathcal{A}_1 \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} + 2 \exp \left ( - n_{1} \Delta(1,3)^2 \right ) \\ &=: \prob{\mathcal{G}} + 2 \exp \left ( - n_{1} \Delta(1,3)^2 \right ). 
\numberthis \label{eqn: 3 arm ub main} \end{align*} The term $\prob{\mathcal{G}}$ can be bounded depending upon the nature of arm~3. \noindent {\bf Case 1: Arm 3 is a feasible suboptimal arm.} \noindent Note that $\prob{\mathcal{G}}$ is the probability of arm 1 being rejected at the end of round 1, arm 2 being empirically optimal, and arm 1 looking empirically feasible. Event~$\mathcal{G}$ thus implies that arm~2 is also empirically feasible, and moreover, has a lower value of the objective attribute~$\hat\mu_1^1(\cdot)$ than arm~1. We now further decompose $\prob{\mathcal{G}}$ as follows: \begin{align*} \prob{\mathcal{G}} = &\prob{\mathcal{G} \cap\{\hat\Delta(2,1) = \hat\Delta(2,3)\}} \\ &\quad + \prob{\mathcal{G} \cap\{\hat\Delta(2,1) > \hat\Delta(2,3)\}} =: \prob{\mathcal{G}_1} + \prob{\mathcal{G}_2}. \end{align*} {\bf Bounding~$\prob{\mathcal{G}_1}$:} The event~$\mathcal{G}_1$ implies that arm~1 is rejected based on our tie-breaking rule. This can only occur if arm~3 appears empirically feasible, and moreover, appears superior to arm~1 on the objective attribute $\hat\mu_1^1(\cdot).$ Thus, \begin{align*} \prob{\mathcal{G}_1} \leq \prob{\hat\mu_1^1(1) \geq \hat\mu_1^1(3)} \leq 4 \exp(-\frac{n_1}{4} \delta(1,3)^2), \end{align*} where the last step follows from \eqref{eqn: 3 arm ub feasible subopt}. {\bf Bounding~$\prob{\mathcal{G}_2}$:} The event~$\mathcal{G}_2$ implies that arm~1 is rejected based on its estimated suboptimality gap alone. In this case, we must have \begin{align*} \hat\delta(2,3) = \hat\Delta(2,3) < \hat\Delta(2,1) \leq \hat\delta(2,1) = \sqrt{a_1} \left(\hat\mu_1^1(1) - \hat\mu_1^1(2)\right). \end{align*} Thus, $\prob{\mathcal{G}_2}$ is upper bounded by \begin{equation*} \prob{\{\sqrt{a_1} \left(\hat\mu_1^1(1) - \hat\mu_1^1(2)\right) > \hat\delta(2,3) \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}}. 
\end{equation*} Note that for $\sqrt{a_1} ( \hat\mu_1^1(1) - \hat\mu_1^1(2) ) \geq \hat\delta(2,3) $ to happen when arm~2 is empirically optimal and arm~1 is empirically feasible, it cannot be that arm~3 has a higher $\hat{\mu}^{1}_1(\cdot)$ than arm~1 (i.e., arm~3 appears inferior on the objective attribute), regardless of whether arm 3 is empirically feasible or infeasible. Thus, we have that: \begin{align*} \prob{\mathcal{G}_2} \leq \prob{\hat{\mu}_1^1(3) \leq \hat{\mu}_1^1(1) } \leq 4 \exp \left(-\frac{n_1}{4} \delta(1,3)^2 \right). \end{align*} Thus, for a feasible instance where arm 3 is suboptimal, the probability of arm 1 being rejected at the end of the first round when arm 2 is empirically optimal can be bounded as follows, combining \eqref{eqn: 3 arm ub main} with our bounds on $\prob{\mathcal{G}_1}$ and $\prob{\mathcal{G}_2}:$ \begin{equation} \prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \leq 10 \exp \left ( - \frac{n_1}{4} \Delta(1,3)^2 \right ). \label{eqn: 3 arm ub feasible subopt main} \end{equation} \noindent {\bf Case 2: Arm 3 is a deceiver arm.} Next, we consider the case where arm 3 is an infeasible arm. 
We bound~$\prob{\mathcal{G}}$ in the following way: \begin{align} \prob{\mathcal{G}} &= \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) \leq \tau \}} + \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) > \tau \}} \nonumber \\ &\leq \prob{\hat{\mu}^1_2(3) \leq \tau} + \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) > \tau \}} \nonumber \\ &\leq 2 \exp (-n_1 \Delta(1,3)^2) + \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) > \tau \}} \nonumber \\ &=2 \exp (-n_1 \Delta(1,3)^2) \nonumber \\ &\quad + \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \{\hat\Delta(2,1) = \hat\Delta(2,3)\}} \nonumber \\ &\quad + \prob{\mathcal{G} \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \{\hat\Delta(2,1) > \hat\Delta(2,3)\}} \nonumber \\ &=: 2 \exp (-n_1 \Delta(1,3)^2) + \prob{\mathcal{G}_1} + \prob{\mathcal{G}_2} \label{eq:lemma4_2} \end{align} \ignore{ \begin{align*} &\prob{ \mathcal{A}_1 \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &=\prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_2(3) \leq \tau \} \cap\mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &+ \prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &\leq \prob{\{\hat{\mu}^1_2(3) \leq \tau \} } \\ &+\prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &\leq 2 \exp (-n_1 a_2 (\mu_2(3)-\tau)^2 ) \\ &+ \prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &\leq 2 \exp (-n_1 \Delta(1,3)^2 ) \\ &+ \prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_2(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}}, \end{align*} } Note that $\prob{\mathcal{G}_1} = 0,$ since when there is a tie in the estimated suboptimality gaps of arms~1 and~3, with arm~1 appearing feasible, and arm~3 appearing infeasible, arm~3 would get rejected. 
Next, we bound~$\prob{\mathcal{G}_2}.$ \ignore{In this case, empirically infeasible arms would be rejected first in decreasing order of their $\hat\mu_2^1(\cdot)$ and empirically feasible arms would be rejected second in decreasing order of their $\hat\mu_1^1(\cdot)$. Thus, for arm 1 to be rejected when arm 2 is empirically optimal and arm 1 is empirically feasible, arm 3 has to be empirically feasible and hence, when we also have that arm 3 is empirically infeasible, this case does not contribute to arm 1 being rejected at the end of round 1.} Under the event~$\mathcal{G}_2,$ we have \begin{equation} \hat\Delta(2,1) > \hat\Delta(2,3) = \hat\delta(2,3) \geq \sqrt{a_2} \left ( \hat{\mu}_2^1(3) - \tau \right ). \label{eq:lemma4_1} \end{equation} \ignore{ Similar to the case of arm 3 being a suboptimal arm, we have that $\hat\Delta(2,3) = \hat\delta(2,3)$. We also have that \begin{align*} &\prob{ \mathcal{A}_1 \cap \{\hat{\mu}^1_1(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &= \prob{ \{ \hat\Delta(2,1) \geq \hat\Delta(2,3) \} \cap \{\hat{\mu}^1_1(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}} \\ &= \prob{ \{ \hat\Delta(2,1) \geq \hat\delta(2,3) \} \cap \{\hat{\mu}^1_1(3) > \tau \} \cap \mathcal{B}_1 \cap \{\hat{J}(A_k) = 2\}}, \end{align*} where \begin{align*} \hat\Delta(2,1) &\leq \sqrt{a_2} ( \tau - \hat\mu_2^1(2) ), \\ \hat\delta(2,1) &= \sqrt{a_1} \left ( \hat{\mu}_1^1(1) - \hat{\mu}_1^1(2) \right ), \\ \hat\delta(2,3) &\geq \sqrt{a_2} \left ( \hat{\mu}_2^1(3) - \tau \right ). \end{align*} } We now take two cases for the nature of arm 2. 
\noindent {\bf Case 2a: Arm~2 is suboptimal.} In this case, we use $$\sqrt{a_1} \left ( \hat{\mu}_1^1(1) - \hat{\mu}_1^1(2) \right ) \geq \hat\Delta(2,1).$$ Combining this bound with~\eqref{eq:lemma4_1}, \begin{align*} \prob{\mathcal{G}_2} &\leq \prob{\sqrt{a_1} \left ( \hat{\mu}_1^1(1) - \hat{\mu}_1^1(2) \right ) \geq \sqrt{a_2} \left ( \hat{\mu}_2^1(3) - \tau \right )} \\ &= \mathbb{P} \Bigl ( \{ \sqrt{a_1} \left ( \hat{\mu}_1^1(1) - \mu_1(1) - \hat{\mu}_1^1(2) + \mu_1(2) \right ) \\ &\qquad\qquad \geq \sqrt{a_2} \left ( \hat{\mu}_2^1(3) -\mu_2(3) \right ) + \delta(1,3) + \delta(1,2) \} \Bigr ) \\ &\leq 6\exp \left (- \frac{n_1}{9} (\delta(1,3) + \delta(1,2))^2 \right ) \\ &\leq 6 \exp \left (- \frac{n_1}{9} \delta(1,3)^2 \right ). \end{align*} \noindent {\bf Case 2b: Arm~2 is a deceiver.} When arm 2 is a deceiver arm, we use $$\sqrt{a_2} ( \tau - \hat\mu_2^1(2) ) \geq \hat\Delta(2,1).$$ Combining this bound with~\eqref{eq:lemma4_1}, \begin{align*} \prob{\mathcal{G}_2} &\leq \prob{\sqrt{a_2} \left( \tau - \hat\mu_2^1(2) \right) \geq \sqrt{a_2} \left ( \hat{\mu}_2^1(3) - \tau \right )}\\ &= \mathbb{P} \Bigl (\{ ( \mu_2(3) - \hat{\mu}_2^1(3) ) - (\hat\mu_2^1(2) -\mu_2(2) ) \\ & \qquad\qquad \geq ( \mu_2(2) - \tau )+ (\mu_2(3)- \tau) \} \Bigr ) \\ &\leq 4 \exp \left (- \frac{n_1}{4} a_2 (( \mu_2(2) - \tau )+ (\mu_2(3)- \tau))^2 \right ) \\ &\leq 4 \exp \left (- \frac{n_1}{4} \delta(1,3)^2 \right ) . \end{align*} Thus, combining the results of the two cases with~\eqref{eqn: 3 arm ub main} and~\eqref{eq:lemma4_2} gives us the following bound on the probability of arm~1 being rejected at the end of the first round when arm~2 is empirically optimal: \begin{equation*} \prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \leq 10 \exp \left (- \frac{n_1}{9} \Delta(1,3)^2 \right ). 
\label{eqn: 3 arm feasible ub deceiver} \end{equation*} \noindent {\bf Case 3: Arm 3 is an infeasible suboptimal arm.} This case follows from Case~1 if $\delta(1,3)$ is dictated by the suboptimality gap of arm~3, and from Case~2 if $\delta(1,3)$ is dictated by the infeasibility gap of arm~3. \ignore{ Next, we consider the case where arm 3 is an infeasible suboptimal arm. Note that both \eqref{eqn: 3 arm ub feasible subopt main} and \eqref{eqn: 3 arm feasible ub deceiver} are valid bounds for this case. In the former, $\delta(1,3)=\sqrt{a_1}(\mu_1(3)-\mu_1(1))$, while in the latter, $\delta(1,3)=\sqrt{a_2}(\mu_2(3)-\tau)$. Thus, we have that \begin{align*} &\prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \\&\leq 10 \exp \left (- \frac{n_1}{4} \max \left \{ a_1(\mu_1(3)-\mu_1(1))^2, a_2(\mu_2(3)-\tau)^2 \right \} \right ) \\ &=10 \exp \left (- \frac{n_1}{4} \delta(1,3)^2 \right ). \end{align*} Thus, combining all three cases, we get that, for a feasible instance, when arm 2 is empirically optimal, the probability of arm 1 being rejected at the end of round 1 can be bounded in the following way: \begin{align*} \prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} &\leq 10 \exp \left (- \frac{n_1}{9} \delta(1,3)^2 \right ) \\ & \leq 10 \exp \left (- \frac{n_1}{9} \Delta(1,3)^2 \right ) . \numberthis \label{eqn: 3 arm ub feasible final} \end{align*} } \end{proof} \subsection{Underlying instance is infeasible} \begin{proof}[Proof of Lemma \ref{lemma:2arm-upper_bound}] The infeasible instance consists of two arms, each of which has been drawn $n_1$ times. Note that arm~1 is rejected either if arm~1 appears empirically feasible, or if arm~2 appears favourable relative to arm~1 on the constraint attribute. Thus, $$\prob{\mathcal{A}_1} \leq \prob{\hat\mu_2^1(1) \leq \tau} + \prob{\hat\mu_2^1(2) \leq \hat\mu_2^1(1)}.$$ Each of the two terms above can be bounded from above using~\eqref{eqn: conc_inequality}, as demonstrated before, to yield the statement of the lemma. 
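To make the two-term bound concrete, consider a hypothetical infeasible instance (the numerical values here are ours, chosen purely for illustration) with $a_2=1$, $\tau=0$, $\mu_2(1)=0.2$ and $\mu_2(2)=0.5$. The decomposition above then reads \begin{align*} \prob{\mathcal{A}_1} &\leq \prob{\hat\mu_2^1(1) \leq \tau} + \prob{\hat\mu_2^1(2) \leq \hat\mu_2^1(1)} \\ &\leq 2 \exp \left ( -n_1 (0.2)^2 \right ) + 4 \exp \left ( - \frac{n_1}{4} (0.5-0.2)^2 \right ), \end{align*} where the first term follows directly from \eqref{eqn: conc_inequality} and the second from the same halving and union-bounding argument used in the feasible case.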
\end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:3arm-upper_bound}] The infeasible instance consists of three arms, each of which has been drawn $n_1$ times. Let $\mathcal{B}_k$ denote the event that arm~1 is empirically feasible at the end of round $k$. Proceeding similarly as in \eqref{eqn: 2 arm ub main}, \begin{align*} &\prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \\ &\leq \prob{ \mathcal{A}_1 \cap \mathcal{B}_1^{\mathsf{c}} \cap \{\hat{J}(A_k) = 2\} \cap \{\hat\mu_2^1(2), \hat\mu_2^1(3) > \tau \} } \\ &\quad + 6 \exp \left ( - n_{1} \Delta(1,3)^2 \right ), \numberthis \label{eqn: 3 arm ub main infeasible} \end{align*} the only difference being that here, we bound the probability of arm 1, 2 or 3 being empirically feasible using \eqref{eqn: conc_inequality}. We also use the fact that $\mu_2(1) < \mu_2(2) \leq \mu_2(3)$. The first term in \eqref{eqn: 3 arm ub main infeasible} is the probability of arm 1 being rejected at the end of round 1 when all three arms are empirically infeasible and arm 2 is empirically optimal. It thus implies $\hat\mu_2^1(1) >\hat\mu_2^1(3),$ yielding \begin{align*} &\prob{ \mathcal{A}_1 \cap \mathcal{B}_1^{\mathsf{c}} \cap \{\hat{J}(A_k) = 2\} \cap \{\hat\mu_2^1(2), \hat\mu_2^1(3) > \tau \} } \\ &\quad \leq \prob{\hat\mu_2^1(1) >\hat\mu_2^1(3)} \\ &\quad \leq 4 \exp \left (- \frac{n_1}{4} a_2 (\mu_2(3) - \mu_2(1))^2 \right ) \\ &\quad = 4 \exp \left (- \frac{n_1}{4} \delta(1,3)^2 \right ). \end{align*} Combining the above result and \eqref{eqn: 3 arm ub main infeasible}, we get that \begin{equation*} \prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \leq 10 \exp \left ( - \frac{n_1}{4} \Delta(1,3)^2 \right ). 
\label{eqn: 3 arm ub infeasible final} \end{equation*} \ignore{ Thus, combining \eqref{eqn: 3 arm ub feasible final} and \eqref{eqn: 3 arm ub infeasible final}, we get that the probability of rejecting arm 1 in any three armed instance at the end of round 1 when arm 2 is empirically optimal can be bounded in the following way: \begin{equation*} \prob{ \{\hat{J}(A_k) = 2\} \cap \mathcal{A}_1} \leq 10 \exp \left ( - \frac{n_1}{9} \delta(1,3)^2 \right ). \end{equation*} } \end{proof} \section{Proof of Theorem \ref{thm: 2 arms lb}} \label{app: 2_arm_lb_proof} \begin{proof} With some abuse of notation, we denote by $(J(\nu),F(\nu))$ the correct output for instance $\nu$. Consider any alternative bandit model $\nu^{\prime}=\left(\nu^{\prime}(1), \nu^{\prime}(2)\right)$ such that its correct output, $(J(\nu^{\prime}), F(\nu^{\prime})) \neq (J(\nu),F(\nu))$. Let $\mathcal{L}$ be a consistent algorithm. We apply Lemma 1 of \cite{kaufmann16} with the stopping time $\sigma=T$ a.s. on the event $\mathcal{H}= \{ \hat{J}([2]) = J(\nu) \} \cap \{ \hat{F}([2]) = F(\nu) \} $ to get: \begin{align*} \mathbb{E}_{\nu^{\prime}}\left[N_{1}(T)\right] \textrm{KL}\left(\nu^{\prime}(1), \nu(1)\right) &+\mathbb{E}_{\nu^{\prime}}\left[N_{2}(T)\right] \textrm{KL}\left(\nu^{\prime}(2), \nu(2)\right) \\ &\geq d\left(\mathbb{P}_{\nu^{\prime}}(\mathcal{H}), \mathbb{P}_{\nu}(\mathcal{H})\right), \numberthis \label{eqn: kaufmann_lemma1} \end{align*} where $\mathbb{E}_{\nu}(\cdot)$ and $\mathbb{P}_{\nu}(\cdot)$ denote the expectation and the probability, respectively, with respect to the randomness introduced by the interaction of the algorithm with the bandit instance $\nu$, and $d(\cdot, \cdot)$ denotes the binary relative entropy. Denote by $e_T(\nu)$ the probability of error of the algorithm on the instance $\nu$. We have that $e_{T}(\nu)=1-\mathbb{P}_{\nu}(\mathcal{H})$ and $e_{T}\left(\nu^{\prime}\right) \geq \mathbb{P}_{\nu^{\prime}}(\mathcal{H})$. 
As algorithm $\mathcal{L}$ is consistent, we have that for every $\epsilon>0, \exists T_{0}(\epsilon)$ such that for all $T \geq T_{0}(\epsilon), \mathbb{P}_{\nu^{\prime}}(\mathcal{H}) \leq \epsilon \leq \mathbb{P}_{\nu}(\mathcal{H})$. For $T \geq T_{0}(\epsilon)$, we have: \begin{align*} &\mathbb{E}_{\nu^{\prime}} \left[N_{1}(T)\right] \textrm{KL}\left(\nu^{\prime}(1), \nu(1)\right) +\mathbb{E}_{\nu^{\prime}}\left[N_{2}(T)\right] \textrm{KL}\left(\nu^{\prime}(2), \nu(2)\right) \\ &\quad \geq d\left(\epsilon, 1-e_{T}(\nu)\right) \geq(1-\epsilon) \log \frac{1-\epsilon}{e_{T}(\nu)}+\epsilon \log \epsilon. \end{align*} In the limit where $\epsilon$ goes to zero, we have, \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \\ &\leq \limsup_{T \rightarrow \infty} \sum_{j=1}^{2} \frac{\mathbb{E}_{\nu^{\prime}}\left[N_{j}(T)\right]}{T} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right) \\ &\leq \max_{j=1,2} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right). \end{align*} Denote by $\mathcal{M}$ the set of two-armed bandit instances whose arms belong to $\mathcal{D}$. Minimizing the RHS over all $\nu^{\prime} \in \mathcal{M}$ whose correct output differs from that of $\nu$ gives us: \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \\ &\leq \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) \neq J(\nu) \text{ or} \\ F(\nu^{\prime} ) \neq F(\nu) } }{\inf} \max_{j=1,2} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right). 
\numberthis \label{eqn: kaufmann} \end{align*} Using the formula for the KL divergence between two multivariate distributions in \eqref{eqn: kaufmann} gives: \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \\ &\leq \frac{1}{2} \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) \neq J(\nu) \text{ or} \\ F(\nu^{\prime} ) \neq F(\nu) } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \}. \numberthis \label{eqn: gaps_lb} \end{align*} Evaluating the RHS of \eqref{eqn: gaps_lb} for each type of two-armed bandit instance gives the required result. There are broadly two cases involved here: $\nu$ being a feasible instance and $\nu$ being an infeasible instance. The former has three subcases depending on the nature of the non-optimal arm, i.e., arm 2: it can be feasible suboptimal, a deceiver, or infeasible suboptimal. The general methodology used here is to minimize both terms inside the maximum subject to the constraints on the arms. \noindent \textbf{Case 1: $\nu$ is feasible and $\nu(2)$ is feasible suboptimal}\\ We first evaluate the infimum over the two cases: $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, and $F(\nu^{\prime} ) \neq F(\nu)$, and then find the minimum of these two cases. In the former case, we have that $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, i.e., both $\nu$ and $\nu^{\prime}$ are feasible but their optimal arms are different, while in the latter case, we have that $\nu^{\prime}$ is infeasible. We first consider the former case. WLOG, we assume that $J(\nu)=1$ and $J(\nu^{\prime})=2$. 
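Before treating the subcases, we record the explicit divergence used in passing from \eqref{eqn: kaufmann} to \eqref{eqn: gaps_lb}. Under the assumed model where each arm is a bivariate Gaussian with independent attributes of variances $1/a_1$ and $1/a_2$, shared across instances (an assumption we state here only to be consistent with the $\frac{1}{2}$ factor in \eqref{eqn: gaps_lb}), we have \begin{align*} \textrm{KL}\left(\nu^{\prime}(j), \nu(j)\right) = \frac{a_1}{2}\left(\mu_{1}(j)-\mu_{1}^{\prime}(j)\right)^2 + \frac{a_2}{2}\left(\mu_{2}(j)-\mu_{2}^{\prime}(j)\right)^2.\end{align*}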
\begin{itemize} \item \textit{Arm 1 of $\nu^{\prime}$ is feasible.}\\ In this case, we have that $\mu^{\prime}_2(1)\leq \tau$, and hence there are no restrictions on $\mu^{\prime}_2(1)$ and $\mu^{\prime}_2(2)$ (as long as they are below $\tau$). We thus set $\mu^{\prime}_2(1) = \mu_2(1) $ and $\mu^{\prime}_2(2)=\mu_2(2)$. It follows that: \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2),\mu_2^{\prime}(1) < \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mu_1^{\prime}(2) \leq \mu_1^{\prime}(1) } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 , \\ &a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \Bigr \} \\ &= \frac{a_1(\mu_{1}(2) - \mu_{1}(1))^2}{4}, \end{align*} where the infimum is attained midway between $\mu_{1}(1)$ and $\mu_{1}(2)$. \item \textit{Arm 1 of $\nu^{\prime}$ is infeasible.}\\ In this case, we have that $\mu^{\prime}_2(1)>\tau$, and hence there are no restrictions on $\mu^{\prime}_1(1)$, $\mu^{\prime}_1(2)$, and $\mu^{\prime}_2(2)$ (as long as it is below $\tau$). We thus set $\mu^{\prime}_1(1)=\mu_1(1)$, $\mu^{\prime}_1(2)=\mu_1(2)$ and $\mu^{\prime}_2(2)=\mu_2(2)$. 
Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2) < \tau < \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \tau < \mu_2^{\prime}(1) } }{\inf} a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 \\ &= a_2 (\tau - \mu_2(1))^2. \end{align*} \end{itemize} It is enough to evaluate the infimum only for the case where $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$ because in the case where $F(\nu^{\prime} ) \neq F(\nu)$, the infimum is at least $a_2 \max \left \{ (\tau - \mu_2(1))^2, (\tau - \mu_2(2))^2 \right \}$. Thus, combining the results of the cases discussed above, in the case of a feasible instance with optimal arm being arm 1 and arm 2 being a suboptimal feasible arm, we have that \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \\ &\frac{1}{2} \min \left \{a_2 (\tau - \mu_2(1))^2 , \frac{a_1(\mu_{1}(2) - \mu_{1}(1))^2}{4} \right \}. \end{align*} \noindent \textbf{Case 2: $\nu$ is feasible and $\nu(2)$ is a deceiver} \\ We first evaluate the infimum over the two cases: $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, and $F(\nu^{\prime} ) \neq F(\nu)$; and then find the minimum of these two cases. In the former case, we have that $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, i.e., both $\nu$ and $\nu^{\prime}$ are feasible but their optimal arms are different, while in the latter case, we have that $\nu^{\prime}$ is infeasible. We first consider the former case. WLOG, we assume that $J(\nu)=1$ and $J(\nu^{\prime})=2$. \begin{itemize} \item \textit{Arm 1 of $\nu^{\prime}$ is feasible.}\\ In this case, we have that $\mu^{\prime}_2(1)\leq \tau$. 
As we also have that $\mu_1(2) \leq \mu_1(1)$ and the only constraint on the first dimensions of the arms of instance $\nu^{\prime}$ is $\mu^{\prime}_1(2) \leq \mu^{\prime}_1(1)$, we set $\mu_1(2) = \mu^{\prime}_1(2)$ and $\mu_1(1) = \mu^{\prime}_1(1)$ to minimize each term inside the maximum. Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) < \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) < \tau } }{\inf} \max \Bigl \{ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , \\ & a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) < \tau } }{\inf} a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \\ &= a_2 (\mu_2(2) - \tau )^2 . \end{align*} \item \textit{Arm 1 of $\nu^{\prime}$ is infeasible.}\\ In this case, we have that $\mu^{\prime}_2(1)>\tau$, and hence there are no restrictions on $\mu^{\prime}_1(1)$ and $\mu^{\prime}_1(2)$. In this case, we set $\mu^{\prime}_1(1)=\mu_1(1)$, $\mu^{\prime}_1(2)=\mu_1(2)$. 
Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2) < \tau < \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2) < \tau < \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , \\ & a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= a_2 \max \left \{ (\tau - \mu_2(1))^2, (\mu_2(2) - \tau )^2 \right \}. \end{align*} \end{itemize} We now consider the case where $F(\nu^{\prime} ) \neq F(\nu)$, i.e., $\nu^{\prime}$ is an infeasible instance. Here, as there are no constraints on arm 2 of the instance $\nu^{\prime}$ apart from $\mu^{\prime}_2(2)\geq \mu^{\prime}_2(1)\geq \tau$, we set $\mu^{\prime}(2)=\mu(2)$. We also set $\mu^{\prime}_1(1)=\mu_1(1)$, as the only constraint on arm 1 of the instance $\nu^{\prime}$ is that $\mu^{\prime}_2(2)\geq \mu^{\prime}_2(1)\geq \tau$. Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) > \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &=\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) > \tau } }{\inf} a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 \\ &= a_2 (\tau - \mu_2(1) )^2 . 
\end{align*} Thus, combining the results of the three cases above, we have that for a two-armed feasible instance $\nu$ with arm 1 being the optimal arm and arm 2 being a deceiver arm, \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \\ &\frac{1}{2} \min \left \{a_2 (\tau - \mu_2(1))^2 , a_2 (\mu_2(2) - \tau )^2 \right \}. \end{align*} \noindent \textbf{Case 3: $\nu$ is feasible and $\nu(2)$ is infeasible suboptimal} \\ We first evaluate the infimum over the two cases: $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, and $F(\nu^{\prime} ) \neq F(\nu)$; and then find the minimum over these two cases. In the former case, we have that $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, i.e., both $\nu$ and $\nu^{\prime}$ are feasible but their optimal arms are different, while in the latter case, we have that $\nu^{\prime}$ is infeasible. We first consider the former case. WLOG, we assume that $J(\nu)=1$ and $J(\nu^{\prime})=2$. \begin{enumerate} \item \textit{Arm 1 of $\nu^{\prime}$ is feasible.} \\ In this case, as there are no restrictions on the second dimensions of the arms of $\nu^{\prime}$ apart from them being smaller than $\tau$, we set $\mu^{\prime}_2(1)=\mu_2(1)$, $\mu^{\prime}_2(2)=\tau$. Also, as $\mu^{\prime}_1(2) \leq \mu^{\prime}_1(1)$, to attain the infimum in \eqref{eqn: gaps_lb}, it is clear that $\mu^{\prime}_1(2) \geq \mu_1(1)$ and $\mu^{\prime}_1(1) \leq \mu_1(2)$. Hence, we also set $\mu^{\prime}_1(2) = \mu^{\prime}_1(1)$. 
Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) < \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &=\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(1) ,\mu_2^{\prime}(2) < \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2, \\ & a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\tau \right)^2 \Bigr \}. \\ \end{align*} For notational simplicity, let $M$ denote $\sqrt{a_1}(\mu_1(2)-\mu_1(1))$, $y$ denote $\sqrt{a_2}(\mu_2(2)-\tau)$ and $x$ denote $\sqrt{a_1}(\mu^{\prime}_1(1)-\mu_1(1))$. Note that $x<M$. Then the expression to be evaluated is: \begin{align*} \underset{x}{\inf} \max \{ x^2,y^2 +(M-x)^2 \}. \end{align*} We consider the following two cases: \begin{enumerate} \item $y>M$. \\ We have that \begin{align*} \frac{y^2+M^2}{2M} >M. \end{align*} As we also have that $x<M$, \begin{align*} x<\frac{y^2+M^2}{2M}, \end{align*} which gives that \begin{align*} y^2+(M-x)^2 > x^2. \end{align*} Thus, \begin{align*} \underset{x}{\inf} \max \{ x^2,y^2+(M-x)^2 \} &= \underset{x}{\inf} \{ y^2+(M-x)^2 \} \\ &= y^2. \end{align*} \item $y \leq M$. \\ We have that \begin{align*} \frac{y^2+M^2}{2M}\leq M. \end{align*} If \begin{align*} x \leq \frac{y^2+M^2}{2M}, \end{align*} then \begin{align*} \max \{ x^2,y^2 +(M-x)^2 \} &=y^2 +(M-x)^2, \end{align*} which is minimum at $x=\frac{y^2+M^2}{2M}$. Also, if \begin{align*} x &\geq \frac{y^2+M^2}{2M}, \end{align*} then \begin{align*} \max \{ x^2,y^2 +(M-x)^2 \} &= x^2, \end{align*} which is minimum at $x=\frac{y^2+M^2}{2M}$. In both cases, \begin{align*} \underset{x}{\inf} \max \{ x^2,y^2+(M-x)^2 \} &= \left ( \frac{y^2+M^2}{2M} \right)^2. 
\end{align*} \end{enumerate} Combining the two cases $y>M$ and $y\leq M$, we have that \begin{align*} \underset{x}{\inf} \max \{ x^2,y^2 +(M-x)^2 \}= \left ( \frac{y^2+z^2}{2z} \right)^2 , \end{align*} where $z=\max \{ y, M \}$. We also have that \begin{align*} \frac{z^2}{4} \leq \left ( \frac{y^2+z^2}{2z} \right)^2 \leq z^2, \end{align*} i.e., the infimum is within a constant factor of $z^2$. As our algorithm is motivated by these gaps, and in order to avoid comparisons between the first and the second dimensions, we simply work with the upper bound \begin{align*} \underset{x}{\inf} \max \{ x^2,y^2 +(M-x)^2 \} \leq z^2. \end{align*} \item \textit{Arm 1 of $\nu^{\prime}$ is infeasible.} \\ In this case, as there are no restrictions on the first dimensions of the arms of $\nu^{\prime}$, we set $\mu^{\prime}_1(1)=\mu_1(1)$, $\mu^{\prime}_1(2)=\mu_1(2)$. Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2) < \tau < \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2, a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &=\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2) < \tau < \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2, \\ & a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= a_2 \max \left \{ (\tau - \mu_2(1))^2, (\mu_2(2) - \tau )^2 \right \}. \end{align*} \end{enumerate} Next, we consider the case where $F(\nu^{\prime} ) \neq F(\nu)$, i.e., $\nu^{\prime}$ is an infeasible instance. In this case, as there are no restrictions on the first dimensions of the arms of $\nu^{\prime}$, we set $\mu^{\prime}_1(1)=\mu_1(1)$, $\mu^{\prime}_1(2)=\mu_1(2)$. Moreover, as $\mu_2(2)>\tau$ and $\mu_2^{\prime}(2)>\tau$, we set $\mu_2^{\prime}(2)=\mu_2(2)$.
Thus, \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mu_2^{\prime}(1), \mu_2^{\prime}(2) > \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2, a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &=\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mu_2^{\prime}(1), \mu_2^{\prime}(2) > \tau } }{\inf} a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 \\ &= a_2 (\tau - \mu_2(1))^2 . \end{align*} Thus, combining the results of all the cases discussed above, we have that for a two-armed feasible instance $\nu$ with arm 1 being the optimal arm and arm 2 being infeasible suboptimal, \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \\ & \quad \min \Bigl \{a_2 (\tau - \mu_2(1))^2 , \\ &\max \left \{ a_2(\mu_2(2)-\tau)^2, a_1(\mu_1(2)-\mu_1(1))^2 \right \} \Bigr \}. \end{align*} \noindent \textbf{Case 4: $\nu$ is infeasible} \\ We first evaluate the infimum over the two cases: $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, and $F(\nu^{\prime} ) \neq F(\nu)$; and then find the minimum of these two cases. In the former case, we have that $J(\nu^{\prime} ) \neq J(\nu), F(\nu^{\prime} ) = F(\nu)$, i.e., both $\nu$ and $\nu^{\prime}$ are infeasible but their optimal arms are different, while in the latter case, we have that $\nu^{\prime}$ is feasible. We first consider the former case. WLOG, we assume that $J(\nu)=1$ and $J(\nu^{\prime})=2$. As there are no restrictions on $\mu^{\prime}_1(1)$ and $\mu^{\prime}_1(2)$, we set $\mu^{\prime}_1(1)=\mu_1(1)$ and $\mu^{\prime}_1(2)=\mu_1(2)$. 
It follows that: \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ J(\nu^{\prime} ) =2 \\ \mu_2^{\prime}(2),\mu_2^{\prime}(1) > \tau } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \tau < \mu_2^{\prime}(2) \leq \mu_2^{\prime}(1) } }{\inf} \max \Bigl \{ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , \\ &a_2 \left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &= \frac{a_2(\mu_{2}(2) - \mu_{2}(1))^2}{4}, \end{align*} where the infimum is attained by placing $\mu_{2}^{\prime}(1)=\mu_{2}^{\prime}(2)$ midway between $\mu_{2}(1)$ and $\mu_{2}(2)$. Next, we consider the case where $F(\nu^{\prime} ) \neq F(\nu)$, i.e., $\nu^{\prime}$ is a feasible instance. As the only restriction is that at least one arm of $\nu^{\prime}$ is feasible, we set $\mu^{\prime}_1=\mu_1$ and $\mu^{\prime}_2(2)=\mu_2(2)$, and make arm 1 feasible. Thus, we have that \begin{align*} &\underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \mathcal{K}(\nu^{\prime}) \neq \emptyset } }{\inf} \max \Bigl \{ a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 \\ &+ a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 , a_1\left(\mu_{1}(2)-\mu_{1}^{\prime}(2)\right)^2 \\ &+ a_2\left(\mu_{2}(2)-\mu_{2}^{\prime}(2)\right)^2 \Bigr \} \\ &\leq \underset{ \substack{\nu^{\prime} \in \mathcal{M}: \\ \tau > \mu_2^{\prime}(1) } }{\inf} a_1\left(\mu_{1}(1)-\mu_{1}^{\prime}(1)\right)^2 + a_2\left(\mu_{2}(1)-\mu_{2}^{\prime}(1)\right)^2 \\ &= a_2 (\mu_2(1) - \tau)^2. \end{align*} Thus, combining the results of the two cases discussed above, we have that for a two-armed infeasible instance $\nu$ with arm 1 being the optimal arm, \begin{align*} &\limsup_{T \rightarrow \infty}-\frac{1}{T} \log e_{T}(\nu) \leq \\ & \min \Bigl \{\frac{a_2(\mu_{2}(2) - \mu_{2}(1))^2}{4} , a_2 (\mu_2(1) - \tau)^2 \Bigr \}.
\end{align*} \end{proof} \section{Problem formulation} \label{sec: prob} In this section, we describe the formulation of the constrained stochastic MAB problem studied here. We consider the fixed budget, pure exploration framework; the MAB instance is parameterized by a budget of $T$ rounds (a.k.a.\ arm pulls) and $K$ arms labelled $1, \ldots, K$, each of which is associated with an a priori unknown probability distribution. We consider a \emph{constrained} setting, wherein the optimal arm is defined to be the one that optimizes a certain attribute, subject to a constraint on another attribute. In a nutshell, the goal of the learner (a.k.a.\ the algorithm) is to identify the optimal arm in the instance, and also to flag the instance as being feasible or infeasible (i.e., indicating whether any or none of the arms meets the constraint, respectively), using the budget of $T$ arm pulls for exploration. The rest of this section is devoted to formalizing this problem. Each arm $i$ is associated with a possibly multi-dimensional distribution $\nu(i)$. These distributions are unknown to the learner. Let $\mathcal{C}$ denote the space of arm distributions, i.e., $\nu(i) \in \mathcal{C}$ for all~$i.$ We define the objective and constraint attributes $\mu_1$ and $\mu_2,$ respectively, to be functions from $\mathcal{C}$ to $\mathbb{R}$. We henceforth refer to $\mu_j(i) = \mu_j(\nu(i))$ as the value of attribute $j$ ($j \in \{1,2\}$) associated with arm~$i,$ with $\mu(i)$ denoting the vector $(\mu_1(i),\mu_2(i)).$ The user specifies a threshold $\tau \in \mathbb{R}$, which defines an upper bound for the attribute $\mu_2.$ An \textit{instance} of this constrained MAB problem is specified by $(\nu, \tau)$ where $\nu=(\nu(1), \ldots, \nu(K))$. The arms for which the constraint is satisfied, i.e., $\mu_2(i) \leq \tau$, are called \textit{feasible arms}; and the set of feasible arms is denoted by $\mathcal{K}(\nu)$.
The instance $(\nu,\tau)$ is said to be feasible if $\mathcal{K}(\nu) \neq \emptyset$, and infeasible if $\mathcal{K}(\nu) = \emptyset$. Consider a feasible instance. We define an arm to be \textit{optimal} if it has the least value of $\mu_1(\cdot),$ subject to the constraint $\mu_2(\cdot) \leq \tau$. For simplicity of exposition, we assume that there is a unique optimal arm.\footnote{As is well understood in the pure exploration, fixed budget setting, it is straightforward to handle the generalization where there are multiple optimal arms.} We formally denote the optimal arm as $$J([K]) = \argmin_{i \in \mathcal{K}(\nu)} \mu_1(i).$$ Here, $[K] = \{1,\ldots,K\}$. Without loss of generality, we assume $J([K]) = 1,$ i.e., arm~1 is optimal. An arm~$i$ is said to be \textit{suboptimal} if $\mu_1(i)>\mu_1(1)$ (irrespective of whether it is feasible or not). Further, an arm~$i$ is said to be a \textit{deceiver} if $\mu_1(i) \leq \mu_1(1),$ but $\mu_2(i) > \tau.$ The different types of arms in a feasible instance are illustrated in Figure~\ref{fig: feasible}. Next, consider an infeasible instance. In this case, an optimal arm is defined as the one with the smallest value of $\mu_2(\cdot),$ i.e., the one that is `least infeasible.' As before, we assume for simplicity that there is a unique optimal arm, denoted by $$J([K]) = \argmin_{i \in [K]} \mu_2(i).$$ An infeasible instance is illustrated in Figure~\ref{fig: infeasible}. In each round $t \in [T]$, the learner chooses an arm from the set of arms $[K]$ and observes a sample drawn from the corresponding distribution (independent of past actions and observations). At the end of $T$ rounds, the learner outputs a tuple $(\hat{J}([K]),\hat{F}([K]))$, where $\hat{J}([K]) \in [K]$ and $\hat{F}([K])$ is either \texttt{True} or \texttt{False}. The output $\hat{J}([K])$ is the learner's recommendation for the optimal arm. 
The output $\hat{F}([K])$ is a Boolean flag that indicates whether the learner deems the instance feasible (in which case $\hat{F}([K])=$\texttt{True}) or infeasible (in which case $\hat{F}([K])=$\texttt{False}). Let us denote by $F([K])$ the correct value of the feasibility flag for the instance, i.e., $F([K])=\texttt{True}$ if the instance is feasible and $F([K])=\texttt{False}$ otherwise. The algorithm is evaluated based on its probability of error $e_T$, defined as \begin{align*} e_T=\mathbb{P} \left ( \left \{ J([K]) \neq \hat{J}([K]) \right \} \cup \left \{ \hat{F}([K]) \neq F([K]) \right \} \right ). \end{align*} For notational simplicity, we have suppressed the dependence of $e_T$ on the algorithm and the instance. The goal is to design algorithms that minimize the probability of error. \begin{figure} \centering \begin{subfigure}{0.49\linewidth} \centering \begin{tikzpicture}[scale=0.6, dot/.style = {circle, draw, fill=#1, inner sep=2pt} ] \draw[draw,latex-] (0,5) +(0,0.5cm) node[above right] {$\mu_2$} -- (0,0); \draw[draw,-latex] (0,0) -- (5,0) -- +(0.5cm,0) node[below right] {$\mu_1$}; \draw[dashed] (0,1.8) node[left] {$\tau$} -- (5,1.8); \node[dot=black, label=below:{1}] at (2, 1) {}; \node[dot=blue, label=below:{2}] at (3, 1.5) {}; \node[dot=red, label=3] at (1, 2.8) {}; \node[dot=yellow, label=4] at (4, 3.5) {}; \end{tikzpicture} \caption{Feasible instance} \label{fig: feasible} \end{subfigure} \hfill \begin{subfigure}{0.49\linewidth} \centering \begin{tikzpicture}[scale=0.6, dot/.style = {circle, draw, fill=#1, inner sep=2pt} ] \draw[draw,latex-] (0,5) +(0,0.5cm) node[above right] {$\mu_2$} -- (0,0); \draw[draw,-latex] (0,0) -- (5,0) -- +(0.5cm,0) node[below right] {$\mu_1$}; \draw[dashed] (0,1.8) node[left] {$\tau$} -- (5,1.8); \node[dot=black, label=1] at (2, 2.3) {}; \node[dot=blue, label=2] at (3, 3) {}; \node[dot=red, label=3] at (4, 3.4) {}; \node[dot=yellow, label=4] at (1, 4) {}; \end{tikzpicture} \caption{Infeasible instance}
\label{fig: infeasible} \end{subfigure} \caption{Panel~(a) shows a feasible instance. Arm 1 is optimal, arm 2 is feasible suboptimal, arm 3 is a deceiver, and arm 4 is infeasible suboptimal. Panel~(b) shows an infeasible instance (arm 1 is optimal).} \label{fig:three graphs} \end{figure} Finally, as stochastic MAB algorithms require estimators for the attributes $\mu_1$ and $\mu_2$, which are functions of the data samples of each arm, we assume the following concentration properties for these estimators. Specifically, we assume that for $i \in \{1,2\}$ and distribution $G \in \mathcal{C}$, there exists an estimator $\hat{\mu}_{i,n}(G)$ of $\mu_i(G)$ which uses $n$ i.i.d. samples from~$G$, satisfying the following concentration inequality: There exists $a_i>0$ such that for all $\Delta>0$, \begin{align} \mathbb{P}\left(\left|\hat{\mu}_{i, n}(G)-\mu_{i}(G)\right| \geq \Delta\right) \leq 2 \exp \left(-a_{i} n \Delta^{2}\right). \label{eqn: conc_inequality} \end{align} Such concentration inequalities are commonly used for analyzing MAB algorithms.\footnote{The standard practice when dealing with classical (unconstrained) MAB problems is to specify both the set~$\mathcal{C}$ of arm distributions (for example, as the set of 1-sub-Gaussian distributions) and the attribute being optimized~$\mu_1$ (for example, the mean of the arm distribution). These choices then imply natural estimators~$\hat{\mu}_1$ and their corresponding concentration properties. In this work, to avoid working with a specific distribution class and a specific set of arm attributes, and to emphasize the generality of the proposed approach, we simply assume that attribute estimates satisfy concentration inequalities of the form~\eqref{eqn: conc_inequality}.
Moreover, this particular form for the concentration inequality is assumed only for ease of exposition; changes to this form (as might be needed, for example, if the arm distributions are sub-exponential or heavy-tailed) lead to minor modifications to our algorithms and bounds.} For instance, if the attributes can be expressed as expectations of sub-Gaussian or bounded random variables (which are themselves functions of the arm samples), concentration inequalities of the form \eqref{eqn: conc_inequality} would hold for the empirical average via the Cram\'er-Chernoff bound or the Hoeffding inequality, respectively (see Chapter~5 of \citet{lattimore}). Several risk measures, such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), also admit estimators with concentration properties of the form~\eqref{eqn: conc_inequality}; see \cite{wang2010,cassel,kolla2019,bhat2019}. Finally, we define the notion of \emph{consistency} of an algorithm. An algorithm is said to be consistent over~$(\mathcal{C},\tau)$ if, for all instances of the form~$(\nu,\tau),$ where $\nu \in \mathcal{C}^K,$ $\lim_{T \rightarrow \infty}e_T = 0.$
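To make the definitions of this section concrete, the following Python sketch (ours, purely illustrative; the function names are not part of the paper) computes the pair $(J, F)$ for a two-attribute instance and classifies the arms of a feasible instance exactly as in Figure~\ref{fig:three graphs}; arms are 0-indexed and a unique optimal arm is assumed.

```python
def classify_instance(mu1, mu2, tau):
    """Return (J, F): the 0-indexed optimal arm and the feasibility flag.

    mu1[i] is the objective attribute of arm i, mu2[i] its constraint
    attribute; arm i is feasible when mu2[i] <= tau.
    """
    K = len(mu1)
    feasible = [i for i in range(K) if mu2[i] <= tau]
    if feasible:
        # feasible instance: minimise mu1 over the feasible arms
        return min(feasible, key=lambda i: mu1[i]), True
    # infeasible instance: the optimal arm is the least infeasible one
    return min(range(K), key=lambda i: mu2[i]), False


def arm_types(mu1, mu2, tau):
    """Label the arms of a feasible instance (unique optimal arm assumed)."""
    J, F = classify_instance(mu1, mu2, tau)
    assert F, "arm_types is only defined for feasible instances"
    labels = []
    for i in range(len(mu1)):
        if i == J:
            labels.append("optimal")
        elif mu1[i] > mu1[J]:
            # suboptimal, irrespective of whether the constraint is met
            labels.append("feasible suboptimal" if mu2[i] <= tau
                          else "infeasible suboptimal")
        else:
            # mu1[i] <= mu1[J] but arm i is infeasible: a deceiver
            labels.append("deceiver")
    return labels
```

Running these functions on the arm coordinates of Figure~\ref{fig: feasible} with $\tau = 1.8$ recovers the labels given in the caption.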
\section{Introduction} The stochastic integrate-and-fire model for the membrane potential $V$ across a neuron in the brain has received a huge amount of attention since its introduction (see \cite{sacerdote:giraudo} for a comprehensive review). The central idea is to model $V$ by threshold dynamics, in which the potential is described by a simple linear (stochastic) differential equation up until it reaches a fixed threshold value $V_F$, at which point the neuron emits a `spike'. Experimentally, at this point an action potential is observed, whereby the potential increases very rapidly to a peak (depolarization phase) before decreasing quickly to a reset value (repolarization phase). It then relatively slowly increases once more to the resting potential (refractory period). Since spikes are stereotyped events, they are fully characterized by the times at which they occur. The integrate-and-fire model is part of a family of spiking neuron models which take advantage of this by modeling only the spiking times and disregarding the nature of the spike itself. Specifically, in the integrate-and-fire model we observe jumps in the membrane potential as the voltage is immediately reset to a value $V_R$ whenever it reaches the threshold $V_F$, which is motivated by the fact that the time period during which the action potential is observed is very small. Despite its simplicity, versions of the integrate-and-fire model have been able to predict the spiking times of a neuron with a reasonable degree of accuracy \cite{jolivet:lewis:gerstner, kistler:gerstner:vanhemmen}. Many extensions of the basic integrate-and-fire model have been studied in the neuroscientific literature, including ones in which attempts are made to include noise and to describe the situation when many integrate-and-fire neurons are placed in a network and interact with each other.
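The threshold-and-reset dynamics just described are straightforward to simulate. The following Python sketch (our own illustration, with arbitrary parameter values) integrates a single noisy leaky integrate-and-fire neuron by an Euler-Maruyama scheme and, in keeping with the spiking-neuron philosophy, records only the spike times.

```python
import random


def simulate_lif(T=1.0, dt=1e-3, lam=1.0, sigma=0.5,
                 v0=0.0, v_f=1.0, v_r=0.0, seed=0):
    """Euler-Maruyama simulation of dV = -lam*V dt + sigma dW, with V
    reset to v_r whenever it crosses the threshold v_f.  Returns the
    list of spike times, the only quantities the model keeps track of.
    """
    rng = random.Random(seed)
    v, t, spikes = v0, 0.0, []
    for _ in range(round(T / dt)):
        v += -lam * v * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_f:          # spike: record the time and reset
            spikes.append(t)
            v = v_r
    return spikes
```

With $\sigma = 0$ and $v_0 = 0$ the potential never reaches the threshold and no spikes are produced; with noise, the returned spike times fully characterize the simulated trajectory.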
In \cite{lewis:rinzel,ostojic:brunel:hakim} the following equation describing how the potential $V_i$ of the $i$-th neuron in a network of $N$ behaves in time is proposed: \begin{equation} \label{Brunel} \frac{d}{dt}V_i(t) = -\lambda V_i(t) + \frac{\alpha}{N}\sum_{j}\sum_{k} \delta_0(t-\tau_{k}^j) + \frac{\beta}{N}\sum_{j\neq i}V_j(t) + I^{ext}_i(t) + \sigma\eta_i(t) \end{equation} for $V_i(t) < V_F$ and where $V_i(t)$ is immediately reset to $V_R$ when it reaches $V_F$. Here $I^{ext}_i(t)$ represents the external input current to the neuron, $\eta_i(t)$ is the noise (a white noise) which is importantly supposed to be independent from neuron to neuron, and the constants $\lambda, \beta, \alpha$ and $\sigma$ are chosen according to experimental data. Moreover, the interaction term is described in terms of $\tau_{k}^j$, which is the time of the $k$-th spike of neuron $j$, and the Dirac function $\delta_0$. Precisely, it says that whenever one of the other neurons in the network spikes, the potential across neuron $i$ receives a `kick' of size $\alpha/N$. The Dirac mass interactions give rise to the same kind of instantaneous behavior as the integrate-and-fire model. Although it is a simplification of reality, it produces some interesting phenomena from a biological perspective, see \cite{ostojic:brunel:hakim}. In the case of a large network, i.e. when $N$ is large, many authors approximate the interaction term by an instantaneous rate $\nu(t)$, the so-called mean-firing rate (see for example \cite{brunel, brunel:hakim, ostojic:brunel:hakim, renart:brunel:wang}). However, in the neuroscience literature little attention is paid to how this convergence is achieved. Mathematically the mean-field limit as $N\to \infty$ must be taken, but, as a first step, this requires a careful analysis of the asymptotic well-posedness. 
This is precisely the purpose of the paper: to focus on the unique solvability of the resulting nonlinear limit equation (the analysis of the convergence being left to further investigations). At first glance such a question may seem classical, given the volume of results available that guarantee the existence of a solution to distribution dependent SDEs. However, as quickly became apparent in our analysis, in the excitatory case ($\alpha >0$) the problem is in fact a delicate one, for which, to our knowledge, there are no existing results available. This difficulty is due to the nature of the interactions, which introduce the strong possibility of a solution that `blows up' in finite time. The validity of the study of this question, and its non-trivial nature, is further justified by the fact that several authors have recently been interested in exactly the same problem from a PDE perspective (\cite{perthame, carillo:gonzales:gualdani:schonbek}). Despite some serious effort and very interesting related results on their part, we understand that they were not able to prove the existence and uniqueness of global solutions to the limit equation, which is the main result of the present paper. \subsection{Precisions:} We now make precise the nonlinear equation of interest. Firstly, since the mathematical difficulties lie within the jump interaction term, we suppose that there is no external input current ($I^{ext}_i(t) \equiv 0$), and that the interaction term is composed solely of the jump or reset part ($\beta = 0$). Although this is a non-trivial simplification from a neuroscience perspective, it still captures all the mathematical complexity of the resulting mean-field equation. Without loss of generality, we also take the firing threshold $V_F =1$ and the reset value $V_R=0$ for notational simplicity. 
The nonlinear stochastic mean-field equation under study here is then \begin{equation} X_{t}=X_0 + \int_{0}^{t}b(X_{s})ds+\alpha\mathbb E(M_{t})+\sigma W_{t}-M_{t},\qquad t\geq0,\label{Intro: nonlinear eq} \end{equation} where $X_0<1$ almost surely, $\alpha\in \mathbb R$, $\sigma>0$, $(W_{t})_{t\geq0}$ is a standard Brownian motion in $\mathbb R$ and $b:\mathbb{R}\to\mathbb R$ is Lipschitz continuous. In comparison with \eqref{Brunel}, $b$ must be thought of as $b(x)=-\lambda x$. Equation \eqref{Intro: nonlinear eq} is then intended to describe the potential of one \textit{typical} neuron in the infinite network, its jumps (or resets) being given by \[ M_{t}=\sum_{k\geq1}{\rm{\textbf{1}}}_{[0,t]}(\tau_{k}) \] where $(\tau_{k})_{k\geq1}$ stands for the sequence of hitting times of $1$ by the process $(X_t)_{t\geq0}$. That is, $(M_{t})_{t\geq0}$ counts the number of times $X_{t}$ hits the threshold before time $t$, so that $\mathbb E(M_t)$ denotes the \textit{theoretical} expected number of times the threshold is reached before $t$. Such a theoretical expectation corresponds to what we would envisage as the limit of the integral form of the interaction term \begin{equation*} \frac{1}{N} \int_{0}^t \sum_{j}\sum_{k} \delta_0(s-\tau_{k}^j) ds = \frac{1}{N} \sum_{j} \sum_{k} {\mathbf 1}_{\{\tau_{k}^j \leq t\}} \end{equation*} in \eqref{Brunel} when $N\to\infty$, assuming that neurons become asymptotically independent (as is observed in more classical particle systems -- see \cite{sznitman}). \subsection{PDE viewpoint and `blow-up' phenomenon:} As mentioned above, equation \eqref{Intro: nonlinear eq} has been rigorously studied from the PDE viewpoint before.
When $\sigma=1$, the Fokker-Planck equation for the density $p(t, y)dy = \mathbb{P}(X_t \in dy)$ is given by \[ \partial_t p(t, y) + \partial_y\left[\left(b(y) + \alpha e'(t)\right)p(t, y)\right] - \frac{1}{2}\partial^2_{yy}p(t, y) = \delta_0(y)e'(t), \qquad y< 1, \] where $e(t) = \mathbb E(M_t)$, subject to $p(t, 1) = 0$, $p(t, -\infty) = 0$, $p(0, y)dy = \mathbb{P}(X_0\in dy)$. Moreover, the condition that $p(t,y)$ must remain a probability density translates into the fact that \[ e'(t) = \frac{d}{dt}\mathbb E(M_t) = - \frac{1}{2}\partial_yp(t,1), \qquad \forall t>0, \] which describes the nonlinearity of the problem. In the case when $b(x) = -x$, this nonlinear Fokker-Planck equation is exactly the one studied in \cite{perthame} and \cite{carillo:gonzales:gualdani:schonbek}. Therein, the authors conclude that for some choices of parameters, no global-in-time solutions exist. The term `blow-up' is then used to describe the situation where the solution (defined in a weak sense) ceases to exist after some finite time. With our formulation, since $e'(t)$ corresponds to the mean firing rate of the infinite network, it is very natural to define a `blow-up' time as a time when $e'(t)$ becomes infinite. Intuitively, this can be understood as a point in time at which a large proportion of the neurons in the network all spike at exactly the same time, i.e., the network \textit{synchronizes}. In \cite{perthame} and \cite{carillo:gonzales:gualdani:schonbek} it is shown that, in the cases $\alpha =0$ and $\alpha < 0$ (the latter one being referred to as `self-inhibitory' in neuroscience), the nonlinear Fokker-Planck equation has a unique solution that does not blow up in finite time. However, in the so-called `self-excitatory' framework, i.e., for $\alpha>0$, existence of a solution for all time is left open.
Instead, a negative result is established \cite[Theorem 2.2]{perthame}, stating that, for any $\alpha >0$, it is possible to find an initial probability distribution $\mathbb{P}(X_0 \in dy)$ such that any solution must blow up in finite time, i.e., such that $e'(t) = \infty$ for some $t>0$. In other words, solvability in the long run may fail even for small values of $\alpha$. \subsection{Present contribution} In this paper we thus investigate the case $\alpha \in (0,1)$. Our main contribution is to show that, given a starting point $X_0 = x_0$, we can find an explicit $\alpha$ small enough so that there does indeed exist a unique global-in-time solution to \eqref{Intro: nonlinear eq} (and hence to the associated Fokker-Planck equation) which does not blow up (see Theorem \ref{solution up to T}). In view of the above discussions, our result complements and goes further than those found in \cite{perthame} and \cite{carillo:gonzales:gualdani:schonbek}, and the surprising difficulty of the problem is reflected in the rather involved nature of our proofs. As already said, Equation \eqref{Intro: nonlinear eq} can be thought of as being of McKean-Vlasov type, since the process $(X_t)_{t \geq 0}$ depends on the distribution of the solution itself. However, it is highly non-standard, since it actually depends on the distribution of the \textit{first hitting times} of the threshold by the solution. This renders the traditional approaches to McKean-Vlasov equations and propagation of chaos, such as those presented by Sznitman in \cite{sznitman}, inapplicable, because we have no \textit{a priori} smoothness on the law of the first hitting times. Thus our results are also new in this context. The general structure of the proof is at the intersection between probability and PDEs, the deep core of the strategy being probabilistic. The main ideas are inspired by the methods used to investigate the well-posedness of Markovian stochastic differential equations involving some non-trivial nonlinearity.
Precisely, the first point is to tackle unique solvability in small time: when the parameter $\alpha$ is (strictly) less than 1 and the density of the initial condition decays linearly at the threshold, it is proven that the system induces a natural contraction in a well-chosen space provided the time duration is small enough. In this framework, the specific notion of a solution plays a crucial role as it defines the right space for the contraction. Below, solutions are sought in such a way that the mapping $e : t \mapsto {\mathbb E}(M_t)$ is continuously differentiable: this is a crucial point as it permits us to handle the process $(X_t)_{t \geq 0}$ as a drifted Brownian motion. The second stage is then to extend existence and uniqueness from short to long times. The point is to prove that some key quantity is preserved as time goes by. Here, we prove that the system cannot accumulate too much mass in the vicinity of 1. Equivalently, this amounts to showing that the Lipschitz constant of the mapping $e : t \mapsto {\mathbb E}(M_t)$ cannot blow up in finite time. This is where the condition that $\alpha$ be small enough comes in: when $\alpha$ is small enough, we manage to give some estimates for the density of $X_{t}$ in the neighbourhood of $1$, the critical value of $\alpha$ explicitly depending upon the available bound of the density. Generally speaking, we make use of standard Gaussian estimates of Aronson type for the density. Unfortunately, the estimates we use are rather poor as they mostly neglect the right behavior of the density of $X_{t}$ at the boundary, thus yielding a non-optimal value. Nevertheless, they serve as a starting point for proving a refined estimate of the gradient of the density at the boundary: this is the required ingredient for proving that, at any time $t$, the mass of $X_{t}$ decays linearly in the neighbourhood of 1, uniformly in $t$ in compact sets, and thus to apply iteratively the existence and uniqueness argument in small time.
In this way, we prove by induction that existence and uniqueness hold on any finite interval and thus on the whole of $[0,\infty)$. It is worth mentioning that the main lines for proving the \textit{a priori} estimate on the Lipschitz constant of $e : t \mapsto {\mathbb E}(M_t)$ are probabilistic, thus justifying the use of a stochastic approach to handle the model. Indeed, the key step in the control of the Lipschitz constant of $e$ is an \textit{intermediate} estimate of H\"older type, the proof of which is inspired from the probabilistic arguments used by Krylov and Safonov \cite{krylov:safonov} for establishing the H\"older regularity of solutions to non-smooth PDEs. \subsection{Prospects} Our result is for a general Lipschitz function $b$, but there are two important specific cases that we keep in mind: the Brownian case when $b\equiv 0$ and the Ornstein-Uhlenbeck case when $b(x) = -\lambda x$, $\lambda\geq0$. The Ornstein-Uhlenbeck case is most relevant to neuroscience, but surprising difficulties remain in the purely Brownian case. In both these cases we are able to give an explicit $\alpha_0$ depending on the deterministic starting point $x_0$ such that \eqref{Intro: nonlinear eq} has a global solution for all $\alpha<\alpha_0$. However, our explicit values do not appear to be optimal: simulations suggest that for a given $x_0$ there exist solutions that do not blow up for $\alpha$ bigger than our explicit $\alpha_0$, while there exist solutions that blow up that do not satisfy the conditions of \cite{perthame}. Thus an interesting question is to determine for a given initial condition the critical value $\alpha_c$ such that for $\alpha<\alpha_c$ \eqref{Intro: nonlinear eq} does not exhibit blow-up. Another point is to relax the notion of solution in order to allow the mapping $e: t \mapsto {\mathbb E}(M_{t})$ to be non-differentiable (and thus to blow up). 
From the modeling point of view, this would make it possible to describe \textit{synchronization} in the network. Actually, based on our understanding of the problem and numerical simulations, our guess is that, in full generality, the mapping $e$ may be decomposed into a sequence of continuously differentiable pieces separated by isolated discontinuities. From this perspective, we feel that our work could serve as a basis for investigating the unique solvability of solutions that blow up: in order to design a proper uniqueness theory, it indeed seems essential to understand how general solutions behave in the continuously differentiable regime (which is the precise purpose of the present paper) and then how discontinuities can emerge (which is left for future work). The layout of the paper is as follows. We present the main results in Section \ref{sec: Main results}. Solutions are defined in Section \ref{Solution as a fixed point}, while Section \ref{section: Existence and uniqueness in small time} is devoted to proving the existence and uniqueness in small time. The proof of Theorem \ref{thm:gradientbd} is given in Section \ref{Long-time estimates}. \section{Main results} \label{sec: Main results} \subsection{Set-up} As stated in the introduction, we are interested in solutions to the nonlinear McKean-Vlasov-type SDE \begin{equation} X_{t}=X_0 + \int_{0}^{t}b(X_{s})ds+\alpha\mathbb E(M_{t})+W_{t}-M_{t},\qquad t\geq0,\label{simplified eq} \end{equation} where $X_0<1$ almost surely, $\alpha\in(0,1)$ and $(W_{t})_{t \geq 0}$ is a standard Brownian motion with respect to a filtration $({\mathcal F}_{t})_{t \geq 0}$ satisfying the usual conditions. The jumps, or resets, of the system are described by \begin{equation} \label{M} M_{t}=\sum_{k\geq1}{\rm{\textbf{1}}}_{[0,t]}(\tau_{k}), \quad \textrm{with}\quad \tau_{k} = \inf\{t>\tau_{k-1} : X_{t-} \geq 1\}, \quad k\geq 1 \ (\tau_{0}=0).
\end{equation} We assume that $b:(-\infty, 1]\to\mathbb R$ is Lipschitz continuous, satisfying \[ |b(x)|\leq \Lambda (|x|+1), \qquad |b(x) - b(y)| \leq K|x-y|, \qquad \forall x, y \in (-\infty, 1]. \] \begin{rem} \label{rem: sigma} By the time change $u = t/\sigma^2$, we could handle more general cases in which the intensity of the noise in \eqref{simplified eq} is $\sigma >0$ instead of $1$. \end{rem} As discussed in the introduction, the key point is to look for a solution for which $t\mapsto \mathbb E(M_t)$ is continuously differentiable, which would correspond to a solution that does not exhibit a finite-time blow-up. This leads to the following definition of a solution to \eqref{simplified eq}, where as usual $\mathcal{C}^1[0, T]$ denotes the space of continuously differentiable functions on $[0, T]$. \begin{defin}[Solution to \eqref{simplified eq}] \label{definition solution} The process $(X_t, M_t)_{0\leq t \leq T}$ will be said to be a solution to \eqref{simplified eq} up until time $T$ if $(M_t)_{0\leq t \leq T}$ satisfies \eqref{M}, the map $([0,T] \ni t \mapsto \mathbb E(M_t))\in \mathcal{C}^1[0, T]$ and $(X_t)_{0\leq t \leq T}$ is a strong solution of \eqref{simplified eq} up until time $T$. \end{defin} \subsection{Statements} Our main result is given by the following two theorems. The first guarantees that, when $\alpha$ is small enough, if there exists a solution to \eqref{simplified eq} on some finite time interval, then the solution does not blow up on this interval.
\begin{theorem}
\label{thm:gradientbd}
For a given $\epsilon \in (0,1)$, there exists a positive constant $\alpha_{0} \in (0,1]$, depending only upon $\epsilon$, $K$ and \(\Lambda\), such that, for any $\alpha \in (0, \alpha_{0})$ and any time $T>0$, there exists a constant ${\mathcal M}_{T}$, only depending on $T$, $\epsilon$, $K$ and $\Lambda$, such that, for any initial condition $X_{0}=x_{0} \leq 1 - \epsilon$, any solution to \eqref{simplified eq} according to Definition \ref{definition solution} satisfies $(d/dt)\mathbb E(M_{t}) \leq {\mathcal M}_{T}$, for all $t \in [0,T]$.
\end{theorem}
The second theorem is the main global existence and uniqueness result.
\begin{thm}
\label{solution up to T}
For any initial condition $X_{0}=x_{0} <1$ and $\alpha \in (0,\alpha_0)$, where $\alpha_0 = \alpha_0(x_0)$ is as in Theorem \ref{thm:gradientbd} (taking $\epsilon = 1 - x_0$), there exists a unique solution to the nonlinear equation \eqref{simplified eq} on any $[0, T]$, $T>0$, according to Definition \ref{definition solution}.
\end{thm}
The size of the parameter $\alpha_0$ in Theorem \ref{thm:gradientbd} is found explicitly in terms of $\epsilon$, $K$ and $\Lambda$ (Proposition \ref{prop:holderbd:1}); more precisely, it derives from the fact that, in the course of our proof, we must first show that, \textit{a priori}, any solution on $[0, T]$ to the nonlinear equation \eqref{simplified eq} with $X_0 = x_0 \leq 1-\epsilon$ satisfies\footnote{Throughout the paper, we use the convenient notation $\tfrac{1}{dx}\mathbb P(X \in dx)$ to denote the density at point $x$ of the random variable $X$ (whenever it exists).}
\begin{equation}
\label{key fact}
\frac{1}{dx}\mathbb{P}(X_t \in dx) < \frac{1}{\alpha}, \qquad t\in[0, T],
\end{equation}
in a neighborhood of the threshold $1$ (see Lemma \ref{lem:holderbd:5}).
It is this restriction that determines the $\alpha_0$ in Theorem \ref{thm:gradientbd}, so that it depends only on the best \textit{a priori} estimates available for the density on the left-hand side of \eqref{key fact}. The stated explicit choice for $\alpha_0$ in Proposition \ref{prop:holderbd:1} merely ensures that \eqref{key fact} holds for all $\alpha<\alpha_0$ for any potential solution.
\subsection{Illustration: The Brownian case}
To further highlight the criticality of the system, we illustrate here the blow-up phenomenon in the Brownian case. Consider equation \eqref{simplified eq} with $b\equiv 0$, set $e(t) = \mathbb E(M_t)$ and fix $X_0 = x_0<1$. Then the conditions of Theorem \ref{solution up to T} are trivially satisfied, and so we know that there exists a global-in-time solution for all $\alpha \in(0, \alpha_0(x_0))$. One may then ask whether we ever observe a blow-up phenomenon in this case. The affirmative answer can be seen by adapting the strategy in \cite{perthame} (note that the result in \cite{perthame} is written for a non-zero $\lambda$, but a similar argument applies when $\lambda=0$). For instance, choosing $x_{0}=0.8$, computations show that global-in-time solvability must fail for $\alpha \geq 0.539$. Moreover, tracking all the constants in the proof of Theorem \ref{thm:gradientbd} below, we find that $\alpha_0(0.8) \approx 0.104$, which suggests that the system's behavior changes radically between these two values. Such a radical change can be observed numerically by investigating the graphs of $e(t) = \mathbb E(M_{t})$ for different values of $\alpha$ in order to detect the emergence of some discontinuity. Using a particle method to solve the nonlinear equation with $b\equiv 0$, we numerically observe in Figure \ref{fig:x0_0.8_explosion} that the graph of $e$ is regular for \(\alpha=0.38\) but has a jump for \(\alpha=0.39\).
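For readers who wish to reproduce this experiment, a minimal particle-method sketch is given below in Python. It is only an illustrative reconstruction of the scheme described above, not the code actually used for the figure: the Euler step, the number of particles and the cascade treatment of simultaneous resets are our own (assumed) choices.

```python
import numpy as np

def simulate_e(alpha, x0=0.8, N=20000, T=2.0, dt=1e-4, seed=0):
    """Crude particle approximation of t -> e(t) = E[M_t] when b = 0.

    Each of the N particles follows a Brownian motion; a particle
    reaching 1 is reset by -1, and every reset pushes the whole
    population up by alpha/N (the empirical version of alpha*dE(M_t)).
    """
    rng = np.random.default_rng(seed)
    x = np.full(N, float(x0))
    jumps = 0  # total number of resets so far, so e(t) ~ jumps / N
    n_steps = int(round(T / dt))
    e_path = np.empty(n_steps)
    for k in range(n_steps):
        x += np.sqrt(dt) * rng.standard_normal(N)
        # Resolve resets until no particle is above the threshold;
        # the loop captures the cascade mechanism behind the blow-up.
        while True:
            hit = x >= 1.0
            n_hit = int(hit.sum())
            if n_hit == 0:
                break
            x[hit] -= 1.0            # reset of the hitting particles
            x += alpha * n_hit / N   # mean-field kick felt by everyone
            jumps += n_hit
        e_path[k] = jumps / N
    return np.linspace(dt, T, n_steps), e_path
```

For $\alpha$ slightly above the apparent threshold, the returned path of $e$ should display a sharp quasi-jump, produced by a macroscopic cascade of resets inside a single time step; below the threshold the path should stay regular, in line with Figure \ref{fig:x0_0.8_explosion}.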
Based on similar observations for other values of $\alpha$, it seems that global solvability fails for $\alpha \geq 0.39$ and holds for $\alpha \leq 0.38$.
\begin{center}
\begin{figure}[htb]
\includegraphics[width=8cm,height=40mm]{t_EMt_xinit0_8_alpha038et039.pdf}
\caption{Plot of $t\mapsto e(t)$ for \(x_0=0.8\), \(b(x)\equiv0\), \(\alpha=0.38\) (red) and \(\alpha=0.39\) (green).}
\label{fig:x0_0.8_explosion}
\end{figure}
\end{center}
In summary, we present in Figure~\ref{fig:6} the various regions of the $\alpha$-parameter space $(0, 1)$ for $x_0 = 0.8$. The region $\textbf{D}$ stands for the set of $\alpha$'s for which global solvability is proved to fail. The numerical experiments suggest that global solvability also fails in region $\textbf{C}$, while global solutions seem to exist for $\alpha$ in region $\textbf{B}$. In this article we prove that global solutions exist for $\alpha\in\textbf{A}$.
\setlength{\unitlength}{1cm}
\begin{center}
\begin{figure}[htb]
\begin{picture}(10,2.3)
\label{summary x_0=0.8}
\put(0, 1.5){\line(1, 0){10}}
\put(0,1.25){\line(0, 1){.5}}
\put(-0.1, 2){$0$}
\put(10,1.25){\line(0, 1){.5}}
\put(9.9, 2){$1$}
\put(5.4,1.25){\line(0, 1){.5}}
\put(5.1,2){{\small $0.54$}}
\put(5.45, 1.25){$\underbrace{\hspace{4.5cm}}_\textbf{D}$}
\put(3.9,1.25){\line(0, 1){.5}}
\put(3.95,2){{\small $0.39$}}
\put(3.95, 1.25){$\underbrace{\hspace{1.4cm}}_\textbf{C}$}
\put(3.8,1.25){\line(0, 1){.5}}
\put(3.05,2){{\small $0.38$}}
\put(1.05, 1.25){$\underbrace{\hspace{2.7cm}}_\textbf{B}$}
\put(1.0,1.25){\line(0, 1){.5}}
\put(0.5,2){{\small $\alpha_0(0.8)$}}
\put(0.05, 1.25){$\underbrace{\hspace{0.95cm}}_\textbf{A}$}
\end{picture}
\caption{Critical regions of $\alpha\in (0,1)$, for $x_0 = 0.8$ and $b(x) \equiv 0$.
\label{fig:6}} \end{figure} \end{center} \vspace{-0.5cm} \section{Solution as a fixed point} \label{Solution as a fixed point} In this section we identify a solution to the nonlinear equation \eqref{simplified eq} as a fixed point of an appropriate map on an appropriate space. This will reduce the problem of finding a solution to identifying a fixed point of this map. Let $T>0$. For a general function $e\in\mathcal{C}^1[0, T]$, consider the linear SDE \begin{equation} X_{t}^{e}=X_{0}+\int_{0}^{t}b(X_{s}^{e})ds+\alpha e(t)+W_{t}-M_{t}^{e},\qquad t\in[0, T],\qquad X_{0}<1\ \mathrm{a.s.}\label{gamma definition2} \end{equation} where $(W_{t})_{t \geq 0}$ is a standard Brownian motion, $\alpha\in(0, 1)$, \begin{equation} \label{def M_t} M_{t}^{e}=\sum_{k\geq1}{\rm{\textbf{1}}}_{[0,t]}(\tau_{k}^{e}) \end{equation} and $\tau_{k}^{e}=\inf\{t>\tau_{k-1}^{e}:X_{t-}^{e}\geq1\}$ for $k\geq1$, $\tau_{0}^{e}=0$. The drift function $b$ is assumed to be Lipschitz as above. Note that the solution to this SDE is well defined (by solving \eqref{gamma definition2} iteratively from any $\tau_{k}^e$ to the next $\tau_{k+1}^e$ and by noticing that the jumping times $(\tau^e_{k})_{k \geq 0}$ cannot accumulate in finite time as the variations of $(X_{t}^e)_{t \geq 0}$ on any $[\tau_{k}^e,\tau_{k+1}^e)$, $k \geq 0$, are controlled in probability). We then define the map $\Gamma$ by setting \begin{equation} \Gamma(e)(t):=\mathbb E(M_{t}^{e})\label{gamma definition}. \end{equation} We note that any fixed point of $\Gamma$ that is continuously differentiable provides a solution to the nonlinear equation according to Definition \ref{definition solution} and vice versa. Thus, it is natural to look for a fixed point of $\Gamma$ in a subspace of $\mathcal{C}^1[0, T]$ where we are careful to uniformly control the size of the derivative. 
Moreover, since it is clear from the definition that $\Gamma(e)(0) = 0$ and $t\mapsto \Gamma(e)(t)$ is non-decreasing for any $e \in \mathcal{C}^1[0, T]$, we in fact restrict the domain of $\Gamma$ to the closed subspace $\mathcal{L}(T,A)$ of $\mathcal{C}^1[0, T]$ defined by
\[
\mathcal{L}(T, A):=\left\{ e\in\mathcal{C}^{1}[0,T]: e(0)=0,\ e(s)\leq e(t)\ \forall s\leq t, \sup_{0\leq t\leq T}e'(t)\leq A\right\}
\]
for some $A\geq 0$. The map $\Gamma$ is thus defined as a map from ${\mathcal L}(T,A)$ into the set of non-decreasing functions on $[0,T]$. It in fact depends on $A$, as its domain of definition depends on $A$; for this reason, it should be denoted by $\Gamma^A$. Nevertheless, since the family $(\Gamma^A)_{A \geq 0}$ is consistent, in the sense that, for any $A' \leq A$, the restriction of $\Gamma^A$ to ${\mathcal L}(T,A')$ coincides with $\Gamma^{A'}$, we can use the simpler notation $\Gamma$.
The following \textit{a priori} stability result provides further information about where to look for fixed points; its proof is postponed to the end of the section.
\begin{prop}
\label{stability}
Given $T>0$, $a >0$ and $e \in {\mathcal L}(T,A)$, it holds that
\begin{equation*}
\label{stability equation}
\Bigl( \bigl( \forall t \in [0,T], \ e(t) \leq g_{a}(t) \bigr) \ {\rm and} \ \bigl( \mathbb E \bigl[(X_{0})_{+} \bigr] \leq a \bigr) \Bigr) \Rightarrow \bigl( \forall t \in [0,T], \ \Gamma(e)(t) \leq g_{a}(t)\bigr),
\end{equation*}
where $(x)_+$ denotes the positive part of $x\in\mathbb R$, with
\begin{equation}
\label{g(t)}
g_{a}(t):= \frac{a + (4 + \Lambda T^{1/2}) t^{1/2}}{1 - \alpha} \exp \left( \frac{2 \Lambda t}{1-\alpha} \right).
\end{equation}
\end{prop}
Letting $g(t) := g_{1}(t)$, $t \geq 0$, since $X_0<1$ a.s., it thus makes sense to look for fixed points of $\Gamma$ in the space
\begin{equation}
\label{H}
\mathcal{H}(T, A) :=\left\{ e\in \mathcal{L}(T, A) : e(t) \leq g(t)\right\}.
\end{equation}
We equip $\mathcal{H}(T, A)$ with the norm $\|e\|_{\mathcal{H}(T, A)}=\|e\|_{\infty,T}+\|e'\|_{\infty,T}$ inherited from $\mathcal{C}^1[0, T]$. Here and throughout the paper, $\|\cdot\|_{\infty, T}$ denotes the supremum norm on $[0, T]$. $\mathcal{H}(T, A)$ is then a complete metric space, since it is a closed subspace of $\mathcal{C}^{1}[0,T]$. For $e\in \mathcal{H}(T, A)$, Proposition \ref{stability} implies that $\Gamma(e)$ is finite and cannot grow faster than $g$, though it remains to show that $\Gamma(e)$ is differentiable and that its derivative is bounded by $A$ in order to check that $\Gamma$ indeed maps ${\mathcal H}(T,A)$ into itself, for suitable values of $A$ and $T$. The stability of ${\mathcal H}(T,A)$ under $\Gamma$ is discussed in Section \ref{subse:5:3}.
\subsection{Proof of Proposition \ref{stability}}
\begin{proof}[Proof of Proposition \ref{stability}]
Fix $T>0$. We first note that we may write
\begin{equation}
\begin{split}
M_{t}^{e}&= \sup_{s \leq t} \bigl\lfloor \bigl(Z_s^e\bigr)_{+} \bigr\rfloor, \\
Z_{t}^e &= X_{t}^{e}+M_{t}^{e} = X_{0} + \int_{0}^t b(X_{s}^e) ds + \alpha e(t) + W_{t}, \quad t \in [0,T], \label{integer representation}
\end{split}
\end{equation}
where $\lfloor x \rfloor$ denotes the integer part (floor) of $x\in\mathbb R$. Indeed, one can see that for $t\in[\tau^e_{k},\tau^e_{k+1})$, $k\geq0$,
\begin{align*}
\sup_{s\leq t}\left\lfloor \left(Z_{s}^{e}\right)_{+}\right\rfloor & = \max \biggl( \max_{0\leq j\leq k-1} \Bigl( \sup_{s\in[\tau^e_{j},\tau^e_{j+1})}\left\lfloor \left(X_{s}^{e}+j\right)_{+}\right\rfloor \Bigr) ,\sup_{s\in[\tau^e_{k},t)}\left\lfloor \left(X_{s}^{e}+k\right)_{+}\right\rfloor \biggr) \\
 & =\max \bigl( \max_{0\leq j\leq k-1} (j+1),k\bigr) =M_{t}^{e},
\end{align*}
using the fact that $X_t^e<1$ for all $t\geq0$. Then, given $t \in [0,T]$ such that $Z_{t}^e \geq 0$, let $\rho^e := \sup \{ s \in [0,t] : Z_{s}^e < 0\}$ ($\sup \emptyset = 0$).
Note that $\rho^e$ is not a stopping time and that it depends on $t$. Then, for $s \in [\rho^e,t]$,
\begin{equation}
\label{eq:12:9:1}
\vert b(X_{s}^e) \vert \leq \Lambda ( 1 + \vert X_{s}^e \vert ) \leq \Lambda \bigl( 1 + \vert Z_{s}^e \vert + M_{s}^e \bigr) = \Lambda \bigl( 1 + (Z_{s}^e)_{+} + M_{s}^e\bigr).
\end{equation}
By \eqref{integer representation}, we know that $M_{s}^e \leq \sup_{0 \leq r \leq s} (Z_{r}^e)_{+}$. Therefore,
\begin{equation*}
\vert b(X_{s}^e) \vert \leq \Lambda \left( 1+ 2 \sup_{0 \leq r \leq s} (Z_{r}^e)_{+} \right).
\end{equation*}
By \eqref{integer representation}, we obtain:
\begin{equation}
\label{eq:6:10:1}
Z_{t}^e \leq Z_{\rho^e}^e + \Lambda \int_{\rho^e}^{t} \left( 1 + 2 \sup_{0 \leq r \leq s} (Z_{r}^e)_{+} \right) ds + \alpha e(t) + W_{t} - W_{\rho^e}.
\end{equation}
If $\rho^e>0$, then $Z_{\rho^e}^e =0$, since $(Z_{s}^e)_{0 \leq s \leq T}$ is a continuous process. If $\rho^e=0$, then $Z_{\rho^e}^e = X_{0}$. In both cases, $Z_{\rho^e}^e \leq (X_{0})_{+}$. Therefore,
\begin{equation}
\label{eq:9:4:2}
(Z_{t}^e)_{+} \leq (X_{0})_{+} + \Lambda \int_{0}^{t} \left( 1 + 2 \sup_{0 \leq r \leq s} (Z_{r}^e)_{+} \right) ds + \alpha e(t) + 2 \sup_{0 \leq s \leq t} \vert W_{s} \vert.
\end{equation}
The above inequality obviously still holds if $Z_{t}^e < 0$. We then notice that the process $(\sup_{0 \leq r \leq t} (Z_{r}^e)_{+})_{0 \leq t \leq T}$ has finite values as $(Z^e_{t})_{0 \leq t \leq T}$ is continuous. Therefore, taking the supremum on the left-hand side, applying Gronwall's lemma and taking expectations, we deduce that $\mathbb E [ \sup_{0 \leq t \leq T} (Z_{t}^e)_{+}]$ is finite.
Taking expectations directly in \eqref{eq:9:4:2}, we see that
\begin{equation}
\label{eq:19:9:1}
\mathbb E \left[ \sup_{0 \leq s \leq t} (Z_{s}^e)_{+} \right] \leq \mathbb E [(X_{0})_{+}] + \Lambda \int_{0}^{t} \left( 1 + 2 \mathbb E \left[ \sup_{0 \leq r \leq s} (Z_{r}^e)_{+}\right] \right) ds + \alpha e(t) + 4 t^{1/2},
\end{equation}
for all $t\in[0, T]$. In particular, if ${\mathbb E}[(X_{0})_{+}] \leq a$, $e(t) \leq g_{a}(t)$ for all $ t \in [0,T]$ (where $g_a$ is given by \eqref{g(t)}), and $R^{e}$ is the deterministic hitting time
\begin{equation*}
R^{e} := \inf \left\{ t \in [0,T] : \mathbb E \left[ \sup_{0 \leq s \leq t} (Z_{s}^e)_{+} \right] > g_{a}(t) \right\} \quad (\inf \emptyset = + \infty),
\end{equation*}
then, for any $t \in (0,R^e \wedge T]$,
\begin{equation*}
\begin{split}
\mathbb E \left[ \sup_{0 \leq s \leq t} (Z_{s}^e)_{+} \right] &\leq a + \Lambda \int_{0}^{t} \bigl( 1 + 2 g_{a}(s) \bigr) ds + \alpha g_{a}(t) + 4 t^{1/2} \\
&< \bigl( a+ ( 4+ \Lambda T^{1/2} ) t^{1/2} \bigr)\biggl[ 1 + \int_{0}^t \frac{2 \Lambda}{1-\alpha}\exp\left( \frac{2 \Lambda s}{1-\alpha} \right) ds \biggr] + \alpha g_{a}(t) \\
&= (1-\alpha) g_{a}(t) + \alpha g_{a}(t) = g_{a}(t).
\end{split}
\end{equation*}
The strict inequality remains true when $t=0$ since ${\mathbb E}[(X_{0})_{+}] \leq a < g_{a}(0)$. Now, by the continuity of the paths of $Z^e$ and by the finiteness of $\mathbb E [ \sup_{0 \leq t \leq T} (Z_{t}^e)_{+}]$, we deduce that $\mathbb E [ \sup_{0 \leq s \leq t} (Z_{s}^e)_{+}]$ is continuous in $t$. Therefore, if $R^{e} < T$, then $\mathbb E [ \sup_{0 \leq s \leq R^e} (Z_{s}^e)_{+}]$ must be equal to $g_{a}(R^e)$, which contradicts the strict inequality above. By \eqref{integer representation}, this proves the announced bound.
\end{proof}
\section{Existence and uniqueness in small time}
\label{section: Existence and uniqueness in small time}
The main result of this section is the following:
\begin{thm}
\label{fixed point}
Suppose there exist $\beta,\epsilon>0$ such that $\mathbb{P}(X_{0}\in dx)\leq\beta(1-x)dx$ for any $x \in (1-\epsilon,1]$ and that the density of $X_{0}$ on the interval $(1-\epsilon,1]$ is differentiable at point 1. Then there exist constants $A_{1}\geq 0$ and $T_{1}\in(0,1]$, depending upon $\beta, \epsilon, \alpha, \Lambda$ and $K$ only, such that $\Gamma \bigl(\mathcal{H}(T_{1},A_{1}) \bigr) \subset \mathcal{H}(T_{1},A_{1})$. Moreover, for all $e_{1},e_{2}\in\mathcal{H}(T_{1},A_{1})$,
\[
\left\Vert \Gamma(e_{1})-\Gamma(e_{2})\right\Vert _{\mathcal{H}(T_{1},A_{1})}\leq\frac{1}{2}\left\Vert e_{1}-e_{2}\right\Vert _{\mathcal{H}(T_{1},A_{1})}.
\]
Hence there exists a unique fixed point of the restriction of $\Gamma$ to $\mathcal{H}(T_{1},A_{1})$, which provides a solution to \eqref{simplified eq} according to Definition \ref{definition solution} up until time $T_{1}$ (such that $[0, T_1] \ni t\mapsto \mathbb E(M_t)$ is in the space $\mathcal{H}(T_{1},A_{1})$).
\end{thm}
\subsection{Representation of $\Gamma$}
Let $T> 0$. As a first step towards understanding the map $\Gamma$ defined above, we note that, given $e \in {\mathcal L}(T,A)$, using the definitions we can write
\begin{equation*}
\begin{split}
\Gamma(e)(t) &=\mathbb E(M_{t}^{e})= \mathbb E \biggl( \sum_{k \geq 1} {\mathbf 1}_{[0,t]}(\tau_{k}^e) \biggr) \\
&= \sum_{k\geq1} \int_{0}^t \mathbb P \bigl( \tau_{k+1}^e \in (s,t] \, \vert \tau_{k}^e = s \bigr) \mathbb P( \tau_{k}^e \in ds) +\mathbb{P}(\tau_{1}^{e}\leq t),
\end{split}
\end{equation*}
where $\mathbb P( \tau_{k}^e \in ds)$ is a convenient abuse of notation for denoting the law of $\tau_{k}^e$ and ${\mathcal B}(\mathbb R) \ni A \mapsto \mathbb P(\tau_{k+1}^e \in A \vert \tau_{k}^e = s)$ stands for the conditional law of $\tau_{k+1}^e$ given $\tau_{k}^e=s$.
Here ${\mathcal B}(\mathbb R)$ is the Borel $\sigma$-algebra on $\mathbb R$. Moreover, observing that the solution $X^e$ to \eqref{gamma definition2} is a Markov process (which restarts from $0$ at time $\tau_{k}^e$ when $k \geq 1$), we may write
\begin{equation}
\Gamma(e)(t)=\mathbb E(M_{t}^{e})=\sum_{k\geq1}\int_{0}^{t} \mathbb{P}\bigl(\tau_{1}^{e^{\sharp_{s}}} \leq t-s \vert X_{0}^{e^{\sharp_{s}}}=0 \bigr)\mathbb{P}(\tau_{k}^{e}\in ds)+\mathbb{P}(\tau_{1}^{e} \leq t),\label{Markov property}
\end{equation}
where $e^{\sharp_{s}}$ stands for the mapping $([0,T-s] \ni t \mapsto e(t+s)-e(s) ) \in {\mathcal L}(T-s,A)$.
With this decomposition it is clear that, in order to analyse $\Gamma(e)$, and more importantly the derivative of $\Gamma(e)$ (recall that we are looking for a fixed point in $\mathcal{H}(T, A)$), we must analyse the densities of the first hitting times of a barrier by a \textit{non-homogeneous} diffusion process with a general Lipschitz drift term. Indeed, formally taking the derivative with respect to $t$ in \eqref{Markov property} introduces terms involving the density of $\tau_{1}^{e}$, where we recall that
\[
\tau_{1}^{e} = \inf\{t>0: X_t^e \geq 1\} = \inf\left\{t>0: X_{0}+\int_{0}^{t}b(X_{s}^{e})ds+W_{t} \geq 1 -\alpha e(t)\right\}.
\]
The analysis of such densities is well known to be a difficult problem, and the difficulties remain even in the case $b\equiv 0$. However, the fact that $e$ is continuously differentiable at least guarantees that the densities exist. In the case $b\equiv 0$ we refer to \cite[Theorem 14.4]{Peskir}. In the general case, the existence of these densities will be guaranteed in the next section by Lemma \ref{lem:killedprocess:1}.
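Before turning to the analytical estimates, let us note that the map $\Gamma$ is easy to approximate numerically. The following Python sketch (a crude Euler discretization of \eqref{gamma definition2}; all numerical parameters are illustrative assumptions, and multiple crossings within one time step are ignored, which is only reasonable for a small step) estimates $\Gamma(e)(t)=\mathbb E(M_{t}^{e})$ by Monte Carlo for a given input $e$.

```python
import numpy as np

def gamma_map(e, b, alpha, x0, T=1.0, dt=1e-3, N=10000, seed=0):
    """Monte Carlo approximation of Gamma(e)(t) = E[M_t^e] on a time grid.

    e : callable C^1 input function (e.g. a previous Picard iterate)
    b : callable Lipschitz drift, applied pointwise to a numpy array
    """
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    t_grid = np.linspace(dt, T, n_steps)
    x = np.full(N, float(x0))
    m = np.zeros(N)          # per-path jump counters M_t^e
    out = np.empty(n_steps)
    e_prev = e(0.0)
    for k, t in enumerate(t_grid):
        e_now = e(t)
        x += (b(x) * dt + alpha * (e_now - e_prev)
              + np.sqrt(dt) * rng.standard_normal(N))
        hit = x >= 1.0
        x[hit] -= 1.0        # reset by -1 at each crossing of the threshold
        m[hit] += 1.0
        e_prev = e_now
        out[k] = m.mean()    # empirical estimate of Gamma(e)(t)
    return t_grid, out
```

Iterating $e_{n+1}=\Gamma(e_{n})$ from $e_{0}\equiv 0$, with the output interpolated and fed back in, is the numerical counterpart of the fixed-point argument of Theorem \ref{fixed point}.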
\subsection{General bounds for the density of the first hitting time for the non-homogeneous diffusion process}
\label{General bounds}
Fix $T>0$, and for $e\in\mathcal{C}^1[0,T]$ consider the stochastic process $(\chi^e_{t})_{0 \leq t \leq T}$ which satisfies
\begin{equation}
\label{eq:21:10:2}
d \chi^e_{t} = b(\chi^e_{t}) dt + \alpha e'(t) dt + dW_{t}, \quad t \in [0,T], \quad \chi^e_{0} <1\ \mathrm{a.s},
\end{equation}
together with the stopping time
\begin{equation*}
\tau^e := \inf \{t \in [0,T] : \chi^e_{t} \geq 1\} , \qquad (\inf\emptyset = \infty).
\end{equation*}
Here $\alpha\in (0,1)$ and the drift $b$ is globally Lipschitz, exactly as above.
\begin{lemma}
\label{lem:killedprocess:1}
Let $e\in \mathcal{C}^1[0, T]$. Suppose there exist $\beta,\epsilon>0$ such that $\mathbb{P}(\chi_{0}\in dx)\leq\beta(1-x)dx$ for any $x \in (1-\epsilon,1]$ and that the density of $\chi_{0}$ on the interval $(1-\epsilon,1]$ is differentiable at point 1. Then:
\vspace{0.2cm}
\noindent\textbf{(i)} For any $t \in (0,T]$, the law of the diffusion \(\chi^e_t\) killed at the threshold is absolutely continuous with respect to the Lebesgue measure.
\vspace{0.2cm}
\noindent\textbf{(ii)} Denoting the density of \(\chi^e_t\) killed at the threshold by
\begin{equation}
\label{eq:21:10:1}
p_e(t,y) := \frac{1}{dy} \mathbb P \bigl( \chi^e_{t} \in dy, t < \tau^e\bigr), \quad t \in [0,T], \ y \leq 1,
\end{equation}
$p_e(t,y)$ is continuous in $(t, y)$ and continuously differentiable in $y$ on $(0,T] \times (-\infty,1]$ and admits Sobolev derivatives of order 1 in $t$ and of order 2 in $y$ in any $L^{\varsigma}$, $\varsigma \geq 1$, on any compact subset of $(0,T] \times (-\infty,1)$. When $\chi_{0} \leq 1-\epsilon$ a.s., it is actually continuous and continuously differentiable in $y$ on any compact subset of $([0,T] \times (-\infty,1]) \setminus (\{0\} \times (-\infty,1-\epsilon])$.
\vspace{0.2cm}
\noindent\textbf{(iii)} Almost everywhere on $(0,T] \times (-\infty,1)$, $p_e$ satisfies the Fokker-Planck equation:
\begin{equation}
\label{Fokker-Planck}
\partial_{t} p_e(t,y) + \partial_{y} \bigl[ \bigl( b(y) + \alpha e'(t) \bigr) p_e(t,y) \bigr] - \frac{1}{2} \partial^2_{yy} p_e(t,y) =0,
\end{equation}
with the Dirichlet boundary condition $p_e(t,1)=0$ and the measure-valued initial condition $p_e(0,y) dy = \mathbb P(\chi_{0} \in dy)$, $p_e(t,y)$ and $\partial_{y}p_e(t,y)$ decaying to $0$ as $y \rightarrow - \infty$.
\vspace{0.2cm}
\noindent\textbf{(iv)} The first hitting time $\tau^e$ has a density on $[0,T]$, given by
\begin{equation}
\frac{d}{dt}\mathbb{P}(\tau^e \leq t)=-\frac{1}{2}\partial_{y}p_e(t,1), \quad t \in [0,T], \label{deriv of stopping time}
\end{equation}
the mapping $[0,T] \ni t \mapsto \partial_{y}p_{e}(t,1)$ being continuous and its supremum norm being bounded in terms of $T$, $\alpha$, $\|e'\|_{\infty, T}$, $\beta$ and $b$ only.
\end{lemma}
Lemma \ref{lem:killedprocess:1} is quite standard. The analysis of the Green function of killed processes with smooth coefficients may be found in \cite[Chap. VI]{garroni:menaldi}. The need for considering Sobolev derivatives follows from the fact that $b$ is only Lipschitz. The argument to pass from the case of a smooth $b$ to the case of a merely Lipschitz $b$ is quite standard: it relies on Calder\'on--Zygmund estimates, see \cite[Eq. (0.4), App. A]{stroock:varadhan}, which permit the control of the $L^{\varsigma}$ norm of the second order derivatives on any compact subset of $(0,T] \times (-\infty,1)$. A complete proof may also be found in the unpublished notes \cite{NOTES}.
When $\chi_0 = x_0$ for some deterministic $x_0<1$, the conditions of the above lemma are certainly satisfied. Therefore, for $e\in \mathcal{C}^1[0, T]$ it makes sense to consider the density $p_e(t, y)$, $t\in (0, T]$, $y\leq 1$, of the process killed at $1$ started at $x_0$.
We will write $p_e(t, y) = p^{x_0}_e(t, y)$ in this case.
The following two key results on $\partial_{y} p_{e}(t,1)$ are standard adaptations of heat kernel estimates (see for instance \cite[Chapter 1]{friedman}) for killed processes. The first one may be found in \cite[Chap. VI, Theorem 1.10]{garroni:menaldi} when $b$ is smooth and bounded. As explained in the beginning of \cite[Chap. VI, Subsec. 1.5]{garroni:menaldi}, it remains true when $b$ is Lipschitz continuous and bounded. The argument for removing the boundedness assumption on $b$ is explained in \cite{delarue:menozzi} in the case of a non-killed process. As shown in the unpublished notes \cite[Cor 4.3]{NOTES}, it can be adapted to the current case. The second result then follows from the so-called \textit{parametrix} perturbation argument following \cite[Chapter 1]{friedman}. Again, the complete proof can be found in the unpublished notes \cite[Cor 5.3]{NOTES}.
\begin{prop}
\label{density estimate prop}
Let $e\in \mathcal{C}^1[0, T]$. Then there exists a constant $\kappa(T)$ (depending only on $T$ and the drift function $b$) which increases with $T$ such that for all $x_0<1$,
\[
|\partial_yp_e^{x_0}(t, 1)| \leq \kappa(T)(\|e'\|_{\infty, T} + 1)\frac{1}{t}\exp\left(-\frac{(1-x_0)^2}{\kappa(T) t}\right)
\]
for all $t \leq \min \{[(\|e'\|_{\infty, T}+1)\kappa(T)]^{-2},T \}$. In particular, $\kappa(T)$ is independent of $e$.
\end{prop}
\begin{prop}
\label{density difference prop}
Let $e_1, e_2\in \mathcal{C}^1[0, T]$ and let $A = \max\{\|e'_1\|_{\infty, T}, \|e'_2\|_{\infty, T}\}$. Then there exists a constant $\kappa(T)$ (depending only on $T$ and the drift function $b$) which increases with $T$ such that for all $x_0<1$,
\[
\left|\partial_yp_{e_{1}}^{x_{0}}(t, 1)-\partial_yp_{e_{2}}^{x_{0}}(t, 1)\right|\leq \kappa(T)(A + 1)\frac{1}{\sqrt{t}} \exp \left( -\frac{\left(1-x_0\right)^2}{\kappa(T)t} \right) \|e'_1-e'_2\|_{\infty,t},
\]
for all $t \leq\min\{[(A+1)\kappa(T)]^{-2},T\}$.
In particular, $\kappa(T)$ is independent of $e_1$ and $e_2$.
\end{prop}
\subsection{Application to $\Gamma$}
\label{subse:5:3}
In this section we return to the setting of Section \ref{Solution as a fixed point}, and apply the results of the previous subsection to complete the proof of Theorem \ref{fixed point}. The first result ensures the differentiability of $\Gamma(e)$ whenever $e\in\mathcal{L}(T, A)$, which is the first step in showing that $\Gamma$ is stable on the space $\mathcal{H}(T, A)$ for some $A$ (recall that $\mathcal{H}$ is simply a growth-controlled subspace of $\mathcal{L}$).
\begin{proposition}
\label{prop:differentiability:Gamma}
Let $e \in {\mathcal L}(T,A)$ and $X_{0}$ be such that there exist $\beta,\epsilon>0$ with $\mathbb{P}(X_{0}\in dx)\leq\beta(1-x)dx$ for any $x \in (1-\epsilon,1]$, and suppose that the density of $X_{0}$ on the interval $(1-\epsilon,1]$ is differentiable at point 1. Then the mapping $[0,T] \ni t \mapsto \Gamma(e)(t)$ is continuously differentiable. Moreover,
\begin{equation}
\frac{d}{dt}\bigl[ \Gamma(e) \bigr] (t) = - \int_{0}^{t}\frac{1}{2}\partial_{y}p_{e}^{(0,s)}(t-s,1) \frac{d}{ds} \bigl[ \Gamma(e) \bigr] (s) ds -\frac{1}{2}\partial_{y}p_{e}(t,1), \quad t \in [0,T], \label{Markov property 2}
\end{equation}
where $p_{e}$ represents the density of the process $X^e$ killed at $1$ and $p_{e}^{(0,s)}$ represents the density of the process $X^{e^{\sharp_{s}}}$ killed at $1$ with $X_{0}^{e^{\sharp_{s}}}=0$.
\end{proposition}
\begin{proof}
We first check that $\Gamma(e)$ is Lipschitz continuous on $[0,T]$.
Considering a finite difference in \eqref{Markov property} and using \eqref{deriv of stopping time}, we get, for $t,t+h \in [0,T]$,
\begin{equation}
\label{eq:3:10:10}
\begin{split}
&\Gamma(e)(t+h) - \Gamma(e)(t) = \sum_{k \geq 1} \int_{t}^{t+h} \mathbb P \bigl( \tau_{1}^{e^{\sharp_{s}}} \leq t+h-s \vert X^{e^{\sharp_{s}}}_{0}= 0 \bigr) \mathbb P (\tau_{k}^e \in ds) \\
&\hspace{15pt} - \frac{1}{2} \sum_{k \geq 1} \int_{0}^t \int_{t-s}^{t+h-s} \partial_{y} p_{e}^{(0,s)}(r,1)dr \mathbb P (\tau_{k}^e \in ds) - \frac{1}{2} \int_{t}^{t+h} \partial_{y} p_{e}(s,1) ds.
\end{split}
\end{equation}
By Lemma \ref{lem:killedprocess:1} (ii), we can handle the last two terms in the above to find a constant $C>0$ (which depends on $e$) such that
\begin{equation*}
\begin{split}
\Gamma(e)(t+h) - \Gamma(e)(t) &\leq \sum_{k \geq 1} \int_{t}^{t+h} \mathbb P \bigl( \tau_{1}^{e^{\sharp_{s}}} \leq t+h-s \vert X^{e^{\sharp_{s}}}_{0}= 0\bigr) \mathbb P (\tau_{k}^e \in ds) \\
&\hspace{15pt} + C h \bigl( 1 + \Gamma(e)(T) \bigr),
\end{split}
\end{equation*}
the last term on the right-hand side being finite thanks to \eqref{eq:9:4:2} and the argument following it. Moreover, by \eqref{eq:9:4:2} and Gronwall's lemma, we deduce that
\begin{equation}
\label{eq:3:10:11}
\begin{split}
&\lim_{h \searrow 0} \sup_{0 \leq s \leq T-h} \mathbb P \bigl( \tau_{1}^{e^{\sharp_{s}}} \leq h \vert X^{e^{\sharp_{s}}}_{0}=0 \bigr) \\
&= \lim_{h \searrow 0} \sup_{0 \leq s \leq T-h} \mathbb P \left( \sup_{0 \leq r \leq h} Z_{r}^{e^{\sharp_{s}}} \geq 1 \vert X^{e^{\sharp_{s}}}_{0}= 0 \right) = 0,
\end{split}
\end{equation}
where $Z^{e^{\sharp_{s}}}$ is given by \eqref{integer representation}. Therefore, there exists a mapping $\eta : \mathbb R_{+} \rightarrow \mathbb R_{+}$, vanishing and continuous at $0$, such that
\begin{equation*}
\Gamma(e)(t+h) - \Gamma(e)(t) \leq \eta(h) \bigl[ \Gamma(e)(t+h) - \Gamma(e)(t) \bigr] + C h \bigl( 1 + \Gamma(e)(T) \bigr).
\end{equation*} Choosing $h$ small enough, Lipschitz continuity easily follows. As a consequence, we can divide both sides of \eqref{eq:3:10:10} by $h$ and then let $h$ tend to $0$. By \eqref{eq:3:10:11}, we have for a given $t \in [0,T)$, \begin{equation*} \begin{split} &\lim_{h \searrow 0} h^{-1} \sum_{k \geq 1} \int_{t}^{t+h} \mathbb P \bigl( \tau_{1}^{e^{\sharp_{s}}} \leq t+h-s \vert X^{e^{\sharp_{s}}}_{0}=0 \bigr) \mathbb P (\tau_{k}^e \in ds) \\ &\leq \lim_{h \searrow 0}\left[ \sup_{0 \leq s \leq T-h} \mathbb P \bigl( \tau_{1}^{e^{\sharp_{s}}} \leq h \vert X^{e^{\sharp_{s}}}_{0}=0 \bigr) \frac{\Gamma(e)(t+h)-\Gamma(e)(t)}{h} \right] = 0. \end{split} \end{equation*} Handling the second term in \eqref{eq:3:10:10} by Lemma \ref{lem:killedprocess:1} and using the Lebesgue Dominated Convergence Theorem, we deduce that \begin{align} \frac{d}{dt}\Gamma(e)(t) =-\sum_{k\geq1}\int_{0}^{t}\frac{1}{2}\partial_{y}p_{e}^{(0,s)}(t-s,1)\mathbb{P}(\tau^e_{k}\in ds)-\frac{1}{2}\partial_{y}p_{e}(t,1).\nonumber \end{align} By Lemma \ref{lem:killedprocess:1}, we know that $\partial_{y}p_{e}^{(0,s)}(\cdot,1)$ and $\partial_{y} p_{e}(\cdot,1)$ are continuous (in $t$). This proves that $(d/dt)\Gamma(e)$ is continuous as well. Formula \eqref{Markov property 2} then follows from the relationship \begin{equation} \label{eq:5:10:1} \Gamma(e)(t) = \sum_{k\geq 1}\int_{0}^t \mathbb P(\tau_{k}^e \in ds), \quad t \in [0,T]. \end{equation} \end{proof} The second idea is to show that the difference between the derivatives of $\Gamma(e_1)$ and $\Gamma(e_2)$ is uniformly small in terms of the distance between two functions $e_1$ and $e_2$ in the space $\mathcal{H}(T, A)$ in small time. \begin{prop} \label{difference of derivatives} Let $T>0$ and $X_{0}$ be such that there exist $\beta,\epsilon>0$ with $\mathbb{P}(X_{0}\in dx)\leq\beta(1-x)dx$ for any $x \in (1-\epsilon,1]$, and suppose that the density of $X_{0}$ on the interval $(1-\epsilon,1]$ is differentiable at point 1. 
Suppose $e_{1},e_{2}\in\mathcal{H}(T, A)$ for some $A\geq0$. Then there exist a constant $\kappa(T)$, independent of $A$, $\beta$ and $\epsilon$, and increasing in $T$, and a constant $\tilde{\kappa}(T,\beta,\epsilon)$, independent of $A$ and increasing in $T$, such that
\[
\sup_{0 \leq s\leq t}\left|\frac{d}{ds} \Bigl[ \Gamma({e_{1}}) - \Gamma({e_{2}}) \Bigr] (s)\right|\leq (A+1) \tilde{\kappa}(T,\beta,\epsilon) \sqrt{t} \|e'_{1}-e'_{2}\|_{\infty,t},
\]
for $t \leq\min\{[(A+1)\kappa(T)]^{-2},T\}$.
\end{prop}
\begin{proof}
By \eqref{Markov property 2}, we have
\begin{equation}
\label{eq:4:10:1}
\begin{split}
\left|\frac{d}{dt} \Bigl[ \Gamma(e_{1}) - \Gamma(e_{2}) \Bigr](t) \right| &\leq\frac{1}{2}\int_{-\infty}^{1}\left| \Bigl[ \partial_{y}p_{e_{1}}^{x}-\partial_{y}p_{e_{2}}^{x}\Bigr](t,1)\right|\mathbb{P}(X_{0}\in dx) \\
&\hspace{5pt}+\frac{1}{2} \int_{0}^{t}\left| \Bigl[ \partial_{y}p_{e_{1}}^{(0,s)} -\partial_{y}p_{e_{2}}^{(0,s)} \Bigr] (t-s,1)\right|\frac{d}{ds} \Gamma(e_{1})(s)\, ds \\
&\hspace{5pt}+\frac{1}{2}\int_{0}^{t}\left|\partial_{y}p_{e_{2}}^{(0,s)}(t-s,1)\right|\left| \frac{d}{ds}\Bigl[ \Gamma(e_{1}) - \Gamma(e_{2})\Bigr](s)\right|ds \\
&:= \frac{1}{2} \bigl( L_{1} + L_{2} + L_{3} \bigr).
\end{split}
\end{equation}
Suppose $t\leq T$ and $\sqrt{t}\leq [(A+1)\kappa(T)]^{-1}$, where $\kappa(T)$ is as in Proposition~\ref{density difference prop}. The value of $\kappa(T)$ will be allowed to increase when necessary below. Considering the first term only, we can use Proposition \ref{density difference prop} to see that
\begin{align*}
L_{1} &\leq (A+1)\beta \kappa(T) \left(\int_{1- \epsilon}^{1} \frac{1}{\sqrt{t}} \exp \left( -\frac{(1-x)^{2}}{\kappa(T)t} \right) (1-x)d{x}\right)\|e'_{1}-e'_{2}\|_{\infty,t} \\
&\quad + (A+1) \kappa(T)\left(\int_{- \infty}^{1-\epsilon} \frac{1}{\sqrt{t}} \exp \left( -\frac{(1-x)^{2}}{\kappa(T)t} \right) \mathbb P (X_{0} \in dx) \right)\|e'_{1}-e'_{2}\|_{\infty,t}.
\end{align*} We deduce that there exists a constant $\tilde{\kappa}(T,\beta,\epsilon)>0$, which is independent of $A$ and which is allowed to increase as necessary from line to line below, such that \begin{equation} \label{eq:4:10:2} \begin{split} L_{1} &\leq (A+1) \beta \kappa(T) \sqrt{t}\left(\int_{0}^{\infty}z \exp \left( -\frac{z^{2}}{\kappa(T)} \right) dz\right)\|e'_{1}-e'_{2}\|_{\infty,t} \\ &\quad + (A+1) \kappa(T) \frac{1}{\sqrt{t}} \exp \left( -\frac{\epsilon^{2}}{\kappa(T)t} \right) \|e'_{1}-e'_{2}\|_{\infty,t} \\ &\leq (A+1) \tilde{\kappa}(T,\beta,\epsilon) \sqrt{t} \|e'_{1}-e'_{2}\|_{\infty,t}. \end{split} \end{equation} We can then use Proposition \ref{density difference prop} again to see that \begin{equation*} L_{2} \leq (A+1) \kappa(T) \sup_{0 < s \leq t} \left[ s^{-1/2} \exp \left( - \frac{1}{\kappa(T) s} \right) \right]\Gamma(e_{1})(t) \|e'_{1}-e'_{2}\|_{\infty,t}. \end{equation*} By Proposition \ref{stability} (since $e_1\in \mathcal{H}(T, A)$), we deduce that \begin{equation} \label{eq:4:10:3} L_{2} \leq (A+1)\kappa(T) \sqrt{t} \|e'_{1}-e'_{2}\|_{\infty,t}, \end{equation} where $\kappa(T)$ has been increased as necessary, and we have used the elementary inequality $\exp(-1/v)\leq v$ for all $v\geq0$. We finally turn to $L_{3}$ in \eqref{eq:4:10:1}. By Proposition \ref{density estimate prop}, we have that \begin{equation} \label{eq:4:10:5} \begin{split} \left|\partial_{y}p_{e_{2}}^{(0,s)}(t-s,1)\right| & \leq \kappa(T)(A+1)\frac{1}{(t-s)} \exp \left( -\frac{1}{\kappa(T)(t-s)} \right) \\ & \leq \kappa(T)(A+1), \end{split} \end{equation} again by increasing $\kappa(T)$. Thus, from \eqref{eq:4:10:1}, \eqref{eq:4:10:2}, \eqref{eq:4:10:3} and \eqref{eq:4:10:5}, we deduce \begin{align*} \left|\frac{d}{dt}\Big[\Gamma(e_{1}) -\Gamma(e_{2})\Big](t) \right| & \leq (A+1)\tilde{\kappa}(T,\beta,\epsilon) \sqrt{t} \|e'_{1}-e'_{2}\|_{\infty,t} \\ & \quad+ (A+1) \kappa(T)\int_{0}^{t}\left|\frac{d}{ds}\Big[\Gamma(e_{1}) -\Gamma(e_{2})\Big](s)\right|ds. 
\end{align*} By taking the supremum over all $s\leq t$ in the above, we have, for $t\leq (2\kappa(T)(A+1))^{-1}$ (which actually follows from the aforementioned condition $t\leq (\kappa(T)(A+1))^{-2}$ by assuming w.l.o.g. $\kappa(T) \geq 2$), \begin{equation*} \sup_{0 \leq s\leq t}\left|\frac{d}{ds}\Big[\Gamma(e_{1})-\Gamma(e_{2})\Big](s)\right| \leq 2(A+1) \tilde{\kappa}(T,\beta,\epsilon) \sqrt{t} \|e'_{1}-e'_{2}\|_{\infty,t}. \end{equation*} \end{proof} We can then finally complete this section with the proof of Theorem \ref{fixed point}. \begin{proof}[Proof of Theorem \ref{fixed point}] Choose $A_{1}=2\sup_{0\leq t\leq1}\left|(d/dt)\Gamma(0)(t)\right|+1$. Note that $A_{1}$ depends on $\beta$. Then choose $T_{1} \leq\min\{[(A_1 +1)\kappa(1)]^{-2},1\}$ such that \begin{equation} \sqrt{T_{1}}\tilde{\kappa}(1,\beta,\epsilon) (A_{1}+1) \leq\frac{1}{4},\label{smallness of time} \end{equation} where $\kappa(1)$ and $\tilde{\kappa}(1,\beta,\epsilon)$ are as in Proposition \ref{difference of derivatives}. By that result, if $e\in\mathcal{H}(A_{1},T_{1})$ then \begin{align*} \left|\frac{d}{dt}\Gamma(e)(t)\right| = \frac{d}{dt}\Gamma(e)(t)\leq\sqrt{t}\tilde{\kappa}(T_{1},\beta,\epsilon) (A_{1}+1) A_{1}+\frac{d}{dt}\Gamma(0)(t) \end{align*} for all $t\leq\min\{[(A_{1}+1) \kappa(T_{1})]^{-2},T_{1}\}=T_{1}$, where we have used $\|e'\|_{\infty,t}\leq A_{1}$. By definition, we have $T_{1}\leq1$ so that $\kappa(T_{1}) \leq \kappa(1)$ and $\tilde{\kappa}(T_{1},\beta,\epsilon)\leq \tilde{\kappa}(1,\beta,\epsilon)$. Therefore \[ \frac{d}{dt}\Gamma(e)(t)\leq\sqrt{t}\tilde{\kappa}(1,\beta,\epsilon) (A_{1}+1) A_{1}+\frac{d}{dt}\Gamma(0)(t) \] for all $t \leq T_{1}$. Hence, for all $t \leq T_{1}$, \begin{align*} \frac{d}{dt}\Gamma(e)(t) & \leq\frac{A_{1}}{2}+\sup_{0\leq t\leq1}\left(\frac{d}{dt}\Gamma(0)(t)\right)\leq A_{1} \end{align*} by \eqref{smallness of time}, so that $\Gamma(e)\in\mathcal{H}(A_{1},T_{1})$. 
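In more detail, the last estimate can be unpacked as follows (a worked restatement, using \eqref{smallness of time} together with the fact that the definition of $A_{1}$ gives $\sup_{0\leq t\leq1}(d/dt)\Gamma(0)(t)\leq(A_{1}-1)/2$): for all $t \leq T_{1}$, \begin{equation*} \frac{d}{dt}\Gamma(e)(t) \leq \sqrt{T_{1}}\,\tilde{\kappa}(1,\beta,\epsilon)(A_{1}+1)A_{1} + \sup_{0\leq t\leq1}\frac{d}{dt}\Gamma(0)(t) \leq \frac{A_{1}}{4}+\frac{A_{1}-1}{2} \leq A_{1}. \end{equation*} 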
To prove that $\Gamma$ is a contraction on $\mathcal{H}(A_{1},T_{1})$, first note that for $e\in\mathcal{H}(A_{1},T_{1})$ \[ \|e'\|_{\infty,T_{1}}\leq\|e\|_{\mathcal{H}(A_{1},T_{1})}\leq2\|e'\|_{\infty,T_{1}} \] by the mean-value theorem, since $e(0)=0$ and $T_{1} \leq 1$. Thus for any $e_{1},e_{2}\in\mathcal{H}(A_{1},T_{1})$ \begin{align*} &\left\Vert \Gamma(e_{1})-\Gamma(e_{2})\right\Vert _{\mathcal{H}(A_{1},T_{1})} \leq2\left\Vert \Gamma(e_{1})'-\Gamma(e_{2})'\right\Vert _{\infty,T_{1}} \\ &\quad \leq2\sqrt{T_{1}}\tilde{\kappa}(T_{1},\beta,\epsilon) (A_{1} +1) \|e'_{1}-e'_{2}\|_{\infty,T_{1}} \leq\frac{1}{2}\|e_{1}-e_{2}\|_{\mathcal{H}(A_{1},T_{1})}, \end{align*} by our choice of $T_{1}$ and using Proposition \ref{difference of derivatives} once more. Since $\mathcal{H}(A_{1},T_{1})$ is a closed subspace of $\mathcal{C}^{1}[0,T_{1}]$ (a complete metric space), the existence of a unique fixed point for $\Gamma$ follows from the Banach Fixed Point Theorem. \end{proof} \section{Long-time estimates} \label{Long-time estimates} In order to extend the existence and uniqueness result from small time to any arbitrarily prescribed interval, we need an \textit{a priori} bound for the Lipschitz constant of $e : t \mapsto \mathbb E(M_{t})$ on any finite interval $[0,T]$, which is given by Theorem \ref{thm:gradientbd}. The purpose of this section is to prove this result. As already mentioned, the key point is inequality \eqref{key fact}. Loosely, it says that, in \eqref{Brunel}, the particles that are below $1-dx$ at time $t$ receive a kick of order $\alpha \mathbb P(X_{t} \in dx) < dx$. In other words, only the particles close to $1$ can jump, which guarantees some control on the continuity of $e$. Precisely, Proposition \ref{prop:holderbd:1} gives a bound for the $1/2$-H\"older constant of $e$. Inequality \eqref{key fact} is proved by using \textit{a priori} heat kernel bounds when $\alpha$ is small enough, this restriction determining the value of $\alpha_{0}$ in Theorem \ref{thm:gradientbd}. 
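The role of the smallness of $\alpha$ can be anticipated by the following back-of-the-envelope computation (a heuristic only, not used in the proofs below): if the density of $X_{t}$ near the threshold is bounded by some $c>0$, then a jump mass $m$ kicks the whole population by $\alpha m$, which in turn pushes at most a further mass $c \alpha m$ over the threshold, and so on; the total mass of the resulting cascade is \begin{equation*} m \sum_{k \geq 0} (c\alpha)^{k} = \frac{m}{1-c\alpha}, \end{equation*} which is finite precisely when $c \alpha < 1$. This explains the factor $(1-c\alpha)^{-1}$ appearing in the constants below. 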
Once the $1/2$-H\"older constant of $e$ has been controlled, we provide in Lemma \ref{lem:gradientbd:41} a H\"older estimate of the oscillation (in space) of $p$ in the neighbourhood of $1$. The proof is an adaptation of \cite{krylov:safonov}. Finally, in Proposition \ref{prop:gradientbd:5}, a barrier technique yields a bound for the Lipschitz constant of $p$ in the neighbourhood of $1$. In the whole section, for a given initial condition $X_{0}=x_{0} <1$, we thus assume that there exists a solution to \eqref{simplified eq} according to Definition \ref{definition solution} i.e. such that $e: [0,T] \ni t \mapsto \mathbb E(M_{t})$ is continuously differentiable. \subsection{Reformulation of the equation and \textit{a priori} bounds for the solution} In the whole proof, we shall use a reformulated version of \eqref{simplified eq}, in a similar way to Proposition \ref{stability} (see \eqref{integer representation}). Indeed, given a solution $(X_{t},M_{t})_{0 \leq t \leq T}$ to \eqref{simplified eq} on some interval $[0,T]$ according to Definition \ref{definition solution}, we set $Z_{t} = X_{t} + M_{t}$, $t \in [0,T]$. Then $(Z_{t})_{0 \leq t \leq T}$ has continuous paths and satisfies \begin{equation} \label{eq:gradientbd:2} Z_{t} = X_{0} + \int_{0}^t b(X_{s}) ds + \alpha \mathbb E(M_{t}) + W_{t}, \quad t \in [0,T]. \end{equation} where \begin{equation} \label{eq:7:9:1} M_{t} = \lfloor \bigl( \sup_{0 \leq s \leq t} Z_{s} \bigr)_{+} \rfloor = \sup_{0 \leq s \leq t} \lfloor (Z_{s})_{+} \rfloor. \end{equation} The following is easily proved by adapting the proof of Proposition \ref{stability}: \begin{lemma} \label{lem:gradientbd:1'} There exists a constant $B(T,\alpha,b)$, only depending upon $T$, $\alpha$, $b$ and non-decreasing in $\alpha$, such that \begin{equation} \label{eq:4:9:1} \sup_{0 \leq t \leq T} e(t) = e(T) \leq \mathbb E \left[ \sup_{0 \leq t \leq T} (Z_{t})_{+} \right] \leq B(T,\alpha,b). 
\end{equation} A possible choice for $B$ is \begin{equation*} B(T,\alpha,b) = \frac{\mathbb E[(X_{0})_{+}] + 4 T^{1/2} + \Lambda T}{1 - \alpha} \exp \left( \frac{2 \Lambda T}{1-\alpha} \right). \end{equation*} \end{lemma} \subsection{Local H\"older bound of the solution} We now turn to the critical point of the proof. Indeed, in the next subsection, we shall prove that, for $\alpha$ small enough, the function $t \mapsto e(t) = \mathbb E(M_{t})$ generated by some solution to \eqref{simplified eq} according to Definition \ref{definition solution} (so that $e$ is continuously differentiable) satisfies an \textit{a priori} $1/2$-H\"older bound, with an explicit H\"older constant. This acts as the keystone of the argument to extend the local existence and uniqueness result into a global one. As a first step, the proof consists of establishing a local H\"older bound for $e$ in the case when the probability that the process $X$ lies in the neighbourhood of $1$ is not too large. \begin{lemma} \label{lem:holderbd:5} Consider a solution $(X_{t})_{0 \leq t \leq T}$ to \eqref{simplified eq} on some interval $[0,T]$, with $T>0$ and initial condition $X_{0}=x_{0}<1$. Assume in addition that there exists some time $t_{0} \in [0,T]$ and two constants $\epsilon \in (0,1)$ and $c \in (0,1/\alpha)$ such that for any Borel subset $A \subset [1-\epsilon,1]$, \begin{equation} \label{eq:holderbd:1} \mathbb P (X_{t_{0}} \in A ) \leq c \vert A \vert, \end{equation} where $\vert A \vert$ stands for the Lebesgue measure of $A$. Then, with \begin{equation*} {\mathcal B}_0 = \frac{ \exp(2 \Lambda) [(8 + 5c + 8\epsilon^{-1}) \Lambda + 4 (2+ c+ \epsilon^{-1}) ]}{1- c \alpha}, \end{equation*} it holds that, for any $h \in (0,1)$, \begin{equation*} \left. \begin{array}{l} {\mathcal B}_{0} \exp(2\Lambda h) h^{1/2} \leq \epsilon/2 \\ t_{0}+h \leq T \end{array} \right\} \Rightarrow e(t_{0}+h) - e(t_{0}) \leq {\mathcal B}_{0} h^{1/2}. 
\end{equation*} \end{lemma} \begin{proof} By the Markov property, we can assume $t_{0}=0$, with $T$ being understood as $T-t_{0}$. Indeed, setting \begin{equation} \label{eq:12:9:2} X_{t}^{\sharp_{t_{0}}} := X_{t_{0}+t}, \quad t \in [0,T-t_{0}], \end{equation} we observe that, for $t \in [0,T-t_{0}]$, \begin{equation} \label{eq:12:9:3} X_{t}^{\sharp_{t_{0}}} = X_{t_{0}} + \int_{0}^t b(X_{r}^{\sharp_{t_{0}}}) dr + \alpha \mathbb E(M_{t+t_{0}}-M_{t_{0}}) + W_{t+t_{0}} - W_{t_{0}} - \bigl(M_{t+t_{0}} - M_{t_{0}} \bigr). \end{equation} Here $M_{t+t_{0}} - M_{t_{0}}$ represents the number of times the process $X$ reaches $1$ within the interval $(t_{0},t+t_{0}]$. Therefore, this also matches the number of times the process $X^{\sharp_{t_{0}}}$ hits $1$ within the interval $(0,t]$, so that ${X}^{\sharp_{t_{0}}}$ indeed satisfies the nonlinear equation \eqref{simplified eq} on $[0,T-t_{0}]$, with ${X}^{\sharp_{t_{0}}}_{0}=X_{t_{0}}$ as initial condition and with respect to the shifted Brownian motion $({W}_{t}^{\sharp_{t_{0}}} := W_{t_{0}+t} - W_{t_{0}})_{0 \leq t \leq T-t_{0}}$. In what follows, $t_{0}$ is thus assumed to be zero, the new $T$ standing for the previous $T-t_{0}$ and the new $X_{0}$ matching the previous $X_{t_{0}}$ and thus satisfying \eqref{eq:holderbd:1}. For a given $h\in(0, 1)$, such that $h \leq T$, and a given ${\mathcal B}_{0} >0$ (the value of which will be fixed later), we then define the deterministic hitting time: \begin{equation*} R = \inf \bigl\{t \in [0,h] : \mathbb E ( M_{t}) = e(t) \geq {\mathcal B}_{0} h^{1/2} \bigr\}. 
\end{equation*} Following the proof of \eqref{eq:9:4:2} (see more specifically \eqref{eq:12:9:1}), we have, for any $t \in [0,h \wedge R]$, \begin{equation*} \begin{split} M_{t} \leq \sup_{0\leq s\leq t}(Z_{s})_{+} &\leq (X_{0})_{+} + \Lambda \int_{0}^{t} \bigl( 1 + (Z_{s})_{+} + M_{s} \bigr) ds + \alpha e(t) + 2 \sup_{0 \leq s \leq t} \vert W_{s} \vert \\ &\leq (X_{0})_{+} + 2\Lambda \int_{0}^{t} \bigl( 1 + M_{s} \bigr) ds + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq t} \vert W_{s} \vert \\ &\leq (X_{0})_{+} + 2 \Lambda h + 2\Lambda \int_{0}^t M_{s} ds + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq t} \vert W_{s} \vert, \end{split} \end{equation*} where we have used \eqref{eq:7:9:1} to pass from the first to the second line. By Gronwall's Lemma, we obtain \begin{equation} \label{eq:3:9:1} \begin{split} M_{t} &\leq \exp(2\Lambda h) \left[ (X_{0})_{+} + 2 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \\ &\leq (X_{0})_{+} + \exp(2\Lambda h) \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right], \end{split} \end{equation} as $\exp(2 \Lambda h) \leq 1+ 2 \Lambda h \exp(2 \Lambda h)$ and $(X_{0})_{+} \leq 1$. Assume that ${\mathcal B}_{0} \exp(2\Lambda h) h^{1/2} \leq \epsilon/2 \leq 1/2$. Then, by Doob's $L^2$ inequality for martingales, \begin{equation} \label{eq:4:9:5} \begin{split} \sum_{k \geq 2} \mathbb P ( M_{t} \geq k ) &\leq \sum_{k \geq 2}\mathbb P \left( \exp(2\Lambda h) \left[ 4\Lambda h + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq k - 3/2 \right) \\ &\leq 2 \exp(2\Lambda h ) \mathbb E \bigl[ 4\Lambda h + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \bigr] \\ &\leq \exp(2\Lambda h ) \bigl[ 8 \Lambda h + 8 h^{1/2}\bigr]. 
\end{split} \end{equation} Moreover, \begin{equation*} \begin{split} &\mathbb P(M_{t} \geq 1) \\ &\leq \mathbb P \left( (X_{0})_{+} + \exp(2\Lambda h) \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq 1 \right) \\ &\leq \mathbb P \left( X_{0} \in [1-\epsilon,1], X_{0} + \exp(2\Lambda h) \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq 1 \right) \\ &\hspace{15pt} + \mathbb P \left( \exp(2\Lambda h) \left[ 4 \Lambda h + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq \epsilon/2 \right) \\ &:= I_{1} + I_{2}, \end{split} \end{equation*} where we have used ${\mathcal B}_{0} \exp(2\Lambda h) h^{1/2} \leq \epsilon/2$ in the third line. By Doob's $L^1$ maximal inequality, we deduce that \begin{equation} \label{eq:holderbd:4} I_{2} \leq 2 \exp(2\Lambda h) \epsilon^{-1} \mathbb E \bigl[ 4 \Lambda h + 2 \vert W_{h} \vert \bigr] \leq \exp(2\Lambda h) \epsilon^{-1} \bigl[ 8 \Lambda h + 4 h^{1/2} \bigr]. \end{equation} We now switch to $I_{1}$. By independence of $X_{0}$ and $(W_{s})_{0 \leq s \leq T}$ and by \eqref{eq:holderbd:1}, \begin{equation*} \begin{split} I_{1} &\leq c \int_{0}^{\epsilon} \mathbb P \left( \exp(2\Lambda h) \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq x \right) dx \\ &\leq c \int_{0}^{+ \infty} \mathbb P \left( \exp(2\Lambda h) \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right] \geq x \right) dx \\ &= c \exp(2\Lambda h) \mathbb E \left[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 2 \sup_{0 \leq s \leq h} \vert W_{s} \vert \right]. \end{split} \end{equation*} By Doob's $L^2$ inequality, \begin{equation*} I_{1} \leq c \exp(2\Lambda h) \bigl[ 4 \Lambda h + \alpha {\mathcal B}_{0} h^{1/2} + 4 h^{1/2}\bigr]. 
\end{equation*} Together with \eqref{eq:holderbd:4}, we deduce that \begin{equation*} \mathbb P(M_{t} \geq 1) \leq \exp(2\Lambda h) \bigl[ 4 (c + 2\epsilon^{-1}) \Lambda h + 4 (c+\epsilon^{-1}) h^{1/2} + c \alpha {\mathcal B}_{0} h^{1/2} \bigr]. \end{equation*} From \eqref{eq:4:9:5}, we finally obtain, for $t \leq R \wedge h$, \begin{equation*} \begin{split} \mathbb E (M_{t}) & = \sum_{k\geq1}\mathbb{P}(M_t\geq k)\\ &\leq \exp(2\Lambda h) \bigl[ 4 (2 + c + 2 \epsilon^{-1}) \Lambda h + 4 ( 2+ c+ \epsilon^{-1}) h^{1/2} + c \alpha {\mathcal B}_{0} h^{1/2} \bigr] \\ &\leq \exp(2\Lambda h) \bigl[ (8 + 5c + 8 \epsilon^{-1}) \Lambda h + 4 ( 2+ c+\epsilon^{-1}) h^{1/2} \bigr] + c \alpha {\mathcal B}_{0} h^{1/2}, \end{split} \end{equation*} provided ${\mathcal B}_{0} \exp(2\Lambda h) h^{1/2} \leq \epsilon/2\leq 1/2$, which implies \[ c \alpha {\mathcal B}_{0} \exp(2\Lambda h) h^{1/2} \leq c \alpha {\mathcal B}_{0} h^{1/2} + c \Lambda h, \] using the fact that $\exp(2\Lambda h) \leq 1 + 2\Lambda h \exp(2\Lambda h)$. Therefore, if $R \leq h$, then we can choose $t= R$ in the left-hand side above. By continuity of $e$ on $[0,T]$, it then holds that $e(R)={\mathcal B}_{0} h^{1/2}$, so that \begin{equation*} \begin{split} (1- c \alpha) {\mathcal B}_{0} h^{1/2} &\leq \exp(2 \Lambda h) \bigl[ (8 + 5 c + 8\epsilon^{-1}) \Lambda h + 4 ( 2+ c+\epsilon^{-1}) h^{1/2} \bigr] \\ &< \exp(2 \Lambda) \bigl[ (8 + 5 c + 8\epsilon^{-1}) \Lambda + 4 ( 2+ c+\epsilon^{-1}) \bigr] h^{1/2}, \end{split} \end{equation*} which is not possible when \begin{equation*} {\mathcal B}_{0} = \frac{\exp(2 \Lambda) [ (8 + 5 c + 8\epsilon^{-1}) \Lambda + 4 ( 2+ c+\epsilon^{-1}) ] }{1- c \alpha}. \end{equation*} Precisely, with ${\mathcal B}_{0}$ as above and ${\mathcal B}_{0}\exp(2 \Lambda h) h^{1/2} \leq \epsilon/2$, the inequality $R \leq h$ cannot hold. \end{proof} \subsection{Global H\"older bound} In this subsection, we shall prove: \begin{proposition} \label{prop:holderbd:1} Let $\epsilon \in (0,1)$. 
Then there exists a positive constant $\alpha_{0} \in (0,1]$, only depending upon $\epsilon$, $K$ and $\Lambda$, such that: whenever $\alpha < \alpha_{0}$, there exists a constant ${\mathcal B}$, only depending on $\alpha$, $\epsilon$, $K$ and $\Lambda$, such that, for all positive times $T>0$ and initial conditions $X_{0}=x_{0} \leq 1 - \epsilon$, any solution to \eqref{simplified eq} according to Definition \ref{definition solution} satisfies \begin{equation*} \left. \begin{array}{l} {\mathcal B} h^{1/2} \leq \epsilon/2 \\ t_{0}+h \leq T \end{array} \right\} \Rightarrow e(t_{0}+h) - e(t_{0}) \leq {\mathcal B} h^{1/2}, \end{equation*} for any $h \in (0,1)$ and $t_{0} \in [0,T]$. Note that ${\mathcal B}$ above may differ from ${\mathcal B}_{0}$ in the statement of Lemma \ref{lem:holderbd:5}. The constant $\alpha_{0}$ can be described as follows. Defining $T_{0}$ as the largest time less than 1 such that \begin{equation*} \bigl(1 - \epsilon\bigr) \exp(\Lambda T_{0}) \leq 1 - 7\epsilon/8, \quad \Lambda T_{0} \exp(\Lambda T_{0}) \leq \epsilon/8, \end{equation*} $\alpha_{0}$ can be chosen as the largest (positive) real satisfying (with $B(T_0, \alpha_0, b)$ as in Lemma \ref{lem:gradientbd:1'}) \begin{equation*} \begin{split} &\alpha_{0} B(T_{0},\alpha_{0},b) \leq \epsilon/4, \\ &\alpha_{0} 2^{3/2} (c')^{3/2} \exp\left(-\tfrac12\right) \bigl[ \epsilon^{-1} + B(T_{0},\alpha_{0},b) \bigr] \leq 1, \\ &\alpha_{0} \left[ c' T_{0}^{-1/2} + 2^{3/2} (c')^{3/2} \exp\left(-\tfrac12\right) B(T_{0},\alpha_{0},b) \right] \leq 1. 
\end{split} \end{equation*} Here the constant $c'$ is defined by the following property: $c'>0$, depending on $K$ only, is such that for any diffusion process $(U_{t})_{0 \leq t \leq 1}$ satisfying \begin{equation*} dU_{t} = F(t,U_{t}) dt + dW_{t}, \quad t \in [0,1], \end{equation*} where $U_{0}=0$ and $F : [0,1] \times \mathbb R \rightarrow \mathbb R$ is $K$-Lipschitz in $x$ and satisfies $F(t,0)=0$ for any $t \in [0,1]$, it holds that \begin{equation*} \frac{1}{dx}\mathbb P ( U_{t} \in dx ) \leq \frac{c'}{\sqrt{t}} \exp \left( - \frac{x^2}{c' t }\right), \quad x \in \mathbb R, \ t \in (0,1]. \end{equation*} \end{proposition} The proof relies on the following: \begin{lemma} \label{lem:holderbd:2} Given an initial condition $X_{0}=x_{0} \leq1-\epsilon$, with $\epsilon \in (0,1)$, and a solution $(X_{t})_{0 \leq t \leq T}$ to \eqref{simplified eq} on some interval $[0,T]$ according to Definition \ref{definition solution}, the random variable $X_{t}$ has a density on $(-\infty,1]$, for any $t \in (0,T]$. Moreover, defining $T_{0}$ as in the statement of Proposition \ref{prop:holderbd:1} and choosing $\alpha \leq \alpha_{1}$ satisfying \begin{equation*} \alpha_{1} B(T_{0},\alpha_{1},b) \leq \epsilon/4, \end{equation*} it holds, for $x \in [1-\epsilon/4,1)$, \begin{equation*} \begin{split} &\frac{1}{dx}\mathbb P (X_{t} \in dx ) \leq 2^{3/2} (c')^{3/2} \exp\left(-\tfrac12\right) \bigl[ \epsilon^{-1} + B(T_{0},\alpha,b) \bigr] \quad \textrm{if} \ t \leq T_{0} \\ &\frac{1}{dx} \mathbb P (X_{t} \in dx ) \leq c' T_{0}^{-1/2} + 2^{3/2} (c')^{3/2} \exp\left(-\tfrac12\right) B(T_{0},\alpha,b) \quad \textrm{if} \ t > T_{0}, \end{split} \end{equation*} where the constant $c'$ is also as in the statement of Proposition \ref{prop:holderbd:1}. \end{lemma} Before we prove Lemma \ref{lem:holderbd:2}, we introduce some preliminary material. As usual, we set $e(t) =\mathbb E(M_{t})$, for $t \in [0,T]$, the mapping $e$ being assumed to be continuously differentiable on $[0,T]$. 
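Before proceeding, we give a simple sanity check on the constant $c'$ introduced in Proposition \ref{prop:holderbd:1} (an illustration only, not needed in the sequel): when $F \equiv 0$, the process is $U_{t}=W_{t}$ and the stated bound holds with any $c' \geq 2$, since \begin{equation*} \frac{1}{dx}\mathbb P ( W_{t} \in dx ) = \frac{1}{\sqrt{2\pi t}} \exp \left( - \frac{x^{2}}{2t} \right) \leq \frac{2}{\sqrt{t}} \exp \left( - \frac{x^{2}}{2t} \right), \quad x \in \mathbb R, \ t \in (0,1]. \end{equation*} The general case, with a nonzero $K$-Lipschitz drift, is a Gaussian-type upper bound for which we refer to \cite{delarue:menozzi}. 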
Moreover, with $(X_{t})_{0\leq t \leq T}$, we associate the sequence of hitting times $(\tau_{k})_{k \geq 0}$ given by \eqref{M} . We then investigate the marginal distributions of $(X_{t})_{0\leq t \leq T}$. Given a Borel subset $A \subset (-\infty,1]$, we write in the same way as in the proof of \eqref{Markov property} \begin{equation} \label{eq:holderbd:11} \begin{split} \mathbb P(X_{t} \in A) &= \mathbb P ( X_{t} \in A, \tau_{1} >t ) \\ &\hspace{15pt} + \sum_{k \geq 1} \int_{0}^t \mathbb P ( X_{t} \in A, \tau_{k+1} >t \vert \tau_{k} = s ) \mathbb P (\tau_{k} \in ds), \end{split} \end{equation} where the notation $\mathbb P ( \cdot \vert \tau_{k} = s)$ stands for the conditional law given $\tau_{k}=s$. Following \eqref{eq:12:9:2} and \eqref{eq:12:9:3}, we can shift the system by length $s\in[0, T]$. Precisely, we know that $(X_{r}^{\sharp_{s}} :=X_{s+r})_{0\leq r \leq T-s}$ satisfies \begin{equation} \label{eq:holderbd:30} {X}_{r}^{\sharp_{s}} = X_{s} + \int_{0}^r b\bigl( X_{u}^{\sharp_{s}}\bigr) du + \alpha e^{\sharp_{s}}(r) + W_{s+r} - W_{s} -M_{r}^{\sharp_{s}}, \end{equation} with $e^{\sharp_{s}}(r) := e(s+r) - e(s)$, ${M}_{r}^{\sharp_{s}} := M_{s+r}- M_{s}$ and $\tau_{k}^{\sharp_{s}} := \inf \{ u > \tau_{k-1}^{\sharp_{s}} : X_{s+u-} \geq 1 \}$ for $k \geq 1$, ($\tau_{0}^{\sharp_{s}} := 0$). Conditionally on $\tau_{k} = s$, the law of $(X_{r}^{\sharp_{s}})_{0 \leq r \leq T-s}$ until $\tau_{1}^{\sharp_{s}}$ coincides with the law of $(\hat{Z}_{r}^{\sharp_{s},0})_{0 \leq r \leq T-s}$ until the first time it reaches $1$, where, for a given ${\mathcal F}_{0}$-measurable initial condition $\zeta$ with values in $(-\infty,1)$, $(\hat{Z}_{r}^{\sharp_{s},\zeta})_{0 \leq r \leq T-s}$ stands for the solution of the SDE: \begin{equation} \label{eq:gradientbd:100} \hat{Z}_{r}^{\sharp_{s},\zeta} = \zeta + \int_{0}^r b \bigl( \hat{Z}_{u}^{\sharp_{s},\zeta} \bigr) du + \alpha e^{\sharp_{s}}(r) + W_{r}, \quad r \in [0,T-s]. 
\end{equation} Below, we will write $\hat{Z}^{\zeta}_r$ for $\hat{Z}_{r}^{\sharp_{0},\zeta}$. By \eqref{eq:holderbd:11}, \begin{equation} \label{eq:holderbd:11:b} \begin{split} \mathbb P(X_{t} \in A) &\leq \mathbb P ( \hat{Z}_{t}^{X_{0}} \in A ) + \sum_{k \geq 1} \int_{0}^t \mathbb P (\hat{Z}_{t-s}^{\sharp_{s},0} \in A ) \mathbb P(\tau_{k} \in ds) \\ &= \mathbb P ( \hat{Z}_{t}^{X_{0}} \in A ) + \int_{0}^t \mathbb P ( \hat{Z}_{t-s}^{\sharp_{s},0} \in A ) e'(s) ds, \end{split} \end{equation} for any Borel set $A\subset (-\infty, 1]$, the passage from the first to the second line following from \eqref{eq:5:10:1}. \vspace{5pt} \noindent We can now turn to: \begin{proof} [Proof of Lemma \ref{lem:holderbd:2}] Given an initial condition $x_{0} \in (-\infty,1-\epsilon]$ for $\epsilon\in (0, 1)$, we know from Delarue and Menozzi \cite{delarue:menozzi} that $\hat{Z}_{t}^{x_{0}}$ has a density for any $t \in (0,T]$ (and thus $\hat{Z}_{t-s}^{\sharp_{s},0}$ as well for $0 \leq s < t$). From \eqref{eq:holderbd:11:b}, we deduce that the law of $X_{t}$ has a density on $(-\infty,1]$ since $\mathbb P(X_{t} \in A)=0$ when $\vert A \vert = 0$, where $\vert A \vert$ stands for the Lebesgue measure of $A$. Moreover, there exists a constant $c' \geq 1$, depending on $K$ only, such that, for any $t \in [0,T \wedge 1]$:\begin{equation} \label{eq:4:9:6} \frac{1}{dx}\mathbb P(\hat{Z}_{t}^{x_{0}} \in dx) \leq \frac{c'}{\sqrt{t}} \exp \left( - \frac{[x- \vartheta_{t}^{x_{0}}]^2}{c' t} \right), \end{equation} where $\vartheta_{t}^{x_{0}}$ is the solution of the ODE: \begin{equation}\label{eq definition vartheta} \frac{d}{dt} \vartheta_{t} = b ( \vartheta_{t} ) + \alpha e'(t), \quad t \in [0,T], \end{equation} with $\vartheta_{0}^{x_{0}} = x_{0}$. 
Above, the function $[0,T] \ni t \mapsto e(t)$ represents $[0,T] \ni t \mapsto \mathbb{E}(M_t)$ given $X_0=x_0$, which means that the initial condition $x_{0}$ of $X_{0}$ upon which $e$ depends is fixed once and for all, independently of the initial condition of $\vartheta$. In particular, as the initial condition of $\vartheta$ varies, the function $e$ does not change. We emphasize that $c'$ is independent of $e$ and can be taken to be that defined in Proposition \ref{prop:holderbd:1}. Indeed, we can write $\mathbb P(\hat{Z}_{t}^{x_{0}} \in dx)$ as $\mathbb P( \hat{Z}_{t}^{x_{0}} - \vartheta_{t}^{x_{0}} \in d(x - \vartheta_{t}^{x_{0}}))$, with \begin{equation*} \begin{split} d \bigl( \hat{Z}_{t}^{x_{0}} - \vartheta_{t}^{x_{0}} \bigr) &= F \bigl(t, \hat{Z}_{t}^{x_{0}} - \vartheta_{t}^{x_{0}} \bigr) dt + dW_{t}, \quad t \in [0,T], \quad \hat{Z}_{0}^{x_{0}} - \vartheta_{0}^{x_{0}} = 0 \ ; \\ F(t,x) &= b \bigl( x + \vartheta_{t}^{x_{0}} \bigr) - b\bigl( \vartheta_{t}^{x_{0}} \bigr), \quad t \in [0,T], \quad x \in \mathbb R. \end{split} \end{equation*} We then notice that $F(t,\cdot)$ is $K$-Lipschitz continuous (since $b$ is) and satisfies $F(t,0)=0$, so that, referring to \cite{delarue:menozzi}, all the parameters involved in the definition of the constant $c'$ are independent of $e$; this independence is crucial in what follows. 
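Spelling out the Lipschitz property of $F$ used above (a one-line verification): for any $t \in [0,T]$ and $x, x' \in \mathbb R$, \begin{equation*} \bigl| F(t,x) - F(t,x') \bigr| = \bigl| b \bigl( x + \vartheta_{t}^{x_{0}} \bigr) - b \bigl( x' + \vartheta_{t}^{x_{0}} \bigr) \bigr| \leq K | x - x' |, \qquad F(t,0) = b \bigl( \vartheta_{t}^{x_{0}} \bigr) - b \bigl( \vartheta_{t}^{x_{0}} \bigr) = 0. \end{equation*} 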
As a consequence, we can bound $(1/dx)\mathbb P(\hat{Z}_{t-s}^{\sharp_{s},0} \in dx)$ in a similar way, that is, with the same constant $c'$ as in \eqref{eq:4:9:6}: for any $0 \leq s < t \leq T$, with $t-s \leq 1$, \begin{equation} \label{eq:4:9:7} \frac{1}{dx} \mathbb P( \hat{Z}_{t-s}^{\sharp_{s},0} \in dx) \leq \frac{c'}{\sqrt{t-s}} \exp \left( - \frac{[x-\vartheta_{t-s}^{\sharp_{s},0}]^2}{c'(t-s)} \right), \end{equation} where $\vartheta^{\sharp_{s},0}$ is the solution of the ODE: \begin{equation*} \frac{d}{dt} \vartheta_{t}^{\sharp_{s}} = b \bigl( \vartheta_{t}^{\sharp_{s}} \bigr) + \alpha \frac{d}{dt}e^{\sharp_{s}}(t), \quad t \in [0,T-s], \end{equation*} with $\vartheta_{0}^{\sharp_{s}, 0} = 0$ as initial condition. \vspace{5pt} \textit{Bound of the density in small time.} Keep in mind that $X_{0}=x_{0} \leq 1-\epsilon$. Therefore, by the comparison principle for ODEs, $\vartheta^{x_{0}}_{t} \leq \vartheta^{1-\epsilon}_{t}$ for any $t \in [0,T]$, so that by Gronwall's Lemma \begin{equation*} \vartheta_{t}^{x_{0}} \leq \vartheta_{t}^{1-\epsilon} \leq \bigl( 1 - \epsilon + \Lambda T + \alpha e(T) \bigr) \exp(\Lambda T). \end{equation*} By Lemma \ref{lem:gradientbd:1'}, we know that $e(T) \leq B(T,\alpha,b)$, so that \begin{equation} \label{eq:12:9:5} \vartheta_{t}^{x_{0}} \leq \bigl( 1 - \epsilon + \Lambda T + \alpha B(T,\alpha,b) \bigr) \exp(\Lambda T). \end{equation} Now choose $T_0$ as in Proposition \ref{prop:holderbd:1}, i.e. $T_{0} \leq 1$ such that \begin{equation*} \bigl(1 - \epsilon \bigr) \exp(\Lambda T_{0}) \leq 1 - 7\epsilon/8, \quad \Lambda T_{0} \exp(\Lambda T_{0}) \leq \epsilon/8, \end{equation*} and then take $\alpha_{1} \in (0,1)$ such that \begin{equation*} \alpha_{1} B(T_{0},\alpha_{1},b) \exp(\Lambda T_{0}) \leq \epsilon/4. \end{equation*} Then, whenever $\alpha \leq \alpha_{1}$, it holds that \begin{equation*} \vartheta_{t}^{x_{0}} \leq 1-\epsilon/2, \quad t \in [0,T_{0} \wedge T]. 
\end{equation*} Therefore, for $x \geq 1- \epsilon/4$, \begin{equation} \label{eq:5:10:2} \exp \left( - \frac{[x-\vartheta_{t}^{x_{0}}]^2}{c' t} \right) \leq \exp \left( - \frac{\epsilon^2}{16 c' t}\right), \quad t \in [0,T_{0} \wedge T]. \end{equation} Similarly, \begin{equation*} \vartheta_{t-s}^{\sharp_{s},0} \leq 3\epsilon/8 \leq 3/8, \quad 0\leq s \leq t \leq T_{0} \wedge T. \end{equation*} Indeed, $e^{\sharp_{s}}(T-s) \leq e(T)$ for $s \in [0,T]$, so that \eqref{eq:12:9:5} applies to $\vartheta_{t-s}^{\sharp_{s},0}$ with $1-\epsilon$ therein being replaced by $0$. Therefore, for $x \geq 1- \epsilon/4$, it holds that $x-\vartheta_{t-s}^{\sharp_{s},0} \geq 3/4-3/8 = 3/8 \geq 1/4$, so that \begin{equation} \label{eq:5:10:3} \exp \left( - \frac{[x-\vartheta_{t-s}^{\sharp_{s},0}]^2}{c' (t-s)} \right) \leq \exp \left( - \frac{1}{16 c' (t-s)}\right), \quad 0 \leq s < t \leq T_{0} \wedge T. \end{equation} In the end, for $x \in (1-\epsilon/4,1)$ and $t \leq T_{0} \wedge T$, we deduce from \eqref{eq:holderbd:11:b}, \eqref{eq:4:9:6}, \eqref{eq:4:9:7}, \eqref{eq:5:10:2}, \eqref{eq:5:10:3} and Lemma \ref{lem:gradientbd:1'} again, that \begin{equation} \label{eq:1:11:1} \frac{1}{dx} \mathbb P( X_{t} \in dx ) \leq c' \varpi_{0} \bigl[ \epsilon^{-1} + e(T \wedge T_{0}) \bigr] \leq c' \varpi_{0} \bigl[ \epsilon^{-1}+ B(T_{0},\alpha,b) \bigr], \end{equation} where \begin{equation*} \begin{split} \varpi_{0} &= \sup_{t >0} \left[ t^{-1/2} \exp \left( - \frac{1}{16 c' t} \right) \right] = 4 \sqrt{c'}\sup_{u >0} \bigl[ u \exp(-u^2) \bigr] = 2^{3/2} \sqrt{c'} \exp\left(-\frac12\right). \end{split} \end{equation*} \textit{Bound of the density in long time.} We now discuss what happens for $T > T_{0}$ and $t \in [T_{0},T]$. 
Then, \begin{equation} \label{eq:pi definition} \begin{split} \frac{1}{dx} \mathbb P( X_{t} \in dx ) &\leq \frac{1}{dx} \mathbb P\left(X_{t} \in dx, \tau_{1}^{\sharp_{t-T_{0}}} \leq T_{0} \right) + \frac{1}{dx} \mathbb P\left( X_{t} \in dx, \tau_{1}^{\sharp_{t-T_{0}}} > T_{0} \right) \\ &= \pi_{1} + \pi_{2}, \end{split} \end{equation} with $\tau_{1}^{\sharp_{t-T_{0}}} = \inf \{u>0 : X_{t-T_{0}+u-} \geq 1\} = \inf \{u>0 : X^{\sharp_{t-T_{0}}}_{u-} \geq 1\}$. In the above expression, we split the event that $X_{t}$ lies in the neighbourhood of $x$ into two disjoint parts, according to whether or not $X$ reaches the threshold within the time window $[t-T_{0},t]$. We have chosen this interval to be of length $T_{0}$ in order to apply the small-time results. We first investigate $\pi_{2}$. The point is that, on the event that $\tau_{1}^{\sharp_{t-T_{0}}} > T_{0}$ and within the time window $[t-T_{0},t]$, $X$ behaves as a standard diffusion process without any jumps, namely as a process with the same dynamics as $\hat{Z}^{\sharp_{t-T_{0}},X_{t-T_{0}}}$. Following \eqref{eq:4:9:6}, we then have \begin{align} \pi_{2} &= \frac{1}{dx} \mathbb P \left( \hat{Z}_{T_{0}}^{\sharp_{t-T_{0}},X_{t-T_{0}}} \in dx, \tau_{1}^{\sharp_{t-T_{0}}} > T_{0} \right) \nonumber\\ &\leq \frac{1}{dx} \mathbb P \left( \hat{Z}_{T_{0}}^{\sharp_{t-T_{0}},X_{t-T_{0}}} \in dx \right) \leq \sup_{z \leq 1} \frac{1}{dx} \mathbb P \left(\hat{Z}_{T_{0}}^{\sharp_{t-T_{0}},z} \in dx \right) \leq c' T_{0}^{-1/2}\label{eq:4:9:9}. \end{align} We now turn to $\pi_{1}$. 
Here we write \begin{align*} \pi_1 &= \frac{1}{dx} \mathbb P\left(X_{t} \in dx, \tau_{1}^{\sharp_{t-T_{0}}} \leq T_{0} \right) = \sum_{k\geq 1} \frac{1}{dx}\mathbb P\left(X_{t} \in dx, \tau_{k}^{\sharp_{t-T_{0}}} \leq T_{0} < \tau_{k+1}^{\sharp_{t-T_{0}}}\right)\\ &= \sum_{k\geq 1}\int_0^{T_0} \frac{1}{dx} \mathbb P\left(X_{t} \in dx, T_{0} < \tau_{k+1}^{\sharp_{t-T_{0}}}\vert \tau_{k}^{\sharp_{t-T_{0}}} =s \right) \mathbb{P}(\tau_{k}^{\sharp_{t-T_{0}}} \in ds)\\ &= \sum_{k\geq 1}\int_0^{T_0} \frac{1}{dx}\mathbb P\left(\hat{Z}_{T_0 -s}^{\sharp_{s+t-T_{0}},0}\in dx, T_{0} < \tau_{k+1}^{\sharp_{t-T_{0}}}\right) \mathbb{P}(\tau_{k}^{\sharp_{t-T_{0}}} \in ds), \end{align*} since on the event $\{\tau_{k}^{\sharp_{t-T_{0}}} \leq T_{0} < \tau_{k+1}^{\sharp_{t-T_{0}}}\}$, given that the $k$-th (and last) jump of $X$ in the interval $[t-T_0, t]$ occurs at time $t - T_0 +s$ with $s\in [0, T_0]$, we have that the process $X_r$ for $r\in [t - T_0 +s, t]$ coincides with the process $\hat{Z}_{u}^{\sharp_{s+t-T_{0}},0}$ for $u\in[0, T_0 -s]$. Thus \begin{equation} \label{eq:4:9:9:b} \begin{split} \pi_{1} &\leq \sum_{k \geq 1}\int_{0}^{T_{0}} \frac{1}{dx} \mathbb P \left( \hat{Z}_{T_0 -s}^{\sharp_{s+t-T_{0}},0} \in dx \right) \mathbb P ( \tau_{k}^{\sharp_{t-T_{0}}} \in ds) \\ &= \int_{0}^{T_{0}} \frac{1}{dx} \mathbb P \left( \hat{Z}_{T_{0}-s}^{\sharp_{s+t-T_{0}},0} \in dx \right) e'(s+t-T_{0}) ds. \end{split} \end{equation} By \eqref{eq:4:9:7}, we have \begin{equation*} \begin{split} &\int_0^{T_0}\frac{1}{dx} \mathbb P \left( \hat{Z}_{T_{0}-s}^{\sharp_{s+t-T_{0}},0} \in dx \right)e'(s+t - T_0)ds \\ &\hspace{15pt} \leq \int_{0}^{T_{0}}\frac{c'}{\sqrt{T_{0}-s}} \exp \left( - \frac{[x- {\vartheta}^{\sharp_{s+t-T_{0}},0}_{T_{0}-s}]^2}{c' (T_{0}-s)} \right) e'(s+t-T_{0}) ds. 
\end{split} \end{equation*} Recalling that $e^{\sharp_{t-T_{0}}}(s)= \mathbb E(M_{s+t-T_{0}}-M_{t-T_{0}})$, it is readily seen that the mapping $[0,T_{0}] \ni s \mapsto e^{\sharp_{t-T_{0}}}(s)$ satisfies Lemma \ref{lem:gradientbd:1'}, that is \begin{equation*} \sup_{0 \leq s \leq T_{0}} e^{\sharp_{t-T_{0}}}(s) = \sup_{0 \leq s \leq T_{0}} \bigl[ e(s+t-T_{0}) - e(t-T_{0}) \bigr] = e(t) - e(t-T_{0}) \leq B(T_{0},\alpha,b). \end{equation*} Therefore, we can follow the same strategy as in the small-time case, see \eqref{eq:5:10:3} and \eqref{eq:1:11:1}. Indeed, for $\alpha \leq \alpha_{1}$, by the choice of $T_0$ as before, it holds that \begin{equation*} \pi_{1} \leq c' \varpi_{0} B(T_{0},\alpha,b), \end{equation*} for $x\in[1-\epsilon/4, 1)$. Using \eqref{eq:4:9:9} and the above bound, we deduce that, for $t \in [T_{0},T]$, \begin{equation*} \frac{1}{dx} \mathbb P\left(X_{t} \in dx \right) \leq c' \bigl[ T_{0}^{-1/2} + \varpi_{0} B(T_{0},\alpha,b) \bigr]. \end{equation*} \end{proof} \begin{proof} [Proof of Proposition \ref{prop:holderbd:1}] Proposition \ref{prop:holderbd:1} follows from the combination of Lemmas \ref{lem:holderbd:5} and \ref{lem:holderbd:2}. Indeed, with $T_0$ and $\alpha_0$ as defined in Proposition \ref{prop:holderbd:1}, Lemma \ref{lem:holderbd:2} implies that $\mathbb{P}(X_t\in A) <(1/\alpha)|A|$ for any Borel subset $A\subset [1-\epsilon/4, 1]$, any $\alpha<\alpha_0$ and any $t\in [0, T]$. The result follows by Lemma \ref{lem:holderbd:5}, with $\mathcal{B}$ being given by $\mathcal{B}_0\exp(2\Lambda)$ with $\epsilon$ in $\mathcal{B}_0$ replaced by $\epsilon/4$. 
\end{proof} \subsection{Estimate of the density of the killed process} In light of the previous subsection, for a solution $(X_{t})_{0 \leq t \leq T}$ to \eqref{simplified eq} such that the mapping $[0,T] \ni t \mapsto e(t)=\mathbb E(M_{t})$ is continuously differentiable, we here investigate \begin{equation*} \frac{1}{dx} \mathbb P \left(X_{t} \in dx, t < \tau_{1}\right), \quad t \in [0,T], \ x \leq 1, \end{equation*} where $\tau_{1} = \inf\{t>0: X_{t-}\geq1\}$ as usual. This is the density of the killed process $(X_{t \wedge \tau_{1}})_{0\leq t \leq T}$, which makes sense because of Lemma \ref{lem:killedprocess:1}. Here is the main result of this subsection: \begin{lemma} \label{lem:gradientbd:41} Let $\epsilon \in (0,1)$, $T >0$ and ${\mathcal B} >0$. Moreover, let $(\chi_{t})_{0 \leq t \leq T}$ denote the solution to the SDE \begin{equation*} d \chi_{t} = b(\chi_{t}) dt + \alpha e'(t) dt + dW_{t}, \quad t \in [0,T] \ ; \quad \chi_{0}=x_0, \end{equation*} for some continuously differentiable non-decreasing deterministic mapping $[0,T] \ni t \mapsto e(t)$ satisfying \begin{equation*} e(0)=0, \quad \quad e(t) - e(s) \leq {\mathcal B} (t-s)^{1/2}, \quad 0 \leq s\leq t\leq T. \end{equation*} Then there exist two positive constants $\mu_{T}$ and $\eta_{T}$, only depending upon $T$, ${\mathcal B}$, $\epsilon$, $K$ and $\Lambda$, such that, for any initial condition $x_{0} \leq 1 - \epsilon$, \begin{equation} \label{eq:12:9:27} p(t,y) \leq \mu_{T} (1-y)^{\eta_{T}} , \quad t \in [0,T], \ y \in [1-\epsilon/4,1], \end{equation} where $p(t, y)$ denotes the density of $\chi_t$ killed at $1$ as in Lemma \ref{lem:killedprocess:1}. \end{lemma} \begin{proof} \textit{First Step.} The first step is to provide a probabilistic representation for $p$.
For a given $(T,x) \in (0,+\infty) \times (-\infty,1)$, we consider the solution to the SDE: \begin{equation} \label{eq:12:9:25} dY_{t} = - \bigl[ b(Y_{t}) + \alpha e'(T-t) \bigr] dt + dW_{t}, \quad t \in [0,T], \quad Y_{0}=y, \end{equation} together with some stopping time $\rho \leq \rho_{0} \wedge T$, where $\rho_{0}=\inf\{t \in [0,T] : Y_{t} \geq 1\}$ (with $\inf \emptyset = + \infty$). Then, by Lemma \ref{lem:killedprocess:1} and the It\^o-Krylov formula (see \cite[Chapter II, Section 10]{krylov}), \begin{equation*} \begin{split} &d \bigl( p (T-t,Y_{t}) \bigr) \\ &= - \partial_{t} p(T-t,Y_{t}) dt - \bigl[ b(Y_{t}) + \alpha e'(T-t)\bigr] \partial_{y} p(T-t,Y_{t}) dt + \frac{1}{2} \partial_{yy}^2 p(T-t,Y_{t}) dt \\ &\hspace{15pt} + \partial_{y} p(T - t,Y_{t}) dW_{t} \\ &= b'(Y_{t}) p(T-t,Y_{t}) dt + \partial_{y} p(T - t,Y_{t}) dW_{t}, \end{split} \end{equation*} for $0 \leq t \leq \rho$. Therefore, the Feynman-Kac formula yields \begin{equation} \label{eq:22} p(T,y) = {\mathbb E} \biggl[ p(T-\rho,Y_{\rho}) {\mathbf 1}_{\{Y_{\rho} \not = 1\}} \exp \biggl( - \int_{0}^{\rho} b'(Y_{s}) ds\biggr) \big\vert Y_{0} = y \biggr], \end{equation} the indicator function following from the Dirichlet boundary condition satisfied by $p(\cdot,1)$. \vspace{5pt} \textit{Second Step.} We now specify the choice of $\rho$. Given some free parameters $L \geq 1$ and $\delta \in (0,\epsilon/4)$ such that $L \delta \leq \epsilon/4$, we assume that the initial condition $y$ in \eqref{eq:12:9:25} is in $(1-\delta,1)$ and then consider the stopping time \begin{equation} \label{eq:12:9:26a} \rho = \inf\{ t \in [0,T] : Y_{t} \not \in (1-L \delta,1)\} \wedge \delta^2. \end{equation} Assume that $\delta^2 \leq T$. 
By \eqref{eq:22}, we deduce that \begin{equation} \label{eq:20} p(T,y) \leq \exp(K \delta^2) (1 - {\mathbb P}(Y_{\rho}=1) ) \sup_{(t,z) \in {\mathcal Q}(\delta,L)} p(t,z), \end{equation} with \begin{equation*} {\mathcal Q}(\delta,L) = \bigl\{(t,z) \in [T- \delta^2,T] \times [1 - L \delta,1] \bigr\}. \end{equation*} The point is then to give a lower bound for ${\mathbb P}(Y_{\rho}=1)$. By assumption, we know that $e$ is $(1/2)$-H\"older continuous on $[0,T]$. Therefore, since $Y_{0}=y \in (1-\delta,1)$, we have, for any $t \in [0,\rho]$, \begin{equation*} Y_{t} \geq 1-\delta - m \delta^2 - \alpha {\mathcal B} \delta + W_{t}, \end{equation*} with \begin{equation} \label{eq:31:7:3} m = \sup_{0 \leq z \leq 1} \vert b(z) \vert. \end{equation} Therefore, for $m \delta \leq 1$, \begin{equation*} Y_{t} \geq 1- 2 \delta - \alpha {\mathcal B} \delta + W_{t}, \quad t \in [0,\rho], \end{equation*} so that \begin{equation} \label{eq:21} \bigl\{Y_{\rho}=1\bigr\} \supset \left\{ \sup_{0 \leq t \leq \delta^2} W_{t} > (2 +\alpha {\mathcal B}) \delta \right\} \cap \left\{ \inf_{0 \leq t \leq \delta^2} W_{t} > (2 + \alpha {\mathcal B}- L) \delta \right\}. \end{equation} Choosing $L=3 + \alpha {\mathcal B}$ and applying a scaling argument, we deduce that \begin{equation} \label{eq:18:10:1} \begin{split} &\mathbb P \left( \left\{ \sup_{0 \leq t \leq \delta^2} W_{t} > (2+ \alpha {\mathcal B}) \delta \right\} \cap \left\{ \inf_{0 \leq t \leq \delta^2} W_{t} > (2 + \alpha {\mathcal B} - L) \delta \right\} \right) \\ &= \mathbb P \left( \left\{ \sup_{0 \leq t \leq 1} W_{t} > (2 + \alpha {\mathcal B}) \right\} \cap \left\{ \inf_{0 \leq t \leq 1} W_{t} > - 1 \right\} \right) =: c'' \in (0,1). \end{split} \end{equation} We note that the above quantity $c''$ is independent of $\delta$ and $T$. 
Moreover, we deduce from \eqref{eq:21} that $\mathbb P(Y_{\rho}=1) \geq c''$ and therefore, from \eqref{eq:20}, that \begin{equation*} p(T,y) \leq (1-c'') \exp(K \delta^2) \sup_{z \in {\mathcal I}(L\delta)} \sup_{t \in [0,T]} p(t,z), \end{equation*} with ${\mathcal I}(r)=[1-r,1]$, for $r>0$. Choosing $\delta$ small enough such that $(1-c'') \exp(K \delta^2) \leq (1-c''/2)$, we obtain \begin{equation*} p(T,y) \leq \left(1- \frac{c''}{2}\right) \sup_{z \in {\mathcal I}(L\delta)} \sup_{t \in [0,T]} p(t,z), \quad y \in {\mathcal I}(\delta). \end{equation*} Modifying $c''$ if necessary ($c''$ being chosen as small as needed), we can summarize the above inequality as follows: for $\delta \leq c''$, \begin{equation} \label{eq:23} p(T,y) \leq (1-c'') \sup_{ z \in {\mathcal I}(L \delta)} \sup_{t \in [0,T]} p(t,z), \quad y \in {\mathcal I}(\delta). \end{equation} We now look at what happens when $T \leq \delta^2$ in \eqref{eq:20}. In this case we can replace $\rho$ in the previous argument by $\rho \wedge T$. Observing that $p(T-\rho \wedge T,Y_{\rho \wedge T})=0$ on the event $\{\rho \geq T\} \cup \{Y_{\rho \wedge T}=1\}$ (since $p(0,\cdot)=0$ on $[1-\epsilon/4,1]$) and following \eqref{eq:20}, we obtain, for $y \in {\mathcal I}(\delta)$, \begin{equation} \label{eq:20:prime} p(T,y) \leq \exp( K \delta^2) \bigl[1 - {\mathbb P}\bigl(\{Y_{\rho \wedge T}=1\} \cup \{\rho \geq T\} \bigr) \bigr] \sup_{(t,z) \in {\mathcal Q}'(\delta,L)} p(t,z), \end{equation} with ${\mathcal Q}'(\delta,L) = \{(t,z) \in [0,T] \times [1 - L \delta,1] \}$. Now, the right-hand side of \eqref{eq:21} is included in the event $\{Y_{\rho \wedge T}=1\} \cup \{\rho \geq T\}$ so that \eqref{eq:18:10:1} yields a lower bound for ${\mathbb P}(\{Y_{\rho \wedge T}=1\} \cup \{\rho \geq T\})$. Therefore, we can repeat the previous arguments in order to prove that \eqref{eq:23} also holds when $T \leq \delta^2$, which means that \eqref{eq:23} holds true in both cases. 
Therefore, by replacing $T$ by $t$ in the left-hand side in \eqref{eq:23} and by letting $t$ vary within $[0,T]$, we have in any case, \begin{equation*} \sup_{y \in {\mathcal I}(\delta)}\sup_{t \in [0,T]} p(t,y) \leq (1-c'') \sup_{z \in {\mathcal I}(L \delta)} \sup_{t \in [0,T]} p(t,z). \end{equation*} By induction, for any integer $n \geq 1$ such that $L^n \delta \leq r_{0}$, with $r_{0} = c'' \wedge (\epsilon/4)$, \begin{equation*} \sup_{y \in {\mathcal I}(\delta)} \sup_{t \in [0,T]} p(t,y) \leq (1-c'')^n \sup_{z \in {\mathcal I}( L^n \delta)} \sup_{t \in [0,T]} p(t,z). \end{equation*} Given $\delta \in (0, r_{0}/L)$, the maximal value for $n$ is $n = \lfloor \ln [r_{0}/\delta]/\ln L \rfloor$. We deduce that, for any $\delta \in (0, r_{0}/L)$, \begin{equation} \label{eq:13:9:15} \sup_{y \in {\mathcal I}(\delta)} \sup_{t \in [ 0,T]} p(t,y) \leq (1-c'')^{( \ln[r_{0}/ \delta]/ \ln L) -1} \sup_{z \in {\mathcal I}(\epsilon/4)}\sup_{t \in [0,T]} p(t,z). \end{equation} Following \eqref{eq:4:9:6}, we know that \begin{equation} \label{eq:18:10:2} \sup_{z \in {\mathcal I}(\epsilon/4)}\sup_{t \in [0,T]} p(t,z) \leq \sup_{z \in {\mathcal I}(\epsilon/4)} \sup_{t \in [0,T]} \left[ \frac{c_{T}}{\sqrt{t}} \exp \left( - \frac{[z- \vartheta_{t}^{x_{0}}]^2}{c_{T} t} \right) \right], \end{equation} for some constant $c_{T}$ only depending upon $T$ and $K$ and where $(\vartheta_{t}^{x_{0}})_{0 \leq t \leq T}$ stands for the solution of the ODE \begin{equation*} \frac{d \vartheta}{d t} = b(\vartheta_{t}) + \alpha e'(t), \quad t \in [0,T] \ ; \quad \vartheta_{0} = x_{0}. \end{equation*} Pay attention that we here use the same notation as in \eqref{eq definition vartheta} for the solution of the above ODE but here $e(t)$ is not given as some $\mathbb E(M_{t})$. Actually, we feel that there is no possible confusion here. Notice also that $e$ is fixed and does not depend upon the initial condition $x_{0}$. 
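As an aside, the geometric factor in \eqref{eq:13:9:15} indeed converts into a power of $\delta$, since $(1-c'')^{\ln[r_{0}/\delta]/\ln L} = (\delta/r_{0})^{-\ln(1-c'')/\ln L}$. This identity can be checked numerically; the values of $c''$, $L$ and $r_{0}$ below are hypothetical (any $c'' \in (0,1)$, $L>1$ and $0<\delta<r_{0}$ work):

```python
import math

# Hypothetical constants for the identity check.
c2, L, r0 = 0.3, 4.0, 0.25
eta = -math.log(1.0 - c2) / math.log(L)

# For each delta, compare the geometric factor (1 - c'')^{ln(r0/delta)/ln L}
# with the power (delta/r0)^eta; the two expressions coincide.
pairs = []
for delta in (1e-2, 1e-3, 1e-4):
    lhs = (1.0 - c2) ** (math.log(r0 / delta) / math.log(L))
    rhs = (delta / r0) ** eta
    pairs.append((lhs, rhs))
```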
By the comparison principle for ODEs and then by Gronwall's Lemma, we deduce from the fact that $e$ is $(1/2)$-H\"older continuous that \begin{equation*} \vartheta_{t}^{x_{0}} \leq \vartheta_{t}^{1-\epsilon} \leq \bigl[ 1 - \epsilon + \Lambda t + {\mathcal B} t^{1/2} \bigr] \exp(\Lambda t), \quad t \in [0,T]. \end{equation*} Using the above inequality, we can bound the right-hand side in \eqref{eq:18:10:2}. Precisely, the above inequality says that the exponential term in the supremum decays exponentially fast as $t$ tends to $0$ so that the term inside the supremum can be bounded when $t$ is small; when $t$ is bounded away from $0$, the term inside the supremum is bounded by $c_{T}/\sqrt{t}$. It is plain to deduce that \begin{equation} \label{eq:12:9:26b} \sup_{z \in {\mathcal I}(\epsilon/4)}\sup_{t \in [0,T]} p(t,z) \leq c_{T}, \end{equation} for a new value of $c_{T}$, possibly depending on $\epsilon$ as well. Therefore, for $\delta \in (0,r_{0}/L)$, \eqref{eq:13:9:15} yields \begin{equation*} \sup_{y \in {\mathcal I}(\delta)} \sup_{t \in [ 0,T]} p(t,y) \leq \frac{c_{T}}{(1-c'')} \left( \frac{\delta}{r_{0}} \right)^{\eta}, \end{equation*} with $\eta = - \ln(1-c'')/ \ln L $. This proves \eqref{eq:12:9:27} for $y \in (1-r_{0}/L,1)$. Note that $\eta$ is here independent of $T$, contrary to what is indicated in the statement of Lemma \ref{lem:gradientbd:41}. However, we feel it is simpler to indicate $T$ in $\eta_{T}$ as the constant ${\mathcal B}$ in the sequel will be chosen in terms of $T$ thus making $\eta$ depend on $T$. Using \eqref{eq:12:9:26b}, we can easily extend the bound to any $y \in (1-\epsilon/4,1)$ by modifying if necessary the parameters $\mu_{T}$ and $\eta_{T}$ therein. This completes the proof. \end{proof} \subsection{Bound for the gradient} Here is the final step to complete the proof of Theorem \ref{thm:gradientbd}: \begin{proposition} \label{prop:gradientbd:5} Let $\epsilon \in (0,1)$, $T >0$ and ${\mathcal B} >0$. 
Moreover, let $(\chi_{t})_{0 \leq t \leq T}$ denote the solution to the SDE \begin{equation*} d \chi_{t} = b(\chi_{t}) dt + \alpha e'(t) dt + dW_{t}, \quad t \in [0,T], \quad \chi_{0}=x_0, \end{equation*} for some continuously differentiable non-decreasing deterministic mapping $[0,T] \ni t \mapsto e(t)$ satisfying \begin{equation*} e(0)=0 \quad ; \quad e(t) - e(s) \leq {\mathcal B} (t-s)^{1/2}, \quad 0 \leq s\leq t\leq T. \end{equation*} Then there exists a constant ${\mathcal M}_{T} >0$, only depending upon $T$, ${\mathcal B}$, $\epsilon$, $K$ and $\Lambda$, such that, for any initial condition $x_{0} \leq 1 - \epsilon$ and any integer $n$ such that $n \geq \lceil 4/\epsilon \rceil$, \begin{equation*} \vert \partial_{y} p(t,1) \vert \leq \frac{{\mathcal M}_{T} n^{-\eta_{T} }}{1 - \exp[-{\mathcal M}_{T}^{-1} (1+\alpha C_{T}) n^{-1}]} (1+\alpha C_{T}), \quad t \in [0,T], \end{equation*} where $p(t, y)$ is the density of $\chi_t$ killed at $1$ as in Lemma \ref{lem:killedprocess:1}, $\eta_T$ is as in Lemma \ref{lem:gradientbd:41}, and \begin{equation*} C_{T} = \sup_{0 \leq t \leq T} e'(t). \end{equation*} \end{proposition} \begin{proof} We consider the barrier function \begin{equation} \label{eq:5:10:13} q(t,y) = \Theta \exp( Kt) \bigl[1 - \exp ( \gamma (y-1) )\bigr], \quad t \geq 0, \ y \in \mathbb R, \end{equation} where $\gamma$ and $\Theta$ are free nonnegative parameters. Then, for $t>0$ and $y<1$, \begin{equation*} \begin{split} &\partial_{t} q(t,y) + \bigl( b(y) + \alpha e'(t) \bigr) \partial_{y} q(t,y) - \frac{1}{2} \partial^2_{yy} q(t,y) \\ &= \Theta \exp(Kt) \exp(\gamma (y-1)) \bigl( - (b(y) + \alpha e'(t)) \gamma + \frac{1}{2} \gamma^2 \bigr) + K q(t,y).
\end{split} \end{equation*} Keeping in mind that $\sup_{0 \leq t \leq T} e'(t) = C_{T}$ and choosing \begin{equation}\label{eq 17 10 defgamma} \gamma = 2 ( \max(m,1) + \alpha C_{T}), \end{equation} where \(m=\sup_{0\leq z\leq 1}|b(z)|\) as before, we obtain, for $t \in [0,T]$ and $y \in (0,1)$, \begin{equation*} - \bigl(b(y) + \alpha e'(t)\bigr) \gamma + \frac{1}{2} \gamma^2 \geq - 2 ( \max(m,1) + \alpha C_{T})^2 + 2 ( \max(m,1) + \alpha C_{T})^2 = 0. \end{equation*} Thus, for $t \in [0,T]$ and $y \in (0,1)$, \begin{equation*} \partial_{t} q(t,y) + \bigl( b(y) + \alpha e'(t) \bigr) \partial_{y} q(t,y) - \frac{1}{2} \partial^2_{yy} q(t,y) \geq K q(t,y) \geq - b'(y) q(t,y), \end{equation*} which reads \begin{equation} \label{eq:26} \partial_{t} q(t,y) + \partial_{y} \bigl[ \bigl( b(y) + \alpha e'(t) \bigr) q(t,y) \bigr] - \frac{1}{2} \partial^2_{yy} q(t,y) \geq 0. \end{equation} For a given integer $n \geq \lceil 4/\epsilon \rceil$, we choose $\Theta$ as the solution of \begin{equation} \label{eq:5:10:14} \Theta \left[ 1- \exp\left(- \frac{2 ( \max(m,1) + \alpha C_{T})}{n}\right) \right] = \mu_{T} n^{-\eta_{T}}, \end{equation} with $\mu_{T}$ and $\eta_{T}$ as in the statement of Lemma \ref{lem:gradientbd:41}. Pay attention that the factor in the left-hand side cannot be $0$ as $\max(m,1) >0$. Notice also $q$ thus depends upon $n$. By Lemma \ref{lem:gradientbd:41}, we deduce that \begin{equation*} q\left(t,1-\frac{1}{n}\right) \geq p\left(t,1-\frac{1}{n}\right), \quad 0 \leq t \leq T. \end{equation*} Now, we can apply the comparison principle for PDEs (see \cite[Chap. IX, Thm. 9.7]{lieberman}). Indeed, we also observe that $q(0,y) \geq p(0,y)=0$ for $y \in [1-1/n,1]$ and $q(t,1)=p(t,1) = 0$ for $t \in [0,T]$. Therefore, by \eqref{eq:26}, we have \begin{equation} \label{eq:5:10:12} p(t,y) \leq q(t,y), \quad t \in [0,T], \ y \in \left[1- \frac{1}{n},1\right]. 
\end{equation} Since $p(t,1)=0=q(t, 1)$, we deduce \begin{equation} \label{eq:gradientbd:50} \vert \partial_{y} p(t,1) \vert \leq \vert \partial_{y} q(t,1) \vert = \frac{2 \mu_{T} ( \max(m,1) + \alpha C_{T}) n^{-\eta_{T}}}{1- \exp[- 2 (\max(m,1) + \alpha C_{T})/n]} \exp(Kt). \end{equation} \end{proof} We now complete the proof of Theorem \ref{thm:gradientbd}. We make use of Proposition \ref{prop:differentiability:Gamma}. Recall \eqref{Markov property 2} \begin{equation*} e'(t) = - \int_{0}^{t}\frac{1}{2}\partial_{y}p^{(0,s)}(t-s,1) e'(s) ds -\frac{1}{2}\partial_{y}p(t,1), \quad t \in [0,T], \end{equation*} where $p$ represents the density of the process $X$ killed at $1$ and $p^{(0,s)}$ represents the density of the process $X^{{\sharp_{s}}}$ driven by $e^{\sharp_{s}}=e(\cdot+s)-e(s)$ (see \eqref{eq:holderbd:30}) killed at $1$ with $X_{0}^{\sharp_{s}}=0$ as initial condition. By Proposition \ref{prop:holderbd:1} and Lemma \ref{lem:gradientbd:1'}, we know that, for a given $s \in [0,T)$ and for the prescribed values of $\alpha$, the mapping $[0,T-s] \ni r \mapsto e^{\sharp_{s}}(r)$ is $1/2$-H\"older continuous, the H\"older constant only depending upon $T$, $\alpha$, $\epsilon$, $K$ and $\Lambda$ (Proposition \ref{prop:holderbd:1} permits to bound the increments of $e^{\sharp_{s}}$ on small intervals and Lemma \ref{lem:gradientbd:1'} gives a trivial bound for the increments of $e^{\sharp_{s}}$ on large intervals). Therefore, by Proposition \ref{prop:gradientbd:5}, we know that \begin{equation} \label{eq:13:9:12} \vert \partial_{y}p^{(0,s)}(t-s,1) \vert \leq \frac{{\mathcal M}_{T} n^{-\eta_{T} }}{1 - \exp[-{\mathcal M}_{T}^{-1} (1+ \alpha C_{T}) n^{-1}]} (1+ \alpha C_{T}), \quad t \in [s,T], \end{equation} for $n \geq \lceil 4/\epsilon \rceil$ and for some constant ${\mathcal M}_{T}$ only depending upon $T$, $\alpha$, $\epsilon$, $K$ and $\Lambda$. The same bound also holds true for $\partial_{y}p(t,1)$. 
We deduce that, for any $t \in [0,T]$ and any $n$ such that $n \geq \lceil 4/\epsilon \rceil$, \begin{equation*} e'(t) \leq \frac{ {\mathcal M}_{T} n^{-\eta_{T}}}{1- \exp[- {\mathcal M}_{T}^{-1} (1+ \alpha C_{T}) n^{-1} ]}(1+ \alpha C_{T}) \frac{e(T)+1}{2}. \end{equation*} By Lemma \ref{lem:gradientbd:1'}, we have a bound for $e(T)=\mathbb E(M_{T})$, which means that we can bound $(e(T)+1)/2$ in the right-hand side above by modifying the constant ${\mathcal M}_{T}$. Recalling \begin{equation*} C_{T} = \sup_{0 \leq t \leq T} e'(t), \end{equation*} we deduce that \begin{equation} \label{eq:5:10:10} C_{T} \bigl( 1 - \exp[ - {\mathcal M}_{T}^{-1} (1 + \alpha C_{T}) n^{-1} ] \bigr) \leq {\mathcal M}_{T} (1+ \alpha C_{T}) n^{-\eta_{T}}. \end{equation} Choosing $n$ large enough such that the right hand side is less than $(1+ \alpha C_{T})/2$ (so that $n$ depends on $T$) and multiplying by $\alpha$, we get (since $\alpha \in (0,1)$): \begin{equation*} \frac{\alpha C_{T}}{2} \leq \frac{1}{2} + \alpha C_{T} \exp[- {\mathcal M}_{T}^{-1} (1+\alpha C_{T}) n^{-1} ]. \end{equation*} This shows that $\alpha C_{T}$ must be bounded in terms of ${\mathcal M}_{T}$ and $n$. Precisely, we have \begin{equation*} \alpha C_{T} \leq 1 + 2 \sup_{r \geq 0} \bigl[ r \exp[- {\mathcal M}_{T}^{-1} (1+r) n^{-1} ] \bigr]:=R < + \infty. \end{equation*} By \eqref{eq:5:10:10}, we deduce that \begin{equation*} C_{T} \leq \sup_{0 \leq r \leq R} \left[\frac{ {\mathcal M}_{T} (1+ r) n^{-\eta_{T}}}{ 1 - \exp[ - {\mathcal M}_{T}^{-1} (1 + r) n^{-1} ] }\right], \end{equation*} which is independent of $\alpha$ (for $\alpha \in (0,\alpha_{0}]$), as required. \hfill $\quad \Box$ \bigskip \section{Proofs of Theorem \ref{solution up to T}} \label{section: Existence and uniqueness for all time} In this section we put everything together to arrive at our goal, which is the proof of Theorem \ref{solution up to T}. We first need the following lemma, which is a corollary of Theorem \ref{thm:gradientbd}. 
The point is that the result will allow us to re-apply the fixed point result on successive time intervals, since it guarantees that the conditions of the fixed point result are satisfied at the final point of any interval on which we know there is a solution. \begin{lem} \label{density bound} For any $T>0$, initial condition $X_{0}=x_{0} <1$, and $\alpha<\alpha_0$, where $\alpha_0=\alpha_0(x_0)$ is as in Theorem \ref{thm:gradientbd}, there exists a constant $C_{\rm den}(T)$ depending only on $T$, $x_0$, $K$ and $\Lambda$ such that any solution to \eqref{simplified eq} on $[0, T]$ satisfies \[ \frac{1}{dy}\mathbb{P}\left(X_{t}\in dy\right)\leq C_{\rm den}(T)(1-y), \] for all $y \in (1-\epsilon/8,1)$ and $t \in [0,T]$, with $\epsilon=\min(1,1-x_{0})$. \end{lem} \begin{proof} We assume that $(X_{t})_{0\leq t\leq T}$ is a solution to \eqref{simplified eq} with $X_{0}=x_0$ up until time $T$, and set $e(t) = \mathbb E(M_t)$. Following the notation of Section \ref{section: Existence and uniqueness in small time} (see also the last part of the proof of Theorem \ref{thm:gradientbd}), for $y\leq1$ and $t\leq T$, let \begin{equation*} \begin{split} &p(t,y):=\frac{1}{dy}\mathbb{P}\left(X_{t}\in dy,t < \tau_{1}\right), \\ &p^{(0,s)}(t,y) := \frac{1}{dy}\mathbb{P}\left(X_{t}^{\sharp_{s}}\in dy,t < \tau_{1}^{\sharp_{s}} \vert X_{0}^{\sharp_{s}}=0\right). \end{split} \end{equation*} By Theorem \ref{thm:gradientbd}, we know that $e$ is ${\mathcal M}_{T}$-Lipschitz continuous, so that by \eqref{eq:5:10:12}, \begin{equation*} p(t,y) \leq q(t,y), \quad t \in [0,T], \ y \in \left[1- \frac{1}{n},1\right], \end{equation*} where $n$ stands for $\lceil 4/\epsilon \rceil$ and $q$ is given by \eqref{eq:5:10:13}, with \(\gamma\) and \(\Theta\) being fixed by \eqref{eq 17 10 defgamma} and \eqref{eq:5:10:14}, with $C_{T} = {\mathcal M}_{T}$. 
By the specific form of $q$, this says that there exists a constant $C_{T}'$, depending only on $T$, $x_0$, $K$ and $\Lambda$, such that \begin{equation*} p(t,y) \leq C_{T}' (1-y), \quad t \in [0,T], \ y \in \left[1- \frac{\epsilon}{8},1\right], \end{equation*} using the elementary inequality $1 - \exp(-x) \leq x$ for $x\in \mathbb R$. Clearly, the same argument applies to $p^{(0,s)}(t-s,y)$, i.e. \begin{equation*} p^{(0,s)}(t-s,y) \leq C_{T}' (1-y), \quad 0 \leq s < t \leq T, \ y \in \left[1- \frac{\epsilon}{8},1\right]. \end{equation*} Now, following the proof of \eqref{eq:holderbd:11}, we get for $t \in [0,T]$ and $y \in [1-\epsilon/8,1]$, \begin{equation} \label{eq:10:11:12} \begin{split} \frac{1}{dy} \mathbb P( X_{t} \in dy) &= p(t,y) + \int_{0}^t p^{(0,s)}(t-s,y) e'(s) ds \leq C_{T}' ( 1+ e(T) ) (1-y), \end{split} \end{equation} where we use Lemma \ref{lem:killedprocess:1} for justifying the passage to the density in \eqref{eq:holderbd:11}. By Lemma \ref{lem:gradientbd:1'}, this completes the proof. \end{proof} Finally, we can then prove the main result of the present paper: \begin{proof}[Proof of Theorem \ref{solution up to T}:] We would like a solution up until fixed time $T>0$. The idea is to iterate the fixed point result (Theorem \ref{fixed point}), which is possible thanks to Lemma \ref{density bound}. Indeed, by Theorem \ref{fixed point}, we have that there exists a solution to \eqref{simplified eq} with $X_0 = x_0$ up until some small time $T_{1}>0$. By Lemma \ref{density bound}, we thus have that \begin{equation} \label{eq:estim_induction} \frac{1}{dy}\mathbb{P}(X_{T_{1}}\in dy)\leq C_{\rm den}(T_{1})(1-y), \quad y \in \left[1-\frac{\epsilon}{8},1\right], \end{equation} where $\epsilon = \min(1-x_{0},1)$. If $T_{1}\geq T$ we are done. If not, we have the above density bound for $(1/dy)\mathbb{P}(X_{T_{1}}\in dy)$. We also know from \eqref{eq:10:11:12} and Lemma \ref{lem:killedprocess:1} that the density of $X_{T_{1}}$ is differentiable at $y=1$. 
Therefore, we can apply Theorem \ref{fixed point} again to see that there exists a solution to \eqref{simplified eq} on some interval $[T_{1},T_{1}+T_{2}]$ starting from $X_{T_{1}}$. As $T_{2}$ only depends upon $X_{T_{1}}$ through $\epsilon$ (this is the statement of Theorem \ref{fixed point}) and $C_{\rm den}(T_{1})$, and as these quantities can be bounded in terms of $T$, $\epsilon$, $K$, $\Lambda$ only, we then see that \[ T_{2} \geq \phi(T) \] for some constant $\phi(T)$ depending upon $T$, $\alpha$, $\epsilon$, $K$, $\Lambda$ only. Now we know that there exists a solution to \eqref{simplified eq} with $X_0 = x_{0}$ on $[0,T_{1}+T_{2}]$. If $T_{1}+T_{2}>T$ we are done. If not, by Lemma \ref{density bound} once again, \[ \frac{1}{dy} \mathbb{P}(X_{T_{1}+T_{2}}\in dy)\leq C_{\rm den}(T_{1}+T_{2})(1-y), \quad y \in \left[1-\frac{\epsilon}{8},1\right], \] and we can then repeat the argument $n$ times to get a solution up until time $T_{1}+\dots+T_{n}$, where $T_{k}\geq\phi(T)$ for all $k\geq2$, i.e., each time step is of size at least $\phi(T)$. It is then clear that there exists $n\geq1$ such that $T_{1}+\dots+T_{n}\geq T$, which completes the existence part. Uniqueness of the solution proceeds in the same way. Given another solution $(X_{t}',M_{t}')_{0\leq t \leq T}$ on the interval $[0,T]$ in the sense of Definition \ref{definition solution}, it must satisfy the \textit{a priori} estimates in the statements of Theorem \ref{thm:gradientbd} and Lemmas \ref{lem:gradientbd:1'} and \ref{density bound}. In particular, dividing the interval $[0,T]$ into subintervals of length $\phi(T)$ (except possibly the last interval, whose length might be less than $\phi(T)$), with the same $\phi(T)$ as above, we can apply the contraction property in Theorem \ref{fixed point} on each subinterval iteratively.
Precisely, choosing $A_{1}$ accordingly in Theorem \ref{fixed point}, we prove by induction that the two solutions coincide on $[0,\phi(T)]$, $[0,2\phi(T)]$, and so on. \end{proof} \subsection*{Acknowledgements} The authors would like to thank D. Talay for his involvement in a number of fruitful discussions. \bibliographystyle{siam}
\section{Introduction} Deep learning techniques have received notable attention in various vision tasks, such as image classification~\cite{he2016deep}, object detection~\cite{ren2015faster}, and semantic segmentation~\cite{long2015fully}. Yet, the success of deep neural networks heavily relies on a tremendous volume of valuable training images. One possible solution is to collaboratively curate numerous data samples from different parties (\eg, different mobile devices and companies). However, collecting distributed data into a centralized storage facility is costly and time-consuming. Additionally, in real practice, decentralized image data should not be directly shared, due to privacy concerns or legal restrictions~\cite{gdpr,hippa}. In this case, conventional centralized machine learning frameworks fail to satisfy the data privacy protection constraint. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig/1.pdf} \caption{Illustration of different parameter decoupling manners for model personalization in Federated Learning. The previous approaches combine local and global parameters in a layer-wise mechanism, including LG-Fed~\cite{lgfed} in low-level input layers (a) and FedPer~\cite{fedper} in high-level output layers (b). Instead, we achieve model personalization via channel-wise decoupling (c).} \label{fig:demo} \end{figure} Therefore, data-private distributed training paradigms, especially Federated Learning (FL), have received increasing popularity~\cite{fedavg,fl,Sun_2021_CVPR,Zhuang_2021_ICCV,Zhang_2021_ICCV,Li_2021_CVPR,Liu_2021_CVPR,Gong_2021_ICCV,Guo_2021_CVPR}. To be more specific, in FL, a shared model is globally trained with an orchestration of local updates within data stored at each client. A pioneering FL algorithm, Federated Averaging (FedAVG), aggregates parameters at the central server by communication across clients once per global epoch, without explicit data sharing~\cite{fedavg}.
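The FedAVG server step described above amounts to a data-size-weighted average of the client parameters. A minimal sketch of this aggregation (the parameter names and client sizes below are illustrative, not from the paper):

```python
import numpy as np

def fedavg_aggregate(client_states, client_sizes):
    """One FedAVG server round: average each parameter tensor across
    clients, weighted by the number of local training samples."""
    total = float(sum(client_sizes))
    return {
        name: sum(state[name] * (n / total)
                  for state, n in zip(client_states, client_sizes))
        for name in client_states[0]
    }

# Two hypothetical clients holding 100 and 300 samples respectively.
client_a = {"w": np.array([1.0, 2.0])}
client_b = {"w": np.array([5.0, 6.0])}
global_state = fedavg_aggregate([client_a, client_b], [100, 300])
# global_state["w"] is 0.25 * [1, 2] + 0.75 * [5, 6] = [4.0, 5.0]
```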
Compared with local training, federation over a larger scale of training data has demonstrated its superiority in boosting the generalization ability on unseen data, thanks to the orchestration of distributed private data~\cite{fedavg,fedmixed}. However, data heterogeneity is one of the most fundamental challenges faced by FL. The concept of independent and identically distributed (IID) data is clear, while data can be non-IID in many ways, \eg, feature skew, label distribution skew, or concept shift~\cite{openchallenge}. Previously, sharp performance degradation was observed for FedAVG with unbalanced and non-IID data. This ill effect is attributed to the weight divergence, which can be quantified by the earth mover’s distance between distributions over classes \cite{fednoniid}. Although each client can train a private model locally by optimizing its own objective with no information exchange among clients, this would inevitably result in overfitting and a poor generalization ability on new samples. As suggested in~\cite{fedavg}, simply sharing a small subset of data globally greatly enhances the generalization of FedAVG. However, this scheme cannot be directly applied to real-world tasks due to the violation of privacy constraints. Consequently, researchers have sought to train a collection of models, each stylized for one local distribution, to enable stronger performance for each participating client without requiring any data sharing~\cite{fednoniid}; this is known as personalized federated learning (PerFL)~\cite{tan2021towards}. Various approaches have been proposed to accomplish model personalization in FL~\cite{tan2021towards,meta,multi_task,fedmixed}. Among these different paradigms, one popular solution is to directly assign personalized parameters to each local client. For this line of methods, the private personalized parameters are trained locally and not shared with the central server.
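The parameter-decoupling paradigm just described can be sketched as a partial aggregation step, in which only a designated subset of parameters is communicated and averaged while the personalized ones remain on each client. The parameter names below are hypothetical:

```python
import numpy as np

def aggregate_shared(client_states, shared_keys, client_sizes):
    """Average only the shared parameters across clients; personalized
    parameters never leave the clients and are kept unchanged."""
    total = float(sum(client_sizes))
    shared_avg = {
        k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
        for k in shared_keys
    }
    # Each client keeps its private parameters and receives the shared average.
    return [dict(state, **shared_avg) for state in client_states]

# Hypothetical two-part model: "backbone" is shared, "head" is personalized.
states = [{"backbone": np.array([0.0]), "head": np.array([1.0])},
          {"backbone": np.array([2.0]), "head": np.array([9.0])}]
new_states = aggregate_shared(states, ["backbone"], [1, 1])
```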
Existing works have made attempts to achieve personalization by assigning personalized parameters in either the top layers~\cite{fedper} or the bottom layers~\cite{lgfed}. However, these approaches usually require prior knowledge to determine which layers should be personalized. More critically, we observe that existing PerFL~approaches suffer performance degradation and fail to achieve consistent generalization over comprehensive settings of data heterogeneity~\cite{quinonero2009dataset}. Additionally, existing layer-wise personalization approaches cannot effectively handle the discrepancy between the learned local and global model representations due to the weight divergence \cite{fednoniid}. The inferior performance of some local clients motivates us to seek a more generic yet efficient combination of the local and global information. In light of these challenges, we propose \textbf{CD\textsuperscript{2}-pFed}, a novel Cyclic Distillation-guided Channel Decoupling framework for model personalization in FL. As shown in \figref{fig:demo}, different from previous layer-wise personalization approaches, \eg, FedPer~\cite{fedper} and LG-Fed~\cite{lgfed}, the proposed novel \emph{channel decoupling} paradigm dynamically decouples the parameters along the channel dimension for personalization. By employing learnable personalized weights at all layers, our channel decoupling paradigm no longer requires heuristics for designing specific personalization layers. More importantly, our method achieves model personalization for both low-level and high-level layers, which facilitates tackling feature heterogeneity, distribution skew, and concept shift. To bridge the semantic gap between the visual representations learned by the decoupled channels, we further propose a novel \emph{cyclic distillation} scheme that mutually distills the local and global model representations (\ie, the soft predictions produced by the private and shared weights).
Benefiting from the distilled knowledge, our channel decoupling framework enables synergistic information exchange between the global and local model training, thereby preventing biased local model training on non-IID data. Extensive experimental results on both heterogeneous data and external unseen samples~\cite{lgfed} demonstrate that our method largely improves the generalization of FedAVG with negligible additional computation overhead. Below, we summarize the major contributions of this work. \begin{itemize} \item We propose a novel \emph{channel decoupling} paradigm to decouple the global model at the channel dimension for personalization. Instead of using personalization layers for tackling either feature or label distribution skew, our approach provides a unified solution to address a broad range of data heterogeneity. \item To further enhance the collaboration between private and shared weights in channel decoupling, we design a novel \emph{cyclic distillation} scheme to narrow the divergence between them. \item We compare our method with previous state-of-the-art PerFL~approaches on four benchmark datasets, including synthesized and real-world image classification tasks, with different kinds of heterogeneity. Results demonstrate the superiority of our method over these approaches. \end{itemize} \section{Related Work} \subsection{Data Heterogeneity} Federated Averaging (FedAVG) is a prevailing federated learning algorithm to train a global model with distributed data \cite{fedavg}. Under the assumption of unbalanced yet IID (independent and identically distributed) data, FedAvg has achieved notable empirical success in terms of robustness and performance. However, given the variety and diversity of real-world data, the IID condition can rarely be ensured. Instead, statistical data heterogeneity is the more general case, where the data distribution of local clients deviates significantly from the global distribution.
It results in sharp performance degradation, \eg, up to 11.0\% for MNIST and 51.0\% for CIFAR-10 \cite{fednoniid}, where the common predictor does not generalize well on local data. One of the most fundamental challenges in training a robust FL model is the presence of non-IID data. Concretely, the underlying distribution for an arbitrary pair of clients is very likely to differ. There are various formulations of non-IID data, including feature distribution skew, label distribution skew, and concept shift \cite{openchallenge}. To tackle the data heterogeneity, Li \etal proposed an optimization scheme, namely FedProx, to re-parametrize FedAvg with variable amounts of work to be performed locally across devices \cite{fedprox}. FedBN employs local batch normalization to alleviate the feature distribution skew before the model aggregation \cite{fedbn}. \subsection{Personalized Federated Learning} Three major challenges restrict the generalization ability of the federated global model on local data: 1) device heterogeneity, 2) data heterogeneity due to the non-IID distribution, and 3) model heterogeneity to adapt to the local environment \cite{whypfed}. Among those, data heterogeneity is the most pressing practical issue. Yet, the privacy protection mechanisms in conventional FL conflict with achieving higher performance for each individual user \cite{whypfed2}. Subsequently, Personalized Federated Learning (PFL) has received considerable attention from researchers to cope with the above three challenges. In PFL, the global model is personalized for each local client and serves as an intermediate paradigm between pure local training and FL \cite{pfl}. Leveraging a personalized model per client, PFL can integrate the client’s own dataset and the orchestration of data from other clients into the training process.
There are various techniques to adapt the global model for personalization \cite{survey}, including transfer learning \cite{transfer}, multi-task learning \cite{multi_task}, meta-learning \cite{meta}, knowledge distillation \cite{distill}, and network decoupling \cite{fedper,lgfed}. We mainly focus on the network decoupling methods, where the global network is decoupled into personalized layers, which reside locally, and global layers. For example, FedPer splits a neural network architecture into base layers, which are trained centrally by FedAvg, and top personalization layers, which are trained locally on each client \cite{fedper}. Conversely to FedPer, LG-Fed personalizes the bottom layers, while keeping the top layers shared across all involved clients \cite{lgfed}. FedPer shows its superiority with an observable label skew such as on FLICKR-AES \cite{pia}, while LG-Fed excels with data skews such as the CIFAR non-IID split. However, both skews exist in real-world tasks. Therefore, in this work, we attempt to bridge the gap between top-layer and bottom-layer personalization with a unified channel decoupling scheme. \subsection{Knowledge Distillation} The key idea of knowledge distillation (KD) is to transfer the dark knowledge from a pre-trained teacher model to a lightweight student network by learning its soft predictions, intermediate features or attention maps~\cite{kd,quinonero2009dataset,kd3}. KD has broad applications in the machine learning and computer vision fields, including transfer learning, semi-supervised learning, and reinforcement learning \cite{kd0,kd2,kd4,zhang2020distilling,xu2020knowledge}. KD has also achieved remarkable performance as a regularization scheme.
For example, Yun~\etal~\cite{cskd} distilled the predictive distribution between samples of the same label to mitigate overconfident predictions and reduce intra-class variations in an image classification task. In FL scenarios, an ensemble distillation scheme was proposed to replace the aggregation for model fusion~\cite{kdfl}. Besides, Li~\etal~\cite{fedmd} adopted knowledge distillation to personalize the global model by translating knowledge between participants. \section{Methodology} \subsection{Problem Formulation} We consider a set of $K$ clients, which are all connected to a central server. Moreover, each client only has access to its local data, denoted as $\mathcal{D}_i$, with no data sharing between clients. In a data heterogeneous setting, the underlying distributions of the $\mathcal{D}_i$, denoted as $\mathcal{P}_i$, are not identical, \ie, $\mathcal{P}_i \not= \mathcal{P}_j$ for $i \not= j$. Specifically, there are three common categories to depict the non-IID characteristic in FL~\cite{openchallenge}: 1) feature distribution skew (covariate shift), \ie, $\mathcal{P}_i(x) \not= \mathcal{P}_j(x)$; 2) label distribution skew (prior probability shift), \ie, $\mathcal{P}_i(y) \not= \mathcal{P}_j(y)$; and 3) same label but different features (concept drift), \ie, $\mathcal{P}_i(x|y) \not= \mathcal{P}_j(x|y)$. The goal of our work is to train a collection of $K$ models to adapt to the local datasets without exchanging local data with other parties. The network at the $i$-th client ($i\in\{1,\cdots,K\}$) is composed of private personalized parameters $w_i$ and globally shared parameters $w_0$. More formally, we formulate the loss function corresponding to the $i$-th client as $F_i:\mathbb{R}^d\to\mathbb{R}$, and then the overall objective in personalized federated learning is defined as follows, \begin{equation} \min_{\{w_i\}_{i=0}^K} F(w_0,w_1,\cdots,w_K) = \sum_{i=1}^K \alpha_i \cdot F_i(w_0,w_i).
\end{equation} The balancing weight $\alpha_i$ depends on the scale of the private dataset, \ie, $\alpha_i = \frac{|\mathcal{D}_i|}{\sum_{j} |\mathcal{D}_j|}$. In this scenario, we consider supervised learning, leading to \begin{equation} F_i(w_0,w_i) = \mathbb{E}_{(\textbf{x}_j,y_j) \sim \mathcal{P}_i} \left[ l_i(\textbf{x}_j,y_j;w_0,w_i)\right], \label{eq:local_loss} \end{equation} where $l_i$ measures the sample-wise loss between the prediction of the network parameterized by $(w_0, w_i)$ and the ground truth label $y_j$ when given the input image $\textbf{x}_j$. \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{fig/3.pdf} \caption{Illustration of Channel Decoupling with progressive model personalization.} \label{fig:kd} \end{figure} \subsection{Channel Decoupling for Model Personalization} As shown in Figure~\ref{fig:framework}, we propose a vertical channel-wise decoupling framework to personalize the global model for non-IID federated learning. Concretely, we assign an adaptive proportion of learnable personalized weights at each layer of the target model, from top to bottom. In this fashion, our framework is intended to achieve higher personalization capacity for both simple and complex patterns (\eg, image- and label-level personalization). We define a uniform personalization partition rate $p\in[0,1]$ to determine the precise proportion of personalized channels in each layer. This $p$ proportion of channel parameters is trained locally, without aggregation by the central server. Subsequently, these private weights vary from each other and are written as $w_i$, where the subscript is associated with the client ID. The remaining $(1-p)$ proportion of shared weights, denoted as $w_0$, is trained with common FL algorithms, such as FedAvg~\cite{fedavg}. A larger value of $p$ represents a higher degree of personalization.
Therefore, the case $p=0$ degenerates into the conventional FedAvg \cite{fedavg} with no model personalization, and conversely $p=1$ denotes a fully local training procedure in the absence of federated communication. One significant benefit of our vertical decoupling strategy, compared with horizontal layer-wise personalization schemes such as LG-Fed \cite{lgfed} and FedPer \cite{fedper}, is that it enables model personalization from the bottom to the top layers, resulting in a more general framework for weight personalization and improving its capacity to address a broader range of data heterogeneity, covering both feature and label distribution heterogeneity. \para{Progressive model personalization.} One key element in our proposed channel decoupling scheme is the determination of the personalized ratio $p$ in each layer. As aforementioned, the personalized ratio $p$ controls the volume of private weights used to learn the local representation, which determines the capacity to learn a good representation on the heterogeneous data. We are motivated to provide a better initialization for the personalization from the globally learned representation. Thus, as shown in Figure~\ref{fig:kd}, instead of a fixed $p$, the model's capability to learn local personalized features is taken into account by a progressive increment scheme. That is, in the initial stage, we set $p$ to a small value to facilitate learning the global representation for faster convergence, and afterwards gradually increase its value with the global epoch number $T$. Similar to learning rate schedules, a variety of schemes exist for the increment, such as cyclical learning rates \cite{smith2017cyclical} and cosine annealing \cite{loshchilov2016sgdr}.
For simplicity, here we apply the linear growth scheme, \ie, \begin{equation} p_t = p \cdot \frac{t}{T} \label{eq:p}, \end{equation} where $T$ is the total global epoch number, $t$ is the current epoch number, and $p$ is the maximum personalization ratio. \subsection{Cyclic Distillation} The backbone neural network is decoupled into personalized weights $w_i$ and shared weights $w_0$, which are then trained simultaneously by optimizing the local supervised loss in Eq. \eqref{eq:local_loss}. However, as the local personalized and global parameters are learned from data with different distributions, the statistics of these parameters diverge, subsequently leading to performance degradation~\cite{fednoniid}. Additionally, explicit consistency regularization between the two parts is absent during the optimization of local supervised objectives in most previous works. To cope with this issue, we make the first attempt at introducing self-distillation into PerFL~to improve the client-side inner communication between the private and shared model weights and thus reduce the gap between the representations learned by the local and global weights. Motivated by inplace distillation~\cite{inplacekd}, the key idea of the proposed Cyclic Distillation is to impose a consistency regularization between $w_i$ and $w_0$ in the local training procedure, as depicted in \figref{fig:framework}. We write the subnets parameterized by $w_i$ and $w_0$ as $g_{w_i}$ and $g_{w_0}$, and the network composed of $(w_i,w_0)$ as $g_{w_i,w_0}$. Notably, $g_{w_i}$ intends to learn a personalized local representation from $\mathcal{D}_i$, whereas $g_{w_0}$ learns a global general representation. For each input sample $\textbf{x}_i$, we collect the predictions $\widetilde{y}_i$, $\widetilde{y}_i^L$, $\widetilde{y}_i^G$ from $g_{w_i,w_0}$, $g_{w_i}$, $g_{w_0}$, respectively.
The overall prediction $\widetilde{y}_i$ minimizes the cross entropy loss $\mathcal{L}_{CE}$ with the ground truth $y_i$. The cyclic distillation loss is defined as: \begin{equation} \mathcal{L}_{CD} = \frac{1}{2}\left(KL(\widetilde{y}_i^L,\widetilde{y}_i^G) + KL(\widetilde{y}_i^G,\widetilde{y}_i^L)\right), \label{eq:kd} \end{equation} where $KL(\cdot,\cdot)$ denotes the Kullback-Leibler (KL) divergence. It imposes a consistency regularization between $w_i$ and $w_0$, guiding the predictions from $w_i$ and $w_0$ to align with each other. Consequently, the overall loss function is \begin{equation} \mathcal{L} = \mathcal{L}_{CE} + \lambda \cdot \mathcal{L}_{CD}, \label{eq:loss} \end{equation} with the balancing coefficient $\lambda$ set to 1 in this work. \paragraph{Temporal average moving for personalized weights.} To stabilize the training performance, we utilize an exponential moving average (EMA) scheme for the local weight updates in the personalized channels $w_i$, yielding smoother training dynamics. We use the superscript $l$ to mark the corresponding local epoch number; then the EMA update of $w_i$ at local epoch $l$ is \begin{equation} w_i^l = \beta_t w_i^{\prime l} + (1-\beta_t) w_i^{l-1}, \label{eq:ema} \end{equation} where $w_i^{\prime l}$ is the raw update from Eq. \eqref{eq:loss}. The smoothing coefficient $\beta_t$ depends on the current global epoch number and follows the ramp-up strategy of previous works \cite{rampup}, \ie, \begin{equation} \beta_t = \begin{cases} \beta \cdot \exp(-5(1 - \frac{t}{t_0})^2), & t \le t_0 \\ \beta, & t > t_0 \end{cases} ,\label{eq:alpha} \end{equation} where $\beta$ is set to 0.5, and $t_0$ is set to 10\% of the total number of federated epochs. \paragraph{Method overview.} We summarize the complete local training procedure at each client in Alg. \ref{alg}. Afterwards, the central server collects all global weights $w_0$ from the clients and adopts FedAvg to aggregate them. \begin{algorithm}[tbp!]
\caption{Local training with CD\textsuperscript{2}-pFed~at client $i$.} \label{alg} \begin{algorithmic}[1] \Require local epoch number $\eta_i$ \Ensure $w_0^t$ \State Download $w_0^{t-1}$ from Central Server \State Update Personalized Ratio $p$ \For{$l=1,2,\cdots,\eta_i$} \State Sample a Batch of Data from $\mathcal{D}_i$ \State Forward and Compute Cross Entropy Loss $\mathcal{L}_{CE}$ \State Compute Cyclic Distillation Loss $\mathcal{L}_{CD}$ in Eq. \eqref{eq:kd} \State Update the Weights \State Adjust Personalized Weights $w_i^l$ with Eq. \eqref{eq:ema} \EndFor \State Upload $w_0^t$ to Central Server \end{algorithmic} \end{algorithm} \section{Experiments} \subsection{Datasets} Focusing on image classification tasks, we use four benchmark datasets to evaluate the proposed CD\textsuperscript{2}-pFed, namely CIFAR-10, CIFAR-100, FLICKR-AES, and a combination of public and private histology images, termed HISTO-FED in this paper. \para{CIFAR-10} contains a total of 60000 color images sized at $32\times32$ in 10 classes, with 5000 training images and 1000 test images per class \cite{cifar}. We focus on a highly non-IID setting, \ie, one characterized as label distribution skew. We follow previous works~\cite{lgfed,fedavg} to assign images from at most $s \in \{2,3,4,5,8,10\}$ classes to each client. A lower $s$ corresponds to a more heterogeneous data distribution. For example, $s=10$ is an IID setting, while $s=2$ is the most heterogeneous data split. We set the client number $K=10$ for CIFAR-10, following the literature \cite{lgfed,fedper}. \para{CIFAR-100} contains 500 training images and 100 testing images per class, with a total of 100 classes \cite{cifar}. Similar to CIFAR-10, the color images are sized at $32\times32$. We set the client number $K=30$ and assign at most $s=40$ classes to each client \cite{fedper}, which is also non-IID (\ie, label distribution skew).
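To make the label-skew protocol concrete, the sketch below partitions a labeled dataset so that each client receives images from at most $s$ classes. The function name, the seeded sampler, and the per-class quota are our illustrative assumptions, not the exact split code used in the experiments.

```python
import random
from collections import defaultdict


def label_skew_split(labels, num_clients, s, seed=0):
    """Partition sample indices so each client sees at most `s` classes.

    `labels` is a list of integer class labels. A smaller `s` yields a more
    heterogeneous (non-IID) split; `s` equal to the number of classes
    recovers an IID-like split.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_clients):
        chosen = rng.sample(classes, s)  # at most s classes for this client
        for y in chosen:
            pool = by_class[y]
            # Simplistic equal quota per class; clients picking the same
            # class may share samples in this toy sketch.
            take = max(1, len(pool) // num_clients)
            client_indices[c].extend(pool[:take])
    return client_indices
```

With `s=2` each client only ever observes two of the ten CIFAR-10 classes, reproducing the most heterogeneous setting described above.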
\para{FLICKR-AES} is used to evaluate the performance of personalized image aesthetics in the literature \cite{pia}. The images are randomly split into 80\% for training and 20\% for testing. Additionally, REAL-CUR is leveraged as an external test set to evaluate the global model representation in the context of real-world personal photo ranking. Images from 14 personal albums, with 197 to 222 images per album, were collected and rated by one user per album \cite{pia}. Due to the personal bias in aesthetic scoring, the heterogeneity is characterized as concept shift, leading to a non-IID data distribution. We use a subset of $K=30$ users as clients, the same setting as in previous work \cite{fedper}. \para{HISTO-FED} is a medical image dataset, consisting of both public and private hematoxylin \& eosin (H\&E) stained histological whole-slide images of human colorectal cancer (CRC) and normal tissue. They are curated from four medical centers. We set the client number $K=3$, where each client uses images from one medical center, and the remaining center is used as the external test set. Client 1 and client 2 use subsets of $N=86$ and $N=50$ slides from the two public datasets NCT-CRC-HE-100K and CRC-VAL-HE-7K, respectively. Each of them has 7180 image patches split from the slides. Client 3 and the external test set have 7000 and 4000 image patches, respectively, curated from a private dataset of $N=20$ and $N=10$ slides. Each image is labeled with one of nine categories. All images involved in this research received appropriate ethical approval. Due to the stain variance \cite{histofed}, the images across clients are highly non-IID, depicted as feature skew, as illustrated in \figref{fig:data}. \begin{figure}[h!]
\centering \includegraphics[width=0.8\linewidth]{fig/4.pdf} \caption{Illustrative examples of stain-variant histology whole slides from different medical centers.} \label{fig:data} \end{figure} \subsection{Experimental Settings} \para{Backbone Architectures.} We adopt the following network backbones for performance evaluation: 1) LeNet-5 \cite{lenet} for CIFAR-10, 2) ResNet-34 \cite{resnet} for CIFAR-100 and FLICKR-AES, and 3) ResNet-32 for HISTO-FED, following the previous works \cite{lgfed,fedper,histofed}. All backbone networks are trained from scratch, without loading any pre-trained weights. \para{Hyper Parameters.} At each local client, we employ a stochastic gradient descent optimizer where the Nesterov momentum and the weight decay rate are set to 0.9 and $5\times10^{-4}$, respectively. The local epoch number is $\eta_i=1$ with batch size $b=128$ for CIFAR-10; $\eta_i=4$, $b=128$ for CIFAR-100; and $\eta_i=4$, $b=4$ for FLICKR-AES and HISTO-FED. The total epoch number $T$ is set to 50. Additionally, in CD\textsuperscript{2}-pFed, we set the maximum smoothing coefficient for EMA $\beta=0.5$ in Eq. \eqref{eq:alpha}, the balancing coefficient in the loss function $\lambda=1$ in Eq. \eqref{eq:loss}, and the maximum personalization ratio in Eq. \eqref{eq:p} to $p=0.5$ on CIFAR-10/100 and HISTO-FED and $p=0.8$ on FLICKR-AES. \para{Comparison Methods.} We compare our CD\textsuperscript{2}-pFed~with FedAvg \cite{fedavg}, local training, LG-Fed \cite{lgfed} and FedPer \cite{fedper}. FedAvg is the conventional federated learning algorithm, where no personalization is involved \cite{fedavg}. Local training trains a collection of $K$ models, one per client, without communication between clients. Personalized federated learning models play an intermediate role between FedAvg and local training, by personalizing the global structure with local unshared weights. In this work, we primarily compare our method with model modification-based personalization schemes, which are most similar to ours.
LG-Fed jointly learns compact local representations on each device with the lower layers and a global model in the top layers across all clients \cite{lgfed}. FedPer designs a base-plus-top personalization layer structure, conversely to LG-Fed, which assigns personalization to the bottom layers. \para{Implementations.} All experiments are conducted on one NVIDIA Tesla V100 GPU with 32Gb memory. The proposed CD\textsuperscript{2}-pFed~is implemented on Pytorch 1.6.0 in a Python 3.7.0 environment. We used a public implementation of FedAvg, LG-Fed, and FedPer for comparison. All the extra hyper-parameters involved in the compared methods are retained at their original settings. \para{Evaluation Metrics.} We use two metrics on CIFAR-10 and CIFAR-100, following previous work \cite{lgfed}. 1) \emph{Local Test Top-1 Classification Accuracy} (\%). We know precisely the client to which the data sample belongs; thus we can choose that client's trained local model for prediction. This metric evaluates the performance of model personalization. 2) \emph{New Test Top-1 Classification Accuracy} (\%). We do not know the client to which the data sample belongs; thus we employ an ensemble of all local models, uploaded to the central server, to derive averaged predictions. This index measures the compatibility between local and global model representations. On FLICKR-AES and HISTO-FED, we use one more index, namely the \emph{External Test Top-1 Classification Accuracy} (\%). Specifically, we use external test samples in addition to the local or new test. These images are potentially sampled from different distributions, and this index is intended to verify the generalization of the global model representation. Notably, we do not perform external validation on CIFAR-10/100 due to the absence of external samples.
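As a concrete reading of the ``new test'' protocol described above, the following minimal sketch averages the softmax outputs of all $K$ local models when the owning client is unknown. The helper names and tensor shapes are our assumptions for illustration, not the authors' released evaluation code.

```python
import numpy as np


def softmax(z):
    # Numerically stable softmax over the last (class) axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def new_test_predict(logits_per_client):
    """Ensemble prediction for the 'new test' metric.

    `logits_per_client` has shape (K, N, C): K local models, N test
    samples, C classes. When the owning client is unknown, the class
    posteriors of all local models are averaged before the argmax.
    """
    probs = softmax(np.asarray(logits_per_client, dtype=float))
    return probs.mean(axis=0).argmax(axis=-1)  # averaged class posterior
```

Averaging probabilities rather than raw logits keeps each local model's contribution on a comparable scale, matching the "averaged predictions" wording in the metric definition.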
\subsection{Experimental Results on Synthesized Data} \para{Effect of Data Heterogeneity.} We first evaluate the performance on CIFAR-10 at different levels of data heterogeneity, as quantified by $s$. As shown in \figref{fig:res_data}, at all degrees of heterogeneity, \ie, all values of $s$, CD\textsuperscript{2}-pFed~consistently outperforms LG-Fed and FedPer. Moreover, the performance gap monotonically increases with the heterogeneity. When $s=10$, \ie, in an IID setting, all methods achieve nearly the same test accuracy. In the rest of this section, we focus on the most non-IID settings. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{fig/5.png} \caption{Effect of data heterogeneity on model personalization.} \label{fig:res_data} \end{figure} \para{Results on CIFAR-10.} As shown in \Cref{tab-cifar}, our proposed PFL framework significantly improves the backbone network by 31.83\% at the highest degree of data heterogeneity, \ie, $s=2$. This empirical success shows the effectiveness of personalization by the channel-wise ensemble. Additionally, compared with the state-of-the-art layer-wise personalization schemes \cite{lgfed,fedper}, our approach achieves both the best local and new classification accuracy, suggesting our model learns more capable local and global representations. Moreover, the superiority in new test accuracy suggests that our scheme also achieves higher generalization on unseen data under personalization, attributed to the equal roles the personalized and shared weights play in FL. \begin{table}[h!] \caption{Comparison of personalized federated learning methods on CIFAR-10 with the highest-heterogeneity non-IID split, \ie, $s=2$. The best results are marked in \textbf{bold}, and results reported in \cite{lgfed} are indicated by *. Two metrics, namely the local test and new test classification accuracy, are used for evaluating model personalization and generalization, respectively.
}\label{tab-cifar} \centering \begin{tabular}{l|c|c} \hline Methods & Local ($\uparrow$) & New ($\uparrow$)\\ \hline FedAvg\cite{fedavg}* & 58.99$\pm$1.50 & 58.99$\pm$1.50 \\ Local Train*& 87.92$\pm$2.14 & 10.03$\pm$0.06 \\ LG-Fed\cite{lgfed}* & 91.77$\pm$0.56 & 60.79$\pm$1.45 \\ FedPer\cite{fedper} & 83.29$\pm$0.98 & 57.77$\pm$1.98 \\ Ours & \textbf{91.82$\pm$0.43} & \textbf{61.31$\pm$1.53} \\ \hline \end{tabular} \end{table} \para{Results on CIFAR-100.} As illustrated in \Cref{tab-cifar100}, a local test accuracy improvement of 28.75\% is achieved with CD\textsuperscript{2}-pFed, showing its effectiveness in model personalization for local datasets with a richer set of categories. Meanwhile, there is a 5.92\% new test accuracy improvement, demonstrating its generalization to unseen data. We also outperform layer-wise personalization methods such as FedPer and LG-Fed. \begin{table}[h!] \caption{Comparison of personalized federated learning methods on CIFAR-100 with non-IID split $s=40$. The best results are marked in \textbf{bold}.}\label{tab-cifar100} \centering \begin{tabular}{l|c|c} \hline Methods & Local ($\uparrow$) & New ($\uparrow$)\\ \hline FedAvg\cite{fedavg} & 29.23$\pm$1.75 & 29.23$\pm$1.75 \\ Local Train& 44.59$\pm$0.90 & 11.98$\pm$0.22 \\ LG-Fed\cite{lgfed} & 56.77$\pm$0.75 & 34.50$\pm$1.02 \\ FedPer\cite{fedper} & 53.24$\pm$2.33 & 30.47$\pm$1.73\\ Ours & \textbf{57.98$\pm$0.64} & \textbf{35.15$\pm$0.56} \\ \hline \end{tabular} \end{table} \subsection{Experimental Results on Real-world Data} \para{Results on FLICKR-AES.} We do not report the local training performance on FLICKR-AES due to the small scale of the local clients' data, which makes local training prone to overfitting. In FLICKR-AES, the label distribution is non-IID, fitting the philosophy of FedPer. By outperforming LG-Fed, FedPer shows the superiority of top-layer personalization in tackling label distribution skew, while LG-Fed suffers inferior performance.
It is worth noticing that LG-Fed only slightly outperforms the baseline FedAvg; this is attributed to the fact that the skew exists in the label distribution, which the personalized bottom layers in LG-Fed struggle to capture. The empirical comparison is summarized in \Cref{tab-aes}, where our framework outperforms state-of-the-art personalization schemes on both the local and external tests. Additionally, CD\textsuperscript{2}-pFed~can significantly reduce the test variance, leading to more stable and robust predictions. Conclusively, CD\textsuperscript{2}-pFed, equipped with both top and bottom personalization, does not suffer from the ill effect observed in LG-Fed when facing label distribution skew. \begin{table}[h!] \caption{Comparison of personalized federated learning methods on FLICKR-AES, and the external validation on REAL-CUR. The best results are marked in \textbf{bold}.}\label{tab-aes} \centering \begin{tabular}{l|c|c} \hline Methods & Local ($\uparrow$) & External ($\uparrow$) \\ \hline FedAvg\cite{fedavg} & 24.50$\pm$2.01 & 20.08$\pm$1.34 \\ LG-Fed\cite{lgfed} & 25.78$\pm$2.40 & 20.98$\pm$1.34 \\ FedPer\cite{fedper} & 43.26$\pm$3.23 & 40.55$\pm$1.78 \\ Ours & \textbf{47.89$\pm$2.03} & \textbf{45.67$\pm$1.67} \\ \hline \end{tabular} \end{table} \para{Results on HISTO-FED.} The test results of CD\textsuperscript{2}-pFed~on all three clients and the external test set consistently outperform the baselines, as shown in \Cref{tab-med}. We achieve a higher improvement on the internal validation than on the external one, which suggests that our model can personalize the global model well. These empirical results show the robustness and success of the federated personalization of CD\textsuperscript{2}-pFed~on medical images, in addition to natural image classification. \begin{table}[h!] \caption{Results on real-world medical images, \ie, stain-variant histology slides, on each client. Performance is evaluated by client-side test accuracy.
The best results are marked in \textbf{bold}.}\label{tab-med} \centering \resizebox{1\linewidth}{!}{ \begin{tabular}{l|c|c|c|c} \hline Methods & client \#1 & client \#2 & client \#3 & External ($\uparrow$)\\ \hline FedAvg\cite{fedavg} & 65.23 & 65.31 & 65.45 & 60.03\\ Local Train& 75.53 & 74.87 & 74.21 & 34.31 \\ LG-Fed\cite{lgfed} & 76.32 & 76.90 & 77.01 & 63.22\\ FedPer\cite{fedper} & 75.43 & 75.21 & 75.56 & 57.89\\ Ours & \textbf{77.39} & \textbf{77.45} & \textbf{77.38} & \textbf{65.66}\\ \hline \end{tabular} } \end{table} \para{Discussion.} Comprehensive experiments on four datasets, characterized by different non-IID settings, confirm that our CD\textsuperscript{2}-pFed~is the only method to consistently achieve state-of-the-art results. Although LG-Fed performs better than FedAvg in the presence of feature skew, and FedPer performs better with label distribution skew, their performance declines sharply when the non-IID settings are interchanged. It is assumed that LG-Fed personalizes the bottom layers to better learn from highly heterogeneous images, while FedPer personalizes the top layers to distinguish unbalanced samples. Our CD\textsuperscript{2}-pFed, containing both low-level and high-level personalization, can reduce the reliance on prior knowledge for personalization decisions. \subsection{Ablation Analysis} The proposed CD\textsuperscript{2}-pFed~is composed of three functional components to assist the channel decoupling, namely the progressive personalization ratio increment scheme (LI), temporal average moving for the personalized weights (TA), and cyclic distillation (CD). To test the effectiveness of each scheme, we perform ablation studies on CIFAR-10 with an $s=2$ split.
As shown in \Cref{tab:abl}, we observe that 1) with all components, CD\textsuperscript{2}-pFed~achieves the best performance, demonstrating the effectiveness of integrating the three schemes, \ie, LI + TA + CD, with a 1.51\% local test accuracy improvement over the plain channel decoupling; 2) CD brings the highest improvement, TA the second, and LI the least; and 3) LI and TA stabilize the training, resulting in a smaller standard deviation. We also visualize the training performance on non-IID CIFAR-10 in \figref{fig:res_cifar}, where our CD\textsuperscript{2}-pFed~achieves faster convergence compared with existing layer-wise personalization methods, requiring fewer communication rounds during the federation. \begin{figure}[htbp] \centering \includegraphics[width=0.63\linewidth]{fig/6.png} \caption{Effect on the performance of LeNet-5 on CIFAR-10, compared with different PFL frameworks \cite{lgfed,fedper}. As illustrated, our CD\textsuperscript{2}-pFed~achieves higher test accuracy, together with significantly faster convergence.} \label{fig:res_cifar} \end{figure} \begin{table}[htbp!]
\caption{Ablation study on the non-IID CIFAR-10 split, with $s=2$, to evaluate the effectiveness of each component.} \label{tab:abl} \centering \begin{tabular}{c|c|c|c|c} \hline LI & TA & CD & Local ($\uparrow$) & New ($\uparrow$) \\ \hline &&& 90.31$\pm$0.67 & 59.12$\pm$0.32 \\ \textcolor{myGreen}{\ding{51}}&&& 90.36$\pm$0.65 & 59.14$\pm$0.30 \\ &\textcolor{myGreen}{\ding{51}}&& 90.45$\pm$0.21 & 59.45$\pm$0.54 \\ &&\textcolor{myGreen}{\ding{51}} & 90.58$\pm$0.44 & 60.57$\pm$0.32 \\ &\textcolor{myGreen}{\ding{51}}&\textcolor{myGreen}{\ding{51}} & 91.67$\pm$0.54 & 61.20$\pm$2.03 \\ \textcolor{myGreen}{\ding{51}}& &\textcolor{myGreen}{\ding{51}} & 91.00$\pm$1.03 & 59.84$\pm$1.53 \\ \textcolor{myGreen}{\ding{51}}&\textcolor{myGreen}{\ding{51}}& & 90.81$\pm$0.38 & 59.34$\pm$0.56 \\ \textcolor{myGreen}{\ding{51}} & \textcolor{myGreen}{\ding{51}} & \textcolor{myGreen}{\ding{51}} & \textbf{91.82$\pm$0.43} & \textbf{61.31$\pm$1.53} \\ \hline \end{tabular} \end{table} \section{Limitations and Conclusions} In this paper, we propose CD\textsuperscript{2}-pFed~to vertically decouple channels in the global model for personalization. Our vertical decoupling method can personalize the local model with guidance towards learning both high- and low-level feature representations. Subsequently, it can handle a variety of data heterogeneity settings, including feature skew, label distribution skew, and concept shift. Empirically, compared with previous layer-wise splits, which only learn one part of them, our framework shows consistent success on four benchmark datasets. We also propose cyclic distillation to impose a consistency regularization and prevent weight divergence during personalization. However, cyclic distillation is currently designed using soft predictions, which restricts it to classification tasks. Extensions to segmentation and detection are left to future work.
To stabilize the training process, we leverage temporal average moving for the personalized weights and a progressive increase scheme for the personalization ratio. Yet, we primarily assign a fixed personalization ratio to all layers, which suggests an interesting future direction of searching for a layer-specific optimal ratio. \paragraph{Acknowledgements.} The work described in this paper is supported by grants from HKU Startup Fund and HKU Seed Fund for Basic Research (Project No. 202009185079).
\section{Introduction} Supernova (SN) rates at high redshift are very important not only because of their ability to allow the direct determination of cosmological parameters such as $H_{o}$, ${\Omega}_{0}$ and ${\Omega}_{\Lambda}$, but also in understanding nucleosynthesis rates, galaxy evolution and star formation. More specifically, the evolution of Type Ia SN rates with redshift can allow one to probe the past history of star formation in the universe, whereas their Type II SN counterparts directly probe the instantaneous star formation because of the short-lived massive stars which explode. Furthermore, observations of high-$z$ Type Ia supernovae would help us to better understand the nature of the supernova progenitors (Ruiz-Lapuente and Canal 1998). On the observational side, there has been a renewed interest in the search of SNe (Cappellaro et al. 1997 hereafter C97). An increasing number of Type Ia SNe, which are the most homogeneous and most luminous SNe, are detected at redshift $z\sim$ 1 (Perlmutter 1997, Tonry et al. 1997) and recently the Supernova Cosmology Project has reported the first measurement of the Type Ia SN rate at $z {\sim}$ 0.4 (Pain et al. 1997; hereafter P97). Thanks to recent deep redshift surveys such as the Canada-France Survey (CFRS) and the Hubble Deep Field (HDF), it has been possible to study galaxy populations as a ``whole'' in order to infer the global history of star formation in the universe. More specifically, Madau et al. (1996; hereafter M96) have investigated the cosmic star and metal formation in the early universe by combining ground-based data with the HDF observations. In this letter we use the chemical and photometric population synthesis code of Sadat and Guiderdoni (1998; hereafter SG98) to make predictions on the cosmic evolution of SN rates with redshift based on the CSFR derived from observations. 
We expect that such studies can provide additional constraints on the CSFR history and metallicity evolution, and will allow one to interpret the SN rate measurements at high redshift which are now becoming available. We hereafter adopt $H_{0}$ = 50 km s$^{-1}$ Mpc$^{-1}$, ${\Omega}_{0}$ = 1 and ${\Omega}_{\Lambda}$ = 0. \section{Type Ia and II Supernovae} The model we use is a spectrophotometric model based on stellar libraries from Kurucz (1992), supplemented by Bessell et al. (1989, 1991) for M giants and Brett (1995) for M dwarfs. The stars are followed along stellar tracks computed by the Geneva team (Charbonnel et al. 1996). The Initial Mass Function (IMF) we choose is a ``standard'' IMF defined by ${\phi}(m) \propto m^{-x}$, with index $x$ = 0.25 for $0.1 < m < 1 M_{\odot}$, $x$ = 1.35 for ${1 < m < 2 M_{\odot}}$ and $x$ = 1.7 for ${2 < m < 120 M_{\odot}}$. A more detailed description of the model is presented in a forthcoming paper (SG98). Note that, as a first step, we here restrict ourselves to models with a single metallicity (Z=Z$_{\odot}$). The chemical evolution will be studied in SG98. The progenitors of Type II/Ib,c SNe are stars with masses $M > 8 M_{\odot}$. Their rates are directly computed from the IMF. The nature of the Type Ia SN progenitors is still a matter of debate. Indeed, it is not yet clear which of the two main competing models -- {\it double degenerate} (DD, Iben \& Tutukov 1984) or {\it single degenerate} (SD, Whelan \& Iben 1973) -- applies to the SNIa precursors. Here we assume the SD model, where Type Ia SNe are produced by C-ignition and total disruption of a cold degenerate WD when the latter exceeds the Chandrasekhar mass after mass transfer in a close binary system. We estimate the rate of Type Ia SNe according to the formalism of Ferrini et al. (1992; see also Greggio \& Renzini 1983).
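Since the Type II/Ib,c rates follow directly from the IMF, the number of $m > 8 M_{\odot}$ progenitors per unit mass of stars formed can be sketched numerically. The snippet below is an illustration only: it assumes the quoted indices follow the $dN/dm \propto m^{-(1+x)}$ convention with continuity at the break masses, which is our assumption rather than a statement of the text.

```python
# Three-segment power-law IMF, dN/dm = A_i * m**-(1 + x_i) (assumed convention),
# with the indices quoted in the text and continuity at the break masses.
SEGMENTS = [(0.1, 1.0, 0.25), (1.0, 2.0, 1.35), (2.0, 120.0, 1.7)]

def segment_amplitudes():
    """Amplitudes A_i enforcing continuity of dN/dm, with A_1 = 1 before normalization."""
    amps = [1.0]
    for (lo_prev, hi_prev, x_prev), (lo, hi, x) in zip(SEGMENTS, SEGMENTS[1:]):
        # Match dN/dm at the break mass hi_prev == lo.
        amps.append(amps[-1] * lo ** (-(1 + x_prev)) / lo ** (-(1 + x)))
    return amps

def powerlaw_integral(a, lo, hi, p):
    """Integral of a * m**p dm from lo to hi (p != -1 for these indices)."""
    return a * (hi ** (p + 1) - lo ** (p + 1)) / (p + 1)

def n_per_msun(m_min=8.0):
    """Number of stars above m_min per solar mass of stars formed."""
    amps = segment_amplitudes()
    total_mass = sum(powerlaw_integral(a, lo, hi, -x)   # m * m**-(1+x) = m**-x
                     for a, (lo, hi, x) in zip(amps, SEGMENTS))
    number = sum(powerlaw_integral(a, max(lo, m_min), hi, -(1 + x))
                 for a, (lo, hi, x) in zip(amps, SEGMENTS) if hi > m_min)
    return number / total_mass

print(f"SNII progenitors per M_sun formed: {n_per_msun():.4f}")
```

Under this convention the yield comes out near $10^{-2}$ SNII per $M_{\odot}$ formed; the exact value depends on the index convention adopted, which the text does not spell out.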
In our SD modelling, the free parameters $m_{Binf}$ and $\alpha_{0}$ are respectively the lower limit of the total mass of the binary systems which can produce a Type Ia SN, and the fraction of the total mass of stars which belong to these systems. They have been adjusted in order to reproduce the main properties of the solar neighbourhood. We find that the pair $[m_{Binf},{\alpha}_{0}]=[3,0.05]$ correctly reproduces the data. In order to assess the uncertainties in the nature of the progenitors, we have also used the empirical parameterization of the Type Ia SN rate derived by Ciotti et al. (1991; hereafter C91), $R_{Ia}\propto \theta_{SN} t^{-s}$, where $\theta_{SN}$ is the normalizing parameter. The $R_{Ia}$ evolution is controlled by the free parameter $s$. In order to account for the Fe content in clusters of galaxies, C91 concluded that $s > 1.4$. Note that our standard modelling roughly corresponds to $s \sim 1.6$. The other free parameter is the rise time, which is fixed to $t_{15,0}$=0.05 in units of 15 Gyr (Renzini et al. 1993). \begin{figure*}[btp] \centerline{\epsfig{file=lum3_obsx,height=3.4in,width=5.3in}} \caption{\small Evolution with redshift of the luminosity density at several rest-frame wavelengths. The data points with error bars are taken from Lilly et al. (1996) ({\it filled squares}) and Connolly et al. (1997) ({\it empty hexagons}). {\it Open triangles} and {\it filled triangles} respectively correspond to 1600 {\AA} HDF measurements uncorrected (Madau 1997) and corrected (Pettini et al. 1997) for extinction. The {\it empty square} is from Treyer et al. (1997). The two curves correspond to the two different parameterizations M1 ({\it solid line}) and M2 ({\it dashed line}) of the CSFR.} \end{figure*} \section{The Cosmic Star Formation Rate} By combining different photometric surveys such as the CFRS, up to $z$ = 1 (Lilly et al.
1995) and the HDF up to $z$ = 4 (M96), it has been possible to derive a picture of the star formation rate history of the whole universe (M96, Madau et al. 1997; hereafter M97). To compute the evolution of the cosmic supernova rates per comoving volume (CSNR), we make use of the CSFR as derived in M97 (model M1). We also introduce another CSFR law to account for a possible dust extinction correction (model M2). M1 and M2 have the same shape at low $z$, in agreement with the CSFR derived from observations, but at higher $z$ we have allowed the CSFR to be higher in model M2 than in M1, in agreement with Pettini et al. (1997). These time-dependent CSFRs are then introduced as an input in our code. The amplitude is normalized in order to match the luminosity density evolution in the UV continuum, with the following constants of proportionality between the observed L$_{UV}$ and our derived SFR: ($7.5\times 10^{19}, 6.5\times 10^{19}$) in units of W Hz$^{-1}$ (M$_{\odot}$ yr$^{-1}$)$^{-1}$ at (1500 \AA, 2800 \AA). Before computing the SN rates, we have first checked whether our simple model, with a ``standard'' IMF and the parameterization of the CSFR, is able to reproduce the observed volume-averaged luminosity densities $\rho_{\nu}$ at different ${\lambda}$. Figure 1 shows that the agreement at longer ${\lambda}$ is satisfying. The 4400 and 10000 {\AA} luminosity densities seem to hint at model M2, in agreement with the IR/submm background which also suggests the presence of extinction at high $z$ (Guiderdoni et al. 1997). \section{Cosmic Evolution of SN Rates} \begin{figure}[btp] \centerline{\epsfig{file=sfr_snratexb.ps,height=3.0in,width=3.5in}} \caption{\small Evolution of SN rest-frame rates per comoving volume for model M1 ({\it lower curves}) and M2 ({\it upper curves}): $R_{II/Ib,c}$ ({\it solid lines}); $R_{Ia}$ ({\it dashed lines}). CSFRs are shown as {\it dotted lines}.
Data are derived from the UV fluxes with the constants of proportionality mentioned in Section 3.} \end{figure} \noindent In this section, we use the CSFR to derive the CSNR for both Type II/Ib,c and Type Ia SNe. To our knowledge, this is the first time that the evolution of SN rates with redshift is predicted from a self-consistent spectro-photometric modelling of galaxy evolution at high $z$, independently of the details of individual galaxy evolution. As already mentioned, the direct measurement of SNe can be used as an independent test for the cosmic star and metal formation in the universe. In figure 2, we have plotted the predicted evolution of the CSNR per unit of comoving volume with redshift. The Type II/Ib,c rate shows the same shape as the instantaneous CSFR, that is, the rise, peak and drop from high redshift to the present time. This means that the SNII/Ib,c rate can be used as an independent tracer of the star formation rate. The Type Ia rate $R_{Ia}$ has a different shape from the CSFR, while coincidentally it has nearly the same behaviour as the B-band luminosity. The most important point is the time delay between the stellar (binary in this case) birth and the explosion time: the occurrence of the SNIa rate peak is shifted by a few Gyr relative to the CSFR peak. However, at $z \ge$ 0.9, Type Ia SNe can be used as a probe of the past history of the CSFR. Furthermore, as can be seen from figure 3, $R_{Ia}$ has a very distinct shape (different normalization and different time of peak occurrence) depending on the adopted CSFR shape, and this difference is larger at higher $z$. Therefore, measurements of SNIa rates at $z {\sim}$ 1 would be able to discriminate between the models. \section{Comparison with observations} \begin{figure}[btp] \centerline{\epsfig{file=sn1rate_SD4.ps,height=3.0in,width=3.5in}} \caption{\small Predicted evolution of the rest-frame Type Ia SN rate per comoving volume.
The curves labelled with ${s}$ correspond to $R_{Ia}$ as derived assuming the C91 parameterization, normalized in order to reproduce the present-day value for the same CSFR. The horizontal bar is the observational redshift range.} \end{figure} \begin{figure}[btp] \centerline{\epsfig{file=snratetolx.ps,height=3.0in,width=3.5in}} \caption{\small Evolution of SNII ({\it upper curves}) and SNIa ({\it lower curves}) rates for models M1 ({\it solid lines}) and M2 ({\it dotted lines}) compared to the observations. The {\it filled circle} and {\it open circle} respectively correspond to SNII/Ib,c and SNIa rates from C97, and the {\it filled triangle} is from P97.} \end{figure} \noindent Supernova rates have been measured in nearby galaxies by several authors (Cappellaro \& Turatto 1988, Evans et al. 1989, C97). At higher redshift, Pain and his collaborators have recently reported the first high-$z$ ($z \sim$ 0.4) rest-frame Type Ia SN rate, with a value of ${\sim}$ 0.82 h$^{2}$ SNu. In order to compare the model to the observations, we have converted the observed rate in SNu (SNe/100 yr/10$^{10}$ L$_{B\odot}$) into a rate in yr$^{-1}$ using blue luminosities as computed by our code. From figure 3, we can see that the available measurement of the SNIa rate does not allow one to discriminate between models M1 and M2: the redshift is still low and the statistics are poor. A measurement at $z \sim$ 1 with the same statistics would begin to discriminate between the models, independently of the adopted model for Type Ia progenitors. By investigating different models for SNIa rates using the C91 empirical parameterization (see figure 3 caption), we have found that a comparison between these model predictions and observations at low and intermediate $z$ shows that the best match is obtained for models with a rapidly declining Type Ia rate ($s > 1.4$), consistent with the constraints on $s$ from chemical evolution. In figure 4 we have plotted the predicted SNIa/II rates in SNu.
As can be seen, the SNIa rate is nearly constant. This is expected, since both the $R_{SNIa}$ per unit of comoving volume and the B-band luminosity have the same shape. Therefore the peak observed in $R_{SNIa}$ has been erased, leading to a constant ratio. However, this is not true for the Type II SN rate, which turns out to be a monotonically increasing function of redshift. This is a consequence of the fact that the stellar population which emits the bulk of the B-band luminosity is older than the SNII progenitors. Note that the SN rates in SNu do not change drastically with the adopted CSFR prescription. This means that SN rates in SNu lose a large part of the information on the absolute value of the CSFR. Our model reproduces fairly well the observed local Type Ia rate, and our prediction for high-$z$ rates is in good agreement (within the error bars) with the observed Type Ia rate of P97. At $z \sim$ 1 we expect that the SNIa rate in SNu does not change from its local value, while the SNII rate is expected to be 3 times the local value. This leads to a decrease of the Type Ia/II SN ratio from $z$=0 to $z$=1 (see also Yungelson \& Livio 1997). However, the Type II SN rate is not observed yet, and this prediction could be confirmed with future instruments such as the NGST. \section{Conclusion} We have computed the CSNR from the CSFR using a self-consistent modelling. We found that: \begin{itemize} \item{ The adopted standard IMF allows one to fit the observed local and high-$z$ colours of the universe.
} \item{Type II SN rates are very good tracers of instantaneous star formation and could be used to set constraints on the CSFR at $2 < z < 4$ with the new generation of telescopes such as the NGST.} \item{Type II SN rates $R_{SNII}$ (expressed in SNu) are an increasing function of redshift, while the $R_{SNIa}$/$R_{SNII}$ ratio is decreasing.} \item{Our model predictions for SN rates are in good agreement with observations locally as well as at higher $z$, and consistent with the current limits on the CSFR.} \item{At redshift $z \sim 1$, a comparison between the predicted and the observed Type Ia rates would allow us to put strong constraints on models for SN Ia progenitors. In the frame of the C91 models, we found that the best match is obtained for models with a steep evolution of the SNIa rate (i.e. $s > 1.4$), consistent with the chemical evolution of Fe in clusters of galaxies.} \item{Intermediate- and high-$z$ Type Ia SN rates can be used as an independent test to constrain the CSFR. The current data at $z \sim 0.4$ are not sufficient to discriminate between the models, but observations of Type Ia supernovae at $z\sim 1$ would be critical for understanding the CSFR and would allow one to assess whether the $z > 1$ CSFR is higher than directly observed from the UV.} \end{itemize}
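As an aside, the SNu-to-absolute-rate conversion used in Section 5 can be sketched as follows; the function name and the example luminosity are our own illustrative choices, not values computed by the code of SG98.

```python
# Convert a supernova rate in SNu (SNe per century per 1e10 L_B_sun)
# into an absolute rate in SNe/yr, given the blue luminosity of the
# stellar population considered.

def snu_to_rate(rate_snu, l_blue_lsun):
    """Absolute rate in SNe/yr for a population of blue luminosity l_blue_lsun."""
    return rate_snu * (l_blue_lsun / 1e10) / 100.0

# e.g. the z ~ 0.4 measurement of Pain et al., ~0.82 h^2 SNu (taking h = 1),
# applied to a hypothetical galaxy of 2e10 L_B_sun:
print(snu_to_rate(0.82, 2e10), "SNe/yr")
```

Going the other way (dividing a volumetric rate by the blue luminosity density) recovers a rate in SNu, which is why the SNu rates in figure 4 are insensitive to the absolute CSFR normalization.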
\section{Introduction} The genesis of this work is our interest in properties of classes of varieties described by equations. Starting from Mal'cev's description of congruence permutability in \cite{Mal.OTGT}, the problem of characterizing properties of classes of varieties as Mal'cev conditions has led to several results. In \cite{Pix.DAPO} A. Pixley found a strong Mal'cev condition defining the class of varieties with distributive and permuting congruences. In \cite{Jon.AWCL} B. J\'{o}nsson shows a Mal'cev condition characterizing congruence distributivity, and in \cite{Day.ACOM} A. Day shows a Mal'cev condition characterizing the class of varieties with modular congruence lattices. These results are examples of a more general theorem, obtained independently by Pixley \cite{Pix.LMC} and R. Wille \cite{Wil.K}, that can be considered a foundational result in the field. They proved that if $p \leq q$ is a lattice identity, then the class of varieties whose congruence lattices satisfy $p \leq q$ is the intersection of countably many Mal'cev classes. \cite{Pix.LMC} and \cite{Wil.K} include an algorithm to generate Mal'cev conditions associated with congruence identities. These results have also led to the problem of studying equations where the variables run not only over the congruence lattices but over possibly strictly larger sets, such as the lattices of all tolerances or of all compatible reflexive relations. Results about this problem can be found in \cite{CHL.OMCF, Fio.MCCT, GM.QLOV, KK.TSOC, Lip.UOAR, Wer.AMCF}. Furthermore, the study of Mal'cev conditions, and more generally of properties of closed sets of operations, has become more and more popular in recent years because of its deep connection with CSP and PCSP problems \cite{BG.PCSS, Bul.ADTF, BKO.AATP, Fio.CSOF, Fio.CSOF2, Fio.EOAS, BJK.TCOC, Leh.CCOF, Ros.MCOA, Zhuk.APOT}. Following this branch of research, our aim is to introduce a new type of equations, which we will call \emph{commutator equations}.
Inspired by the definition and characterization of varieties with a \emph{weak difference term} (see \cite[Chapter $6.1$]{KK.TSOC}), we model the new concept of equation as a generalization of this definition. A careful reader can easily see that the weak difference term is a weakening of a Mal'cev term, obtained by relaxing the characterizing equations to what we call commutator equations. This produces a property defining a Mal'cev condition strictly weaker than the one characterizing the class of varieties with a Mal'cev term. In Sections \ref{Notations} and \ref{LabelledG} we introduce some of the basic concepts for our investigation. In Section \ref{Taylor} we prove an analogue of Taylor's result \cite{Tay.VOHL} for commutator equations. Namely, Theorem \ref{TaylorThm} shows that a variety $\vv{V}$ is Taylor if and only if the variety $\vv{V}'$ generated by the abelian algebras of the idempotent reduct of $\vv{V}$ is Taylor. Furthermore, we provide a new characterization of this property in terms of congruence equations. This result establishes a connection between a variety $\vv V$ and the variety $\vv{V}'$ generated by the abelian algebras of the idempotent reduct of $\vv{V}$ that we develop further in Section \ref{Alg}. In this last section our aim is to provide a Pixley-Wille type of algorithm for commutator equations. In the main result of the section, Theorem \ref{ThmweakAlg}, we prove that, under mild assumptions on the lattice terms involved, weakening the standard term equations produced from a congruence equation via the Pixley-Wille algorithm to commutator equations gives a property describing a Mal'cev class. \section{Preliminaries and Notations} \label{Notations} In this section we recall some of the basic definitions of universal algebra and we introduce the few new definitions we need. For other elementary concepts in general algebra (such as lattices, algebras, varieties, etc.)
our textbook reference is \cite{BS.ACIU}; for more advanced topics (such as abelian congruences and the commutator of congruences) we refer the reader to \cite{KK.TSOC}. For the general theory of Mal'cev conditions and Mal'cev classes we refer to the classical treatment in \cite{Taylor1973} or the more modern approach in \cite{KK.TSOC}. We recall the definition of the \emph{centralizer}. Let $\mathbf{A}$ be an algebra. For $\alpha, \beta \in \operatorname{Con}(\mathbf{A})$ we define $M(\alpha,\beta)$ as the set of matrices of the form \begin{center} \begin{equation*} \begin{pmatrix} f(\mathbf{a},\mathbf{u}) & f(\mathbf{a},\mathbf{v}) \\ f(\mathbf{b},\mathbf{u}) & f(\mathbf{b},\mathbf{v}) \end{pmatrix} \end{equation*} \end{center} where $f \in \operatorname{Pol}_{n+m}(\mathbf{A})$ and $\mathbf{a}, \mathbf{b} \in A^n$, $\mathbf{u}, \mathbf{v} \in A^m$ with $\mathbf{a} \equiv_{\alpha} \mathbf{b}$ and $\mathbf{u} \equiv_{\beta} \mathbf{v}$. For $\alpha, \beta, \delta \in \operatorname{Con}(\mathbf{A})$, we say that $\alpha$ \emph{centralizes} $\beta$ \emph{modulo} $\delta$ if for all possible matrices $M \in M(\alpha,\beta)$ with $f \in \operatorname{Pol}(\mathbf{A})$ we have that: \begin{center} if $f(\mathbf{a},\mathbf{u}) \equiv_{\delta} f(\mathbf{a},\mathbf{v})$, then $f(\mathbf{b},\mathbf{u}) \equiv_{\delta} f(\mathbf{b},\mathbf{v})$. \end{center} In this case we write \index{$C(\alpha,\beta;\delta)$}$C(\alpha,\beta;\delta)$ and we call the ternary relation $C$ the \emph{centralizer}. We refer to \cite[Section $2.5$]{KK.TSOC} for the various properties of the centralizer relation. We also recall the definition of the \emph{two-term centralizer}. Let $\mathbf{A}$ be an algebra.
For $\alpha, \beta, \delta \in \operatorname{Con}(\mathbf{A})$ we say that $\alpha$ \emph{two-term centralizes} $\beta$ \emph{modulo} $\delta$ if for all possible matrices \begin{center} \begin{equation*} \begin{pmatrix} f(\mathbf{a},\mathbf{u}) & f(\mathbf{a},\mathbf{v}) \\ f(\mathbf{b},\mathbf{u}) & f(\mathbf{b},\mathbf{v}) \end{pmatrix} \text{ and } \begin{pmatrix} t(\mathbf{a},\mathbf{u}) & t(\mathbf{a},\mathbf{v}) \\ t(\mathbf{b},\mathbf{u}) & t(\mathbf{b},\mathbf{v}) \end{pmatrix} \in M(\alpha,\beta) \end{equation*} \end{center} where $f, t \in \operatorname{Pol}_{n+m}(\mathbf{A})$ and $\mathbf{a}, \mathbf{b} \in A^n$, $\mathbf{u}, \mathbf{v} \in A^m$ with $\mathbf{a} \equiv_{\alpha} \mathbf{b}$ and $\mathbf{u} \equiv_{\beta} \mathbf{v}$, we have that: \begin{center} if $f(\mathbf{a},\mathbf{u}) \equiv_{\delta} t(\mathbf{a},\mathbf{u})$, $f(\mathbf{a},\mathbf{v}) \equiv_{\delta} t(\mathbf{a},\mathbf{v})$, and $f(\mathbf{b},\mathbf{u}) \equiv_{\delta} t(\mathbf{b},\mathbf{u})$, then $f(\mathbf{b},\mathbf{v}) \equiv_{\delta} t(\mathbf{b},\mathbf{v})$. \end{center} We write \index{$C_{2T}(\alpha,\beta;\delta)$}$C_{2T}(\alpha,\beta;\delta)$ and we call the ternary relation $C_{2T}$ the \emph{two-term centralizer}. We will use the TC-commutator as defined in \cite{FM.CTFC} by Freese and McKenzie, and we will refer to \cite{KK.TSOC} for properties and basic results about the commutator. Moreover, in Theorem \ref{ThmweakAlg} we use the concepts of the \emph{linear commutator} and the \emph{two-term commutator}, in symbols respectively $[\a,\b]_L$ and $[\a,\b]_{2T}$, as defined for example in \cite{KK.TBTC} and \cite{Lip.ACOV}.
Furthermore, a \emph{weak difference term} for a variety $\vv V$ is a ternary term $d(x, y, z)$ such that for all $\alg A \in \vv V$, $a, b \in A$, and $\theta \in \operatorname{Con}(\mathbf{A})$ with $(a,b) \in \theta$, we have: \begin{equation*} d(b, b, a) \mathrel{[\theta, \theta]} a \mathrel{[\theta, \theta]} d(a, b, b); \end{equation*} see \cite[Chapter $6$]{KK.TSOC} for a deep treatment of the notion. We can observe that a weak difference term is Mal'cev on any block of an abelian congruence; thus, in particular, any abelian algebra in a variety which has a weak difference term is affine, see \cite{KK.TSOC}. As a generalization of this definition we introduce the notion of \emph{commutator equation}. Let $p$ and $q$ be terms for a language $\vv L$. Then we denote by $p \approx q$ an equation involving the two terms. Throughout the paper we will refer to this standard type of equation as a \emph{term equation}. As a relaxation of this concept we introduce the definition of commutator equation. Let $\alg A$ be an algebra and let $p$ and $q$ be $n$-ary terms of $\alg A$. We say that $\alg A$ \emph{satisfies} the commutator equation $p \approx_{C} q$, in symbols $A \models p \approx_{C} q$, if for all $\theta \in \operatorname{Con}(\alg A)$ and for all $a_1, \dots, a_n \in A$ in the same $\theta$-class, we have: \begin{equation*} p(a_1, \dots, a_n) \mathrel{[\theta, \theta]} q(a_1, \dots, a_n). \end{equation*} Clearly this concept of equation is weaker than the standard term equation: if an algebra satisfies the term equation $p \approx q$, then it also satisfies the commutator equation $p \approx_{C} q$. We recall the definition of a \emph{Taylor term}, introduced in \cite{Tay.VOHL}, which will be used throughout the paper.
An $n$-ary \emph{Taylor term} is a term satisfying the following term equations: \begin{align*} f(x_{11},\dots,x_{1n}) &\approx f(y_{11},\dots,y_{1n}) \\ &\;\;\vdots \\ f(x_{n1},\dots,x_{nn}) &\approx f(y_{n1},\dots,y_{nn}) \end{align*} with $x_{ij}, y_{ij} \in \{x,y\}$ and $x_{ii}\not=y_{ii}$. Varieties with such a term are exactly those satisfying a non-trivial idempotent Mal'cev condition \cite{Tay.VOHL}, and they are called \emph{Taylor varieties}. Before moving forward, we recall the definition of \emph{Herringbone terms}, which can also be found in \cite[Section $8.2$]{KK.TSOC} or in \cite{Lip.ACOV}. These terms are of interest since they are deeply connected with well-behaved commutators in varieties, as can be seen in \cite{KK.TBTC} and \cite{Lip.ACOV}. \begin{Def}\label{DefHerring} We call \emph{Herringbone terms} the two families of lattice terms $\{y^i\}_{i \in {\mathbb{N}}}$ and $\{z^i\}_{i \in {\mathbb{N}}}$ in the variables $\{x,y,z\}$ defined by: \begin{align*} y^{0}(x,y,z) &= y; \qquad z^{0}(x,y,z) = z; \\ y^{n+1}(x,y,z) &= y \vee (x \wedge z^{n}(x,y,z)); \\ z^{n+1}(x,y,z) &= z \vee (x \wedge y^{n}(x,y,z)). \end{align*} \end{Def} We use superscripts instead of the standard subscripts since the latter conflict with the usual subscript notation for a sequence of variables. Note that these lattice terms form two possibly infinite ascending chains. Furthermore, let $p$ be a term for the language $\{\wedge,\vee,\circ\}$ and let $k \in {\mathbb{N}}$. We denote by $p^{(k)}$ the $\{\wedge,\circ\}$-term obtained from $p$ by substituting every occurrence of $\vee$ with the $k$-fold relational product $\circ^{(k)}$. \section{Labelled graphs and regular terms} \label{LabelledG} In order to show the main results of Section \ref{Alg} we recall the definition of the \emph{labelled graph associated with a term}, as defined in \cite{Cze.AMTC}, \cite{CD.HSWW}, \cite{KK.TSOC}. We first give the definition of a labelled graph. \begin{Def} Let $S$ be a set of labels.
Then a \emph{labelled graph} is a directed graph $(V,E)$ with a labelling function $l: E \rightarrow S$. \end{Def} Following \cite{KK.TSOC}, let $p$ be a $\{\wedge,\circ\}$-term in the variables $\{x_1, \dots, x_t\}$. We introduce a construction producing a finite sequence of labelled graphs $\{\mathbf{G}_i(p)\}_{i \in I}$. The sequence starts with the labelled graph $\mathbf{G}_1(p) = (\{y_1,y_2\}, \{(y_1,y_2)\})$ with $l((y_1,y_2)) = p$, i.e.\ a single edge $(y_1,y_2)$ labelled with $p$ connecting the two vertices $y_1$ and $y_2$. For $s \geq 1$, from the labelled graph $\mathbf{G}_s(p)$ we continue by selecting a label $w \not\in \{x_1, \dots, x_t\}$ of an edge $(y_i, y_j)$ of $\mathbf{G}_s(p)$. Then we have two cases. \begin{itemize} \item If $w = u \wedge v$ for some $\{\wedge,\circ\}$-terms $u$ and $v$, then $\mathbf{G}_{s+1}(p)$ is obtained from $\mathbf{G}_s(p)$ by replacing the edge $(y_i,y_j)$ labelled $w$ with two parallel edges connecting the same vertices $(y_i,y_j)$, labelled $u$ and $v$ respectively; \item if $w = u \circ v$ for some $\{\wedge,\circ\}$-terms $u$ and $v$, then $\mathbf{G}_{s+1}(p)$ is obtained from $\mathbf{G}_s(p)$ by introducing a new vertex $y_k$ and replacing the edge $(y_i,y_j)$ labelled $w$ with two edges $(y_i,y_k)$ and $(y_k,y_j)$, labelled $u$ and $v$ respectively, connecting the same vertices in series through $y_k$. \end{itemize} The construction ends when, for some $n \in {\mathbb{N}}$, none of the above steps can be performed on the labelled graph $\mathbf{G}_{n}(p)$; thus $l(e) \in \{x_1, \dots, x_t\}$ for every edge $e$ of $\mathbf{G}_{n}(p)$. We note that the choice of $w$ can determine different sequences from a term $p$, but the last graph of the sequence is always the same up to reordering of the vertices. We call the last graph of this sequence the \emph{labelled graph associated with} $p$, denoted by $\mathbf{G}(p)$.
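The stepwise construction of $\mathbf{G}(p)$ is easily mechanized. The following sketch (an illustration with our own term encoding, not taken from the cited sources) builds $\mathbf{G}(p)$ by structural recursion, which yields the same final graph as the replacement procedure above:

```python
import itertools

def graph_of_term(term):
    """Return (num_vertices, edges) of G(p); edges are triples (i, j, label).

    A term is either a variable name (str) or a pair ('meet'|'circ', left, right).
    Vertices 0 and 1 play the roles of y_1 and y_2.
    """
    fresh = itertools.count(2)          # next unused vertex name
    edges = []

    def expand(u, v, t):
        if isinstance(t, str):          # a variable: keep the labelled edge
            edges.append((u, v, t))
        elif t[0] == 'meet':            # u ∧ v: two parallel edges
            expand(u, v, t[1])
            expand(u, v, t[2])
        elif t[0] == 'circ':            # u ∘ v: two edges in series via a new vertex
            w = next(fresh)
            expand(u, w, t[1])
            expand(w, v, t[2])
        else:
            raise ValueError(f"unknown connective: {t[0]!r}")

    expand(0, 1, term)
    return next(fresh), edges

# Example: p = x ∧ (y ∘ z)
n, es = graph_of_term(('meet', 'x', ('circ', 'y', 'z')))
print(n, es)  # 3 vertices; edges (0, 1, 'x'), (0, 2, 'y'), (2, 1, 'z')
```

The recursion mirrors the two replacement rules: a meet keeps the endpoints and duplicates the edge, a relational product introduces an intermediate vertex, so termination and uniqueness of the final graph are immediate.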
The main reason to introduce $\mathbf{G}(p)$ is stated in \cite[Proposition 3.1]{CD.HSWW} and in Claim $4.8$ of \cite{KK.TSOC}. The latter can be generalized to tolerances as in \cite[Proposition $2.1$]{Lip.FCIT} and also to relations in general, see \cite[Proposition $3.3$]{Fio.MCCT}. We include the most general version of this technical result in order to use it in Section \ref{Alg}. \begin{Prop}[Proposition $3.3$ of \cite{Fio.MCCT}] \label{PropKK} Let $\mathbf{A}$ be an algebra, let $R_i \subseteq A \times A$, for $1 \leq i \leq n$, and let $p$ be a $\{\circ ,\wedge\}$-term. Then: \begin{enumerate} \item[(1)] Let $Y \rightarrow A$: $y_s \mapsto a_s$ be an assignment such that for all edges $(y_i, y_j)$ of $\mathbf{G}(p)$ with label $x_k$, we have $(a_i,a_j) \in R_k$. Then $(a_1, a_2) \in p(R_1,\dots,R_n)$. \item[(2)] Conversely, given any $(a_1, a_2) \in p(R_1,\dots,R_n)$, there is an assignment $Y \rightarrow A$: $y_s \mapsto a_s$ extending $y_1 \mapsto a_1$, $y_2 \mapsto a_2$ such that $(a_i,a_j) \in R_k$ whenever $(y_i, y_j)$ is an edge of $\mathbf{G}(p)$ labelled with $x_k$, where $(y_1,y_2)$ is the only edge of the graph $\mathbf{G}_1(p)$. \end{enumerate} \end{Prop} \section{Taylor varieties and abelian algebras} \label{Taylor} In this section our aim is to show a property describing Taylor varieties that is logically weaker than the standard one presented in \cite{Tay.VOHL}. Taylor varieties are of interest for several reasons. Surprisingly, the class of Taylor varieties forms a strong Mal'cev class \cite{Ols.TWNI}, and this class of varieties has a deep connection with the Feder-Vardi conjecture, which was proven to be true in \cite{Bul.ADTF} and \cite{Zhuk.APOT} independently. Furthermore, modern results concerning the relationship between the commutator and Taylor varieties can be found in \cite{Ker.RMDO}. Thus for many reasons Taylor varieties have always been a central theme of research in universal algebra.
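The Taylor identities recalled in Section \ref{Notations} can be verified by brute force on small operations. The following sketch (our own illustration, not part of the formal development) checks one specific choice of Taylor rows: row $i$ has $y$ in position $i$ and $x$ elsewhere on the left, and all $x$'s on the right, so $x_{ii}\not=y_{ii}$ holds in every row. The Boolean majority operation satisfies this system, while a projection does not.

```python
from itertools import product

def maj(a, b, c):
    """Boolean majority operation."""
    return (a & b) | (a & c) | (b & c)

def satisfies_taylor_system(f, n):
    """Check, for every assignment of x, y in {0, 1} and every i, that
    f(x,...,y,...,x) == f(x,...,x) with y placed in slot i.  This is one
    specific system of Taylor identities (x_ii != y_ii in every row)."""
    for i in range(n):
        for x, y in product((0, 1), repeat=2):
            lhs_args = [y if j == i else x for j in range(n)]
            if f(*lhs_args) != f(*([x] * n)):
                return False
    return True

print(satisfies_taylor_system(maj, 3))                 # majority is a Taylor term
print(satisfies_taylor_system(lambda a, b, c: a, 3))   # a projection fails row 1
```

Of course, an operation may fail this particular system and still be Taylor via a different choice of rows; the sketch only illustrates the shape of the condition.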
Before going deeply into the theory of commutator equations we make an easy observation. \begin{Rem}\label{idempRem} Let $\vv V$ be a variety satisfying the commutator equation $x \approx_{C} p(x,\dots, x)$. Then $p$ is idempotent. \end{Rem} The remark follows from the fact that the commutator equation has to hold also for the $0$ congruence. In order to prove the main result of the section we present a lemma that appears in \cite{Lip.ACOV} in a slightly different version without proof. From now on, when not specified otherwise, $\beta^n = y^n(\alpha, \beta, \gamma)$ and $\gamma^n = z^n(\alpha, \beta, \gamma)$. \begin{Lem}\label{herringLem} Let $\alg{A}$ be an algebra and $\alpha, \beta, \gamma \in \operatorname{Con}(\alg A)$. Let $\delta = \bigcup_{n \in \mathbb{N}}(\alpha \wedge \beta^{n})$. Then, \begin{equation*} [\alpha \wedge (\beta \vee \gamma), \alpha \wedge (\beta \vee \gamma)] \leq \delta. \end{equation*} \end{Lem} \begin{proof} We observe that Definition \ref{DefHerring} yields that the chains of congruences $\{\alpha \wedge \beta^{n}\}_{n \in \mathbb{N}}$ and $\{\alpha \wedge \gamma^{n}\}_{n \in \mathbb{N}}$ are cofinal, since: \begin{equation*} \alpha \wedge \beta^{n+1} \geq \alpha \wedge \gamma^{n} \geq \alpha \wedge \beta^{n-1}. \end{equation*} Hence $$\delta = \bigcup_{n \in \mathbb{N}}(\alpha \wedge \beta^{n})=\bigcup_{n \in \mathbb{N}}(\alpha \wedge \gamma^{n}).$$ This identity and the fact that $\wedge$ distributes over directed unions imply: \begin{equation}\label{cofinaleq} \alpha \wedge (\beta \vee (\alpha \wedge \delta))= \alpha \wedge (\beta \vee (\alpha \wedge \bigcup_{n \in \mathbb{N}}(\alpha \wedge \gamma^{n}))) = \bigcup_{n \in \mathbb{N}}(\alpha \wedge \beta^{n}) = \delta. \end{equation} Clearly a similar equality holds exchanging $\beta$ and $\gamma$. From these two identities and the properties of the centralizer relation, see \cite[Theorem $2.19(8)$]{KK.TSOC}, we obtain that $C(\beta,\alpha;\delta)$ and $C(\gamma,\alpha;\delta)$.
The semidistributivity of the centralizer in the first component (\cite[Theorem $2.19(5)$]{KK.TSOC}) yields $C(\beta \vee \gamma,\alpha;\delta)$. From the definition of the commutator we obtain $[\beta \vee \gamma,\alpha] \leq \delta$, and the claim follows from the monotonicity of the commutator. \end{proof} Lemma \ref{herringLem}, in conjunction with the next theorem, will be a key element in proving the main result of the section. The following theorem connects Ol\v{s}\'{a}k's result \cite{Ols.TWNI} to the congruence equation characterizing Taylor varieties in \cite[Lemma $4.6$]{KK.TBTC}, obtaining a simplified version of that equation. Indeed, the congruence equation of \cite{KK.TBTC} is produced from an arbitrary $n$-ary Taylor term, and thus with a $6$-ary Ol\v{s}\'{a}k term it can be refined to a more readable version involving a fixed number of congruences. \begin{Thm}\label{TaylorThmorig} Let $\vv{V}$ be a variety. Then the following are equivalent: \begin{enumerate} \item[(1)] $\vv{V}$ is Taylor; \item[(2)] $\vv{V}$ satisfies the term equations: \begin{equation}\label{Olsak eq} t(x,y,y,y,x,x) \approx t(y,x,y,x,y,x) \approx t(y,y,x,x,x,y); \end{equation} for some idempotent term $t$.
\item[(3)] $\vv{V}$ satisfies the following congruence equation in the variables $\{\alpha_1, \dots, \alpha_6, \beta_1, \dots, \beta_6\}$: \begin{equation}\label{Tayeq} \bigwedge^{6}_{i = 1}(\a_{i} \circ \b_{i}) \leq (\bigvee^{6}_{i=1}\a_{i} \wedge \bigwedge^{2}_{i=1} (\gamma \vee \theta_{i})) \vee (\bigvee^{6}_{i=1}\b_{i} \wedge \bigwedge^{2}_{i=1} (\gamma \vee \theta_{i})); \end{equation} where $\gamma = \bigwedge^{6}_{i = 1} (\a_{i} \vee \beta_i)$ and \begin{equation*} \theta_{i}= (\bigvee_{j \in L_{i}} \a_{j} \vee \bigvee_{j \in L'_{i}} \b_{j}) \wedge (\bigvee_{j \in R_{i}} \a_{j} \vee \bigvee_{j \in R'_{i}} \b_{j}) \end{equation*} with \begin{align*} &L_1 = L_2 =\{1,5,6\} &L'_{1} = L'_2 = \{ 2,3,4 \} \\&R_{1} = \{ 2,4,6\} &R'_{1} = \{1,3,5 \} \\&R_{2} = \{ 3,4,5\} &R'_{2} = \{1,2,6 \} \end{align*} \end{enumerate} \end{Thm} \begin{proof} The equivalence of $(1)$ and $(2)$ follows from Ol\v{s}\'{a}k's result \cite{Ols.TWNI}. $(2) \Rightarrow (3)$ follows from \cite[Lemma $4.6$]{KK.TBTC}, adjusting the proof for the $6$-ary Taylor term in $(2)$. For the sake of completeness we include the modified version of the proof. Let $\alg A \in \vv V$ and let $(a,b) \in \bigwedge^{6}_{i = 1}(\a_{i} \circ \b_{i})$. Thus there exist $u_1,\dots, u_6 \in A$ such that $a \mathrel{\a_i} u_i \mathrel{\b_i} b$, for all $i \in \{1,\dots,6\}$. Let $u = t(u_1,\dots,u_6)$ and $v = t(a,b,b,b,a,a) = t(b,a,b,a,b,a) = t(b,b,a,a,a,b)$. We prove that: \begin{enumerate} \item[(a)] $(a,u) \in \bigvee^{6}_{i=1}\a_{i}$; \item[(b)] $(u,b) \in \bigvee^{6}_{i=1}\b_{i}$; \item[(c)] $(a,u), (u,b) \in \bigwedge^{2}_{i=1} (\gamma \circ \theta_{i})$. \end{enumerate} For claims (a) and (b) we observe that: \begin{align*} a &= t(a,\dots, a) \mathrel{\a_1} \circ \cdots \circ \mathrel{\a_6} t(u_1,\dots,u_6) \\b &= t(b,\dots, b) \mathrel{\b_1} \circ \cdots \circ \mathrel{\b_6} t(u_1,\dots,u_6). \end{align*} For claim (c) we notice that $a$, $b$ and $v$ are in the same $\gamma$-class.
Furthermore: \begin{equation*} u = t(u_1,\dots,u_6) \mathrel{\theta_{i}} v. \end{equation*} Thus $a \mathrel{\gamma} v \mathrel{\theta_{i}} u$ and $b \mathrel{\gamma} v \mathrel{\theta_{i}} u$ and (c) holds. Putting (a), (b), and (c) together we obtain that $(a,b)$ is in the right-hand side of the inequality (\ref{Tayeq}) and $(3)$ holds. For $(3) \Rightarrow (1)$ we can observe that the equation (\ref{Tayeq}) is a special case of the equation in \cite{KK.TBTC}[proof of Lemma $4.6$], hence (\ref{Tayeq}) is non-trivial and thus it implies the satisfaction of a non-trivial idempotent Mal'cev condition. This can be seen considering an $8$-element set $\{a, b, u_1, \dots, u_6\}$ with the partitions $\a_i$ generated by $(a,u_i)$ and $\b_i$ generated by $(b,u_i)$, for all $i \in \{1,\dots,6\}$. Hence $(1)$, $(2)$ and $(3)$ are equivalent. \end{proof} We can see that equation (\ref{Tayeq}) could be further simplified using the relation product and completely avoiding the use of $\vee$, thus generating a strong Mal'cev condition through the Pixley-Wille algorithm \cite{Pix.LMC,Wil.K}. We are ready to prove the main result of the section, which uses Theorem \ref{TaylorThmorig} to find a Mal'cev condition equivalent to being Taylor involving commutator equations. \begin{Thm}\label{TaylorThm} Let $\vv{V}$ be a variety.
Then the following are equivalent: \begin{enumerate} \item[(1)] $\vv{V}$ is Taylor; \item[(2)] the variety $\vv{V}'$ generated by abelian algebras of the idempotent reduct of $\vv{V}$ is Taylor; \item[(3)] $\vv V$ satisfies the commutator equations: \begin{align*}\label{weakTay} t(x,y,y,y,x,x) &\approx_{C} t(y,x,y,x,y,x) \approx_{C} t(y,y, x,x,x,y) \\t(x,x,x,x,x,x) &\approx_{C} x; \end{align*} \item[(4)] $\vv{V}$ satisfies the following congruence equation: \begin{equation*}\label{Tayeqweak} \bigwedge^{6}_{i = 1}(\a_{i} \circ \b_{i}) \leq (\bigvee^{6}_{i=1}\a_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})) \vee (\bigvee^{6}_{i=1}\b_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})); \end{equation*} where \begin{equation*} \theta_{i}=(\bigvee_{j \in L_{i}} \a_{j} \vee \bigvee_{j \in L'_{i}} \b_{j}) \wedge ( [\tau,\tau] \circ \bigvee_{j \in R_{i}} \a_{j} \vee \bigvee_{j \in R'_{i}} \b_{j}) \end{equation*} with $\tau = \bigwedge^{6}_{i = 1}(\a_{i} \vee\b_{i})$ and \begin{align*} &L_1 = L_2 =\{1,5,6\} &L'_{1} = L'_2 = \{ 2,3,4 \} \\&R_{1} = \{ 2,4,6\} &R'_{1} = \{1,3,5 \} \\&R_{2} = \{ 3,4,5\} &R'_{2} = \{1,2,6 \} \end{align*} \item[(5)] there exists $n \in {\mathbb{N}}$ such that $\vv V$ satisfies the following congruence equation in the variables $\{\alpha_1, \dots, \alpha_6, \beta_1, \dots, \beta_6\}$: \begin{equation}\label{weakTayHarreq} \bigwedge^{6}_{i = 1}(\a_{i} \circ \b_{i}) \leq (\bigvee^{6}_{i=1}\a_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})) \vee (\bigvee^{6}_{i=1}\b_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})); \end{equation} where $\tau = \bigwedge^{6}_{i = 1} (\a_{i} \vee \beta_i)$ and \begin{equation*} \theta_{i}=(\bigvee_{j \in L_{i}} \a_{j} \vee \bigvee_{j \in L'_{i}} \b_{j}) \wedge ((\bigwedge_{i=1}^{5}(\a_i \vee \b_i) \wedge \beta_6^n) \circ \bigvee_{j \in R_{i}} \a_{j} \vee \bigvee_{j \in R'_{i}} \b_{j}) \end{equation*} with $\beta_6^n = z^n(\bigwedge^{5}_{i = 1} (\a_{i} \vee \beta_i), \alpha_6, \beta_6)$ and
\begin{align*} &L_1 = L_2 =\{1,5,6\} &L'_{1} = L'_2 = \{ 2,3,4 \} \\&R_{1} = \{ 2,4,6\} &R'_{1} = \{1,3,5 \} \\&R_{2} = \{ 3,4,5\} &R'_{2} = \{1,2,6 \} \end{align*} \end{enumerate} \end{Thm} \begin{proof} $(1) \Rightarrow (2)$ is trivial since if $\vv V$ is Taylor then also the idempotent reduct of $\vv V$ is Taylor. For $(2) \Rightarrow (3)$, let us assume that the variety $\vv{V}'$ generated by abelian algebras of the idempotent reduct of $\vv{V}$ is Taylor. Thus it has an Ol\v{s}\'{a}k term $t$ that satisfies the equations in (\ref{Olsak eq}). Let $\alg A \in \vv V$ and let $\theta \in \operatorname{Con}(\alg A)$. Factoring $\alg A$ by $[\theta, \theta]$ we see that it is sufficient to verify the equations in (3) for abelian congruences. Let $\alg A'$ be the idempotent reduct of $\alg A$. We can observe that every $\theta$-class of $\alg A$ is an abelian subalgebra of $\alg A'$ and thus it has a term $t$ satisfying (\ref{Olsak eq}) by (2). We claim that $t$ satisfies the equations in (3) for $\vv V$. Let $(a,b) \in \theta$, then $t(a,b,b,b,a,a) = t(b,a,b,a,b,a) = t(b,b,a,a,a,b)$ and $t(a,a,a,a,a,a) = a$. The commutator equations (\ref{Olsak eq}) follow directly, remembering that a quotient by $[\theta, \theta]$ has been applied. For $(3) \Rightarrow (4)$, suppose that $(3)$ holds. Then let $\alg A \in \vv V$ and let $\a_1, \dots, \a_6, \b_1, \dots, \b_6 \in \operatorname{Con} (\alg A)$ with $(a,b) \in \bigwedge^{6}_{i = 1}(\a_{i} \circ \b_{i})$. Then, with an argument similar to that of $(2) \Rightarrow (3)$ in the proof of Theorem \ref{TaylorThmorig}, we have that there exist $u_1,\dots, u_6 \in A$ such that $a \mathrel{\a_i} u_i \mathrel{\b_i} b$, for all $i \in \{1,\dots,6\}$. Let $u = t(u_1,\dots,u_6)$ and $v = t(a,b,b,b,a,a)$. Since $(a,b) \in \tau$ we have that $v \mathrel{[\tau, \tau]} t(b,a,b,a,b,a) \mathrel{[\tau, \tau]} t(b,b,a,a,a,b)$, by the equations in (3).
We prove that: \begin{enumerate} \item[(a)] $(a,u) \in \bigvee^{6}_{i=1}\a_{i}$; \item[(b)] $(u,b) \in \bigvee^{6}_{i=1}\b_{i}$; \item[(c)] $ (a,u), (u,b) \in \bigwedge^{2}_{i=1} (\tau \circ \theta_{i})$. \end{enumerate} For claims (a) and (b) we can observe that $t$ is idempotent, by Remark \ref{idempRem}, and thus: \begin{align*} a &= t(a,\dots, a) \mathrel{\a_1} \circ \cdots \circ \mathrel{\a_6} t(u_1,\dots,u_6) \\b &= t(b,\dots, b) \mathrel{\b_1} \circ \cdots \circ \mathrel{\b_6} t(u_1,\dots,u_6). \end{align*} For claim (c) we notice that $a$, $b$ and $v$ are in the same $\tau$-class. Furthermore, we can prove that $v \mathrel{\theta_{i}} u$, for $i = 1,2$: \begin{align*} t(u_1,\dots,u_6) & \mathrel{\bigvee_{j \in L_{1}} \a_{j} \vee \bigvee_{j \in L'_{1}} \b_{j}} v \\t(u_1,\dots,u_6) & \mathrel{\bigvee_{j \in R_{1}} \a_{j} \vee \bigvee_{j \in R'_{1}} \b_{j}} t(b,a,b,a,b,a) \mathrel{[\tau,\tau]} v \\t(u_1,\dots,u_6) & \mathrel{\bigvee_{j \in R_{2}} \a_{j} \vee \bigvee_{j \in R'_{2}} \b_{j}} t(b,b,a,a,a,b) \mathrel{[\tau,\tau]} v. \end{align*} Hence, $a \mathrel{\tau} v \mathrel{\theta_{i}} u$ and $b \mathrel{\tau} v \mathrel{\theta_{i}} u$, and thus (c) holds. Putting (a), (b), and (c) together we obtain that $(a,b)$ is in the right-hand side of the inequality in $(4)$. $(4) \Rightarrow (5)$ follows by applying Lemma \ref{herringLem} to the equation in $(4)$ with $\alpha = \bigwedge^{5}_{i=1}(\alpha_i \vee \beta_i)$, $\beta = \alpha_6$, and $\gamma = \beta_6$. Thus $[\bigwedge^{6}_{i=1}(\alpha_i \vee \beta_i), \bigwedge^{6}_{i=1}(\alpha_i \vee \beta_i)] \leq \bigcup_{n \in \mathbb{N}}(\alpha \wedge \b^n)$ and hence there exists $n \in {\mathbb{N}}$ such that: \begin{equation*} [\tau,\tau] \leq \bigwedge_{i=1}^{5}(\a_i \vee \b_i) \wedge \beta_6^n. \end{equation*} This yields the equation in $(5)$. For $(5) \Rightarrow (1)$ we show that equation (\ref{weakTayHarreq}) is non-trivial and this implies $(1)$ through the Pixley-Wille algorithm, see \cite{Pix.LMC, Wil.K}.
Let $\alg A = \langle \{a,b, c_1, \dots, c_6\}, \pi \rangle$ be an algebra whose basic operations are only projections. Clearly every equivalence relation is a congruence in $\operatorname{Con}(\alg A)$. Let $\alpha_i$ be the partitions which identify only $\{a, c_i\}$ and let $\b_i$ be the partitions which identify only $\{b, c_i\}$, for all $i \in \{1,\dots,6\}$. Then $(a,b)$ is in the left side of (\ref{weakTayHarreq}). We prove that $(a,b)$ is not in the right side. First we can observe that $\bigwedge_{i=1}^{5}(\a_i \vee \b_i) \wedge \b_6^n = 0$ for all $n \in {\mathbb{N}}$, since $\b_6^n = \b_6$ for all $n \in {\mathbb{N}}$. Furthermore, $\bigvee_{i=1}^{6}\a_i$ has exactly the two classes $\{a, c_1,\dots, c_6\}$ and $\{b\}$, while the two classes of $\bigvee_{i=1}^{6}\b_i$ are $\{c_1,\dots, c_6, b\}$ and $\{a\}$. Moreover, we can observe that $(a,c_j), (c_j,b) \notin \bigwedge^{2}_{i=1} (\tau \vee \theta_i)$ for all $j = 1,\dots, 6$. Consequently $\bigwedge_{i=1}^2 (\tau \vee \theta_i)$ has $\{a,b\}$ as the only non-trivial class. Hence we can conclude that $\{a\}$ is a class of $\bigvee^{6}_{i=1}\a_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})$ and $\{b\}$ is a class of $\bigvee^{6}_{i=1}\b_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})$. Thus $(a,b) \notin (\bigvee^{6}_{i=1}\a_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i})) \vee (\bigvee^{6}_{i=1}\b_{i} \wedge \bigwedge^{2}_{i=1} (\tau \vee \theta_{i}))$ as desired. \end{proof} The previous theorem is an improvement of Taylor's characterization of the class of varieties satisfying a non-trivial idempotent Mal'cev condition, obtained by relaxing the hypotheses. Note that there is no specific need for the Ol\v{s}\'{a}k term in the proof, and it also works using a generic $n$-ary Taylor term. Nevertheless, the use of the Ol\v{s}\'{a}k term allows us to obtain simpler equations. Moreover, we can also see that the technique exploited in the previous theorem is clearly applicable in a wider context.
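Counterexamples of the kind used in the proof above lend themselves to a mechanical check. The following Python sketch (purely illustrative, not part of the formal development; the helper names are ours) models the partitions $\a_i$, $\b_i$ on the $8$-element set $\{a,b,c_1,\dots,c_6\}$ as sets of ordered pairs and verifies that $(a,b)$ lies in $\bigwedge^{6}_{i=1}(\a_i \circ \b_i)$ while $\bigvee^{6}_{i=1}\a_i$ has exactly the classes $\{a,c_1,\dots,c_6\}$ and $\{b\}$.

```python
from functools import reduce
from itertools import product

A = ['a', 'b'] + [f'c{i}' for i in range(1, 7)]

def from_blocks(blocks):
    # Equivalence relation on A (as a set of ordered pairs) whose only
    # non-singleton blocks are those listed in `blocks`.
    covered = {x for b in blocks for x in b}
    bs = [set(b) for b in blocks] + [{x} for x in A if x not in covered]
    return {(x, y) for b in bs for x, y in product(b, b)}

def compose(r, s):
    # Relational product r o s.
    return {(x, z) for (x, y) in r for (w, z) in s if y == w}

def join(r, s):
    # Join of two equivalence relations: transitive closure of their union.
    t = r | s
    while True:
        u = t | compose(t, t)
        if u == t:
            return t
        t = u

# alpha_i identifies only {a, c_i}; beta_i identifies only {b, c_i}.
alpha = [from_blocks([{'a', f'c{i}'}]) for i in range(1, 7)]
beta = [from_blocks([{'b', f'c{i}'}]) for i in range(1, 7)]

# (a, b) lies in the meet of the relational products alpha_i o beta_i ...
left = reduce(lambda r, s: r & s,
              (compose(a_i, b_i) for a_i, b_i in zip(alpha, beta)))
assert ('a', 'b') in left

# ... while the join of the alpha_i separates b from all other elements.
join_alpha = reduce(join, alpha)
assert {y for (x, y) in join_alpha if x == 'a'} == set(A) - {'b'}
assert {y for (x, y) in join_alpha if x == 'b'} == {'b'}
```

The same helpers can be reused to test the remaining joins and meets appearing in the proof.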
In the next section we develop this intuition. \section{A Pixley-Wille type algorithm for commutator equations} \label{Alg} In this section our aim is to develop a Pixley-Wille type algorithm for commutator equations, something that can be partially seen in \cite{Lip.ACOV}[Theorem $3.4$] without a proof. Investigating this problem is of interest since a description of a systematic way of characterizing Mal'cev conditions described by commutator equations can produce idempotent Mal'cev conditions logically weaker than their congruence counterpart. Let $p \leq q$ be an equation for the language $\{\wedge, \circ\}$ in the variables $\{X_s\}_{s \in I}$. Let $\mathbf{G}(p)$ and $\mathbf{G}(q)$ be obtained from $p$ and $q$ through the procedure described in Section \ref{LabelledG} and let $\{x_1, \dots, x_n\}$ be the set of vertices of $\mathbf{G}(p)$. We define: \begin{align} \label{eqp} T_s(p) &:= \{(x_i,x_j) \mid (x_i,x_j) \text{ is an edge of } \mathbf{G}(p) \text{ with label } X_s \} \\Tt(q) &:=\{(t_i,t_j) \mid (x_i,x_j) \text{ is an edge of } \mathbf{G}(q)\}\nonumber \\Tt_s(q) &:=\{(t_i,t_j) \mid (x_i,x_j) \text{ is an edge of } \mathbf{G}(q) \text{ with label } X_s\},\nonumber \end{align} where the elements $\{t_1,\dots,t_l\}$ of the pairs in $Tt(q)$ are $n$-ary terms of unspecified type with $t_1 = x_1$ and $t_2 = x_n$ in the variables $\{x_1,\dots,x_n\}$, where $n$ is the number of vertices in $\mathbf{G}(p)$. Let $R \subseteq A \times A$ be a relation over the set $A$. We denote by $\mathrm{Eqv}(R)$ the equivalence relation generated by $R$. We define $\mathrm{Eq}(p \leq q)$ as the set of all equations of the form: \begin{equation}\label{Pixeq} t_i(x_{i_1},\dots,x_{i_m}) \approx t_j(x_{i_1},\dots,x_{i_m}) \end{equation} such that $(t_i,t_j) \in Tt_s(q)$ and the vector of indices $(i_1,\dots,i_m) \in {\mathbb{N}}^m$ satisfies $i_d = \mathrm{min}(\{i \mid (x_{i}, x_d) \in \mathrm{Eqv}(T_s(p))\}) $ for all $d \in [m]$.
This means that the variables that are in the equivalence relation generated by the pairs in $T_s(p)$ are collapsed. The equations (\ref{Pixeq}) are produced by the Pixley-Wille algorithm \cite{Pix.LMC,Wil.K} that characterizes the Mal'cev condition described by the congruence equation $p \leq q$. In the previous definition we introduced terms of unspecified type. We consider those terms to be either a single letter $x_1, x_2, \dots$ or composed of a primitive operation symbol applied to letters, e.g.\ $t(x_1,\dots,x_n)$. Namely, those terms are placeholders used to write equations that can be instantiated in the language of a given variety. Let $\mathrm{Eq}$ be a set of term equations of unspecified type and let $\vv{V}$ be a variety of type $\tau$. Then a $\tau$-\emph{realization} of $\mathrm{Eq}$ is a set of equations obtained from $\mathrm{Eq}$ by replacing each operation symbol, in all of its occurrences in $\mathrm{Eq}$, by some fixed term symbol of type $\tau$. Furthermore, a variety $\vv{V}$ satisfies a set of equations $\mathrm{Eq}$ of terms of unspecified type if there exists a $\tau$-realization of $\mathrm{Eq}$ such that the obtained equations hold in $\vv{V}$. As usual in the literature, from now on we will use the same symbols for terms used as placeholders in some equations and for their $\tau$-realization in a variety of type $\tau$. Let $p$ be a term for the language $\{\wedge, \circ\}$ and $q$ be a term for the language $\{\wedge, \circ, \vee\}$. Then we say that a variety $\vv{V}$ satisfies $\mathrm{Eq}(p \leq q)$ if there exists $k \in {\mathbb{N}}$ such that $\vv{V}$ satisfies $\mathrm{Eq}(p \leq q^{(k)})$. We modify the last part of the algorithm in \cite{Pix.LMC, Wil.K} in order to obtain a set of equations $\mathrm{Eq}_{C}(p \leq q)$ that characterizes the Mal'cev condition describing the class of varieties which satisfy $p \leq q$ over the algebras that belong to the subvariety generated by abelian algebras of the idempotent reduct.
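The variable-collapsing step $i_d = \mathrm{min}(\{i \mid (x_{i}, x_d) \in \mathrm{Eqv}(T_s(p))\})$ is easy to implement. The following Python sketch (illustrative only; the function name \texttt{collapse} is ours) computes, via a union-find structure, the map sending each variable index $d$ to the minimal index in its $\mathrm{Eqv}(T_s(p))$-class.

```python
def collapse(n, pairs):
    """Given variables x_1, ..., x_n and pairs = T_s(p), return the map
    d -> min{ i : (x_i, x_d) in Eqv(T_s(p)) }."""
    parent = list(range(n + 1))  # union-find forest, 1-based indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in pairs:
        ri, rj = find(i), find(j)
        # Always attach the larger root below the smaller one, so the
        # root of each tree is the minimal index of its class.
        parent[max(ri, rj)] = min(ri, rj)

    return {d: find(d) for d in range(1, n + 1)}

# For p = alpha /\ (beta o gamma), G(p) has vertices x_1, x_2, x_3 and
# T_alpha(p) = {(x_1, x_3)}: the variable x_3 collapses onto x_1.
print(collapse(3, [(1, 3)]))  # {1: 1, 2: 2, 3: 1}
```

Applied to each label $X_s$ in turn, this yields the substitution of variables used in the equations (\ref{Pixeq}).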
\begin{Alg} \label{PixWilAlg} \theoremstyle{definition} Let $p \leq q$ be an equation for the language $\{\wedge, \circ\}$. Let $\mathbf{G}(p)$ and $\mathbf{G}(q)$ be obtained from $p$ and $q$ with the procedure in Section \ref{LabelledG}. Let us consider $T_s(p) $ and $Tt_s(q) $ as in \eqref{eqp}. We define $\mathrm{Eq}_{C}(p \leq q)$ as the set of all equations of the form: \begin{equation*} t_i(x_{i_1},\dots,x_{i_m}) \approx_{C} t_j(x_{i_1},\dots,x_{i_m}) \end{equation*} such that $(t_i,t_j) \in Tt_s(q)$ and the vector of indices $(i_1,\dots,i_m) \subseteq {\mathbb{N}}^n$ satisfies $i_d = \mathrm{min}(\{i \mid (x_{i}, x_d) \in \mathrm{Eqv}(T_s(p))\}) $ for all $d \in [m]$. This means that the variables that are in the equivalence relation generated by the pairs in $T_s(p)$ are collapsed. \end{Alg} In order to prove the main result of the section we need the following technical Lemma inspired by \cite{Lip.ACOV}[Theorem $3.1$]. \begin{Lem}\label{lem:classher} Let $\vv{V}$ be a variety, let $\alg{F}_{\vv{V}}(3)$ be the $3$-generated free algebra of $\vv{V}$, let $\a = \mathrm{Cg}(\{(x,z)\}), \b = \mathrm{Cg}(\{(x,y)\}), \gamma = \mathrm{Cg}(\{(y,z)\})$, and let $t,s \in \alg{F}_{\vv{V}}(3)$ with $(t, s) \in \a \wedge y^n(\alpha,\b,\gamma)$ for some $n \in {\mathbb{N}}$. Then, for all $\alg{A} \in \vv{V}$ and $\delta \in \operatorname{Con}(\alg{A})$ with $[\delta,\delta]_{2T} = 0$, we have $t^{\alg{A}}(a,b,c) = s^{\alg{A}}(a,b,c)$ for all $a,b,c$ in the same $\delta$-class. \end{Lem} \begin{proof} First we prove that for all $\alg{A} \in \vv{V}$, $\delta \in \operatorname{Con}(\alg{A})$ with $[\delta,\delta]_{2T} = 0$, and $u, v$ ternary terms such that $u^{\alg{A}}(a,b,a) = v^{\alg{A}}(a,b,a)$ and $u^{\alg{A}}(a,b,b)$ $ = v^{\alg{A}}(a,b,b)$ for all $a,b$ in the same $\delta$-class, then $u = v$ over the same $\delta$-block. This claim is included in a symmetric version in \cite{Lip.ACOV}[Theorem $3.1$]. 
In order to prove the claim we can observe that for all $a,b,c$ in the same $\delta$-class we have: \begin{center} \begin{equation*} \begin{pmatrix} u^{\alg{A}}(b,a,b) & u^{\alg{A}}(b,b,b) \\ u^{\alg{A}}(a,a,c) & u^{\alg{A}}(a,b,c) \end{pmatrix} \text{and} \begin{pmatrix} v^{\alg{A}}(b,a,b) & v^{\alg{A}}(b,b,b) \\ v^{\alg{A}}(a,a,c) & v^{\alg{A}}(a,b,c) \end{pmatrix} \in M(\delta,\delta) \end{equation*} \end{center} and hence $u^{\alg{A}}(a,b,c) \mathrel{[\delta, \delta]_{2T}} v^{\alg{A}}(a,b,c)$, which yields $u^{\alg{A}}(a,b,c) = v^{\alg{A}}(a,b,c)$ for all $a,b,c$ in the same $\delta$-class. Now suppose that $(t, s) \in \a \wedge y^1(\alpha,\b,\gamma) = \a \wedge (\b \vee (\a \wedge \gamma))$. Then there exists $k \in {\mathbb{N}}$ such that $(t, s) \in \a \wedge (\b \circ^{(2k+1)} (\a \wedge \gamma))$. Thus $t \mathrel{\a} s$ and there exist $v_1, w_1, \dots, v_k, w_k \in \alg{F}_{\vv{V}}(3)$ such that \begin{equation*} s \mathrel{\b} v_1 \mathrel{\a \wedge \gamma} w_1 \cdots v_k \mathrel{\a \wedge \gamma} w_k \mathrel{\b} t. \end{equation*} By a standard Mal'cev argument, $t(x,y,x) = s(x,y,x)$, $s(x,x,y) = v_{1}(x,x,y)$, $t(x,x,y) = w_{k}(x,x,y)$, $w_{j-1}(x,x,y) = v_{j}(x,x,y)$, $v_i(x,y,x) = w_i(x,y,x)$, and $v_i(x,y,y) = w_i(x,y,y)$ are identities of $\vv{V}$ for all $i \in \{1, \dots, k\}$, $j \in \{2, \dots, k\}$. Let $\alg{A} \in \vv{V}$ and let $\delta \in \operatorname{Con}(\alg{A})$ with $[\delta,\delta]_{2T} = 0$. We can apply the previous claim to conclude that $v_i^{\alg{A}}(a,b,c) = w_i^{\alg{A}}(a,b,c)$ for $a,b,c$ in the same $\delta$-class. Thus, $s^{\alg{A}}(a,a,c) = t^{\alg{A}}(a,a,c)$ with $a \mathrel{\delta} c$. Moreover, from $t(x,y,x) = s(x,y,x)$ and $s^{\alg{A}}(a,a,c) = t^{\alg{A}}(a,a,c)$ it follows that $t^{\alg{A}}(a,b,c) = s^{\alg{A}}(a,b,c)$ for all $a,b,c$ in the same $\delta$-class, applying a symmetric version of the previous claim. Thus we proved the statement for $(t, s) \in \a \wedge y^1(\alpha,\b,\gamma)$.
If $(t, s) \in \a \wedge y^n(\alpha,\b,\gamma)$ with $n >1$ the claim follows by applying the previous strategy inductively. \end{proof} Now we are ready to prove the main theorem of the section. The next result provides a bridge between congruence equations that hold in a variety $\vv V$ and congruence equations that hold in the variety $\vv V'$ generated by abelian algebras of the idempotent reduct, and thus is interesting for several reasons. Indeed, a systematic production of a Mal'cev condition by a congruence equation valid in $\vv V'$ is somehow surprising. This implies also that the commutator equations generated by Algorithm \ref{PixWilAlg} under some mild assumptions produce a Mal'cev condition, a result that justifies their study. Note that the proof of the next theorem makes deep use of a modified version of what is also called the Mal'cev argument (\cite[Lemma $12.1$]{BS.ACIU}). \begin{Thm}\label{ThmweakAlg} Let $\vv V$ be a variety and suppose that $\a \wedge (\b \circ \gamma) \leq p(\alpha, \b ,\gamma)$ is a congruence equation that fails on a $3$-element set with congruences with pairwise empty intersection, where $p$ is a term for the language $\{\circ, \wedge, \vee\}$.
Then the following are equivalent: \begin{enumerate} \item [(1)] the abelian algebras of the idempotent reduct of $\vv V$ satisfy $\a \wedge (\b \circ \gamma) \leq p(\alpha, \b ,\gamma)$; \item [(2)] $\vv V$ satisfies the commutator equations $\mathrm{Eq}_{C}(\a \wedge (\b \circ \gamma) \leq p(\alpha, \b ,\gamma))$ (see Algorithm \ref{PixWilAlg}); \item [(3)] $\vv V$ satisfies the equations: \begin{equation}\label{commgenEq2} \a \wedge (\b \circ \gamma) \leq p(\a', \b',\gamma'); \end{equation} where $\alpha' = \alpha \circ G(\alpha,\b,\gamma)\circ \a$, $\b' = \b \circ G(\b,\a,\gamma)\circ \b$, and $\gamma' = \gamma \circ G(\gamma,\a,\b)\circ \gamma$ with $G(\alpha,\b,\gamma) \in \{[\beta \wedge (\a \vee \gamma), \beta \wedge (\a \vee \gamma)] , [\gamma \wedge (\a \vee \b),\gamma \wedge (\a \vee \b)] \}$. \item [(4)] $\vv V$ satisfies the equations: \begin{equation}\label{herrgenEq} \a \wedge (\b \circ \gamma) \leq p(\a', \b',\gamma'); \end{equation} where $\alpha' = \mathrel{\alpha} \circ \mathrel{F(\a,\b, \gamma)} \circ \mathrel{\alpha} $, $\b' = \mathrel{\b} \circ \mathrel{F(\b,\a, \gamma)} \circ \mathrel{\b} $, and $\gamma' = \mathrel{\gamma} \circ \mathrel{F(\gamma, \b, \a)} \circ \mathrel{\gamma}$ with $F(\a,\b, \gamma) \in \{\b \wedge y^n(\b,\a,\gamma), \b \wedge z^n(\b,\a,\gamma), \gamma \wedge y^n(\gamma,\b,\a), \gamma \wedge z^n(\gamma,\b,\a)\}$. \end{enumerate} \end{Thm} \begin{proof} Let us start from $(1) \Rightarrow (2)$. Let $\alg A \in \vv V$ and let $\theta \in \operatorname{Con}(\alg A)$. Factoring $\alg A$ by $[\theta, \theta]$ we verify that $\mathrm{Eq}(\a \wedge (\b \circ \gamma) \leq p)$ hold in every $\theta/[\theta, \theta]$-block of the quotient and thus $\mathrm{Eq}_{C}(\a \wedge (\b \circ \gamma) \leq p)$ hold in $\alg A$. Let $\alg A'$ be the idempotent reduct of $\alg A/[\theta, \theta]$.
Since we factored by $[\theta, \theta]$ we can see that every $\theta/[\theta, \theta]$-class of $\alg A/[\theta, \theta]$ is an abelian subalgebra of $\alg A'$ and thus it has terms satisfying $\mathrm{Eq}(\a \wedge (\b \circ \gamma) \leq p)$ via the Pixley-Wille algorithm \cite{Pix.LMC, Wil.K}. We can observe that those terms witness the satisfaction of $\mathrm{Eq}_{C}(\a \wedge (\b \circ \gamma) \leq p)$ in $\alg{A}$, since a quotient by $[\theta,\theta]$ has been applied. For $(2) \Rightarrow (3)$, from the hypothesis we have that there exists $k \in {\mathbb{N}}$ such that $\mathrm{Eq}_{C}(\alpha \wedge (\beta \circ \gamma) \leq p^{(k)})$ hold in $\vv{V}$. Then let $\alg{A} \in \vv{V}$, $a_1 ,a_2 \in A$, and let $\alpha, \b, \gamma \in \operatorname{Con}(\alg{A})$ be such that: \begin{equation*} (a_1, a_2) \in \a \wedge (\b \circ \gamma). \end{equation*} Hence there exists $a_3 \in A$ such that $(a_1,a_3) \in \b$ and $(a_3, a_2) \in \gamma$. We want to prove that \begin{equation} \label{GoalnewPW} (a_1, a_2) \in p^{(k)}(\a', \b',\gamma'). \end{equation} Let $Z = \{z_1,\dots,z_u\}$ be the set of vertices of $\mathbf{G}(p^{(k)})$ and let $X_s$ be a variable of $\mathbf{G}(p^{(k)})$. From the definition of $\mathrm{Eq}_{C}(\a \wedge (\b \circ \gamma) \leq p^{(k)})$, we have that for all $(t_i,t_j) \in Tt_s(p^{(k)})$ there exist two $3$-ary terms $t_i, t_j$ such that: \begin{equation*} t_i(x_{i_1},x_{i_2}, x_{i_3}) \approx_{C} t_j(x_{i_1},x_{i_2}, x_{i_3}) \in \mathrm{Eq}_{C}(\a \wedge (\b \circ \gamma) \leq p^{(k)}) \end{equation*} where the variables in $T_s(p)$-relation are collapsed. Let $\alpha$ be the congruence substituted for $X_s$.
Then, fixing this congruence, we have that $T_s(p) = \{(x_1,x_3)\}$ and \begin{align*} &t_i^{\alg{A}}(a_1, a_2, a_3) \mathrel{\a } t_i^{\alg{A}}(a_{1}, a_{1}, a_{3}) \mathrel{[\b \wedge (\a \vee \gamma ),\b \wedge (\a \vee \gamma )]} t_j^{\alg{A}}(a_{1}, a_{1}, a_{3}) \\&t_j^{\alg{A}}(a_{1}, a_{1}, a_{3}) \mathrel{\a} t_j^{\alg{A}}(a_1, a_2, a_3). \end{align*} With the same argument we can check that for all $\theta_s \in \{\alpha,\b, \gamma\}$ and $\{\tau_s, \delta_s\} = \{\alpha, \b, \gamma\} \setminus \{\theta_s\}$: \begin{align*}\label{commutatoreq} &t_i^{\alg{A}}(a_1, a_2, a_3) \mathrel{\theta_s } t_i^{\alg{A}}(a_{i_1}, a_{i_2}, a_{i_3}) \\&t_i^{\alg{A}}(a_{i_1}, a_{i_2}, a_{i_3}) \mathrel{[\tau_s \wedge (\theta_s \vee \delta_s),\tau_s \wedge (\theta_s \vee \delta_s)]} t_j^{\alg{A}}(a_{i_1}, a_{i_2}, a_{i_3}) \\&t_j^{\alg{A}}(a_{i_1}, a_{i_2}, a_{i_3}) \mathrel{\theta_s} t_j^{\alg{A}}(a_1, a_2, a_3) \end{align*} where $\theta_s$ is the congruence substituted for the variable $X_s$ of $\mathbf{G}(p^{(k)})$. Let $\rho:Z \rightarrow A$ be the assignment such that $z_1 \mapsto a_1$, $z_2 \mapsto a_2$, and $z_i \mapsto t_i^{\alg{A}}(a_1, a_2, a_3)$ for all $3 \leq i \leq u$. Thus we have that $(t_i^{\alg{A}}(a_1, a_2, a_3), t_j^{\alg{A}}(a_1, a_2, a_3)) \mathrel{\in} \mathrel{\theta_s} \circ \mathrel{[\tau_s \wedge (\theta_s \vee \delta_s),\tau_s \wedge (\theta_s \vee \delta_s)]} \circ \mathrel{\theta_s}$ whenever $(z_i,z_j) \in \mathbf{G}(p^{(k)})$. By $(1)$ of Proposition \ref{PropKK}, we have that $(a_1,a_2) \in p^{(k)}(\a', \b',\gamma') \subseteq p(\a', \b',\gamma')$. $(3) \Rightarrow (4)$ follows by applying Lemma \ref{herringLem}. Indeed, let $\alg F_{\vv V}(3)$ be the free algebra of $\vv V$ over the $3$ generators $\{x,y,z\}$. We see that (\ref{commgenEq2}) holds in $\alg F_{\vv V}(3)$ by the hypothesis. Let $\a = \mathrm{Cg}(\{(x,z)\})$, $\b = \mathrm{Cg}(\{(x,y)\})$, and $\gamma = \mathrm{Cg}(\{(y,z)\})$.
By Lemma \ref{herringLem} we have that $[\a \wedge(\b \vee \gamma),\a \wedge(\b \vee \gamma)] \leq \bigcup_{n \in {\mathbb{N}}} \a \wedge y^n(\a,\b,\gamma)$ and $[\a \wedge(\b \vee \gamma),\a \wedge(\b \vee \gamma)] \leq \bigcup_{n \in {\mathbb{N}}} \a \wedge z^n(\a,\b,\gamma)$. Thus we can substitute every occurrence in (\ref{commgenEq2}) of $[\a \wedge(\b \vee \gamma),\a \wedge(\b \vee \gamma)]$ with $\bigcup_{n \in {\mathbb{N}}} \a \wedge y^n(\a,\b,\gamma)$ or $\bigcup_{n \in {\mathbb{N}}} \a \wedge z^n(\a,\b,\gamma)$ and the obtained equation holds in $\alg F_{\vv V}(3)$. Permuting the variables we can substitute every occurrence of $[\beta \wedge (\a \vee \gamma),\beta \wedge (\a \vee \gamma)]$ with $\bigcup_{n \in {\mathbb{N}}} \b \wedge y^n(\b,\a,\gamma)$ or $\bigcup_{n \in {\mathbb{N}}} \b \wedge z^n(\b,\a,\gamma)$ and of $[\gamma \wedge (\a \vee \b),\gamma \wedge (\a \vee \b)]$ with $\bigcup_{n \in {\mathbb{N}}} \gamma \wedge y^n(\gamma,\b,\a)$ or $\bigcup_{n \in {\mathbb{N}}} \gamma \wedge z^n(\gamma,\b,\a)$. Since we have performed finitely many substitutions in the equation (\ref{commgenEq2}), there exists an $n \in {\mathbb{N}}$ such that the equations (\ref{herrgenEq}) hold in $\alg F_{\vv V}(3)$. Hence, the equations (\ref{herrgenEq}) hold in $\vv V$ via a standard Mal'cev argument. $(4) \Rightarrow (1)$. Let $\vv V'$ be the variety generated by the abelian algebras of the idempotent reduct of $\vv V$. Since the equation (\ref{herrgenEq}) is a congruence equation then, by the Pixley-Wille algorithm \cite{Pix.LMC, Wil.K}, it generates an idempotent Mal'cev condition that holds in $\vv V'$ and we claim that this Mal'cev condition is non-trivial. In fact, $\a \wedge(\b \circ \gamma) \leq p(\alpha,\b,\gamma)$ is non-trivial and it fails on a $3$-element set for a proper choice of $\alpha, \b, \gamma$. Furthermore, $\alpha, \b, \gamma$ can be chosen with pairwise empty intersection and thus with $\a \wedge \b^n = \a \wedge \gamma^n = 0$, and the same holds permuting $\alpha,\b,\gamma$. Hence, we can see that if $\a \wedge(\b \circ \gamma) \leq p(\alpha,\b,\gamma)$ fails on a $3$-element set then also the equations (\ref{herrgenEq}) fail choosing the same congruences.
Furthermore, let $\alg{A}$ be an abelian algebra of the idempotent reduct of $\vv{V}$. By \cite{KK.TBTC}[Corollary $4.5$], in Taylor varieties $[\phi,\phi] = 0$ implies $[\phi,\phi]_{2T} = 0$, since $[\phi, \phi]_{L} = 0 \Rightarrow [\phi,\phi]_{2T} = 0$, where $[\phi,\phi]_{L}$ is the linear commutator \cite{KK.TBTC}; thus $[\phi,\phi]_{2T} = 0$ for all $\phi \in \operatorname{Con}(\alg{A})$. From the hypothesis we have that $\alg{A}$ satisfies $\a \wedge (\b \circ \gamma) \leq p(\a', \b',\gamma')$ and we prove that $\alg{A}$ satisfies $\a \wedge (\b \circ \gamma) \leq p(\a, \b,\gamma)$. Let $(a,b) \in \a \wedge(\b \circ \gamma)$, then there exists $c \in A$ such that $(a,b) \in \alpha$, $(a,c) \in \b$, and $(c,b) \in \gamma$. We show that $(a,b) \in p(\a, \b,\gamma)$. Let $\alg{F}_{\vv{W}}(\{x,y,z\})$ be the free algebra over three generators in the variety $\vv{W}$ generated by the abelian algebras of the idempotent reduct of $\vv{V}$. Furthermore, let $\overline{\a} = \mathrm{Cg}(\{(x,z)\})$, $\overline{\b} = \mathrm{Cg}(\{(x,y)\})$, and $\overline{\gamma} = \mathrm{Cg}(\{(y,z)\})$. Since $\alg{F}_{\vv{W}}(\{x,y,z\})$ satisfies (\ref{herrgenEq}), by the Pixley-Wille algorithm \cite{Pix.LMC, Wil.K} we have that for all $(t_i, t_j) \in Tt_s(p(\overline{\alpha},\overline{\b},\overline{\gamma}))$ there exist $s_i, s_j \in \alg{F}_{\vv{W}}(\{x,y,z\})$ such that $t_i\mathrel{\a_1} s_i \mathrel{F(\a_1, \a_2, \a_3)} s_j \mathrel{\a_1} t_j$, where $\a_1 \circ F(\a_1, \a_2, \a_3) \circ \a_1$ is the expression substituted for $X_s$ in $p$ and $\{\a_1, \a_2, \a_3\} = \{\overline{\alpha},\overline{\b},\overline{\gamma}\}$. Hence, $t_i(x_{i_1}, x_{i_2}, x_{i_3}) = s_i(x_{i_1}, x_{i_2}, x_{i_3})$ and $t_j(x_{i_1}, x_{i_2}, x_{i_3}) = s_j(x_{i_1}, x_{i_2}, x_{i_3})$ are identities of $\vv{V}$, where $x_{i_1}, x_{i_2}, x_{i_3} \in \{x,y,z\}$ and the variables in $\a_1$-relation are collapsed.
Furthermore, $s_i \mathrel{F(\a_1, \a_2, \a_3)} s_j$ and thus we can apply Lemma \ref{lem:classher} to these terms. Let $\psi:Y \rightarrow A$ be an assignment extending $y_1 \mapsto a$, $y_2 \mapsto b$ such that $y_i \mapsto t_i^{\alg{A}}(a,b,c)$, where $\{t_i\}_{i\in I}$ is the set of terms appearing in $Tt(p(\alpha,\b,\gamma))$. Let us consider the case $\a_1 = \b$. Then, as a consequence of Lemma \ref{lem:classher} and the identities $t_i(x_{1}, x_{1}, x_{3}) = s_i(x_{1}, x_{1}, x_{3})$ and $t_j(x_{1}, x_{1}, x_{3}) = s_j(x_{1}, x_{1}, x_{3})$, we have that \begin{equation*} t_i^{\alg{A}}(a,b,c) \mathrel{\b} t_i^{\alg{A}}(a,a,c) = s_i^{\alg{A}}(a,a,c) = s_j^{\alg{A}}(a,a,c) = t_j^{\alg{A}}(a,a,c) \mathrel{\b} t_j^{\alg{A}}(a,b,c). \end{equation*} Thus $t_i(a,b,c) \mathrel{\b} t_j(a,b,c)$ and with the same strategy we can show similar relations for $\a_1 \in \{ \a, \gamma\}$; hence, by Proposition \ref{PropKK}, the assignment $\psi$ witnesses the satisfaction of $\a \wedge (\b \circ \gamma) \leq p(\a, \b,\gamma)$ in $\alg{A}$. Thus, the abelian algebras of the idempotent reduct of $\vv{V}$ satisfy $\a \wedge (\b \circ \gamma) \leq p(\alpha, \b ,\gamma)$ and $(1)$ holds. \end{proof} We can see that the previous theorem requires restricting the congruence equation $p \leq q$ that holds in the variety generated by the abelian algebras of the idempotent reduct of a variety to have $p = \a \wedge (\b \circ \gamma)$. Although quite restrictive, this hypothesis is satisfied by the vast majority of the interesting known Mal'cev conditions characterized by congruence equations (modularity, distributivity, and many others). Moreover, the hypothesis $p = \a \wedge (\b \circ \gamma)$ can be weakened further, as shown in Theorem \ref{TaylorThm}, but the difficulty in doing so comes from the constraints given by the commutator equations, which run over a set of variables in the same $\theta$-class. This blocks the generalization of the proof of $(2) \Rightarrow (3)$ of Theorem \ref{ThmweakAlg}.
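The non-triviality hypothesis of Theorem \ref{ThmweakAlg} (failure of $\a \wedge (\b \circ \gamma) \leq p(\a,\b,\gamma)$ on a $3$-element set with congruences of pairwise empty intersection) can likewise be checked mechanically. The following Python sketch is purely illustrative: the choice of right-hand side $p = \b \circ (\a \wedge \gamma) \circ \b$ is ours, echoing a modularity-style equation, and the helper names are not from the text.

```python
from itertools import product

X = ['x', 'y', 'z']

def eqv(*blocks):
    # Equivalence relation on X whose only non-singleton blocks are `blocks`.
    bs = [set(b) for b in blocks] + [{e} for e in X]
    return {(p, q) for b in bs for p, q in product(b, b)}

def compose(r, s):
    # Relational product r o s.
    return {(p, q) for (p, m) in r for (m2, q) in s if m == m2}

# Congruences on the 3-element projection algebra with pairwise
# empty (i.e. trivial) intersection:
alpha, beta, gamma = eqv({'x', 'z'}), eqv({'x', 'y'}), eqv({'y', 'z'})

# Left-hand side: (x, z) lies in alpha /\ (beta o gamma).
left = alpha & compose(beta, gamma)
assert ('x', 'z') in left

# Since alpha /\ gamma is trivial, the right-hand side
# beta o (alpha /\ gamma) o beta collapses to beta, which omits (x, z).
right = compose(compose(beta, alpha & gamma), beta)
assert ('x', 'z') not in right
```

With these congruences every meet with $\b^n$ or $\gamma^n$ is trivial as well, which is exactly the observation used in the proof of $(4) \Rightarrow (1)$.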
\section*{Conclusions} In this work we have observed how some Mal'cev conditions can be characterized through commutator equations. This interesting connection, which generalizes the Mal'cev condition produced by the weak difference term, might have many outcomes in the study of the relationship between Mal'cev conditions and commutator properties or TCT-types, as shown in results such as \cite{KK.TBTC, KK.TSOC, Lip.ACOV}. Our aim with this work was to build the foundations of a new type of equations, well-behaved in terms of Mal'cev conditions, that could produce new insights about them. Further developments could be obtained by applying the procedure of Theorem \ref{ThmweakAlg} to known congruence equations, in order to discover whether the generated commutator equations furnish known Mal'cev conditions or characterize new ones and, in the latter case, which properties are associated to them. Moreover, in commutator theory the study of the properties of commutators in varieties that satisfy certain Mal'cev conditions has been of great interest \cite{KK.TBTC, KK.TSOC}. Thus, a natural problem that could be worth investigating is how various properties described by commutator equations change the behaviour of the commutator or of the solvability theory in varieties satisfying them. Examples of prototype results about commutator and solvability theory can be found in \cite{KK.TSOC}[Chapter $6$] for the weak difference term. Finally, we want to highlight that this paper concerns properties satisfied by abelian algebras of the idempotent reduct of a variety. The fact that under mild assumptions those kinds of properties generally produce Mal'cev conditions shows an unexpected deep connection between a variety and the abelian algebras of its idempotent reduct and, if needed, allows one to check Mal'cev conditions in the more manageable setting of abelian algebras.
\section*{Acknowledgement} The author thanks Paolo Aglianò, Erhard Aichinger, Sebastian Kreinecker, and Bernardo Rossi for many hours of fruitful discussions.
\section{Introduction} The problem of identifying the transition matrices in linear time-invariant (LTI) systems has been extensively studied in the literature \cite{lai1983asymptotic,kailath2000linear,buchmann2007asymptotic}. Recent papers establish finite-time rates for accurately learning the dynamics in various online and offline settings~\cite{faradonbeh2018finite,simchowitz2018learning,sarkar2019near}. Notably, existing results are established when the goal is to identify the transition matrix of \emph{a single} system. However, in many application areas of LTI systems, one observes state trajectories of \emph{multiple} dynamical systems. So, in order to be able to efficiently use the full data of all state trajectories and utilize the possible commonalities the systems share, we need to estimate the transition matrices of all systems \emph{jointly}. The range of applications is remarkably extensive, including dynamics of economic indicators in US states \cite{pesaran2015time,stock2016dynamic,skripnikov2019joint}, flight dynamics of airplanes at different altitudes \cite{bosworth1992linearized}, drivers of gene expressions across related species \cite{fujita2007modeling,basu2015network}, time series data of multiple subjects that suffer from the same disease \cite{seth2015granger,skripnikov2019regularized}, and commonalities among multiple subsystems in control engineering \cite{Sudhakara2022scalable}. In all these settings, there are strong similarities in the dynamics of the systems, which are unknown and need to be learned from the data. Hence, it becomes of interest to develop a joint learning strategy for the system parameters, by pooling the data of the underlying systems together and learn the \emph{unknown} similarities in their dynamics. In particular, this strategy is of extra importance in settings wherein the available data is limited, for example when the state trajectories are short or the dimensions are not small. 
In general, joint learning (also referred to as multitask learning) approaches aim to study estimation methods subject to \emph{unknown} similarities across the data generation mechanisms. Joint learning methods are studied in supervised learning and online settings \cite{caruana1997multitask,ando2005framework,maurer2006bounds,maurer2016benefit,alquier2017regret}. Their theoretical analyses are obtained by adopting a number of technical assumptions regarding the data, including independence, identical distributions, boundedness, richness, and isotropy. However, for the problem of joint learning of dynamical systems, additional technical challenges are present. First, the observations are temporally dependent. Second, the number of unknown parameters is the \textit{square} of the dimension of the system, which impacts the learning accuracy. Third, in many applications the dynamics matrices of the underlying LTI systems may possess eigenvalues of (almost) unit magnitude, which in turn renders common approaches for dependent data (e.g., mixing) inapplicable \cite{faradonbeh2018finite,simchowitz2018learning,sarkar2019near}. Fourth, the spectral properties of the transition matrices play a critical role in the magnitude of the estimation errors. Technically, the state vectors of the systems scale exponentially with the multiplicities of the eigenvalues of the transition matrices (which can be as large as the dimension). Accordingly, novel techniques are required for considering all important factors and new analytical tools are needed for establishing useful rates for estimation error. Further details and technical discussions are provided in \pref{sec:joint-learning}. We focus on a commonly used setting for joint learning that involves two layers of uncertainties. It lets all systems share a common basis, while coefficients of the linear combinations are \emph{idiosyncratic} for each system.
This setting is adopted in multitask regression, linear bandits, and Markov decision processes \cite{du2020few,tripuraneni2020provable,hu2021near,lu2021power}. From another point of view, this assumption that the system transition matrices are \emph{unknown} linear combinations of \emph{unknown} basis matrices can be considered as a first-order approximation for {unknown} non-linear dynamical systems \cite{kang1993approximate,li2004iterative}. Further, these compound layers of uncertainties subsume a recently studied case for mixtures of LTI systems where, under additional assumptions such as exponential stability and distinguishable transition matrices, joint learning from unlabeled state trajectories outperforms individual system identification \cite{chen2022learning}. The main contributions of this work can be summarized as follows. We provide novel finite-time estimation error bounds for jointly learning multiple systems, and establish that pooling the data of state trajectories can drastically decrease the estimation error. Our analysis also presents the effects of different parameters on estimation accuracy, including dimension, spectral radius, eigenvalue multiplicities, tail properties of the noise processes, and heterogeneity among the systems. Further, we study learning accuracy in the presence of model misspecifications and show that the developed joint estimator can robustly handle moderate violations of the shared structure in the dynamics matrices. In order to obtain the results, we employ advanced techniques from random matrix theory and prove sharp concentration results for sums of multiple dependent random matrices. Then, we proceed to establish tight and simultaneous high-probability confidence bounds for the sample covariance matrices of the systems under study.
The analyses precisely characterize the dependence of the presented bounds on the spectral properties of the transition matrices, condition numbers, and block-sizes in the Jordan decomposition. Further, to address the important issue of temporal dependence, we extend self-normalized martingale bounds to {multiple matrix-valued martingales}, subject to shared structures across the systems. We also present a robustness result by showing that the error due to misspecifications can be nicely controlled. The remainder of the paper is organized as follows. The problem is formulated in \pref{sec:formulation}. In \pref{sec:joint-learning}, we describe the joint-learning procedure, study the per-system estimation error, and provide the roles of various key quantities. Then, investigation of robustness to model misspecification and the impact of violating the shared structure are discussed in \pref{sec:misspec}. We provide numerical illustrations for joint learning in \pref{sec:numerical} and present the proofs of our results in the subsequent sections. Finally, the paper is concluded in \pref{sec:conc}. \paragraph*{Notation.} For a matrix $A$, $A'$ denotes the transpose of $A$. For square matrices, we use the following order of eigenvalues in terms of their magnitudes: $\abr{\lambda_{\max}(A)} = \abr{\lambda_1(A)} \ge \abr{\lambda_2(A)} \ge \cdots \ge \abr{\lambda_d(A)} = \abr{\lambda_{\min}(A)}$. For singular values, we employ $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$. For any vector $v \in \CC^d$, let $\norm{v}_p$ denote its $\ell_p$ norm. We use $\Mnorm{A}_{\gamma \rightarrow \beta}$ to denote the operator norm defined as follows for $\beta, \gamma \in [1,\infty]$ and $A \in \CC^{d_1 \times d_2}$: $\opnorm{A}{\gamma \rightarrow \beta} = \sup\limits_{v \neq \zero } {\norm{Av}_\beta}/{\norm{v}_{\gamma}}$. When $\gamma=\beta$, we simply write $\Mnorm{A}_\beta$. 
For functions $f,g: \Xcal \rightarrow \mathbb{R}$, we say that $f \lesssim g$ if $f(x) \le c g(x)$ for a universal constant $c>0$. Similarly, we use the order notation $f = O(g)$ and $f = \Omega(h)$, if $0 \le f(n) \le c_1 g(n)$ for all $n \ge n_1$ and $0 \le c_2 h(n) \le f(n)$ for all $n \ge n_2$, for large enough constants $c_1, c_2,n_1,n_2$. For any two matrices $A,B \in \mathbb{R}^{d_1 \times d_2}$, we define the inner product $\inner{A}{B} = \tr{A'B}$. Then, the Frobenius norm becomes $\norm{A}_F = \sqrt{\inner{A}{A}}$. The sigma-field generated by $X_1, X_2, \ldots, X_n$ is denoted by $\sigma(X_1, X_2,\ldots X_n)$. We denote the $i$-th component of the vector $x \in \mathbb{R}^d$ by $x[i]$. For any $n \in \NN$, $[n]$ denotes the set $\cbr{1,2,\ldots,n}$. Finally, we use $\vee$ ($\wedge$) to denote the maximum (minimum). \section{Problem Formulation}\label{sec:formulation} \begin{figure}[ht] \centering \subfigure[$\lambda_{\max}(A)<1$]{\label{fig:a}\includegraphics[width=0.48\columnwidth]{stable.png}} \subfigure[$\lambda_{\max}(A)\approx1$]{\label{fig:b}\includegraphics[width=0.48\columnwidth]{unit_root.png}} \caption{Logarithm of the magnitude of the state vectors vs. time, for different block-sizes in the Jordan forms of the transition matrices, which is denoted by $l$ in \pref{eq:JordanDef}. The exponential scaling of the state vectors with $l$ can be seen in both plots.} \label{fig:state-size} \end{figure} Our main goal is to study the rates of jointly learning dynamics of multiple LTI systems. Data consists of state trajectories of length $T$ from $M$ different systems. Specifically, for $m \in [M]$ and $t = 0,1,\ldots, T$, let $x_m(t) \in \mathbb{R}^{d}$ denote the state of the $m$-th system, that evolves according to the Vector Auto-Regressive (VAR) process \begin{align} \label{eq:mt-lti} x_m(t+1) = A_m x_m(t) + \eta_m(t+1). 
\end{align} Above, $A_m \in \mathbb{R}^{d \times d}$ denotes the true unknown transition matrix of the $m$-th system and $\eta_m(t+1)$ is a mean zero noise. For succinctness, we use $\Theta^*$ to denote the set of all $M$ transition matrices $\{A_m\}_{m=1}^M$. Moreover, the transition matrices are \textit{related} as specified in \pref{assum:linear_model}, which we will shortly discuss. Note that the above setting includes systems with longer memories. Indeed, if the states $\tilde{x}_m(t) \in \mathbb{R}^{\tilde{d}}$ obey $$\tilde{x}_m(t) = B_{m,1} \tilde{x}_m(t-1) + \cdots + B_{m,q}\tilde{x}_m(t-q)+ \eta_m(t),$$ then, by concatenating $\tilde{x}_m(t-1), \cdots, \tilde{x}_m(t-q)$ in one larger vector $x_m(t-1)$, the new state dynamics is \pref{eq:mt-lti}, for $d =q\tilde{d}$ and $A_m = \begin{bmatrix} B_{m,1} \cdots B_{m,q-1} & B_{m,q}\\ I_{(q-1)\tilde{d}} & 0 \end{bmatrix}.$ We assume that the system states do not explode, while the spectral radius of the transition matrix $A_m$ is allowed to be \emph{slightly} larger than one. This is required for the systems to be able to operate for a reasonable time length~\cite{juselius2002high,faradonbeh2018bfinite}. Note that this assumption still lets the state vectors grow with time, as shown in \pref{fig:state-size}. \begin{assump} For all $m \in [M]$, we have $\abr{\lambda_1(A_m)} \le 1+\rho/T$, where $\rho>0$ is a fixed constant. \end{assump} In addition to the magnitudes of the eigenvalues, further properties of the transition matrices heavily determine the temporal evolution of the systems. A very important one is the size of the largest block in the Jordan decomposition of $A_m$, which will be rigorously defined shortly. This quantity is denoted by $l$ in \pref{eq:JordanDef}. The impact of $l$ on the state trajectories is illustrated in \pref{fig:state-size}, wherein we plot the \emph{logarithm} of the magnitude of state vectors for linear systems of dimension $d=32$.
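As a side illustration of the stacking argument above, the following sketch (with illustrative sizes $\tilde{d}=3$ and $q=2$ that are not from the paper) builds the companion matrix $A_m$ and verifies that one step of the stacked VAR(1) form in \pref{eq:mt-lti} reproduces the higher-order recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
d_tilde, q = 3, 2   # illustrative sizes, not from the paper
B = [0.3 * rng.standard_normal((d_tilde, d_tilde)) for _ in range(q)]

# Companion matrix of the stacked VAR(1) representation, with d = q * d_tilde
d = q * d_tilde
A = np.zeros((d, d))
A[:d_tilde, :] = np.hstack(B)                                # top block row: [B_1 ... B_q]
A[d_tilde:, :(q - 1) * d_tilde] = np.eye((q - 1) * d_tilde)  # shifts the history down

# One noise-free transition of the stacked state reproduces the VAR(q) recursion
hist = [rng.standard_normal(d_tilde) for _ in range(q)]      # x(t-1), ..., x(t-q)
x_prev = np.concatenate(hist)
top_block = sum(B[i] @ hist[i] for i in range(q))
assert np.allclose((A @ x_prev)[:d_tilde], top_block)
```

The identity block simply shifts the stored history, so only the top $\tilde{d}$ coordinates carry the new observation.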
Plot \pref{fig:a} depicts the state magnitude for stable systems and for blocks of size $l=2,4,8,16$ in the Jordan decomposition of the transition matrices. It illustrates that the state vector scales \emph{exponentially} with $l$. Note that $l$ can be as large as the system dimension $d$. Moreover, the case of transition matrices with eigenvalues close to (or exactly on) the unit circle is provided in \pref{fig:b}. It illustrates that the state vectors grow polynomially with time, whereas the scaling with the block-size $l$ is exponential. Therefore, in design and analysis of joint learning methods, one needs to carefully consider the effects of $l$ and $\abr{\lambda_1(A_m)}$. To that end, we define $\alpha\left(A_m\right)$ in \pref{eq:alpha_def} and use it in our theoretical analyses that will be discussed later on. \iffalse To describe the assumption on the noise sequence for each system, we first define the sub-Gaussian norm of a random variable. \begin{definition}[\cite{vershynin2018high}] \label{def:subg-norm} For a sub-Gaussian random variable $X$, the sub-Gaussian norm $\norm{X}_{\psi_2}$ is defined as \begin{align*} \norm{X}_{\psi_2} = \inf \{t > 0: \EE \exp(X^2/t^2) \le 2\}. \end{align*} For such a random variable, equivalently we also have: $\PP[|X| \ge t] \le 2 \exp(-ct^2/\norm{X}_{\psi_2}^2)$ and $\EE[X^2] \le c'\norm{X}_{\psi_2}^2$ where $c,c'$ are absolute constants. If $\EE[X] = 0$, then $\EE\sbr{\exp(\lambda X)} \le \exp(c'\lambda^2\norm{X}_{\psi_2}^2)$. \end{definition} \fi Next, we express the probabilistic properties of the stochastic processes driving the dynamical systems. Let $\Fcal_t \coloneqq \sigma(x_0,\eta_1,\eta_2,\dots,\eta_t)$ denote the filtration generated by the initial state and the sequence of noise vectors. Based on this, we adopt the following ubiquitous setting that lets the noise process $\{\eta_m(t)\}_{t=1}^\infty$ be a sub-Gaussian martingale difference sequence.
Note that by definition, $\eta_m(t)$ is $\Fcal_{t}$-measurable. \begin{assump} \label{assum:noise} For all systems $m \in [M]$, we have $\EE\sbr{\eta_m(t)|\Fcal_{t-1}} = \zero$ and $\EE\sbr{\eta_m(t) \eta_m(t)'|\Fcal_{t-1}} = C$. Further, $\eta_m(t)$ is sub-Gaussian; $\EE\sbr{\exp \inner{\lambda}{\eta_m(t)} | \Fcal_{t-1}} \le \exp(\norm{\lambda}^2 \sigma^2/2)$, for all $\lambda \in \mathbb{R}^d$. Henceforth, we denote $c^2=\max(\sigma^2, \lambda_{\max}(C))$. \end{assump} The above assumption is widely used in the finite-sample analysis of statistical learning methods~\cite{abbasi2011improved,faradonbeh2020input}. It includes normally distributed martingale difference sequences, for which \pref{assum:noise} is satisfied with $\sigma^2 = \lambda_{\max}(C)$. Moreover, if the coordinates $\eta_m(t)[i]$ are (conditionally) independent and have sub-Gaussian distributions with constant $\sigma_i$, it suffices to let $\sigma^2 = \sum_{i=1}^d \sigma_i^2$. For a single system $m \in [M]$, its underlying transition matrix $A_m$ can be \emph{individually} learned from its own state trajectory data by using the least squares estimator \cite{faradonbeh2018finite,sarkar2019near}. We are interested in jointly learning the transition matrices of all $M$ systems under the assumption that they share the following common structure. \begin{assump}[Shared Basis] \label{assum:linear_model} Each transition matrix $A_m$ can be expressed as \begin{align} \label{eq:linear_model} A_m = \sum_{i=1}^k \beta^*_{m}[i] W^*_i, \end{align} where $\{W^*_i\}_{i=1}^k$ are common ${d \times d}$ matrices and $\beta^*_m \in \mathbb{R}^k$ contains the idiosyncratic coefficients for system $m$. \end{assump} This assumption is commonly used in the literature of jointly learning multiple parameters~\cite{du2020few,tripuraneni2020provable}. Intuitively, it states that each system evolves by combining the effects of $k$ systems.
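As a quick sanity check on \pref{assum:linear_model}, the sketch below (with illustrative sizes $d=4$, $k=2$, $M=10$, not from the paper) generates transition matrices from a shared basis and verifies that stacking the vectorized matrices $\mathrm{vec}(A_m)$ as columns yields a matrix of rank at most $k$; this is the low-rank structure that joint learning exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, M = 4, 2, 10   # illustrative sizes, not from the paper
W = [rng.standard_normal((d, d)) for _ in range(k)]   # shared basis W_1^*, ..., W_k^*
beta = rng.standard_normal((M, k))                    # idiosyncratic coefficients beta_m^*

# Each A_m is a linear combination of the shared basis matrices
A = [sum(beta[m, i] * W[i] for i in range(k)) for m in range(M)]

# Stacking vec(A_m) as columns gives a d^2 x M matrix of rank at most k
Theta = np.column_stack([a.reshape(-1) for a in A])
assert np.linalg.matrix_rank(Theta) <= k
```

Since $k < \min(d^2, M)$ here, the stacked parameter matrix is genuinely low rank, even though each individual $A_m$ is a dense $d \times d$ matrix.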
These $k$ unknown systems behind the scenes are shared by all systems $m \in [M]$, and the weight of each is reflected by the idiosyncratic coefficients collected in $\beta^*_m$ for system $m$. Thereby, the model allows for a rich heterogeneity across systems. The main goal is to estimate $\Theta^*=\{A_m\}_{m=1}^M$ by observing $x_m(t)$ for $1 \leq m \leq M$ and $0 \leq t \leq T$. To that end, we need a reliable joint estimator that can leverage the unknown shared structure to learn from the state trajectories more accurately than individual estimations of the dynamics. Importantly, to theoretically analyze effects of all quantities on the estimation error, we encounter some challenges for joint learning of multiple systems that do \emph{not} appear in single-system identification. Technically, the least-squares estimate of the transition matrix of a single system admits a closed form expression that lets the main challenge of the analysis be concentration of the sample covariance matrix of the state vectors. However, since closed forms are not achievable for joint estimators, learning accuracy cannot be directly analyzed. To address this, we first bound the prediction error and then use that for bounding the estimation error. To establish the former, after appropriately decomposing the joint prediction error, we study its decay rate with the length of the trajectories and dimension, as well as the trade-offs between the number of systems, number of basis matrices, and magnitudes of the state vectors. Then, we deconvolve the prediction error into the estimation error and the sample covariance matrices and show useful bounds that can tightly relate the largest and smallest eigenvalues of the sample covariance matrices across all systems. Note that this step, which is not required in single-system identification, is based on novel probabilistic analysis for dependent random matrices.
In the sequel, we introduce a joint estimator for utilizing the structure in \pref{assum:linear_model} and analyze its learning accuracy. Then, in \pref{sec:misspec} we consider violations of the structure in \pref{eq:linear_model} and establish robustness guarantees. \section{Joint Learning of LTI Systems} \label{sec:joint-learning} In this section, we propose an estimator for jointly learning the $M$ transition matrices. Then, we establish that the estimation error decays at a significantly faster rate than competing procedures that learn each transition matrix $A_m$ separately by using only the data trajectory of system $m$. Based on the parameterization in \pref{eq:linear_model}, we solve for $\widehat{\Wb} = \{\widehat{W}_i\}_{i=1}^k$ and $\widehat{B} = \left[ \hat \beta_1 | \hat \beta_2 | \cdots | \hat \beta_M \right] \in \mathbb{R}^{k \times M}$, as follows: \begin{align} \widehat{\Wb}, \hat B \coloneqq & \argmin_{\Wb, B} \Lcal(\Theta^*, \Wb, B), \label{eq:mt_opt} \end{align} where $\Lcal(\Theta^*, \Wb, B)$ is the averaged squared loss across all $M$ systems: \begin{align*} \frac{1}{MT}{\sum_{m=1}^M \sum_{t=0}^{T-1} \norm{x_m(t+1) - \rbr{\sum_{i=1}^k \beta_{m}[i] W_i} x_m(t)}_2^2 }. \end{align*} In the analysis, we assume that one can approximately find the minimizer in \pref{eq:mt_opt}. Although the loss function in \pref{eq:mt_opt} is non-convex, thanks to its structure, computationally fast methods for accurately finding the minimizer are applicable. Specifically, the loss function in \pref{eq:mt_opt} is quadratic and the non-convexity appears in the form of a rank constraint over the joint parameter $\Theta^*$. In \pref{eq:mt_opt}, we choose an explicit low-rank representation $(\Wb,B)$ (see \cite{burer2003nonlinear}). For such problems, it has been shown that gradient descent converges to the true low-rank matrix at a linear rate, under mild conditions \cite{wang2017unified}.
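To make the estimator concrete, here is a minimal alternating least-squares sketch for \pref{eq:mt_opt}: with $\Wb$ fixed, the loss is an ordinary least-squares problem in each $\beta_m$, and with $B$ fixed, it is least squares in the entries of the basis matrices. The function names, sizes, and initialization below are illustrative choices, not from the paper; since each half-step is an exact minimization, the loss cannot increase.

```python
import numpy as np

def loss(X, W, beta):
    """Averaged squared one-step prediction error across all systems."""
    total, count = 0.0, 0
    for m, traj in enumerate(X):
        xs, ys = traj[:-1], traj[1:]
        A_m = np.tensordot(beta[m], W, axes=1)        # sum_i beta_m[i] W_i
        total += np.sum((ys - xs @ A_m.T) ** 2)
        count += xs.shape[0]
    return total / count

def joint_als(X, k, n_iters=5, seed=0):
    """Alternating least squares for the biconvex joint objective (a sketch).
    X: list of M trajectories, each of shape (T+1, d). Returns (W, beta)."""
    rng = np.random.default_rng(seed)
    M, d = len(X), X[0].shape[1]
    W = rng.standard_normal((k, d, d)) / np.sqrt(d)
    beta = np.zeros((M, k))
    I = np.eye(d)
    for _ in range(n_iters):
        # beta-step: with W fixed, each beta_m solves a k-dimensional least squares
        for m, traj in enumerate(X):
            xs, ys = traj[:-1], traj[1:]
            feats = np.stack([(xs @ W[i].T).ravel() for i in range(k)], axis=1)
            beta[m], *_ = np.linalg.lstsq(feats, ys.ravel(), rcond=None)
        # W-step: with beta fixed, all entries of W_1, ..., W_k solve one joint
        # least squares, using W x = kron(I, x') vec(W) for row-major vectorization
        rows, targets = [], []
        for m, traj in enumerate(X):
            xs, ys = traj[:-1], traj[1:]
            for x, y in zip(xs, ys):
                Kx = np.kron(I, x[None, :])           # shape (d, d^2)
                rows.append(np.hstack([beta[m, i] * Kx for i in range(k)]))
                targets.append(y)
        w, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(targets), rcond=None)
        W = w.reshape(k, d, d)
    return W, beta

# Synthetic data satisfying the shared-basis structure (illustrative sizes)
rng = np.random.default_rng(5)
d, k, M, T = 3, 2, 6, 60
W_true = [0.3 * rng.standard_normal((d, d)) for _ in range(k)]
X = []
for m in range(M):
    b = rng.standard_normal(k)
    A_m = sum(b[i] * W_true[i] for i in range(k))
    # rescaling A_m keeps each system stable and preserves the shared-basis structure
    A_m = A_m / max(1.0, 1.25 * np.max(np.abs(np.linalg.eigvals(A_m))))
    x, traj = np.zeros(d), [np.zeros(d)]
    for _ in range(T):
        x = A_m @ x + 0.1 * rng.standard_normal(d)
        traj.append(x.copy())
    X.append(np.array(traj))

W_hat, beta_hat = joint_als(X, k)
# the alternating fit is at least as good as the trivial zero predictor
assert loss(X, W_hat, beta_hat) <= loss(X, np.zeros((k, d, d)), np.zeros((M, k))) + 1e-9
```

The recovered pair $(\widehat{\Wb}, \widehat{B})$ is identifiable only up to an invertible $k \times k$ transformation, which is immaterial since the quantities of interest are the products $\hat A_m$.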
In fact, it is shown that simple methods such as stochastic gradient descent have global convergence and these low-rank formulations do not lead to any spurious local minima \cite{ge2017no}. Further, the loss function is biconvex in the parameters $\Wb$ and $B$. As such, alternating minimization techniques converge to global optima, under standard assumptions \cite{jain2017non}. Note that a near-optimal minimum is sufficient here as the problem involves unavoidable estimation errors. Moreover, the error of the joint estimator in \pref{eq:mt_opt} degrades gracefully in the presence of moderate optimization errors. For instance, suppose that the optimization problem is solved up to an error of $\epsilon$ from the global optimum. It can be shown that an additional term of magnitude $O\rbr{\epsilon/\lambda_{\min}(C)}$ arises in the estimation error, due to this optimization error. Numerical experiments in \pref{sec:numerical} illustrate the implementation of \pref{eq:mt_opt}. In the sequel, we provide key results for the joint estimator in \pref{eq:mt_opt} and establish the decay rate for $\sum_{m=1}^M \norm{A_m - \hat{A}_m}_F^2$, which holds with high probability. The analysis leverages high probability bounds on the sample covariance matrices of all systems, denoted by \begin{equation*} \Sigma_m = \sum_{t=0}^{T-1} x_m(t)x_m(t)'. \end{equation*} For that purpose, we utilize the Jordan forms of matrices, as follows. For matrix $A_m$, its Jordan decomposition is $A_m = P^{-1}_m \Lambda_m P_m$, where $\Lambda_m$ is a block diagonal matrix; $\Lambda_m = \diag(\Lambda_{m,1},\ldots,\Lambda_{m,q_m})$, and for $i=1,\ldots,q_m$, each block $\Lambda_{m,i} \in \CC^{l_{m,i} \times l_{m,i}}$ is a Jordan matrix of the eigenvalue $\lambda_{m,i}$.
A Jordan matrix of size $l$ for $\lambda \in \CC$ is \begin{align} \label{eq:JordanDef} \begin{bmatrix} \lambda & 1 & 0 & \ldots & 0 & 0 \\ 0 & \lambda & 1 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & 0 & \lambda \end{bmatrix} \in \CC^{l \times l}. \end{align} Henceforth, we denote the size of each Jordan block by $l_{m,i}$, for $i=1,\cdots,q_m$, and the size of the largest Jordan block for system $m$ by $l^*_m$. Note that for \emph{diagonalizable} matrices $A_m$, since $\Lambda_m$ is diagonal, we have $l^*_m=1$. Now, using this notation, we define {\begin{align} \label{eq:alpha_def} \alpha(A_m) = \begin{cases} \opnorm{P_m^{-1}}{\infty\rightarrow 2}\opnorm{P_m}{\infty} f(\Lambda_m) &\abr{\lambda_{m,1}} < 1, \\ \opnorm{P_m^{-1}}{\infty\rightarrow 2}\opnorm{P_m}{\infty} e^{\rho+1} & \abr{\lambda_{m,1}} \le 1+\frac{\rho}{T},\end{cases} \end{align}} where $\lambda_{m,1}=\lambda_1 \left( A_m \right)$ and $$f(\Lambda_m) = e^{1/|\lambda_{m,1}|} \sbr{\frac{l^*_m-1}{- \log |\lambda_{m,1}|} + \frac{(l^*_m-1)!}{(-\log |\lambda_{m,1}|)^{l^*_{m}}}}.$$ The quantities in the definition of $\alpha\left(A_m\right)$ can be interpreted as follows. The term $\opnorm{P_m^{-1}}{\infty\rightarrow 2}\opnorm{P_m}{\infty}$ is intuitively similar to the condition number of the matrix $P_m$ in the Jordan decomposition. Indeed, it reflects how much rotation must be applied to $A_m$ to make it block-diagonal. Moreover, $f\left(\Lambda_m\right)$ for stable matrices, and $e^{\rho+1}$ for transition matrices with (almost) unit eigenvalues, capture the \emph{long term} influences of the eigenvalues. In other words, the amount that $\eta_m(t)$ contributes to the growth of $\norm{x_m(s)}$, for $s \gg t$ and $\abr{\lambda_{m,1}}<1$, is indicated by $f\left(\Lambda_m\right)$.
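The exponential dependence on the block-size can be reproduced with a short simulation in the spirit of \pref{fig:state-size}: driving a single Jordan block of the form in \pref{eq:JordanDef} with standard Gaussian noise, the terminal state magnitude grows sharply with $l$ for the same near-unit eigenvalue. The sizes and horizon below are illustrative choices, not from the paper.

```python
import numpy as np

def jordan_block(lam, l):
    """l x l Jordan matrix: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(l) + np.diag(np.ones(l - 1), k=1)

rng = np.random.default_rng(2)
T, lam = 200, 0.99
final_norm = {}
for l in (2, 8):
    A = jordan_block(lam, l)
    x = np.zeros(l)
    for _ in range(T):                    # x(t+1) = A x(t) + eta(t+1)
        x = A @ x + rng.standard_normal(l)
    final_norm[l] = np.linalg.norm(x)

# for the same eigenvalue, the state is orders of magnitude larger for the bigger block
assert final_norm[8] > 10 * final_norm[2]
```

Here the powers $A^t$ contain binomial terms of order $\binom{t}{l-1}\lambda^{t-l+1}$, which is the mechanism behind the exponential scaling with $l$ discussed above.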
For $\abr{\lambda_{m,1}}\leq 1+\rho/T$, the scaling of $\norm{x_m(s)}$ is also multiplied by polynomials of time, since influences of the noise vectors $\eta_m(t)$ do not decay as $s-t$ grows, because of the accumulations caused by the unit eigenvalues. The exact expressions are in \pref{thm:cov_conc} below. To introduce the following result, we define $\bar b_m$ next. First, for some $\delta_C>0$ that will be determined later, for system $m$, define $\bar b_m = b_T(\delta_C/3) + \norm{x_m(0)}_\infty$, where $b_T(\delta)=\sqrt{2\sigma^2 \log \left(2dMT \delta^{-1}\right)}$. Then, we establish high probability bounds on the sample covariance matrices $\Sigma_m$ with the detailed proof shown in \pref{sec:cov_conc_proof}. \begin{theorem}[Bounds on covariance matrices] \label{thm:cov_conc} For each system $m$, let $\underbar \Sigma_m=\underbar \lambda_m I$ and $\bar \Sigma_m = \bar \lambda_m I$, where $\underbar\lambda_{m} \coloneqq 4^{-1}\lambda_{\min}(C)T$, and \begin{align*} \bar{\lambda}_m \coloneqq \begin{cases} \alpha(A_m)^2 \bar b_m^2 T ,& \text{if } \abr{\lambda_{m,1}} < 1,\\ \alpha(A_m)^2 \bar b_m^2 T^{2l^*_m + 1}, & \text{if } \abr{\lambda_{m,1}} \le 1+\frac{\rho}{T}. \end{cases} \end{align*} Then, there is $T_0$, such that for $m \in [M]$ and $T \ge T_0$: \begin{align} \PP\sbr{0 \prec \underbar{\Sigma}_m \preceq \Sigma_m \preceq \bar{\Sigma}_m} \ge 1-\delta_C. \end{align} \end{theorem} Note that, the above two expressions for $\bar \lambda_m$ are not contradictory, as the first bound for $\abr{\lambda_{m,1}}<1$ is tighter, whereas the second one is sharper for $\abr{\lambda_{m,1}} \approx 1$. For establishing the above, we extend existing tools for learning linear systems~\cite{abbasi2011improved,faradonbeh2018finite,vershynin2018high,sarkar2019near}. 
Specifically, we leverage truncation-based arguments and introduce the quantity $\alpha(A_m)$ that captures the effect of the spectral properties of the transition matrices on the magnitudes of the state trajectories. Further, we develop strategies for finding high probability bounds for the largest and smallest singular values of random matrices and for studying self-normalized matrix-valued martingales. Importantly, \pref{thm:cov_conc} provides a tight characterization of the sample covariance matrix for each system, in terms of the magnitudes of the eigenvalues of $A_m$, as well as the largest block-size in the Jordan decomposition of $A_m$. The upper bounds show that $\bar \lambda_m$ grows exponentially with the dimension $d$, whenever $l^*_m = \Omega(d)$. Further, if $A_m$ has eigenvalues with magnitudes close to $1$, then the scaling with time $T$ can be as large as $T^{2d+1}$. The bounds in \pref{thm:cov_conc} are more general than $\tr{\sum_{t=0}^T A_m^t A'_m{}^{t}}$, which appears in some analyses \cite{sarkar2019near,simchowitz2018learning}, and can be used to calculate the above term. Finally, \pref{thm:cov_conc} indicates that the classical framework of persistent excitation \cite{green1986persistence,boyd1986necessary,jenkins2018convergence} is not applicable, since the lower and upper bounds of the eigenvalues grow at drastically different rates. Next, we express the joint estimation error rates. \begin{definition} Denote $\Ecal_C=\big\{ 0 \prec \underbar{\Sigma}_m \preceq \Sigma_m \preceq \bar{\Sigma}_m \big\}$, and let $\bar{\lambda} = \max_m \bar{\lambda}_m$, $\underbar{\lambda} = \min_m \underbar{\lambda}_m$, $\boldsymbol{\kappa}_m = \bar{\lambda}_m/\underbar{\lambda}_m$, $\boldsymbol\kappa = \max_m \boldsymbol\kappa_m$, and $\boldsymbol\kappa_\infty = \bar{\lambda}/\underbar{\lambda}$. Note that $\boldsymbol\kappa_\infty \ge \boldsymbol\kappa$.
\end{definition} \begin{theorem} \label{thm:estimation_error} Under \pref{assum:linear_model}, the estimator in \pref{eq:mt_opt} returns $\hat A_m$ for each system $m \in [M]$, such that with probability at least $1-\delta$, the following holds: \begin{align*} \frac{1}{M}\sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim \frac{c^2}{\underbar \lambda} \rbr{k \log \boldsymbol\kappa_\infty + \frac{d^2k}{M}\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} \end{theorem} The proof is provided in \pref{sec:estimation-error-proof}. By putting Theorems \ref{thm:cov_conc} and \ref{thm:estimation_error} together, the estimation error per-system is \begin{align} \label{eq:lti_mt_bound} \frac{c^2 k \log \boldsymbol\kappa_\infty}{\lambda_{\min}(C)T} + \frac{c^2d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}{M\lambda_{\min}(C)T}. \end{align} Equation \pref{eq:lti_mt_bound} demonstrates the effects of learning the systems in a joint manner. The first term in \pref{eq:lti_mt_bound} can be interpreted as the error in estimating the idiosyncratic components $\beta_m$ for each system. The convergence rate is $O\rbr{{k}/{T}}$, as each $\beta_m$ is a $k$-dimensional parameter and for each system, we have a trajectory of length $T$. More importantly, the second term in \pref{eq:lti_mt_bound} indicates that the joint estimator in \pref{eq:mt_opt} effectively increases the sample size for the shared components $\{W_i\}_{i=1}^k$, by pooling the data of all systems. So, the error decays as $O({d^2k}/{MT})$, showing that the effective sample size for $\Wb^*$ is $MT$. In contrast, for individual learning of LTI systems, the rate is known \cite{faradonbeh2018finite,faradonbeh2018optimality,simchowitz2018learning,sarkar2019near} to be \begin{align*} \norm{\hat A_m - A_m}_F^2 \lesssim \frac{c^2d^2 }{\lambda_{\min}(C)T} \log \frac{\alpha(A_m)T}{\delta}. 
\end{align*} Thus, the joint estimation error rate significantly improves, especially when \begin{align} \label{eq:low-dim} k < d^2 \text{ and } k < M. \end{align} Note that the above conditions are as expected. First, when $k \approx d^2$, the structure in \pref{assum:linear_model} does \emph{not} provide any commonality among the systems. That is, for $k = d^2$ the LTI systems can be totally arbitrary and \pref{assum:linear_model} is automatically satisfied. This prevents reductions in the effective dimension of the unknown transition matrices, and also prevents joint learning from being any different than individual learning. Similarly, $k \approx M$ precludes all commonalities and indicates that $\{A_m\}_{m=1}^M$ are too heterogeneous to allow any improved learning under joint estimation. Importantly, when the largest block-size $l^*_m$ varies significantly across the $M$ systems, a higher degree of shared structure is needed to improve the joint estimation error for all systems. Since $\boldsymbol \kappa$ and $\boldsymbol \kappa_\infty$ depend exponentially on $l^*_m$ (as shown in \pref{fig:state-size} and \pref{thm:cov_conc}) and $l^*_m$ can be as large as $d$, we can have $\log \boldsymbol \kappa_\infty = \log \boldsymbol \kappa = \Omega(d)$. Hence, in this situation we incur an additional dimension dependence in the error of the joint estimator. Note that the above effect of $l^*_m$ is unavoidable (regardless of the employed estimator). Moreover, in this case, joint learning rates improve if $k \leq d \text{ and } kd \leq M$. Therefore, our analysis highlights the important effects of the large blocks in the Jordan form of the transition matrices. The above is an inherent difference between estimating dynamics of LTI systems and learning from \emph{independent} observations. 
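For comparison, the individual (per-system) baseline in the above rate is ordinary least squares on a single trajectory. A minimal sketch follows, with an illustrative stable transition matrix and trajectory length that are not from the paper; the small recovery error is consistent with the $O(d^2/T)$ scaling of single-system identification.

```python
import numpy as np

def ols_transition(traj):
    """Least-squares estimate of A from one trajectory of shape (T+1, d)."""
    xs, ys = traj[:-1], traj[1:]
    # minimizes sum_t ||x(t+1) - A x(t)||^2; lstsq solves xs @ A' ~ ys
    At, *_ = np.linalg.lstsq(xs, ys, rcond=None)
    return At.T

rng = np.random.default_rng(3)
d, T = 3, 5000
A = 0.5 * np.eye(d)                       # an illustrative stable system
x, traj = np.zeros(d), [np.zeros(d)]
for _ in range(T):
    x = A @ x + rng.standard_normal(d)
    traj.append(x.copy())

A_hat = ols_transition(np.array(traj))
assert np.linalg.norm(A_hat - A) < 0.1    # small error for a long trajectory
```

A joint estimator improves on this baseline by sharing the $d^2 k$ basis parameters across all $M$ trajectories, as quantified above.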
In fact, the analysis established in this work is fairly general and includes stochastic matrix regressions, in which the data of system $m$ consists of \begin{align} \label{eq:stoch_reg} y_m(t) = A_m x_m(t) + \eta_m(t), \end{align} wherein the regressors $x_m(t)$ are drawn from some distribution $\Dcal_m$, and $y_m(t)$ is the response. Assume that $x_m(t),y_m(t)$ are independent for all $m,t$. Now, the sample covariance matrix $\Sigma_m$ for each system does not depend on $A_m$. Hence, the error for the joint estimator is not affected by the block-sizes in the Jordan decomposition of $A_m$. Therefore, in this setting, joint learning always leads to improved per-system error rates, as long as the necessary conditions $k < d^2$ and $k < M$ hold. \iffalse \subsection{Proof of \pref{thm:estimation_error}} To begin, recall that using the low rank structure in \pref{eq:low_rank}, we can rewrite the true parameter $\tilde \Theta^* \in \mathbb{R}^{d^2 \times M}$ as $\tilde{\Theta}^* = W^*B^*$, where $W^* \in \mathbb{R}^{d^2 \times k}$ and $B^* \in \mathbb{R}^{k \times M}$. For any parameter set $\Theta = WB \in \mathbb{R}^{d^2 \times M}$, we define $\Xcal(\Theta) \in \mathbb{R}^{dT \times M}$ as $ \Xcal(\Theta) \coloneqq \sbr{\Xcal_1(\Theta) | \Xcal_2(\Theta) \cdots | \Xcal_M(\Theta)}$, where each column $\Xcal_m(\Theta) \in \mathbb{R}^{dT}$ is the prediction of states $x_m(t+1)$ with $\Theta_m$. That is, $\Xcal_m(\Theta) = (x_m(0)', x_m(0)'\Theta_m' , x_m(1)'\Theta_m', \ldots, x_m(T-1)'\Theta_m')'$. Thus, each $Td$-size column of $\Xcal(\tilde \Theta^*)$ stacks $A_m x_m(t)$ (i.e., the conditional mean of the next state) for system $m$. Similarly, $\Xcal(\hat \Theta)$ is constructed using $\hat A_m x_m(t)$, for $t = 0,1,\ldots T-1$. Since $\hat{\Theta} = (\hat W, \hat B)$ minimizes \pref{eq:mt_opt}, the squared prediction error $\Lcal(\Theta^*, \hat W, \hat B)$ is less than $\Lcal(\Theta^*, W^*, B^*)$.
So, by doing some algebra, we obtain: \begin{align} \label{eq:fro_ip} \nbr{\Xcal(\tilde \Theta^* -\hat{\Theta})}_F^2 \le 2\inner{ Z}{\Xcal\rbr{\tilde \Theta^* - \hat{\Theta}}}, \end{align} where $Z = [z_1 | z_2 \cdots | z_M] \in \mathbb{R}^{Td \times M}$, and each column of $Z$ is $z_m = \tilde \eta_m \coloneqq \rbr{\eta_m(1), \eta_m(2),\ldots,\eta_m(T)}$. Now, using \pref{assum:linear_model}, for the shared basis $W^*$ of size $\mathbb{R}^{d^2 \times k}$, we have \begin{align*} \textrm{rank}(\tilde \Theta^*-\hat{\Theta}) = \textrm{rank}(W^*B^* - \hat W \hat B) \le 2k. \end{align*} Hence, we can write $\Delta \coloneqq \tilde \Theta^* - \hat{\Theta} = UR$, where $U \in O^{d^2 \times 2k}$ is an orthonormal matrix and $R \in \mathbb{R}^{2k \times M}$. So, we can rewrite $\hat{W}\hat{\beta}_m - W^*\beta^*_m$ as $Ur_m$, where $r_m \in \mathbb{R}^{2k}$ is the idiosyncratic projection vector for system $m$. Our first step is to bound the prediction error for all systems $m \in [M]$ in the following lemma: \begin{lemma} \label{lem:breakup} For any fixed orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, under the event $\Ecal_C$, the prediction error can be decomposed as \begin{align} \frac{1}{2}\sum_{m=1}^M \nbr{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le {} & \sqrt{\sum_{m=1}^M \nbr{\tilde{\eta}_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \nbr{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \nbr{\tilde{\eta}_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \nbr{\tilde{X}_m(\bar{U}-U)r_m}^2} \nonumber \\ {} & + \sum_{m=1}^M \inner{\tilde{\eta}_m}{\tilde{X}_m (U-\bar{U}) r_m}, \label{eq:pred_error_breakup} \end{align} where $\bar V_m = \bar U^\top \tilde{\Sigma}_m \bar U + \bar U^\top\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}\bar U$, with $\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}} \in \RR^{d^2 \times d^2}$ being a block-diagonal matrix constructed using the $d \times d$ blocks $\underbar \Sigma_m$. \end{lemma} In the detailed proof, which is provided in the preprint, we bound each term on the RHS of \pref{eq:pred_error_breakup}. Specifically, we bound the term $\sqrt{\sum_{m=1}^M \nbr{\tilde{\eta}_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}$ using a multitask self-normalized martingale bound that holds with probability at least $1-\delta_U$ (see \pref{prop:mt-self-norm}). We show an upper bound for $\sqrt{\sum_{m=1}^M \nbr{\tilde{X}_m(\bar{U}-U)r_m}^2}$ in \pref{prop:proj_cover_error}, while $\sum_{m=1}^M \inner{\tilde{\eta}_m}{\tilde{X}_m (U-\bar{U}) r_m}$ is bounded in \pref{prop:breakup_inner_prod}. To establish the above upper bounds, we leverage high-probability bounds on the total magnitude of all noise (see \pref{prop:noise-magnitude}). In addition, since the decomposition in \pref{eq:pred_error_breakup} holds for any orthonormal matrix $\bar U$, we let $\bar U$ be an element of an $\epsilon$-net of the space of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$, denoted by $\Ncal_\epsilon$, such that $\norm{\bar U - U}_F \le \epsilon$. Moreover, we employ a union bound over all elements of $\Ncal_\epsilon$. By setting $\delta_U = 3^{-1}\delta \abr{\Ncal_\epsilon}^{-1}$, $\delta_C = 3^{-1}\delta$, and $\delta_Z = 3^{-1}\delta$, the failure probability is at most $\delta$. After bounding each term separately, we get a quadratic inequality for the prediction error $\norm{\Xcal\rbr{\tilde \Theta^* - \hat \Theta}}_F$, which leads to: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & k c^2\rbr{M \log \boldsymbol\kappa_\infty + d^2\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} Finally, using the lower bound on the smallest eigenvalue of each system covariance that we established in \pref{thm:cov_conc}, we obtain the desired bound on the estimation error.
\fi \section{Robustness to Misspecifications} \label{sec:misspec} In \pref{thm:estimation_error}, we showed that the shared structure in \pref{assum:linear_model} can be utilized to obtain an improved estimation error by jointly learning the $M$ systems. Next, we consider the impacts of misspecified models on the estimation error and study the robustness of the proposed joint estimator against violations of the structure in \pref{assum:linear_model}. First, we consider the deviation of the dynamics of each system $m \in [M]$ from the shared structure. Specifically, employing the matrix $D_m$ to denote the deviation of system $m$ from \pref{assum:linear_model}, let \begin{align} \label{eq:linear_model_pert} A_m = \rbr{\sum_{i=1}^k \beta^*_m[i] W^*_i} + D_m. \end{align} Denote the \emph{total misspecification} by $\bar{\zeta}^2=\sum_{m=1}^M \norm{D_m}_F^2$. We study the consequences of the above deviations, assuming that the same joint learning method as before is used for estimating the transition matrices. We establish the robustness of the proposed estimation method. \begin{theorem} \label{thm:estimation_error_pert} The estimator in \pref{eq:mt_opt} returns $\hat A_m$ for each system $m \in [M]$, such that with probability at least $1-\delta$, we have: \begin{align} \frac{1}{M}\sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim & {} \frac{c^2}{\underbar \lambda} \rbr{k \log \boldsymbol\kappa_\infty + \frac{d^2k}{M}\log \frac{\boldsymbol\kappa dT}{\delta}} + \frac{\rbr{\boldsymbol\kappa_\infty +1}\bar\zeta^2}{M}. \label{eq:est_error_bound_pert} \end{align} \end{theorem} The proof of \pref{thm:estimation_error_pert} can be found in \pref{sec:estimation-error_pert_proof}. In \pref{eq:est_error_bound_pert}, we observe that the total misspecification $\bar \zeta^2$ imposes an additional error of $(\boldsymbol\kappa_\infty+1) \bar \zeta^2$ for jointly learning all $M$ systems. Hence, to obtain accurate estimates of the transition matrices, we need the total misspecification $\bar \zeta^2$ to be smaller than the number of systems $M$, as one would expect.
The discussion following \pref{thm:estimation_error} is still applicable in the misspecified setting and indicates that in order to have accurate estimates, the number of shared bases $k$ must be smaller than $M$ as well. In addition, compared to individual learning, the joint estimation error improves \emph{despite the unknown model misspecifications}, as long as \begin{align} \label{eq:low_dim_misspec} \frac{\boldsymbol \kappa_\infty \bar \zeta^2}{M} \lesssim \frac{d^2}{T}. \end{align} The condition in \pref{eq:low_dim_misspec} shows that when the total misspecification is proportional to the number of systems, i.e., $\bar \zeta^2 = \Omega(M)$, we pay a constant factor proportional to $\boldsymbol \kappa_\infty$ on the per-system estimation error. Note that in case all systems are stable, according to \pref{thm:cov_conc}, the maximum condition number $\boldsymbol \kappa_\infty$ does \emph{not} grow with $T$, but it scales exponentially with $l^*_m$. The latter again indicates an important consequence of the largest block sizes in the Jordan decomposition, which this work highlights. Moreover, when a transition matrix $A_m$ has eigenvalues close to or on the unit circle in the complex plane, by \pref{thm:cov_conc} the factor $\boldsymbol \kappa_\infty$ grows polynomially with $T$. Thus, for systems with infinite memory or accumulative behavior, misspecifications can significantly deteriorate the benefits of joint learning. Intuitively, the reason is that the effects of even small misspecifications can accumulate over time and contaminate the entire state trajectories, because of the unit eigenvalues of the transition matrices $A_m$. Therefore, the above strong sensitivity to deviations from the shared model for systems with unit eigenvalues is unavoidable.
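The accumulation effect described above is easy to observe in simulation. The following sketch (our own illustration, with hypothetical dimensions and with $A_m$ taken as a scalar multiple of the identity purely for simplicity) drives a well-specified and a misspecified copy of the same system with identical noise and records how far the two trajectories drift apart:

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_gap(lam, T=200, d=5, eps=1e-3):
    """Largest gap between trajectories of A and A + D driven by the
    same noise, where A = lam * I and D is a small perturbation."""
    A = lam * np.eye(d)
    D = eps * rng.standard_normal((d, d))
    x = x_mis = np.zeros(d)
    gap = 0.0
    for _ in range(T):
        eta = rng.standard_normal(d)
        x = A @ x + eta                 # shared-structure dynamics
        x_mis = (A + D) @ x_mis + eta   # misspecified dynamics
        gap = max(gap, np.linalg.norm(x - x_mis))
    return gap

gap_stable = trajectory_gap(0.9)  # stable system: perturbation effects decay
gap_unit = trajectory_gap(1.0)    # unit root: perturbation effects accumulate
```

With these stand-in parameters, the gap for the unit-root system is typically orders of magnitude larger than for the stable one, in line with the discussion above.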
\begin{figure}[ht] \centering \subfigure[System matrices $A_m$ are stable]{\label{fig:err-a}\includegraphics[width=0.48\columnwidth]{stable-err.png}} \subfigure[System matrices $A_m$ have a unit root]{\label{fig:err-b}\includegraphics[width=0.48\columnwidth]{unit-root-err.png}} \caption{Per-system estimation errors vs. the number of systems $M$, for the proposed joint learning method and individual least-squares estimates of the linear dynamical systems.}\label{fig:joint-err} \end{figure} More generally, if the total misspecification satisfies $\bar \zeta^2 = O(M^{1-a})$, for some $a>0$, joint estimation improves over the individual estimators, as long as $T {\boldsymbol \kappa_\infty} \lesssim M^a{d^2}$. Hence, when all systems are stable, the joint estimation error rate improves when the number of systems satisfies $T^{1/a} \lesssim M$. Otherwise, idiosyncrasies in the system dynamics dominate the commonalities. Note that larger values of $a$ correspond to \emph{smaller} misspecifications. On the other hand, \pref{thm:estimation_error_pert} implies that in systems with (almost) unit eigenvalues, the impact of $\bar \zeta^2$ is amplified. Indeed, by \pref{thm:cov_conc}, for unit-root systems, joint learning improves over individual estimators when $d^2M^a \gg T^{2l^*_m+2}$. That is, to benefit from the shared structure and utilize the pooled data, the number of systems $M$ needs to be as large as $\rbr{T^{2l^*_m+2}/d^2}^{1/a}$. In contrast, if $\bar \zeta^2 = O(M^{1-a})$ for some $a>0$, the joint estimation error for the stochastic matrix regression problem in \pref{eq:stoch_reg} incurs only an additive factor of $O(1/M^a)$, regardless of the largest block-sizes in the Jordan decompositions and unit-root eigenvalues. Hence, \pref{thm:estimation_error_pert} further highlights the stark difference between joint learning from \emph{independent, bounded, and stationary} observations and from state trajectories of LTI systems.
\section{Numerical Illustrations} \label{sec:numerical} We complement our theoretical analyses with a set of numerical experiments which demonstrate the benefit of jointly learning the systems. To that end, we compare the estimation error for the joint estimator in \pref{eq:mt_opt} against the ordinary least-squares (OLS) estimates of the transition matrices for each system individually (which is optimal \cite{simchowitz2018learning}). For solving \pref{eq:mt_opt}, we use a minibatch gradient-descent-based implementation in PyTorch \cite{paszke2019pytorch} with the Adam optimization algorithm \cite{kingma2015adam}. Note that since the rank constraints are explicitly enforced in \pref{eq:mt_opt}, no projection is required during the optimization. \begin{figure}[ht] \centering \subfigure[System matrices $A_m$ are stable]{\label{fig:ms-err-a}\includegraphics[width=0.48\columnwidth]{ms-stable-err-20.png}} \subfigure[System matrices $A_m$ have a unit root]{\label{fig:ms-err-b}\includegraphics[width=0.48\columnwidth]{ms-unit-root-err-20.png}} \caption{Per-system estimation errors are reported vs. the number of systems $M$, for varying proportions of misspecified systems; $M^{-a}$, for $a \in \{0,0.25,0.5\}$. }\label{fig:joint-err-ms} \end{figure} For generating the systems, we consider settings with the number of bases $k=10$, dimension $d=25$, trajectory length $T=200$, and the number of systems $M \in \{1,10,20,50,100,200\}$.
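For concreteness, the generating process can be sketched in a few lines of numpy (a miniature stand-in for our setup: a smaller $M$, a fixed spectral radius of $0.9$, and rescaling by the spectral radius as one simple way, of our choosing, to place the eigenvalues in the desired range):

```python
import numpy as np

rng = np.random.default_rng(1)
k, d, M = 10, 25, 20   # number of bases, state dimension, number of systems

# shared bases W_i and per-system weights beta_m with standard normal entries
W = rng.standard_normal((k, d, d))
beta = rng.standard_normal((M, k))

def make_system(b, rho=0.9):
    """A_m = sum_i beta_m[i] * W_i, rescaled so that the spectral radius is rho."""
    A = np.tensordot(b, W, axes=1)                        # (d, d)
    return A * (rho / np.abs(np.linalg.eigvals(A)).max())

A = np.stack([make_system(b) for b in beta])              # (M, d, d)

# rescaling keeps each A_m in the span of the bases, so the stacked
# vectorized parameters still have rank at most k
Theta = A.reshape(M, d * d).T                             # (d^2, M)
```

Simulating a trajectory then only requires iterating $x_m(t+1) = A_m x_m(t) + \eta_m(t+1)$ with the chosen noise covariance.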
We simulate two cases: \\(i) the spectral radii are in the range $[0.7,0.9]$, and \\(ii) all systems have an eigenvalue of magnitude $1$. The matrices $\{W_i\}_{i=1}^{10}$ are generated randomly, such that each entry of $W_i$ is sampled independently from the standard normal distribution $N(0,1)$. Using these matrices, we generate $M$ systems by randomly generating the idiosyncratic components $\beta_m$ from a standard normal distribution. For generating the state trajectories, the noise vectors are isotropic Gaussian with variance $4$. We simulate the joint learning problem both with and without model misspecifications. For the latter, deviations from the shared structure are simulated by the components $D_m$, which are added with probability $1/M^a$ for $a \in \{0, 0.25, 0.5\}$. The matrices $D_m$ are generated with Gaussian entries, leading to $\norm{D_m}_F^2 \approx 6.25$ and $\bar{\zeta}^2 \approx 6.25~M^{1-a}$. To report the results, for each value of $M$ in \pref{fig:joint-err} (resp. \pref{fig:joint-err-ms}), we average the errors from $10$ (resp. $20$) random replicates and plot the standard deviation as the error bar. \pref{fig:joint-err} depicts the estimation errors for both stable and unit-root transition matrices, versus $M$. It can be seen that the joint estimator exhibits the expected improvement over the individual one. More interestingly, in \pref{fig:ms-err-a}, we observe that for stable systems, the joint estimator performs worse than the individual one when significant violations of the shared structure occur in all systems (i.e., $a=0$). This corroborates \pref{thm:estimation_error_pert}, since in this case the total misspecification $\bar \zeta^2$ \textit{scales linearly} with $M$. However, if the proportion of systems which violate the shared structure in \pref{assum:linear_model} decreases, the joint estimation error improves as expected ($a=0.25, 0.5$).
\pref{fig:ms-err-b} depicts the estimation error for the joint estimator under misspecification for systems that have an eigenvalue on the unit circle in the complex plane. Our theoretical results suggest that the number of systems needs to be significantly larger in this case to circumvent the cost of misspecification in joint learning. The figure corroborates this result: we observe that the joint estimation error is larger than the individual one if all systems are misspecified (i.e., $a=0$). Decreases in the total misspecification (i.e., $a=0.25,0.5$) improve the error rate for joint learning, but require a larger number of systems than in the stable case. \begin{figure}[ht] \centering \includegraphics[width=0.55\columnwidth]{choose_k.png} \caption{Estimation and validation prediction errors versus the hyperparameter $k'$, for the true value $k=10$.} \label{fig:choose_k} \end{figure} Finally, we discuss the choice of the number of bases $k$ for applying the joint estimator to real data. It can be handled by model selection methods such as the elbow criterion on a validation dataset, AIC, or BIC \cite{akaike1974new,schwarz1978estimating}. In fact, for all $k' \ge k$, the structural assumption is satisfied and leads to similar learning rates, while $k' < k$ can lead to larger estimation errors. In \pref{fig:choose_k}, we provide an empirical simulation (with $T=250, M=50$) and report the per-system estimation error, as well as the prediction error on validation data (a subset of size $50$). Across all $10$ runs in the experiment, we observed that if the hyperparameter $k'$ is chosen according to the elbow criterion, the resulting number of basis models is either equal to the true value $k=10$ or slightly larger. \section{Proof of \pref{thm:cov_conc}} \label{sec:cov_conc_proof} In this and the following sections, we provide the detailed proofs for our theoretical results.
We start by analysing the sample covariance matrix for each system, which is then used to derive the estimation error rates in \pref{thm:estimation_error} and \pref{thm:estimation_error_pert}. Proofs for the auxiliary lemmas and the intermediate results are delegated to the appendix of this paper. Now, we prove high probability bounds for the covariance matrices $\Sigma_m = \Sigma_m(T) = \sum_{t=0}^T x_m(t) x_m(t)'$ in \pref{thm:cov_conc}. \subsection{Upper Bounds on Covariance Matrices} To prove an upper bound on each system covariance matrix, we use an approach for LTI systems that relies on bounding the norms of matrix powers \cite{faradonbeh2018finite}. Using the notation $l_m^*$ and $\alpha(A_m)$ in \eqref{eq:alpha_def} as before and $\xi_m = \opnorm{P_m^{-1}}{\infty\rightarrow 2}\opnorm{P_m}{\infty}$, the first step is to bound the sizes of all state vectors under the event $\Ecal_{\mathrm{bdd}}(\delta)$ in \pref{prop:truncation}. \begin{proposition}[Bounding $\norm{x_m(t)}$] For all $t \in [T], m \in [M]$, under the event $\Ecal_{\mathrm{bdd}}(\delta)$, with probability at least $1-\delta$ we have: \begin{align*} \norm{x_m(t)} \le \begin{cases} \alpha(A_m) \bar b_m(\delta), & \text{if } |\lambda_{m,1}| < 1, \\ \alpha(A_m) \bar b_m(\delta) t^{l^*_{m}}, & \text{if } |\lambda_{m,1}| \le 1+\frac{\rho}{T}, \end{cases} \end{align*} where $\bar b_m(\delta) = \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}$. \end{proposition} \begin{proof} As before, each transition matrix $A_m$ admits a Jordan normal form as follows: $A_m = P_m^{-1}\Lambda_m P_m$, where $\Lambda_m$ is a block-diagonal matrix $\Lambda_m = \diag\rbr{\Lambda_{m,1},\ldots,\Lambda_{m,q_m}}$. Each Jordan block $\Lambda_{m,i}$ is of size $l_{m,i}$. To begin, note that for each system, each state vector satisfies: \begin{align*} x_m(t) = {} & \sum_{s=1}^t A_m^{t-s} \eta_m(s) + A_m^t x_m(0) \\ = {} & \sum_{s=1}^t P_m^{-1}\Lambda_m^{t-s}P_m \eta_m(s) + P_m^{-1}\Lambda_m^t P_m x_m(0).
\end{align*} Now, letting $b_T(\delta)$ be the same as in \pref{prop:truncation}, we can bound the $\ell_2$-norm of the state vector as follows: \begin{align*} \norm{x_m(t)} \le {} & \opnorm{P_m^{-1}}{\infty \rightarrow 2} \opnorm{\sum_{s=1}^t \Lambda_m^{t-s}}{\infty} \opnorm{P_m}{\infty}b_T(\delta) + \opnorm{P_m^{-1}}{\infty \rightarrow 2} \opnorm{\Lambda_m^{t}}{\infty} \opnorm{P_m}{\infty}\norm{x_m(0)}_\infty \\ \le {} & \xi_m\rbr{\sum_{s=0}^t \opnorm{\Lambda_m^{t-s}}{\infty}} \big(b_T(\delta) + \norm{x_m(0)}_\infty\big). \end{align*} For any matrix, the induced $\ell_\infty$ operator norm equals the maximum absolute row sum. Since the powers of a Jordan matrix follow the same block structure as the original one, we can bound the operator norm $\opnorm{\Lambda_m^{t-s}}{\infty}$ by the maximum over its blocks. The maximum row sum of the $s$-th power of a Jordan block of size $l$ with eigenvalue $\lambda$ is at most $\sum_{j=0}^{l-1} \binom{s}{j} \abr{\lambda}^{s-j}$. Using this, we bound the size of each state vector for the case when \begin{enumerate}[label=(\Roman*)] \item the spectral radius of $A_m$ satisfies $\abr{\lambda_1(A_m)} < 1$, and \item $\abr{\lambda_1(A_m)} \le 1+\frac{\rho}{T}$ for some constant $\rho>0$.
\end{enumerate} \paragraph*{Case I} When the Jordan blocks of a system matrix have eigenvalues strictly less than $1$ in magnitude, we can state the following bounds: \begin{align*} \sum_{s=0}^t \opnorm{\Lambda_m^{t-s}}{\infty} \le {} & \max_{i \in [q_m]} \sum_{s=0}^{t} \sum_{j=0}^{l_{m,i}-1} \binom{s}{j} \abr{\lambda_{m,i}}^{s-j} \\ \le {} & \sum_{s=0}^{t} \sum_{j=0}^{l^*_m-1} \binom{s}{j} \abr{\lambda_{m,1}}^{s-j} \\ \le {} & \sum_{s=0}^{t} \abr{\lambda_{m,1}}^s \sum_{j=0}^{l^*_{m}-1} \frac{s^j}{j!} \abr{\lambda_{m,1}}^{-j} \\ \le {} & \sum_{s=0}^{t} \abr{\lambda_{m,1}}^s s^{l^*_{m}-1} \sum_{j=0}^{l^*_{m}-1} \frac{\abr{\lambda_{m,1}}^{-j}}{j!} \\ \le {} & e^{1/|\lambda_{m,1}|} \sum_{s=0}^{\infty} \abr{\lambda_{m,1}}^s s^{l^*_{m}-1} \\ \lesssim {} & e^{1/|\lambda_{m,1}|} \sbr{\frac{l^*_{m}-1}{- \log |\lambda_{m,1}|} + \frac{(l^*_{m}-1)!}{(-\log |\lambda_{m,1}|)^{l^*_{m}}}}. \end{align*} Thus, for this case, the magnitude of each state vector can be upper bounded as $\norm{x_m(t)} \le \alpha(A_m) (b_T(\delta) + \norm{x_m(0)}_\infty)$. When the matrix $A_m$ is diagonalizable, each Jordan block is of size $1$, which leads to the upper bound $\sum_{s=0}^t \opnorm{\Lambda_m^{t-s}}{\infty} \le \frac{1}{1-\abr{\lambda_{m,1}}}$, for all $t \ge 0$. Therefore, for diagonalizable $A_m$, we can let $\alpha(A_m) = \frac{\opnorm{P_m^{-1}}{\infty\rightarrow 2}\opnorm{P_m}{\infty}}{1-\abr{\lambda_{m,1}}}$. \paragraph*{Case II} When $\abr{\lambda_{m,1}} \le 1+\frac{\rho}{T}$, we get $\abr{\lambda_{m,1}}^t \le e^{\rho}$, for all $t \le T$. Therefore, with $l^*_m$ as the largest multiplicity of the (near-)unit root, we have: \begin{align*} \sum_{s=0}^t \opnorm{\Lambda_m^{t-s}}{\infty} \le {} & \sum_{s=0}^{t} \sum_{j=0}^{l^*_m-1} \binom{s}{j} \abr{\lambda_{m,1}}^{s-j} \le {} e^{\rho} \sum_{s=0}^{t} \sum_{j=0}^{l^*_m-1} \binom{s}{j} \\ \le {} & e^\rho \sum_{s=0}^{t} \sum_{j=0}^{l^*_m-1} s^j/j! \le {} e^\rho \sum_{s=0}^{t} s^{l^*_m-1}\sum_{j=0}^{l^*_m-1} 1/j!
\\ \le {} & e^{\rho+1} \sum_{s=0}^{t}s^{l^*_m-1} \lesssim {} e^{\rho+1} t^{l^*_m}. \end{align*} Therefore, the magnitude of each state vector grows polynomially with $t$ and further depends on the multiplicity of the unit root. When the matrix $A_m$ is diagonalizable, the Jordan block for the unit root is of size $1$, which bounds the term as $\sum_{s=0}^t \opnorm{\Lambda_m^{t-s}}{\infty} \le e^\rho t$. Therefore, for system matrices with unit roots, the bound on each state vector is \[\norm{x_m(t)} \le \alpha(A_m) \rbr{ b_T(\delta) + \norm{x_m(0)}_\infty} t^{l^*_m}.\] \end{proof} Using the high probability upper bound on the size of each state vector, we can upper bound the covariance matrix for each system as follows: \begin{lemma}[Upper bound on $\Sigma_m$] \label{lem:cov_upper_bound} For all $m \in [M]$, under the event $\Ecal_{\mathrm{bdd}}(\delta)$, with probability at least $1-\delta$, the sample covariance matrix $\Sigma_m$ of system $m$ can be upper bounded as follows: \begin{enumerate}[label=(\Roman*)] \item When all eigenvalues of the matrix $A_m$ are strictly less than $1$ in magnitude ($|\lambda_{m,i}| < 1$), we have \begin{align*} \lambda_{\max}(\Sigma_m) \le \alpha(A_m)^2 \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}^2 T. \end{align*} \item When some eigenvalues of the matrix $A_m$ are close to $1$, i.e., $|\lambda_1(A_m)| \le 1+\frac{\rho}{T}$, we have: \begin{align*} \lambda_{\max}(\Sigma_m) \le \alpha(A_m)^2 \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}^2 T^{2l^*_{m} + 1}. \end{align*} \end{enumerate} \end{lemma} \begin{proof} First note that we have: \begin{align*} \lambda_{\max}(\Sigma_m) = \opnorm{\sum_{t=0}^T x_m(t) x_m(t)'}{2} \le \sum_{t=0}^T \norm{x_m(t)}_2^2. \end{align*} Therefore, when all eigenvalues of $A_m$ are strictly less than $1$, we have: \begin{align*} \lambda_{\max}(\Sigma_m) \le T \alpha(A_m)^2 \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}^2.
\end{align*} For the case when $\abr{\lambda_1(A_m)} \le 1+\frac{\rho}{T}$, we get: \begin{align*} \lambda_{\max}(\Sigma_m) \le {} & \alpha(A_m)^2 \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}^2 \sum_{t=0}^T t^{2l^*_{m}} \le {} \alpha(A_m)^2 \rbr{b_T(\delta) + \norm{x_m(0)}_\infty}^2 T^{2l^*_{m} + 1}. \end{align*} \end{proof} \subsection{Lower Bound for Covariance Matrices} A lower bound for the idiosyncratic covariance matrices can be derived using the probabilistic inequalities shown in the appendix and the proof outline of Proposition 10.1 in the work of Sarkar et al. \cite{sarkar2019near}. We provide a detailed proof below. \begin{lemma}[Covariance lower bound] Define $\varkappa = \tfrac{d\sigma^2}{\lambda_{\min}(C)^2}$. For all $m \in [M]$, if the per-system sample size $T$ is large enough such that $T$ is greater than: \begin{align*} \varkappa \rbr{c_\eta \log \tfrac{18}{\delta} \vee 16\rbr{ \log \rbr{\alpha(A)^2\bar b_m(\delta)^2 + 1} + 2 \log \tfrac{5}{\delta}}}, \end{align*} {\small if $\abr{\lambda_{m,1}} < 1$, and \begin{align*} \varkappa \rbr{c_\eta \log \tfrac{18}{\delta} \vee 16\rbr{ \log \rbr{\alpha(A)^2\bar b_m(\delta)^2T^{2l^*_m} + 1} + 2 \log \tfrac{5}{\delta}}} \end{align*}} if $1-\frac{\rho}{T} \le \abr{\lambda_{m,1}} \le 1+\frac{\rho}{T}$, then with probability at least $1-3\delta$, the sample covariance matrix $\Sigma_m$ for system $m$ can be bounded from below as follows: $\Sigma_m(T) \succeq \frac{T\lambda_{\min}(C)}{4}I$. \end{lemma} \begin{proof} We bound the covariance matrix under the events $\Ecal_{\mathrm{bdd}}(\delta)$, $\Ecal_{\eta}(\delta)$, and when the event in \pref{prop:matrix-self-norm} holds. As the argument is identical across systems, we drop the system subscript $m$ here. Using the dynamics stated in \pref{eq:mt-lti}, we have: \begin{align*} \Sigma(T) \succeq {} & A\Sigma(T-1)A' + \sum_{t=1}^T \eta(t)\eta(t)' + \sum_{t=0}^{T-1} \rbr{Ax(t)\eta(t+1)' + \eta(t+1)x(t)'A'}.
\end{align*} Under $\Ecal_{\mathrm{bdd}}(\delta)$, by \pref{prop:noise-conc}, with probability at least $1-\delta$, we get: \begin{align*} \Sigma(T) \succeq {} & A\Sigma(T-1)A' + \frac{3\lambda_{\min}(C)T}{4}I + \sum_{t=0}^{T-1} \rbr{Ax(t)\eta(t+1)' + \eta(t+1)x(t)'A'}. \end{align*} Thus, for any vector $u \in \Scal^{d-1}$, we have \begin{align*} u'\Sigma(T)u \ge {} & u'A\Sigma(T-1)A'u + \frac{3\lambda_{\min}(C)T}{4} + \sum_{t=0}^{T-1} u'\rbr{Ax(t)\eta(t+1)' + \eta(t+1)x(t)'A'}u. \end{align*} Now, in \pref{prop:matrix-self-norm}, with $V = T \cdot I$, we can show the same result for the martingale term $\sum_{t=0}^{T-1} A x(t)\eta(t+1)'$ and $\bar V(s) \coloneqq \sum_{t=0}^s A x(t)x(t)'A' + V$. Hence, with probability at least $1-\delta$, we have: \begin{align*} \norm{\sum_{t=0}^{T-1} Ax(t)\eta(t+1)'u} \le {} & \sqrt{u'A\Sigma(T-1)A'u + T} \sqrt{8d\sigma^2 \log \rbr{\frac{5\det \rbr{\bar{V}(T-1)}^{1/2d} \det \rbr{TI}^{-1/2d} }{\delta^{1/d}}}}. \end{align*} Thus, we get: \begin{align*} u'\Sigma(T)u \ge {} & u'A\Sigma(T-1)A'u - \sqrt{u'A\Sigma(T-1)A'u + T} \sqrt{16d\sigma^2 \log \rbr{\tfrac{\lambda_{\max}(\bar V(T-1))}{T}} + 32d\sigma^2 \log \tfrac{5}{\delta} } + \frac{3\lambda_{\min}(C)T}{4}. \end{align*} Hence, we have: \begin{align*} u'\frac{\Sigma(T)}{T}u \ge {} & u'\frac{A\Sigma(T-1)A'}{T}u + \frac{3\lambda_{\min}(C)}{4} - \sqrt{u'\frac{A\Sigma(T-1)A'}{T}u + 1} \frac{\lambda_{\min}(C)}{2} \\ \ge {} & \frac{\lambda_{\min}(C)}{4}, \end{align*} whenever $T$ is larger than \begin{align*} \frac{16d\sigma^2}{\lambda_{\min}(C)^2}\rbr{ \log \rbr{\frac{\lambda_{\max}(\bar V(T-1))}{T}} + 2 \log \frac{5}{\delta}} = {} & \tfrac{16d\sigma^2}{\lambda_{\min}(C)^2}\rbr{ \log \rbr{\tfrac{\lambda_{\max}\rbr{\sum_{t=0}^{T-1} A x(t)x(t)'A'}}{T} + 1} + 2 \log \tfrac{5}{\delta}}.
\end{align*} Using the upper bound analysis in \pref{lem:cov_upper_bound}, we show that it suffices for $T$ to be lower bounded as \begin{align*} T \ge \frac{16d\sigma^2}{\lambda_{\min}(C)^2}\rbr{ \log \rbr{\alpha(A)^2\bar b_m(\delta)^2 + 1} + 2 \log \frac{5}{\delta}}, \end{align*} when $A$ is strictly stable, and \begin{align*} T \ge \frac{16d\sigma^2}{\lambda_{\min}(C)^2}\rbr{ \log \rbr{\alpha(A)^2 \bar b_m(\delta)^2 T^{2l^*} + 1} + 2 \log \frac{5}{\delta}}, \end{align*} when $\abr{\lambda_1(A)} \le 1+\frac{\rho}{T}$. Since both quantities on the RHS grow at most logarithmically with $T$, there exists $T_0$ such that the above inequalities hold for all $T \ge T_0$. Combining the failure probabilities of all events, we get the desired result. \end{proof} \section{Proof of \pref{thm:estimation_error}} \label{sec:estimation-error-proof} In this section, we use the result in \pref{thm:cov_conc} to analyse the average estimation error across the $M$ systems for the estimator in \pref{eq:mt_opt} under \pref{assum:linear_model}. For ease of presentation, we rewrite the problem by transforming the vector output space to scalar values. To proceed, we introduce some notation to express each transition matrix in vector form and rewrite \pref{eq:mt_opt}, as follows. First, for each state vector $x_m(t) \in \mathbb{R}^d$, we create $d$ different covariates in $\mathbb{R}^{d^2}$. So, for $j=1,\ldots, d$, the vector $\tilde{x}_{m,j}(t) \in \mathbb{R}^{d^2}$ contains $x_m(t)$ in the $j$-th block of size $d$ and zeros elsewhere. Then, we express the system matrix $A_m \in \mathbb{R}^{d \times d}$ as a vector $\tilde{A}_m \in \mathbb{R}^{d^2}$. Similarly, the vectors $\tilde{A}_m$ can be coalesced into the matrix $\tilde{\Theta} \in \mathbb{R}^{d^2 \times M}$. Analogously, $\tilde{\eta}_m(t)$ will denote the concatenated $dt$-dimensional vector of noise vectors for system $m$.
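As a sanity check on this vectorization, the following numpy snippet (our own illustration; it fixes one concrete convention, stacking the rows of $A_m$ into $\tilde{A}_m$) verifies that the inner products $\tilde{A}_m^\top \tilde{x}_{m,j}(t)$ recover the coordinates of $A_m x_m(t)$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d))   # a single transition matrix A_m
x = rng.standard_normal(d)        # a single state vector x_m(t)

A_tilde = A.reshape(-1)           # rows of A stacked into a vector of length d^2

def x_tilde(j):
    """Covariate with x placed in the j-th block of size d and zeros elsewhere."""
    v = np.zeros(d * d)
    v[j * d:(j + 1) * d] = x
    return v

# componentwise, <A_tilde, x_tilde_j> equals the j-th coordinate of A x
pred = np.array([A_tilde @ x_tilde(j) for j in range(d)])
assert np.allclose(pred, A @ x)
```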
Thus, the structural assumption in \pref{eq:linear_model} can be written as: \begin{align} \label{eq:low_rank} \tilde{A}_m = W^*\beta^*_m, \end{align} where $W^* \in \mathbb{R}^{d^2 \times k}$ and $\beta^*_m \in \mathbb{R}^k$. Similarly, the overall parameter set can be factorized as $\tilde{\Theta}^* = W^*B^*$, where the matrix $B^* = [\beta^*_1 | \beta^*_2 | \cdots \beta^*_M] \in \mathbb{R}^{k \times M}$ contains the true weight vectors $\beta^*_m$. Thus, expressing the system matrices $A_m$ in this manner leads to the low-rank structure in \pref{eq:low_rank}, so that the matrix $\tilde \Theta^*$ is of rank $k$. Using the vectorized parameters, the evolution of the components $j \in [d]$ of all state vectors $x_m(t)$ can be written as: \begin{align} x_{m}(t+1)[j] = \tilde{A}_m\tilde{x}_{m,j}(t) + \eta_{m}(t+1)[j]. \end{align} For each system $m \in [M]$, we therefore have a total of $dT$ samples, where the statistical dependence now follows a block structure: the $d$ covariates used to predict $x_m(1)$ are all constructed from $x_m(0)$, the next $d$ from $x_m(1)$, and so forth. To estimate the parameters, we solve the following optimization problem: \begin{align} \label{eq:mt_uopt} \hat{W}, \{\hat{\beta}_m\}_{m=1}^M \coloneqq & {} \argmin_{W, \{\beta_m\}_{m=1}^M} \underbrace{ \sum_{m,t}\sum_{j=1}^d \rbr{ x_{m}(t+1)[j] - \langle W \beta_m, \tilde{x}_{m,j}(t)\rangle }^2 }_{\Lcal(W, \beta)} \nonumber \\ = & {} \argmin_{W, \{\beta_m\}_{m=1}^M} \sum_{m=1}^M \nbr{y_m - \tilde{X}_m W \beta_m }_2^2, \end{align} where $y_m \in \mathbb{R}^{Td}$ contains all $T$ state vectors stacked vertically and $\tilde{X}_m \in \mathbb{R}^{Td \times d^2}$ contains the corresponding covariates as its rows. We denote the covariance matrices for the vectorized form by $\tilde{\Sigma}_m=\sum_{t=0}^{T-1} \tilde x_m(t) \tilde x_m(t)'$.
Recall that the sample covariance matrices for all systems are denoted by $\Sigma_m=\sum_{t=0}^{T-1} x_m(t)x_m(t)'$. We further use the following notation: for any parameter set $\Theta = WB \in \mathbb{R}^{d^2 \times M}$, we define $\Xcal(\Theta) \in \mathbb{R}^{dT \times M}$ as $ \Xcal(\Theta) \coloneqq \sbr{\Xcal_1(\Theta) | \Xcal_2(\Theta) \cdots | \Xcal_M(\Theta)}$, where each column $\Xcal_m(\Theta) \in \mathbb{R}^{dT}$ is the prediction of the states $x_m(t+1)$ with $\Theta_m$. That is, \[\Xcal_m(\Theta) = (x_m(0)'A_m', x_m(1)'A_m', \ldots , x_m(T-1)'A_m' )'.\] Thus, $\Xcal(\tilde \Theta^*) \in \mathbb{R}^{Td \times M}$ denotes the ground truth mapping for the training data of the $M$ systems and $\Xcal(\tilde \Theta^* - \hat \Theta) \in \mathbb{R}^{Td \times M}$ is the prediction error across all coordinates of the $MT$ state vectors, each of which is of dimension $d$. By our shared basis assumption (\pref{assum:linear_model}), we have $\Delta \coloneqq \tilde \Theta^* - \hat{\Theta} = UR$, where $U \in O^{d^2 \times 2k}$ is an orthonormal matrix and $R \in \mathbb{R}^{2k \times M}$. We start from the fact that the estimates $\hat{W}$ and $\hat{\beta}_m$ minimize \pref{eq:mt_opt} and, therefore, have a smaller squared prediction error than $(W^*, B^*)$. Hence, we get the following inequality: \begin{align} \label{eq:sqloss_ineq} \frac{1}{2} \sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le & {} \sum_{m=1}^M \inner{\tilde \eta_m}{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}. \end{align} We can rewrite $\hat{W}\hat{\beta}_m - W^*\beta^*_m=Ur_m$, for all $m \in [M]$, where $r_m \in \mathbb{R}^{2k}$ is an idiosyncratic projection vector for system $m$.
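The decomposition $\Delta = UR$ is straightforward to verify numerically; the sketch below (with hypothetical small dimensions of our choosing) forms $\Delta$ as the difference of two rank-$k$ factorizations and reconstructs it exactly from an orthonormal basis $U$ of its column space, with $r_m = U'\Delta_m$:

```python
import numpy as np

rng = np.random.default_rng(3)
D2, k, M = 30, 3, 12   # stand-ins for d^2, k, and the number of systems

# difference of two rank-k factorizations, e.g. true and estimated parameters
Delta = (rng.standard_normal((D2, k)) @ rng.standard_normal((k, M))
         - rng.standard_normal((D2, k)) @ rng.standard_normal((k, M)))

# an orthonormal basis U for the column space of Delta, which has rank at most 2k
U = np.linalg.svd(Delta, full_matrices=False)[0][:, :2 * k]

R = U.T @ Delta                    # columns r_m = U' Delta_m, one per system
assert np.allclose(U @ R, Delta)   # Delta = U R, since rank(Delta) <= 2k
```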
Our first step is to bound the prediction error for all systems $m \in [M]$ in the following lemma: \begin{lemma} \label{lem:breakup} For any fixed orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, the total squared prediction error in \pref{eq:mt_opt} for $(\hat W, \hat B)$ can be decomposed as follows: \begin{align} \frac{1}{2}\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m)}_2^2 \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} \nonumber \\ {} & + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m}. \label{eq:pred_error_breakup} \end{align} \end{lemma} The proof of \pref{lem:breakup} can be found in \pref{app:estimation-error}. Our next step is to bound each term on the RHS of eq.~\eqref{eq:pred_error_breakup} individually. To that end, let $\Ncal_{\epsilon}$ be an $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$. In eq.~\eqref{eq:pred_error_breakup}, we select the matrix $\bar{U}$ to be an element of $\Ncal_{\epsilon}$ such that $\norm{\bar{U} - U}_F \le \epsilon$. Note that since $\Ncal_{\epsilon}$ is an $\epsilon$-cover, such a matrix $\bar{U}$ exists. We can bound the size of such a cover using \pref{lem:lowrank-cover}, and obtain $|\Ncal_\epsilon| \le \rbr{\frac{6\sqrt{d}}{\epsilon}}^{2d^2k}$. We now bound each term in the following propositions using the auxiliary results in \pref{sec:aux_results}. The detailed proofs for these results are available in \pref{app:estimation-error}.
Using \pref{prop:noise-magnitude}, we bound the following expression in the second term of \eqref{eq:pred_error_breakup} below: \begin{proposition} \label{prop:proj_cover_error} Under \pref{assum:linear_model}, for the noise process $\{\eta_m(t)\}_{t=1}^\infty$ defined for each system, with probability at least $1-\delta_Z$, we have: \begin{align*} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{2}{\delta_Z} }. \end{align*} \end{proposition} Based on the bound in \pref{prop:proj_cover_error}, we can bound the third term in \eqref{eq:pred_error_breakup} as follows: \begin{proposition} \label{prop:breakup_inner_prod} Under \pref{assum:noise} and \pref{assum:linear_model}, with probability at least $1-\delta_Z$, we have: \begin{align} \label{eq:breakup_inner_prod} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. \end{align} \end{proposition} Next, we show the following multitask self-normalized martingale concentration result similar to \pref{lem:self-norm}, but with a projection step to a low-rank subspace. \begin{proposition} \label{prop:mt-self-norm} For an arbitrary orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$ in the $\epsilon$-cover $\Ncal_\epsilon$ defined in \pref{lem:lowrank-cover}, let $\Sigma \in \mathbb{R}^{d^2 \times d^2}$ be a positive definite matrix, and define $S_{m}(\tau) = \tilde \eta_m(\tau)^\top \tilde{X}_m(\tau) \bar{U}$, $\bar{V}_m(\tau) = \bar{U}'\rbr{\tilde{\Sigma}_m(\tau) + \Sigma}\bar{U}$, and $V_0 = \bar{U}'\Sigma\bar{U}$. Then, define $\Ecal_{1}(\delta_U)$ as the event where: \begin{align*} \sum_{m=1}^M \norm{S_{m}(T)}_{\bar{V}_m^{-1}(T)}^2 \le 2\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\frac{\det(\bar{V}_m(T))}{\det (V_0)}}{\delta_U}}. 
\end{align*} For $\Ecal_{1}(\delta_U)$, we have: \begin{align} \label{eq:mt-self-norm} \PP\sbr{\Ecal_1(\delta_U)} \ge 1-\rbr{\frac{6\sqrt{2k}}{\epsilon}}^{2d^2k}\delta_U. \end{align} \end{proposition} \subsection{Proof of the Estimation Error in \pref{thm:estimation_error}} \label{sec:final_proof} \begin{proof} We now combine the bounds shown for each term above and complete the proof using the error decomposition in \pref{lem:breakup}. Let $\abr{\Ncal_{\epsilon}}$ be the cardinality of the $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$ that we defined in \pref{lem:breakup}. Let $\VV$ denote the expression $\Pi_{m=1}^M\frac{\det(\bar{V}_m(T))}{\det (V_0)}$. So, substituting the termwise bounds from \pref{prop:proj_cover_error}, \pref{prop:breakup_inner_prod}, and \pref{prop:mt-self-norm} in \pref{lem:breakup}, with probability at least $1-\abr{\Ncal_{\epsilon}}\delta_U - \delta_Z$, it holds that: \begin{align} \frac{1}{2}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{\sigma^2\log \rbr{\frac{\VV}{\delta_U}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F + \sqrt{\sigma^2\log \rbr{\frac{\VV}{\delta_U}}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} }} \nonumber \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. \label{eq:inter_breakup} \end{align} For the matrix $V_0$, we now substitute $\Sigma = \underbar{\lambda}I_{d^2}$, which implies that $\det(V_0)^{-1} = \det\rbr{\underbar{\lambda}^{-1}I_{2k}} = \rbr{1/\underbar{\lambda}}^{2k}$. Similarly, for the matrix $\bar{V}_m(T)$, we get $\det(\bar{V}_m(T)) \le \bar{\lambda}^{2k}$.
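For completeness, the determinant bound $\det(\bar{V}_m(T)) \le \bar{\lambda}^{2k}$ can be sketched as follows, reading $\bar{\lambda}$ as a uniform upper bound on the eigenvalues of the regularized covariance matrices (as in \pref{thm:cov_conc}):

```latex
% On the event of \pref{thm:cov_conc}, \tilde{\Sigma}_m(T) + \Sigma \preceq \bar{\lambda} I_{d^2}, so
\begin{align*}
\bar{V}_m(T) = \bar{U}'\rbr{\tilde{\Sigma}_m(T) + \Sigma}\bar{U}
  \preceq \bar{\lambda}\, \bar{U}'\bar{U} = \bar{\lambda} I_{2k}
  \quad\Longrightarrow\quad
  \det\rbr{\bar{V}_m(T)} \le \bar{\lambda}^{2k},
\end{align*}
% since the determinant of a positive definite matrix is the product of its
% (here, at most 2k) eigenvalues.
```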
Thus, substituting $\delta_U = 3^{-1}\delta\abr{\Ncal_\epsilon}^{-1}$ and $\delta_C=3^{-1}\delta$ in \pref{thm:cov_conc}, with probability at least $1-2\delta/3$, the upper-bound in \pref{prop:mt-self-norm} becomes: \begin{align*} \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2 \le {} & \sigma^2\log \rbr{\frac{\Pi_{m=1}^M\frac{\det(\bar{V}_m(T))}{\det (V_0)}}{\delta_U}} \\ \lesssim {} & \sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}. \end{align*} Substituting this in \pref{eq:inter_breakup} with $\delta_Z = \delta/3$ and $c^2 = \max(\sigma^2, \lambda_{\max}(C))$, with probability at least $1-\delta$, we have: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}}\Bigg(\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F + \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{c^2dMT +c^2\log \frac{1}{\delta}}}\Bigg) & \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{c^2dMT + c^2\log \frac{1}{\delta}} . & \end{align*} Noting that $\log \frac{1}{\delta} \lesssim d^2k\log \frac{k}{\delta \epsilon}$ for $\epsilon = \frac{k}{\sqrt{\boldsymbol\kappa}dT}$, with probability at least $1-\delta$, we get: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \rbr{\sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \tfrac{\boldsymbol\kappa dT}{\delta}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \tfrac{\boldsymbol\kappa dT}{\delta}} \sqrt{c^2\rbr{\tfrac{k^2M}{dT} + \tfrac{k^3}{T^2}\log \tfrac{\boldsymbol\kappa dT}{\delta} }} + c^2\rbr{Mk + \tfrac{dk^2}{T}\log \tfrac{\boldsymbol\kappa dT}{\delta}} .
\end{align*} As $k \le d$, we can rewrite the above inequality as: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k \log \frac{\boldsymbol\kappa dT}{\delta}}} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F + c^2\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} The above quadratic inequality for the prediction error $\norm{\Xcal(W^*B^* - \hat W \hat B)}_F^2$ implies the following bound (using the elementary fact that $x^2 \lesssim ax + b$ implies $x^2 \lesssim a^2 + b$), which holds with probability at least $1-\delta$: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} Since the smallest eigenvalue of the matrix $\Sigma_m = \sum_{t=0}^{T-1} x_m(t) x_m(t)'$ is at least $\underbar \lambda$ (\pref{thm:cov_conc}), we can convert the above prediction error bound to an estimation error bound and get \begin{align*} \norm{W^*B^* - \hat W \hat B}_F^2 \lesssim {} & \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda}, \end{align*} which implies the following estimation error bound for the solution of \pref{eq:mt_opt}: \begin{align*} \sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda}. \end{align*} \end{proof} \section{Proof of \pref{thm:estimation_error_pert}} \label{sec:estimation-error_pert_proof} Here, we provide the key steps for bounding the average estimation error across the $M$ systems for the estimator in \pref{eq:mt_opt} in the presence of misspecifications $D_m \in \mathbb{R}^{d \times d}$: \begin{align*} A_m = \rbr{\sum_{i=1}^k \beta^*_m[i] W^*_i} + D_m, \quad \text{where }\norm{D_m}_F \le \zeta_m.
\end{align*} In the presence of misspecifications, we have $\Delta \coloneqq \tilde \Theta^* - \hat{\Theta} = UR + D$, where $U \in O^{d^2 \times 2k}$ is an orthonormal matrix, $R \in \mathbb{R}^{2k \times M}$, and $D \in \mathbb{R}^{d^2 \times M}$ is the misspecification error. As the analysis here shares its template with the proof of \pref{thm:estimation_error}, we provide a sketch here, with the complete details provided in \pref{app:estimation-error_pert}. As in \pref{sec:estimation-error-proof}, we start with the fact that $(\hat W, \hat B)$ minimize the squared loss in \pref{eq:mt_opt}. However, in this case, we get an additional term dependent on the misspecifications $D_m$: \begin{align} \label{eq:sqloss_ineq_pert} \frac{1}{2} \sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le & {} \sum_{m=1}^M \inner{\tilde \eta_m}{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}} {+ \sum_{m=1}^M 2\inner{ \tilde{X}_m\tilde{D}_m}{ \tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}}. \end{align} We follow a similar proof strategy as in \pref{sec:estimation-error-proof} and account for the additional terms arising due to the misspecifications $D_m$. The error in the shared part, $\hat{W}\hat{\beta}_m - W^*\beta^*_m$, can still be rewritten as $Ur_m$, where $U \in \mathbb{R}^{d^2 \times 2k}$ is a matrix containing an orthonormal basis of size $2k$ in $\mathbb{R}^{d^2}$ and $r_m \in \mathbb{R}^{2k}$ is the system-specific vector.
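The extra inner-product term in \pref{eq:sqloss_ineq_pert} is controlled via the Cauchy-Schwarz inequality; a sketch, reading $\bar{\zeta}$ as the aggregate misspecification level with $\bar{\zeta}^2 = \sum_{m=1}^M \zeta_m^2$ (our reading of the notation):

```latex
\begin{align*}
\sum_{m=1}^M \inner{\tilde{X}_m \tilde{D}_m}{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}
\le {} & \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \tilde{D}_m}_2^2}
        \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2} \\
\le {} & \sqrt{\bar{\lambda}}\,\bar{\zeta}
        \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2},
\end{align*}
% using \norm{\tilde{X}_m \tilde{D}_m}_2^2 \le \bar{\lambda}_m \norm{\tilde{D}_m}^2
%        \le \bar{\lambda} \zeta_m^2 for each system m,
% which is how the misspecification term in \pref{lem:breakup_pert_app} arises.
```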
We now show a term decomposition similar to \pref{lem:breakup}: \begin{lemma} \label{lem:breakup_pert_app} Under the misspecified shared linear basis structure in \pref{eq:linear_model_pert}, for any fixed orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, the low-rank part of the total squared error can be decomposed as follows: \begin{align} \frac{1}{2}\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m)}_F^2 \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} \nonumber \\ {} & {+ 2\sqrt{\bar \lambda}\bar{\zeta} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2}}. \label{eq:app_pred_error_breakup_pert} \end{align} \end{lemma} We will now bound each term on the right-hand side of eq.~\eqref{eq:app_pred_error_breakup_pert} individually, as in \pref{sec:estimation-error-proof}, where we choose the matrix $\bar U$ to be an element of $\Ncal_{\epsilon}$, an $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$. The proof of \pref{lem:breakup_pert_app} and the propositions below can be found in \pref{app:estimation-error}. \begin{proposition}[Bounding $\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2$] \label{prop:proj_cover_error_pert} For the multitask model specified in \pref{eq:linear_model_pert}, for the noise process $\{\eta_m(t)\}_{t=1}^\infty$ defined for each system, with probability at least $1-\delta_Z$, we have: \begin{align} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{2}{\delta_Z} {+ \bar{\lambda}\bar{\zeta}^2} }.
\label{eq:pred_error_coarse_pert} \end{align} \end{proposition} \begin{proposition}[Bounding $\sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m}$] \label{prop:breakup_inner_prod_pert} Under \pref{assum:noise} and the shared structure in \pref{eq:linear_model_pert}, with probability at least $1-\delta_Z$, we have: \begin{align} \label{eq:breakup_inner_prod_pert} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \epsilon \bar \zeta}. \end{align} \end{proposition} \paragraph*{Putting Things Together:}~\\~\\ We now use the decomposition in \pref{lem:breakup_pert_app} and the term-wise upper bounds to derive the final estimation error rate. As before, we substitute the termwise bounds from \pref{prop:proj_cover_error_pert}, \pref{prop:breakup_inner_prod_pert}, and \pref{prop:mt-self-norm} in \pref{lem:breakup_pert_app} with values $\delta_U = 3^{-1}\delta\abr{\Ncal_\epsilon}^{-1}$, $\delta_C=\delta/3$ (in \pref{thm:cov_conc}), and $\delta_Z = \delta/3$. Noting that $k \le d$ and $\log \frac{1}{\delta} \lesssim d^2k\log \frac{k}{\delta \epsilon}$, by setting $\epsilon = \frac{k}{\sqrt{\boldsymbol\kappa}dT}$ we finally get the following quadratic inequality in the error term $\Xi \coloneqq \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F$: \begin{align*} \frac{1}{2} \Xi^2 \lesssim {} & \rbr{\sqrt{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k \log \frac{\boldsymbol\kappa dT}{\delta}}}{+ \sqrt{\bar \lambda}\bar{\zeta}}} \Xi + c^2\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}} \\ {} & {+ c \sqrt{\frac{\bar \lambda \bar \zeta^2}{T}\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}}}}.
\end{align*} The quadratic inequality for the prediction error $\norm{\Xcal(W^*B^* - \hat W \hat B)}_F^2$ implies the following bound with probability at least $1-\delta$: \begin{align*} \Xi^2 \lesssim {} & c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}} + {\bar \lambda \bar \zeta^2}. \end{align*} Since $\underbar \lambda = \min_m \underbar \lambda_m$, we can convert the prediction error bound to an estimation error bound for the solution of \pref{eq:mt_opt}: \begin{align*} \sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim {} & \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda} + {(\boldsymbol\kappa_\infty + 1) \bar \zeta^2}. \end{align*} \section{Auxiliary Probabilistic Inequalities} \label{sec:aux_results} In this section, we state the general probabilistic inequalities that we used in proving the main results in the previous sections. The proofs for these results can be found in \pref{app:gen_ineq}. \begin{proposition}[Bounding the noise sequence] \label{prop:truncation} For $T = 0,1,\ldots$, and $0<\delta<1$, let $\Ecal_{\mathrm{bdd}}$ be the event \begin{align} \Ecal_{\mathrm{bdd}}(\delta) \coloneqq \cbr{\max_{1 \le t \le T, m \in [M]} \norm{\eta_m(t)}_\infty \le \sqrt{2\sigma^2 \log \tfrac{2dMT}{\delta}}}. \end{align} Then, we have $\PP[\Ecal_{\mathrm{bdd}}] \ge 1-\delta$. For simplicity, we denote the above upper bound by $b_T(\delta)$. \end{proposition} \begin{proposition}[Noise covariance concentration] \label{prop:noise-conc} For $T$ and $0 < \delta < 1$, let $\Ecal_{\eta}$ be the event \begin{align*} \Ecal_{\eta}(\delta) \coloneqq \cbr{\tfrac{3\lambda_{\min}(C)}{4} I \preceq \tfrac{1}{T}\sum_{t=1}^T \eta_m(t) \eta_m(t)' \preceq \tfrac{5\lambda_{\max}(C)}{4} I}.
\end{align*} Then, if $T \ge T_{\eta}(\delta) \coloneqq \frac{c_\eta d\sigma^2 }{\lambda_{\min}(C)^2}\log 18/\delta$, we have $\PP[\Ecal_{\mathrm{bdd}}(\delta) \cap \Ecal_{\eta}(\delta)] \ge 1-2\delta$. \end{proposition} Define the pooled noise matrix $Z \in \RR^{dT \times M}$ as follows: \begin{align} \label{eq:tot_noise} Z = \sbr{\tilde\eta_1(T) | \tilde\eta_2(T) \cdots | \tilde \eta_M(T)}, \end{align} with each column vector $\tilde\eta_m(T) \in \mathbb{R}^{dT}$ being the concatenated noise vector $(\eta_m(1), \eta_m(2), \ldots, \eta_m(T))$ for the $m$-th system. \begin{proposition}[Bounding total magnitude of noise] \label{prop:noise-magnitude} For the joint noise matrix $Z \in \mathbb{R}^{dT \times M}$ defined in \pref{eq:tot_noise}, with probability at least $1-\delta$, we have: \begin{align*} \norm{Z}_F^2 \le MT\tr{C} + \log \frac{2}{\delta}. \end{align*} We denote the above event by $\Ecal_Z(\delta)$. \end{proposition} The following result shows a self-normalized martingale bound for vector-valued noise processes. \begin{proposition} \label{prop:matrix-self-norm} For the system in \pref{eq:mt-lti}, for any $0<\delta<1$ and system $m \in [M]$, with probability at least $1-\delta$, we have: \begin{align*} \opnorm{\bar{V}_m^{-1/2}(T-1) \sum_{t=0}^{T-1} x_m(t) \eta_m(t+1)'}{2} \le {} & \sigma \sqrt{8d \log \rbr{\frac{5\det \rbr{\bar{V}_m(T-1)}^{1/2d} \det \rbr{V}^{-1/2d} }{\delta^{1/d}}}}, \end{align*} where $\bar{V}_{m}(s) = \sum_{t=0}^s x_m(t) x_m(t)' + V$ and $V$ is a deterministic positive definite matrix. \end{proposition} \begin{lemma}[Covering low-rank matrices \cite{du2020few}] \label{lem:lowrank-cover} Let $O^{d \times d'}$ be the set of matrices with orthonormal columns ($d > d'$).
Then there exists a subset $\Ncal_\epsilon \subset O^{d \times d'}$ that forms an $\epsilon$-net of $O^{d \times d'}$ in Frobenius norm such that $|\Ncal_\epsilon| \le (\tfrac{6\sqrt{d'}}{\epsilon})^{dd'}$, i.e., for every $V \in O^{d \times d'}$, there exists $V' \in \Ncal_\epsilon$ with $\|V - V'\|_F \le \epsilon$. \end{lemma} \section{Concluding Remarks} \label{sec:conc} We studied the problem of jointly learning multiple LTI systems, under the assumption that their transition matrices can be expressed based on an unknown shared basis. Our finite-time analysis for the proposed joint estimator shows that pooling data across systems can provably improve over individual estimators, even in the presence of misspecifications. The presented results highlight the critical roles that the spectral properties of the system matrices and the size of the basis play in the efficiency of joint estimation. Further, we characterize fundamental differences between joint estimation of LTI systems using dependent state trajectories and learning from independent stationary observations. Considering different shared structures, extending the presented results to explosive systems or those with high-dimensional transition matrices, and jointly learning multiple non-linear dynamical systems are interesting avenues for future work, and this work paves the road towards them.
\section{Proofs of Auxiliary Results} \label{app:gen_ineq} Here, we give proofs of the probabilistic inequalities and intermediate results in \pref{sec:aux_results}. \paragraph*{Proposition}[Restatement of \pref{prop:truncation}] For $T = 0,1,\ldots$, and $0<\delta<1$, let $\Ecal_{\mathrm{bdd}}$ be the event \begin{align} \Ecal_{\mathrm{bdd}}(\delta) \coloneqq \cbr{\max_{1 \le t \le T, m \in [M]} \norm{\eta_m(t)}_\infty \le \sqrt{2\sigma^2 \log \tfrac{2dMT}{\delta}}}. \end{align} Then, we have $\PP[\Ecal_{\mathrm{bdd}}] \ge 1-\delta$. For simplicity, we denote the above upper bound by $b_T(\delta)$. \begin{proof} Let $e_i$ be the $i$-th member of the standard basis of $\mathbb{R}^d$. Using the sub-Gaussianity of the random vector $\eta_{m}(t)$ given the sigma-field $\Fcal_{t-1}$, we have \begin{align*} \PP\sbr{\abr{\inner{e_i}{\eta_m(t)}} > \sqrt{2\sigma^2 \log \frac{2}{\delta'}}} \le \delta'.
\end{align*} Therefore, taking a union bound over all basis vectors $i=1,\ldots, d$, all systems $m \in [M]$, and all time steps $t=1,\ldots, T$, we get the desired result by letting $\delta'=\delta (dMT)^{-1}$. \end{proof} \paragraph*{Proposition}[Restatement of \pref{prop:noise-conc}] For $T$ and $0 < \delta < 1$, let $\Ecal_{\eta}$ be the event \begin{align*} \Ecal_{\eta}(\delta) \coloneqq \cbr{\tfrac{3\lambda_{\min}(C)}{4} I \preceq \tfrac{1}{T}\sum_{t=1}^T \eta_m(t) \eta_m(t)' \preceq \tfrac{5\lambda_{\max}(C)}{4} I}. \end{align*} Then, if $T \ge T_{\eta}(\delta) \coloneqq \frac{c_\eta d\sigma^2 }{\lambda_{\min}(C)^2}\log 18/\delta$, we have $\PP[\Ecal_{\mathrm{bdd}}(\delta) \cap \Ecal_{\eta}(\delta)] \ge 1-2\delta$. \begin{proof} Here, we will bound the largest eigenvalue of the deviation matrix $\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC$. For the spectral norm of this matrix, using Lemma 5.4 from Vershynin \cite{vershynin2018high}, we have: \begin{align*} \opnorm{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC}{2} \le \frac{1}{1-2\tau}\sup_{v \in \Ncal_{\tau}} \abr{v' \rbr{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC} v} , \end{align*} where $\Ncal_{\tau}$ is a $\tau$-cover of the unit sphere $\Scal^{d-1}$. Now, it holds that $\abr{\Ncal_{\tau}} \le \rbr{1 + 2/\tau}^d$. Thus, we get: \begin{align*} \PP\sbr{\opnorm{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC}{2} \ge \epsilon} \le {} & \PP\sbr{\sup_{v \in \Ncal_{\tau}} \abr{v' \rbr{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC} v } \ge (1-2\tau) \epsilon}. \end{align*} Using martingale concentration arguments, we first bound the probability on the right-hand side for a fixed vector $v \in \Ncal_{\tau}$. Then, taking a union bound over all $v \in \Ncal_{\tau}$ will lead to the final result. For a given $t$, since $\eta_m(t)'v$ is conditionally sub-Gaussian with parameter $\sigma$, the quantity $v'\eta_m(t)\eta_m(t)'v-v'Cv$ is a conditionally sub-exponential martingale difference.
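To spell out the sub-exponential property being used, one standard form of the conditional moment-generating-function bound is the following (a sketch; the universal constants $c_0, c_1$ are our placeholders and are distinct from the $c_\eta$ in the statement):

```latex
\begin{align*}
\EE\sbr{\exp\rbr{\lambda\rbr{(v'\eta_m(t))^2 - v'Cv}} \,\Big|\, \Fcal_{t-1}}
  \le \exp\rbr{c_0 \lambda^2 \sigma^4},
\qquad \text{for all } \abr{\lambda} \le \frac{c_1}{\sigma^2},
\end{align*}
% i.e., the square of a conditionally \sigma-sub-Gaussian variable is conditionally
% sub-exponential with parameters of order \sigma^2.
```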
Using Theorem 2.19 of Wainwright \cite{wainwright2019high}, for small values of $\epsilon$, we have \begin{align*} \PP\sbr{ \abr{v' \rbr{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC} v} \ge (1-2\tau)\epsilon} \le 2 \exp\rbr{-\frac{c_\eta(1-2\tau)^2\epsilon^2}{T\sigma^2}} , \end{align*} where $c_\eta$ is some universal constant. Taking a union bound, setting the total failure probability to $\delta$, and letting $\tau = 1/4$, we obtain that with probability at least $1-\delta$, it holds that \begin{align*} \lambda_{\max}\rbr{\sum_{t=1}^T \eta_m(t)\eta_m(t)' - TC} \le c_\eta\sigma \sqrt{T\log \rbr{\frac{2 \cdot 9^d}{\delta}}}. \end{align*} According to Weyl's inequality, for $T \ge T_{\eta}(\delta) \coloneqq \frac{c_\eta d\sigma^2 }{\lambda_{\min}(C)^2}\log 18/\delta$, we have: \begin{align*} \frac{3\lambda_{\min}(C)}{4}I \preceq \frac{1}{T}\sum_{t=1}^T \eta_m(t) \eta_m(t)' \preceq \frac{5\lambda_{\max}(C)}{4}I. \end{align*} \end{proof} \paragraph*{Proposition}[Restatement of \pref{prop:noise-magnitude}] For the joint noise matrix $Z \in \mathbb{R}^{dT \times M}$ defined in \pref{eq:tot_noise}, with probability at least $1-\delta$, we have: \begin{align*} \norm{Z}_F^2 \le MT\tr{C} + \log \frac{2}{\delta}. \end{align*} We denote the above event by $\Ecal_Z(\delta)$. \begin{proof} For each system $m$, we know that $\EE[\eta_m(t)[i]^2|\Fcal_{t-1}] = C_{ii}$. As in the previous proof, $\eta_m(t)[i]^2$ follows a conditionally sub-exponential distribution given $\Fcal_{t-1}$. Using the sub-exponential bound for martingale difference sequences, for large enough $T$ we get: \begin{align*} \norm{Z}_F^2 = \sum_{m=1}^M \sum_{t=1}^T \norm{\eta_m(t)}^2 \le MT\tr{C} + \log \frac{2}{\delta}, \end{align*} with probability at least $1-\delta$. \end{proof} Next, we show the proof of \pref{prop:matrix-self-norm} based on the following well-known probabilistic inequalities.
The first inequality is a concentration bound for self-normalized martingales, which can be found in Lemma 8 and Lemma 9 in the work of Abbasi-Yadkori et al. \cite{abbasi2011improved}. More details about self-normalized processes can be found in the work of de la Pe\~na et al. \cite{victor2009self}. \begin{lem} \label{lem:self-norm} Let $\{\Fcal_t\}_{t=0}^\infty$ be a filtration. Let $\{\eta_t\}_{t=1}^\infty$ be a real-valued stochastic process such that $\eta_t$ is $\Fcal_t$-measurable and $\eta_t$ is conditionally $\sigma$-sub-Gaussian, i.e., \begin{align*} \forall \lambda \in \mathbb{R}, \quad \EE\sbr{\exp(\lambda \eta_t) | \Fcal_{t-1}} \le e^{\frac{\lambda^2 \sigma^2}{2}}. \end{align*} Let $\{X_t\}_{t=1}^\infty$ be an $\mathbb{R}^d$-valued stochastic process such that $X_t$ is $\Fcal_t$-measurable. Assume that $V$ is a $d \times d$ positive definite matrix. For any $t \ge 0$, define \begin{align*} \bar{V}_t = V + \sum_{s=1}^t X_s X_s', \quad S_t = \sum_{s=1}^t \eta_{s+1}X_s. \end{align*} Then with probability at least $1 - \delta$, for all $t \ge 0$ we have \begin{align*} \norm{S_t}_{\bar{V}_t^{-1}}^2 \le 2\sigma^2 \log \rbr{\frac{\det \rbr{\bar{V}_t}^{1/2} \det \rbr{V}^{-1/2}}{\delta}}. \end{align*} \end{lem} The second inequality is the following discretization-based bound shown in Vershynin \cite{vershynin2018high} for random matrices: \begin{proposition} \label{prop:matrix-l2} Let $A$ be a random matrix. For any $\epsilon<1$, let $\Ncal_{\epsilon}$ be an $\epsilon$-net of $\Scal^{d-1}$ such that for any $w \in \Scal^{d-1}$, there exists $\bar w \in \Ncal_{\epsilon}$ with $\norm{w-\bar w} \le \epsilon$. Then for any $\epsilon < 1$, we have: \begin{align*} \PP\sbr{\opnorm{A}{2} > z} \le \PP\sbr{\max_{\bar w \in \Ncal_\epsilon} \norm{A\bar w} > (1-\epsilon)z}. \end{align*} \end{proposition} With the aforementioned results, we now show the proof of \pref{prop:matrix-self-norm}.
\paragraph*{Proposition}[Restatement of \pref{prop:matrix-self-norm}] For the system in \pref{eq:mt-lti}, for any $0<\delta<1$ and system $m \in [M]$, with probability at least $1-\delta$, we have: \begin{align*} \opnorm{\bar{V}_m^{-1/2}(T-1) \sum_{t=0}^{T-1} x_m(t) \eta_m(t+1)'}{2} \le {} & \sigma \sqrt{8d \log \rbr{\frac{5\det \rbr{\bar{V}_m(T-1)}^{1/2d} \det \rbr{V}^{-1/2d} }{\delta^{1/d}}}}, \end{align*} where $\bar{V}_{m}(s) = \sum_{t=0}^s x_m(t) x_m(t)' + V$ and $V$ is a deterministic positive definite matrix. \begin{proof} For the partial sum $S_m(t) = \sum_{s=0}^t x_m(s) \eta_m(s+1)'$, using \pref{prop:matrix-l2} with $\epsilon=1/2$, we get: \begin{align*} \PP\sbr{\opnorm{\bar{V}_m^{-1/2}(T-1) S_m(T-1)}{2} \ge y} \le \sum_{\bar w \in \Ncal_{\epsilon}} \PP\sbr{\norm{\bar{V}_m^{-1/2}(T-1) S_m(T-1)\bar w}^2 \ge \frac{y^2}{4}}, \end{align*} where $\bar w$ is a fixed unit-norm vector in $\Ncal_{\epsilon}$. We can now apply \pref{lem:self-norm} with the $\sigma$-sub-Gaussian noise sequence $\eta_m(t+1)'\bar w$ to get the final high probability bound. \end{proof} \section{Remaining Proofs from \pref{sec:estimation-error-proof}} \label{app:estimation-error} In this section, we provide a detailed analysis of the average estimation error across the $M$ systems for the estimator in \pref{eq:mt_opt}.
We start with the proof of \pref{lem:breakup} which bounds the prediction error for all systems $m \in [M]$: \paragraph*{Lemma}[Restatement of \pref{lem:breakup}] For any fixed orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, the total squared prediction error in \pref{eq:mt_opt} for $(\hat W, \hat B)$ can be decomposed as follows: \begin{align} & \frac{1}{2}\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m)}_F^2 & \nonumber\\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} & \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} . & \label{eq:app_pred_error_breakup} \end{align} \begin{proof} We first define $\tilde{\Sigma}_{m,\mathop{\mathrm{up}}}$ and $\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}$ as the block diagonal matrices in $\mathbb{R}^{d^2 \times d^2}$, with each $d\times d$ block of $\tilde{\Sigma}_{m,\mathop{\mathrm{up}}}$ and $\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}$ containing $\bar{\Sigma}_m$ and $\underbar{\Sigma}_m$, respectively. Let $V_m = U^\top \tilde{\Sigma}_m U + U^\top\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}U$ be the regularized covariance matrix of projected covariates $\tilde{X}_m U$. 
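In compact form, these block-diagonal matrices can be written via Kronecker products (a sketch; this Kronecker form is simply our restatement of the block-diagonal definition above, not notation used elsewhere in the paper):

```latex
% Block-diagonal matrices consisting of d identical d x d blocks:
\tilde{\Sigma}_{m,\mathop{\mathrm{up}}} = I_d \otimes \bar{\Sigma}_m ,
\qquad
\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}} = I_d \otimes \underbar{\Sigma}_m ,
% so that V_m = U^\top \tilde{\Sigma}_m U + U^\top (I_d \otimes \underbar{\Sigma}_m) U
% is the regularized covariance of the projected covariates \tilde{X}_m U.
```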
For any orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, we define $\bar V_m = \bar U^\top \tilde{\Sigma}_m \bar U + \bar U^\top\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}\bar U$, and proceed as follows: \begin{align} \frac{1}{2}\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 = {} & \sum_{m=1}^M \inner{\tilde \eta_m}{\tilde{X}_m \bar{U} r_m} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \label{eq:coverU}\\ \le {} & \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}\norm{r_m}_{\bar{V}_m} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \label{eq:c-s}\\ \le {} & \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}\norm{r_m}_{{V}_m} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \nonumber \\ {} & + \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}\rbr{\norm{r_m}_{\bar{V}_m}-\norm{r_m}_{{V}_m}} \label{eq:add-subtract}\\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{\sum_{m=1}^M \norm{r_m}^2_{V_m}} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{\sum_{m=1}^M \rbr{\norm{r_m}_{\bar{V}_m} - \norm{r_m}_{V_m}}^2} \label{eq:c-s2} . \end{align} The first equality in \pref{eq:coverU} uses the fact that the error matrix $\tilde \Theta^* - \hat \Theta$ is of rank at most $2k$ and then introduces a matrix $\bar U$, leading to the two terms on the right-hand side. In inequality \pref{eq:c-s}, we use the Cauchy-Schwarz inequality to bound the first term with respect to the norm induced by the matrix $\bar{V}_m$. The next step \pref{eq:add-subtract} again follows by simple algebra, where we rewrite the term $\norm{r_m}_{\bar{V}_m}$ as $\norm{r_m}_{{V}_m} + (\norm{r_m}_{\bar{V}_m} -\norm{r_m}_{{V}_m} )$ and collect the terms accordingly.
Finally, in the last step \pref{eq:c-s2}, we again use the Cauchy-Schwarz inequality to rewrite the first and last terms from the previous step. Now, note that $\norm{r_m}^2_{V_m} = \norm{Ur_m}^2_{\tilde \Sigma_m + \tilde\Sigma_{m,\mathop{\mathrm{dn}}}}$. Thus, for the last term in the RHS of \pref{eq:c-s2}, we can use the reverse triangle inequality, $\abr{\norm{a}-\norm{b}} \le \norm{a-b}$ for any two vectors $a,b \in \mathbb{R}^{d^2}$, with $a=\bar U r_m$ and $b=U r_m$. Hence, we have: \begin{align} \frac{1}{2}\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{\sum_{m=1}^M \norm{Ur_m}^2_{\tilde{\Sigma}_m + \tilde\Sigma_{m,\mathop{\mathrm{dn}}}}} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{\sum_{m=1}^M \norm{(\bar U - U)r_m}_{\tilde{\Sigma}_m + \tilde\Sigma_{m,\mathop{\mathrm{dn}}}}^2}. \end{align} Further, since $\tilde \Sigma_m \succeq \tilde \Sigma_{m,\mathop{\mathrm{dn}}}$ (\pref{thm:cov_conc}), we have $r'U' \rbr{\tilde \Sigma_m - \tilde \Sigma_{m,\mathop{\mathrm{dn}}}} Ur \ge 0$ for all $r\in \RR^{2k}$.
Thus, we can rewrite the previous inequality as: \begin{align*} \frac{1}{2}\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{Ur_m}^2_{\tilde{\Sigma}_m}} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} \\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2}\\ {} & + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2}. \end{align*} In the second inequality, we simply expand the term $\norm{Ur_m}^2_{\tilde{\Sigma}_m}$ using the relation $\hat W \hat \beta_m - W^* \beta_m^* = Ur_m$. \end{proof} We will now give a detailed proof of the bounds on each term on the right-hand side of \pref{eq:pred_error_breakup}. As stated in the main text, we select the matrix $\bar{U}$ to be an element of $\Ncal_{\epsilon}$ (the $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$) such that $\norm{\bar{U} - U}_F \le \epsilon$. \paragraph*{Proposition}[Restatement of Proposition \ref{prop:proj_cover_error}] Under \pref{assum:linear_model}, for the noise process $\{\eta_m(t)\}_{t=1}^\infty$ defined for each system, with probability at least $1-\delta_Z$, we have: \begin{align} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{2}{\delta_Z} }.
\label{eq:app_pred_error_coarse} \end{align} \begin{proof} In order to bound the term above, we use the squared loss inequality in \pref{eq:sqloss_ineq} as follows: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \le 2 \inner{Z}{\Xcal (W^*B^* - \hat{W} \hat{B})} \le 2\norm{Z}_F \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F, \end{align*} which leads to the inequality $\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \le 2 \norm{Z}_F $. Using the concentration result in \pref{prop:noise-magnitude}, with probability at least $1-\delta_Z$, we get \begin{align*} \|Z\|_F \lesssim \sqrt{MT\tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. \end{align*} Thus, we have $\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \lesssim 2 \sqrt{MT\tr{C} + \sigma^2\log \frac{1}{\delta_Z}}$ with probability at least $1-\delta_Z$, which gives: \begin{align*} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \le {} & \sum_{m=1}^M \norm{\tilde{X}_m}^2\norm{\bar{U}-U}^2 \norm{r_m}^2 \le {}\sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \norm{r_m}^2 \\ = {} & \sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \norm{Ur_m}^2 \le {}\sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \frac{\norm{\tilde{X}_m U r_m}^2}{\underbar{\lambda}_m} \\ \le {} & \boldsymbol\kappa \epsilon^2 \sum_{m=1}^M \norm{\tilde{X}_m U r_m}^2 = {} \boldsymbol\kappa \epsilon^2 \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \\ \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z}}. \end{align*} \end{proof} \vspace{-3em} \paragraph*{Proposition}[Restatement of Proposition~\ref{prop:breakup_inner_prod}] Under \pref{assum:noise} and \pref{assum:linear_model}, with probability at least $1-\delta_Z$, we have: \begin{align} \label{eq:app_breakup_inner_prod} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}}.
\end{align} \begin{proof} Using Cauchy-Schwarz inequality and \pref{prop:proj_cover_error}, we bound the term as follows: \begin{align*} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde{\eta}_m}^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} \\ \lesssim {} & \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} }}\\ \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. \end{align*} \end{proof} \paragraph*{Proposition}[Restatement of Proposition~\ref{prop:mt-self-norm}] For an arbitrary orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$ in the $\epsilon$-cover $\Ncal_\epsilon$ defined in \pref{lem:lowrank-cover}, let $\Sigma \in \mathbb{R}^{d^2 \times d^2}$ be a positive definite matrix, and define $S_{m}(\tau) = \tilde \eta_m(\tau)^\top \tilde{X}_m(\tau) \bar{U}$, $\bar{V}_m(\tau) = \bar{U}'\rbr{\tilde{\Sigma}_m(\tau) + \Sigma}\bar{U}$, and $V_0 = \bar{U}'\Sigma\bar{U}$. Then, consider the following event: \begin{align*} \Ecal_{1}(\delta_U) \coloneqq \cbr{\omega \in \Omega: \sum_{m=1}^M \norm{S_{m}(T)}_{\bar{V}_m^{-1}(T)}^2 \le 2\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}}. \end{align*} For $\Ecal_{1}(\delta_U)$, we have: \begin{align} \label{eq:app_mt-self-norm} \PP\sbr{\Ecal_1(\delta_U)} \ge 1-\rbr{\frac{6\sqrt{2k}}{\epsilon}}^{2d^2k}\delta_U. \end{align} \paragraph*{Proof of \pref{prop:mt-self-norm}} First, using the vectors $\tilde{x}_{m,j}(t)$ defined in \pref{sec:joint-learning}, for the matrix $\bar{U}$, define $\bar{x}_{m,j}(t) = \bar{U}' \tilde{x}_{m,j}(t) \in \mathbb{R}^{2k}$. It is straightforward to see that $\bar{V}_m(t) = \sum_{s=1}^t \sum_{j=1}^d \bar{x}_{m,j}(s) \bar{x}_{m,j}(s)' + V_0$. 
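For completeness, this identity follows directly from the definition of $\bar{x}_{m,j}$, assuming (consistently with the identity claimed above) that $\tilde{\Sigma}_m(t) = \sum_{s=1}^t \sum_{j=1}^d \tilde{x}_{m,j}(s)\tilde{x}_{m,j}(s)'$ denotes the empirical covariance of the stacked covariates:
\begin{align*}
\bar{V}_m(t) = \bar{U}'\rbr{\tilde{\Sigma}_m(t) + \Sigma}\bar{U} = \sum_{s=1}^t \sum_{j=1}^d \rbr{\bar{U}'\tilde{x}_{m,j}(s)}\rbr{\bar{U}'\tilde{x}_{m,j}(s)}' + \bar{U}'\Sigma\bar{U} = \sum_{s=1}^t \sum_{j=1}^d \bar{x}_{m,j}(s) \bar{x}_{m,j}(s)' + V_0.
\end{align*}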
Now, we show that the result can essentially be stated as a corollary of the following result for a univariate regression setting: \begin{lem}[Lemma 2 of \cite{hu2021near}] Consider a fixed matrix $\bar U \in \RR^{p \times 2k}$ and let $\bar V_{m}(t) = \bar U' (\sum_{s=0}^t x_m(s)x_m(s)') \bar U + \bar U' V_0 \bar U$. Consider a noise process $w_{m}(t+1) \in \RR$ adapted to the filtration $\Fcal_t = \sigma(w_m(1),\ldots,w_m(t), X_m(1), \ldots, X_m(t))$. If the noise $w_m(t)$ is conditionally sub-Gaussian for all $t$: $\EE\sbr{\exp\rbr{ \lambda \cdot w_m(t+1)} \mid \Fcal_t} \le \exp\rbr{\lambda^2 \sigma^2/2} $, then with probability at least $1-\delta$, for all $t \ge 0$, we have: \begin{align*} \sum_{m=1}^M \norm{\sum_{s=0}^t w_m(s+1) \bar U' x_m(s)}_{\bar V_m^{-1}(t)}^2 \le 2 \sigma^2 \log \rbr{\frac{\Pi_{m=1}^M \rbr{\det (\bar V_m(t))}^{1/2} \rbr{\det\rbr{\bar U' V_0 \bar U}}^{-1/2}}{\delta}}. \end{align*} \end{lem} In order to use the above result in our case, we consider the martingale sum $\sum_{t=0}^T \sum_{j=1}^d \tilde{\eta}_{m,j}(t) \bar U'\tilde{x}_{m,j}(t)$. Under \pref{assum:noise}, we can use the same argument as in the proof of Lemma 2 in \cite{hu2021near}, since \begin{align*} \EE\sbr{\exp{\rbr{\sum_{j=1}^d \frac{\eta_m(t+1)[j]}{\sigma}\inner{\lambda}{\bar{x}_{m,j}(t)} }} \,\Big|\, \Fcal_t} \le \exp{\rbr{\sum_{j=1}^d \frac{1}{2}\inner{\lambda}{\bar{x}_{m,j}(t)}^2 }}. \end{align*} Thus, for a fixed matrix $\bar U$ and $T\ge0$, with probability at least $1-\delta_U$, \begin{align*} \sum_{m=1}^M \norm{S_{m}(T)}_{\bar{V}_m^{-1}(T)}^2 \le 2\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}. \end{align*} Finally, we take a union bound over the $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$ to bound the total failure probability by $\abr{\Ncal_\epsilon}\delta_U = \rbr{\frac{6\sqrt{2k}}{\epsilon}}^{2d^2k}\delta_U$. \subsection{Putting Things Together} \label{app:final_proof} We now combine the bounds established for each term above and give the final steps in the proof of \pref{thm:estimation_error}, using the error decomposition in \pref{lem:breakup}. From \pref{lem:breakup}, we have: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2\le {} & \sqrt{2\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2}. \end{align*} Now, let $\abr{\Ncal_{\epsilon}}$ be the cardinality of the $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$ defined in \pref{lem:lowrank-cover}.
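For clarity, the display above is simply the stacked form of \pref{lem:breakup}: assuming, as used implicitly throughout this appendix, that $\Xcal$ applies each $\tilde{X}_m$ to the corresponding column of $W^*B^* - \hat{W}\hat{B}$, we have
\begin{align*}
\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 = \sum_{m=1}^M \norm{\tilde{X}_m \rbr{W^*\beta^*_m - \hat{W}\hat{\beta}_m}}_2^2,
\end{align*}
so the factor $\sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W}\hat{\beta}_m)}_2^2}$ appearing in the lemma is exactly $\sqrt{2}\,\norm{\Xcal (W^*B^* - \hat{W}\hat{B})}_F$.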
So, substituting the termwise bounds from \pref{prop:proj_cover_error}, \pref{prop:breakup_inner_prod}, and \pref{prop:mt-self-norm}, with probability at least $1-\abr{\Ncal_{\epsilon}}\delta_U - \delta_Z$, it holds that: \begin{align} \frac{1}{2}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \nonumber \\ {} & + \sqrt{\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} }} \nonumber \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. & \label{eq:app_inter_breakup} \end{align} For the matrix $V_0$, we now substitute $\Sigma = \underbar{\lambda}I_{d^2}$, which implies that $\det(V_0)^{-1} = \det\rbr{\underbar{\lambda}^{-1} I_{2k}} = \rbr{1/\underbar{\lambda}}^{2k}$. Similarly, for the matrix $\bar{V}_m(T)$, we get $\det(\bar{V}_m(T)) \le \bar{\lambda}^{2k}$. Thus, substituting $\delta_U = 3^{-1}\delta\abr{\Ncal_\epsilon}^{-1}$ and $\delta_C=3^{-1}\delta$ in \pref{thm:cov_conc}, with probability at least $1-2\delta/3$, the upper bound in \pref{prop:mt-self-norm} becomes: \begin{align*} \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2 \le {} & 2\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}} \\ \lesssim {} & \sigma^2 \log \rbr{ \frac{\bar{\lambda}}{\underbar{\lambda}} }^{2Mk} + \sigma^2 \log \rbr{ \frac{18k}{\delta \epsilon} }^{2d^2k} \\ \lesssim {} & \sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}.
\end{align*} Substituting this in \pref{eq:app_inter_breakup} with $\delta_Z = \delta/3$, with probability at least $1-\delta$, we have: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{\sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{\sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta} }} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta}} \\ \lesssim {} & \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{c^2dMT +c^2\log \frac{1}{\delta}}} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{c^2dMT + c^2\log \frac{1}{\delta}} \end{align*} Noting that $k \le d$ and $\log \frac{1}{\delta} \lesssim d^2k\log \frac{k}{\delta \epsilon}$ for $\epsilon = \frac{k}{\sqrt{\boldsymbol\kappa}dT}$, we obtain: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \rbr{\sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} }\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{c^2dMT +c^2d^2k\log \frac{k}{\delta\epsilon} }} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{c^2dMT + c^2d^2k\log \frac{k}{\delta\epsilon}} \\ \lesssim {} & \rbr{\sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{\boldsymbol\kappa dT}{\delta}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{\boldsymbol\kappa dT}{\delta}} \sqrt{c^2\rbr{\frac{k^2M}{dT} + 
\frac{k^3}{T^2}\log \frac{\boldsymbol\kappa dT}{\delta} }} \\ {} & + c^2\rbr{Mk + \frac{dk^2}{T}\log \frac{\boldsymbol\kappa dT}{\delta}} \\ \lesssim {} & \rbr{\sqrt{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k \log \frac{\boldsymbol\kappa dT}{\delta}}} }\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + c^2\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} The above quadratic inequality for the prediction error $\norm{\Xcal(W^*B^* - \hat W \hat B)}_F^2$ implies the following bound, which holds with probability at least $1-\delta$. \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}. \end{align*} Since the smallest eigenvalue of the matrix $\Sigma_m = \sum_{t=0}^T X_m(t) X_m(t)'$ is at least $\underbar \lambda$ (\pref{thm:cov_conc}), we can convert the above prediction error bound to an estimation error bound and get \begin{align*} \norm{W^*B^* - \hat W \hat B}_F^2 \lesssim {} & \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda}, \end{align*} which implies the following estimation error bound for the solution of \pref{eq:mt_opt}: \begin{align*} \sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda}. 
\end{align*} \section{Detailed Proof of the Estimation Error in \pref{thm:estimation_error_pert}} \label{app:estimation-error_pert} In this section, we provide a detailed proof of the average estimation error across the $M$ systems for the estimator in \pref{eq:mt_opt} in the presence of misspecifications $D_m \in \mathbb{R}^{d \times d}$: \begin{align*} A_m = \rbr{\sum_{i=1}^k \beta^*_m[i] W^*_i} + D_m, \quad \text{where }\norm{D_m}_F \le \zeta_m. \end{align*} Recall that, in this case, we get an additional term in the squared loss decomposition for $(\hat W, \hat B)$ that depends on the misspecifications $D_m$ as follows: \begin{align} \label{eq:app_sqloss_ineq_pert} \frac{1}{2} \sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 \le & {} \sum_{m=1}^M \inner{\tilde \eta_m}{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}} {+ \sum_{m=1}^M 2\inner{ \tilde{X}_m\tilde{D}_m}{ \tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}}. \end{align} The error in the shared part, $\hat{W}\hat{\beta}_m - W^*\beta^*_m$, can still be rewritten as $Ur_m$, where $U \in \mathbb{R}^{d^2 \times 2k}$ is a matrix containing an orthonormal basis of size $2k$ in $\mathbb{R}^{d^2}$ and $r_m \in \mathbb{R}^{2k}$ is the system-specific coefficient vector.
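For intuition, \pref{eq:app_sqloss_ineq_pert} follows from the optimality of $(\hat W, \hat B)$. Writing $\tilde y_m$ for the stacked responses of system $m$ (a notational assumption; under the misspecified model, $\tilde y_m = \tilde{X}_m \rbr{W^*\beta^*_m + \tilde{D}_m} + \tilde\eta_m$), the minimizing property of $(\hat W, \hat B)$ gives
\begin{align*}
\sum_{m=1}^M \norm{\tilde y_m - \tilde{X}_m \hat{W}\hat{\beta}_m}_2^2 \le \sum_{m=1}^M \norm{\tilde y_m - \tilde{X}_m W^*\beta^*_m}_2^2.
\end{align*}
Expanding the left-hand side around $\tilde y_m - \tilde{X}_m W^*\beta^*_m = \tilde\eta_m + \tilde{X}_m\tilde{D}_m$ and cancelling the common term $\norm{\tilde\eta_m + \tilde{X}_m\tilde{D}_m}_2^2$ yields
\begin{align*}
\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W}\hat{\beta}_m)}_2^2 \le 2\sum_{m=1}^M \inner{\tilde\eta_m + \tilde{X}_m\tilde{D}_m}{\tilde{X}_m\rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}},
\end{align*}
which, after halving, is \pref{eq:app_sqloss_ineq_pert} up to the (harmless) constant on the misspecification term.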
We now prove the squared loss decomposition result stated in \pref{lem:breakup_pert_app}: \paragraph*{Lemma} [Restatement of \pref{lem:breakup_pert_app}] Under the misspecified shared linear basis structure in \pref{eq:linear_model_pert}, for any fixed orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, the low-rank part of the total squared error can be decomposed as follows: \begin{align*} & \frac{1}{2}\sum_{m=1}^M \norm{\tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m)}_F^2 & \nonumber\\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} & \nonumber \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} {+ 2\sqrt{\bar \lambda}\bar{\zeta} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2}}. & \end{align*} \begin{proof} With $\tilde{\Sigma}_{m,\mathop{\mathrm{up}}}$ and $\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}$ defined as in \pref{app:estimation-error}, recall that $V_m = U^\top \tilde{\Sigma}_m U + U^\top\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}U$ is the regularized covariance matrix of the projected covariates $\tilde{X}_m U$.
For any orthonormal matrix $\bar{U} \in \mathbb{R}^{d^2 \times 2k}$, we define $\bar V_m = \bar U^\top \tilde{\Sigma}_m \bar U + \bar U^\top\tilde{\Sigma}_{m,\mathop{\mathrm{dn}}}\bar U$ and proceed as follows: \begin{align*} & \frac{1}{2}\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 &\\ \le {} & \sum_{m=1}^M \inner{\tilde \eta_m}{\tilde{X}_m \bar{U} r_m} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} {+ \sum_{m=1}^M 2\inner{ \tilde{X}_m\tilde{D}_m}{ \tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}} \\ \le {} & \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}\norm{r_m}_{\bar{V}_m} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} {+ \sum_{m=1}^M 2\norm{\tilde{X}_m\tilde{D}_m}_2 \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2} \\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{\sum_{m=1}^M \norm{r_m}^2_{V_m}} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{\sum_{m=1}^M \rbr{\norm{r_m}_{\bar{V}_m} - \norm{r_m}_{V_m}}^2} {+ 2\sqrt{\sum_{m=1}^M \norm{\tilde{X}_m\tilde{D}_m}_2^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2}}. \end{align*} The first inequality uses \pref{eq:app_sqloss_ineq_pert} together with the fact that the error matrix is low rank up to a misspecification term. The second inequality follows from the Cauchy-Schwarz inequality. In the last step, we apply the Cauchy-Schwarz inequality across systems and use the sub-additivity of the square root, splitting $\norm{r_m}_{\bar{V}_m}$ into $\norm{r_m}_{V_m}$ and $\norm{r_m}_{\bar{V}_m} - \norm{r_m}_{V_m}$, to rewrite the first term in two parts.
Now, we can rewrite the error as: \begin{align*} & \frac{1}{2}\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2 &\\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{Ur_m}^2_{\tilde{\Sigma}_m}} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} {+ 2\sqrt{\sum_{m=1}^M \bar \lambda_m \zeta_m^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2}} \\ \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\sqrt{2\sum_{m=1}^M \norm{ \tilde{X}_m (W^*\beta^*_m - \hat{W} \hat{\beta}_m) }_2^2} + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} {+ 2\sqrt{\bar \lambda}\bar{\zeta} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m \rbr{\hat{W}\hat{\beta}_m - W^*\beta^*_m}}_2^2}}. \end{align*} \end{proof} We will now bound each term individually. We choose the matrix $\bar{U}$ to be an element of $\Ncal_{\epsilon}$, an $\epsilon$-cover of the set of orthonormal matrices in $\mathbb{R}^{d^2 \times 2k}$. Therefore, for any $U$, there exists $\bar{U}$ such that $\norm{\bar{U} - U}_F \le \epsilon$. We can bound the size of such a cover using \pref{lem:lowrank-cover} as $|\Ncal_\epsilon| \le \rbr{\frac{6\sqrt{2k}}{\epsilon}}^{2d^2k}$.
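For later reference, taking logarithms of this cover-size bound (and using $k \le d$), the price of the union bound in \pref{prop:mt-self-norm} is
\begin{align*}
\log \abr{\Ncal_\epsilon} \lesssim d^2 k \log \frac{d}{\epsilon},
\end{align*}
so the choice of $\epsilon$ made later, which is polynomial in $1/(dT)$, contributes only a logarithmic factor in $d$ and $T$.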
\paragraph*{Proposition} [Restatement of \pref{prop:proj_cover_error_pert}] For the multi-task model specified in \pref{eq:linear_model_pert}, for the noise process $\{\eta_m(t)\}_{t=1}^\infty$ defined for each system, with probability at least $1-\delta_Z$, we have: \begin{align*} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{2}{\delta_Z} {+ \bar{\lambda}\bar{\zeta}^2} }. \end{align*} \begin{proof} In order to bound the term above, we use the squared loss inequality in \pref{eq:app_sqloss_ineq_pert} and \pref{eq:linear_model_pert} as follows: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \le {} & 2 \inner{Z}{\Xcal (W^*B^* - \hat{W} \hat{B})} {+ 2\sum_{m=1}^M \inner{\tilde{X}_m\tilde{D}_m}{\tilde{X}_m \rbr{W^*\beta^*_m - \hat{W}\hat{\beta}_m}}}\\ \le {} & 2\norm{Z}_F \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ 2 \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m\tilde{D}_m}_2^2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F}\\ \le {} & 2\norm{Z}_F \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ 2 \sqrt{\sum_{m=1}^M \bar{\lambda}_m \norm{D_m}_F^2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F} \\ \le {} & 2\norm{Z}_F \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ 2 \sqrt{\bar{\lambda} \bar \zeta^2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F}, \end{align*} which leads to the inequality $\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \le 2 \norm{Z}_F {+ 2 \sqrt{\bar{\lambda} \bar \zeta^2}}$. Using the concentration result in \pref{prop:noise-magnitude}, with probability at least $1-\delta_Z$, we get \begin{align*} \|Z\|_F \lesssim \sqrt{MT\tr{C} + \sigma^2\log \frac{1}{\delta_Z}}. \end{align*} Thus, we have $\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \lesssim 2 \sqrt{MT\tr{C} + \sigma^2\log \frac{1}{\delta_Z}} {+ 2 \sqrt{\bar{\lambda} \bar \zeta^2}}$ with probability at least $1-\delta_Z$. 
We now use this to bound the initial term: \begin{align*} \sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2 \le {} & \sum_{m=1}^M \norm{\tilde{X}_m}^2\norm{\bar{U}-U}^2 \norm{r_m}^2 \\ \le {} & \sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \norm{r_m}^2 \\ = {} & \sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \norm{Ur_m}^2 \\ \le {} & \sum_{m=1}^M \bar{\lambda}_m \epsilon^2 \frac{\norm{\tilde{X}_m U r_m}^2}{\underbar{\lambda}_m}\\ \le {} & \boldsymbol\kappa \epsilon^2 \sum_{m=1}^M \norm{\tilde{X}_m U r_m}^2 \\ = {} & \boldsymbol\kappa \epsilon^2 \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \\ \lesssim {} & \boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} {+ \bar{\lambda} \bar \zeta^2} }. \end{align*} \end{proof} \paragraph*{Proposition} [Restatement of \pref{prop:breakup_inner_prod_pert}] Under \pref{assum:noise} and the shared structure in \pref{eq:linear_model_pert}, with probability at least $1-\delta_Z$, we have: \begin{align*} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \epsilon \bar \zeta}. \end{align*} \begin{proof} Using the Cauchy-Schwarz inequality and \pref{prop:proj_cover_error_pert}, we bound the term as follows: \begin{align*} \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} \le {} & \sqrt{\sum_{m=1}^M \norm{\tilde{\eta}_m}^2} \sqrt{\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} \\ \lesssim {} & \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} {+ \bar{\lambda} \bar \zeta^2} }}\\ \lesssim {} & \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \epsilon \bar \zeta}.
\end{align*} \end{proof} \subsection{Putting Things Together} We now combine the bounds established for each term above and give the final steps in the proof of \pref{thm:estimation_error_pert}, using the error decomposition in \pref{lem:breakup_pert_app}. From \pref{lem:breakup_pert_app}, we have: \begin{align*} & \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 & \\ \le {} & \sqrt{2\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F + \sum_{m=1}^M \inner{\tilde\eta_m}{\tilde{X}_m (U-\bar{U}) r_m} & \\ {} & + \sqrt{\sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2} \sqrt{2\sum_{m=1}^M \norm{\tilde{X}_m(\bar{U}-U)r_m}^2} {+ 2\sqrt{\bar \lambda}\bar{\zeta} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F}. & \end{align*} Now, substituting the termwise bounds from \pref{prop:proj_cover_error_pert}, \pref{prop:breakup_inner_prod_pert}, and \pref{prop:mt-self-norm}, with probability at least $1-\abr{\Ncal_{\epsilon}}\delta_U - \delta_Z$, we get: \begin{align} & \frac{1}{2}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 & \nonumber\\ \lesssim {} & \sqrt{\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ \sqrt{\bar \lambda}\bar{\zeta} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F} & \nonumber \\ {} & + \sqrt{\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta_Z} {+ \bar{\lambda}\bar{\zeta}^2} }} \nonumber \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta_Z}} \epsilon \bar \zeta}.
& \label{eq:app_inter_breakup_pert} \end{align} In the definition of $V_0$, we now substitute $\Sigma = \underbar{\lambda}I_{d^2}$, thereby implying $\det(V_0)^{-1} = \det\rbr{\underbar{\lambda}^{-1} I_{2k}} = \rbr{1/\underbar{\lambda}}^{2k}$. Similarly, for the matrix $\bar{V}_m(T)$, we get $\det(\bar{V}_m(T)) \le \bar{\lambda}^{2k}$. Thus, substituting $\delta_U = \delta/\rbr{3\abr{\Ncal_\epsilon}}$ and $\delta_C=\delta/3$ (in \pref{thm:cov_conc}), with probability at least $1-2\delta/3$, we get: \begin{align*} \sum_{m=1}^M \norm{\tilde\eta_m^\top \tilde{X}_m \bar{U}}_{\bar{V}^{-1}_m}^2 \le {} & 2\sigma^2\log \rbr{\frac{\Pi_{m=1}^M\det(\bar{V}_m(T))\det (V_0)^{-1}}{\delta_U}} \\ \lesssim {} & \sigma^2 \log \rbr{ \frac{\bar{\lambda}}{\underbar{\lambda}} }^{2Mk} + \sigma^2 \log \rbr{ \frac{18k}{\delta \epsilon} }^{2d^2k} \\ \lesssim {} & \sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}. \end{align*} Substituting this in \pref{eq:app_inter_breakup_pert} with $\delta_Z = \delta/3$, with probability at least $1-\delta$, we have: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \sqrt{\sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ \sqrt{\bar \lambda}\bar{\zeta} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F} \\ {} & + \sqrt{\sigma^2Mk \log \boldsymbol\kappa_\infty + \sigma^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{MT\tr{C} +\sigma^2\log \frac{1}{\delta} {+ \bar{\lambda}\bar{\zeta}^2} }} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{MT \tr{C} + \sigma^2\log \frac{1}{\delta}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{MT \tr{C} + \sigma^2\log \frac{1}{\delta}} \epsilon \bar \zeta} \\ \lesssim {} & \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F {+ \sqrt{\bar \lambda}\bar{\zeta} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F} \\ {} &
+ \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{c^2dMT +c^2\log \frac{1}{\delta} {+ \bar{\lambda}\bar{\zeta}^2} }} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{c^2dMT + c^2\log \frac{1}{\delta}} {+\sqrt{\boldsymbol\kappa \bar\lambda} \sqrt{c^2 dMT + c^2\log \frac{1}{\delta}} \epsilon \bar \zeta} \end{align*} Noting that $k \le d$ and $\log \frac{1}{\delta} \lesssim d^2k\log \frac{k}{\delta \epsilon}$, by setting $\epsilon = \frac{k}{\sqrt{\boldsymbol\kappa}dT}$ we have: \begin{align*} \frac{1}{2} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim {} & \rbr{\sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} {+ \sqrt{\bar \lambda}\bar{\zeta}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{k}{\delta \epsilon}} \sqrt{\boldsymbol\kappa\epsilon^2 \rbr{c^2dMT +c^2d^2k\log \frac{k}{\delta\epsilon} {+ \bar{\lambda}\bar{\zeta}^2} }} \\ {} & + \sqrt{\boldsymbol\kappa} \epsilon \rbr{c^2dMT + c^2d^2k\log \frac{k}{\delta\epsilon}} {+ \sqrt{\rbr{c^2 dMT + c^2d^2k\log \frac{k}{\delta \epsilon}} \boldsymbol\kappa \bar \lambda \epsilon^2 \bar \zeta^2}} \\ \lesssim {} & \rbr{\sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{\boldsymbol\kappa dT}{\delta}} {+ \sqrt{\bar \lambda}\bar{\zeta}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + \sqrt{c^2Mk \log \boldsymbol\kappa_\infty + c^2 d^2k \log \frac{\boldsymbol\kappa dT}{\delta}} \sqrt{c^2\rbr{\frac{k^2M}{dT} + \frac{k^3}{T^2}\log \frac{\boldsymbol\kappa dT}{\delta} {+ \frac{\bar\lambda k^2 \bar \zeta^2}{d^2T^2} }}} \\ {} & + c^2\rbr{Mk + \frac{dk^2}{T}\log \frac{\boldsymbol\kappa dT}{\delta}} {+ \sqrt{c^2\rbr{\frac{k^2M}{dT} + \frac{k^3}{T^2}\log \frac{\boldsymbol\kappa dT}{\delta}} \bar \lambda \bar \zeta^2}} \\ \lesssim {} & \rbr{\sqrt{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k \log \frac{\boldsymbol\kappa dT}{\delta}}} {+ 
\sqrt{\bar \lambda}\bar{\zeta}}}\norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F \\ {} & + c^2\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}} {+ c \sqrt{\frac{\bar \lambda \bar \zeta^2}{T}\rbr{Mk \log \boldsymbol\kappa_\infty + \frac{d^2k}{T}\log \frac{\boldsymbol\kappa dT}{\delta}}}}. \end{align*} The quadratic inequality for the prediction error $\norm{\Xcal(W^*B^* - \hat W \hat B)}_F^2$ implies the following bound that holds with probability at least $1-\delta$: \begin{align*} \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F^2 \lesssim c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}} + {\bar \lambda \bar \zeta^2}. \end{align*} Since $\underbar \lambda = \min_m \underbar \lambda_m$, we can convert the prediction error bound to an estimation error bound as follows: \begin{align*} \norm{W^*B^* - \hat W \hat B}_F^2 \lesssim {} & \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda} + {\boldsymbol\kappa_\infty \bar \zeta^2}, \end{align*} which finally implies the estimation error bound for the solution of \pref{eq:mt_opt}: \begin{align*} \sum_{m=1}^M \norm{\hat A_m - A_m}_F^2 \lesssim \frac{c^2\rbr{Mk \log \boldsymbol\kappa_\infty + d^2k\log \frac{\boldsymbol\kappa dT}{\delta}}}{\underbar \lambda} + {(\boldsymbol\kappa_\infty + 1) \bar \zeta^2}. \end{align*}
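For completeness, the resolution of the quadratic inequality used above is the elementary fact that, for nonnegative $x$, $b$, $c$ (here $x = \norm{\Xcal (W^*B^* - \hat{W} \hat{B})}_F$, with $b$ and $c$ read off from the preceding display),

```latex
\[
x^2 \lesssim b\,x + c
\quad\Longrightarrow\quad
x \lesssim b + \sqrt{c}
\quad\Longrightarrow\quad
x^2 \lesssim b^2 + c,
\]
```

with the absolute constants absorbed into $\lesssim$.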
\section{Introduction} With the physical pion mass now accessible in lattice QCD simulations, along with larger physical volumes and several lattice spacings and actions to estimate the associated systematic errors, lattice QCD calculations are maturing, particularly for flavor physics, where the quark masses, heavy-light decay constants, CKM matrix elements, and the strong coupling constant are reviewed and averaged by FLAG~\cite{Aoki:2016frl}. On the other hand, baryon physics is not as settled as that of mesons. Part of the reason is illustrated by the Parisi-Lepage analysis of the signal-to-noise ratio of the nucleon two-point function. Since the variance of the nucleon propagator has three pions as the lowest state in the correlator, the signal-to-noise (S/N) ratio is proportional to $e^{-(m_N - \frac{3}{2} m_{\pi})t}$~\cite{Parisi:1983ae,Lepage:1989hd} and degrades exponentially with $t$, noticeably so when the pion mass in the lattice calculation is close to the physical one. This is why baryon physics is noisier than meson physics. One special problem, associated with the correlators of mesons involving annihilation channels or with glueballs, is that the signal falls off exponentially with time, but the noise remains constant. Thus, after a certain time separation, the signal falls below the noise and succumbs to the sign problem. Another aspect of the disconnected insertion (DI) is observed in DI three-point functions involving a quark loop, or the topological charge in the calculation of the neutron electric dipole moment (nEDM) from the $\theta$ term: the fluctuations of the quark loop and of the topological charge are proportional to $\sqrt{V}$, which poses a challenge for calculations on large-volume lattices.
In this work, we shall show that the constant error and the $\sqrt{V}$ fluctuation in the DI have the same origin, and that they can be ameliorated with the help of the cluster decomposition principle, so that the S/N ratio is improved by a factor of $\sqrt{V/V_{R_s}}$, where $V_{R_s}$ is the volume of a sphere whose radius $R_s$ is the effective correlation length between the operators. \section{Cluster decomposition principle and variance reduction} One often invokes locality to argue that experiments conducted on Earth are not affected by events on the Moon. This is a consequence of the cluster decomposition principle (CDP): if color-singlet operators in a correlator are separated by a large enough space-like distance, the correlator vanishes. In other words, the operators are uncorrelated in this circumstance. To be specific, it is shown~\cite{Araki:1962zhd} that under the assumptions of translation invariance, stability of the vacuum, existence of a lowest non-zero mass and local commutativity, one has \begin{equation} \label{CDP} \begin{split} &|\langle0|\mathcal{B}_1(x_1)\mathcal{B}_2(x_2)|0\rangle_s |\leq A r^{-\frac{3}{2}}e^{-Mr} \end{split} \end{equation} for a large enough space-like distance \mbox{$r=|x_1-x_2|$,} where $\langle0|\mathcal{B}_1(x_1)\mathcal{B}_2(x_2)|0\rangle_s\equiv\langle0|\mathcal{B}_1(x_1)\mathcal{B}_2(x_2)|0\rangle -\langle 0|\mathcal{B}_1(x_1)|0\rangle \langle 0|\mathcal{B}_2(x_2)|0\rangle$ is the vacuum-subtracted correlation function, $\mathcal{B}_1(x_1)$ and $\mathcal{B}_2(x_2)$ are two color-singlet operator clusters whose centers are at $x_1$ and $x_2$ respectively, $M$ is the smallest non-zero mass in the correlator, and $A$ is a constant. This is the asymptotic behavior of the boson propagator $K_1(Mr)/r$. It means that the correlation between two operator clusters at a large enough space-like separation $r$ tends to zero at least as fast as $r^{-\frac{3}{2}}e^{-Mr}$.
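The power of $r$ in the bound can be read off from the standard large-argument asymptotics of the modified Bessel function, recorded here for completeness:

```latex
\[
K_1(Mr) \sim \sqrt{\frac{\pi}{2Mr}}\, e^{-Mr} \quad (Mr \gg 1)
\qquad\Longrightarrow\qquad
\frac{K_1(Mr)}{r} \sim \sqrt{\frac{\pi}{2M}}\; r^{-\frac{3}{2}}\, e^{-Mr}.
\]
```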
Given that the longest correlation length in QCD is $1/m_{\pi}$, one has $M \ge m_{\pi}$. Since the Euclidean separation is always `space-like', the cluster decomposition principle (CDP) is applicable to Euclidean correlators. Some recent attempts to reduce the variances in calculations of the strangeness in the nucleon~\cite{Bali:2009dz}, the $\rho$ meson mass~\cite{Chen:2015tpa}, the light-by-light contribution to the muon $g-2$~\cite{Blum:2015gfa}, the factorization of the fermion determinant~\cite{Ce:2016ajy}, and the reweighting of the nEDM calculation with the topological charge density~\cite{Shintani:2015vsx} have applied concepts similar or related to the CDP. In this work, we prove that, by applying the CDP explicitly, the error of a DI correlator can be reduced by a factor of $\sqrt{V/V_{R_s}}$. In evaluating the correlators, one often takes a volume sum over the three-dimensional coordinates. To estimate at what distance the large-distance behavior saturates, we integrate the fall-off up to a cutoff distance $R$, \begin{eqnarray} \label{integral} \int_0^{R}\!\!\! d^3r\,r^{-\frac{3}{2}}e^{-Mr} \!= \!4\pi \!\left(\!\!\frac{\sqrt{\pi}{\rm erf}(\sqrt{MR})}{2M^{\frac{3}{2}}}\!-\!\frac{\sqrt{R}e^{-MR}}{M}\!\right)\!\!, \end{eqnarray} where ${\rm erf}$ is the error function. Since the kernel of the integral decays very quickly, the integral has already gained more than $99.5\%$ of its total value at $R=8/M$. Assuming the fall-off behavior dominates the volume-integrated correlator, we take $R_s\sim\frac{8}{M}$ as an effective cutoff, so that correlations with separation $r> R_s$ make a negligible contribution.
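The saturation estimate can be checked numerically. The sketch below (plain Python; the function name is ours, not the paper's) evaluates the closed form in Eq.~(\ref{integral}) with $M=1$ in arbitrary units:

```python
import math

def volume_integral(R, M=1.0):
    # 4*pi * ( sqrt(pi)*erf(sqrt(M*R)) / (2*M^(3/2)) - sqrt(R)*exp(-M*R) / M ),
    # i.e. the integral of r^(-3/2) * exp(-M*r) over a ball of radius R
    return 4.0 * math.pi * (
        math.sqrt(math.pi) * math.erf(math.sqrt(M * R)) / (2.0 * M ** 1.5)
        - math.sqrt(R) * math.exp(-M * R) / M
    )

total = volume_integral(1e3)             # R -> infinity for practical purposes
fraction = volume_integral(8.0) / total  # fraction accumulated by R = 8/M
print(f"fraction of the integral captured at R = 8/M: {fraction:.4f}")
```

in agreement with the $99.5\%$ figure quoted above.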
To test the cluster decomposition principle with lattice data, we consider the two-point correlator at fixed $t$ with a cutoff $R$ on the relative coordinate between the two color-singlet operators $\mathcal{O}_1$ and $\mathcal{O}_2$, \begin{equation} \label{general_form} C(R,t)=\frac{1}{V}\langle \sum_{\vec{x}}\sum_{r<R}\mathcal{O}_1(\vec{x}+\vec{r'},t)\, \mathcal{O}_2(\vec{x},0)\rangle, \end{equation} where $r = \sqrt{|\vec{r'}|^2 + t^2}$. The correlation functions in the present work are calculated using valence overlap fermions on the RBC-UKQCD $2 + 1$ flavor domain-wall configurations. More detailed definitions and numerical implementations can be found in previous works~\cite{Li:2010pw,Gong:2013vja,Sufian:2016pex,Yang:2015zja}. We first examine the nucleon two-point function on the $48^3\times 96$ lattice (48I) with the physical sea quark mass~\cite{Blum:2014tka}. We use 3 valence quark masses corresponding to pion masses of $70$ MeV, $149$ MeV and $260$ MeV respectively. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/N2pf.pdf} \caption{Nucleon two-point functions at $t=9$ for three different valence quark masses as a function of the cutoff $R$.} \label{fig_twopoint} \end{figure} The results for the nucleon correlators at $t = 9$ for three different valence quark masses are plotted in Fig.~\ref{fig_twopoint} as a function of $R$, the cutoff on the Euclidean distance $r$ between the point source and the sink. We see that the nucleon correlator essentially saturates after \mbox{$R \sim 15$} ($1.71$ fm with $a = 0.114$ fm) for all three cases. This agrees well with our earlier estimate of a saturation radius $R_s = 8/M$, which corresponds to $\sim 1.66$ fm. This shows that the CDP works and that Eq.~(\ref{CDP}) gives a good estimate of $R_s$. Since the signal of the correlator falls off exponentially with $r$, summing over $r$ beyond the saturation radius $R_s$ does not change the signal and only gathers noise.
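As a cross-check of the quoted numbers, one can convert $R_s = 8/M$ to physical and lattice units (the constants below are our assumptions: $\hbar c = 0.1973$ GeV$\,$fm, and a nucleon mass of $0.94$ GeV standing in for $M$):

```python
HBARC = 0.1973   # GeV * fm
M_N = 0.94       # GeV; approximate nucleon mass (our assumption for M)
a = 0.114        # fm; 48I lattice spacing quoted in the text

R_s_fm = 8.0 * HBARC / M_N   # saturation radius 8/M converted to fm
R_s_lat = R_s_fm / a         # the same radius in lattice units
print(f"R_s = {R_s_fm:.2f} fm = {R_s_lat:.1f} lattice units")
```

consistent with the quoted $\sim 1.66$ fm and the observed saturation near $R \sim 15$.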
Let us next consider the disconnected insertion and see how the S/N ratio can be improved with this observation. In the case of the DI, the variance of the correlator in Eq.~(\ref{general_form}), \mbox{$\frac{1}{V^2}\langle |\sum_{\vec{x}}\sum_{r<R}\mathcal{O}_1(\vec{x}+\vec{r'},t)\, \mathcal{O}_2(\vec{x},0)|^2\rangle$}, can have a vacuum insertion in addition to the exponential fall-off in $t$ due to the $\mathcal{O}^{\dagger}\mathcal{O}$ operator. \begin{eqnarray} \label{Variance} V\!ar(R,t) &=&\frac{1}{V^2}\sum_{\vec{x},\vec{y}}\left(\langle \sum_{r_1 <R} \mathcal{O}_1(\vec{x}+\vec{r_1'},t) \sum_{r_2 <R}\mathcal{O}_1^{\dagger}(\vec{y}+\vec{r'_2},t)\rangle\right. \nonumber \\ &\cdot& \left.\langle \mathcal{O}_2(\vec{x},0) \mathcal{O}_2^{\dagger}(\vec{y},0)\rangle\right) + ...., \end{eqnarray} where $r_1 = \sqrt{|\vec{r_1'}|^2 + t^2}$ and $r_2 = \sqrt{|\vec{r_2'}|^2 + t^2}$ respectively. When $\vec{r_1'}$ and $\vec{r_2'}$ are integrated over the whole lattice volume, the sums over the positions $\vec{x}, \vec{y}, \vec{x}+\vec{r'_1}$ and $\vec{y}+\vec{r'_2}$ can be carried out independently. Consequently, $\mathcal{O}_1$ and $\mathcal{O}_2$ in the DI fluctuate independently, which leads to a variance that is the product of their respective variances. In this case, the leading vacuum insertion is a constant, independent of $t$. This is the reason why the noise remains constant in $t$ in the DI. On the other hand, the constant variance is reduced by a factor of $V_{R_s}/V$ when $r$ is integrated only up to $R_s$, while the signal is not compromised. The sub-leading contribution (denoted by ...) in Eq.~(\ref{Variance}) decays exponentially in $t$ with a mass in the scalar channel. It is clear that, to leading order, the ratio of the cutoff S/N at $R_s$ to that without a cutoff is \begin{equation} \label{S/N_ratio} \frac{S/N(R_s)}{S/N(L)} \sim \sqrt{\frac{V}{\,\,\,V_{R_s}}}.
\end{equation} We shall consider several DI examples involving volume summations over two or more coordinates. Since the convoluted sum over a relative coordinate in Eq.~(\ref{general_form}) can be expensive, we invoke the standard convolution theorem and calculate the product of the two transformed functions, $\tilde{K}(\vec{p},t)=\tilde{\mathcal{O}}_1(-\vec{p})\tilde{\mathcal{O}}_2(\vec{p}),$ where $\tilde{\mathcal{O}}_1(-\vec{p})$ and $\tilde{\mathcal{O}}_2(\vec{p})$ are the Fourier transforms of $\mathcal{O}_1 (\vec{x})$ and $\mathcal{O}_2 (\vec{x})$, respectively, in each configuration on their respective time slices. Then \begin{equation} \label{general_form_integral_2} C(R,t)=\langle \int_{r<R}d\vec{r'}K(\vec{r'},t)\rangle, \end{equation} where $K(\vec{r'},t)$ is the inverse Fourier transform of $\tilde{K}(\vec{p},t)$. In this way, the cost of the double summation, which is of order $V^2$, is reduced to that of the fast Fourier transform (FFT), which is of order $V\,{\rm log}\,V$. \subsection{Scalar matrix element of the strange quark} The first example is the disconnected insertion for the nucleon matrix element with a scalar loop, which involves a three-point function and can be expressed as \begin{equation} \label{eq_dis_ins} C_3(R,\tau,t)=\langle \sum_{\vec{x}}\sum_{r<R}\mathcal{O}_N(\vec{x},t) S(\vec{x}+\vec{r'},\tau)\bar{\mathcal{O}}_N(\mathcal{G},0)\rangle, \end{equation} where $S$ is the vacuum-subtracted scalar loop and $\mathcal{G}$ denotes the source grid used to increase statistics~\cite{Li:2010pw}. Note that here $r = \sqrt{|\vec{r'}|^2 + (t-\tau)^2}$ is the 4-D distance between the quark loop and the sink. Since the low modes dominate the strangeness in the nucleon~\cite{Yang:2015uis}, we calculate the strange quark loop with low modes only to illustrate the CDP effect.
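The convolution-theorem shortcut can be illustrated in one dimension with a hand-rolled DFT (plain Python on toy data; a production code would use a real FFT library, and the sign convention differs only trivially from the one in the text):

```python
import cmath
import random

N = 16
random.seed(0)
O1 = [random.gauss(0.0, 1.0) for _ in range(N)]
O2 = [random.gauss(0.0, 1.0) for _ in range(N)]

def dft(f):
    # naive discrete Fourier transform, O(N^2); stands in for an FFT
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * p * x / N) for x in range(N))
            for p in range(N)]

# direct evaluation of K(r) = sum_x O1(x + r) O2(x), periodic boundary
K_direct = [sum(O1[(x + r) % N] * O2[x] for x in range(N)) for r in range(N)]

# convolution theorem: K~(p) = O1~(p) O2~(-p); invert to recover K(r)
F1, F2 = dft(O1), dft(O2)
K_fourier = [sum(F1[p] * F2[-p % N] * cmath.exp(2j * cmath.pi * p * r / N)
                 for p in range(N)).real / N
             for r in range(N)]

diff = max(abs(u - v) for u, v in zip(K_direct, K_fourier))
print("max |direct - fourier| =", diff)
```

The point of Eq.~(\ref{general_form_integral_2}) is the same in three (or four) dimensions: the $O(V^2)$ double sum becomes an $O(V\log V)$ transform.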
The sum over the spatial relative coordinate between the scalar quark loop $S$ and the sink interpolation operator $\mathcal{O}_N(\vec{x},t)$ is carried out through the convolution in Eq.~(\ref{general_form_integral_2}). This calculation is done on the domain-wall $32^3\times64$ (32ID) lattice \cite{Blum:2014tka} with a pion mass of $\sim170$ MeV and a lattice size of $4.6$ fm. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/CI_S_1.pdf} \caption{The value and error of $C_3/C_2 (R, \tau=5,t=10)$ are plotted in the upper panel as a function of $R$. The lower panel displays the three-point function in Eq.~(\ref{eq_dis_ins}) as a function of $r$ without summing over it. The green band shows the signal and the blue band the error.} \label{CDP} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/CI_S_2.pdf} \caption{DI calculation for the strange scalar matrix element in the nucleon as a function of $\tau -t/2$. For each of the source-sink separations at 1.00 fm (upper panel) and 1.57 fm (lower panel), two results with a cutoff of $R_s=27$ and $R_s=12$ are plotted.} \label{fig_scalar1} \end{figure} The upper panel of Fig.~\ref{CDP} gives the value and the error of the ratio of the three- to two-point functions $C_3/C_2(R,\tau,t)$ in Eq.~(\ref{eq_dis_ins}) as a function of $R$ at $\tau = 4$ and $t = 9$. The nucleon source-sink separation is $1.29$ fm in this case. We see that the error grows once $R$ is greater than $\sim 12\, (\sim 1.7\, {\rm fm})$, while the central value remains constant within errors. This behavior reflects the fact that the signal without summing over $|\vec{r'}|$ falls off exponentially with $r$, while the error remains constant, as shown in the lower panel.
Fig.~\ref{fig_scalar1} shows the disconnected three-point to two-point function ratio used to obtain the scalar matrix element of the strange quark in the nucleon, as a function of $\tau -t/2$ for two source-sink separations, 1.00 fm (upper panel) and 1.57 fm (lower panel). Two results, with cutoffs of $R_s = 27$ and $R_s = 12$, are plotted; $R_s$ is the cutoff radius for the relative coordinate between the sink and the quark loop in the spatial sum. The central values for the two cutoffs are consistent within errors, while the errors with cutoff $R_s = 12$ are smaller than those with cutoff $27$, which includes the whole spatial volume, by a factor of 4 or so. Thus, cutting off the spatial sum at the saturation distance is equivalent to gaining $\sim 16$ times more statistics. \subsection{Glueball mass} Next, we consider the glueball correlators in Eq.~(\ref{general_form}) on the 48I lattice with $La = 5.5$ fm. The correlators from the scalar $E^2$ and $B^2$ operators and the pseudoscalar $E\cdot B$ operator are presented in Fig.~\ref{fig_glueball}, where they are plotted as a function of $R$ in Eq.~(\ref{general_form}) at $t=4$. We note that the scalar correlators saturate after $R\sim9\, (\sim 1.0\, {\rm fm})$ and the pseudoscalar one saturates after $R\sim12\, (\sim 1.4\, {\rm fm})$, which can be understood in terms of the different ground-state masses in these two channels. Comparing the error at $R=9$ with that at $R = 24$ (the latter includes the whole spatial volume) for the scalar case, the error is reduced by a factor of $\sim 4$, in reasonable agreement with the prediction of $\sim(\frac{24}{9})^\frac{3}{2} = 4.4 $ from Eq.~(\ref{S/N_ratio}). For the pseudoscalar case, the improvement is around a factor of 3 and is consistent with the estimate of $\sim(\frac{24}{12})^\frac{3}{2}=2.8$.
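The quoted error-reduction factors follow directly from Eq.~(\ref{S/N_ratio}) for a three-dimensional spatial sum, as the following check shows:

```python
def sn_gain(L, R_s):
    # S/N(R_s) / S/N(L) ~ sqrt(V / V_{R_s}) = (L / R_s)^(3/2) for a 3-D sum
    return (L / R_s) ** 1.5

print(round(sn_gain(24, 9), 1))    # scalar channel: ~4.4
print(round(sn_gain(24, 12), 1))   # pseudoscalar channel: ~2.8
```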
\begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figures/G2pf.pdf} \caption{Scalar (operators $E^2$ and $B^2$) and pseudoscalar ($E\cdot B$) glueball correlators at $t=4$ as a function of the cutoff $R$.} \label{fig_glueball} \end{figure} \begin{figure}[b] \centering \includegraphics[width=0.4\textwidth]{figures/48_alpha_new.pdf} \caption{The CP-violation phase $\alpha^1$ calculated on the 48I lattice as a function of the cutoff $R$. For each $R$, the value is averaged from $t=6$ to 13.} \label{fig_alpah1} \end{figure} \subsection{Neutron electric dipole moment} Finally, we examine the CP-violation phase $\alpha^1$ on the same 48I lattice, which is needed for calculating the neutron electric dipole moment (nEDM). The phase is defined as \begin{equation} \alpha^1=\frac{{\rm Tr}[C_{3\mathcal{Q}}(t)\gamma_5]}{{\rm Tr}[C_2(t)\Gamma_e]} \end{equation} for large enough $t$, where $C_2(t)$ is the usual nucleon two-point function, $\Gamma_e=\frac{1+\gamma_4}{2}$ is the parity projector, and $C_{3\mathcal{Q}}(t)$ is the nucleon propagator weighted with the total topological charge $\mathcal{Q}$, \begin{equation} \label{Q} C_{3\mathcal{Q}}(t)=\langle \sum_{\vec{x}}\mathcal{O}_N(\vec{x},t) \bar{\mathcal{O}}_N(\mathcal{G},0)\mathcal{Q}\rangle. \end{equation} We can turn the total topological charge into a sum over its density, i.e. $\mathcal{Q} = \sum_x q(x)$, where we use the plaquette definition of $q(x)$. Then the expression for $C_{3\mathcal{Q}}(t)$ with a cutoff $R$ can be cast in the same form as Eq.~(\ref{eq_dis_ins}), except that the scalar quark loop $S$ is replaced with the local topological charge $q(x)$, and the sum over the topological charge density is over the 4-dimensional sphere of radius $R$. The result for $\alpha^1$ as a function of $R$ in Fig.~\ref{fig_alpah1} shows that the signal saturates after $R\sim16$.
Cutting off the sum over $q(x)$ at this $R$ leads to a factor of $\sim 3.6$ reduction in the error compared to the case of reweighting with the total topological charge as in Eq.~(\ref{Q}). This example indicates that our method employing the CDP can also improve the S/N for four-dimensional sums. As we noted in the introduction, the nEDM from the $\theta$ term suffers from a $\sqrt{V}$ problem. We have shown here that it is related to the vacuum insertion in the variance. The problem is resolved by turning the topological charge into a 4-D sum over the local charge density and applying the CDP by cutting off the relative 4-D distance in the sum. \section{Systematic and statistical errors} So far, we have taken a simple cutoff $R_s = 8/M$ to illustrate the efficacy of the variance reduction. This ad hoc choice inevitably incurs a systematic error. Since the asymptotic behavior of the integral of the correlator as a function of the separation $R$ is similar to that of the effective mass, we fit it as such and apply the Akaike information criterion (AIC)~\cite{akaike74} to obtain the statistical error and the estimated systematic error from analyses with different fitting windows and models. The details of the application of the AIC to the strange matrix element are provided in the Appendix as an example. It turns out that both the systematic and statistical errors are stable against multiple choices of windows and two fitting models. The statistical error is close to that at the cutoff distance where the plateau emerges (i.e. $R_s = 8/M$). Using a representative fit with 2 fitting formulas and 80 combinations of 8 data points each, for a total of 160 fits, and 100 bootstrap samplings, we obtain the value of the strange matrix element considered earlier to be \mbox{0.160 (15) (15).} The statistical error (first) and the systematic error (second) are practically the same.
This is to be compared with the original value of 0.143(45) obtained without taking the CDP into account. The systematic error is to be added to the total systematic error of the calculation. \section{Discussion and summary} Regarding the nucleon correlator in Fig.~\ref{fig_twopoint}, we notice that there is no conspicuous increase of the error as a function of $R$. This is because, unlike the DI, the variance of the CI does not have a vacuum insertion. The leading contribution to the variance is expected to fall off as $e^{- 3 m_{\pi} r}$, which has a longer range than the signal, which falls off with the nucleon mass. Therefore, in principle, one would expect some gain in the S/N when $R$ is cut off at $R_s$. The corresponding ratio of S/N in Eq.~(\ref{S/N_ratio}) is \begin{equation} \label{S/N_ratio_CI} \frac{S/N(R_s)}{S/N(L)} \sim \sqrt{\frac{A(L/2, 3m_{\pi})}{A(R_s, 3m_{\pi})}}, \end{equation} where $A(R, M)$ denotes the volume integral of the corresponding fall-off, as in Eq.~(\ref{integral}), up to radius $R$. For the 48I lattice in Fig.~\ref{fig_twopoint}, this ratio is 1.13 for the physical pion mass with the cutoff $R_s = 8/M$. This is not nearly as large a gain as in the DI, where the variance is dominated by the vacuum insertion. In the CI case, the noise saturates at $\sim 8/(3 m_{\pi})= 3.5$ fm; there is no gain for a lattice with a size larger than this. In summary, we have shown that the exponential fall-off of the cluster decomposition principle (CDP) holds numerically for the several correlators that we examined. For the disconnected insertions (DI), we find that the vacuum insertion dominates the variance, so that the relevant operators fluctuate independently and the variance is independent of the time separation. This explains why the signal falls off exponentially, while the error remains constant, in the DI.
To demonstrate the efficacy of employing the CDP to reduce the variance, we have restricted the volume sum over the relative coordinate between the operators to the saturation radius $R_s$, and have shown that there is an effective gain of $V/V_{R_s}$ in statistics without compromising the signal. This applies to all DI cases. For the cases we have considered, namely the glueball mass, the strangeness content of the nucleon, and the CP-violation angle in the nucleon due to the $\theta$ term, we found that for lattices with physical sizes of 4.5 -- 5.5 fm, the errors of these quantities can be reduced by a factor of 3 to 4. We have applied the Akaike information criterion (AIC)~\cite{akaike74} in the Appendix to estimate the systematic and statistical errors incurred by applying the CDP. For the strangeness content, we find that the systematic error is practically the same size as the statistical one when the cluster decomposition is taken into account. This results in a 2 to 3 times reduction of the overall error. For the connected insertions, there is no vacuum insertion in the variance, so the gain in statistics is limited. \section {Acknowledgement} This work is supported in part by the U.S. DOE Grant $\text{No.}$ $\text{DE-SC}0013065$. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) Stampede at TACC through allocation MCA01S007, which is supported by National Science Foundation grant number ACI-1053575~\cite{XSEDE}. We also thank the National Energy Research Scientific Computing Center (NERSC) for providing HPC resources that have contributed to the research results reported within this paper.
We acknowledge the facilities of the USQCD Collaboration used for part of this research, which are funded by the Office of Science of the U.S. Department of Energy. \section{Appendix} \subsection{Error estimate using the Akaike information criterion (AIC)} The AIC is founded in information theory. It is an estimator of the relative quality of the statistical models used to describe a given set of data~\cite{akaike74,AIC}. \\ The usual task is to fit data generated by some unknown process (function) $f$ with different trial models. However, in practice we cannot tell with certainty which model is a better representation of $f$, because we do not know $f$. Akaike (1974) showed that we can estimate, via the AIC, how much more (or less) information is lost when comparing one model to another. The estimate, though, is only valid asymptotically. If the number of data points is small, some correction is often necessary (e.g., AICc)~\cite{AIC}. \subsection{Definition of the AIC value} \hspace{0.4cm} Consider a model $M$ for some data $x$ with $k$ parameters. The maximum value of the likelihood function for the model, $L_{\max}$, is represented by the probability \begin{equation} L_{\max} = P (x | \vec{\theta}, M), \end{equation} where $\vec{\theta}$ is the parameter vector that maximizes the likelihood function. The maximum likelihood is related to the minimum $\chi^2$ value of the standard fit via \begin{equation} L_{\max} = e^{- \frac{\chi_{\min}^2}{2}} . \end{equation} \hspace{0.4cm} The AIC value of the model is defined as (see Ref.~\cite{AIC} and the references therein for details) \begin{equation} {\rm AIC} = 2 k - 2 \ln (L_{\max}) = 2 k + \chi^2_{\min}. \end{equation} The criterion favors models with the minimum AIC value and penalizes those with many fitting parameters. \subsection{Practical application} \hspace{0.4cm} In practice, we take a weighted average over all the models, as is done in Refs.~\cite{Borsanyi:2014jba,Durr:2015dna}.
The normalized weight for each model is \begin{equation} \label{weight} w_{i} = \frac{e^{- \frac{{\rm AIC}_i}{2}}}{\sum_j e^{- \frac{{\rm AIC}_j}{2}}}, \end{equation} where $i$ is the index of the model. \\ When handling the systematic errors of lattice calculations, we usually need to take into consideration various fitting formulas with different combinations of data points used in the fit. The AIC method can be helpful in these cases. Consider a case where we have $N$ configurations and, for each configuration, $M$ data points (e.g. different time separations $t$ of a hadron two-point correlator). We plan to use $P$ models to fit the data (the number $P$ includes different fit ranges and different combinations of data points) in order to obtain the mean value, the systematic error and the statistical error of some model parameter (e.g. the mass of the ground state). The detailed procedure is as follows: \begin{enumerate} \item The mean value is the weighted average over all $P$ fit models (formulas and combinations of data points), \begin{equation} \bar{x} = \sum_{i = 1}^P w_i x_i, \end{equation} where $x_i$ is the fitting result of each model and $w_i$ is the normalized weight in Eq.~(\ref{weight}). \item The systematic error is taken to be the standard error of the weighted mean, \begin{equation} \sigma_{\rm sys} = \sqrt{\sum_{i = 1}^P w_i \sigma_i^2}, \hspace{1cm} \sigma_i = (x_i - \bar{x}). \end{equation} \item The statistical error is obtained from bootstrap resampling. We first perform $N_b$ bootstrap resamplings; in each bootstrap sample, we fit the $P$ models and save the weighted mean value. This yields $N_b$ AIC-weighted mean values, and the bootstrap error of these weighted means gives the final statistical error. \end{enumerate} In summary, we need to perform $ (1 + N_b) \times P$ correlated fits to obtain all the relevant results.
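The weighting and averaging steps above can be sketched in a few lines (plain Python; the $\chi^2$ values and fit results are illustrative toy numbers, not the data of this work):

```python
import math

# toy fit results: (chi2_min, n_params, fitted_value) for each of P models
fits = [(3.1, 1, 0.162), (2.8, 2, 0.158), (7.5, 2, 0.171)]

aic = [2 * k + chi2 for chi2, k, _ in fits]            # AIC = 2k + chi2_min
norm = sum(math.exp(-a / 2.0) for a in aic)
w = [math.exp(-a / 2.0) / norm for a in aic]           # normalized AIC weights

x_bar = sum(wi * x for wi, (_, _, x) in zip(w, fits))  # weighted mean
sigma_sys = math.sqrt(sum(wi * (x - x_bar) ** 2        # systematic error
                          for wi, (_, _, x) in zip(w, fits)))
print(f"mean = {x_bar:.3f}, sigma_sys = {sigma_sys:.4f}")
```

The statistical error would come from repeating this weighted average on each of the $N_b$ bootstrap samples, as in step 3.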
\subsection{The cluster decomposition case} In the cluster decomposition case, we consider the strange matrix element as a function of the cutoff radius $R$. We can use the AIC method to estimate the mean value, the systematic error and the statistical error of the ratio. To apply the AIC method to this particular case, we first need to determine our fitting formulas. In view of the fact that the ratio of the three-point function to the two-point function falls off exponentially as a function of the relative separation between the quark loop and the sink of the nucleon propagator, the accumulated sum up to a cutoff $R$ is expected to be constant beyond a certain $R$, such as $R_s = 8/M$, as illustrated in the upper panel of Fig.~\ref{CDP}. This is much like fitting the effective-mass plot to isolate the ground state. The two formulas (models) we use are \begin{equation} f_1 (R) = C_0 \end{equation} and \begin{equation} f_2 (R) = C_0 + C_1 \frac{\sqrt{R} e^{- m R}}{m} . \end{equation} The first is the asymptotic form for $R \rightarrow \infty$, and the second is the form commensurate with the cluster decomposition principle, which covers a larger range of $R$ in the fit. Next, we need to choose the combinations of data points. To make sure that every combination has the same weight, we set the number of data points (denoted $N_d$) in each combination to be equal. To enlarge the number of different combinations (denoted $N_c$), we do not force the points in each combination to be contiguous. (E.g., if the total range of data points is $R \in [8, 27]$ and we set $N_d = 4$, $N_c = 5$, the possible combinations can be $[8, 9, 12, 15]$, $[15, 18, 21, 22]$, $[15, 16, 17, 18]$, $[10, 12, 20, 22]$ and $[20, 22, 24, 27]$.) If $N_c$ is large enough, the combinations will include both contiguous and noncontiguous sets of data points.
So, in this sense, this is a more general way to choose data points. Having the formulas and combinations, we can proceed with the fits. The final results might be affected by 3 factors: $N_b$ (the number of bootstrap samples), $N_d$ (the number of data points) and $N_c$ (the number of combinations). We vary these 3 factors to check whether the results are stable. \begin{center} \begin{table} [ht] \caption{The mean values, and the systematic and statistical errors, for various fits. $N_b$ is the number of bootstrap samples, $N_d$ is the number of data points in each fit, and $N_c$ is the number of different combinations of the $N_d$ data points.} \vspace{0.5cm} \begin{tabular}{|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|} \hline $N_b$ & $N_d$ & $N_c$ & mean & $E_{{\rm sys}}$ & $E_{{\rm sta}}$\\ \hline 100 & 6 & 80 & 0.161 & 0.013 & 0.017\\ 100 & 8 & 80 & 0.160 & 0.015 & 0.015\\ 100 & 10 & 80 & 0.163 & 0.019 & 0.012\\ \hline 100 & 8 & 60 & 0.161 & 0.015 & 0.015\\ 100 & 8 & 80 & 0.160 & 0.015 & 0.015\\ 100 & 8 & 100 & 0.161 & 0.015 & 0.015\\ \hline 50 & 8 & 80 & 0.160 & 0.015 & 0.015\\ 100 & 8 & 80 & 0.160 & 0.015 & 0.015\\ 200 & 8 & 80 & 0.160 & 0.015 & 0.015\\ \hline \end{tabular} \end{table} \end{center} The data range used is $R \in [10, 27]$. After several hundred thousand correlated fits, the results are shown in the above table. The values and errors are quite stable no matter how we vary $N_b$, $N_d$ or $N_c$. We take the representative results from $N_b=100$, $N_d = 8$, and $N_c = 80$, which give the value $0.160 (15) (15)$, as our final estimate. The systematic error (the second parenthesis) is comparable to the statistical one (the first parenthesis) from 100 bootstrap samples. This is to be compared to the original value of 0.143(45), obtained when the sum over the relative coordinate between the quark loop and the sink of the nucleon propagator covers the whole spatial volume.
The total number of analyses, $P$, is the product of the number of models (2 in this case) and $N_c$, the number of different combinations of $N_d$ data points. In this case, $P = 2 \times 80 = 160$. \bibliographystyle{apsrev4-1}
\section{Introduction} \IEEEPARstart{C}{ognitive} radios (CR) \cite{GSSinCRN:Mitola00}\cite{GSSinCRN:Haykin05}, being capable of sensing spectrum availability, are considered a promising technique to alleviate the spectrum scarcity caused by the current static spectrum allotment policy \cite{GSSinCRN:FCC}. Traditional CR link availability is determined solely by the spectrum sensing conducted at the transmitter (i.e. the CR-Tx). If a CR-Tx with packets to relay senses the selected channel to be available, it proceeds with the opportunistic transmission. To facilitate the spectrum sensing, at time instant $t_n$, we usually use a hypothesis test as follows. \begin{equation} \label{e27} \begin{aligned} H_1: Y&=I+N\\ H_0: Y&=S+I+N \end{aligned} \end{equation} where $Y$ denotes the observation at the CR-Tx; $S$ represents the signal from the primary system (PS); $I$ is the interference from co-existing multi-radio wireless networks; and $N$ is additive white Gaussian noise (AWGN). All are random variables at time $t_n$. This hypothesis test can be conducted in several ways, based on different criteria and different assumptions \cite{GSSinCRN:Ma09}-\cite{GSSinCRN:Cabric04}: \begin{enumerate} \item{Energy Detection \cite{GSSinCRN:Digham03}-\cite{GSSinCRN:Penna09}:} Energy detection is widely considered due to its low complexity and the fact that it needs no \emph{a priori} knowledge of the PS. However, under noise and interference power uncertainty, the performance of energy detection degrades severely \cite{GSSinCRN:Sahai04}\cite{GSSinCRN:Zeng07}, and the detector fails to differentiate the PS from the interference. \item{Cyclostationary Detection \cite{GSSinCRN:Tu09}-\cite{GSSinCRN:Guo09}:} The statistical structure of the PS signal can be exploited to achieve a better and more robust detector.
Stationary observation coupled with the periodicity of carriers, pulse trains, repeated spreading codes, etc., results in a cyclostationary model for the signal, which can be exploited in the frequency domain by the spectral correlation function \cite{GSSinCRN:Gardner88}\cite{GSSinCRN:Gardner91}, at the cost of high computational complexity and long observation duration. \item{Locking Detection \cite{GSSinCRN:Cabric04}\cite{GSSinCRN:Cabric06}:} In practical communication systems, pilots and preambles are usually transmitted periodically to facilitate synchronization, channel estimation, etc. These known signals can be used for locking detection to distinguish PS from noise and the interference. However, locking detection requires more \emph{a priori} information about PS, including the frame structure, modulation type, and coding scheme. \item{Covariance-based Detection \cite{GSSinCRN:Zeng07}-\cite{GSSinCRN:Zayen09}:} Because of dispersive channels, the use of multiple antennas, or even over-sampling, the signals from PS are correlated, which can be utilized to differentiate PS from white noise. The existence of PS can be determined by evaluating the structure of the covariance matrix of the received signals. The detector can be implemented blindly \cite{GSSinCRN:Zayen09} by singular value decomposition, that is, it requires no \emph{a priori} knowledge of PS and noise, but incurs considerable computational complexity. \item{Wavelet-based Detection \cite{GSSinCRN:Tian06}:} The detection is implemented by modeling the entire wideband signal as consecutive frequency subbands whose power spectral characteristic is smooth within each subband but exhibits an abrupt change between neighboring subbands, at the price of a high sampling rate due to the wideband signal. \end{enumerate} However, the above spectrum sensing mechanisms, focusing on physical-layer detection or estimation at CR-Tx, ignore the spectrum availability at the CR receiver.
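To make the energy-detection trade-off above concrete, the following Monte-Carlo sketch thresholds the average received energy under the two hypotheses of (\ref{e27}); the sample sizes, powers, and threshold are illustrative assumptions rather than values from this paper, and the interference term is folded into the Gaussian noise.

```python
import random

def energy_detect(samples, threshold):
    """Declare the PS signal present when the average energy exceeds the threshold."""
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold

def received(ps_active, n_samples=200, signal_amp=1.0, noise_std=1.0, seed=None):
    """Crude model of the hypothesis test in the text: noise only when PS is
    inactive, a constant signal plus noise when PS is active."""
    rng = random.Random(seed)
    s = signal_amp if ps_active else 0.0
    return [s + rng.gauss(0.0, noise_std) for _ in range(n_samples)]

# Illustrative threshold halfway between the expected energies of the two hypotheses.
threshold = 1.5  # noise power 1.0, signal-plus-noise energy 2.0
trials = 500
p_d = sum(energy_detect(received(True, seed=i), threshold) for i in range(trials)) / trials
p_fa = sum(energy_detect(received(False, seed=trials + i), threshold) for i in range(trials)) / trials
print(p_d, p_fa)
```

With these settings the detector separates the hypotheses comfortably; perturbing `noise_std` while keeping `threshold` fixed reproduces the noise-uncertainty degradation noted above.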
We can illustrate the insufficiency of the traditional spectrum sensing model, especially for networked CRs. Due to the existence of fading channels and noise uncertainty along with limited sensing duration \cite{GSSinCRN:Ghasemi082}, even when there is no detectable transmission of PS during this vulnerable period, the receiver of this opportunistic transmission (i.e., CR-Rx) may still suffer from collisions from simultaneous transmission(s), as Fig. \ref{Fig_1} shows. The CR-Rx is located between the CR-Tx and the PS-Tx, and PS activities are hidden from CR-Tx, which poses a challenge to spectrum sensing. We can either develop more powerful sensing techniques such as cooperative sensing \cite{GSSinCRN:Unnikrishnan08}-\cite{GSSinCRN:Mishra06} to alleviate the hidden terminal problem, or develop a more realistic mathematical model for spectrum sensing, which is what we do hereafter. The organization of this paper is as follows. We elaborate a realistic definition of link availability and the system model in Section II and present general spectrum sensing with/without cooperation in Sections III and IV, respectively. In Section V, realistic operation of CRN is suggested to investigate the impacts of spectrum sensing and the cooperative scheme on network operation. Numerical results and examples are illustrated in Section VI. Finally, conclusions are drawn in Section VII. \section{System Model} From the viewpoint of information theory, spectrum sensing can be modeled as a binary channel that transmits CR link availability (one bit of information in the link layer) to CR-Tx, with transition probabilities representing the spectrum sensing capability, namely the probability of miss detection and the probability of false alarm. Therefore, traditional spectrum sensing mechanisms can be explained by a mathematical structure defining link availability.
\begin{defi} CR link availability, between CR-Tx and CR-Rx, is specified by an indicator function \begin{equation*} \mathbf{1}^{link}= \begin{cases} 1, &\text{CR link is available for opportunistic transmission}\\ 0, &\text{otherwise} \end{cases} \end{equation*} \end{defi} \begin{defi} \label{d1} CR-Tx senses the spectrum and determines link availability based on its observation as \begin{equation*} \mathbf{1}^{Tx}= \begin{cases} 1, &\text{CR link is available for transmission at CR-Tx}\\ 0, &\text{otherwise} \end{cases} \end{equation*} \end{defi} \begin{lemma} \label{l5} Traditional spectrum sensing for the CR link suggests $\mathbf{1}^{link}=\mathbf{1}^{Tx}$. \end{lemma} This definition of link availability is similar to the clear channel assessment (CCA) of medium access control (MAC) in IEEE 802.11 wireless local area networks (WLAN), or the medium availability indicator in \cite{GSSinCRN:Chen09}. We may thus draw a correspondence between link availability in dynamic channel access of cognitive radio networks (CRN) and the CCA in the MAC of WLAN. However, as Fig. \ref{Fig_1} illustrates, and once interference enters the testing scenario, \textbf{Lemma \ref{l5}} is not generally true. To model spectrum sensing in general, including hidden terminal scenarios, two conditions must hold simultaneously: (1) CR-Tx senses the link available to transmit; (2) CR-Rx can successfully receive packets, which means neither a PS signal at the CR-Rx side nor interference significant enough to prohibit successful CR packet reception (i.e., the SINR stays above a target value).
In other words, at CR-Rx, \begin{equation} \label{e28} \text{SINR}_{CR-Rx}=\frac{P_{CR-Tx}}{P_{PS}+P_I+P_N}\geq \eta_{outage} \end{equation} where $\eta_{outage}$ is the SINR threshold at CR-Rx for outage in reception over fading channels, $P_{CR-Tx}$ is the received power from the opportunistic transmission of CR-Tx, $P_{PS}$ is the power contributed by simultaneous PS operation for a general network topology such as ad hoc, $P_I$ is the total interference power from other co-existing radio systems \cite{GSSinCRN:Ghasemi08}, and $P_N$ is the band-limited noise power, assuming independence among PS, CR, interference systems, and noise. Based on this observation, CR link availability should be composed of the localized spectrum availability at CR-Tx and at CR-Rx, which may not be identical in general. This inconsistency of spectrum availability at CR-Tx and CR-Rx is rarely noted in the current literature. However, it not only suggests spatial behavior for CR-Tx and CR-Rx but is also critical to networking performance such as the throughput of CRN. \cite{GSSinCRN:Jafar07} developed an elegant two-switch model to capture distributed and dynamic spectrum availability. However, \cite{GSSinCRN:Jafar07} focused on capacity from information theory, and it is hard to directly extend the model to studying the network operation of CRN. Actually, the two switching functions can be generalized as indicator functions to indicate the activities of PS based on the sensing by CR-Tx and CR-Rx respectively \cite{GSSinCRN:Chen09}. Generalizing the concept of \cite{GSSinCRN:Jafar07}\cite{GSSinCRN:Srinivasa07} to facilitate our study of spectrum sensing and its further impacts on network operation, we represent the spectrum availability at CR-Rx by another indicator function.
\begin{defi} The true availability for CR-Rx can be indicated by \begin{equation*} \mathbf{1}^{Rx}= \begin{cases} 1, &\text{CR link is available for reception at CR-Rx}\\ 0, &\text{otherwise} \end{cases} \end{equation*} \end{defi} Please note that the activity of PS estimated at CR-Rx in \cite{GSSinCRN:Jafar07} may not be identical to $\mathbf{1}^{Rx}$. That is, even when CR-Rx senses that PS is active, CR-Rx may still successfully receive packets from CR-Tx if the received power from CR-Tx is strong enough to satisfy (\ref{e28}). We call this the rate-distance nature \cite{GSSinCRN:Chen07}, which extends the overlay concept \cite{GSSinCRN:Srinivasa07}. Therefore, we consider a more realistic mathematical model in which CR link availability is represented as the multiplication (i.e., AND operation) of the indicator functions of spectrum availability at CR-Tx and CR-Rx, satisfying the two simultaneous conditions for CR link availability. \begin{prop} \label{p1} $\mathbf{1}^{link}=\mathbf{1}^{Tx}\mathbf{1}^{Rx}$ \end{prop} To obtain the spectrum availability at CR-Rx (i.e., $\mathbf{1}^{Rx}$) and to eliminate the hidden terminal problem, a handshake mechanism has been proposed that exchanges Request To Send (RTS) and Clear To Send (CTS) frames. However, the effectiveness of RTS/CTS degrades in general ad hoc networks \cite{GSSinCRN:Xu02}. Furthermore, since CRs have lower priority in the co-existing primary/secondary communication model, CRs should cherish the vulnerable duration for transmission, reduce the overhead caused by information exchange, and increase spectrum utilization accordingly. Therefore, the next challenge is that $\mathbf{1}^{Rx}$ cannot be known \emph{a priori} at CR-Tx, due to the lack of centralized coordination or advance information exchange among CRs when CR-Tx wants to transmit. As a result, general spectrum sensing turns out to be a composite hypothesis testing problem.
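A minimal numeric sketch of \textbf{Proposition \ref{p1}} combined with the SINR test (\ref{e28}): the link indicator is the AND of both ends, so a PS transmission near CR-Rx kills the link even though CR-Tx senses the channel free. All power values and the outage threshold below are illustrative assumptions.

```python
def rx_available(p_cr_tx, p_ps, p_i, p_n, eta_outage):
    """CR-Rx-side availability via the SINR outage test in the text."""
    sinr = p_cr_tx / (p_ps + p_i + p_n)
    return 1 if sinr >= eta_outage else 0

def link_available(ind_tx, ind_rx):
    """The CR link needs both ends simultaneously (AND of the indicators)."""
    return ind_tx * ind_rx

# Hidden-terminal case: CR-Tx senses the channel free (ind_tx = 1) while a
# nearby PS transmission swamps CR-Rx.
ind_tx = 1
ind_rx = rx_available(p_cr_tx=1.0, p_ps=5.0, p_i=0.1, p_n=0.1, eta_outage=2.0)
print(link_available(ind_tx, ind_rx))
```

Removing the PS power term makes the same link succeed, which is exactly the inconsistency between $\mathbf{1}^{Tx}$ and $\mathbf{1}^{Rx}$ discussed above.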
In this paper, we introduce statistical inference, seldom applied in traditional spectrum sensing, to predict/estimate the spectrum availability at CR-Rx and regard it as a performance lower bound for general spectrum sensing. Further examining \textbf{Proposition~\ref{p1}}, we see that prediction of $\mathbf{1}^{Rx}$ is necessary when $\mathbf{1}^{Tx}=1$, which is equivalent to prediction of $\mathbf{1}^{link}$. In this paper, we model $\mathbf{1}^{Rx}$ when $\mathbf{1}^{Tx}=1$ as a Bernoulli process with the probability of spectrum availability at CR-Rx $\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1)=\alpha$. The value of $\alpha$ captures the spatial behavior of CR-Tx and CR-Rx and thus the impact of the hidden terminal problem. If $\alpha$ is large, CR-Rx is expected to be close to CR-Tx and the hidden terminal problem rarely occurs (and vice versa). \section{General Spectrum Sensing} The prediction of $\mathbf{1}^{Rx}$ at CR-Tx can be modeled as a hypothesis test, that is, detecting $\mathbf{1}^{Rx}$ with \emph{a priori} probability $\alpha$ but no observation. To design the optimum detection, we consider the minimum Bayesian risk criterion, where the Bayesian risk is defined by \begin{equation} \label{e30} R=w\Pr(\mathbf{1}^{link}=0|\mathbf{1}^{Tx}=1)P_F+\Pr(\mathbf{1}^{link}=1|\mathbf{1}^{Tx}=1)P_M \end{equation} In (\ref{e30}), $P_F=\Pr(\hat{\mathbf{1}}^{link}=1|\mathbf{1}^{link}=0,\mathbf{1}^{Tx}=1)$, $P_M=\Pr(\hat{\mathbf{1}}^{link}=0|\mathbf{1}^{link}=1,\mathbf{1}^{Tx}=1)$, and $w\geq 0$ denotes the normalized weighting factor to evaluate the costs of $P_F$ and $P_M$, where $\hat{\mathbf{1}}^{link}$ represents the prediction of $\mathbf{1}^{link}$. We will show that the value of $w$ relates to the outage probability of the CR link in Section V. Since $\mathbf{1}^{Rx}$ is unavailable at CR-Tx, we have to develop techniques to ``obtain'' some information about the spectrum availability at CR-Rx.
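With no observation of $\mathbf{1}^{Rx}$ at all, minimizing (\ref{e30}) over the two fixed decisions already yields a prior threshold of $w/(w+1)$: always deciding $\hat{\mathbf{1}}^{link}=\mathbf{1}^{Tx}$ costs $w(1-\alpha)$, while always deciding $0$ costs $\alpha$. The sketch below cross-checks this threshold against brute-force minimization; the numeric values of $w$ and $\alpha$ are arbitrary.

```python
def bayes_risk(decide_transmit, alpha, w):
    """Risk of a fixed decision when there is no observation: the error
    probabilities P_F and P_M in the text degenerate to 0 or 1."""
    return w * (1 - alpha) if decide_transmit else alpha

def prior_only_detector(alpha, w):
    """Minimum-risk decision on the prior alone: transmit iff alpha >= w/(w+1)."""
    return 1 if alpha >= w / (w + 1.0) else 0

w = 2.0
for alpha in (0.1, 0.5, 0.8, 0.95):
    brute_force = min((1, 0), key=lambda d: bayes_risk(d, alpha, w))
    assert brute_force == prior_only_detector(alpha, w)
print(prior_only_detector(0.5, w), prior_only_detector(0.8, w))
```

The same threshold reappears below once $\alpha$ is replaced by its inferred estimate.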
Inspired by the CRN tomography \cite{GSSinCRN:Yu09}\cite{GSSinCRN:Yu092}, we derive the statistical inference of $\mathbf{1}^{Rx}$ based on earlier observations. It is reasonable to assume that CR-Tx can learn the status of $\mathbf{1}^{Rx}$ at previous times when $\mathbf{1}^{Tx}=1$, which is indexed by $n$. That is, at time $n$, CR-Tx can learn the values of $\mathbf{1}^{Rx}[n-1],\mathbf{1}^{Rx}[n-2],\ldots$. In other words, we can statistically infer $\mathbf{1}^{Rx}[n]$ from $\mathbf{1}^{Rx}[n-1]$, $\mathbf{1}^{Rx}[n-2]$, $\ldots$, $\mathbf{1}^{Rx}[n-L]$, where $L$ is the observation depth. This is a classical problem in Bayesian inference. \begin{lemma} \label{l8} Through the Laplace formula \cite{GSSinCRN:Casella02}, the estimated probability of spectrum availability at CR-Rx is \begin{equation} \label{e1} \hat{\alpha}=\frac{N+1}{L+2} \end{equation} where $N=\sum_{l=1}^{L}{\mathbf{1}^{Rx}[n-l]}$. \end{lemma} \begin{prop} Inference-based spectrum sensing at CR-Tx thus becomes \begin{equation} \label{e13} \hat{\mathbf{1}}^{link}= \begin{cases} \mathbf{1}^{Tx}, &\text{if $\hat{\alpha}\geq\frac{w}{w+1}$}\\ 0, &\text{otherwise} \end{cases} \end{equation} where $\hat{\alpha}$ is in (\ref{e1}). \end{prop} \begin{proof} Since the optimum detector under the Bayesian criterion is the likelihood ratio test \cite{GSSinCRN:Poor94}, we have \begin{equation} \label{e41} \frac{\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1)}{\Pr(\mathbf{1}^{Rx}=0|\mathbf{1}^{Tx}=1)}= \frac{\alpha}{1-\alpha} {\substack{ \overset{\hat{\mathbf{1}}^{link}=1}{\geq}\\ \underset{\hat{\mathbf{1}}^{link}=0}{<} }} \frac{C_{01}-C_{00}}{C_{10}-C_{11}}=w \end{equation} where $C_{ij}$ denotes the cost incurred by determining $\hat{\mathbf{1}}^{link}=j$ when $\mathbf{1}^{link}=i$. According to the Bayesian risk in (\ref{e30}), we have $C_{11}=C_{00}=0$ and $C_{01}/C_{10}=w$. Rearranging the inequality, we obtain the proposition.
\end{proof} \begin{rem} CR-Tx believes the CR link is available and forwards packets to CR-Rx if the probability of spectrum availability at CR-Rx, $\alpha$, is high enough. Otherwise, CR-Tx is prohibited from using the link even when CR-Tx senses the channel free, because transmission could incur an unaffordable cost, that is, intolerable interference to PS or collisions at CR-Rx. \end{rem} \section{General Cooperative Spectrum Sensing} \subsection{Single Cooperative Node} Spectrum sensing at a cooperative node, represented by $\mathbf{1}^{Co}$, explores more information about $\mathbf{1}^{Rx}$ and therefore alleviates the hidden terminal problem. We can use Fig. \ref{Fig_2} to depict the scenario. In the presence of obstacles, $\mathbf{1}^{Rx}$ can be totally orthogonal to $\mathbf{1}^{Tx}$. $\mathbf{1}^{Co}$ is useful simply because of the higher correlation between $\mathbf{1}^{Co}$ and $\mathbf{1}^{Rx}$. From the above observation, we only care about the correlation of $\mathbf{1}^{Rx}$ and $\mathbf{1}^{Co}$ when $\mathbf{1}^{Tx}=1$ and assume \begin{align*} \Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)&=\beta\\ \Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)&=\gamma \end{align*} Thus the correlation between $\mathbf{1}^{Co}$ and $\mathbf{1}^{Rx}$, denoted by $\rho$, becomes \begin{equation} \rho =\frac{\sqrt{\alpha(1-\alpha)}(\beta+\gamma-1)} {\sqrt{(\alpha\beta+(1-\alpha)(1-\gamma)) (\alpha(1-\beta)+(1-\alpha)\gamma)}} \end{equation} \begin{lemma} \label{l1} $\rho$ is a strictly concave function with respect to $\alpha\in(0,1)$ if $1<\beta+\gamma<2$ but a strictly convex function if $0<\beta+\gamma<1$. In addition, $\mathbf{1}^{Co}$ and $\mathbf{1}^{Rx}$ are independent if and only if $\rho=0$, i.e., $\beta+\gamma=1$.
\end{lemma} \begin{proof} Let \begin{equation*} f(\alpha)=\sqrt{\frac{\alpha(1-\alpha)}{(\alpha\beta+(1-\alpha)(1-\gamma)) (\alpha(1-\beta)+(1-\alpha)\gamma)}} \end{equation*} Then taking first order and second order differentiation with respect to $\alpha$, we have \begin{align*} f'(\alpha)&=\frac{-\alpha^2\beta(1-\beta)+(1-\alpha)^2\gamma(1-\gamma)} {2(\alpha(1-\alpha))^{1/2}(\alpha\beta+(1-\alpha)(1-\gamma))^{3/2} (\alpha(1-\beta)+(1-\alpha)\gamma)^{3/2}}\\ f{''}(\alpha)&=K(\alpha)[-10\alpha^2(1-\alpha)^2\beta\gamma(1-\beta)(1-\gamma)- 4\alpha^4(1-\alpha)\beta(1-\beta)(\beta\gamma+(1-\beta)(1-\gamma))\\ &+(3-4\alpha)\alpha^4\beta^2(1-\beta)^2- 4\alpha(1-\alpha)^4\gamma(1-\gamma)(\beta\gamma+(1-\beta)(1-\gamma))\\ &+(-1+4\alpha)(1-\alpha)^4\gamma^2(1-\gamma)^2] \end{align*} where $K(\alpha)>0$ for $\alpha,\beta,\gamma\in(0,1)$ and $\beta+\gamma\neq1$. In addition, $\beta\gamma+(1-\beta)(1-\gamma)-\beta(1-\beta)= \beta(\beta+\gamma-1)+(1-\beta)(1-\gamma) =\beta\gamma+(1-\beta)(1-\beta-\gamma)>0$ for $\beta,\gamma\in(0,1)$. Similarly, we have $\beta\gamma+(1-\beta)(1-\gamma)>\gamma(1-\gamma)$ for $\beta,\gamma\in(0,1)$. Therefore, combining the second and the third terms, and the fourth and the last terms in the bracket of $f{''}(\alpha)$, we have \begin{align*} f{''}(\alpha)&<K(\alpha)[-10\alpha^2(1-\alpha)^2\beta\gamma(1-\beta)(1-\gamma) -\alpha^4\beta^2(1-\beta)^2-(1-\alpha)^4\gamma^2(1-\gamma)^2] <0 \end{align*} if $\alpha\in(0,1)$. Since $\rho=(\beta+\gamma-1)f(\alpha)$, we prove the first statement of the lemma. For the second statement, obviously, $\rho=0$ if $\mathbf{1}^{Co}$ and $\mathbf{1}^{Rx}$ are independent. 
Conversely, if $\rho=0$, i.e., $\beta+\gamma=1$, we have \begin{align*} \Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Tx}=1)&= \sum_{s=0}^{1}{\Pr(\mathbf{1}^{Rx}=s|\mathbf{1}^{Tx}=1)\Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=s,\mathbf{1}^{Tx}=1)}\\ &=(1-\alpha)(1-\gamma)+\alpha\beta=(1-\alpha)\beta+\alpha\beta=\beta\\ &=\Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)\\ &=1-\gamma=\Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1) \end{align*} Similarly, we can show $\Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Tx}=1)=\Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1) =\Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)$ and complete the proof. \end{proof} By statistical inference, CR-Tx can learn the statistical characteristics of $\mathbf{1}^{Rx}$ and $\mathbf{1}^{Co}$, i.e., $\{\alpha,\beta,\gamma\}$, from previous observations. From the viewpoint of hypothesis testing, we would like to detect $\mathbf{1}^{Rx}$ with \emph{a priori} probability $\alpha$ and one observation $\mathbf{1}^{Co}$, which is the detection result at the cooperative node. In addition, the probability of detection and the probability of false alarm at the cooperative node are $\beta$ and $1-\gamma$, respectively. \begin{prop} \label{p4} Spectrum sensing with one cooperative node becomes \begin{equation} \label{e15} \hat{\mathbf{1}}^{link}= \begin{cases} \mathbf{1}^{Tx}, &\text{if $\alpha\geq\max\{\alpha_1,\alpha_2\}$}\\ \mathbf{1}^{Tx}\mathbf{1}^{Co}, &\text{if $\alpha_2<\alpha <\alpha_1, \rho>0$}\\ \mathbf{1}^{Tx}\bar{\mathbf{1}}^{Co}, &\text{if $\alpha_1<\alpha <\alpha_2, \rho<0$}\\ 0, &\text{if $\alpha\leq\min\{\alpha_1,\alpha_2\}$} \end{cases} \end{equation} where $\bar{\mathbf{1}}^{Co}$ is the complement of $\mathbf{1}^{Co}$, $\alpha_1=w\gamma/(1-\beta+w\gamma)$ and $\alpha_2=w(1-\gamma)/(\beta+w(1-\gamma))$. \end{prop} \begin{proof} The likelihood ratio test based on the observed signal $\mathbf{1}^{Co}$ can be written as follows.
For $\mathbf{1}^{Co}=1$, \begin{equation*} \frac{\Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)} {\Pr(\mathbf{1}^{Co}=1|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)}= \frac{\beta}{1-\gamma} {\substack{ \overset{\hat{\mathbf{1}}^{link}=1}{\geq}\\ \underset{\hat{\mathbf{1}}^{link}=0}{<} }} \frac{w(1-\alpha)}{\alpha} \end{equation*} For $\mathbf{1}^{Co}=0$, \begin{equation*} \frac{\Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)} {\Pr(\mathbf{1}^{Co}=0|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)}= \frac{1-\beta}{\gamma} {\substack{ \overset{\hat{\mathbf{1}}^{link}=1}{\geq}\\ \underset{\hat{\mathbf{1}}^{link}=0}{<} }} \frac{w(1-\alpha)}{\alpha} \end{equation*} Rearranging the above inequalities, we obtain the proposition. \end{proof} It is interesting to note that cooperative spectrum sensing is not always helpful, that is, it does not always further decrease the Bayesian risk. If $\alpha$ is large (greater than $\max\{\alpha_1,\alpha_2\}$), that is, the hidden terminal problem rarely occurs because either CR-Rx is close to CR-Tx or CR-Tx adopts cooperative sensing to determine $\mathbf{1}^{Tx}$, then prediction of $\mathbf{1}^{Rx}$ is unnecessary at CR-Tx. On the other hand, if $\alpha$ is small (less than $\min\{\alpha_1,\alpha_2\}$), CR-Tx is prohibited from forwarding packets to CR-Rx even with the aid of cooperative sensing. In the following, we adopt the minimum error probability criterion (i.e., $w=1$) and give insight into the condition under which cooperative sensing is helpful. Although we set $w=1$, we do not lose generality because we can scale the \emph{a priori} probability $\alpha$ to $\alpha/(\alpha+w(1-\alpha))$ when $w\neq1$. Applying \textbf{Lemma \ref{l1}} and the fact that $\rho|_{\alpha=\alpha^1_C}=\rho|_{\alpha=\alpha^2_C}$ when $w=1$, we reach the following corollary.
\begin{cor} \label{cor1} If we adopt the minimum error probability criterion, spectrum sensing with one cooperative node becomes \begin{equation} \label{e16} \hat{\mathbf{1}}^{link}= \begin{cases} \mathbf{1}_{[\alpha\geq1/2]}\mathbf{1}^{Tx}, &\text{if $|\rho|\leq\Psi$}\\ (\mathbf{1}_{[\rho>0]}\mathbf{1}^{Co}+\mathbf{1}_{[\rho<0]}\bar{\mathbf{1}}^{Co})\mathbf{1}^{Tx}, &\text{if $|\rho|>\Psi$} \end{cases} \end{equation} where $\mathbf{1}_{[s]}$ is an indicator function, equal to 1 if the statement $s$ is true and to 0 otherwise, and \begin{equation*} \Psi=\left|\frac{\beta+\gamma-1}{\sqrt{2(\beta\gamma+(1-\beta)(1-\gamma))}}\right| \end{equation*} \end{cor} \begin{rem} The effectiveness of a cooperative node depends only on the correlation of the spectrum availability at CR-Rx and at the cooperative node. If the correlation is low, the information provided by the cooperative node is irrelevant to the spectrum sensing, which then degenerates to (\ref{e13}). \end{rem} By establishing a simple indicator model in the link layer, we mathematically demonstrate the limit of a cooperative node in general spectrum sensing. It is natural to ask what happens with multiple cooperative nodes and how to compare the sensing capability among cooperative nodes. In the following, we provide metrics to measure the sensing capability of cooperative nodes from the link and network perspectives. \subsection{Preliminaries} Before exploring multiple cooperative nodes, we introduce notations and properties to systematically construct the relation between the joint probability mass function (pmf) and the marginal pmfs of spectrum availability at cooperative nodes. We first define the notation in the following.
\begin{defi} For an $m\times n$ matrix $\mathbf{A}$ and two $n\times 1$ vectors $\mathbf{u}$ and $\mathbf{v}$, \begin{equation*} \begin{matrix} \mathbf{A}[i,\star]:&\text{the } i\text{th row of } \mathbf{A} & \mathbf{A}[\star,j]:&\text{the } j\text{th column of } \mathbf{A}\\ \mathbf{A}^T:&\mathbf{A}^T[i,j]=\mathbf{A}[j,i] & \mathbf{A}^{RC}:&\mathbf{A}^{RC}[\star,j]=\mathbf{A}[\star,n-j+1]\\ \mathbf{A}^{RR}:&\mathbf{A}^{RR}[i,\star]=\mathbf{A}[m-i+1,\star] & \mathbf{1}_{m\times n}:&\mathbf{A}[i,j]=1\\ \mathbf{0}_{m\times n}:&\mathbf{A}[i,j]=0 & \mathbf{I}_{n}:&n\times n \text{ identity matrix}\\ \mathbf{u}\odot\mathbf{v}:&\mathbf{u}\odot\mathbf{v}[i]=\mathbf{u}[i]\mathbf{v}[i]& \mathbf{u}\preceq\mathbf{v}:&\mathbf{u}[i]\leq \mathbf{v}[i]\\ \|\mathbf{u}\|_p:&(\sum_i{|\mathbf{u}[i]|^p})^{1/p} & \mathbf{1}_{[\mathbf{u}\geq0]}:&\mathbf{1}_{[\mathbf{u}\geq0]}[i]=\mathbf{1}_{[\mathbf{u}[i]\geq0]} \end{matrix} \end{equation*} \end{defi} Let \begin{align} \mathbf{A}^{n}_k&= \begin{bmatrix} \mathbf{A}^n_{0,k} & \mathbf{A}^n_{1,k} & \cdots & \mathbf{A}^n_{k,k} \end{bmatrix} \quad 0\leq n\leq k \label{e3} \\ \mathbf{G}^{(1)}_{m,k}&= \begin{bmatrix} (\mathbf{A}^0_k)^T & (\mathbf{A}^1_k)^T & (\mathbf{A}^2_k)^T & \cdots & (\mathbf{A}^m_k)^T \end{bmatrix}^T \label{e20} \\ \mathbf{G}^{(0)}_{m,k}&=(\mathbf{G}^{(1)}_{m,k})^{RC} \label{e4} \end{align} where $\mathbf{A}^n_{m,k}$ is a $\binom{k}{n}\times\binom{k}{m}$ matrix, $\mathbf{A}^0_{m,k}=\mathbf{1}_{1\times\binom{k}{m}},0\leq m\leq k$, $\mathbf{A}^k_{m,k}=\mathbf{0}_{1\times\binom{k}{m}}, 0\leq m\leq k-1$, $\mathbf{A}^k_{k,k}=1$, and for $1\leq n\leq k-1$ \begin{align*} \mathbf{A}^n_{0,k}&=\mathbf{0}_{\binom{k}{n}\times1} \qquad \mathbf{A}^n_{k,k}=\mathbf{1}_{\binom{k}{n}\times 1} \\ \mathbf{A}^n_{m,k}&= \begin{bmatrix} \mathbf{A}^n_{m,k-1} & \mathbf{A}^n_{m-1,k-1} \\ \mathbf{0}_{\binom{k-1}{n-1}\times\binom{k-1}{m}} & \mathbf{A}^{n-1}_{m-1,k-1} \end{bmatrix}, \quad 1\leq m\leq k-1 \end{align*} The role of $\mathbf{A}^{1}_k$ (or 
$\mathbf{A}^1_{m,k},0\leq m\leq k$) is to specify arrangements of joint pmf and marginal pmf such that their relation can be easily established by $\mathbf{G}^{(s)}_{m,k}, s=0,1$. In the following, we show properties of $\mathbf{A}^n_{m,k}$ and $\mathbf{G}^{(s)}_{m,k}$. \begin{lemma} \label{l2} Let $\mathcal{I}_{m,k}(j)=\{i|\mathbf{A}^1_{m,k}[i,j]=1,1\leq i\leq k\}$, $1\leq j\leq\binom{k}{m}$ and we have \begin{align} \label{e17} |\mathcal{I}_{m,k}(j)|&=m\\ \mathcal{I}_{m,k}(j)&=\mathcal{I}_{m,k}(l)\quad \text{if and only if $j=l$} \label{e22} \\ \mathbf{A}^n_{m,k}[i,j]&=\mathbf{1}_{[\mathcal{I}_{m,k}(j)\supseteq\mathcal{I}_{n,k}(i)]} \label{e18} \end{align} where $|\mathcal{I}|$ denotes number of elements in the set $\mathcal{I}$. \end{lemma} \begin{rem} Since $\mathbf{A}^1_{m,k}$ is a $k\times\binom{k}{m}$ matrix, from (\ref{e17}), (\ref{e22}), we conclude that $\mathbf{A}^1_{m,k}[\star,j],1\leq j\leq\binom{k}{m}$ contains all possible permutations of $m$ ones and $k-m$ zeros. \end{rem} \begin{lemma} \label{t1} Let $S_m=\sum_{i=0}^{m}{\binom{k}{i}}$. Let $\overline{\mathbf{G}}^{(1)}_{m,k}$ and $\underline{\mathbf{G}}^{(0)}_{m,k}$ be $S_m\times S_m$ matrices, $\underline{\mathbf{G}}^{(1)}_{m,k}$ and $\overline{\mathbf{G}}^{(0)}_{m,k}$ be $(2^k-S_m)\times S_m$ matrices, and $\mathbf{G}^{(s)}_{m,k}=\begin{bmatrix} \overline{\mathbf{G}}^{(s)}_{m,k} & \underline{\mathbf{G}}^{(s)}_{m,k} \end{bmatrix},s=0,1$. 
\begin{gather*} \overline{\mathbf{G}}^{(1)}_{m,k}= \begin{bmatrix} 1 & \mathbf{1}_{1\times k} & \cdots & \mathbf{1}_{1\times \binom{k}{n}} & \cdots & \mathbf{1}_{1\times \binom{k}{m}} \\ \mathbf{0}_{k\times 1} & \mathbf{I}_{k} & \cdots & \mathbf{A}^1_{n,k} & \cdots & \mathbf{A}^1_{m,k} \\ \vdots & \ddots & \ddots & \vdots & \ddots & \vdots\\ \mathbf{0}_{\binom{k}{n}\times1} & \cdots & \mathbf{0} & \mathbf{I}_{\binom{k}{n}} & \cdots & \mathbf{A}^n_{m,k} \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\ \mathbf{0}_{\binom{k}{m}\times1} & \mathbf{0}_{\binom{k}{m}\times k} & \cdots & \cdots & \mathbf{0} & \mathbf{I}_{\binom{k}{m}} \\ \end{bmatrix}\\ \underline{\mathbf{G}}^{(0)}_{m,k}=(\overline{\mathbf{G}}^{(1)}_{m,k})^{RC} \end{gather*} are nonsingular and their inverse matrices become \begin{gather} (\overline{\mathbf{G}}^{(1)}_{m,k})^{-1}= \begin{bmatrix} 1 & -\mathbf{1}_{1\times k} & \cdots & (-1)^n\mathbf{1}_{1\times \binom{k}{n}} & \cdots & (-1)^m\mathbf{1}_{1\times \binom{k}{m}} \\ \mathbf{0}_{k\times 1} & \mathbf{I}_{k} & \cdots & (-1)^{1+n}\mathbf{A}^1_{n,k} & \cdots & (-1)^{1+m}\mathbf{A}^1_{m,k} \\ \vdots & \ddots & \ddots & \vdots & \ddots & \vdots\\ \mathbf{0}_{\binom{k}{n}\times1} & \cdots & \mathbf{0} & \mathbf{I}_{\binom{k}{n}} & \cdots & (-1)^{n+m}\mathbf{A}^n_{m,k} \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\ \mathbf{0}_{\binom{k}{m}\times1} & \mathbf{0}_{\binom{k}{m}\times k} & \cdots & \cdots & \mathbf{0} & \mathbf{I}_{\binom{k}{m}} \end{bmatrix}\\ (\underline{\mathbf{G}}^{(0)}_{m,k})^{-1}=(\overline{\mathbf{G}}^{(1)}_{m,k})^{-RR} \end{gather} \end{lemma} Let $\mathbf{p}^{(s)}_{m,k}[i]=\Pr(\mathbf{1}^{Co}_1=\mathbf{A}^1_{m,k}[1,i],\ldots,\mathbf{1}^{Co}_k=\mathbf{A}^1_{m,k}[k,i]| \mathbf{1}^{Rx}=s,\mathbf{1}^{Tx}=1)$, where $s=0,1$, $0\leq m\leq k$ and $1\leq i\leq\binom{k}{m}$, $\mathbf{1}^{Co}_i$ denotes spectrum availability at the $i$th cooperative node, and let \begin{equation} \label{e19} 
\mathbf{P}^{(s)}_k=\begin{bmatrix}(\mathbf{p}^{(s)}_{0,k})^T & (\mathbf{p}^{(s)}_{1,k})^T & \cdots & (\mathbf{p}^{(s)}_{k,k})^T \end{bmatrix}^T \end{equation} Therefore, $\mathbf{P}^{(s)}_k$ characterizes the joint pmf of spectrum availability at $k$ cooperative nodes, $\mathbf{1}^{Co}_1,\ldots,\mathbf{1}^{Co}_k$. Similarly, we divide $\mathbf{P}^{(s)}_k$ into two parts. Let $\overline{\mathbf{P}}^{(1)}_{m,k}$ and $\underline{\mathbf{P}}^{(0)}_{m,k}$ be $S_m\times1$ vectors, $\underline{\mathbf{P}}^{(1)}_{m,k}$ and $\overline{\mathbf{P}}^{(0)}_{m,k}$ be $(2^k-S_m)\times1$ vectors, and $\mathbf{P}^{(s)}_k=\begin{bmatrix} (\overline{\mathbf{P}}^{(s)}_{m,k})^T & (\underline{\mathbf{P}}^{(s)}_{m,k})^T \end{bmatrix}^T,s=0,1$. In addition, let \begin{equation*} \mathbf{q}^{(s)}_{m,k}[j]=\Pr(\mathbf{1}^{Co}_{k_1}=s,\ldots,\mathbf{1}^{Co}_{k_m}=s| \mathbf{1}^{Rx}=s,\mathbf{1}^{Tx}=1,\{k_1,\ldots,k_m\}=\mathcal{I}_{m,k}(j)) \end{equation*} $s=0,1$, $1\leq m\leq k$ and $1\leq j\leq \binom{k}{m}$, which specifies the $m$th-order marginal pmf of $\mathbf{1}^{Co}_1,\ldots,\mathbf{1}^{Co}_k$. Arranging them in vector form, we define \begin{equation} \mathbf{Q}^{(s)}_{m,k}=\begin{bmatrix}1 & (\mathbf{q}^{(s)}_{1,k})^T & (\mathbf{q}^{(s)}_{2,k})^T & \cdots & (\mathbf{q}^{(s)}_{m,k})^T\end{bmatrix}^T,s=0,1 \label{e7} \end{equation} Please note that $\mathbf{q}^{(1)}_{k,k}=\mathbf{Q}^{(1)}_{k,k}[2^k]=\mathbf{P}^{(1)}_k[2^k]$ and $\mathbf{q}^{(0)}_{k,k}=\mathbf{Q}^{(0)}_{k,k}[2^k]=\mathbf{P}^{(0)}_k[1]$.
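As a toy illustration of the bookkeeping that $\mathbf{P}^{(s)}_k$ and $\mathbf{Q}^{(s)}_{m,k}$ encode, the following sketch takes an arbitrary joint pmf of two cooperative indicators and recovers the first-order marginals by summation, i.e., the kind of summation the $\mathbf{G}^{(s)}_{m,k}$ matrices perform in stacked form. The probability values are invented for the example.

```python
def first_order_marginals(joint):
    """Recover Pr(1^Co_i = 1 | .) for each node i from a joint pmf given as a
    dict mapping binary outcome tuples to probabilities."""
    k = len(next(iter(joint)))
    return [sum(p for outcome, p in joint.items() if outcome[i] == 1)
            for i in range(k)]

# Toy joint pmf of (1^Co_1, 1^Co_2) given 1^Rx = 1, 1^Tx = 1, listed in the
# all-zeros-first, all-ones-last order used for the stacked vector P.
joint = {(0, 0): 0.1, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.4}
assert abs(sum(joint.values()) - 1.0) < 1e-12
print(first_order_marginals(joint))
```

Going the other way, from marginals back to the joint pmf, is exactly what the inverse relations of the next lemma provide.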
\begin{lemma} \label{t2} The marginal and joint pmfs of spectrum availability at $k$ cooperative nodes satisfy $\mathbf{G}^{(s)}_{m,k}\mathbf{P}^{(s)}_k=\mathbf{Q}^{(s)}_{m,k}$ or \begin{align} \overline{\mathbf{P}}^{(1)}_{k,K}&=(\overline{\mathbf{G}}^{(1)}_{k,K})^{-1} (\mathbf{Q}^{(1)}_{k,K}-\underline{\mathbf{G}}^{(1)}_{k,K}\underline{\mathbf{P}}^{(1)}_{k,K})\\ \underline{\mathbf{P}}^{(0)}_{k,K}&=(\underline{\mathbf{G}}^{(0)}_{k,K})^{-1} (\mathbf{Q}^{(0)}_{k,K}-\overline{\mathbf{G}}^{(0)}_{k,K}\overline{\mathbf{P}}^{(0)}_{k,K}) \end{align} \end{lemma} \begin{lemma} \label{cor3} $\mathbf{P}^{(s)}_k$ provides information equivalent to $\mathbf{Q}^{(s)}_{k,k},s=0,1$, or specifically, \begin{equation} \label{e8} \mathbf{P}^{(s)}_k=\mathbf{c}^{(s)}_k+\mathbf{Q}^{(s)}_{k,k}[2^k]\mathbf{b}^{(s)}_k,s=0,1 \end{equation} where \begin{align*} \mathbf{c}^{(1)}_k&= \begin{bmatrix} (\overline{\mathbf{G}}^{(1)}_{k-1,k})^{-1}\mathbf{Q}^{(1)}_{k-1,k} \\ 0 \end{bmatrix} \qquad \mathbf{c}^{(0)}_k= \begin{bmatrix} 0\\ (\underline{\mathbf{G}}^{(0)}_{k-1,k})^{-1}\mathbf{Q}^{(0)}_{k-1,k} \end{bmatrix} \\ \mathbf{b}^{(1)}_k&= \begin{bmatrix} (-1)^k\mathbf{1}_{\binom{k}{0}\times1}^T & \cdots & (-1)^{(k-j)}\mathbf{1}_{\binom{k}{j}\times1}^T & \cdots & (-1)^0\mathbf{1}_{\binom{k}{k}\times1}^T \end{bmatrix}^T\\ \mathbf{b}^{(0)}_k&=(-1)^k\mathbf{b}^{(1)}_k \end{align*} \end{lemma} \subsection{Multiple Cooperative Nodes} Assume there are $K$ cooperative nodes with corresponding spectrum availability $\mathbf{1}^{Co}_1,\mathbf{1}^{Co}_2,\ldots,\mathbf{1}^{Co}_{K}$ and their joint pmf conditioned on $\mathbf{1}^{Rx}=s$ and $\mathbf{1}^{Tx}=1$, $\mathbf{P}^{(s)}_K,s=0,1$.
\begin{prop} \label{p2} Spectrum sensing with multiple cooperative nodes becomes \begin{equation} \label{e14} \hat{\mathbf{1}}^{link}=\mathbf{1}^{Tx}\bigoplus_{j=1}^{2^K}{\mathbf{\Gamma}[j]\prod_{i=1}^{K} {[\mathbf{A}^1_K[i,j]\mathbf{1}^{Co}_i\oplus(1-\mathbf{A}^1_K[i,j])\bar{\mathbf{1}}^{Co}_i]}} \end{equation} and \begin{equation} R\left(\mathbf{P}^{(1)}_K,\mathbf{P}^{(0)}_K\right)= \sum_{i=1}^{2^K}{\min\{w(1-\alpha)\mathbf{P}^{(0)}_K[i],\alpha \mathbf{P}^{(1)}_K[i]\}} \end{equation} where $\mathbf{\Gamma}=\mathbf{1}_{[\alpha\mathbf{P}^{(1)}_K-w(1-\alpha)\mathbf{P}^{(0)}_K\geq0]}$ and $\oplus$ denotes the OR operation. \end{prop} \begin{proof} By the likelihood ratio test, we have \begin{equation*} \frac{\mathbf{P}^{(1)}_K[i]}{\mathbf{P}^{(0)}_K[i]} {\substack{\overset{\hat{\mathbf{1}}^{link}=1}{\geq}\\ \underset{\hat{\mathbf{1}}^{link}=0}{<}}} \frac{w(1-\alpha)}{\alpha} \end{equation*} Therefore, the optimum detector becomes \begin{equation*} \mathbf{\Gamma}[i]=\hat{\mathbf{1}}^{link}|_{\mathbf{1}^{Co}_1=\mathbf{A}^1_K[1,i],\cdots,\mathbf{1}^{Co}_K=\mathbf{A}^1_K[K,i]} =\mathbf{1}_{[\alpha\mathbf{P}^{(1)}_K[i]-w(1-\alpha)\mathbf{P}^{(0)}_K[i]\geq0]} \end{equation*} Combining this result with binary arithmetic, we obtain (\ref{e14}).
In terms of Bayesian risk, \begin{align*} R_{opt}&=\sum_{i=1}^{2^K}{[w(1-\alpha)\mathbf{P}^{(0)}_K[i] \Pr(\hat{\mathbf{1}}^{link}=1|\mathbf{1}^{Co}_1=\mathbf{A}^1_K[1,i],\cdots,\mathbf{1}^{Co}_K=\mathbf{A}^1_K[K,i],\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)+}\\ &\quad \alpha\mathbf{P}^{(1)}_K[i] \Pr(\hat{\mathbf{1}}^{link}=0|\mathbf{1}^{Co}_1=\mathbf{A}^1_K[1,i],\cdots,\mathbf{1}^{Co}_K=\mathbf{A}^1_K[K,i],\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)]\\ &=\sum_{i=1}^{2^K}{[w(1-\alpha)\mathbf{P}^{(0)}_K[i]\mathbf{\Gamma}[i]+ \alpha\mathbf{P}^{(1)}_K[i](1-\mathbf{\Gamma}[i])]}\\ &=\sum_{i=1}^{2^K}{\min\{w(1-\alpha)\mathbf{P}^{(0)}_K[i],\alpha \mathbf{P}^{(1)}_K[i]\}} \end{align*} \end{proof} To gain insight into how multiple cooperative nodes improve the performance of spectrum sensing, we consider two cooperative nodes. For the spectrum availability at two cooperative nodes $\mathbf{1}^{Co}_1,\mathbf{1}^{Co}_2$, let $\mathbf{q}^{(1)}_{1,2}=\begin{bmatrix} \beta_1 & \beta_2 \end{bmatrix}^T$ and $\mathbf{q}^{(0)}_{1,2}=\begin{bmatrix} \gamma_1 & \gamma_2 \end{bmatrix}^T$. In addition, their joint probability is specified as follows. When $\mathbf{1}^{Rx}=1$, PS is likely inactive and then $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ are independent. On the other hand, when $\mathbf{1}^{Rx}=0$, $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ are correlated with correlation $\rho_{12}$. With the constraints on $\mathbf{q}^{(0)}_{1,2}$, we have \begin{equation} \label{e33} \mathbf{P}^{(0)}_2= \begin{bmatrix} \gamma_1\gamma_2+\Delta & (1-\gamma_1)\gamma_2-\Delta & \gamma_1(1-\gamma_2)-\Delta & (1-\gamma_1)(1-\gamma_2)+\Delta \end{bmatrix}^T \end{equation} where $\Delta=\sqrt{\gamma_1\gamma_2(1-\gamma_1)(1-\gamma_2)}\rho_{12}$. \subsubsection{Independent ($\rho_{12}=0$)} When $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ are conditionally independent, we recover the conventional assumption in cooperative spectrum sensing.
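The two-node setting can be worked through numerically: building $\mathbf{P}^{(0)}_2$ from (\ref{e33}) and evaluating the optimal Bayesian risk of \textbf{Proposition \ref{p2}} shows how correlation under $\mathbf{1}^{Rx}=0$ shifts the achievable risk. The $\beta_i$, $\gamma_i$, $\alpha$, and $\rho_{12}$ values below are illustrative assumptions.

```python
import math

def joint_pmf_h0(gamma1, gamma2, rho12):
    """Joint pmf of (1^Co_1, 1^Co_2) given 1^Rx = 0, in the order
    (0,0), (1,0), (0,1), (1,1), following the P^{(0)}_2 expression in the text."""
    delta = math.sqrt(gamma1 * gamma2 * (1 - gamma1) * (1 - gamma2)) * rho12
    return [gamma1 * gamma2 + delta,
            (1 - gamma1) * gamma2 - delta,
            gamma1 * (1 - gamma2) - delta,
            (1 - gamma1) * (1 - gamma2) + delta]

def optimal_risk(alpha, w, p1, p0):
    """Minimum Bayesian risk: sum_i min{w(1-alpha) P0[i], alpha P1[i]}."""
    return sum(min(w * (1 - alpha) * a0, alpha * a1) for a1, a0 in zip(p1, p0))

# Independent availability given 1^Rx = 1, with beta_i = Pr(1^Co_i = 1 | 1^Rx = 1).
beta1, beta2 = 0.9, 0.8
p1 = [(1 - beta1) * (1 - beta2), beta1 * (1 - beta2),
      (1 - beta1) * beta2, beta1 * beta2]
for rho12 in (0.0, 0.5):
    p0 = joint_pmf_h0(0.85, 0.75, rho12)
    print(rho12, optimal_risk(0.3, 1.0, p1, p0))
```

With these particular numbers, positive correlation under $\mathbf{1}^{Rx}=0$ raises the minimum achievable risk (from about 0.110 to 0.135): correlated evidence is less informative to the fusion rule.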
\begin{prop} For two cooperative nodes with independent spectrum availability, spectrum sensing becomes \begin{equation} \label{e32} \hat{\mathbf{1}}^{link}= \begin{cases} \mathbf{1}^{Tx}, &\text{if $\alpha\geq\alpha_{(4)}$}\\ \mathbf{1}^{Tx}\bigoplus_{i=1}^2{(\mathbf{1}_{[\rho_i>0]}\mathbf{1}^{Co}_i+\mathbf{1}_{[\rho_i<0]}\bar{\mathbf{1}}^{Co}_i)}, &\text{if $\alpha_{(3)}\leq\alpha<\alpha_{(4)}$}\\ \mathbf{1}^{Tx}(\mathbf{1}_{[\rho_k>0]}\mathbf{1}^{Co}_k+\mathbf{1}_{[\rho_k<0]}\bar{\mathbf{1}}^{Co}_k), &\text{if $\alpha_{(2)}<\alpha<\alpha_{(3)},k=\arg\max_iM_R(i)$} \\ \mathbf{1}^{Tx}\prod_{i=1}^2{(\mathbf{1}_{[\rho_i>0]}\mathbf{1}^{Co}_i+\mathbf{1}_{[\rho_i<0]}\bar{\mathbf{1}}^{Co}_i)}, &\text{if $\alpha_{(1)}<\alpha\leq\alpha_{(2)}$} \\ 0, &\text{if $\alpha\leq\alpha_{(1)}$} \end{cases} \end{equation} where \begin{equation} \label{e36} (\alpha_{(1)},\alpha_{(2)},\alpha_{(3)},\alpha_{(4)})= \begin{cases} (\alpha^{C}_4,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_1) &\text{if $\rho_1>0,\rho_2>0,\delta^+_{1}>\delta^+_{2}$}\\ (\alpha^{C}_4,\alpha^{C}_3,\alpha^{C}_2,\alpha^{C}_1) &\text{if $\rho_1>0,\rho_2>0,\delta^+_{1}<\delta^+_{2}$}\\ (\alpha^{C}_2,\alpha^{C}_4,\alpha^{C}_1,\alpha^{C}_3) &\text{if $\rho_1>0,\rho_2<0,\delta^+_{1}>\delta^-_{2}$}\\ (\alpha^{C}_2,\alpha^{C}_1,\alpha^{C}_4,\alpha^{C}_3) &\text{if $\rho_1>0,\rho_2<0,\delta^+_{1}<\delta^-_{2}$}\\ (\alpha^{C}_3,\alpha^{C}_1,\alpha^{C}_4,\alpha^{C}_2) &\text{if $\rho_1<0,\rho_2>0,\delta^-_{1}>\delta^+_{2}$}\\ (\alpha^{C}_3,\alpha^{C}_4,\alpha^{C}_1,\alpha^{C}_2) &\text{if $\rho_1<0,\rho_2>0,\delta^-_{1}<\delta^+_{2}$}\\ (\alpha^{C}_1,\alpha^{C}_3,\alpha^{C}_2,\alpha^{C}_4) &\text{if $\rho_1<0,\rho_2<0,\delta^-_{1}>\delta^-_{2}$}\\ (\alpha^{C}_1,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_4) &\text{if $\rho_1<0,\rho_2<0,\delta^-_{1}<\delta^-_{2}$}\\ \end{cases} \end{equation} $\alpha^{C}_i=w\mathbf{P}^{(0)}_2[i]/(\mathbf{P}^{(1)}_2[i]+w\mathbf{P}^{(0)}_2[i])$, 
$M_R(i)=\mathbf{1}_{[\rho_i\geq0]}\delta^+_{i}+\mathbf{1}_{[\rho_i<0]}\delta^-_{i}$, $\delta^+_{i}=\gamma_i\beta_i/((1-\gamma_i)(1-\beta_i))$, $\delta^-_{i}=(1-\gamma_i)(1-\beta_i)/(\gamma_i\beta_i)$, and $\rho_i$ denotes the correlation between $\mathbf{1}^{Co}_i$ and $\mathbf{1}^{Rx}$. \end{prop} We list all valid orders of $(\alpha^{C}_1,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_4)$ in (\ref{e36}), and the spectrum sensing rule is determined according to the value of $\alpha$ by applying (\ref{e14}). We observe that when $\alpha$ is high ($\alpha_{(3)}\leq\alpha<\alpha_{(4)}$), any one of the cooperative nodes helps the spectrum sensing, which reduces the spectrum sensing to an OR operation. However, when $\alpha$ is low ($\alpha_{(1)}<\alpha\leq\alpha_{(2)}$), CR-Tx requires more evidence to claim an available link and the spectrum sensing becomes an AND operation. In addition, it is interesting to note that there exists a region of $\alpha$ ($\alpha_{(2)}<\alpha<\alpha_{(3)}$) in which CR-Tx depends on only one of the two cooperative nodes, which motivates us to define a metric to evaluate cooperative nodes. \begin{defi} \textbf{Reliability} of a cooperative node is measured by $M_R$. $\mathbf{1}^{Co}_i$ is said to be more reliable than (or equally reliable to) $\mathbf{1}^{Co}_j$ if $M_R(i)\geq M_R(j)$, which is denoted by $\mathbf{1}^{Co}_i\unrhd\mathbf{1}^{Co}_j$. \end{defi} \begin{prop} For $K$ cooperative nodes with independent spectrum availability, without loss of generality, we assume $\rho_i\geq0$ for $i=1,\ldots,n$ and $\rho_i<0$ for $i=n+1,\ldots,K$.
Spectrum sensing becomes $\hat{\mathbf{1}}^{link}=\mathbf{1}^{Tx}\mathbf{1}_{[s]}$, where \begin{equation} \label{e34} s=\left\{\sum_{i\in\mathcal{C}^+\cup\mathcal{C}^-}{\log M_R(i)} \geq\log\left(\frac{w(1-\alpha)}{\alpha}\right) +\sum_{i=1}^{n}{\log\left(\frac{\gamma_i}{1-\beta_i}\right)} +\sum_{i=n+1}^{K}{\log\left(\frac{1-\gamma_i}{\beta_i}\right)}\right\} \end{equation} and $\mathcal{C}^+=\{i|\mathbf{1}^{Co}_i=1,i=1,\ldots,n\}$, $\mathcal{C}^-=\{i|\mathbf{1}^{Co}_i=0,i=n+1,\ldots,K\}$. \end{prop} \begin{proof} Since $\mathbf{1}^{Co}_1,\mathbf{1}^{Co}_2,\ldots,\mathbf{1}^{Co}_{K}$ are independent, the likelihood ratio test becomes \begin{align*} \frac{\mathbf{P}^{(1)}_K[i]}{\mathbf{P}^{(0)}_K[i]} &=\prod_{i\in\mathcal{C}^+}\frac{\beta_i}{1-\gamma_i} \prod_{i\in\{1,\ldots,n\}\setminus\mathcal{C}^+}\frac{1-\beta_i}{\gamma_i} \prod_{i\in\mathcal{C}^-}\frac{1-\beta_i}{\gamma_i} \prod_{i\in\{n+1,\ldots,K\}\setminus\mathcal{C}^-}\frac{\beta_i}{1-\gamma_i}\\ &=\prod_{i\in\mathcal{C}^+}\frac{\beta_i\gamma_i}{(1-\beta_i)(1-\gamma_i)} \prod_{i\in\{1,\ldots,n\}}\frac{1-\beta_i}{\gamma_i} \prod_{i\in\mathcal{C}^-}\frac{(1-\beta_i)(1-\gamma_i)}{\beta_i\gamma_i} \prod_{i\in\{n+1,\ldots,K\}}\frac{\beta_i}{1-\gamma_i}\\ &=\prod_{i\in\mathcal{C}^+\cup\mathcal{C}^-}M_R(i) \prod_{i\in\{1,\ldots,n\}}\frac{1-\beta_i}{\gamma_i} \prod_{i\in\{n+1,\ldots,K\}}\frac{\beta_i}{1-\gamma_i} {\substack{\overset{\hat{\mathbf{1}}^{link}=1}{\geq}\\ \underset{\hat{\mathbf{1}}^{link}=0}{<}}} \frac{w(1-\alpha)}{\alpha} \end{align*} Taking logarithm at both sides and rearranging the formula, we complete the proof. \end{proof} We observe that reliability $M_R(i)$ is used to quantify the information of $\mathbf{1}^{Rx}$ provided by the $i$th cooperative node. Please note that if $\mathbf{1}^{Co}_i$ is independent of $\mathbf{1}^{Rx}$, $M_R(i)=1$ and $\mathbf{1}^{Co}_i$ is irrelevant to the spectrum sensing. 
Therefore, reliability can imply sensing capability; that is, a cooperative node with higher reliability has better sensing capability. Reliability can thus serve as a criterion to select cooperative nodes when the number of cooperative nodes is limited due to the overhead caused by information exchange. Specifically, if there are $K$ equally reliable cooperative nodes, each cooperative node provides an equal amount of information about $\mathbf{1}^{Rx}$ and the spectrum sensing rule turns out to be the counting rule. This is a generalization from independent and identically distributed (i.i.d.) observations \cite{GSSinCRN:Viswanathan89} in conventional distributed detection to equally reliable observations. \subsubsection{Correlated ($\rho_{12}\neq0$)} When there exists correlation between the spectrum availability at cooperative nodes, the joint probabilities $\mathbf{P}^{(0)}_2$ are shifted by $\Delta$, as in (\ref{e33}). For example, if the correlation is positive, $\alpha^{C}_1$ and $\alpha^{C}_4$ increase while $\alpha^{C}_2$ and $\alpha^{C}_3$ decrease. As the correlation increases, the order of $(\alpha^{C}_1,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_4)$ eventually switches and the spectrum sensing in (\ref{e32}) changes accordingly. However, the general correlated case becomes tedious, so we consider a simple but meaningful example, where cooperative nodes have symmetric error rates (i.e. $\beta_i=\gamma_i$) and the reliability becomes $\mathbf{1}_{[\rho_i\geq0]}\beta_i+\mathbf{1}_{[\rho_i<0]}(1-\beta_i)$. Similarly, with the aid of (\ref{e14}) and (\ref{e33}), the spectrum sensing can be easily derived. \begin{prop} \label{p3} For two cooperative nodes with correlated spectrum availability and symmetric error rates satisfying $\rho_1>0, \rho_2>0$, and $\mathbf{1}^{Co}_1\rhd\mathbf{1}^{Co}_2$, the spectrum sensing is given by (\ref{e32}) with the following modifications according to $\Delta$.
\begin{equation} \label{e40} \hat{\mathbf{1}}^{link}= \begin{cases} \mathbf{1}^{Tx}(\mathbf{1}^{Co}_1\mathbf{1}^{Co}_2\oplus\bar{\mathbf{1}}^{Co}_1\bar{\mathbf{1}}^{Co}_2) &\text{if $\alpha_{(2)}<\alpha<\alpha_{(3)},\Delta<(1-2\beta_1)\beta_2$}\\ \mathbf{1}^{Tx}(\mathbf{1}^{Co}_1\oplus\bar{\mathbf{1}}^{Co}_2) &\text{if $\alpha_{(3)}<\alpha<\alpha_{(4)},\Delta<(1-2\beta_2)\beta_1$}\\ \mathbf{1}^{Tx}\mathbf{1}^{Co}_1\bar{\mathbf{1}}^{Co}_2 &\text{if $\alpha_{(1)}<\alpha<\alpha_{(2)},\Delta\geq(2\beta_2-1)(1-\beta_1)$}\\ \mathbf{1}^{Tx}(\mathbf{1}^{Co}_1\otimes\mathbf{1}^{Co}_2) &\text{if $\alpha_{(2)}<\alpha<\alpha_{(3)},\Delta\geq(2\beta_1-1)(1-\beta_2)$} \end{cases} \end{equation} where \begin{equation} \label{e37} (\alpha_{(1)},\alpha_{(2)},\alpha_{(3)},\alpha_{(4)})= \begin{cases} (\alpha^{C}_4,\alpha^{C}_1,\alpha^{C}_2,\alpha^{C}_3) &\text{if $\Delta_{min}\leq\Delta<(1-2\beta_1)\beta_2,\beta_1(1+\beta_2)<1$}\\ (\alpha^{C}_4,\alpha^{C}_2,\alpha^{C}_1,\alpha^{C}_3) &\text{if $(1-2\beta_1)\beta_2\leq\Delta<(1-2\beta_2)\beta_1$}\\ (\alpha^{C}_4,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_1) &\text{if $(1-2\beta_2)\beta_1\leq\Delta<(2\beta_2-1)(1-\beta_1)$}\\ (\alpha^{C}_2,\alpha^{C}_4,\alpha^{C}_3,\alpha^{C}_1) &\text{if $(2\beta_2-1)(1-\beta_1)\leq\Delta<(2\beta_1-1)(1-\beta_2)$}\\ (\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_4,\alpha^{C}_1) &\text{if $(2\beta_1-1)(1-\beta_2)\leq\Delta\leq\Delta_{max},\beta_1(2-\beta_2)<1$}\\ \end{cases} \end{equation} $\Delta_{min}=-(1-\beta_1)(1-\beta_2)$, $\Delta_{max}=(1-\beta_1)\beta_2$, and $\otimes$ denotes XOR operation. \end{prop} All possible switching orders of $(\alpha^{C}_1,\alpha^{C}_2,\alpha^{C}_3,\alpha^{C}_4)$ according to $\Delta$ are listed in (\ref{e37}) and the first and the last orders are impossible unless an additional condition is satisfied to make the regions of $\Delta$ valid, i.e. $\beta_1(1+\beta_2)<1$ and $\beta_1(2-\beta_2)<1$ respectively. 
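The correlated joint pmf (\ref{e33}) and the per-pattern thresholds $\alpha^{C}_i=w\mathbf{P}^{(0)}_2[i]/(\mathbf{P}^{(1)}_2[i]+w\mathbf{P}^{(0)}_2[i])$ can be checked numerically; the following is a minimal sketch with illustrative function names, and the smallest threshold computed below is the smallest $\alpha$ for which the detector can ever declare the link available.

```python
import itertools

def joint_pmfs(beta, gamma, delta):
    """Joint pmfs of (1^Co_1, 1^Co_2) under the two hypotheses, following (e33).

    Under 1^Rx = 1 the reports are independent with Pr(1^Co_i = 1) = beta[i];
    under 1^Rx = 0, Pr(1^Co_i = 0) = gamma[i], and the probabilities of the two
    agreeing patterns are shifted by +delta, the disagreeing ones by -delta.
    """
    p1, p0 = {}, {}
    for c1, c2 in itertools.product([0, 1], repeat=2):
        p1[(c1, c2)] = ((beta[0] if c1 else 1 - beta[0]) *
                        (beta[1] if c2 else 1 - beta[1]))
        base = ((1 - gamma[0] if c1 else gamma[0]) *
                (1 - gamma[1] if c2 else gamma[1]))
        p0[(c1, c2)] = base + (delta if c1 == c2 else -delta)
    return p1, p0

def critical_alpha(beta, gamma, delta, w):
    """Smallest per-pattern threshold alpha^C = w P0 / (P1 + w P0)."""
    p1, p0 = joint_pmfs(beta, gamma, delta)
    return min(w * p0[c] / (p1[c] + w * p0[c]) for c in p1)
```

For symmetric error rates with $(\beta_1,\beta_2)=(0.8,0.7)$ and $w=9$, this smallest threshold rises with $\Delta$ and peaks at $\Delta=(2\beta_2-1)(1-\beta_1)=0.08$, where it equals $w(1-\gamma_1)/(\beta_1+w(1-\gamma_1))$, the value obtained with node one alone.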
Since $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ are correlated when $\mathbf{1}^{Rx}=0$ (i.e. $\rho_{12}\neq0$ or $\Delta\neq0$), not only $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ alone but also the identity of $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ (i.e. $\mathbf{1}^{Co}_1\otimes\mathbf{1}^{Co}_2$ or $\mathbf{1}^{Co}_1\mathbf{1}^{Co}_2\oplus\bar{\mathbf{1}}^{Co}_1\bar{\mathbf{1}}^{Co}_2$) can provide information about $\mathbf{1}^{Rx}$ and thus can be used to determine CR link availability. This is actually similar to covariance-based detection. For example, if $\Delta\geq(2\beta_2-1)(1-\beta_1)\geq0$, $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ are probably identical when the spectrum is unavailable at CR-Rx, i.e. $\mathbf{1}^{Rx}=0$, and $\mathbf{1}^{Co}_2$ in (\ref{e32}) is then replaced by $\mathbf{1}^{Co}_1\otimes\mathbf{1}^{Co}_2$. Furthermore, when $\Delta$ increases beyond $(2\beta_1-1)(1-\beta_2)$, the roles of $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_1\otimes\mathbf{1}^{Co}_2$ switch because the identity of $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ can provide more information about $\mathbf{1}^{Rx}$ than $\mathbf{1}^{Co}_1$ alone. Alternatively, when $\Delta<0$, the results can be similarly explained. In addition, it is interesting to note that even if $\mathbf{1}^{Co}_2$ is independent of $\mathbf{1}^{Rx}$ (i.e. $\beta_2=1/2$), $\mathbf{1}^{Co}_2$ may become helpful due to the correlation between $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$. In the next section, we will further investigate the impacts of correlation between $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ on network operation. \subsection{Multiple Cooperative Nodes with Limited Statistical Information} In CRN or self-organizing networks, due to the lack of centralized coordination, each node in CRN can only sense and exchange local information.
In addition, dynamic wireless channels and the mobility of nodes make the situation more severe, and a node can only acquire information within a limited sensing duration. We can either design systems under simplified assumptions, which may result in severe performance degradation, or apply advanced signal processing techniques based on the minimax criterion \cite{GSSinCRN:Poor94}, which are robust to outliers in networks, as we do hereafter. To derive the optimum Bayesian detection in \textbf{Proposition \ref{p2}}, we have to acquire the joint pmf of spectrum availability at $K$ cooperative nodes, which may require a long observation interval to achieve an acceptable estimation error. If we only have up to the $k$th order marginal pmf (related to the capability of observation), i.e. $\mathbf{Q}^{(s)}_{k,K},s=0,1$ according to \textbf{Lemma \ref{cor3}}, our design criterion becomes the minimax criterion; that is, we find the least-favorable joint pmf $\mathbf{P}^{(s)}_K,s=0,1$ that maximizes the Bayesian risk and then conduct the optimum Bayesian detection under that joint probability. Therefore, the problem can be formulated as follows.
\begin{prop} [Robust Cooperative Sensing] Cooperative spectrum sensing with limited statistical information $\mathbf{Q}^{(s)}_{k,K},s=0,1$ becomes (\ref{e14}) with $(\mathbf{P}^{(1)}_K,\mathbf{P}^{(0)}_K)$ replaced by $(\mathbf{P}^{(1)}_{opt},\mathbf{P}^{(0)}_{opt})$, where \begin{equation} \begin{aligned} \label{e35} (\mathbf{P}^{(1)}_{opt},\mathbf{P}^{(0)}_{opt})&= \arg\max_{\mathbf{P}^{(1)}_K,\mathbf{P}^{(0)}_K} R\left(\mathbf{P}^{(1)}_K,\mathbf{P}^{(0)}_K\right)= \arg\min_{\mathbf{P}^{(1)}_K,\mathbf{P}^{(0)}_K} \|w(1-\alpha)\mathbf{P}^{(0)}_K-\alpha\mathbf{P}^{(1)}_K\|_1\\ \text{s.t.}\quad \overline{\mathbf{P}}^{(1)}_{k,K}&=(\overline{\mathbf{G}}^{(1)}_{k,K})^{-1} (\mathbf{Q}^{(1)}_{k,K}-\underline{\mathbf{G}}^{(1)}_{k,K}\underline{\mathbf{P}}^{(1)}_{k,K})\\ \underline{\mathbf{P}}^{(0)}_{k,K}&=(\underline{\mathbf{G}}^{(0)}_{k,K})^{-1} (\mathbf{Q}^{(0)}_{k,K}-\overline{\mathbf{G}}^{(0)}_{k,K}\overline{\mathbf{P}}^{(0)}_{k,K})\\ \mathbf{0}_{2^K\times1}&\preceq\mathbf{P}^{(s)}_K\preceq\mathbf{1}_{2^K\times1},s=0,1 \end{aligned} \end{equation} \end{prop} The last equality in the objective function is based on the fact that $\min(x,y)=(x+y-|x-y|)/2$ and that a probability distribution sums to one. The result is reasonable because, in order to minimize the objective function, the likelihood ratio $\mathbf{P}^{(1)}_K[i]/\mathbf{P}^{(0)}_K[i]$ approaches the optimum threshold $w(1-\alpha)/\alpha$, which induces poor performance of the detector and therefore increases the Bayesian risk. Furthermore, we could apply \textbf{Lemma \ref{t2}} to set the constraints on the joint pmf. Since the vector norm is a convex function, the problem can be solved by well-developed algorithms in convex optimization \cite{GSSinCRN:Boyd04}. In the preceding part, we proposed a simple methodology to select cooperative nodes based on reliability under the assumption of independent observations.
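The minimax search in (\ref{e35}) can be illustrated for $K=2$ when only first-order marginals are known: each joint pmf then has a single free correlation parameter, and a plain grid search stands in for a convex solver. This is only an illustrative sketch; all function names, grid bounds, and numerical values are assumptions.

```python
import itertools

def pmf2(p_one, delta):
    """Joint pmf of two binary reports with marginals Pr(report_i = 1) = p_one[i];
    the two agreeing patterns are shifted by +delta, the disagreeing ones by -delta."""
    pmf = {}
    for c1, c2 in itertools.product([0, 1], repeat=2):
        base = ((p_one[0] if c1 else 1 - p_one[0]) *
                (p_one[1] if c2 else 1 - p_one[1]))
        pmf[(c1, c2)] = base + (delta if c1 == c2 else -delta)
    return pmf

def bayes_risk(p1, p0, w, alpha):
    """Minimum Bayesian risk for given joint pmfs under each hypothesis."""
    return sum(min(w * (1 - alpha) * p0[c], alpha * p1[c]) for c in p1)

def least_favorable(beta, gamma, w, alpha, steps=201):
    """Grid search over the free correlation parameter of each joint pmf for the
    pair maximizing the Bayesian risk, i.e. minimizing ||w(1-a)P0 - a*P1||_1."""
    grid = [-0.25 + 0.5 * i / (steps - 1) for i in range(steps)]
    best = (-1.0, None, None)
    for d1 in grid:
        p1 = pmf2(beta, d1)
        if any(v < 0 or v > 1 for v in p1.values()):
            continue                      # infeasible joint pmf
        for d0 in grid:
            p0 = pmf2([1 - g for g in gamma], d0)  # marginals Pr(Co=1|Rx=0)=1-gamma
            if any(v < 0 or v > 1 for v in p0.values()):
                continue
            r = bayes_risk(p1, p0, w, alpha)
            if r > best[0]:
                best = (r, d1, d0)
    return best
```

Since $\sum_c \min\{w(1-\alpha)P_0[c],\alpha P_1[c]\}\leq\min\{w(1-\alpha),\alpha\}$, the returned risk never exceeds this bound and is at least as large as the risk of the independent pair.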
However, in practice, the spectrum availability at cooperative nodes is generally correlated, and the spectrum sensing may change as we showed in \textbf{Proposition \ref{p3}}. In addition, since the statistical information obtainable within a reasonable observation interval is limited, CR-Tx can select cooperative nodes to minimize the maximum Bayesian risk by the minimax criterion. \section{Application to Realistic Operation of CRN} In the preceding sections, we only considered a single CR link in CRN. However, CRN is not just a link-level technology if we want to successfully route packets from source to destination through CRs and PS. In the following, we suggest a simple physical layer model for CRN and investigate the impacts of spectrum sensing on network operation and the role a cooperative node plays in CRN, which cannot be revealed by the traditional treatment of spectrum sensing. Since spectrum sensing may not be ideal and the hidden terminal problem exists, we further define the true state for PS. \begin{defi} The true state for PS can be represented by the indicator \begin{equation*} \mathbf{1}^{PS}= \begin{cases} 1, &\text{PS either does not exist or is inactive}\\ 0, &\text{PS exists and is active} \end{cases} \end{equation*} \end{defi} Therefore, with the definition of $\alpha$, we have \begin{equation} \label{e38} \alpha=\frac{\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=1|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} {\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} \end{equation} To connect the indicator functions of link availability with the realistic operation of CRN, we propose a simple received power model.
\subsection{Received Power Model} We model the received power from PS and the background noise as log-normally distributed, i.e., $10\log_{10}(P_S)\sim N(\mu_S,\sigma^2_S)$ and $10\log_{10}(P_N)\sim N(\mu_0,\sigma^2_0)$, where $\sigma^2_S$ and $\sigma^2_0$ quantify the measurement uncertainty of the received power from PS and noise, respectively. In addition, $\mu_0$ is a constant whereas $\mu_S$ varies according to path loss and shadowing. More specifically, let $\mu_S=K_0-10a\log_{10}(d_{CR})-b_{CR}$, where $K_0$ is a constant, $d_{CR}$ denotes the distance from the CR (either CR-Tx or CR-Rx) to PS as in Fig.~\ref{Fig_2}, $a$ is the path loss exponent, and $b_{CR}$ represents the shadowing effect. When $\mathbf{1}^{PS}=1$, the received signal only comes from noise. However, when $\mathbf{1}^{PS}=0$, the received signal is the superposition of the signal from PS and noise, which results in the addition of two log-normal random variables. We could simply model the received power as another log-normal random variable with parameters $\mu_{CR}$ and $\sigma_{CR}$, and under the assumption $\sigma_S>\sigma_0$, we have \begin{align} \label{e42} \mu_{CR}&= \begin{cases} \mu_0, &\text{if $\mu_S\leq\mu_0-\sigma_S$}\\ \mu_S, &\text{if $\mu_S\geq\mu_0+\sigma_S$}\\ (\mu_S+\mu_0+\sigma_S)/2, &\text{otherwise} \end{cases}\\ \sigma_{CR}^2&= \begin{cases} \sigma^2_0, &\text{if $\mu_S\leq\mu_0-\sigma_S$}\\ \sigma^2_S, &\text{if $\mu_S\geq\mu_0+2\sigma_S$}\\ \frac{\sigma^2_S-\sigma^2_0}{3\sigma_S}(\mu_S-\mu_0)+ \frac{\sigma^2_S+2\sigma^2_0}{3}, &\text{otherwise} \end{cases} \label{e43} \end{align} Simulations show that the distribution of the simplified model, although not exactly identical, is close to the empirical distribution, especially in terms of mean and variance, which justifies the simplified model. \subsection{Spectrum Sensing at CR-Tx and Reception at CR-Rx} Recall the conditions under which CRs can successfully communicate over a link.
Assume CR-Tx adopts an energy detector in the hypothesis testing (\ref{e27}) and there is no interference from co-existing systems. The detector can be represented as \begin{equation*} P_{Tx}{\substack{\overset{\mathbf{1}^{Tx}=1}{\leq}\\ \underset{\mathbf{1}^{Tx}=0}{>}}}\tau_{Tx} \quad \text{(in dB)} \end{equation*} where $P_{Tx}$ denotes the received power at CR-Tx and $\tau_{Tx}$ is a fixed threshold since the detector is designed under a given SINR. On the other hand, to successfully receive packets, the SINR at CR-Rx should be greater than the minimum value $\eta_{outage}$ as shown in (\ref{e28}). Similarly, spectrum availability at CR-Rx can be represented as \begin{equation*} P_{Rx}{\substack{\overset{\mathbf{1}^{Rx}=1}{\leq}\\ \underset{\mathbf{1}^{Rx}=0}{>}}}\tau_{Rx} \quad \text{(in dB)} \end{equation*} where $P_{Rx}$ denotes the received power from PS and noise at CR-Rx. Different from CR-Tx, $\tau_{Rx}$ varies with the received power from CR-Tx. For simplicity, we only consider propagation loss in modeling the received power from CR-Tx and have $\tau_{Rx}=L_0-10a\log_{10}(r_{Rx})$, where $L_0$ is a constant and $r_{Rx}$ denotes the distance between CR-Tx and CR-Rx. We suppose that the measurement uncertainties, and hence the received power from PS and noise, at CR-Tx and CR-Rx are independent. However, to model the spatial behavior of CR-Tx and CR-Rx, we consider the relation of shadowing between CR-Tx and CR-Rx. Intuitively, this relation should depend on the locations of CR-Tx, CR-Rx, and PS, along with the obstacle size, and we proceed with a linear model \begin{equation} \label{e29} b_{Rx}= \begin{cases} \max\left\{b_{Tx}(1-\frac{r_{Rx}}{2\kappa}),0\right\}, &\text{if $r_{Rx}\cos(\theta_{Rx})\leq d_{Tx}$} \\ 0, &\text{if $r_{Rx}\cos(\theta_{Rx})> d_{Tx}$} \end{cases} \end{equation} where $\kappa$ denotes the parameter of obstacle size and $\theta_{Rx}$ is the angle between the line segments starting at CR-Tx and ending at CR-Rx and PS, as shown in Fig.
\ref{Fig_2} for illustration. In this model, the shadowing at CR-Rx, $b_{Rx}$, decreases linearly with the distance $r_{Rx}$ between CR-Tx and CR-Rx, at a rate inversely proportional to the obstacle size $\kappa$, and equals zero when CR-Rx is far from CR-Tx or when PS is located between CR-Tx and CR-Rx. Additionally, since the shadowing parameter achieves its maximum at CR-Tx, this results in the worst-case scenario for spectrum sensing. Finally, from the log-normal fading distribution, \begin{equation*} \alpha=\frac{\Pr(\mathbf{1}^{PS}=1)Q\left(\frac{\mu_0-\tau_{Tx}}{\sigma_0}\right) Q\left(\frac{\mu_0-\tau_{Rx}}{\sigma_0}\right)+ \Pr(\mathbf{1}^{PS}=0)Q\left(\frac{\mu_{Tx}-\tau_{Tx}}{\sigma_{Tx}}\right) Q\left(\frac{\mu_{Rx}-\tau_{Rx}}{\sigma_{Rx}}\right)} {\Pr(\mathbf{1}^{PS}=1)Q\left(\frac{\mu_0-\tau_{Tx}}{\sigma_0}\right) +\Pr(\mathbf{1}^{PS}=0)Q\left(\frac{\mu_{Tx}-\tau_{Tx}}{\sigma_{Tx}}\right)} \end{equation*} where $Q(x)$ denotes the right-tail probability of a Gaussian random variable with zero mean and unit variance. \subsection{Cooperative Spectrum Sensing} Under the proposed signal model, the analysis can be easily extended to cooperative spectrum sensing. For a cooperative node conducting an energy detector, we have \begin{equation*} P_{Co}{\substack{\overset{\mathbf{1}^{Co}=1}{\leq}\\ \underset{\mathbf{1}^{Co}=0}{>}}}\tau_{Co} \quad \text{(in dB)} \end{equation*} Furthermore, the correlation due to geography is established similarly to (\ref{e29}).
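The piecewise approximation (\ref{e42})--(\ref{e43}) and the resulting expression for $\alpha$ can be sketched numerically. This is an illustrative sketch: the function names, thresholds, and all numerical values below are assumptions, not parameters from the paper.

```python
import math

def Q(x):
    """Right-tail probability of a standard Gaussian."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def lognormal_sum_params(mu_s, sigma_s, mu_0, sigma_0):
    """Piecewise approximation (e42)-(e43) of the log-normal parameters of the
    PS signal plus noise, assuming sigma_s > sigma_0; returns (mean, variance)."""
    if mu_s <= mu_0 - sigma_s:
        return mu_0, sigma_0 ** 2
    mu = mu_s if mu_s >= mu_0 + sigma_s else (mu_s + mu_0 + sigma_s) / 2.0
    if mu_s >= mu_0 + 2.0 * sigma_s:
        var = sigma_s ** 2
    else:
        var = ((sigma_s ** 2 - sigma_0 ** 2) / (3.0 * sigma_s) * (mu_s - mu_0)
               + (sigma_s ** 2 + 2.0 * sigma_0 ** 2) / 3.0)
    return mu, var

def alpha_link(p_idle, mu_tx, var_tx, mu_rx, var_rx, mu_0, sigma_0, tau_tx, tau_rx):
    """alpha from (e38): each Q(.) factor is the probability that the received
    power (in dB) stays below the corresponding threshold."""
    q_tx0 = Q((mu_0 - tau_tx) / sigma_0)              # Pr(1^Tx = 1 | 1^PS = 1)
    q_tx1 = Q((mu_tx - tau_tx) / math.sqrt(var_tx))   # Pr(1^Tx = 1 | 1^PS = 0)
    q_rx0 = Q((mu_0 - tau_rx) / sigma_0)
    q_rx1 = Q((mu_rx - tau_rx) / math.sqrt(var_rx))
    num = p_idle * q_tx0 * q_rx0 + (1 - p_idle) * q_tx1 * q_rx1
    den = p_idle * q_tx0 + (1 - p_idle) * q_tx1
    return num / den
```

For instance, with assumed values $\mu_0=-100$, $\sigma_0=2$, $\tau_{Tx}=\tau_{Rx}=-95$ (all in dB), a PS whose mean received power is $-80$ at CR-Rx but far below the noise floor at CR-Tx yields $\alpha\approx\Pr(\mathbf{1}^{PS}=1)$.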
We could therefore calculate $\beta$ and $\gamma$ similarly to $\alpha$, as \begin{align*} \beta&=\frac{\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=1,\mathbf{1}^{Co}=1|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} {\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=1|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} \\ \gamma&=\frac{\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=0,\mathbf{1}^{Co}=0|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} {\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=0|\mathbf{1}^{PS}=s)\Pr(\mathbf{1}^{PS}=s)}} \end{align*} With the relation between the statistical information $\{\alpha,\beta,\gamma\}$ and the received power model, we can mathematically determine the allowable transmission region of CR-Tx. \subsection{Neighborhood of CR-Tx} In Sections III and IV, we have developed spectrum sensing under different assumptions and noted that spectrum sensing depends on the value of $\alpha$, i.e., on the spatial behavior of CR-Tx and CR-Rx. In particular, there is even a region of $\alpha$ in which CR-Tx is prohibited from forwarding packets to CR-Rx and the link from CR-Tx to CR-Rx is disconnected (i.e. $\hat{\mathbf{1}}^{link}=0$). This undesirable phenomenon alters the CRN topology and heavily affects network performance, such as the throughput of CRN. Therefore, we would like to theoretically study link properties in CRN and first define the regions of $\alpha$ as follows. \begin{defi} The set $\{\alpha|\Pr(\hat{\mathbf{1}}^{link}=0)=1\}$ is called the \textbf{prohibitive region} while $\{\alpha|\Pr(\hat{\mathbf{1}}^{link}=1)\neq0\}$ is called the \textbf{admissive region}. The boundary between these two sets is called the \textbf{critical boundary} of $\alpha$ and is denoted by $\alpha_C$. Therefore, $\{\alpha|\hat{\mathbf{1}}^{link}=0\}=\{\alpha|0\leq\alpha<\alpha_C\}$ and $\{\alpha|\hat{\mathbf{1}}^{link}=1\}=\{\alpha|\alpha_C\leq\alpha\leq1\}$. \end{defi} If $\alpha$ lies in the prohibitive region, the link from CR-Tx to CR-Rx is disconnected.
The property and the engineering meaning of $\alpha_C$ are addressed as follows. \begin{lemma} \label{l6} $\alpha_C$ is a decreasing function with respect to the number of cooperative nodes. \end{lemma} \begin{proof} It is easy to show that, for fixed $w$, $\alpha$ decreases as the threshold of the likelihood ratio test $w(1-\alpha)/\alpha$ increases. Therefore, $\alpha_C$ can be determined by the largest likelihood ratio. Assume the largest likelihood ratio with $k-1$ cooperative nodes occurs at $i_{max}$, i.e., $i_{max}=\arg\max_i\{\mathbf{P}^{(1)}_{k-1}[i]/\mathbf{P}^{(0)}_{k-1}[i]\}$. When the $k$th cooperative node enters, let \begin{align*} \tilde{\beta}_k&=\Pr(\mathbf{1}^{Co}_k=1|\mathbf{1}^{Co}_1=\mathbf{A}^1_{m,k-1}[1,i_{max}],\ldots, \mathbf{1}^{Co}_{k-1}=\mathbf{A}^1_{m,k-1}[k-1,i_{max}],\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)\\ \tilde{\gamma}_k&=\Pr(\mathbf{1}^{Co}_k=0|\mathbf{1}^{Co}_1=\mathbf{A}^1_{m,k-1}[1,i_{max}],\ldots, \mathbf{1}^{Co}_{k-1}=\mathbf{A}^1_{m,k-1}[k-1,i_{max}],\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1) \end{align*} Then, there are two likelihood ratios with $k$ cooperative nodes, say the $i$th and the $j$th, given by \begin{align*} \frac{\mathbf{P}^{(1)}_{k}[i]}{\mathbf{P}^{(0)}_{k}[i]}&= \frac{\mathbf{P}^{(1)}_{k-1}[i_{max}]\tilde{\beta}_k} {\mathbf{P}^{(0)}_{k-1}[i_{max}](1-\tilde{\gamma}_k)}\\ \frac{\mathbf{P}^{(1)}_{k}[j]}{\mathbf{P}^{(0)}_{k}[j]}&= \frac{\mathbf{P}^{(1)}_{k-1}[i_{max}](1-\tilde{\beta}_k)} {\mathbf{P}^{(0)}_{k-1}[i_{max}]\tilde{\gamma}_k} \end{align*} Since either $\tilde{\beta}_k+\tilde{\gamma}_k\geq1$ or $\tilde{\beta}_k+\tilde{\gamma}_k<1$, one of the $i$th and the $j$th likelihood ratios is not less than $\mathbf{P}^{(1)}_{k-1}[i_{max}]/\mathbf{P}^{(0)}_{k-1}[i_{max}]$, which results in a lower $\alpha_C$.
\end{proof} \begin{lemma} \label{l7} The following two statements are equivalent: \begin{enumerate} \item $\alpha\geq\alpha_C$ \item $\Pr(\mathbf{1}^{link}=1|\hat{\mathbf{1}}^{link}=1)\geq w/(w+1)$ \end{enumerate} \end{lemma} \begin{proof} Since $\alpha\geq\alpha_C$ if and only if $\Pr(\hat{\mathbf{1}}^{link}=1)\neq0$, we have \begin{align*} &\Pr(\mathbf{1}^{link}=1|\hat{\mathbf{1}}^{link}=1) \\ &=\Pr(\mathbf{1}^{Tx}=1,\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1,\hat{\mathbf{1}}^{Rx}=1) \\ &=\frac{\Pr(\mathbf{1}^{Rx}=1,\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Tx}=1)} {\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Tx}=1)}\\ &=\frac{\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1)\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)} {\sum_{s=0}^{1}{\Pr(\mathbf{1}^{Rx}=s|\mathbf{1}^{Tx}=1)\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=s,\mathbf{1}^{Tx}=1)}} \\ &=\frac{\frac{\alpha}{1-\alpha}\frac{\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)} {\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)}} {1+\frac{\alpha}{1-\alpha}\frac{\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=1,\mathbf{1}^{Tx}=1)} {\Pr(\hat{\mathbf{1}}^{Rx}=1|\mathbf{1}^{Rx}=0,\mathbf{1}^{Tx}=1)}}\\ &\geq\frac{\frac{\alpha}{1-\alpha}\frac{w(1-\alpha)}{\alpha}} {1+\frac{\alpha}{1-\alpha}\frac{w(1-\alpha)}{\alpha}} =\frac{w}{w+1} \end{align*} The inequality holds because the likelihood ratio is greater than $w(1-\alpha)/\alpha$ if $\hat{\mathbf{1}}^{Rx}=1$ and $x/(c+x)$ is an increasing function with respect to $x$. Conversely, the conditional probability $\Pr(\mathbf{1}^{link}=1|\hat{\mathbf{1}}^{link}=1)$ is well-defined if and only if $\Pr(\hat{\mathbf{1}}^{link}=1)\neq0$, which implies $\alpha\geq\alpha_C$.
\end{proof} In \textbf{Lemma \ref{l7}}, $\Pr(\mathbf{1}^{link}=1|\hat{\mathbf{1}}^{link}=1)$ can be interpreted as the probability of successful transmission in CRN, and the weighting factor in the Bayesian risk (\ref{e30}) can be determined by the constraint on the outage probability $P_{out}=\Pr(\mathbf{1}^{link}=0|\hat{\mathbf{1}}^{link}=1)$. That is, if a CRN maintains $P_{out}<\zeta$, then $w=(1-\zeta)/\zeta$. Therefore, the condition that allows CR-Tx to forward packets to CR-Rx (i.e., $\alpha$ belongs to the admissive region) guarantees the outage probability of the CR link. Further considering the proposed physical layer models, we can define a geographic region such that CR-Tx is allowed to forward packets to CR-Rx as long as CR-Rx lies in the region. \begin{defi} \textbf{Neighborhood} of CR-Tx $\mathcal{N}$ is $\{(r_{Rx},\theta_{Rx})|\alpha\geq\alpha_C\}$ or equivalently $\{(r_{Rx},\theta_{Rx})|\Pr(\mathbf{1}^{link}=1|\hat{\mathbf{1}}^{link}=1)\geq w/(w+1)\}$. \textbf{Coverage} of CR-Tx is the neighborhood of CR-Tx without PS. \end{defi} Please note that the coverage of CR-Tx is defined without the existence of PS, and the neighborhood is the effective area in real operation coexisting with PS. When we consider a path loss model between CR-Tx and CR-Rx, the coverage becomes a circularly shaped region. However, due to the hidden terminal problem as in Fig. \ref{Fig_1} and \ref{Fig_2}, where PS is either far from CR-Tx or is blocked by obstacles, the probability of collision at CR-Rx could increase and CR-Tx may be prohibited from forwarding packets to CR-Rx. Therefore, the neighborhood of CR-Tx shrinks from its coverage and is no longer circular. In addition, the hidden terminal problem is location-dependent; that is, PS is hidden to CR-Tx but not to CR-Rx in Fig. \ref{Fig_1} and \ref{Fig_2}. Thus, CR-Rx may still be allowed to forward packets to CR-Tx. From such observations, CR links are directional and can be mathematically characterized as follows.
\begin{defi} $CR_i$ is said to be \textbf{connective} to $CR_j$ if $CR_j$ is located in the neighborhood of $CR_i$, which is denoted by $\mathbf{1}^{link}_{ij}=1$. Otherwise, $\mathbf{1}^{link}_{ij}=0$ if $CR_i$ is not \textbf{connective} to $CR_j$. \end{defi} According to the above arguments, it is possible that CR-Rx is connective to CR-Tx but the reverse is not true. The mathematical conclusion is developed in the following, and is numerically verified in Fig.~\ref{Fig_4} and \ref{Fig_5} in Section VI. \begin{prop} \label{p6} The connective relation is asymmetric; that is, for two cognitive radios, $CR_i$ being connective to $CR_j$ does not imply $CR_j$ is connective to $CR_i$, or mathematically, $\mathbf{1}^{link}_{ij}=1\nRightarrow\mathbf{1}^{link}_{ji}=1$. \end{prop} \begin{proof} We analytically illustrate using Fig. \ref{Fig_1}, where $CR_i$ lies between $CR_j$ and PS-Tx, and PS-Tx is hidden to $CR_j$ but not to $CR_i$. Let $w=9$ to guarantee the outage probability of the CR link below $0.1$, and let $\Pr(\mathbf{1}^{PS}=1)=0.7$, i.e., the spectrum utilization of PS is only $30\%$. If $CR_i$ wants to forward packets to $CR_j$ (i.e. $CR_i$ is CR-Tx and $CR_j$ is CR-Rx), $CR_i$ can successfully detect the activity of PS, and $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=1)\approx1$ and $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=0)\approx0$. Therefore, $CR_i$ forwards packets to $CR_j$ only when $\mathbf{1}^{PS}=1$. In addition, since $CR_i$ is located in the transmission range of $CR_j$, $CR_j$ is located in the transmission range of $CR_i$ in a pure path loss model, and $\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1,\mathbf{1}^{PS}=1)\approx1$. Applying (\ref{e38}) and (\ref{e13}), we have $\alpha\approx1$ and $\mathbf{1}^{link}_{ij}=1$. On the other hand, when $CR_j$ wants to forward packets to $CR_i$, $CR_j$ becomes CR-Tx and $CR_i$ becomes CR-Rx.
Since PS-Tx is hidden to $CR_j$, the received signal power from PS at $CR_j$ is below the noise power, and $\mu_{CR}=\mu_0$ and $\sigma_{CR}^2=\sigma_0^2$ in (\ref{e42}) and (\ref{e43}). Therefore, $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=1)\approx1$ and $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=0)\approx1$. That is, $CR_j$ always perceives the spectrum as available and intends to forward packets to $CR_i$. However, when $\mathbf{1}^{PS}=0$, collisions occur at $CR_i$ and $\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1,\mathbf{1}^{PS}=0)\approx0$. Similarly, by (\ref{e38}) and (\ref{e13}), we have $\alpha\approx\Pr(\mathbf{1}^{PS}=1)=0.7<w/(w+1)=0.9$ and $\mathbf{1}^{link}_{ji}=0$. \end{proof} \textbf{Proposition \ref{p6}} mathematically suggests that links in CRN are generally asymmetric and even unidirectional, as argued in \cite{GSSinCRN:Centin09}. Therefore, traditional feedback mechanisms such as acknowledgement and automatic repeat request (ARQ) in the data link layer may not be supported in general. This challenge can be alleviated via cooperative schemes. The roles of a cooperative node in CR network operation thus include \begin{enumerate} \item Extending the neighborhood of CR-Tx to its coverage \item Ensuring bidirectional links in CRN (i.e., enhancing the probability of maintaining bidirectional links) \item Enabling feedback mechanisms for upper layers \end{enumerate} Since the neighborhood increases as $\alpha_C$ decreases, by \textbf{Lemma \ref{l6}}, the capability of cooperative schemes to extend the neighborhood increases as the number of cooperative nodes increases. Therefore, spectrum sensing capability mathematically determines the CRN topology. It also suggests the functionality of cooperative nodes in topology control \cite{GSSinCRN:Thomas07}\cite{GSSinCRN:Chen072} and network routing \cite{GSSinCRN:Centin09}, which is critical in CRN due to asymmetric links and heterogeneous network architecture \cite{GSSinCRN:Centin09}.
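The asymmetry argument in the proof of \textbf{Proposition \ref{p6}} can be reproduced numerically from (\ref{e38}); the sketch below uses the idealized conditional probabilities of that proof, and the helper name and parameter names are illustrative only.

```python
def alpha_dir(p_idle, p_tx_idle, p_tx_busy, p_rx_idle, p_rx_busy):
    """alpha from (e38) written with conditional probabilities:
    p_tx_s = Pr(1^Tx = 1 | 1^PS = s), p_rx_s = Pr(1^Rx = 1 | 1^Tx = 1, 1^PS = s)."""
    num = p_idle * p_tx_idle * p_rx_idle + (1 - p_idle) * p_tx_busy * p_rx_busy
    den = p_idle * p_tx_idle + (1 - p_idle) * p_tx_busy
    return num / den

w = 9.0        # keeps the outage probability of the CR link below 0.1
p_idle = 0.7   # Pr(1^PS = 1): PS is active only 30% of the time

# CR_i -> CR_j: PS is visible to CR_i, which transmits only when PS is idle
alpha_ij = alpha_dir(p_idle, 1.0, 0.0, 1.0, 0.0)
# CR_j -> CR_i: PS is hidden to CR_j, which always believes the band is free,
# while transmissions collide at CR_i whenever PS is active
alpha_ji = alpha_dir(p_idle, 1.0, 1.0, 1.0, 0.0)

link_ij = alpha_ij >= w / (w + 1.0)   # alpha ~ 1   : CR_i is connective to CR_j
link_ji = alpha_ji >= w / (w + 1.0)   # alpha ~ 0.7 : the reverse link is prohibited
```

The forward direction yields $\alpha\approx1\geq0.9$ while the reverse yields $\alpha\approx0.7<0.9$, reproducing $\mathbf{1}^{link}_{ij}=1$ and $\mathbf{1}^{link}_{ji}=0$.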
Here, we illustrate the impact of the correlation between spectrum availability at cooperative nodes on the neighborhood. Recall \textbf{Proposition \ref{p3}}, where we considered two cooperative nodes with $\beta_i=\gamma_i$, $\rho_i>0, i=0,1$, and $\mathbf{1}^{Co}_1\rhd\mathbf{1}^{Co}_2$. From (\ref{e37}), we have \begin{equation} \label{e39} \alpha_C= \begin{cases} \alpha_4^C &\text{if $\Delta<(2\beta_2-1)(1-\beta_1)$}\\ \alpha_2^C &\text{otherwise} \end{cases} \end{equation} Therefore, as $\Delta$ increases from $\Delta_{min}$, $\alpha_C=\alpha_4^C$ increases from 0 and achieves its maximum at $\Delta=(2\beta_2-1)(1-\beta_1)$. At this point, $\alpha_C=\alpha_4^C=\alpha_2^C=w(1-\gamma_1)/(\beta_1+w(1-\gamma_1))$, which is the critical boundary with node one alone. If $\Delta$ further increases to $\Delta_{max}$, $\alpha_C=\alpha_2^C$ decreases to 0. We conclude that positive correlation between $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$ shrinks the neighborhood compared to the independent case ($\rho_{12}=0$), unless the correlation is high enough, i.e., $\Delta>(2\beta_2-1)(1-\beta_1)/\beta_2$, obtained by solving $\alpha_2^C|_{\Delta}<\alpha_4^C|_{\Delta=0}$ according to (\ref{e39}). If one CR has a larger neighborhood area, it is expected to be connective to more CRs, to forward packets successfully with higher probability, and accordingly to support higher CRN throughput. This result offers a novel dimension for evaluating cooperative nodes. That is, different from link-level criteria such as minimum Bayesian risk or maximum reliability, which we mentioned in the last section, maximum neighborhood area is a novel criterion for selecting the best cooperative node from the viewpoint of network operation. \begin{prop} \label{p5} (\textbf{Optimum Selection of Cooperative Node}) For a CRN with a constraint on the outage probability $P_{out}<\zeta$, there are one CR and $K$ cooperative nodes, indexed by $k$.
The best cooperative node for the CR under the maximum neighborhood area criterion is \begin{equation} \label{e31} k_{opt}=\arg\underset{k}{\max}\mathcal{N}_A(k) \quad \text{with $w=(1-\zeta)/\zeta$} \end{equation} where $\mathcal{N}_A(k)$ represents the neighborhood area of the CR with the aid of the $k$th cooperative node. \end{prop} In a CRN, CRs may act as relay nodes to forward packets to the destination. Assume the destination is in the direction $\theta$ of a CR with respect to the PS. It is intuitive for the CR to forward packets in directions around $\theta$, say $\theta\pm\epsilon$. Let $\mathcal{N}_{\theta\pm\epsilon}=\{\mathcal{N}|\theta_{Rx}\in(\theta-\epsilon,\theta+\epsilon)\}$; then the best cooperative node is given by (\ref{e31}) with $\mathcal{N}$ replaced by $\mathcal{N}_{\theta\pm\epsilon}$. \section{Experiments} \subsection{General Spectrum Sensing} \subsubsection{Spectrum Sensing Performance} The performance of spectrum sensing, measured by the Bayesian risk (\ref{e30}), is plotted versus the probability of spectrum availability at CR-Rx, $\alpha$, in Fig. \ref{Fig_3}. We set the weighting factor $w=9$ ($w$ is defined in (\ref{e41})) to guarantee an outage probability of the CR link less than $0.1$. A larger Bayesian risk represents worse performance, because spectrum sensing then induces a higher probability of collisions at CR-Rx or of losing the opportunity to utilize the spectrum. We see that traditional spectrum sensing without considering spectrum availability at CR-Rx (i.e., $\mathbf{1}^{Rx}$) has a large Bayesian risk when $\alpha$ becomes small, because collisions usually occur when CR-Tx determines link availability only by localized spectrum availability at CR-Tx (i.e., $\mathbf{1}^{Tx}$).
On the other hand, by considering $\mathbf{1}^{Rx}$ in our general spectrum sensing (\ref{e13}) with known $\alpha$, the Bayesian risk decreases when $\alpha$ is less than $0.9$, which is the critical boundary $\alpha_C$ of $\alpha$, below which CR-Tx is prohibited from forwarding packets to CR-Rx; the risk then arises only from the loss of opportunity to utilize the spectrum. However, in practice, $\alpha$ is unknown and needs to be estimated by \textbf{Lemma \ref{l8}}. We set the observation depth (i.e., duration) $L=15$ and show the expected Bayesian risk of inference-based spectrum sensing (\ref{e13}) with respect to the observed sequence $\mathbf{1}^{Rx}[n-1],\ldots,\mathbf{1}^{Rx}[n-L]$. The performance degrades around $\alpha_C$ and is even worse than that of traditional spectrum sensing when $\alpha\geq\alpha_C$, because estimation error may cause the estimated $\alpha$ (\ref{e1}) to cross $\alpha_C$ and result in a different sensing rule; nevertheless, it is close to the performance with known $\alpha$. This verifies the effectiveness of inference-based spectrum sensing. Fig. \ref{Fig_3} also shows the Bayesian risk of cooperative sensing (\ref{e15}) under different sensing capabilities of a cooperative node, i.e., reliability $M_R$. We assume the statistical information $\{\alpha,\beta,\gamma\}$ can be perfectly estimated. The performance curve is composed of three line segments, as in (\ref{e15}), and shows performance improvement in the middle segment due to the aid of the cooperative node. However, in the right and left segments, the cooperative node becomes useless and the performance equals that of non-cooperative sensing. Comparing the sensing capabilities of cooperative nodes, the one with larger reliability is expected to have higher correlation between spectrum availability at CR-Rx and at the cooperative node (i.e., $\mathbf{1}^{Co}$) and to provide more information about $\mathbf{1}^{Rx}$; therefore, it achieves lower Bayesian risk and lower $\alpha_C$ (i.e., a larger admissive region).
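A one-line computation makes the effect of reliability on the admissive region concrete. The sketch below evaluates the single-cooperative-node critical boundary $\alpha_C=w(1-\gamma)/(\beta+w(1-\gamma))$ quoted earlier; the $(\beta,\gamma)$ values are illustrative, and the helper name is ours.

```python
# Illustrative evaluation of the single-cooperative-node critical boundary
# alpha_C = w(1 - gamma) / (beta + w(1 - gamma)), as quoted in the text.
# w = 9 follows the simulation setup; the (beta, gamma) pairs are examples.

def alpha_c(beta: float, gamma: float, w: float = 9.0) -> float:
    """Critical boundary of alpha with one cooperative node of reliability (beta, gamma)."""
    return w * (1.0 - gamma) / (beta + w * (1.0 - gamma))

w = 9.0
baseline = w / (w + 1.0)  # non-cooperative boundary, 0.9 for w = 9

# Independent case (beta + gamma = 1): the node carries no information about
# 1^Rx, and alpha_C collapses to the non-cooperative boundary w/(w+1).
assert abs(alpha_c(0.8, 0.2) - baseline) < 1e-12

# A more reliable node (beta = gamma = 0.8) lowers alpha_C,
# i.e., enlarges the admissive region.
print(alpha_c(0.8, 0.8))  # ≈ 0.692 < 0.9
```

The collapse to $w/(w+1)$ in the independent case matches the observation below that a node with $\beta+\gamma=1$ performs identically to sensing with known $\alpha$ alone.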
In addition, when $\beta+\gamma=1$ ($\beta=0.8$, $\gamma=0.2$), $\mathbf{1}^{Rx}$ and $\mathbf{1}^{Co}$ are independent and the performance is identical to that of spectrum sensing with known $\alpha$ in Fig. \ref{Fig_3}. \subsubsection{Impacts of Correlation between Spectrum Availability at Cooperative Nodes} We next investigate the performance of spectrum sensing with two cooperative nodes with respect to the correlation $\rho_{12}$ between spectrum availability at these two nodes (i.e., $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$). We set $\beta_1=\gamma_1=0.75$ and $\beta_2=\gamma_2=0.7$, as in the scenario of \textbf{Proposition \ref{p3}}, and depict the Bayesian risk in Fig. \ref{Fig_6}. Generally speaking, the Bayesian risk decreases (increases) as $\rho_{12}$ increases for high (low) $\alpha$. We also observe that $\alpha_C$ decreases when the number of cooperative nodes increases, and that $\alpha_C$ increases when $\rho_{12}$ increases unless the correlation is high enough. For example, if two cooperative nodes are close in location and $\rho_{12}=0.8$, $\alpha_C$ is less than that when $\rho_{12}=0$. It is also interesting to note that there exists a region of $\alpha$ in which the agreement of $\mathbf{1}^{Co}_1$ and $\mathbf{1}^{Co}_2$, instead of $\mathbf{1}^{Co}_1$ alone, determines CR link availability as in (\ref{e40}), and the Bayesian risk is less than that with $\mathbf{1}^{Co}_1$ alone. The results further suggest a trade-off between link-layer and network-layer performance when selecting cooperative nodes. That is, for one CR link with $\alpha>0.5$ (e.g., CR-Rx is close to CR-Tx or the spectrum utilization of the PS is low), a large $\rho_{12}$ is preferred to achieve low risk. However, from a network perspective, to achieve a large number of CR links that are admissive to CR-Tx (i.e., a large neighborhood and low $\alpha_C$) and thus high CRN throughput, a small $\rho_{12}$ is preferred.
\subsubsection{Robust Spectrum Sensing} For multiple cooperative nodes, six in our simulation, we show the performance of cooperative spectrum sensing with limited statistical information $\mathbf{Q}^{(s)}_{k,K}$, $s=0,1$, due to the limited sensing duration. We first find the least-favorable joint pmf $\mathbf{P}^{(s)}_{opt}$, $s=0,1$, by (\ref{e35}) and then compute the corresponding Bayesian risk, which is shown in Fig. \ref{Fig_7} under different orders $k$ of known marginal pmfs (i.e., capability of observation). That is, CR-Tx only acquires the pmf of $k$ out of the six cooperative nodes. The risk is compared to that of the optimum sensing rule (\ref{e14}) and that under the assumption of independent spectrum availability (\ref{e34}). Obviously, the Bayesian risk decreases and approaches that of the optimum case as the order $k$ of known marginal pmfs increases, because more information is acquired to generate $\mathbf{P}^{(s)}_{opt}$, $s=0,1$, closer to the true $\mathbf{P}^{(s)}_{K}$, $s=0,1$. We observe that when the order $k$ is greater than 3, robust spectrum sensing outperforms the traditional independence assumption. Therefore, if CR-Tx would like to select six cooperative nodes, it only requires statistical information about spectrum availability among three out of the six, i.e., $\mathbf{Q}^{(s)}_{3,6}$, $s=0,1$, to achieve better performance than the case based on the reliability criterion. \subsection{Neighborhood of CR-Tx} \subsubsection{Without Obstacles} In Fig. \ref{Fig_4}, we illustrate the neighborhood of CR-Tx ("$+$" in the figure) without blocking. The neighborhood boundary with/without a cooperative node ("$\circ$" in the figure) is depicted by a thick and a thin line, respectively. The parameters are set as follows: $\mu_0=0$, $\sigma_0^2=1$, $\sigma_S^2=8$, $K_0=10$, $a=3$, $L_0=3$, $\tau_{Tx}=\tau_{Co}=3$, and $\Pr(\mathbf{1}^{PS}=1)=0.6$. In Fig. \ref{Fig_4}(a), the PS ("$\ast$" in the figure) is placed near CR-Tx, at $(0.7,0)$.
We observe that CR-Tx almost perfectly detects the state of the PS, i.e., $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=1)\approx1$ and $\Pr(\mathbf{1}^{Tx}=1|\mathbf{1}^{PS}=0)\approx0$, and $\alpha\approx\Pr(\mathbf{1}^{Rx}=1|\mathbf{1}^{Tx}=1,\mathbf{1}^{PS}=1)$ by (\ref{e38}). Therefore, the neighborhood of CR-Tx approaches its coverage, and the cooperative node is not necessary in this case from the viewpoint of network operation. However, when the PS is far from CR-Tx, at $(1.7,0)$, as shown in Fig. \ref{Fig_1}, the neighborhood on the PS side shrinks and is no longer circularly shaped, because the PS is hidden from CR-Tx and hence the probability of collision at CR-Rx increases when $\mathbf{1}^{PS}=0$. Fig. \ref{Fig_4}(b)$\sim$(d) illustrate the neighborhood under different locations of the cooperative node. We observe that the neighborhood area decreases when the cooperative node moves away from the PS, and there even exists a region where cooperative sensing cannot help. Therefore, the cooperative node in Fig. \ref{Fig_4}(b) is the best among these three nodes according to the maximum neighborhood area criterion in \textbf{Proposition \ref{p5}}. We present an example of the existence of a unidirectional link in a CRN. In Fig. \ref{Fig_4}(b), assume one CR is located at $(1,0)$. Obviously, CR-Tx is not connective to this CR and is therefore prohibited from forwarding packets to it. However, by Fig. \ref{Fig_4}(a), the CR is connective to CR-Tx, which makes the link unidirectional (only from the CR to CR-Tx). As in \textbf{Proposition \ref{p6}}, this also shows the asymmetric connective relation even under rather ideal radio propagation. With the aid of a cooperative node located at $(0.4,0.3)$, the link returns to a bidirectional link. \subsubsection{With Obstacles} Alternatively, we consider the effects of shadowing due to blocking, as illustrated in Fig. \ref{Fig_2}. We set the shadowing parameter $b_{Tx}=25$ and the obstacle-size parameter $\kappa=0.3$ and $0.7$ in Fig. \ref{Fig_5}(a)(b) and Fig.
\ref{Fig_5}(c)(d), respectively. We observe that a small obstacle size (i.e., small $\kappa$) can result in a more substantial shrinkage of the neighborhood than a large obstacle size (i.e., large $\kappa$). The reason is that, if $\kappa$ is small, only a small region around CR-Tx falls in deep shadowing and the state of the PS can be successfully detected outside that region. Therefore, this leads to a high probability of collision at CR-Rx when $\mathbf{1}^{PS}=0$. On the other hand, if $\kappa$ is large, CRs are likely separated from the PS by obstacles, which results in a large "distance" between CR and PS. Here, "distance" is measured by received signal power \cite{GSSinCRN:Tu09}\cite{GSSinCRN:Chen07}. Comparing the capability of a cooperative node, the one under small $\kappa$ is well able to recover the neighborhood to its coverage even when the node is on the opposite side of the PS. However, for large $\kappa$, the cooperative node may also be in deep shadowing and becomes useless for recovering the neighborhood of CR-Tx. \section{Conclusion} In this paper, we showed that CR link availability should be determined by spectrum availability at both CR-Tx and CR-Rx, which may not be identical due to the hidden terminal problem (Fig. \ref{Fig_1} and \ref{Fig_2}). In order to fundamentally explore spectrum sensing at the link level and its impacts on network operation, we established an indicator model of CR link availability and applied statistical inference to predict/estimate the spectrum availability unknown at CR-Tx, since there is neither a centralized coordinator nor information exchange between CR-Tx and CR-Rx in advance. We then expressed conditions for CR-Tx to forward packets to CR-Rx under a guaranteed outage probability. These conditions, along with physical channel models, define the neighborhood of CR-Tx, which is no longer circularly shaped like the coverage. This results in asymmetric or even unidirectional links in a CRN, as illustrated in Section VI.
The impairment of CR links can be alleviated via cooperative schemes. Therefore, spectrum sensing capability determines the network topology and thus the throughput of the CRN. Several factors with impacts on spectrum sensing were analyzed, including: \begin{enumerate} \item Correlation of spectrum availability at cooperative nodes \item Capability of observation at CR-Tx (i.e., available statistical information at CR-Tx) \item Locations of cooperative nodes and the environment (i.e., obstacles) \end{enumerate} Furthermore, the limits of cooperative schemes were also addressed at the link level and the network level. In addition, measuring sensing capability and then selecting cooperative nodes is an important issue, because we would like to minimize information exchange in order to increase spectrum utilization. Criteria from link-level (maximum reliability or minimum Bayesian risk) and network-level (maximum neighborhood area) perspectives were accordingly proposed. We numerically demonstrated the existence of trade-offs in designing systems at different layers. In addition, robust spectrum sensing was proposed to deal with local and partial information due to the lack of centralized coordination and the limited sensing duration in a CRN. More useful results in CRN extending from this research can be expected in future works.
\section{Introduction} \IEEEPARstart{E}{nergy} poverty is a common problem in many areas around the world. Fossil fuel has deleterious effects on the environment and is a major concern in the current situation of global warming. A viable alternative for providing clean energy to remote places is the DC microgrid system. The number of people lacking access to electricity is dwindling in Latin America, North Africa, the Middle East, South Asia, China and East Asia, but it is still on the increase in some places such as Sub-Saharan Africa. This population will remain unchanged through 2030 based on the International Energy Agency (IEA) projection \cite{IEA_2011}. The price of PV components has also been falling over the last few years. In addition, the more compact and modular design of PV has made the system more acceptable for remote and rural regions. Decentralized electricity generation is imperative because extension of the grid to remote regions demands extravagant connection costs \cite{Alstone_2015}. Solar lanterns have been used to harness solar energy for lighting and battery charging, while solar home systems have addressed the electricity needs of individual households in rural and remote regions \cite{Komatsu_2011}. However, solar lanterns and solar home systems cannot take advantage of multiplexing power generation or storage among several interconnected households. A microgrid interconnects multiple households and manages one or more generation sources. A microgrid can be part of a larger grid or run in isolated islanded mode. While a solar home system needs to be designed for the peak load of the household, a microgrid can be designed for the peak demand of the community, which is more cost efficient \cite{Bardouille_2012}. A diesel generator can be used as another generation source when the irradiation level is unsatisfactory. Grid connectivity can be considered as yet another source of generation \cite{Boroyevich_2010}.
The fact that a microgrid can manage power across interconnected households inspires confidence in its usage. A DC microgrid uses distributed Point-Of-Load (POL) converters, which are much more efficient than always-on central inverters because a plethora of DC appliances are utilized in remote and rural regions. In an AC microgrid, inverters experience grid-wide brownouts when correlated peaks are observed on the load \cite{Quetchenbach_2013}. A DC microgrid with local storage provides energy individually to each household, negating this effect. Again, the overall efficiency suffers in case of low demand in AC microgrids, whereas a distributed storage system lets the power converters operate closer to the average demand. The DC-AC conversion losses in the inverters and the central battery bank are larger than those of the DC-DC conversion and distributed battery system used in the DC microgrid \cite{Madduri_2015}-\cite{Brent_2010}. In this paper, a DC microgrid is simulated where an average-model boost DC-DC converter is guided by MPPT, and an FB converter is then used to bring the voltage down to the battery level. The system is monitored for distinct solar insolation levels along with different load ratings and is found to work at similar efficiency. The efficiency is also found to be stable under various loading conditions of the PMUs and different switching frequencies of the FB converter. The overall scalable microgrid architecture is evaluated with regard to power sharing in a set of two PMUs. Each PMU supports a battery intended to deliver consistent power to attached loads such as home appliances. This paper is concerned with equal and unequal power sharing between two PMUs to assess power efficiency and reliable power transmission from the source converter to the load side. In the follow-up, an adaptive look-up table has been generated comprising the evaluated power quantities and performance parameters for different switching frequencies and battery voltage levels.
The proposed microgrid architecture, the PMU circuit configuration and its operation, simulated profiles of the microgrid's operation, the generated look-up table, and a brief explanation of the overall implemented system are articulated in this paper section by section. \begin{figure}[ht!] \centering \includegraphics[width = 3 in, height = 1.5 in] {block_diagram.png} \caption{Proposed DC microgrid architecture in MATLAB/ Simulink} \label{Block Diagram of DC microgrid} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width = 3.5 in, height = 1.3 in] {block_diagram_PMU.png} \caption{PMU circuit structure in MATLAB/ Simulink} \end{figure} \section{Proposed DC Microgrid Architecture} The PV array acts as a constant current source. Based on the irradiation level, the perturb-and-observe method tracks the maximum power point and generates the duty cycle required for the average-model boost DC-DC converter to step up the voltage. The PMU consists of an FB converter in which MOSFETs are used as the switching devices. The main purpose is to monitor the equal and unequal load sharing of the PMUs and then generate an adaptive look-up table with which the power source can be utilized accordingly. \subsection{Perturb and Observe Algorithm based MPPT Control} Maximum Power Point Tracking (MPPT) is a methodology used in photovoltaic (PV) converters to continuously adjust the impedance seen by the solar array so as to keep the PV system operating at, or close to, the peak power point of the PV panel under varying conditions, such as changing solar irradiance, temperature, and load. The algorithm senses the voltage $V(k)$ and current $I(k)$ to measure the instantaneous power $P(k)$ at the $k^{th}$ sample. Then it compares it with the previous power sample $P(k-1)$.
If the two power samples differ, the voltage sample $V(k)$ is compared with its previous sample $V(k-1)$, and the duty cycle $D_b$ of the boost converter is adjusted accordingly to ensure that the system operates at the ``maximum power point'' (the peak of the power-voltage curve). The algorithm accounts for factors such as variable irradiance (sunlight) and temperature to ensure that the PV system generates maximum power at all times. \begin{figure}[ht!] \centering \includegraphics[width =1.5 in, height = 0.7 in] {boost_average} \caption{Boost converter average model} \label{Average Model for Boost Converter} \end{figure} \subsection{Boost Converter Average Model} The average model for the boost converter is designed with a controlled voltage source at the input and a controlled current source at the output. In Fig. \ref{Average Model for Boost Converter}, $V_a$ and $I_a$ are the input voltage and current, whereas $V_{dc}$ and $I_{dc}$ are the output voltage and current of the boost converter. The equations for $V_a$ and $I_{dc}$ are \begin{equation}\label{eq2-} V_a(k)=(1-D_b)V_{dc}(k-1) \end{equation} \begin{equation}\label{eq1-} I_{dc}(k)=\frac{(1-D_b)V_{dc}(k-2)I_a(k-1)}{2V_{dc}(k-2)-V_{dc}(k-3)} \end{equation} \begin{figure}[ht!] \centering \includegraphics[width = 3 in, height = 1.6 in] {PMU_ckt2} \caption{Circuit diagram of the PMU Block and the control switch pattern} \label{Circuit diagram of the PMU Block} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width = 3 in, height = 1 in] {control_switch2} \caption{Inductive voltage pattern considering ideal active and passive switches} \label{CSemiconductor Losses at switch 1 and 2} \end{figure} \begin{figure}[ht!]
\centering \includegraphics[width = 3 in, height = 2.8 in] {semiconductor_losses_new} \caption{PMU circuit operation analysis considering non-ideal active and passive switches; (a) PMU in switch position 1 and (b) PMU in switch position 2} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width = 3 in, height = 1 in] {control_switch3} \caption{Inductive voltage pattern considering non-ideal active and passive switches} \end{figure} \subsection{PMU Circuit Operation Analysis} \subsubsection{Ideal Active and Passive Switches} The MOSFETs in the PMUs are the active switches, and each has an internal diode in parallel with an RC snubber circuit. When a gate signal is applied to a MOSFET, it presents its on-resistance; when the gate signal is turned off, the current flows through the anti-parallel diode. $V_g$ is the input voltage of the PMUs, and the control switches are delineated as 1 and 2 inside a circle in Fig. \ref{Circuit diagram of the PMU Block}. A linear transformer is used to lower the voltage to the desired level as well as to provide electrical isolation. The four diodes $D_1$, $D_2$, $D_3$ and $D_4$ are the passive switches. $L_{lk}$ models the leakage inductance and has a damping effect on the circuit. The change in current through the output inductor $L_{out}$ and the change in voltage across the output capacitor $C_{out}$ are small compared to their nominal values over a switching period \cite{Vlatkovic_1992}-\cite{Madduri_2015}. Considering the control switch pattern in Fig. \ref{Circuit diagram of the PMU Block}, with $D+D\textprime=1$: at switch position 1, during the $DT_s$ subinterval, \begin{equation}\label{eq-1} V_{L_1}=V_g-V_p\hspace{5pt} and\hspace{5pt} V_{L_2}=V_s-V \end{equation} and at switch position 2, during the $D\textprime T_s$ subinterval, \begin{equation}\label{eq-2} V_{L_1}=-V_g-V_p\hspace{5pt} and\hspace{5pt} V_{L_2}=-V_s-V \end{equation} \begin{figure}[ht!]
\centering \includegraphics[width = 2.5 in, height = 1.2 in] {equivalent_circuit} \caption{Input-to-output port of the PMU circuit acting in non-ideal switching mode} \end{figure} \begin{figure*}[ht!] \centering \includegraphics[width = 6.5 in, height = 3 in] {100KHz_12V_Combined} \caption{Simulation results for $12V$ batteries with the same and different capacities in two separate PMUs at a $100 kHz$ switching frequency for the active switches. The irradiance pattern was kept identical for each case. The green dotted line represents the case in which the two PMUs have the same battery capacity, while the blue solid line depicts the case with two different battery capacities.} \label{100KHz_12V_Combined} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[width = 6.5 in, height = 3 in] {50KHz_36V_Combined} \caption{Simulation results for $36V$ batteries with the same and different capacities in two separate PMUs at a $50 kHz$ switching frequency for the active switches. The irradiance pattern was kept identical for each case.
The green dotted line represents the case in which the two PMUs have the same battery capacity, while the blue solid line depicts the case with two different battery capacities.} \label{50KHz_36V_Combined} \end{figure*} In both cases, the small-ripple approximation is applied, and the transformer imposes \begin{equation}\label{eq-3} V_p=nV_s \end{equation} Using volt-second balance for the inductors $L_{lk}$ ($L_1$) and $L_{out}$ ($L_2$), and taking equations \eqref{eq-1}, \eqref{eq-2} and \eqref{eq-3}, \[(V_g-nV_s)D+(-V_g-nV_s)D\textprime=0\] \begin{equation}\label{eq-4} V_s=\frac{1}{n}V_g(2D-1) \end{equation} \[(V_s-V)D+(-V_s-V)D\textprime=0\] \[V_s(2D-1)=V\] From \eqref{eq-4}, the voltage gain is \begin{equation}\label{eq-5} M(D)=\frac{V}{V_g}=\frac{1}{n}(2D-1)^2 \end{equation} \subsubsection{Active and Passive Switches with Semiconductor Losses} At switch position 1, \begin{equation}\label{eq-6} V_{L_1}=V_g-V_p-2I_1R_{on} \end{equation} \begin{equation}\label{eq-7} V_{L_2}=V_s-V-2V_D-2I_2R_D \end{equation} At switch position 2, \begin{equation}\label{eq-8} V_{L_1}=-V_g-V_p-2I_1R_{on} \end{equation} \begin{equation}\label{eq-9} V_{L_2}=-V_s-V-2V_D-2I_2R_D \end{equation} Now, using volt-second balance for $L_1$ and $L_2$ with $V_p=nV_s$ and $I_2=nI_1$, \[(V_g-V_p-2I_1R_{on})D+(-V_g-V_p-2I_1R_{on})D\textprime=0\] \[V_g(2D-1)-2I_1R_{on}=nV_s\] \begin{equation}\label{eq-10} V_s=\frac{1}{n}[V_g(2D-1)-2I_1R_{on}] \end{equation} and \[(V_s-V-2V_D-2I_2R_D)D+(-V_s-V-2V_D-2I_2R_D)D\textprime=0\] \[V_s(2D-1)=V+2V_D+2I_2R_D\] \[\frac{1}{n}[V_g(2D-1)-2I_1R_{on}](2D-1)=V+2V_D+2I_2R_D\] \begin{equation}\label{eq-11} \frac{1}{n}V_g(2D-1)^2=\frac{2}{n}I_1R_{on}(2D-1)+V+2V_D+2nI_1R_D \end{equation} From \eqref{eq-11}, \[\frac{1}{n}V_g(2D-1)^2[1-\frac{2I_1R_{on}}{V_g(2D-1)}-\frac{2nV_D}{V_g(2D-1)^2}-\frac{2n^2I_1R_D}{V_g(2D-1)^2}]=V\] The voltage gain is then \begin{dmath}\label{eq-12} M\textprime(D)=\frac{V}{V_g}=M(D)[1-\frac{2I_1R_{on}}{V_g(2D-1)}-\frac{2nV_D}{V_g(2D-1)^2}-\frac{2n^2I_1R_D}{V_g(2D-1)^2}] \end{dmath} \section{Results and
Analysis} \begin{table*}[] \centering \caption{Performance Analysis of the Microgrid Architecture for Different Switching Frequencies and Battery Ratings} \label{simulation_data_table} \begin{tabular}{|l|l||l|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Switching & Battery & Irradiance & \multicolumn{2}{l|}{PV Array} & \multicolumn{3}{l|}{Source Converter} & MPPT & \multicolumn{3}{l|}{PMU1} & \multicolumn{3}{l|}{PMU2} & Efficiency\\ \cline{4-5}\cline{6-8}\cline{10-12}\cline{13-15} Frequency & & (W/m$^2$) & V & I & V & I & $P_{mean}$ & Duty & V & I & $P_{mean}$ & V & I & $P_{mean}$ & (Percent) \\ & & & & & & & & Cycle & & & & & & & \\\hline\hline 100 kHz & 12V 5.4Ah & 1000 & 51.88 & 5.798 & 99.09 & 3.033 & 300.8 & 0.4768 & 13.02 & 10.81 & 140.7 & 13.02 & 10.81 & 140.7 & 93.55 \\\cline{2-16} & 12V 5.4Ah & 1000 & 51.88 & 5.798 & 98.28 & 3.059 & 300.8 & 0.4724 & 12.96 & 9.485 & 122.9 & 12.75 & 12.40 & 158.1 & 93.41 \\ & 12V 10.8 Ah & & & & & & & & & & & & & & \\\cline{2-16} & 36V 5.4Ah & 1000 & 51.87 & 5.799 & 115.6 & 2.600 & 300.8 & 0.5516 & 37.61 & 3.757 & 141.3 & 37.61 & 3.757 & 141.3 & 93.95 \\\cline{2-16} & 36V 5.4Ah & 1000 & 51.88 & 5.798 & 114.9 & 2.616 & 300.8 & 0.5486 & 37.57 & 3.511 & 131.9 & 37.12 & 4.056 & 150.6 & 93.92 \\ & 36V 10.8 Ah & & & & & & & & & & & & & & \\\hline\hline 50 kHz & 12V 5.4Ah & 1000 & 51.89 & 5.797 & 99.94 & 3.007 & 300.8 & 0.4810 & 13.02 & 10.83 & 141.0 & 13.02 & 10.83 & 141.0 & 93.55 \\\cline{2-16} & 12V 5.4Ah & 1000 & 51.92 & 5.793 & 98.82 & 3.042 & 300.8 & 0.4749 & 12.95 & 9.363 & 121.3 & 12.75 & 12.56 & 160.2 & 93.41 \\ & 12V 10.8 Ah & & & & & & & & & & & & & & \\\cline{2-16} & 36V 5.4Ah & 1000 & 51.88 & 5.797 & 106.3 & 2.827 & 300.8 & 0.5123 & 37.65 & 3.721 & 140.1 & 37.65 & 3.721 & 140.1 & 93.95 \\\cline{2-16} & 36V 5.4Ah & 1000 & 51.92 & 5.793 & 106.4 & 2.824 & 300.8 & 0.5122 & 37.65 & 3.728 & 140.4 & 37.12 & 3.780 & 140.4 & 93.92 \\ & 36V 10.8 Ah & & & & & & & & & & & & & & \\\hline \end{tabular} \end{table*} The DC microgrid structure is
simulated using MATLAB 2017a Simulink. The SunPower SPR-315E-WHT-D module, which follows the NREL (National Renewable Energy Laboratory) System Advisor Model (Jan. 2014), is used as the PV module; it takes two inputs, sun irradiance and cell temperature. At $1000 W/m^2$ and $40$ deg. Celsius, the maximum power point is $300.8 W$. Inside the source converter block, an MPPT control and a boost average converter are employed to achieve the maximum power point as well as to boost the voltage before transmission. The MPPT control uses the Perturb and Observe (P\&O) algorithm, which takes the instantaneous voltage and current of the solar module as inputs. There is an enable pin in the MPPT block which is set to 1 to allow continuous operation of the algorithm. Four other control parameters of the P\&O algorithm are the initial value of the $D_b$ output ($D_{b_{init}}$), the upper limit for $D_b$ ($D_{b_{max}}$), the lower limit for $D_b$ ($D_{b_{min}}$), and the increment value used to increase/decrease the duty cycle ($\Delta D_b$), where $D_b$ is the boost converter duty cycle. The boost converter uses the output duty cycle of the P\&O block and generates the desired voltage by realizing \eqref{eq1-} and \eqref{eq2-}. A constant of $10^{-6}$ is used to avoid the divide-by-zero condition in \eqref{eq1-}. Inside the PMUs, the values of the leakage inductance $L_{lk}$, the output inductance $L_{out}$ and the output capacitance are $26\mu H$, $10\mu H$ and $7500 \mu F$, respectively. The nominal power of the linear transformer is set to 150 VA in the case of equal loads, and its frequency is adjusted according to the switching frequency of the active switches. The MOSFET on-resistance $R_{on}$ and the diode resistance $R_D$ have the values $0.1 \Omega$ and $0.01 \Omega$, respectively. A lead-acid battery is used with a nominal voltage of $12V$, rated capacity of $5.4Ah$, initial state of charge of $75\%$, and battery response time of $50min$.
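For readers who want to experiment with the control law, the following Python sketch mimics the P\&O duty-cycle update with limit parameters analogous to $D_{b_{min}}$, $D_{b_{max}}$ and $\Delta D_b$ listed above. It is a toy re-implementation against a crude analytic source model, not the Simulink block itself; all numeric values here are illustrative.

```python
# Toy re-implementation of the P&O duty-cycle update (illustrative only; the
# actual controller is the MATLAB/Simulink MPPT block described in the text).

def po_step(v, i, v_prev, p_prev, d, d_min=0.0, d_max=0.95, d_step=0.005):
    """One P&O iteration: return (new duty cycle, voltage, power) to carry over."""
    p = v * i
    if p != p_prev:
        # Keep the perturbation direction if power rose together with voltage,
        # otherwise reverse it (standard hill climbing on the P-V curve).
        if (p > p_prev) == (v > v_prev):
            d -= d_step  # in this toy model, a lower duty cycle raises the PV voltage
        else:
            d += d_step
    return min(max(d, d_min), d_max), v, p

# Crude source model: V = 100(1 - D) and I = 8 - 0.08 V, so the power
# P = V * I peaks at V = 50. Start away from the peak and let P&O climb.
d = 0.7
v_prev = 40.0
p_prev = v_prev * (8.0 - 0.08 * v_prev)
for _ in range(100):
    v = 100.0 * (1.0 - d)
    i = max(0.0, 8.0 - 0.08 * v)
    d, v_prev, p_prev = po_step(v, i, v_prev, p_prev, d)
# v_prev now oscillates within one perturbation step of the 50 V peak.
```

The steady-state oscillation around the maximum power point is inherent to P\&O, and its amplitude is set by the perturbation step $\Delta D_b$.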
The irradiation was varied from $1000 W/m^2$ to $500 W/m^2$, from $500 W/m^2$ to $50W/m^2$, and then back from $50W/m^2$ to $1000 W/m^2$ for $12V$ and $36V$ batteries with the same and different battery capacities at the two switching frequencies of $50 kHz$ and $100 kHz$. Fig. \ref{100KHz_12V_Combined} shows the simulation results for $12V$ batteries with the same and different capacities in two separate PMUs at a $100 kHz$ switching frequency for the active switches. The output voltage of the source converter shows only minor change under the fluctuation of the irradiance, which is very encouraging. The average power is divided equally between the PMUs when the battery parameters are uniform. The battery with the higher rated capacity draws higher power, resulting in unequal power sharing between the two PMUs, although the source converter output voltage is almost the same as in the former case. The average power from the PV array and the sum of the average powers of the PMUs are very closely matched in both cases and proportionate to the irradiance pattern. An interesting observation is that when the irradiance jumps back to $1000 W/m^2$ from $50W/m^2$, the average power is slow to return to $300.8 W$ toward the end. Table \ref{simulation_data_table} shows the simulation results for the two battery voltages at the two switching frequencies. For $36V$ batteries, the transformer ratings were adjusted to maintain the output of the PMUs at the desired level. The MPPT duty cycle moved to a different point, resulting in a slightly different source converter output voltage of around $110V$, while keeping the average power at $300.8W$. The efficiency estimated in all the different cases is more than $93\%$, which ensures the stability of the system at different battery voltage levels and switching frequencies. \section{Conclusion} The design presented in this paper is a simplified version of the scalable DC microgrid architecture.
The control scheme of the load converter and the load side of the home PMU configuration are different from those of the generic structure proposed in \cite{Madduri_2015}. Basic PWM signals are used instead of phase-shifted PWM to control the active switches in the PMUs. The look-up table generated from multiple simulations is an attempt to establish a one-way communication method for smart load sharing among different PMUs. One-way communication will reduce the complexity of the electrical circuits and can prove to be a cost-effective solution, which can play a major role in popularizing the DC microgrid in developing countries. Moreover, phase-shifted PWM can be implemented for zero-voltage switching of the converter and for a practical implementation of the system.
\section{Introduction} \label{intro} Inflationary model building is notoriously hard due to the difficulty of protecting the flatness of the inflationary direction against potentially large quantum corrections of different origin. This problem is particularly severe for models with observable tensor modes since they generically require the inflaton to travel over a trans-Planckian distance during inflation \cite{Lyth:1996im}. The most promising way out is based on the presence of symmetries which forbid dangerous corrections to the inflaton potential. These symmetries can only be postulated at the effective field theory level but can instead be derived if inflation is embedded within the framework of a consistent UV theory like string theory \cite{McAllister:2007bg, Baumann:2009ni, Cicoli:2011zz, Burgess:2013sla}. From this point of view, the inflaton is the pseudo Nambu-Goldstone boson associated with these symmetries which need to be slightly broken in order to generate the inflaton potential. The small breaking parameter suppresses higher dimensional operators which can spoil the flatness of the inflaton potential. The two main symmetries used for inflationary model building are compact axionic shifts \cite{Westphal:2014ana,Pajer:2013fsa} and non-compact rescaling symmetries for volume moduli \cite{Burgess:2014tja}. These symmetries have allowed the realisation of several very promising mechanisms to drive inflation in string compactifications. However some of these mechanisms rely mainly on local constructions which lack a full global realisation in terms of moduli stabilisation. This is crucial to have full control over the inflationary dynamics since it determines the properties of all directions orthogonal to the inflaton and fixes all the mass and energy scales in the model.
On top of moduli stabilisation, other important issues needed to trust inflationary model building are the study of the post-inflationary cosmological history starting with reheating \cite{Green:2007gs, Brandenberger:2008kn, Barnaby:2009wr, Cicoli:2010ha} and the interplay between inflation and other phenomenological implications of the same model like the supersymmetry breaking scale \cite{Conlon:2008cj, He:2010uk, Antusch:2011wu, Buchmuller:2014pla}, the nature of dark matter \cite{Acharya:2008bk, Allahverdi:2013noa, Aparicio:2015sda} and dark radiation \cite{Cicoli:2012aq, Higaki:2012ar, Hebecker:2014gka, Cicoli:2015bpq} or the origin of the matter-antimatter asymmetry \cite{Kane:2011ih, Allahverdi:2016yws}. Some string inflation models admit a global realisation with moduli stabilisation but only within the context of an effective supergravity description which assumes the existence of a particular Calabi-Yau background and a suitable form of the superpotential and the K\"ahler potential that define the theory. This is for example the state-of-the-art of fibre inflation models \cite{Burgess:2016owb} where inflation is driven by a K\"ahler modulus and the final prediction for the tensor-to-scalar ratio is between $0.01$ \cite{Cicoli:2016chb} and $0.006$ \cite{Cicoli:2008gp}. In this case, the underlying Calabi-Yau manifold is assumed to have a fibration structure so that the overall internal volume is controlled by two cycles, the base and the fibre. At leading perturbative order beyond the tree-level approximation, only the overall volume develops a mass while the fibre modulus remains exactly massless. This makes this field a perfect candidate to drive inflation. The perturbative corrections which depend on the fibre modulus and generate its potential are subdominant because of supersymmetry \cite{Cicoli:2007xp}.
Moreover the flatness of the fibre modulus potential is protected by an effective shift symmetry associated with the underlying no-scale structure of type IIB compactifications. Even if this symmetry is approximate, since it is broken by loop effects, it is still sufficient to suppress higher dimensional operators \cite{Burgess:2014tja}. In this paper we make these inflationary models more robust by embedding them in concrete Calabi-Yau manifolds with an explicit choice of orientifold involution and brane setup which is globally consistent and can, at the same time, reproduce the form of the inflationary potential of fibre inflation models. We first derive the topological conditions on the underlying Calabi-Yau manifold which are imposed by the requirement of a successful moduli stabilisation and inflationary mechanism. This singles out Calabi-Yau manifolds with at least $h^{1,1}=3$ that feature a K3 or $T^4$ fibration over a $\mathbb{P}^1$ base and a shrinkable rigid (del Pezzo) divisor \cite{Cicoli:2008va, Cicoli:2011it}. We therefore perform a systematic scan through the Kreuzer-Skarke list of toric Calabi-Yau three-folds \cite{Kreuzer:2000xy} to find those with the required structure and find $45$ different examples. We then choose different orientifold involutions and D3/D7 brane setups which satisfy tadpole cancellation conditions and have the right structure to generate the typical potential of fibre inflation models via both string loop \cite{Berg:2005ja, Berg:2007wt, Cicoli:2007xp} and higher derivative $\alpha'$ corrections to the effective action \cite{Ciupke:2015msa}. In the end we perform a detailed analysis of these global models showing that all K\"ahler moduli can be fixed inside the K\"ahler cone and inflation can take place successfully. This is the first viable realisation of fibre inflation models in explicit Calabi-Yau orientifold constructions which are globally consistent. 
This definitely represents an important step forward in our understanding of string inflationary models even if further work in the future is needed. In fact, we shall show that Calabi-Yau examples with $h^{1,1}=3$ are not rich enough to allow non-trivial gauge fluxes on D7-branes which would generate a chiral visible sector. The minimal case which can potentially lead to a global embedding of fibre inflation with a visible chiral sector requires $h^{1,1}=4$. We leave the study of this case for the future. This paper is organised as follows. In Sec. \ref{sec2}, after a brief review of the main features of fibre inflation scenarios, we outline the strategy that we shall follow to build viable global models. We then describe the topological and model building requirements of our constructions and present the results of our search through the Kreuzer-Skarke list of toric Calabi-Yau three-folds. We finally explain how we choose the orientifold involution and brane setup and how we compute the resulting string loop corrections to the 4D scalar potential. In Sec. \ref{ExplEx} we then present concrete global models in explicit Calabi-Yau examples with $h^{1,1}=3$. More explicit global examples are described in App. \ref{App}. \section{Global embedding of fibre inflation} \label{sec2} Before presenting our strategy to realise a viable global embedding of fibre inflation models, let us start by reviewing the main features of these inflationary models. \subsection{A brief review} \label{Review} Successful realisations of fibre inflation models require `weak Swiss-cheese' Calabi-Yau (CY) 3-folds whose volume form does not completely diagonalise and in general looks like: \begin{equation} \vo = f_{3/2}\left(\tau_j\right) - \sum_{i=1}^{N_{\rm small}} \lambda_i \tau_i^{3/2}\qquad\text{with}\qquad i\neq j=1,...,h^{1,1} - N_{\rm small}\,, \label{WSC} \end{equation} where $f_{3/2}\left(\tau_j\right)$ is a homogeneous function of degree $3/2$. 
After fixing at semi-classical level all complex structure moduli and the dilaton by turning on background fluxes \cite{Giddings:2001yu}, the K\"ahler moduli-dependent K\"ahler potential and superpotential are taken of the form: \begin{equation} K = -2\ln \vo + K_{\alpha'} + K_{g_s} \qquad \text{and} \qquad W = W_0 + \sum_{i=1}^{N_{\rm small}} A_i\,e^{-a_i T_i}\,, \end{equation} where $\vo$ is the Einstein-frame CY volume in string units, $K_{\alpha'}$ is an $\mathcal{O}(\alpha'^3)$ correction which depends just on the overall volume $\vo$ \cite{Becker:2002nn, Minasian:2015bxa, Bonetti:2016dqh}, $K_{g_s}$ contains both $\mathcal{O}(g_s^2 \,\alpha'^2)$ and $\mathcal{O}(g_s^2 \,\alpha'^4)$ string loop effects which depend on all $T$-moduli \cite{Berg:2005ja, Berg:2007wt, Cicoli:2007xp}, while $W_0$ is the flux superpotential which can be considered as constant after complex structure and dilaton stabilisation. According to the general LVS moduli stabilisation procedure, $N_{\rm small}$ blow-up modes plus the overall volume mode get stabilised at leading order giving rise to an AdS vacuum by the interplay of non-perturbative effects in $W$ and $\mathcal{O}(\alpha'^3)$ corrections to $K$ \cite{Cicoli:2008va}. This leaves $N_{\rm flat} = h^{1,1} - N_{\rm small} - 1$ flat directions which can naturally drive inflation and develop a potential at subleading order by either $\mathcal{O}(g_s^2 \,\alpha'^4)$ string loop corrections \cite{Berg:2005ja, Berg:2007wt, Cicoli:2007xp} or higher-derivative $F^4\,\mathcal{O}(\alpha'^3)$ effects \cite{Cicoli:2016chb}. Note that $\mathcal{O}(g_s^2 \,\alpha'^2)$ corrections to $K$ contribute effectively to the scalar potential as $\mathcal{O}(g_s^4 \,\alpha'^4)$ effects since their leading order contribution cancels off because of supersymmetry \cite{Cicoli:2007xp}. 
This crucial cancellation for inflationary model-building has been named `extended no-scale structure' and can be traced back to the presence of an approximate non-compact shift symmetry \cite{Burgess:2014tja}. A particularly simple situation arises when $N_{\rm small}=h^{1,1}-2$ since it leads to just $N_{\rm flat} = h^{1,1} - N_{\rm small} - 1 = 1$ flat direction. In this case, the general expression for the CY volume (\ref{WSC}) reduces to: \begin{equation} \vo = \lambda \sqrt{\tau_1}\,\tau_2 - \sum_{i = 1}^{h^{1,1} - 2} \lambda_i \tau_i^{3/2}\,, \label{eq:VolumeNflat1} \end{equation} where $\tau_1$ is the volume of a ${\mathbb T}^4$ or a K3 fibre over a ${\mathbb P}^1$ base whose volume is given by $t_1=\lambda\tau_2/\sqrt{\tau_1}$ \cite{Math}. Trading the large modulus $\tau_2$ for $\vo \simeq \lambda \sqrt{\tau_1}\, \tau_2$ and working order by order in a large volume expansion, the dominant contribution to the scalar potential at $\mathcal{O}\left(\vo^{-3}\right)$ can be schematically written as: \begin{equation} V_{\mathcal{O}\left(\vo^{-3}\right)} = V_{\scriptscriptstyle \rm LVS}(\vo, \tau_i) + V_{\scriptscriptstyle \rm dS}(\vo)\,, \quad \quad i = 1, \dots, N_{\rm small}\,. \label{eq:PotentialOrder3} \end{equation} In the last expression, $V_{\scriptscriptstyle \rm LVS}(\vo, \tau_i)$ is generated by non-perturbative and $\mathcal{O}(\alpha'^3)$ effects and gives rise to standard LVS vacua which clearly leave $\tau_1$ unfixed at this level of approximation. The order of magnitude of the LVS potential is \cite{Cicoli:2008va}: \begin{equation} V_{\scriptscriptstyle \rm LVS} (\vo,\tau_i) \simeq \left(\frac{g_s}{8 \pi}\right) \frac{\xi W_0^2}{g_s^{3/2} \vo^3} \,, \label{eq:VLVS} \end{equation} where $\xi$ is an $\mathcal{O}(1)$ topological quantity. $V_{\scriptscriptstyle \rm dS}$ is instead a model-dependent term which contributes to the vacuum energy and can give rise to a dS solution by properly tuning flux quanta.
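To get a feel for the scales involved, the order-of-magnitude estimate (\ref{eq:VLVS}) can be evaluated numerically. The input values below are purely illustrative assumptions, not parameters fixed anywhere in the text:

```python
import math

# Illustrative inputs (assumed values, for orientation only):
g_s = 0.1       # string coupling
xi = 1.4        # O(1) topological quantity
W0 = 1.0        # flux superpotential
vol = 1.0e3     # Einstein-frame CY volume in string units

# Order of magnitude of the LVS potential, in Planck units
V_lvs = (g_s / (8 * math.pi)) * xi * W0**2 / (g_s**1.5 * vol**3)
print(f"V_LVS ~ {V_lvs:.1e}")
```

With these sample numbers $V_{\scriptscriptstyle \rm LVS}\sim 10^{-10}$ in Planck units, far below the Planck scale, which is the regime where the large volume expansion is under control.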
Its microscopic origin can involve anti-branes \cite{Kachru:2003aw} (for recent progress see \cite{antiDdS}), non-perturbative effects at singularities \cite{Cicoli:2012fh} or T-branes \cite{Cicoli:2015ylx}. The flat direction parameterised by $\tau_1$ can drive inflation if it is lifted at subleading order by additional perturbative corrections to $K$ which generate a new contribution $V_{\rm inf}(\tau_1) \ll V_{\scriptscriptstyle \rm LVS} (\vo,\tau_i)$.\footnote{Non-perturbative corrections to $K$ are negligible in the region where the EFT is under control \cite{Cicoli:2008va}.} The two main effects which can generate $V_{\rm inf}$ are string loop and higher derivative corrections which we briefly discuss below. \subsubsection{String loop corrections} Despite the fact that open string 1-loop corrections have been computed explicitly only in simple toroidal cases \cite{Berg:2005ja}, their dependence on K\"ahler moduli for a generic CY manifold has been carefully conjectured in \cite{Berg:2007wt}. In Einstein frame, 1-loop corrections to the K\"ahler potential take two different forms \cite{Berg:2007wt}: \begin{eqnarray} \text{Kaluza-Klein loops:} \quad K^{\scriptscriptstyle \rm KK}_{g_s} &=& g_s \sum_i \frac{C_i^{\scriptscriptstyle \rm KK} t_i^{\perp}}{\vo} \,, \label{LoopCorrKkk} \\ \text{Winding loops:} \quad K^{\scriptscriptstyle \rm W}_{g_s} &=& \sum_i \frac{C_i^{\scriptscriptstyle \rm W}}{\vo\, t_i^{\cap}}\,. \label{LoopCorrKw} \end{eqnarray} Kaluza-Klein (KK) corrections can be seen in the closed string channel as arising due to the exchange of KK modes between stacks of non-intersecting D3/D7-branes and/or O3/O7-planes. In (\ref{LoopCorrKkk}) $t_i^\perp = a_{ij} t_j$ are the 2-cycles transverse to the stack of parallel D-branes/O-planes. On the other hand, winding corrections can be seen as due to the exchange between stacks of intersecting D-branes/O-planes of closed strings wound around non-contractible 1-cycles at the intersection locus. 
Accordingly, in (\ref{LoopCorrKw}) $t_i^{\cap} = b_{ij} t_j$ are the 2-cycles where D-branes/O-planes intersect. Moreover, $C_i^{\scriptscriptstyle \rm KK}$ and $C_i^{\scriptscriptstyle \rm W}$ are unknown flux-dependent coefficients which can be treated as constants after complex structure and dilaton stabilisation. It is useful to keep track of the order at which these corrections arise both in the $\alpha'$ and $g_s$ expansion. $K_{g_s}^{\scriptscriptstyle \rm KK}$ is an $\mathcal{O}\left(g_s^2 \alpha'^2\right)$ effect while $K^{\scriptscriptstyle \rm W}_{g_s}$ appears at $\mathcal{O}\left(g_s^2 \alpha'^4\right)$.\footnote{The $\alpha'$ and $g_s$ dependence can be worked out by rewriting the corrections to $K$ in terms of the string frame dimensionful volume ${\rm Vol}_s$ by performing the substitution $\vo \rightarrow {\rm Vol}_s/\left(\alpha'^3 g_s^{3/2}\right)$.} However, the leading KK contribution to the scalar potential vanishes due to the extended no-scale structure, and so the first KK loop correction arises only at $\mathcal{O}\left(g_s^4 \alpha'^4\right)$ and looks like \cite{Cicoli:2007xp}: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm KK} = \left(\frac{g_s}{8 \pi}\right) g_s^2 \frac{W_0^2}{\vo^2} \sum_{ij} C_i^{\scriptscriptstyle \rm KK} C_j^{\scriptscriptstyle \rm KK} K_{ij} \,, \label{VgsKK} \end{equation} where $K_{ij}$ is the tree-level K\"ahler metric. Being an $\mathcal{O}\left(g_s^4 \alpha'^4\right)$ effect, (\ref{VgsKK}) behaves effectively as a 2-loop KK effect. On the other hand, the leading winding contribution to the scalar potential is non-vanishing and reads \cite{Cicoli:2007xp}: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm W} = -2 \left(\frac{g_s}{8 \pi}\right) \frac{W_0^2}{\vo^2} \, K_{g_s}^{\scriptscriptstyle \rm W} \,. 
\label{VgsW} \end{equation} \subsubsection{Higher derivative effects} The 10D type IIB action for bulk fields receives $\alpha'$ corrections which start contributing at $\mathcal{O}(\alpha'^3)$ and are encoded in several eight-derivative operators: \begin{equation} S_{\rm IIB}^{10D} = S_0 + \alpha'^3 \, S_3 + \dots\,, \end{equation} where the dots indicate the presence of subleading corrections for bulk fields, as well as additional terms related to local sources. $S_3$ denotes a set of eight-derivative operators which can be schematically written as: \begin{equation} S_3\sim \frac{1}{\alpha'^4} \int d^{10}x \, \sqrt{-g} \left[\mathcal{R}^4 + \mathcal{R}^3 \left(G_3^2 + ..\right) + \mathcal{R}^2 \left(G_3^4 + ..\right) + \mathcal{R} \left(G_3^6 + ..\right) + \left(G_3^8 + ..\right) \right], \label{8DerivOper} \end{equation} where $G_3$ is the type IIB 3-form flux, while the dots in each bracket stand for all possible combinations of fluxes giving rise to an operator with the proper number of derivatives. The second term in (\ref{8DerivOper}) gives rise to the following contribution in the scalar potential \cite{Becker:2002nn}:\footnote{The contribution (\ref{Valpha'}) has actually been derived by first performing a dimensional reduction of the first term in (\ref{8DerivOper}) and then by using supersymmetry arguments.} \begin{equation} V_{\alpha'} = \left(\frac{g_s}{8 \pi}\right) \frac{3 \xi W_0^2}{4 g_s^{3/2} \vo^3}\,, \label{Valpha'} \end{equation} which gives rise to LVS minima due to its interplay with non-perturbative effects and fixes the scale of $V_{\scriptscriptstyle \rm LVS}(\vo,\tau_i)$. This term is of order $F^2$, as can be easily inferred from the $W_0$ dependence. The parameter $\xi$ is completely determined by the CY Euler number $\chi(X)=2(h^{1,1}-h^{1,2})$ since $\xi=-\frac{\chi(X) \zeta(3)}{2(2\pi)^3}$ \cite{Becker:2002nn}.
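Since $\xi$ is fixed purely by the Hodge numbers, it is straightforward to evaluate. A small sketch with hypothetical Hodge numbers (concrete examples would of course use their own values):

```python
import math

# Apery's constant zeta(3) via direct summation
zeta3 = sum(1.0 / n**3 for n in range(1, 200_000))

def xi_from_hodge(h11, h12):
    """xi = -chi(X) zeta(3) / (2 (2 pi)^3), with chi(X) = 2 (h11 - h12)."""
    chi = 2 * (h11 - h12)
    return -chi * zeta3 / (2 * (2 * math.pi) ** 3)

# Hypothetical example: h^{1,1} = 3, h^{1,2} = 103, i.e. chi = -200.
# A positive xi (as needed for LVS) requires chi < 0, i.e. h^{1,2} > h^{1,1}.
print(round(xi_from_hodge(3, 103), 4))
```

This makes explicit why $\xi$ is generically an $\mathcal{O}(1)$ number: for typical Euler numbers $|\chi|\sim\mathcal{O}(100)$ the suppression by $2(2\pi)^3\approx 496$ brings it back to order one.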
Note that genuinely $N=1$ $\mathcal{O}(\alpha'^3)$ corrections give rise to an effective Euler number by shifting $\chi(X) \rightarrow \chi_{\rm eff} = \chi(X) + 2 \int_X D_{\rm O7}^3$, where $D_{\rm O7}$ is the two-form dual to the divisor wrapped by the O7-plane \cite{Minasian:2015bxa}. In \cite{Ciupke:2015msa}, the authors were able to infer $F^4$ contributions to the scalar potential which arise from the third term in (\ref{8DerivOper}) and for a general CY take the simple form:\footnote{See also \cite{Weissenbacher:2016gey} for four-derivative terms in the absence of background fluxes. These effects should give rise to small corrections to the moduli canonical normalisation, and so we shall neglect them.} \begin{equation} V_{F^4} = - \left(\frac{g_s}{8 \pi}\right)^2 \frac{\lambda\,W_0^4}{g_s^{3/2} \vo^4} \sum_{i=1}^{h^{1,1}}\Pi_i t_i \,, \label{VF4} \end{equation} where $t_i$ are the 2-cycles of the generic CY manifold $X$, while $\Pi_i$ are topological numbers defined as: \begin{equation} \Pi_i = \int_X c_2 \wedge \hat{D}_i \,. \label{Pii} \end{equation} Here $c_2$ is the CY second Chern class, $\hat{D}_i$ is a basis of harmonic 2-forms such that the K\"ahler form can be expanded as $J = t_i \hat{D}_i$ and $\lambda$ is an unknown combinatorial factor which is expected to be between $10^{-2}$ and $10^{-3}$ \cite{Ciupke:2015msa}. Note that $\Pi_i \,t_i \geq 0$ in a basis of the K\"ahler cone where $t_i \geq 0$ $\forall i=1,..,h^{1,1}(X)$, implying that all $\Pi_i$ can also be taken as semi-positive definite. \subsubsection{Inflationary potentials} \label{InflationaryModels} Different combinations of perturbative corrections to $K$ can give rise to a different inflationary potential $V_{\rm inf}(\tau_1)$. 
In fact, depending on compactification details like intersection numbers, divisor topology, brane setup and choice of gauge fluxes and other microscopic parameters, some of the corrections described above can be absent or irrelevant for the stabilisation of $\tau_1$ and the inflationary dynamics. Let us now briefly describe the main features of the different inflationary models proposed so far within this framework. \subsubsection*{Fibre inflation: KK and winding loops} The first attempt to realise inflation in fibred CY manifolds with additional shrinkable divisors is `fibre inflation' \cite{Cicoli:2008gp}. In this model the remaining flat direction $\tau_1$ is lifted by the inclusion of both KK and winding loop corrections to $K$. On the other hand, the effect of higher derivative $F^4$ terms is neglected. The scalar potential for the canonically normalised inflaton $\phi$ takes the form:\footnote{Here and in the following $\phi$ represents the displacement of the field from the minimum of the potential.} \begin{equation} V_{\scriptscriptstyle \rm FI} \simeq \frac{W_0^2}{\langle \vo \rangle^{10/3}} \left[(3 - R) - 4 \left(1 + \frac{R}{6}\right) e^{- k \phi/2} + \left(1 + \frac{2}{3} R\right) e^{- 2 k \phi} + R \,e^{k \phi} \right]\,, \label{VFI} \end{equation} where $k = 2/\sqrt{3}$ and $R$ is a numerical coefficient which is naturally small since $R\propto g_s^4\ll 1$. The minimum of the potential at $\phi=0$ is generated by the competition between the two negative exponentials in (\ref{VFI}) while the term proportional to $e^{- k \phi/2}$ yields an inflationary plateau which can support slow-roll inflation. For large values of $\phi$ the positive exponential in (\ref{VFI}) causes a steepening of the potential which violates the slow-roll conditions. However, due to the smallness of $R$ in the regime with $g_s\ll 1$ where perturbation theory is under control, the inflationary plateau can naturally produce enough e-foldings of inflation.
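One can check numerically that the bracket in (\ref{VFI}) indeed vanishes at $\phi=0$ and that this point is a minimum, for any small $R$. A quick sketch (the value of $R$ is an arbitrary illustrative choice):

```python
import math

k = 2.0 / math.sqrt(3.0)
R = 1e-3  # illustrative value; R ~ g_s^4 << 1

def V(phi):
    """Bracket of the fibre inflation potential, up to W0^2/<V>^(10/3)."""
    return ((3.0 - R)
            - 4.0 * (1.0 + R / 6.0) * math.exp(-k * phi / 2.0)
            + (1.0 + 2.0 * R / 3.0) * math.exp(-2.0 * k * phi)
            + R * math.exp(k * phi))

eps = 1e-5
print(V(0.0))                                             # vanishes at phi = 0
print((V(eps) - V(-eps)) / (2.0 * eps))                   # first derivative ~ 0
print((V(eps) - 2.0 * V(0.0) + V(-eps)) / eps**2 > 0.0)   # and it is a minimum
```

Indeed, the coefficients in (\ref{VFI}) are arranged so that both $V(0)=0$ and $V'(0)=0$ hold identically in $R$, which is why $\phi=0$ sits at the bottom of the plateau for any value of the loop coefficients.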
The largest tensor-to-scalar ratio that can be obtained with a spectral index compatible with observations is $r \simeq 0.006$ \cite{Cicoli:2008gp}. Note that horizon exit can only occur in the plateau since, if it happened close to the steepening region, the spectral index would become too blue. \subsubsection*{Left-right inflation: KK loops and higher derivatives} Ref. \cite{Broy:2015zba} considered the same CY geometry and included $F^4$ higher derivative corrections but neglected winding loops. The resulting inflationary potential contains four terms: two positive KK corrections and two $F^4$ terms whose sign is undetermined. Depending on the sign of these $\alpha'$ effects the potential features an inflationary plateau which can support slow-roll inflation either from left to right or from right to left. Note however that the flatness of the potential tends to be spoiled by dangerous terms which cause a rapid steepening unless one of the two topological quantities controlling $F^4$ terms (see their definition in (\ref{Pii})) is hierarchically smaller than the other by a factor of at least order $10^{-4}$ \cite{Broy:2015zba}. The typical prediction of these generalised fibre inflation models is a relation between the tensor-to-scalar ratio $r$ and the spectral index $n_s$ of the form $r=2 f^2 \left(n_s-1\right)^2$ where $f$ is an effective decay constant controlling the strength of the inflationary plateau generated by the term $e^{-\phi/f}$ \cite{Burgess:2016owb}. Note that in the original fibre inflation model $f=2/k$ \cite{Cicoli:2008gp}. The two inflationary potentials proposed in \cite{Broy:2015zba} look like: \begin{equation} V_{\scriptscriptstyle \rm R} \simeq V_0 \left(1 - e^{k \phi/2}\right)^2 \qquad \text{and}\qquad V_{\scriptscriptstyle \rm L} \simeq V_0 \left(1 - e^{- k \phi}\right)^2\,, \label{VLR} \end{equation} where: \begin{equation} V_0 \simeq \left(\frac{\lambda}{g_s}\right)^2 \frac{W_0^6}{\langle\vo\rangle^4} \,.
\end{equation} In both cases the minimum is due to the interplay between $F^4$ and string loop corrections, while the inflationary plateau is generated by higher derivative effects which take different forms in the two potentials in (\ref{VLR}). In terms of inflationary observables, inflation to the right reproduces the same predictions of fibre inflation since $f=2/k$. On the other hand, inflation to the left has $f=1/k$, and so, from the typical $r$-$n_s$ relation of generalised fibre inflation models, it predicts a tensor-to-scalar ratio smaller by a factor of $4$. Hence for $50$-$60$ e-foldings and a spectral index compatible with observations, the final predictions for tensor modes are: $r_{\scriptscriptstyle \rm R} \simeq 0.006$ and $r_{\scriptscriptstyle \rm L} \simeq r_{\scriptscriptstyle \rm R}/4 \simeq 0.0015$ \cite{Broy:2015zba}. \subsubsection*{$\alpha'$-inflation: winding loops and higher derivatives} Another interesting attempt to realise string inflation in fibred CY manifolds with small blow-up modes is `$\alpha'$ inflation' \cite{Cicoli:2016chb}. In this model the inflaton $\tau_1$ develops a potential via higher derivative $F^4$ effects and 1-loop winding corrections. KK loop corrections are neglected since they effectively contribute to the scalar potential only at 2-loop order due to the extended no-scale cancellation. The inflationary scalar potential takes the form: \begin{equation} V_{\alpha'{\scriptscriptstyle \rm I}} \simeq \frac{W_0^2}{\langle \vo \rangle^3 \langle\tau_1\rangle} \left[\left(1 - e^{- k \phi/2}\right)^2 -R \left(1- e^{k \phi/2} \right)\right]\,, \label{Vap} \end{equation} where $\langle\tau_1\rangle$ is the value of the inflaton at the minimum and the coefficient $R$ is given by: \begin{equation} R \simeq \frac{\Pi_2}{\Pi_1} \frac{\langle\tau_1\rangle^{3/2}}{\langle \vo \rangle}\,.
\end{equation} If the underlying parameters are chosen to yield an anisotropic compactification at the minimum with $\langle \tau_1\rangle^{3/2}\ll \langle\vo\rangle$, i.e. $\langle \tau_1\rangle\ll \langle\tau_2\rangle$, and the topological quantity $\Pi_2$ is hierarchically smaller than $\Pi_1$, i.e. $\Pi_2\ll \Pi_1$, the coefficient $R$ becomes very small. In the limit $R\to 0$, the potential (\ref{Vap}) represents another example of generalised fibre inflation models with $f=2/k$ where the plateau is generated by winding loops. Hence it reproduces the same predictions of both fibre and left-right inflation, i.e. $r\simeq 0.006$. However, in $\alpha'$ inflation the coefficient of the positive exponential is smaller than in fibre inflation, and so horizon exit can take place also close to the steepening region without obtaining a blue spectral index. In this case the prediction for the tensor-to-scalar ratio can rise to $r \simeq 0.01$ with $n_s \simeq 0.97$ \cite{Cicoli:2016chb}. \subsection{Global constructions} After reviewing the main features of fibre inflation scenarios, we are now ready to describe our strategy to build global inflationary models. Let us start by outlining the general topological and model-building requirements. \subsubsection{General requirements} In order to realise a successful embedding of fibre inflation models in globally consistent CY orientifolds, we shall take the following steps: \begin{enumerate} \item Search through the Kreuzer-Skarke (KS) list of toric CY 3-folds \cite{Kreuzer:2000xy} to find those with a fibration structure and at least a shrinkable rigid divisor for LVS moduli stabilisation, i.e. $N_{\rm small}\geq 1$. Requiring in addition at least one flat direction $N_{\rm flat}=h^{1,1}-N_{\rm small}-1\geq 1$, we end up with CY manifolds with Hodge number $h^{1,1}\geq 2+N_{\rm small}\geq 3\,$.
\item Choose an orientifold involution and a D3/D7 brane setup which satisfy both D3- and D7-tadpole cancellation conditions and,\footnote{In the concrete examples of Sec. \ref{ExplEx} we shall not explicitly turn on 3-form background fluxes, and so we will be able to check only the D7-tadpole cancellation condition. However we shall ensure that D3-tadpole cancellation leaves enough space to turn on background fluxes \cite{Cicoli:2011qg}.} at the same time, have enough structure to generate appropriate string loop and higher derivative $\alpha'$ contributions to the scalar potential which can successfully drive inflation. \item Turn on gauge fluxes on D7-branes to generate a chiral visible sector. \item Find a dS vacuum after fixing explicitly all K\"ahler moduli inside the K\"ahler cone. \end{enumerate} If successful, this strategy would lead to the first globally consistent CY orientifold examples with a chiral visible sector and a viable inflationary mechanism together with dS moduli stabilisation. We shall show that CY cases with $h^{1,1}=3$ are not rich enough to satisfy points $(3)$ and $(4)$ above, and so can just lead to a global embedding of fibre inflation models without a chiral visible sector and an explicit dS uplifting mechanism. In order to be able to construct a model where all the points above can potentially be satisfied, one should instead focus on CY cases with at least $h^{1,1}=4$. \subsubsection{Weak Swiss-cheese CYs} The simplest scenarios have $N_{\rm flat}=1$ flat direction and $N_{\rm small}=h^{1,1}-2=1$ small blow-up mode which implies $h^{1,1}=3$. These CY examples allow for the realisation of globally consistent inflationary models but are too simple to include a chiral visible sector and an explicit sector responsible for achieving a dS vacuum. In fact, chirality can arise only in the presence of non-vanishing D7 worldvolume fluxes. 
However these gauge fluxes generate also moduli-dependent Fayet-Iliopoulos terms which together with soft term contributions from matter fields can lift at leading order all K\"ahler moduli charged under anomalous $U(1)$s \cite{Cicoli:2012fh}. This stabilisation method can have the net effect of generating a dS uplifting contribution \cite{CYembedding} corresponding to a T-brane background \cite{Cicoli:2015ylx} but reduces the number of flat directions which can be used to drive inflation. Thus in the $h^{1,1}=3$ case, the only flat direction would become too heavy, and so would no longer represent a natural inflaton candidate. On the other hand, for $N_{\rm small}=h^{1,1}-3$, there are $N_{\rm flat} = h^{1,1}-N_{\rm small}-1=2$ flat directions. Requiring at least $N_{\rm small}=1$ shrinkable divisor, this necessarily leads to CY cases with $h^{1,1}=4$. This situation can now potentially allow for the realisation of inflationary models together with a chiral visible sector and an explicit dS mechanism. In fact, in the presence of chirality, one of the flat directions would be lifted in the process of dS uplifting via D-term driven non-vanishing matter F-terms \cite{CYembedding, Cicoli:2015ylx}, while the other might play the r\^ole of the inflaton. In the $h^{1,1}=3$ case, dS vacua could instead be realised via anti-branes \cite{Kachru:2003aw}. Let us now describe separately these two different situations with $h^{1,1}=3$ and $h^{1,1}=4$ (and $N_{\rm small}=1$) which are both relevant for LVS inflationary constructions. \subsubsection*{$h^{1,1}=3$ case} In the $h^{1,1}=3$ case, we shall focus on CY 3-folds $X$ with intersection polynomial of the form: \begin{equation} I_3 = a\, D_f\, D_b^2+\, b\, D_s^3\,, \label{IntPol} \end{equation} where $a$ and $b$ are model-dependent integers, while $f$, $b$ and $s$ stand respectively for `fibre', `base' and `small'.
This terminology is justified by a theorem by Oguiso which states that if the intersection polynomial is linear in a particular divisor $D_f$, then $D_f$ is either a K3 or a ${\mathbb T}^4$ fibre over a ${\mathbb P}^1$ base \cite{Math}. Moreover (\ref{IntPol}) includes also a shrinkable del Pezzo (dP) 4-cycle $D_s$ suitable to support non-perturbative effects which fix it at small size compared to the overall volume \cite{Cicoli:2008va}. Expanding the K\"ahler form $J$ in the divisor basis $\{D_b, \, D_f, \, D_s\}$ as $J = t_f\, D_b+t_b \, D_f+ t_s\, D_s$, the overall volume can be written as: \begin{equation} \vo = \frac{1}{3!}\int_X J \wedge J \wedge J = \frac{a}{2}\, t_f^2\, t_b +\, \frac{b}{6}\, t_s^3 \,. \end{equation} Considering the 4-cycle volume moduli given by: \begin{eqnarray} \tau_b = \frac{\partial \vo}{\partial t_f} = a\, t_b \, t_f\,, \qquad \tau_f = \frac{\partial \vo}{\partial t_b}=\frac{a}{2}\, t_f^2\,, \qquad \tau_s = \frac{\partial \vo}{\partial t_s}=\frac{b}{2}\,t_s^2\,, \end{eqnarray} the CY volume can be rewritten as: \begin{equation} \vo = c_a\,\tau_b\, \, \sqrt{\tau_f} - c_b\,\tau_s^{3/2}\qquad\text{where}\quad c_a= \frac{1}{\sqrt{2\,a}}>0\quad\text{and}\quad c_b = \frac13 \sqrt{\frac{2}{b}}>0\,. \end{equation} The positivity of $\vo$ when $\tau_s\to 0$ forces the integer $a$ to be positive. Moreover, as we shall see below, the fact that $D_s$ is a dP surface ensures $b>0$ while its shrinkability implies the K\"ahler cone condition $t_s <0\,$. \subsubsection*{$h^{1,1}=4$ case} The intersection polynomial for fibred CY 3-folds $X$ with $h^{1,1} = 4$ and a shrinkable dP surface looks like \cite{Cicoli:2011it}: \begin{equation} I_3 = a\, D_f\, D_{b_1} \, D_{b_2}+\, b\, D_s^3\,. 
\end{equation} Expanding the K\"ahler form $J$ in the divisor basis $\{D_{b_1}, \,D_{b_2},\, D_f, \, D_s\}$ as $J = t_{f_1}\, D_{b_1}+ t_{f_2}\, D_{b_2}+t_b \, D_f+ t_s\, D_s$, the overall volume becomes: \begin{equation} \vo = \frac{1}{3!}\int_X J \wedge J \wedge J = a\, t_{f_1}\, t_{f_2}\, t_b +\, \frac{b}{6}\, t_s^3 \,. \end{equation} The 4-cycle volume moduli read: \begin{equation} \tau_{b_1} = a\, t_{f_2} \, t_b\,, \qquad \tau_{b_2} = a\, t_{f_1} \, t_b\,, \qquad \tau_f = a\, t_{f_1} \, t_{f_2}\,, \qquad \tau_s = \frac{b}{2}\, t_s^2\,, \end{equation} and so the CY volume can be rewritten as: \begin{equation} \vo = c_a\,\sqrt{\tau_{b_1}\, \tau_{b_2}\, \tau_f} - c_b\, \tau_s^{3/2}\qquad\text{where}\quad c_a= \frac{1}{\sqrt{a}}>0\quad\text{and}\quad c_b = \frac13\sqrt{\frac{2}{b}}>0\,. \label{VoLu} \end{equation} The relevance of this type of CY volume for cosmological LVS applications has been recently pointed out in \cite{Burgess:2016owb} since (\ref{VoLu}) is analogous to the volume of the simple toroidal example ${\mathbb T}^6/({\mathbb Z}_2 \times {\mathbb Z}_2)$ ($\vo = \sqrt{\tau_1\, \tau_2\, \tau_3}$) with the only difference being the addition of a blow-up mode. Note that a K3-fibred CY 3-fold with $h^{1,1}=3$ and volume given by (\ref{VoLu}) with $c_b=0$ has been presented in \cite{Gao:2013pra}. \subsubsection{Divisor topologies} As we have seen above, `weak Swiss-cheese' CY 3-folds suitable for LVS cosmological applications are K3 or ${\mathbb T}^4$ fibrations over a ${\mathbb P}^1$ base with additional shrinkable dP divisors. Before presenting the results of a scan over the KS list for this kind of CY spaces, let us describe in more depth the topological features of dP, K3 and ${\mathbb T}^4$ surfaces. \subsubsection*{Del Pezzo divisors} Del Pezzo divisors are Fano surfaces, i.e. algebraic surfaces with ample anticanonical bundle.
These are either dP$_n$ divisors, obtained by blowing up ${\mathbb P}^2$ at $n$ generic points with $0 \leq n \leq 8$, or ${\mathbb P}^1 \times {\mathbb P}^1$. Here we shall just consider dP$_n$ surfaces which have the following Hodge diamond: \begin{eqnarray} {\rm dP}_n \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 0 & & $n+1$ & & 0 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular}\qquad \qquad \qquad \forall \, n =0,1,..,8\,. \nonumber \end{eqnarray} These surfaces are rigid as $h^{2,0}({\rm dP}_n) = 0$ (in particular dP$_0={\mathbb P}^2$), and do not contain non-contractible 1-cycles since $h^{1,0}({\rm dP}_n) =0$. Del Pezzo divisors can be of two types: diagonal and non-diagonal \cite{Cicoli:2011it}. Note that a diagonal dP divisor $D_s$ is shrinkable since one can always find a divisor basis where the only term involving $D_s$ in the intersection polynomial is $D_s^3$. Divisors with the same Hodge diamond as above but with $n>8$ are still rigid but not del Pezzo. We shall denote them as `NdP$_n$' with $n>8$. These surfaces do not correspond to genuine local effects, and so can intuitively be thought of as blow-ups of line-like singularities. Thus NdP$_n$ divisors are generically non-diagonal \cite{Cicoli:2011it}. A necessary condition for $D_s$ to be dP is that its triple self-intersection is positive while intersections of $(D_i \, D_s^2)$-type with $i\neq s$ are either negative or zero: \begin{equation} \int_{D_i \cap D_s} \, c_1(D_s) = \int_X D_i \wedge D_s \wedge \left(-c_1({\cal N}_{D_s | X})\right) = - \int_X D_i \wedge D_s \wedge D_s \geq 0 \qquad \forall \, i \neq s\,. \end{equation} Here we have used the fact that the first Chern class of a divisor $c_1(D)$ can be written in terms of the first Chern class of its normal bundle $c_1({\cal N}_{D | X})$ as $c_1(D) = - c_1({\cal N}_{D | X}) = - [D]$, where $[D]$ is the homology class of $D$.
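In practice, this necessary condition is a one-line check on the triple intersection numbers. The following sketch is purely illustrative: the function name and the sample intersection data are ours and are not taken from any specific CY.

```python
# Illustrative sketch of the necessary del Pezzo condition: D_s^3 > 0 on X
# and D_i . D_s^2 <= 0 for every i != s. The sample numbers are toy values.

def passes_dP_condition(self_triple, mixed_triples):
    """self_triple = D_s^3; mixed_triples = [D_i . D_s^2 for i != s]."""
    return self_triple > 0 and all(x <= 0 for x in mixed_triples)

# A shrinkable dP-like candidate: positive self-triple, non-positive mixed.
print(passes_dP_condition(1, [-1, -1, -2, 0, 0, -3]))   # True
# A rigid divisor violating the condition: negative self-triple.
print(passes_dP_condition(-6, [4, -2, -2, 0, 2, -4]))   # False
```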
Two important quantities characterising the topology of a divisor $D$ are the Euler characteristic $\chi(D)$ and the holomorphic Euler characteristic $\chi_h(D)$ \cite{Blumenhagen:2008zz}: \begin{eqnarray} \chi(D) &\equiv& \sum_{i=0}^4 {(-1)}^i \, b_i(D) = \int_X \, D \wedge\left( D \wedge D + c_2(X) \right)\,, \label{chi} \\ \chi_h({D}) &\equiv& \sum_{i=0}^2 {(-1)}^i \, h^{i,0}(D) = \frac{1}{12} \int_X \, D \wedge \left(2 \, D\wedge D + c_2(X) \right)\,, \label{chih} \end{eqnarray} where $b_i(D)$ and $h^{i,0}(D)$ are respectively the Betti and Hodge numbers on the divisor. These two relations also imply: \begin{equation} \chi_h(D) = \frac{1}{12}\left(\chi(D) + \int_X D \wedge D \wedge D\right) \,. \label{eq:chisAndDDD} \end{equation} For connected dP divisors, $\chi_h({\rm dP}_n) = 1$ and $\chi({\rm dP}_n) = n+3$ which from (\ref{eq:chisAndDDD}) give: \begin{equation} \int_X \, D \wedge D \wedge D = 9-n\,. \label{chisAndDDDdPn} \end{equation} According to (\ref{chisAndDDDdPn}), $D^3_{|_X}>0$ for a dP$_n$ with $n\leq 8$ (for example $D^3_{|_X} = 9$ implies that $D$ is a dP$_0$), while $D^3_{|_X}\leq 0$ indicates that a rigid divisor is a non-shrinkable NdP$_n$ with $n>8$. \subsubsection*{K3 and ${\mathbb T}^4$ surfaces} K3 and ${\mathbb T}^4$ surfaces are the only two classes of CY 2-folds. Their Hodge diamonds are: \begin{eqnarray} {\rm K3} \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 1 & & 20 & & 1 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular} \qquad \qquad\text{and}\qquad\qquad {\mathbb T}^4 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 2 & & 2 & \\ 1 & & 4 & & 1 \\ & 2 & & 2 & \\ & & 1 & & \\ \end{tabular}\nonumber \end{eqnarray} As we have seen above, if the CY intersection polynomial is linear in $D_f$, then the CY has the structure of a $D_f$ fibre over a ${\mathbb P}^1$ base \cite{Math}. The condition ${D_f^3}_{|_X}= {D_f^2\,D_i}_{|_X}=0$ $\forall i\neq f$ forces $D_f$ to be either a K3 or a ${\mathbb T}^4$ divisor. 
In fact, for $D_f={\rm K3}$ we have $\chi(D_f) = 12 \,\chi_h(D_f) = 24$, while for $D_f={\mathbb T}^4$ we have $\chi(D_f)=\chi_h(D_f)=0$. Thus in both cases (\ref{eq:chisAndDDD}) implies: \begin{equation} \int_X D_f^3 = \chi_h(D_f) - \frac{\chi(D_f)}{12} = 0\,. \label{eq:fibre1} \end{equation} More generally, the condition (\ref{eq:fibre1}) is equivalent to requiring: \begin{equation} \quad h^{1,1}(D_f) = 10\, h^{0, 0}(D_f) - 8\, h^{1,0}(D_f) +10\, h^{2,0}(D_f)\,, \end{equation} which is satisfied by K3, ${\mathbb T}^4$ and other topologies. However, the additional condition ${D_f^2\,D_i}_{|_X}=0$ $\forall i\neq f$ reduces the possibilities to just K3 or ${\mathbb T}^4$ \cite{Math}. In this paper we shall present several K3-fibred CY examples, whereas ${\mathbb T}^4$ fibrations are mostly realised in toroidal setups. \subsubsection{Scanning results} Following the topological requirements described above, we considered all the $244$ reflexive lattice polytopes of the KS list \cite{Kreuzer:2000xy} which, after considering the maximal triangulations, result in $340$ different CY spaces with $h^{1,1} =3$, and performed a detailed scan to look for `weak Swiss-cheese' CY 3-folds suitable for realising the minimal setup of fibre inflation models. We found $102$ CY 3-folds which are K3-fibred and admit at least one dP$_n$ divisor (with $0\leq n\leq 8$). Imposing the further condition that this dP$_n$ divisor should be shrinkable reduces this number to $45$. We did not perform a proper scan for `weak Swiss-cheese' CY 3-folds with $h^{1,1}=4$ which have the potential to include a chiral visible sector and an explicit dS uplifting mechanism as well. We leave this search for future work. \subsubsection{Orientifold involution and brane setup} In the explicit examples which we will present in Sec.
\ref{ExplEx}, we shall always focus on simple orientifold involutions of the form $\sigma: x_i \to -x_i$ which give rise to an O7-plane wrapped around the `coordinate divisor' $D_i$ defined by $x_i = 0$, plus possible additional O3-planes. We shall then choose an appropriate D7-brane setup which cancels D7-tadpoles and, at the same time, generates string loop corrections to the K\"ahler potential suitable to support fibre inflation models. In order to cancel all D7-charges, we shall introduce $N_a$ D7-branes wrapped around suitable divisors (say $D_a$) and their orientifold images ($D_a^\prime$) such that \cite{Blumenhagen:2008zz}: \begin{equation} \sum_a\, N_a \left([D_a] + [D_a^\prime] \right) = 8\, [{\rm O7}]\,. \label{eq:D7tadpole} \end{equation} D7-branes and O7-planes also give rise to D3-tadpoles, which receive further contributions from background 3-form fluxes $H_3$ and $F_3$, D3-branes and O3-planes. The D3-tadpole cancellation condition reads \cite{Blumenhagen:2008zz}: \begin{equation} N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = \frac{N_{\rm O3}}{4} + \frac{\chi({\rm O7})}{12} + \sum_a\, \frac{N_a \left(\chi(D_a) + \chi(D_a^\prime) \right) }{48}\,, \label{eq:D3tadpole} \end{equation} where $N_{\rm flux} = (2\pi)^{-4} \, (\alpha^\prime)^{-2}\int_X H_3 \wedge F_3$ is the contribution from background fluxes and $N_{\rm gauge} = -\sum_a (8 \pi)^{-2} \int_{D_a}\, {\rm tr}\, {\cal F}_a^2$ is due to D7 worldvolume fluxes. For the simple case where D7-tadpoles are cancelled by placing 4 D7-branes (plus their images) on top of an O7-plane, (\ref{eq:D3tadpole}) reduces to: \begin{equation} N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} =\frac{N_{\rm O3}}{4} + \frac{\chi({\rm O7})}{4}\,. \label{eq:D3tadpole1} \end{equation} As a consistency check for a given orientifold involution, one has to ensure that the right-hand-side of (\ref{eq:D3tadpole1}) is an integer. As explained above, in the explicit examples of Sec.
\ref{ExplEx} with $h^{1,1}=3$ we shall not turn on gauge fluxes on D7-branes in order to preserve the flatness of the inflaton direction. Thus we shall always have $N_{\rm gauge}=0\,$. Moreover, we shall not explicitly turn on $H_3$ and $F_3$ fluxes but we will always consider orientifold involutions such that the right-hand-side of (\ref{eq:D3tadpole}) is a positive and large integer. This ensures that D3-tadpole cancellation leaves enough freedom to turn on background fluxes for dilaton and complex structure stabilisation. \subsubsection{Computation of string loop effects} \label{ComputeLoops} Given a particular choice of orientifold involution and brane setup, the location of D-branes and O-planes determines the K\"ahler moduli dependence of open string 1-loop corrections to the scalar potential. In particular, parallel stacks of D-branes/O-planes induce KK corrections, while winding loop effects arise only in the presence of D-branes/O-planes which intersect over a 2-cycle containing non-contractible 1-cycles \cite{Berg:2007wt}. In order to understand which involution and brane setup yields a form of these corrections suitable to drive inflation, we shall first compute the intersection curve $D_i\cap D_j$ between each couple of coordinate divisors $D_i$ and $D_j$ and check if this curve can contain non-contractible 1-cycles, i.e. $h^{1,0}(D_i\cap D_j)\neq 0$. If $D_i \cap D_j = \emptyset$ and both divisors are wrapped by D7-branes/O7-planes, string loop corrections are of KK-type and depend on the transverse direction ($t^\perp$) between the two non-intersecting objects. Even if an explicit determination of such a direction requires a detailed knowledge of the CY metric, we are just interested in its dependence on the K\"ahler moduli which for the particular examples of Sec. \ref{ExplEx} can be easily inferred from very general considerations. 
On the other hand, if $D_i \cap D_j \neq \emptyset$, the volume of the intersection 2-cycle is given by: \begin{equation} t^\cap = \int_X J \wedge D_i \wedge D_j\,. \end{equation} If both $D_i$ and $D_j$ are wrapped by D7-branes and/or O7-planes, the scalar potential will receive $t^\cap$-dependent winding loop corrections only if $h^{1,0}(D_i\cap D_j)\neq 0$. Finally notice that in the presence of O3-planes, KK corrections always arise due to the exchange of closed KK modes between D7-branes/O7-planes and O3-planes. \section{Explicit global examples} \label{ExplEx} In this section, we shall present all the topological and model-building details of global fibre inflation models in explicit CY orientifolds with $h^{1,1}=3\,$. \subsection{Toric data} Let us consider the CY 3-fold $X$ defined by the following toric data: \begin{table}[H] \centering \begin{tabular}{|c|ccccccc|} \hline & $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $x_7$ \\ \hline 6 & 0 & 0 & 1 & 1 & 1 & 0 & 3 \\ 8 & 0 & 1 & 1 & 1 & 0 & 1 & 4 \\ 8 & 1 & 0 & 1 & 0 & 1 & 1 & 4 \\ \hline & dP$_8$ & NdP$_{10}$ & SD$_1$ & NdP$_{15}$ & NdP$_{13}$ & K3 & SD$_2$ \\ \hline \end{tabular} \end{table} \noindent with Hodge numbers $(h^{2,1}, h^{1,1}) = (99, 3)$ and Euler number $\chi(X)=-192$. The Stanley-Reisner (SR) ideal is: \begin{equation} {\rm SR} = \{x_1 x_5,\, x_1 x_6 x_7,\, x_2 x_3 x_4,\, x_2 x_6 x_7,\, x_3 x_4 x_5 \} \,. \nonumber \end{equation} This corresponds to the polytope ID $\#192$ in the CY database of Ref. \cite{Altman:2014bfa}. The intersection polynomial in the basis of smooth divisors $\{D_1, D_6, D_7\}$ is given by: \begin{equation} I_3={D_1^3+ 9 \,D_7^2\, D_1 -3\, D_7\, D_1^2+ 18\, D_7^2\, D_6+ 81\, D_7^3}\,, \label{I3a} \end{equation} while the second Chern class is: \begin{equation} c_2 (X) = -\frac{14}{3}\, D_3\, D_7 + \frac23\, D_5\, D_7 +\frac83\, D_7^2\,. 
\label{eq:c2a} \end{equation} The coordinate divisors are written in terms of the divisor basis as: \begin{eqnarray} & & \hskip-0cm D_2 = D_6 - D_1, \qquad D_3 = \frac13 \left(D_7 - D_6 \right), \quad \\ & & D_4 = \frac13 \left(D_7 - D_6 -3\, D_1 \right), \qquad D_5 = \frac13 \left(D_7 -4\, D_6 + 3\, D_1 \right). \nonumber \end{eqnarray} \subsubsection{Coordinate divisors} A detailed analysis using \texttt{cohomCalg} \cite{Blumenhagen:2010pv, Blumenhagen:2011xn} shows that $D_1$ is a dP$_8$ surface while $D_6$ is a K3. This can also be explicitly seen from the various intersections listed in Tab. \ref{Tab2}. \begin{table}[H] \centering \begin{tabular}{|c|ccccccc|} \hline & $D_1$ & $D_2$ & $D_3$ & $D_4$ & $D_5$ & $D_6$ & $D_7$ \\ \hline $D_1^2$ & 1 & -1 & -1 & -2 & 0 & 0 & -3 \\ $D_2^2$ & 1 & -1 & -1 & -2 & 0 & 0 & -3 \\ $D_4^2$ & 4 & -2 & -2 & -6 & 0 & 2 & -4 \\ $D_5^2$ & 0 & 2 & -2 & -2 & -4 & 2 & -4 \\ $D_6^2$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Intersections between coordinate divisors.} \label{Tab2} \end{table} \noindent For $D_1$, Tab. \ref{Tab2} shows that $D_1^3= 1 >0$ while $D_1^2\, D_i \leq 0$ $\forall\,i \neq 1$. Thus $D_1$ satisfies the necessary condition to be a dP surface. Moreover, from (\ref{chisAndDDDdPn}) we have that $D_1$ is a dP$_8$ surface with $\chi(D_1)=11$ and $\chi_h(D_1)=1$. On the other hand, the divisor $D_6$ satisfies: \begin{equation} \int_{D_6}\, c_1(D_6) \wedge i^* D_i = -D_6^2\, D_i = 0\,, \qquad \qquad \forall \, i \neq 6\,. \end{equation} In addition, using (\ref{chi}) and (\ref{chih}) we find that $\chi(D_6) =24$ and $\chi_h(D_6) = 2$, signaling that $D_6$ is a K3 surface. Furthermore, $D_2$, $D_4$ and $D_5$ are NdP$_n$ divisors which are rigid but not dP surfaces (as introduced in \cite{Cicoli:2011it}). 
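As a cross-check, the numbers in Tab. \ref{Tab2} follow directly from the intersection polynomial (\ref{I3a}) together with the basis expansions of the coordinate divisors. A minimal numerical sketch (the index conventions $0 \to D_1$, $1 \to D_6$, $2 \to D_7$ are ours):

```python
from fractions import Fraction as F
from itertools import product

# Non-vanishing triple intersections in the basis {D_1, D_6, D_7},
# read off from I_3 in (I3a). Indices: 0 -> D_1, 1 -> D_6, 2 -> D_7.
I = {}
def set_sym(i, j, k, val):
    for p in set([(i,j,k), (i,k,j), (j,i,k), (j,k,i), (k,i,j), (k,j,i)]):
        I[p] = F(val)

set_sym(0, 0, 0, 1)    # D_1^3 = 1
set_sym(0, 0, 2, -3)   # D_1^2 D_7 = -3
set_sym(0, 2, 2, 9)    # D_1 D_7^2 = 9
set_sym(1, 2, 2, 18)   # D_6 D_7^2 = 18
set_sym(2, 2, 2, 81)   # D_7^3 = 81

def triple(A, B, C):
    """Triple intersection of three divisors given as vectors in the basis."""
    return sum(A[i] * B[j] * C[k] * I.get((i, j, k), F(0))
               for i, j, k in product(range(3), repeat=3))

third = F(1, 3)
D = {1: [F(1), F(0), F(0)], 6: [F(0), F(1), F(0)], 7: [F(0), F(0), F(1)]}
D[2] = [-F(1), F(1), F(0)]        # D_2 = D_6 - D_1
D[3] = [F(0), -third, third]      # D_3 = (D_7 - D_6)/3
D[4] = [-F(1), -third, third]     # D_4 = (D_7 - D_6 - 3 D_1)/3
D[5] = [F(1), -4*third, third]    # D_5 = (D_7 - 4 D_6 + 3 D_1)/3

print(triple(D[1], D[1], D[1]))   # 1: D_1^3 > 0, consistent with dP_8 (9 - n = 1)
print(triple(D[6], D[6], D[6]))   # 0: D_6^3 vanishes, as expected for a K3 fibre
print(triple(D[4], D[4], D[4]))   # -6: D_4 is rigid but not del Pezzo
```

The same function reproduces every entry of Tab. \ref{Tab2}.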
To be more specific, \texttt{cohomCalg} gives the following Hodge numbers: \begin{eqnarray} D_2 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 0 & & 11 & & 0 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular} \qquad D_4 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 0 & & 16 & & 0 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular} \qquad D_5 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 0 & & 14 & & 0 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular} \nonumber \end{eqnarray} Tab. \ref{Tab2} also shows that $D_2$, $D_4$ and $D_5$ do not satisfy the necessary condition for being a dP, although they are rigid. Using triple intersection numbers and the general relations (\ref{chi}) and (\ref{chih}), we find $\chi(D_2) = 13$, $\chi(D_4) = 18$, $\chi(D_5) = 16$ and $\chi_h(D_2) = \chi_h(D_4)=\chi_h(D_5)=1$. Finally, $D_3$ and $D_7$ are two `special deformation' divisors SD$_1$ and SD$_2$ (in the notation of \cite{Gao:2013pra}) with $h^{1,0} =0$ and $h^{2,0} \neq 0$ since their Hodge diamonds read: \begin{eqnarray} & & \hskip-0.75cm D_3 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 2 & & 29 & & 2 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular} \qquad \qquad D_7 \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 0 & & 0 & \\ 23 & & 159 & & 23 \\ & 0 & & 0 & \\ & & 1 & & \\ \end{tabular}\,,\nonumber \end{eqnarray} showing that $\chi(D_3) = 35$ and $\chi(D_7) = 207$. \subsubsection{Volume form} Using (\ref{I3a}), and expanding the K\"ahler form in the basis $\{D_1, D_6, D_7\}$ as $J = t_1 \, D_1 + t_6 \, D_6 + t_7 \, D_7$, the overall CY volume becomes: \begin{equation} \vo =\frac{27}{2} \, t_7^3 +9 \,t_7^2 \, t_6+\frac92 \, t_7^2 \, t_1-\frac32 \,t_7 \, t_1^2+\frac16\,t_1^3\,.
\label{vo1} \end{equation} Given that the 4-cycle volume moduli look like: \begin{equation} \tau_1 = \frac12\left(t_1 - 3 \,t_7\right)^2\,, \qquad \tau_6 = 9 \, t_7^2\,, \qquad \tau_7 = \frac{3}{2}\left( 27\, t_7^2 - t_1^2 + 12 t_7 \, t_6 + 6 \, t_7\, t_1\right)\,, \end{equation} the overall volume $\vo$ takes the following form: \begin{equation} \vo = \frac16 \left( \sqrt{\tau_6} \, (\tau_7-2 \tau_6 + 3 \tau_1) - 2 \,\sqrt{2} \, \tau_1^{3/2}\right) = t_6\tau_6 +\frac23 \,\tau_6^{3/2} -\frac{\sqrt{2}}{3}\,\tau_1^{3/2}\,. \label{eq:volEx1a} \end{equation} This expression for the volume reflects the fact that $D_6$ is a K3 fibre over a ${\mathbb P}^1$ base of size $t_6$ while $D_1$ is a shrinkable dP$_8$ corresponding to a small divisor in the LVS framework. Moreover, it suggests trading the basis element $D_7$ for $D_x =D_7-2 D_6 + 3 D_1$, since the intersection polynomial (\ref{I3a}) then simplifies to: \begin{equation} I_3 = D_1^3 + 18 \, D_6\, D_x^2\,. \end{equation} In turn, expanding the K\"ahler form in the new basis as $J = t_s \, D_1 + t_b \, D_6 + t_f \, D_x$, where $s$, $b$ and $f$ stand for `small', `base' and `fibre' respectively, the volume form reduces to the minimal version needed for embedding fibre inflation models: \begin{equation} \vo = 9\, t_b \, t_f^2 + \frac16\,t_s^3 = \frac16 \,\sqrt{\tau_f}\tau_b - \frac{\sqrt{2}}{3}\,\tau_s^{3/2} = t_b\tau_f- \frac{\sqrt{2}}{3}\,\tau_s^{3/2}\,, \label{simpleVol} \end{equation} where we have used the following conversion relations in the second step: \begin{equation} t_s = - \sqrt{2}\, \sqrt{\tau_s}\,, \qquad t_b = \frac{\tau_b}{6\, \sqrt{\tau_f}}\,, \qquad t_f = \frac13\,\sqrt{\tau_f}\,.
\label{ttau} \end{equation} Let us finally mention that $D_x$ is a connected divisor with the following Hodge numbers: \begin{eqnarray} D_x \equiv \begin{tabular}{ccccc} & & 1 & & \\ & 1 & & 1 & \\ 9 & & 92 & & 9 \\ & 1 & & 1 & \\ & & 1 & & \\ \end{tabular}, \qquad \chi(D_x) = 108\,.\nonumber \end{eqnarray} \subsection{Brane setups} Let us now present different globally consistent brane setups which can lead to fibre inflation models. We start by describing possible choices for the orientifold involution. \subsubsection{Orientifold involution} We focus on orientifold involutions of the form $x_i \to -x_i$ with $i=1,...,7$ which feature an O7-plane on $D_i$ and O3-planes at the fixed points listed in Tab. \ref{Tab3}. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $\sigma$ & O7 & O3 & $N_{\rm O3}$ & $\chi({\rm O7})$ & $\chi_{\rm eff}$ \\ \hline \hline $x_1 \to -x_1$ & $D_1$ & $\{{D_2 D_3 D_7}, {D_2 D_4 D_5}, {D_3 D_5 D_6}, {D_4 D_6 D_7} \}$ & $\{3,2,2,6\}$ & 11 & -190 \\ $x_2 \to -x_2$ & $D_2$ & $\{{D_1 D_3 D_7}, {D_3 D_4 D_6}, {D_5 D_6 D_7} \}$ & $\{3,2,6\}$ & 13 & -194 \\ $x_3 \to -x_3$ & $D_3$ & $\{{D_1 D_2 D_7}, {D_2 D_4 D_6}, {D_4 D_5 D_7}\}$ & $\{3,0,2\}$ & 35 & -190 \\ $x_4 \to -x_4$ & $D_4$ & $\{{D_2 D_3 D_6}, {D_3 D_5 D_7}\}$ & $\{0,2\}$ & 18 & -204 \\ $x_5 \to -x_5$ & $D_5$ & $\{{D_1 D_2 D_4}, {D_1 D_3 D_6}, {D_3 D_4 D_7}\}$ & $\{2,0,2\}$ & 16 & -200 \\ $x_6 \to -x_6$ & $D_6$ & $\{{D_1 D_4 D_7}, {D_2 D_5 D_7}\}$ & $\{6,6\}$ & 24 & -192 \\ $x_7 \to -x_7$ & $D_7$ & $\{{D_1 D_2 D_3}, {D_1 D_4 D_6}, {D_2 D_5 D_6}\}$ & $\{1,0,0\}$ & 207 & -30 \\ \hline \end{tabular} \caption{Fixed point set for involutions of the form $x_i \to -x_i$ with $i=1,...,7$.} \label{Tab3} \end{table} The effective non-trivial fixed point set in Tab. \ref{Tab3} has been obtained after taking care of the SR ideal symmetry. 
Moreover, the total number of O3-planes $N_{\rm O3}$ is obtained from the triple intersections restricted to the CY hypersurface, while the effective Euler number $\chi_{\rm eff}$ has been computed, using (\ref{chi}) and (\ref{chih}), as $\chi_{\rm eff} = \chi(X) + 24 \, \chi_h ({\rm O7}) - 2 \, \chi({\rm O7})$. We now focus on two different kinds of D7-brane setups which satisfy the D7-tadpole cancellation condition (\ref{eq:D7tadpole}): \begin{itemize} \item D7-branes on top of the O7-plane: in this case string loop effects simplify since winding corrections are absent due to the fact that there is no intersection between D7-branes and/or O7-planes. \item D7-branes not (entirely) on top of the O7-plane: in this case $g_s$ corrections to the scalar potential can potentially involve also winding loop effects which are crucial to drive inflation in most fibre inflation models. \end{itemize} \subsubsection{Tadpole cancellation} Let us now present some explicit choices of brane setup which satisfy the D7-tadpole cancellation condition for different orientifold involutions. \begin{itemize} \item \textbf{Case 1}: we focus on the involution $\sigma: x_3 \to -x_3$ with an O7-plane wrapping $D_3$. D7-tadpole cancellation is satisfied via the following brane setup: \begin{equation} 8 [{\rm O7}] = 8 \left( [D_2] + [D_5]\right) \,, \end{equation} implying that two stacks of D7-branes wrap $D_2$ and $D_5$. The condition for D3-tadpole cancellation (\ref{eq:D3tadpole}) instead becomes: \begin{equation} N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = \frac{N_{\rm O3}}{4} + \frac{\chi({\rm O7})}{12} + \sum_a\, \frac{N_a \left(\chi(D_a) + \chi(D_a^\prime) \right) }{48} = 9\,, \nonumber \end{equation} which leaves some (albeit limited) room for turning on both gauge and background fluxes for complex structure and dilaton stabilisation. \item \textbf{Case 2}: we consider the involution $\sigma: x_6 \to -x_6$ with an O7-plane on $D_6$.
The D7-tadpole condition can be satisfied by placing 4 D7-branes (plus their images) on top of the O7-plane: \begin{equation} 8 [{\rm O7}] =8 [D_6]\,. \end{equation} Thus D3-tadpole cancellation takes the form: \begin{equation} N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = 9\,, \nonumber \end{equation} leaving again some freedom to turn on gauge and 3-form fluxes. \item \textbf{Case 3}: the involution $\sigma: x_7 \to -x_7$, which results in an O7-plane wrapping $D_7$, allows for greater freedom in switching on background fluxes since $\chi(D_7) = 207$. The D7-tadpole cancellation condition can be satisfied by: \begin{equation} a)\qquad 8 [{\rm O7}] = 8\, \left(3\, [D_3] + [D_6] \right)\,, \end{equation} with two stacks of D7-branes wrapping $D_3$ and $D_6$. Hence D3-brane tadpoles can be cancelled if: \begin{equation} N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = 39\,, \nonumber \end{equation} showing that the contribution to the total D3-brane charge from background fluxes can indeed be larger in this case. The involution $\sigma: x_7 \to -x_7$ also allows for many other choices of brane setup which cancel D7-tadpoles, such as: \begin{eqnarray} b) \qquad 8 [{\rm O7}] &=& 8\left(3\, [D_2] +3\, [D_5] + [D_6] \right) \qquad\qquad\Rightarrow\quad N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = 36\,, \nonumber \\ c) \qquad 8 [{\rm O7}] &=& 8\left(2\, [D_2] + [D_3] +2\, [D_5] + [D_6] \right) \quad\Rightarrow\quad N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = 37\,, \nonumber \\ d) \qquad 8 [{\rm O7}] &=& 8\left([D_2] + 2\, [D_3] + [D_5] + [D_6] \right) \quad\,\,\,\,\Rightarrow\quad N_{\rm D3} + \frac{N_{\rm flux}}{2} + N_{\rm gauge} = 38\,. \nonumber \end{eqnarray} \end{itemize} \subsection{String loop effects} Let us now follow the procedure described in Sec. \ref{ComputeLoops} to write down the expression for the string loop corrections to the scalar potential for each brane setup described above.
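Before doing so, the D3-tadpole numbers quoted in the three cases above can be cross-checked by redoing the arithmetic of (\ref{eq:D3tadpole}) numerically. The sketch below uses our own bookkeeping: each stack enters as a pair $(N_a, \chi(D_a))$ with $\chi(D_a^\prime)=\chi(D_a)$, since the wrapped divisors are invariant; the Euler numbers are those computed earlier and $N_{\rm O3}$ is read off from Tab. \ref{Tab3}.

```python
from fractions import Fraction as F

# Right-hand side of the D3-tadpole condition (eq:D3tadpole):
# N_O3/4 + chi(O7)/12 + sum_a N_a (chi(D_a) + chi(D_a')) / 48,
# with chi(D_a') = chi(D_a) for the invariant divisors used here.
def d3_rhs(N_O3, chi_O7, stacks):
    return (F(N_O3, 4) + F(chi_O7, 12)
            + sum(F(N * 2 * chi, 48) for N, chi in stacks))

chi = {2: 13, 3: 35, 5: 16, 6: 24, 7: 207}   # Euler numbers from the text

print(d3_rhs(5, chi[3], [(4, chi[2]), (4, chi[5])]))        # Case 1:  9
print(d3_rhs(12, chi[6], [(4, chi[6])]))                    # Case 2:  9
print(d3_rhs(1, chi[7], [(12, chi[3]), (4, chi[6])]))       # Case 3a: 39
print(d3_rhs(1, chi[7],
             [(12, chi[2]), (12, chi[5]), (4, chi[6])]))    # Case 3b: 36
```

The same function gives $37$ and $38$ for cases ($c$) and ($d$), in agreement with the numbers above.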
Given that winding loop effects arise due to the exchange of strings wound around non-contractible 1-cycles at the intersection between stacks of D7-branes/O7-planes, we start by listing in Tab. \ref{Tab4} all possible intersections between two coordinate divisors. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $D_1$ & $D_2$ & $D_3$ & $D_4$ & $D_5$ & $D_6$ & $D_7$ \\ \hline \hline $D_1$ & $\mathcal{C}_2$ & ${\mathbb T}^2$ & ${\mathbb T}^2$ & $\mathcal{C}_2$ & $\emptyset$ & $\emptyset$ & $\mathcal{C}_4$ \\ $D_2$ & ${\mathbb T}^2$ & ${\mathbb P}^1$ & ${\mathbb T}^2$ & $2\,{\mathbb P}^1$ & $\mathcal{C}_2$ & $\emptyset$ & $\mathcal{C}_4$ \\ $D_3$ & ${\mathbb T}^2$ & ${\mathbb T}^2$ & $\mathcal{C}_2$ & ${\mathbb P}^1$ & ${\mathbb P}^1$ & $\mathcal{C}_2$ & $\mathcal{C}_{14}$ \\ $D_4$ & $\mathcal{C}_2$ & $2\,{\mathbb P}^1$ & ${\mathbb P}^1$ & $6\, {\mathbb P}^1$ & ${\mathbb P}^1$ & $\mathcal{C}_2$ & $\mathcal{C}_5$ \\ $D_5$ & $\emptyset$ & $\mathcal{C}_2$ & ${\mathbb P}^1$ & ${\mathbb P}^1$ & $4\,{\mathbb P}^1$ & $\mathcal{C}_2$ & $\mathcal{C}_5$ \\ $D_6$ & $\emptyset$ & $\emptyset$ & $\mathcal{C}_2$ & $\mathcal{C}_2$ & $\mathcal{C}_2$ & $\emptyset$ & $\mathcal{C}_{10}$ \\ $D_7$ & $\mathcal{C}_4$ & $\mathcal{C}_4$ & $\mathcal{C}_{14}$ & $\mathcal{C}_5$ & $\mathcal{C}_5$ & $\mathcal{C}_{10}$ & $\mathcal{C}_{82}$ \\ \hline \end{tabular} \caption{Intersection curves of two coordinate divisors. Here $\mathcal{C}_g$ denotes a curve with Hodge numbers $h^{0,0} = 1$ and $h^{1,0} = g$ (hence ${\mathbb T}^2\equiv \mathcal{C}_1$), while $n\,{\mathbb P}^1$ indicates the disjoint union of $n$ ${\mathbb P}^1$s.} \label{Tab4} \end{table} Notice that, whenever the intersection is the disjoint union of ${\mathbb P}^1$s, there is no non-contractible 1-cycle, and so winding loop corrections are absent by construction. \subsubsection{Case 1} This brane setup is characterised by two D7-stacks wrapping $D_2$ and $D_5$ and an O7-plane located at $D_3$. From Tab. 
\ref{Tab4} we see that all relevant intersections are: \begin{equation} D_3 \cap D_5 = {\mathbb P}^1\,, \qquad D_2\cap D_3 = {\mathbb T}^2\,, \qquad D_2 \cap D_5 ={\cal C}_2\,. \end{equation} Thus we can have winding corrections only from the intersection of the D7s wrapping $D_2$ with either the D7s on $D_5$ or the O7 on $D_3$ since a ${\mathbb P}^1$ does not contain non-contractible 1-cycles. The volumes of the corresponding intersection curves read: \begin{equation} t^\cap(D_2 \cap D_3) = \int_X J \wedge D_2 \wedge D_3 = t_s + 6 \, t_f\,, \quad t^\cap(D_2 \cap D_5) = \int_X J \wedge D_2 \wedge D_5 = 6 \, t_f\,. \nonumber \end{equation} Therefore from (\ref{LoopCorrKw}) and (\ref{VgsW}) we have that winding string loop corrections take the form: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm W}=-2\left(\frac{g_s}{8\pi}\right) \frac{W_0^2}{\vo^3} \left( \frac{C_1^{\scriptscriptstyle \rm W}}{6 \, t_f}+\frac{C_2^{\scriptscriptstyle \rm W}}{t_s + 6 \, t_f} \right) = -\left(\frac{g_s}{8\pi}\right) \frac{W_0^2}{\vo^3} \left( \frac{C_1^{\scriptscriptstyle \rm W}}{\sqrt{\tau_f}}+\frac{C_2^{\scriptscriptstyle \rm W}}{\sqrt{\tau_f}-\sqrt{\frac{\tau_s}{2}}} \right)\,. \label{VW} \end{equation} Given that the O7-plane and all D7-branes intersect each other, there are no KK loop corrections induced by parallel O7/D7 stacks. However, as shown in Tab. \ref{Tab3}, the fixed point set includes 3 O3-planes located at $D_1 D_2 D_7$ and 2 O3s at $D_4 D_5 D_7$ on the CY hypersurface. Thus KK $g_s$ effects arise from O7/O3 and D7/O3 combinations which lead to a sum over all basis elements in the general 1-loop KK scalar potential (\ref{VgsKK}). 
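The 2-cycle volumes controlling the winding corrections above can be double-checked directly from the triple intersection numbers. A minimal numerical sketch in the basis $\{D_1, D_6, D_x\}$, where the only non-vanishing intersections are $D_1^3=1$ and $D_6\,D_x^2=18$ (the index conventions are ours):

```python
from fractions import Fraction as F
from itertools import product

# Non-vanishing triple intersections in the basis {D_1, D_6, D_x}:
# D_1^3 = 1 and D_6 D_x^2 = 18. Indices: 0 -> D_1, 1 -> D_6, 2 -> D_x.
def I3(i, j, k):
    return F({(0, 0, 0): 1, (1, 2, 2): 18}.get(tuple(sorted((i, j, k))), 0))

def curve_volume(Di, Dj):
    """Coefficients of (t_s, t_b, t_f) in t^cap = J . D_i . D_j,
    with J = t_s D_1 + t_b D_6 + t_f D_x."""
    return tuple(sum(Di[a] * Dj[b] * I3(a, b, k)
                     for a, b in product(range(3), repeat=2))
                 for k in range(3))

D2 = (F(-1), F(1), F(0))         # D_2 = D_6 - D_1
D3 = (F(-1), F(1, 3), F(1, 3))   # D_3 = (D_x + D_6 - 3 D_1)/3
D5 = (F(0), F(-2, 3), F(1, 3))   # D_5 = (D_x - 2 D_6)/3

print(tuple(map(int, curve_volume(D2, D3))))   # (1, 0, 6): t_s + 6 t_f
print(tuple(map(int, curve_volume(D2, D5))))   # (0, 0, 6): 6 t_f
```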
Ignoring terms which depend only on $\vo$ and $\tau_s$, which are fixed at leading order, and neglecting terms which have the same volume scaling as (\ref{VW}) but with additional suppression powers of $g_s^2\ll 1$, we end up with (focusing on the region where $\sqrt{\tau_f}\tau_b\gg \tau_s^{3/2}$): \begin{equation} V_{g_s}^{\scriptscriptstyle \rm KK} = g_s^2 \left(\frac{g_s}{8\pi}\right) \frac{W_0^2}{\vo^2} \left[\frac{(C_f^{\scriptscriptstyle \rm KK})^2}{4\tau_f^2} + \frac{(C_b^{\scriptscriptstyle \rm KK})^2 \tau_f}{72 \vo^2} \left(1-6\frac{C_s^{\scriptscriptstyle \rm KK}}{C_b^{\scriptscriptstyle \rm KK}} \sqrt{\frac{2\tau_s}{\tau_f}}+ \frac{C_f^{\scriptscriptstyle \rm KK}}{C_b^{\scriptscriptstyle \rm KK}} \left(\frac{2 \tau_s}{\tau_f}\right)^{3/2}\right) \right]\,. \label{VKK} \end{equation} Therefore the sum of the two string loop corrections (\ref{VW}) and (\ref{VKK}) to the scalar potential for the brane setup 1 is: \begin{equation} V_{g_s} = \left[\frac{A}{\tau_f^2}-\frac{1}{\vo\sqrt{\tau_f}}\left(\tilde{B} +\frac{\hat{B}}{1-\sqrt{\frac{\tau_s}{2\tau_f}}}\right)+\frac{\tau_f}{\vo^2} \left(C- \tilde{C} \sqrt{\frac{\tau_s}{\tau_f}}+ \hat{C} \left(\frac{\tau_s}{\tau_f}\right)^{3/2}\right)\right] \frac{W_0^2}{\vo^2}\,, \label{VA1} \end{equation} where (setting $\kappa \equiv g_s/(8\pi)$): \begin{eqnarray} A &=& \frac{\kappa}{4} \left(g_s\,C_f^{\scriptscriptstyle \rm KK}\right)^2 >0 \nonumber \\ \tilde{B} &=& \kappa\, C_1^{\scriptscriptstyle \rm W} \nonumber \\ \hat{B} &=& - \kappa\, C_2^{\scriptscriptstyle \rm W} \nonumber \\ C &=& \frac{\kappa}{72} \left(g_s\, C_b^{\scriptscriptstyle \rm KK}\right)^2 >0 \nonumber \\ \tilde{C} &=& \frac{\kappa}{6\sqrt{2}}\, g_s^2\,C_s^{\scriptscriptstyle \rm KK}\,C_b^{\scriptscriptstyle \rm KK} \nonumber \\ \hat{C} &=& \frac{\kappa}{18\sqrt{2}}\,g_s^2\,C_f^{\scriptscriptstyle \rm KK}\,C_b^{\scriptscriptstyle \rm KK}\,.
\nonumber \end{eqnarray} Note that in the region of field space where $\tau_f\gg \tau_s$ the terms in (\ref{VA1}) proportional to $\tilde{C}$ and $\hat{C}$ are negligible and the denominator of the $\hat{B}$ term reduces to unity, so that the loop-generated scalar potential simplifies to: \begin{equation} V_{g_s} \simeq \left(\frac{A}{\tau_f^2}-\frac{B}{\vo\sqrt{\tau_f}} +\frac{C\,\tau_f}{\vo^2} \right) \frac{W_0^2}{\vo^2}\,, \label{VA1simpl} \end{equation} with $B=\tilde{B}+\hat{B}$. This reproduces exactly the inflationary potential of `fibre inflation' \cite{Cicoli:2008gp}. \subsubsection{Case 2} In this case D7-tadpole cancellation is ensured by placing 4 D7-branes (plus their images) on top of the O7-plane which wraps $D_6$. Thus there is no intersection between the O7-plane and the D7-branes, resulting in the absence of winding loop corrections. Moreover, there are no KK loop effects from the O7/D7 system since the distance between the O7 and the D7 stack is zero. However, the fixed point set has 6 O3-planes at $D_1 D_4 D_7$ and 6 further O3-planes at $D_2 D_5 D_7$, and so KK string loops can arise from the exchange of KK modes between the O7 or the D7s on $D_6$ and the O3s. Since the volume of $D_6$ is given by $\tau_f$, the simple expression for the volume (\ref{simpleVol}) suggests that the distance between the O7/D7s and the O3s is parametrised by the base of the fibration $t_b$. Hence, using (\ref{VgsKK}), the KK string loop corrections to the scalar potential become: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm KK} = \frac{C\,\tau_f\,W_0^2}{\vo^4} \qquad\text{with}\qquad C = \frac{\kappa}{72}\left(g_s\,C_b^{\scriptscriptstyle \rm KK}\right)^2 \,. \label{V2} \end{equation} \subsubsection{Case 3} The brane setup of case ($a$) is characterised by two D7-stacks wrapping $D_3$ and $D_6$ and an O7-plane located at $D_7$. From Tab.
\ref{Tab4} we see that all relevant intersections are: \begin{equation} D_3 \cap D_6 = \mathcal{C}_2\,, \qquad D_3\cap D_7 = \mathcal{C}_{14}\,, \qquad D_6 \cap D_7 =\mathcal{C}_{10}\,. \end{equation} Thus all intersections give rise to curves which contain non-contractible 1-cycles and whose volume takes the form: \begin{equation} t^\cap(D_3 \cap D_6) = 6 \, t_f \,, \qquad t^\cap(D_3 \cap D_7) = 3 \, ( t_s + 6 \, t_f + 2\,t_b) \,, \qquad t^\cap(D_6 \cap D_7) = 18 \, t_f\,. \nonumber \end{equation} Therefore from (\ref{LoopCorrKw}) and (\ref{VgsW}) we have that winding string loop corrections take the form: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm W} =-\frac23\left(\frac{g_s}{8\pi}\right) \frac{W_0^2}{\vo^3} \left(\frac{3\,C_1^{\scriptscriptstyle \rm W}+C_2^{\scriptscriptstyle \rm W}}{6 \, t_f}+\frac{C_3^{\scriptscriptstyle \rm W}}{t_s + 6 \, t_f + 2\,t_b}\right). \label{VW2} \end{equation} Working in the limit $t_b\gg t_i$ with $i=f,s$, the previous expression simplifies to: \begin{equation} V_{g_s}^{\scriptscriptstyle \rm W} \simeq -\frac13\left(\frac{g_s}{8\pi}\right) \frac{W_0^2}{\vo^3} \left(\frac{3\,C_1^{\scriptscriptstyle \rm W}+C_2^{\scriptscriptstyle \rm W}}{\sqrt{\tau_f}}+\frac{C_3^{\scriptscriptstyle \rm W}\,\tau_f}{\vo} \right) \,. \label{VW3} \end{equation} Given that the O7-plane and all D7-branes intersect each other, there are no KK loop corrections induced by parallel O7/D7 stacks. However, as shown in Tab. \ref{Tab3}, the fixed point set includes 1 O3-plane located at $D_1 D_2 D_3$ on the CY hypersurface. Thus KK $g_s$ effects arise from O7/O3 and D7/O3 combinations which lead to a sum over all basis elements in the general 1-loop KK scalar potential (\ref{VgsKK}). Thus KK loop effects take the same form as in (\ref{VKK}) where however the term proportional to $C_b^{\scriptscriptstyle \rm KK}$ can be neglected since it is suppressed with respect to the term proportional to $C_3^{\scriptscriptstyle \rm W}$ in (\ref{VW3}) by $g_s^2\ll 1$. 
Hence the total $g_s$ potential for the brane setup 3 is: \begin{equation} V_{g_s} = V_{g_s}^{\scriptscriptstyle \rm W} + V_{g_s}^{\scriptscriptstyle \rm KK} = \left(\frac{A}{\tau_f^2}-\frac{B}{\vo\sqrt{\tau_f}}+\frac{D\,\tau_f}{\vo^2}\right) \frac{W_0^2}{\vo^2}\,, \label{Vbad} \end{equation} where: \begin{eqnarray} A &=& \frac{\kappa}{4} \left(g_s\,C_f^{\scriptscriptstyle \rm KK}\right)^2 >0 \nonumber \\ B &=& \frac{\kappa}{3} \left(3\,C_1^{\scriptscriptstyle \rm W}+ C_2^{\scriptscriptstyle \rm W}\right) \nonumber \\ D &=& - \frac{\kappa}{3}\, C_3^{\scriptscriptstyle \rm W}\,. \nonumber \end{eqnarray} A similar form of the string loop scalar potential arises also for the cases ($b$), ($c$) and ($d$) of the brane setup 3. All these cases allow for a larger freedom to turn on background fluxes but generate a winding correction that has a linear dependence on $\tau_f$ without additional suppression factors of $g_s^2$ which are instead typical of KK loop effects like the one proportional to $C$ in (\ref{VA1simpl}). The winding correction proportional to $D$ in (\ref{Vbad}) would therefore cause a steepening of the potential which would destroy the flatness of the inflationary plateau unless the coefficient $C_3^{\scriptscriptstyle \rm W}$ is unnaturally tuned to very small values. Thus the potential (\ref{Vbad}) is not particularly suitable to support enough e-foldings of inflation. This example illustrates the challenges that one can encounter when attempting to build globally consistent D-brane models which give rise to appropriate string loop effects to drive inflation. \subsection{Higher derivative corrections} Let us now consider $F^4$ corrections to the scalar potential which can be computed independently of the choice of the orientifold involution and brane setup. The relevant topological quantities which control the size of these higher derivative $\alpha'$ effects are the various $\Pi_i$'s defined in (\ref{Pii}).
They turn out to be: \begin{equation} \Pi_1 =10 , \quad \Pi_2 = 14 , \quad \Pi_3 =34 , \quad \Pi_4 =24 , \quad \Pi_5 =20 ,\quad \Pi_6 =24 , \quad \Pi_7 =126 , \quad \Pi_x = 108\,. \nonumber \end{equation} Now focusing on $\Pi_1$, $\Pi_6$ and $\Pi_x$, from (\ref{VF4}) the higher derivative scalar potential becomes: \begin{equation} V_{F^4} = - \left(\frac{g_s}{8\pi}\right)^2 \frac{\lambda\, W_0^4}{g_s^{3/2}\vo^4} \left(10 \,t_s + 24 \,t_b + 108\, t_f\right) \simeq \left(\frac{2}{3\vo\,\tau_f}+\frac{\sqrt{\tau_f}}{\vo^2} \right) \frac{F\,W_0^2}{\vo^2}\,, \label{F4terms} \end{equation} where we have neglected the term independent of the fibre modulus $\tau_f$ and: \begin{equation} F = -36 \,\lambda \left(\kappa\, W_0\right)^2 g_s^{-3/2}\,. \end{equation} Note that the topological quantities $\Pi_x$ and $\Pi_6$ which control the size of the two terms in (\ref{F4terms}) are both of the same order since $\Pi_x/\Pi_6=4.5$. As described in Sec. \ref{InflationaryModels}, left-right inflation models require instead a hierarchy of at least order $10^4$ between these two topological quantities in order to protect the flatness of the inflationary potential. Hence we conclude that this explicit CY example does not satisfy a crucial condition for the realisation of left-right inflation models where the inflationary plateau is developed by $F^4$ terms \cite{Broy:2015zba}. \subsection{Inflationary dynamics} The only case whose scalar potential is rich enough to yield an interesting inflationary dynamics is case 1. In fact, in case 2 the string loop potential (\ref{V2}) contains just a single KK contribution and the two $F^4$ terms in (\ref{F4terms}) do not feature the right hierarchy to realise left-right inflation. Moreover in case 3 the scalar potential (\ref{Vbad}) contains a dangerously large winding loop term which depends linearly on $\tau_f$, and so would very quickly destroy the flatness of the potential unless its coefficient is tuned to unnaturally small values.
We shall therefore focus on the scalar potential of case 1 and show that it can lead to a viable realisation of both `fibre inflation' and `$\alpha'$ inflation' models. The total scalar potential is given by the sum of the loop-induced contribution (\ref{VA1}) and the $F^4$ terms (\ref{F4terms}). In the region of field space where $\tau_f \gg \langle \tau_s \rangle$, this scalar potential simplifies to: \begin{equation} V_{\rm tot} = V_{g_s} + V_{F^4} = \frac{W_0^2}{\vo^2} \left[\frac{A}{\tau_f^2}- \frac{B}{\vo \sqrt{\tau_f}} \left(1-\frac{U}{\sqrt{\tau_f}}\right) + \frac{C \tau_f}{\vo^2} + \frac{F \sqrt{\tau_f}}{\vo^2}\right]\,, \label{infpot1} \end{equation} where (for $C_{\scriptscriptstyle \rm W} \equiv C_1^{\scriptscriptstyle \rm W} - C_2^{\scriptscriptstyle \rm W}$): \begin{equation} |U| = \frac{2|F|}{3B} = \frac{3}{\pi} \frac{|\lambda| \, W_0^2}{C_{\scriptscriptstyle \rm W}\sqrt{g_s}} \sim \mathcal{O}(|\lambda|) \ll 1\,, \end{equation} for natural values $C_{\scriptscriptstyle \rm W}\sim W_0\sim\mathcal{O}(1)$, $g_s\sim\mathcal{O}(0.1)$ and $|\lambda|\sim\mathcal{O}(10^{-3})$ \cite{Ciupke:2015msa}. The term proportional to $U$ is therefore suppressed by both $|U|\ll 1$ and $\tau_f\gg 1$, and so it can be safely neglected (it would just slightly shift the position of the minimum). Hence the final form of the inflationary potential is: \begin{equation} V_{\rm inf} = \frac{A\,W_0^2}{\vo^2} \left(\frac{1}{\tau_f^2}- \frac{C_1}{\vo \sqrt{\tau_f}} + C_2 \frac{\tau_f}{\vo^2} + C_3 \frac{\sqrt{\tau_f}}{\vo^2} \right), \label{infpot2} \end{equation} where: \begin{equation} C_1 = \frac{B}{A}=\left(\frac{2}{C_f^{\scriptscriptstyle \rm KK}}\right)^2\frac{C_{\scriptscriptstyle \rm W}}{g_s^2} \qquad C_2 = \frac{C}{A}=\frac 12 \left(\frac{C_b^{\scriptscriptstyle \rm KK}}{3 C_f^{\scriptscriptstyle \rm KK}}\right)^2 \qquad C_3 = \frac{F}{A}=-\frac{162 \,\lambda}{\pi\,g_s^{5/2}} \left(\frac{W_0}{3 C_f^{\scriptscriptstyle \rm KK}}\right)^2 \,.
\nonumber \end{equation} For $g_s\lesssim\mathcal{O}(0.1)$ and $|\lambda|\sim\mathcal{O}(10^{-3})$ and natural $\mathcal{O}(1)$ values of $W_0$ and the coefficients of the string loop effects, the terms in (\ref{infpot2}) proportional to $C_2$ and $C_3$ are both negligible with respect to the $C_1$-dependent term in the region where $1\ll \tau_f\ll\tau_b$ since: \begin{equation} \frac{C_2\,\tau_f/\vo^2}{C_1/(\vo \sqrt{\tau_f})} = \frac{\left(C_b^{\scriptscriptstyle \rm KK}\right)^2}{C_{\scriptscriptstyle \rm W}}\frac{g_s^2}{12} \frac{\tau_f}{\tau_b} \ll 1\,, \end{equation} and: \begin{equation} \frac{|C_3|\,\sqrt{\tau_f}/\vo^2}{C_1/(\vo \sqrt{\tau_f})} = \frac{972 \,\lambda}{\pi\,C_{\scriptscriptstyle \rm W}\,g_s^{1/2}} \left(\frac{W_0}{6}\right)^2 \frac{1}{\sqrt{\tau_f}}\frac{\tau_f}{\tau_b} \ll 1\,. \end{equation} Therefore the interplay between the first two terms in (\ref{infpot2}) gives a minimum of the inflationary potential for $1\ll \tau_f\ll\tau_b$ which is located at: \begin{equation} \langle \tau_f \rangle \simeq \left(\frac{4}{C_1}\right)^{2/3}\vo^{2/3}\sim g_s^{4/3} \,\vo^{2/3}\ll \vo^{2/3} \qquad\Rightarrow\qquad 1\ll\langle \tau_f \rangle\sim g_s^2 \,\langle \tau_b \rangle \ll \langle \tau_b \rangle\,. \label{minimum} \end{equation} In order to study the inflationary dynamics, it is convenient to work with the canonically normalised inflaton $\phi$ defined as: \begin{equation} \tau_f = e^{k \phi} = \langle \tau_f \rangle \,e^{k \varphi}\qquad\text{with}\qquad k = \frac{2}{\sqrt{3}}\,, \label{CN} \end{equation} where we have shifted the inflaton from its minimum as $\phi = \langle \phi \rangle + \varphi$. 
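For completeness, the location of the minimum quoted in (\ref{minimum}) follows from extremising the first two terms of (\ref{infpot2}), which dominate in this regime:
\begin{equation}
\frac{\partial}{\partial \tau_f}\left(\frac{1}{\tau_f^2}-\frac{C_1}{\vo\sqrt{\tau_f}}\right) = -\frac{2}{\tau_f^3}+\frac{C_1}{2\,\vo\,\tau_f^{3/2}} = 0
\qquad\Rightarrow\qquad
\langle \tau_f \rangle^{3/2} = \frac{4\,\vo}{C_1}\,,
\nonumber
\end{equation}
while the scaling $C_1 \propto g_s^{-2}$ then reproduces $\langle \tau_f \rangle \sim g_s^{4/3}\,\vo^{2/3}$.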
The scalar potential (\ref{infpot2}) written in terms of $\varphi$ looks like: \begin{equation} V_{\rm inf} = \frac{A W_0^2}{\langle \tau_f \rangle^2 \vo^2} \left(C_{\scriptscriptstyle \rm dS}+ e^{-2 k \varphi}- 4 e^{-\frac{k \varphi}{2}} + \mathcal{R}_1\, e^{k \varphi} + \mathcal{R}_2\, e^{\frac{k \varphi}{2}}\right)\,, \label{Vref} \end{equation} where we added a constant $C_{\scriptscriptstyle \rm dS}=3 - \mathcal{R}_1-\mathcal{R}_2$ to obtain a Minkowski (or slightly dS) vacuum and: \begin{equation} \mathcal{R}_1 = \frac{16 C_2}{C_1^2}= \left(\frac{C_f^{\scriptscriptstyle \rm KK} C_b^{\scriptscriptstyle \rm KK}}{C_{\scriptscriptstyle \rm W}}\right)^2 \frac{g_s^4}{18}\ll 1 \,, \nonumber \end{equation} and (without loss of generality we choose $\lambda=-|\lambda|<0$): \begin{equation} \mathcal{R}_2 = \frac{8 C_3}{C_1^{5/3}}\left(\frac{2}{\vo}\right)^{1/3} = \frac{18\,W_0^2}{\pi} \frac{\left(C_f^{\scriptscriptstyle \rm KK}\right)^{4/3}}{C_{\scriptscriptstyle \rm W}^{5/3}} \frac{|\lambda|\,g_s^{5/6}}{\vo^{1/3}} \ll 1 \,. \nonumber \label{Rs} \end{equation} Note that for $\mathcal{R}_2 \ll \mathcal{R}_1\ll 1$ (\ref{Vref}) reproduces exactly the inflationary potential of `fibre inflation' \cite{Cicoli:2008gp}. For $\mathcal{R}_1 = 10^{-6} \,(10^{-5})$ and $\mathcal{R}_2 = 0$ we obtain the same predictions: $n_s \simeq 0.964 \,(0.971)$ and $r \simeq 0.007 \,(0.008)$.\footnote{These values are actually slightly different than those reported in \cite{Cicoli:2008gp} since we are evaluating them at $50$, instead of $60$, e-foldings before the end of inflation.} On the other hand, for $\mathcal{R}_1 \ll \mathcal{R}_2\ll 1$ the potential (\ref{Vref}) is very similar to the one of `$\alpha'$ inflation' since in both cases the plateau is generated by a winding loop effect and the steepening for large $\varphi$ is due to an $F^4$ term \cite{Cicoli:2016chb}. The only difference is the term responsible for the minimum at $\varphi=0$. 
In our case it is a KK loop correction whereas in \cite{Cicoli:2016chb} it is another higher derivative $\alpha'$ effect. Due to this small difference, the final predictions for the two main cosmological observables are slightly different. In particular, for the same tensor-to-scalar ratio, in our case the spectral index is a bit larger since when the first derivatives of the two inflationary potentials are the same (and so $\epsilon \propto V'^2$ and $r=16\epsilon$ are the same), the second derivative of (\ref{Vref}) is larger than the one of the pure `$\alpha'$ inflation' potential (and so $\eta\propto V''$ and $n_s=1+2\eta-6\epsilon$ are larger). For illustrative purposes, $\mathcal{R}_2 = 1.5\cdot 10^{-3}$ and $\mathcal{R}_1 = 0$ would lead to $n_s \simeq 0.976$ and $r \simeq 0.01$ while `$\alpha'$ inflation' predicts $n_s\simeq 0.972$ for the same $r$ \cite{Cicoli:2016chb}. Let us now consider both positive exponentials in (\ref{Vref}) and compare their relative strength. Note that the term proportional to $\mathcal{R}_1$ becomes relevant and starts lifting the inflationary plateau roughly when: \begin{equation} 0.1 \left(C_{\scriptscriptstyle \rm dS} - 4 \,e^{-\frac{k \varphi}{2}}\right) \simeq \mathcal{R}_1\, e^{k \varphi}\,. \label{loopssteepening} \end{equation} Two illustrative numerical results are: \begin{equation} \varphi_1 \simeq 10.9 \quad \text{for} \quad \mathcal{R}_1 = 10^{-6} \qquad\text{and}\qquad \varphi_2 \simeq 8.9 \quad \text{for} \quad \mathcal{R}_1 = 10^{-5} \,. \label{philoops} \end{equation} Following the same logic, the point where the positive exponential proportional to $\mathcal{R}_2$ starts spoiling the flatness of the plateau can be estimated as: \begin{equation} 0.1 \left(C_{\scriptscriptstyle \rm dS} - 4 e^{-\frac{k \varphi}{2}}\right) \simeq \mathcal{R}_2\, e^{k \varphi/2}\,. \label{F4steepening} \end{equation} Fig. 
\ref{fig1} shows the value of $\varphi$ where the $F^4$ term becomes relevant as a function of $\mathcal{R}_2$, and compares it with the two numerical results in (\ref{philoops}). It is clear that the steepening, and hence also the cosmological observables, are determined by the $F^4$ term for $\mathcal{R}_2 > 5 \cdot 10^{-4}$ if $\mathcal{R}_1 = 10^{-6}$, and for $\mathcal{R}_2 > 2 \cdot 10^{-3}$ if $\mathcal{R}_1 = 10^{-5}$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.70\textwidth, angle=0]{plot1.eps} \caption{The red curve shows the point where the $F^4$ term starts spoiling the flatness of the inflationary plateau as a function of the small parameter $\mathcal{R}_2$. The green and blue horizontal lines show the values where the KK loop proportional to $\mathcal{R}_1$ becomes relevant for $\mathcal{R}_1=10^{-6}$ and $\mathcal{R}_1=10^{-5}$ respectively.} \label{fig1} \end{center} \end{figure} In Fig. \ref{fig2} we plot the inflationary potential for the three cases $\mathcal{R}_2 = \{0, 7 \cdot 10^{-4}, 1.5 \cdot 10^{-3}\}$ and $\mathcal{R}_1 = 10^{-6}$. The two limiting cases with $\mathcal{R}_2 = 0$ and $\mathcal{R}_2=1.5\cdot 10^{-3}$ reproduce respectively `fibre inflation' and `$\alpha'$ inflation'. The corresponding predictions for the cosmological observables are reported in Tab. \ref{tab1}. Note that both $r$ and $n_s$ become larger when $\mathcal{R}_2$ increases since horizon exit takes place in a steeper region of the scalar potential. \begin{figure}[h!] \begin{center} \includegraphics[width=0.70\textwidth, angle=0]{plot2.eps} \caption{Inflationary potential for different values of $\mathcal{R}_2$ and $\mathcal{R}_1$ fixed at $\mathcal{R}_1=10^{-6}$.} \label{fig2} \end{center} \end{figure} \begin{table}[h!]
\begin{center} \begin{tabular}{cccccc} \hline $\mathcal{R}_2$ & $n_s$ & $r$ & $\left|W_0\right|$ & $|\lambda|$ & $\delta$ \\ \hline $0$ & $0.964$ & $0.007$ & $5.7$ & $0$ & $0.17$ \\ \hline $7 \cdot 10^{-4}$ & $0.970$ & $0.008$ & $6.1$ & $1.5 \cdot 10^{-3}$ & $0.17$ \\ \hline $1.5 \cdot 10^{-3}$ & $0.977$ & $0.012$ & $6.7$ & $2.7 \cdot 10^{-3}$ & $0.17$ \\ \hline \end{tabular} \end{center} \caption{Predictions for the cosmological observables and choice of the underlying parameters for different values of $\mathcal{R}_2$ and $\mathcal{R}_1=10^{-6}$.} \label{tab1} \end{table} The right amplitude of the density perturbations can be obtained by imposing the COBE normalisation: \begin{equation} \left.\frac{V_{\rm inf}^3}{V_{\rm inf}^{'\, 2}}\right|_{\text{horizon exit}} = 2.7 \times 10^{-7}\,, \label{COBE} \end{equation} which, for $g_s = 0.1$ and $\vo = 10^3$, requires the natural values of $W_0$ reported in Tab. \ref{tab1}. Moreover $\mathcal{R}_1 = 10^{-6}$ and $\mathcal{R}_2$ can be exactly reproduced with reasonable choices of the underlying parameters $C_{\scriptscriptstyle \rm W} = 90$, $C^{\scriptscriptstyle \rm KK}_f = 65$, $C^{\scriptscriptstyle \rm KK}_b = 0.58$ and the values of $|\lambda|$ listed in Tab. \ref{tab1}. Let us finally check the consistency of the effective field theory. In order to trust our single field approximation, the mass of the volume mode has to be larger than the Hubble scale $H^2 \simeq V_{\rm inf}/3$. This condition boils down to: \begin{equation} \delta = \frac{H^2}{m^2_{\vo}} \simeq \frac{V_{\rm inf}}{V_{\alpha'}} \ll 1\,, \label{singlefield} \end{equation} where $V_{\alpha'}$ is the leading $\mathcal{O}(\alpha'^3)$ contribution to the scalar potential and reads: \begin{equation} V_{\alpha'} = \kappa \frac{3 \xi W_0^2}{4 g_s^{3/2} \vo^3}\,. \end{equation} As shown in Tab.
\ref{tab1}, the single field approximation is under control since $\delta\ll 1$ for each case (our CY example has $\chi_{\rm eff} = -190$ which gives $\xi = 0.46$). Moreover the $\alpha'$ expansion can be trusted only if: \begin{equation} \zeta = \frac{\xi}{2 g_s^{3/2} \vo} \ll 1\,. \end{equation} The previous choice of $g_s$ and $\vo$ gives $\zeta \simeq 0.007$, implying that the $\alpha'$ expansion is also under control. Finally, our choice of microscopic parameters leads to $\langle \tau_f \rangle \simeq 60 \gg \langle \tau_s \rangle \simeq 3$, so that the corrections proportional to $\langle \tau_s \rangle/\tau_f\lesssim 0.05$ in (\ref{VA1}) can be consistently neglected. \section{Conclusions} \label{Concl} String inflation models are very promising for describing the early universe due to the presence of approximate symmetries which can explain the flatness of the inflationary potential. However they sometimes lack a fully consistent description of the mechanism responsible for stabilising all the moduli in a concrete Calabi-Yau compactification. In this paper we presented the first explicit type IIB examples of globally consistent models where the Calabi-Yau background is described by toric geometry, all closed string moduli are stabilised, and the scalar potential of one of the K\"ahler moduli, namely the fibre modulus, is suitable to drive inflation. In particular we managed to produce global embeddings of both `fibre inflation' \cite{Cicoli:2008gp} and `$\alpha'$ inflation' \cite{Cicoli:2016chb} models which can predict primordial gravitational waves that might be detectable in the near future.
After finding $45$ different `weak Swiss cheese' Calabi-Yau manifolds with $h^{1,1}= 3$ featuring a shrinkable del Pezzo divisor needed for LVS moduli stabilisation, we focused on one of them and provided different consistent models with O7/O3-planes and D3/D7-branes which lead to the generation of the correct perturbative (both in $\alpha'$ and $g_s$) and non-perturbative effects to stabilise all the K\"ahler moduli and reproduce the potential of fibre inflation models. In particular, we showed that the inflationary potential features a long plateau that is naturally generated by winding loop corrections. At large inflaton values the inflationary potential instead has a rising behaviour which, depending on the values of the underlying parameters, can be due to either KK loop effects or higher derivative contributions. We believe that this paper represents an important step forward in the construction of globally consistent string inflation models even if Calabi-Yau manifolds with $h^{1,1}=3$ are too simple to also allow for the realisation of a chiral visible sector. Chiral matter can be included only in the presence of non-vanishing gauge fluxes on D7-branes which, when combined with the requirement of a viable inflationary direction, require Calabi-Yau manifolds with $h^{1,1}=4$. We leave the study of this case for future work. \section*{Acknowledgements} We would like to thank Roberto Valandro for many useful discussions. We are also thankful to Ross Altman, Volker Braun, David Ciupke, Xin Gao, James Gray, Jim Halverson and Christoph Mayrhofer for several useful conversations. PS is grateful to the Bologna INFN division for hospitality during the Spring and the Summer of 2016 when most of this work was carried out.
\section{Introduction} Hindi is written in the Devanagari script, which is an abugida, an orthographic system where the basic unit consists of a consonant and an optional vowel diacritic or a single vowel. Devanagari is fairly regular, but a Hindi word's actual pronunciation can differ from what is literally written in the Devanagari script.\footnote{Throughout this paper, we will adopt the convention of using $\langle$angle brackets$\rangle$ to describe how a word is literally spelled, and [square brackets] to describe how a word is actually pronounced.} For instance, in the Hindi word {\dn p\?pr} $\langle$\textipa{pep@R@}$\rangle$ `paper', there are three units {\dn p\?}~$\langle$\textipa{pe}$\rangle$, {\dn p}~$\langle$\textipa{p@}$\rangle$, and {\dn r}~$\langle$\textipa{R@}$\rangle$, corresponding to the pronounced forms \textipa{[pe]}, \textipa{[p@]}, and \textipa{[r]}. The second unit's inherent schwa is retained in the pronounced form, but the third unit's inherent schwa is deleted. Predicting whether a schwa will be deleted from a word's orthographic form is generally difficult. Some reliable rules can be stated, e.g. `delete any schwa at the end of the word', but these do not perform well enough for use in an application that requires schwa deletion, like a text-to-speech synthesis system. This work approaches the problem of predicting schwa deletion in Hindi with machine learning techniques, achieving high accuracy with minimal human intervention. We also successfully apply our Hindi schwa deletion model to a related language, Punjabi. Our scripts for obtaining machine-readable versions of the Hindi and Punjabi pronunciation datasets are published to facilitate future comparisons.\footnote{All of the code, models, and datasets for this research are publicly available at \url{https://github.com/aryamanarora/schwa-deletion}.} \section{Previous Work} Previous approaches to schwa deletion in Hindi broadly fall into two classes. 
The first class is characterized by its use of rules given in the formalism of {\it The Sound Pattern of English} \citep{spe}. Looking to analyses of schwa deletion produced by linguists \citep[e.g.,][]{ohala_1983} in this framework, others built schwa deletion systems by implementing their rules. For example, this is a rule used by \citet{narasimhan_schwa-deletion_2004}, describing schwa deletion for words like {\dn j\2glF} $\langle$\textipa{{dZ}@Ng@li:}$\rangle$: \vspace{0.4em} \begin{center} \begin{tabular}{cccccccccccccc} C & V & C & C & \textbf{a} & C & V & & C & V & C & C & C & V \\ \textipa{{dZ}} & \textipa{@} & \textipa{N} & \textipa{g} & \textbf{\textipa{@}} & \textipa{l} & \textipa{i:} & $\rightarrow$ & \textipa{{dZ}} & \textipa{@} & \textipa{N} & \textipa{g} & \textipa{l} & \textipa{i:} \end{tabular} \end{center} \vspace{0.4em} \noindent Paraphrasing, this rule could be read, ``if a schwa occurs with a vowel and two consonants to its left, and a consonant and a vowel to its right, it should be deleted.'' A typical system of this class would apply many of these rules to reach a word's output form, sometimes along with other information, like the set of allowable consonant clusters in Hindi. These systems were able to achieve fair accuracy (\citeauthor{narasimhan_schwa-deletion_2004}\ achieve 89\%), but were ill-equipped to deal with cases that seemed to rely on detailed facts about Hindi morphology and prosody. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{pstructure.pdf} \caption{A representative example of the linguistic representations used by \citeauthor{tyson_prosodic_2009} \citeyearpar{tyson_prosodic_2009}. Proceeding from top to bottom, a prosodic word (PrWd) consists of feet, syllables (which have weights), and syllable templates.} \label{fig:pstructure} \end{figure} Systems of the second class make use of linguistically richer representations of words. 
Typical of this class is the system of \citet{tyson_prosodic_2009}, which analyzes each word into a hierarchical phonological representation (see figure \ref{fig:pstructure}). These same representations had been used in linguistic analyses: \citet{pandey}, for instance, as noted by \citet{tyson_prosodic_2009}, ``claimed that schwas in Hindi cannot appear between a strong and weak rhyme\footnote{The {\it rhyme} in Hindi (not pictured in \cref{fig:pstructure}) is the part of the syllable that begins with the vowel and includes any consonants that come after the vowel. Its weight is determined by vowel length and whether any consonants appear in it.} within a prosodic foot.'' Systems using prosodic representations perform fairly well, with \citeposs{tyson_prosodic_2009} system achieving performance ranging from 86\% to 94\%, but prosody proved not to be a silver bullet; \citet{tyson_prosodic_2009} remark, ``it appears that schwa deletion is a phenomenon governed by not only prosodic information but by the observance of the phonotactics of consonant clusters.'' There are other approaches to subsets of the schwa-deletion problem. One is the diachronic analysis applied by \citet{choudhury}, which achieved 99.80\% word-level accuracy on native Sanskrit-derived terms. Machine learning has not been applied to schwa deletion in Hindi prior to our work. \citet{johny_brahmic_2018} used neural networks to model schwa deletion in Bengali (which is not a binary classification problem as in Hindi) and achieved great advances in accuracy. We employ a similar approach to Hindi, but go further by applying gradient-boosting decision trees to the problem, which are more easily interpreted in a linguistic format. Similar research has been undertaken in other Indo-Aryan languages that undergo schwa deletion, albeit to a lesser extent than Hindi. \citet{wasala-06}, for example, proposed a rigorous rule-based G2P system for Sinhala.
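To make the first, rule-based class concrete, the deletion rule quoted above can be written as a small pattern matcher. The following is our own illustrative sketch, not code from any of the cited systems; the phone inventory and the single-pass, simultaneous application of the rule are simplifying assumptions:

```python
# Sketch of one SPE-style rule from the first class of systems:
# "delete a schwa with a vowel and two consonants to its left,
# and a consonant and a vowel to its right" (V C C _ C V).
# The phone inventory is a simplified assumption for illustration.
VOWELS = {"@", "a:", "i:", "u:", "e", "o"}

def apply_vcc_cv_rule(phones):
    """Delete every schwa ("@") occurring in the context V C C _ C V."""
    out = []
    for i, p in enumerate(phones):
        left_ok = (i >= 3
                   and phones[i - 3] in VOWELS
                   and phones[i - 2] not in VOWELS
                   and phones[i - 1] not in VOWELS)
        right_ok = (i + 2 < len(phones)
                    and phones[i + 1] not in VOWELS
                    and phones[i + 2] in VOWELS)
        if p == "@" and left_ok and right_ok:
            continue  # schwa deleted
        out.append(p)
    return out

# The example from above: <dZ@Ng@li:> -> [dZ@Ngli:]
print(apply_vcc_cv_rule(["dZ", "@", "N", "g", "@", "l", "i:"]))
```

A full system of this class would chain many such ordered rules and, as noted above, additionally consult phonotactic information such as the set of allowable consonant clusters.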
\section{Methodology} We frame schwa deletion as a binary classification problem: orthographic schwas are either fully retained or fully deleted when spoken. Previous work has shown that even with rich linguistic representations of words, it is difficult to discover categorical rules that can predict schwa deletion. This led us to approach the problem with machine learning, which we felt would stand a better chance of attaining high performance. We obtained training data from digitized dictionaries hosted by the University of Chicago \href{https://dsalsrv04.uchicago.edu/dictionaries/}{Digital Dictionaries of South Asia} project. The Hindi data, comprising the original Devanagari orthography and the phonemic transcription, was parsed out of \citet{mcgregor} and \citet{bahri} and transcribed into an ASCII format. The Punjabi data was similarly processed from \citet{singh}. \Cref{table:entry-example} gives an example entry from the \citeauthor{mcgregor} Hindi dataset. To find all instances of schwa retention and schwa deletion, we force-aligned orthographic and phonemic representations of each dictionary entry using a linear-time algorithm. In cases where force-alignment failed due to idiosyncrasies in the source data (typos, OCR errors, etc.), we discarded the entire word. We provide statistics about our datasets in \cref{table:datasets}. We primarily used the dataset from \citeauthor{mcgregor} in training our Hindi models due to its comprehensiveness and high quality. \begin{table}[t!] \small \begin{center} \begin{tabular}{| r | l |} \hline \textbf{Devanagari} & {\dn a\1kwAhV} \\ \textbf{Orthographic} & \texttt{a \textasciitilde{} k a rr aa h a tt a} \\ \textbf{Phonemic} & \texttt{a \textasciitilde{} k\,\,\,\,\, rr aa h a tt} \\ \hline \end{tabular} \end{center} \caption{\label{table:entry-example} An example entry from the Hindi training dataset. } \end{table} \begin{table}[t!]
\small \begin{center} \begin{tabular}{|r|rrr|} \hline \textbf{Hindi Dict.} & \textbf{Entries} & \textbf{Schwas} & \textbf{Deletion Rate} \\ \hline McGregor & 34,952 & 36,183 & 52.94\% \\ Bahri & 9,769 & 14,082 & 49.41\% \\ Google & 847 & 1,098 & 56.28\% \\ \hline\hline \textbf{Punjabi Dict.} & \textbf{Entries} & \textbf{Schwas} & \textbf{Deletion Rate} \\\hline Singh & 28,324 & 34,576 & 52.25\% \\ \hline \end{tabular} \end{center} \caption{\label{table:datasets} Statistics about the datasets used. The deletion rate is the percentage of schwas that are deleted in their phonemic representation. The Google dataset, taken from \citet{johny_brahmic_2018}, was not considered in our final results due to its small size and over-representation of proper nouns.} \end{table} Each schwa instance was an input in our training set. The output was a boolean value indicating whether the schwa was retained. Our input features were a one-hot encoding of a variable window of phones to the left ($c_{-n}, \dots, c_{-1}$) and right ($c_{+1}, \dots, c_{+m}$) of the schwa instance ($c_0$) under consideration. The length of the window on either side was treated as a hyperparameter and tuned. We also tested whether including phonological features (for vowels: height, backness, roundedness, and length; for consonants: voice, aspiration, and place of articulation) of the adjacent graphemes affected the accuracy of the model. We trained three models on each dataset: logistic regression from scikit-learn, MLPClassifier (multilayer perceptron neural network) from scikit-learn, and XGBClassifier (gradient-boosting decision trees) from XGBoost. We varied the size of the window of adjacent phonemes and trained with and without phonological feature data. \section{Results} \begin{table}[t!]
\small \begin{center} \begin{tabular}{|r|llll|l|} \hline \textbf{} & \multicolumn{1}{c}{\textbf{Model}} & \multicolumn{1}{c}{\textbf{A}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c|}{\textbf{R}} & \multicolumn{1}{c|}{\textbf{Word A}} \\ \hline Hindi & XGBoost & 98.00\% & 98.04\% & 97.60\% & 97.78\% \\ & Neural & 97.83\% & 97.86\% & 97.42\% & 97.62\% \\ & Logistic & 97.19\% & 97.19\% & 96.70\% & 96.86\% \\ & \citeauthor{wiktionary} & 94.18\% & 92.89\% & 94.29\% & 94.18\% \\ \hline Punjabi & XGBoost & 94.66\% & 92.79\% & 95.90\% & 94.18\% \\ & Neural & 94.66\% & 93.25\% & 95.47\% & 94.07\% \\ & Logistic & 93.77\% & 91.73\% & 95.04\% & 93.14\% \\ \hline \end{tabular} \end{center} \caption{\label{table:results} Results for our models on the \citeauthor{mcgregor} and \citeauthor{singh} datasets: Per-schwa accuracy, precision, and recall, as well as word-level accuracy (all schwas in the word must be correctly classified).} \end{table} \Cref{table:results} tabulates the performances of our various models. We obtained a maximum of 98.00\% accuracy for all schwa instances in our test set from the McGregor dataset with gradient-boosted decision trees from XGBoost. We used a window of 5 phonemes to the left and right of the schwa instance, phonological features, 200 estimators, and a maximum tree depth of 11. Any model with at least 200 estimators and a depth of at least 5 obtains a comparable accuracy, but this gradually degrades with increasing estimators due to overfitting. Without phonological feature data, the model consistently achieves a slightly lower accuracy of 97.93\%. Logistic regression with the same features achieved 97.19\% accuracy. An MLP classifier with a single hidden layer of 250 neurons and a learning rate of $10^{-4}$ achieved 97.83\% accuracy. On the Singh dataset for Punjabi, the same XGBoost model (except without phonological features) achieved 94.66\% accuracy. 
This shows the extensibility of our system to other Indo-Aryan languages that undergo schwa deletion. We were unable to obtain evaluation datasets or code from previous work (\citealt{narasimhan_schwa-deletion_2004}, \citealt{tyson_prosodic_2009}) for a direct comparison of our system with previous ones.\footnote{We were able to obtain code from \citet{roy_2017} but were unable to run it on our machines.} However, we were able to port and test the Hindi transliteration code written in Lua utilized by \citet{wiktionary}, an online freely-editable dictionary operated by the Wikimedia Foundation, the parent of Wikipedia. That system obtains 94.94\% word-level accuracy on the \citeauthor{mcgregor} dataset, which we outperform consistently. \section{Discussion} Our system achieved higher performance than any other. The schwa instances which our model did not correctly predict tended to fall into two classes: borrowings from Persian, Arabic, or European languages, or compounds of native or Sanskrit-borrowed morphemes. Of the 150 Hindi words from our test set from \citeauthor{mcgregor} that our best model incorrectly predicted schwa deletion for, we sampled 20 instances and tabulated their source languages. 10 were native Indo-Aryan terms descended through the direct ancestors of Hindi, 4 were learned Sanskrit borrowings, 5 were Perso-Arabic borrowings, and 1 was a Dravidian borrowing. 9 were composed of multiple morphemes. Borrowings are overrepresented relative to the baseline rate for Hindi; in one frequency list, only 8 of the 1,000 top words in Hindi were of Perso-Arabic origin (\citealt{ghatage}). Notably, some of the Perso-Arabic borrowings that the model failed on actually reflected colloquial pronunciation; e.g.~{\dn amn} \ortho{\textipa{@m@n@}} is \textipa{[@mn]} in \citeauthor{mcgregor} yet our model predicts \textipa{[@m@n]} which is standard in most speech. 
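As an aside, the force-alignment step described in the Methodology section, which produced the retained/deleted labels analyzed here, can be sketched in pure Python. The greedy single-pass strategy below is our own assumption about how such a linear-time alignment can work, not the paper's actual implementation:

```python
def label_schwas(orth, phon, schwa="a"):
    """Align an orthographic phone sequence with its phonemic
    transcription, assuming the only allowed edit is schwa deletion.
    Returns one boolean per orthographic schwa (True = retained),
    or None if the entry cannot be aligned (such words were discarded)."""
    labels, j = [], 0
    for p in orth:
        if j < len(phon) and phon[j] == p:
            if p == schwa:
                labels.append(True)   # schwa retained in pronunciation
            j += 1
        elif p == schwa:
            labels.append(False)      # schwa deleted in pronunciation
        else:
            return None               # mismatch, e.g. a typo or OCR error
    return labels if j == len(phon) else None

# The dictionary entry shown in the Methodology section:
orth = "a ~ k a rr aa h a tt a".split()
phon = "a ~ k rr aa h a tt".split()
print(label_schwas(orth, phon))  # [True, False, True, False]
```

On real data, source-side noise and ambiguous alignments make this harder, which is why unalignable entries were dropped from the training set.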
We qualitatively analyzed our system to investigate what kind of linguistic representations it seemed to be learning. To do this, we inspected several decision trees generated in our model, and found that our system was learning both prosodic and phonetic patterns. Some trees very clearly encoded phonotactic information. One tree we examined had a subtree that could be paraphrased like so, where $c_n$ indicates the phone $n$ characters away from the schwa being considered: ``If $c_{+1}$ is beyond the end of the word, and $c_{-2}$ is not beyond the beginning of the word, and $c_{-2}$ is a \oipa{t}, then if $c_{-1}$ is a \oipa{j}, then penalize deleting this schwa;\footnote{{\it Penalize deleting} and not {\it delete}, because this tree is only contributing towards the final decision, along with all the other trees.} otherwise if $c_{-1}$ is not a \oipa{j}, prefer deleting this schwa.'' Put another way, this subtree penalizes deleting a schwa if it comes at the end of a word, the preceding two characters are exactly \oipa{tj}, and the word extends beyond the preceding two characters. This is just the kind of phonetic rule that systems like \citet{narasimhan_schwa-deletion_2004} were using. The extent to which our system encodes prosodic information was less clear. Our features were phonetic, not prosodic, but some prosodic information can be somewhat captured in terms of phonetics. 
Take, for instance, this subtree that we found in our model, paraphrasing as before: ``If $c_{-3}$ is beyond the beginning of the word, and $c_{-2}$ is \oipa{a:}, then if $c_{+2}$ is \oipa{@}, prefer deletion; otherwise, if $c_{+2}$ is not \oipa{@}, penalize deletion.'' Consider this rule as it would apply to the first schwa in the Hindi word {\dn aAmdnF} \ortho{\textipa{a:m@d@ni:}}: \vspace{0.4em} \begin{center} \begin{tabular}{ccccccccc} -3 & -2 & -1 & \textbf{0} & 1 & 2 & 3 & 4 & 5 \\ & \textipa{a:} & \textipa{m} & \textbf{\textipa{@}} & \textipa{d} & \textipa{@} & \textipa{n} & \textipa{i:} \end{tabular} \end{center} \vspace{0.4em} \noindent The rule decides that deleting the first schwa should be penalized, and it decided this by using criteria that entail that the preceding rhyme is heavy and the following rhyme is light.\footnote{Actually, this is not exactly true, since if the following syllable had any consonants in the rhyme, it would become heavy, even if there were a schwa present. But this is an error that could be corrected by other decision trees.} Obviously, though, this same rule would not work for other heavy and light syllables: if any of the vowels had been different, or at different offsets, a non-deletion rather than a deletion would have been preferred, which is not what it ought to do if it is emulating the prosodic rule. It is expected that our model is only able to capture ungeneralized, low-level patterns like this, since it lacks the symbolic vocabulary to capture elegant linguistic generalizations, and it is perhaps surprising that our system is able to achieve the performance it does even with this limitation. In future work, it would be interesting to give our system more directly prosodic representations, like the moraic weights of the surrounding syllables and syllabic stress. Another limitation of our system is that it assumes all schwas are phonologically alike, which may not be the case.
While most schwas are at all times either pronounced or deleted, there are less determinate cases where a schwa might or might not be deleted according to sociolinguistic and other factors. \Citet[p.~xi]{mcgregor} calls these ``weakened schwas'', describing them as ``weakened by Hindi speakers in many phonetic contexts, and dropped in others'' and orthographically indicating them with a breve. For example, {\dn s(y} is transcribed \textit{satyă}. Our best model correctly classified 80.4\% of the weakened schwas present in our test set taken from \mbox{McGregor}. Improving our performance for this class of schwas may require us to treat them differently from other schwas. Further research is needed on the nature of weakened schwas. \section{Conclusion} We have presented the first statistical schwa deletion classifier for Hindi that achieves state-of-the-art performance. Our system requires no hard-coded phonological rules, instead relying solely on pairs of orthographic and phonetic forms for Hindi words at training time. Furthermore, this research presents the first schwa-deletion model for Punjabi, and has contributed several freely-accessible scripts for scraping Hindi and Punjabi pronunciation data from online sources. \normalfont \bibliographystyle{acl_natbib}
\section{Introduction} Charged droplets are often encountered in nature, for example, electrified cloud droplets, as well as in artificial technologies such as combustion of fuel droplets, spray painting, crop spraying, and inkjet printing. Lord Rayleigh (1882) first derived a threshold value at which a charged drop exhibits instability \cite{rayleigh1882}. The suggested mechanism was the overcoming of the surface-tension force of the liquid droplet by the repulsive electrostatic force. The first evidence of the disintegration of a liquid issuing from an electrified capillary in a sufficiently high external electric field was reported by \citet{zeleny17}, while the first charged droplet breakup in a strong electric field was investigated by \citet{macky31}. Further, the work of \citet{macky31} was supplemented by \citet{taylor64} using a suitable hydrodynamic theory. In most practical situations the dynamics of the breakup of a charged droplet is very fast. Hence, for a detailed understanding of the droplet breakup characteristics, the droplets need to be suspended in space. The first systematic study of Rayleigh fission of a suspended drop was carried out by \citet{doyle64}, where the droplet was levitated in a Millikan oil drop experiment \cite{millikan1935} and was observed to eject 1-10 smaller daughter droplets from a parent drop. Using a similar device, \citet{abbas67} obtained similar results but for a wider range of droplet radii (30-200 $\mu$m). In the Millikan oil drop setup, a continuous adjustment of the DC suspension voltage was required against the change in mass and charge density. \citet{duft03} reported the undisputed images of Rayleigh breakup of a charged droplet levitated in an ideal Paul trap. The importance of their work was to produce the first of its kind images of systematically induced Rayleigh breakup in a quadrupolarly levitated charged drop.
The time-lapsed images in their work exhibit symmetric breakup of a charged drop, where each image corresponds to an independent experiment. This was considered a breakthrough in the field due to the correct identification of the onset of drop breakup in the experiments using a light scattering technique (see ref.~\citet{duft02}). The signal from the light scattering experiments was used to trigger a flash lamp and a CCD camera to capture the Rayleigh breakup events. Their methodology involved triggering the camera at a certain time delay after a threshold scattering signal, thereby allowing them to get the time-lapsed images in different experiments, wherein various time delays were set. A remarkable sequence of breakup events (captured one image per experiment) was reported. However, the success of their experiments also lay in the fact that their experiments had great accuracy and reproducibility in imparting an exact charge to exactly same sized drops, used in different experiments with almost the same evaporation rate. Although highly reproducible and accurate, their experiments were done on different droplets and can still be subject to experimental errors. High-speed imaging of the sequence of droplet breakup events is therefore desirable for an unequivocal demonstration of the Rayleigh instability and is presented here. Moreover, the experimental evidence of the Rayleigh breakup of a levitated charged droplet, reported by \citet{duft03}, indicates that the droplet breakup occurs symmetrically via ejection of jets from the two poles of the droplet. In their experiments the symmetry in the droplet breakup was most likely a consequence of perfect levitation of the charged droplet exactly at the center of the quadrupole trap, which was achieved by using an additional DC bias voltage, to balance the gravitational force acting on the droplet, in addition to an AC potential. However, in typical electrospray experiments (see ref.
\citet{gomez1994}) a droplet was observed to eject a single jet at one of the poles of a droplet, showing asymmetric breakup in the presence of gravity. In view of this, very recently, \citet{singh2019subcritical} have levitated a charged droplet using an AC quadrupole field without any additional DC bias voltage, such that the weight of the drop gives a natural asymmetry to the system by levitating the droplet slightly away from the center of the trap. In this work \cite{singh2019subcritical}, a typical experiment consists of electrospraying a positively charged droplet (in the dripping mode) into a quadrupole trap that consists of two endcap electrodes, which are shorted and separated by 20 $mm$, and a ring electrode. An AC voltage of 4.5-11 $kV_{pp}$ and 100-500 $Hz$ frequency was applied between the end cap and the ring electrodes. A typical charged droplet, levitated off-center in a quadrupole trap, takes several minutes to evaporate and build the Rayleigh charge before it undergoes breakup. The events were manually recorded using a high-speed camera at a speed of around 150-200k frames per second (fps) for around 2-4 seconds. The video was played back to capture droplet center of mass oscillations, shape deformations, as well as the asymmetric breakup of the droplet. It was also reported that a droplet suspended in an AC quadrupole field, without a DC bias, exhibits several phenomena in a typical high-speed video: the droplet undergoes simultaneous center of mass motion and associated deformation of an otherwise undeformed spherical droplet, and then undergoes an asymmetric breakup, predominantly in the upward direction. The highlight of the present work is the observance of these different stages in the entire process of the breakup of a charged drop in a single high-speed video.
In this paper, we have further advanced our work reported in \citet{singh2019subcritical} by addressing the effect of fluid properties such as the conductivity of the liquid droplet, as well as the effect of the applied field and unbalanced gravity, on the droplet breakup characteristics. Towards this, a first videographic and quantitative evidence of the effect of the applied field, the size of the droplets and the conductivity of the droplet is reported, and the observations are qualitatively compared with calculations from axisymmetric boundary integral simulations. The experiments and numerical predictions are found to be in fair agreement, and the mechanism for the breakup is then elucidated. \section{Description of Experimental setup} In the present work, a positively charged droplet was levitated in a modified Paul trap, the details of which are described elsewhere (see ref.~\citet{singh2018surface}). The trap consists of two end cap electrodes and a ring electrode. The trap parameters, namely $z_0$ (=10 $mm$), the distance between the centre of the ring and the centre of the end cap electrode, and $r_0$ ($=10$ $mm$), the distance between the centre of the ring and the inner periphery of the ring electrode, were significantly larger than those in the literature \cite{achtzehn05}. This allows enough space to carry out several operations simultaneously, such as illuminating the drop, introducing highly charged droplets generated by electrosprays, and recording the drop deformation followed by Rayleigh breakup using high speed videography (by a Phantom V12 camera) at 100-130k fps and a stereo zoom microscope (Nikon). To observe the Rayleigh breakup phenomenon, charged droplets were generated by electro-spraying ethylene glycol and ethanol solution (50\% v/v) and were levitated in an ED balance.
The viscosity ($\mu_d$) of the droplet, measured using an Ostwald viscometer, was $\sim$0.006 N\,s/m$^2$ and the surface tension ($\gamma$), measured using the pendant drop (DIGIDROP, model DS) method as well as the spinning drop (Dataphysics, SVT 20) method, was $\sim$30 mN/m. To increase the conductivity of the droplet an appropriate amount of NaCl was added to the solution. The droplet dynamics is governed by the non-dimensional Mathieu equation, \begin{equation} x_i''(\tau)+c x_i'(\tau)-a_z x_i(\tau) \cos(\tau)+\frac{g}{\omega ^2}=0 \label{eq:r_z} \end{equation} where $a_z$=$\frac{2 \text{Q} \Lambda_0 (\tau)}{\frac{\pi}{6}D_d^3 \rho_d \omega ^2}$, $c$=$\frac{3 \pi D_d \text{$\mu_a$} }{\frac{\pi}{6}D_d^3 \rho_d \omega}$, $\tau(=\omega t)$ is the non-dimensional time, $\omega=2 \pi f$, and $f$ is the applied frequency. $\Lambda_0$ is the intensity of the quadrupole field, determined by fitting the $\rho$- and $z$-directional potential distribution data obtained from COMSOL Multiphysics to the equation of an ideal quadrupole potential, $\phi=\Lambda_0 (z^2-\rho^2/2)$, using the multilinear regression method. $\rho_d$ is the density of the drop, $\mu_a$ is the viscosity of the air, $Q$ is the charge on the drop, $D_d$ is the droplet diameter, and $x_i$=$z$,$\rho$. Since the droplet was levitated in the presence of a pure AC field, without any DC field to balance gravity, the last term in equation \ref{eq:r_z} is added to account for gravity. The droplet shifts its position from the centre of the trap due to unbalanced gravity. Its centre of mass, therefore, oscillates about a new equilibrium position, $z_{shift}$. The time-averaged equilibrium position at a distance $z_{shift}$ is obtained from a simple force balance in the z-direction and is given as $z_{shift}=\frac{2(1+c^2)g_z}{a_z^2\omega^2}$ where $a_z$ $\sim$ $2a_r$, and $a_r$ is the stability parameter in the $r$-direction.
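As a consistency check on equation \ref{eq:r_z} and the quoted expression for $z_{shift}$, the damped, gravity-biased Mathieu equation can be integrated numerically. The sketch below (plain Python with a fourth-order Runge-Kutta scheme; the nondimensional parameter values are illustrative choices in the stable regime, not measured ones) shows the centre of mass settling into micromotion about a downward-shifted equilibrium close to $-2(1+c^2)(g/\omega^2)/a_z^2$:

```python
import math

def mathieu_rhs(tau, x, v, a_z, c, G):
    # Eq. (1) rearranged: x'' = a_z*x*cos(tau) - c*x' - G, with G = g/omega^2
    return a_z * x * math.cos(tau) - c * v - G

def integrate(a_z=0.2, c=0.05, G=0.01, dt=0.01, t_end=1000.0):
    """RK4 integration of the damped, gravity-biased Mathieu equation,
    started from rest at the trap centre.  Returns the trajectory x(tau)."""
    t, x, v = 0.0, 0.0, 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, mathieu_rhs(t, x, v, a_z, c, G)
        k2x, k2v = v + 0.5*dt*k1v, mathieu_rhs(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v, a_z, c, G)
        k3x, k3v = v + 0.5*dt*k2v, mathieu_rhs(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v, a_z, c, G)
        k4x, k4v = v + dt*k3v, mathieu_rhs(t + dt, x + dt*k3x, v + dt*k3v, a_z, c, G)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        xs.append(x)
    return xs

xs = integrate()
z_mean = sum(xs[-20000:]) / 20000            # time average after transients damp out
z_pred = -2 * (1 + 0.05**2) * 0.01 / 0.2**2  # -2(1+c^2)G/a_z^2, cf. z_shift above
print(z_mean, z_pred)
```

The time-averaged position comes out negative (below the trap centre), in line with the off-centre levitation described in the text; the pseudo-potential estimate is only accurate to leading order in $a_z$, so the two printed numbers agree approximately rather than exactly.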
The value of the charge on the droplet was obtained either by matching the experimental center of mass oscillations with the numerical solution of equation \ref{eq:r_z} (see ref.~\citet{singh2019subcritical}) or by the cut-off frequency method (see ref.~\citet{singh2017levitation}). In most of the experiments, the droplet was levitated at a critical value of the stability parameter ($a_{z_c}\sim0.45$ at $\sim0.01$). After levitating the droplet, the droplet breakup was recorded through high-speed imaging. \section{Droplet breakup characteristics} In a typical Rayleigh breakup process a levitated charged droplet is observed to undergo three consecutive events as follows: surface oscillations and deformation, breakup, and relaxation back to a spherical shape. The detailed drop breakup characteristics are shown in one of our recent publications \citep{singh2019subcritical}. A few important observations are re-emphasized here for the completeness of the discussion. (I) It is observed that a droplet undergoes several shape oscillations in the sphere-prolate-sphere-oblate (SPSO) mode, where the magnitude of prolate deformation increases progressively and diverges at the onset of the breakup. (II) The surface of the droplet oscillates with the driving frequency and simultaneously builds a charge density due to evaporation. The amplitude of surface oscillations depends on the applied voltage and increases progressively with increase in charge density. A very large amplitude of oscillation is observed prior to the breakup, which serves as an indicator of the onset of Rayleigh breakup. (III) Although a time-varying AC quadrupole field is used to levitate the droplet, in 95\% of the cases, droplet breakup is observed in the upward direction. (IV) At the maximum deformation, the droplet ejects a jet from the conical tip of the drop at the north-pole and the jet further disintegrates into several smaller progeny droplets.
(V) The droplet is observed to eject a significant amount of charge (25-40\%) but negligible mass (\textless3\%) in the process. The charge loss in the breakup process is measured by the cut-off frequency method and by changing the applied frequency to measure the transient displacement of a levitated drop inside the trap (see ref.~\citet{singh2017levitation}). Due to the ejection of charge, the destabilizing electric stresses reduce and the drop relaxes back to a spherical shape through a series of shape oscillations. Since the gravity associated with the mass of the drop is not balanced in the present experimental setup, the droplet levitates slightly away from the center of the quadrupole trap in the vertically downward direction. Due to the shift from the centre of the trap, the droplet is acted upon by a uniform electric field ($E$) whose strength depends on the intensity of the applied quadrupole potential ($\Lambda_0$) and the $z_{shift}$ from the centre of the trap, and is defined as $E=4 \Lambda_0 z_{shift}$. Thus the presence of $z_{shift}$, and thereby a uniform electric field, modifies the electric stress distribution on the drop surface, which leads to an asymmetric breakup of the drop. These observations were recently reported by us in a systematic study of Rayleigh breakup of a charged droplet levitated in a quadrupole trap. In this work, we address several additional issues which are relevant to understanding the breakup physics as well as possible applications. The specific issues addressed in this work are: \begin{enumerate} \item The mechanism of droplet breakup, and the evolution of the droplet shape characterized by AR \& AD during the breakup. \item Effect of quadrupole field strength on the deformation (AR \& AD), gravitational $z_{shift}$, cone angle and jet diameter. \item Effect of conductivity of the droplet on the jet diameter.
\end{enumerate} \section{Numerical simulations} Further, to validate the experimental observations and to understand the evolution of the electrical stresses responsible for the breakup, numerical simulations are carried out using an axisymmetric boundary element method (BEM) for a charged viscous drop in the presence of a positive or negative DC quadrupole potential. Since the droplet conductivity is high (\textgreater100 $\mu$$S/cm$) in most of the experiments and the surrounding medium (air) is assumed to be a perfect dielectric, the droplet is considered a perfect conductor. Thus, to understand the mechanism of droplet breakup, simulations are carried out for a charged drop modelled as a perfect conductor. The details of the mathematical formulation and numerical implementation can be found elsewhere \cite{gawande2017}. The flow equations are solved in the Stokes flow limit while the Laplace equation is solved for the electric potential. The integral equation for the electric potential is modified by substituting the applied electric potential in terms of $z_{shift}$ and is given by, \begin{equation} \phi(r,z)=\sqrt{Ca_\Lambda} [(z-z_{shift})^2-0.5 r^2 ] \label{simu_cal} \end{equation} Here, $Ca_{\Lambda}=\frac{D_{d}^{3} \epsilon}{8 \gamma}\Lambda^2$ is the electric capillary number, where $\epsilon$ is the permittivity of air and $\gamma$ is the surface tension of the drop. The BEM simulations are carried out for all experimental parameters and the results are compared with the experimental observations of the aspect ratio (AR) and asymmetric deformation (AD) at the onset of the breakup. Here, AR is defined as the ratio of the major axis ($L$) to the minor axis ($B$), while AD is defined as the ratio of the distance of the north-pole ($L_1$) to the distance of the south-pole ($L_2$) from the centroid of the drop. The values of $L$, $B$, $L_1$ and $L_2$ are obtained by tracking the boundary of the drop using the image processing toolbox of MATLAB and ImageJ software.
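The AR and AD measurements defined above can be reproduced from a digitized drop contour in a few lines. The sketch below is our own illustration, not the actual MATLAB/ImageJ pipeline, and uses a hypothetical prolate contour elongated towards the north pole:

```python
import math

def shape_metrics(points):
    """AR = L/B and AD = L1/L2 from 2-D contour points (r, z) of a drop,
    with the symmetry axis along z.  Illustrative re-implementation of the
    measurement described in the text; units are arbitrary."""
    cz = sum(p[1] for p in points) / len(points)   # contour centroid (z)
    zs = [p[1] - cz for p in points]
    rs = [p[0] for p in points]
    L1 = max(zs)              # centroid -> north pole
    L2 = -min(zs)             # centroid -> south pole
    L = L1 + L2               # major axis
    B = max(rs) - min(rs)     # minor axis
    return L / B, L1 / L2

# hypothetical prolate contour, stretched by a factor 1.4 on the north side
pts = [(math.cos(t), 1.4*math.sin(t) if math.sin(t) > 0 else math.sin(t))
       for t in (2*math.pi*k/400 for k in range(400))]
ar, ad = shape_metrics(pts)
print(ar, ad)   # AR = 1.2; AD > 1 (north pole farther from the centroid)
```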
In the experiments, it is observed that the droplet breakup occurs in a time which is one quarter of the period of the applied AC cycle. Thus, the BEM simulations are carried out in the presence of either a positive or negative DC quadrupole potential, where $\Lambda_0$ is used as the intensity of the applied electric field. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{Stress_plot} \caption{Surface charge density as a function of arclength indicates that asymmetry develops as the drop approaches breakup. The black dash-dot line indicates the charge density distribution at t=-2.20 ms and the blue dashed line is at t=0 (at the onset of breakup). The inset shows the corresponding normal electric stresses obtained from the BEM simulations.} \label{fig:Stress_plot} \end{center} \end{figure} The simulations indicate that, in the presence of a positive DC field with $z_{shift}$, the droplet attains a stable oblate shape at equilibrium. However, in the negative DC field with $z_{shift}$, the droplet forms a conical tip at the south-pole, indicating downward breakup. These results contradict the observation of upward breakup in most of the experiments. Thus the experimental results are re-analyzed and it is observed that, initially, before the instability sets in, the droplet undergoes shape oscillations. During these oscillations, the interfacial surface charge density of the droplet increases due to evaporation. As the droplet achieves a certain charge density, called the critical charge density, the droplet deforms continuously and eventually breaks. In the case of an upward breakup, the shape of the droplet at this critical point is observed to have a high $P_2$ perturbation with a significant positive $P_3$ perturbation. Here $P_2$ \& $P_3$ correspond to the $2^{nd}$ \& $3^{rd}$ Legendre modes. The coefficients of the different Legendre modes are obtained via non-linear least-square fitting of the critical shape of the droplet using Mathematica software.
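The extraction of Legendre-mode coefficients from a shape can be sketched with a simple orthogonality projection, $c_n = \frac{2n+1}{2}\int_{-1}^{1} r(u) P_n(u)\, du$ with $u=\cos\theta$. This is a simplified linear version of the fitting step, shown on a synthetic shape with assumed amplitudes (0.12 and 0.02), purely for illustration:

```python
def p2(u): return (3*u*u - 1) / 2
def p3(u): return (5*u**3 - 3*u) / 2

def legendre_coeff(r, pn, n, m=20000):
    # c_n = (2n+1)/2 * integral_{-1}^{1} r(u) P_n(u) du  (trapezoidal rule)
    h = 2.0 / m
    s = 0.5 * (r(-1.0)*pn(-1.0) + r(1.0)*pn(1.0))
    for k in range(1, m):
        u = -1.0 + k*h
        s += r(u)*pn(u)
    return (2*n + 1) / 2 * s * h

# synthetic perturbed-sphere shape r(u) = 1 + 0.12*P2 + 0.02*P3
r = lambda u: 1.0 + 0.12*p2(u) + 0.02*p3(u)
print(legendre_coeff(r, p2, 2))   # recovers ~0.12
print(legendre_coeff(r, p3, 3))   # recovers ~0.02
```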
Thus, in the BEM simulations, when an experimentally obtained $P_2$ perturbation is provided to the initial shape of the droplet, it breaks in the upward direction when a positive DC field simultaneously acts on the droplet with $z_{shift}$. The origin of this field is the off-centered position (i.e.\ $z_{shift}$) due to the weight of the droplet. The droplet would break almost symmetrically were it not for the positive electric field, which acts on the $P_2$ shape (and thereby charge) perturbation to produce a $P_3$ perturbation, inducing asymmetry and the upward breakup of the droplet. The asymmetry in the breakup is attributed to the nonlinear interaction of $P_1$, due to the uniform field, and $P_2$, due to the charge on the drop, which collectively gives a finite $P_3$ perturbation. Thus an upward or downward breakup depends on the magnitude and sign of the $P_3$ perturbation. Further, to validate the experimental observations, the shape obtained from the shape analysis of the drop at the critical point is given as an initial shape in terms of $P_2$ and $P_3$ perturbations in the simulations. The nondimensional parameters used in the simulations are as follows: $Ca_\Lambda$=0.00052, $z_{shift}$=4.5, $P_2$=0.12, $P_3$=0.02 for a droplet with $D_d$=210 $\mu$m. The values of $P_2$ and $P_3$ are non-dimensionalized by $D_d/2$ to give a perturbed sphere of unit radius with volume $\frac{4\pi}{3}$. For the above parameters, the critical charge on the droplet required for the breakup is determined by increasing the total surface charge on the droplet in step changes of 0.1\% of the Rayleigh charge. It is found that the droplet breaks at 98.7\% (i.e.\ $7.9\pi$) of the Rayleigh charge for the given parameters. This indicates that the breakup process is Coulombic and not induced by the applied field. The critical charge also indicates that the breakup is sub-Rayleigh, hinting at a subcritical instability of finite-amplitude prolate perturbations.
This validates the theoretical prediction of the Rayleigh breakup process, which shows that the charged drop exhibits a transcritical bifurcation at the critical charge of $8\pi$ \cite{das15}. The time evolution of the charge density and the corresponding normal electric stress acting on the surface of the drop is shown in figure \ref{fig:Stress_plot}. It can be observed that for a drop perturbed with experimentally obtained values of $P_2$ and $P_3$, the initial charge density (indicated by the black dash-dot line) and the normal electric stress acting on the drop surface are slightly asymmetric, where the stress is higher at the north-pole due to the considerable positive $P_3$ perturbation. As time progresses the charge density at the north-pole increases rapidly as compared to that at the south pole, as shown in figure \ref{fig:Stress_plot} (denoted by the blue dashed line). This induces an asymmetric stress distribution on the drop surface with higher stresses acting at the north-pole than at the south-pole. The charge density at the north-pole of the drop continues to increase and diverge with time as the drop approaches breakup. Thus near breakup the north-pole experiences higher electric stresses than the south-pole. To balance these high electric stresses the tip curvature at the north-pole also diverges with time and the drop breaks asymmetrically in the upward direction. This typical behaviour is attributed to the asymmetry introduced by the finite amplitude of positive $P_2$ and $P_3$ perturbations, which grow with time. However, it is observed that even with no initial $P_3$ perturbation (only a $P_2$ perturbation), the droplet breaks in the upward direction for the given parameters. This indicates that the asymmetric Rayleigh breakup observed in the experiments is due to the up-down asymmetric redistribution of the surface charge on the droplet on account of the uniform electric field acting from the south pole to the north pole of the droplet.
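The nonlinear $P_1$-$P_2$ coupling invoked above can be checked with a one-line Legendre projection: the product $P_1 P_2$ decomposes as $\frac{2}{5}P_1 + \frac{3}{5}P_3$, so a uniform-field ($P_1$) forcing acting on a $P_2$ deformation necessarily feeds the $P_3$ mode. A small numerical verification of this identity (our own check, independent of the BEM code):

```python
def p1(x): return x
def p2(x): return (3*x*x - 1) / 2
def p3(x): return (5*x**3 - 3*x) / 2

def inner(f, g, n=20000):
    # trapezoidal rule for integral_{-1}^{1} f(x) g(x) dx
    h = 2.0 / n
    s = 0.5 * (f(-1.0)*g(-1.0) + f(1.0)*g(1.0))
    for k in range(1, n):
        x = -1.0 + k*h
        s += f(x)*g(x)
    return s * h

# project P1*P2 onto P3: coefficient = (2*3+1)/2 * <P1*P2, P3> = 3/5
coeff_p3 = 3.5 * inner(lambda x: p1(x)*p2(x), p3)
# projection onto P2 vanishes (odd integrand): no P2 fed back at this order
coeff_p2 = 2.5 * inner(lambda x: p1(x)*p2(x), p2)
print(coeff_p3, coeff_p2)   # ~0.6 and ~0.0
```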
\section{Effect of various parameters} \subsection{Effect of $Ca_\Lambda$ on deformation} The magnitude of the applied voltage plays an important role not only in levitating the droplet but also in the extent of the deformation prior to the breakup, and thereby the subsequent droplet breakup mechanism. For example, when a bigger sized droplet is levitated at a lower voltage (4 $kV_{pp}$) and higher frequency, it is observed that the droplet is displaced to a greater distance from the center of the trap and exhibits very large center of mass (CM) oscillations. In this case the CM stability of the droplet is relatively poor due to the lower inwardly directed time-averaged quadrupolar force ($\sim \frac{a_z^2 \bar{z}_{shift}}{2(1+c^2)}$, where $a_z$ is the stability parameter in the z-direction, $c$ is the drag coefficient and $\bar{z}_{shift}$ is the average downward distance from the centre of the trap; for details see \citet{singh2018theoretical}). Since the charged droplet is levitated in the quadrupole field, it is pertinent to examine the effect of the applied voltage on the droplet breakup. Towards this, various values of $Ca_{\Lambda}$ are realized experimentally by changing the applied voltage used for levitating different sized charged droplets. The data discussed next is for a droplet with high electrical conductivity ($\sigma$ $\sim$ 55-1000 $\mu$S/cm). The effect of $Ca_{\Lambda}$ on the droplet breakup is characterized by measuring the values of AR and AD at the onset of the breakup, as shown in figure \ref{fig:AD_vs_CaL_exp_simu} and figure \ref{fig:AR_vs_CaL_exp_simu}. In the experiments, it is found that the values of $z_{shift}$ are different in different experiments due to the different sizes of the drops at a fixed applied voltage. Moreover, the $z_{shift}$ could not be determined experimentally.
Therefore, for comparison with the experimental data, numerical simulations are carried out for three different values of $z_{shift}$=2, 6 and 10 (non-dimensionalised by the radius of the droplet). These values are chosen in accordance with the average values observed in the experiments. The experimental values of $z_{shift}$ are estimated using the expression obtained from the $z$-directional force balance for the measured parameters. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{AD_vs_CaL_exp_simu} \caption{Effect of $Ca_{\Lambda}$ on the asymmetric deformation of the charged droplet breakup. The inset plot is the experimental observation showing the effect of $Ca_{\Lambda}$ on the AD. } \label{fig:AD_vs_CaL_exp_simu} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{AR_vs_CaL_exp_simu} \caption{Effect of $Ca_{\Lambda}$ on the aspect ratio of the charged droplet at breakup. The inset plot is the experimental observation indicating the effect of $Ca_{\Lambda}$ on the AR. } \label{fig:AR_vs_CaL_exp_simu} \end{center} \end{figure} The effect of $z_{shift}$ in the BEM calculations is included by shifting the center of the trap in the positive z-direction, which modifies the equation of the applied quadrupole field, as shown in equation \ref{simu_cal}. The initial perturbation in the shape is given as a critical $P_2$ perturbation for a nondimensional charge fixed at the Rayleigh limit ($8\pi$). The critical $P_2$ perturbation in the numerical simulations is obtained by increasing its magnitude in steps of 0.01 until the droplet breaks for the given charge and $Ca_\Lambda$. In the present experimental setup $Ca_{\Lambda}$ varies from $\sim$ 4$\times$ $10^{-5}$ to $\sim$ 6$\times$ $10^{-4}$. In the numerical simulations, the values of $Ca_{\Lambda}$ are explored over three decades to extend the scope of the study to other systems where $Ca_{\Lambda}$ can be of the order of $10^{-3}$.
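The electric capillary number defined above is straightforward to evaluate, and the helper below makes its scalings explicit: $Ca_\Lambda$ grows quadratically with the field intensity $\Lambda$ and cubically with the drop diameter, which is why both the applied voltage and the drop size were varied to span the $Ca_\Lambda$ range. The parameter values are placeholders within the experimentally quoted ranges, not a reproduction of any specific run:

```python
EPS0 = 8.854e-12  # permittivity of air ~ vacuum permittivity, F/m

def ca_lambda(d, gamma, lam):
    """Electric capillary number Ca_Lambda = D_d^3 * eps * Lambda^2 / (8*gamma)."""
    return d**3 * EPS0 * lam**2 / (8 * gamma)

# placeholder values: D_d in m, gamma in N/m, Lambda in V/m^2
d, gamma, lam = 210e-6, 0.03, 2e7
base = ca_lambda(d, gamma, lam)
print(base)
# doubling Lambda quadruples Ca_Lambda; doubling D_d multiplies it by 8
print(ca_lambda(d, gamma, 2*lam) / base)   # 4.0
print(ca_lambda(2*d, gamma, lam) / base)   # 8.0
```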
Figure \ref{fig:AD_vs_CaL_exp_simu} shows the effect of $Ca_\Lambda$ on the AD at the onset of breakup. In the inset of figure \ref{fig:AD_vs_CaL_exp_simu}, it can be observed that as the value of $Ca_\Lambda$ is increased the asymmetric deformation (AD) in the breakup increases. The error bar in $Ca_\Lambda$ is the standard deviation (SD) in the data due to uncertainty in the measurement of the droplet size. The SD of the $Ca_\Lambda$, AR and AD accounts for the error due to blurriness, camera inclination and image thresholding. The experimental results indicate that the AD values vary with $Ca_{\Lambda}$ with an exponent of 0.08. This indicates that for the given range of $Ca_{\Lambda}$, the asymmetric deformation is weakly dependent on the applied field. The simulation and experimental results (figure \ref{fig:AD_vs_CaL_exp_simu}) indicate that the BEM simulations are in qualitative agreement with the experimental observations. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{Shift_vs_caL_Images} \caption{Normal stress distribution on the surface of the charged drop showing the effect of $Ca_{\Lambda}$ for different values of the non-dimensional $z_{shift}$, i.e.\ 2, 6 and 10.} \label{fig:Shift_vs_caL} \end{center} \end{figure} The extent of deformation is characterized by AR, and the experimental observations (figure \ref{fig:AR_vs_CaL_exp_simu} inset) show that the AR is inversely proportional, although weakly, to $Ca_{\Lambda}$. The corresponding BEM simulations support this experimental observation. Further, the electric stress distributions on the drop surface obtained from the BEM simulations are shown in figure \ref{fig:Shift_vs_caL}. It can be observed that at a higher value of $Ca_{\Lambda}$, with a constant $z_{shift}$, the droplet experiences higher normal electric stresses at the north-pole than at the south-pole.
This is because the position of the droplet ($z_{shift}$) is below the center of the trap and the south endcap is considered to be at a positive potential. Thus the positively charged drop experiences a high electrostatic repulsion from the positive endcap at the south-pole. This causes a higher accumulation of charges at the north pole of the drop, inducing higher electrical stress as compared to the south pole of the drop, and introduces asymmetry in the drop deformation and breakup. When the value of $z_{shift}$ is decreased from 10 to 2 at constant $Ca_{\Lambda}$, the electrostatic repulsion from the southern end-cap is reduced and fewer charges migrate towards the north-pole of the drop. Thus the difference in the normal electric stresses acting on the two poles of the drop is reduced, thereby reducing the asymmetry in the droplet breakup process. This indicates that higher magnitudes of $z_{shift}$ and $Ca_{\Lambda}$ induce more asymmetry (AD) in the shape deformation of the drop, as seen in figure \ref{fig:AD_vs_CaL_exp_simu}. The reduction in AR with $Ca_{\Lambda}$ can be attributed to the oblate deformation tendency of the quadrupole field, thereby reducing the prolate deformation measured as AR\textgreater1. Moreover, a large value of $Ca_{\Lambda}$ leads to an early breakup, thereby reducing the AR. It should be noted that the breakup of a charged droplet can also be induced by a strong uniform field \cite{grimm2005dynamics}. The field-influenced breakup studied in this manuscript deals with electric fields of the order of 0.09 $kV/cm$, unlike field-induced breakup ($\sim$ 20 $kV/cm$) \cite{fontelos2008evolution, grimm2005dynamics}. \subsection{Effect of $Ca_\Lambda$ on the cone angle} The symmetric breakup of a droplet can be observed in the present setup by levitating smaller sized ($D_d$=100-170 $\mu$m) droplets at a lower voltage (4 to 7 $kV_{pp}$), thereby reducing the value of $Ca_{\Lambda}$ and $z_{shift}$.
\citet{duft03} levitated a charged ethylene glycol droplet (at $80^o$C and 1 atm) at the center of a classical Paul trap and observed symmetric breakup of a charged drop. They report a semi-cone angle of up to $30-33^o$ formed during the symmetric breakup. It is interesting to explore the effect of $Ca_{\Lambda}$ on the cone angle ($2\theta$) at a constant $z_{shift}$. Towards this, the cone angles for different values of $Ca_{\Lambda}$ are plotted in figure \ref{fig:Cone_angle_vs_CaL}, and it is observed that for the experimental range of $Ca_{\Lambda}$ the value of $\theta$ remains almost constant, around $\sim 30^o\pm2^o$. The corresponding BEM simulations are carried out for an extended range of $Ca_{\Lambda}$ and it is observed that while the cone angle is nearly constant at low $Ca_{\Lambda}$, it increases with $Ca_{\Lambda}$ at higher values of $Ca_{\Lambda}$. This can be attributed to the field-induced effects on the breakup process. This indicates that for the $Ca_{\Lambda}$ explored in the experiments, the Rayleigh breakup process admits a nearly constant cone angle. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{TH_vs_CaL_exp_simu1} \caption{Change in cone angle with $Ca_{\Lambda}$. The solid vertical and horizontal lines indicate the standard deviation in the experimental data.} \label{fig:Cone_angle_vs_CaL} \end{center} \end{figure} \subsection{Effect of $Ca_{\Lambda}$ on jet diameter ($J_d$)} When a droplet is levitated at a high value of $Ca_{\Lambda}$ at constant $z_{shift}$, it experiences high electric stresses at the north pole and issues a jet at the north pole with higher asymmetry. It is therefore pertinent to observe how the jet diameter $J_d$ changes with $Ca_{\Lambda}$. To examine the effect of $Ca_{\Lambda}$, a droplet of an ethylene glycol and ethanol mixture is levitated. Since no NaCl is added, the conductivity ($\sigma$) of the liquid drop is low (0.8-1 $\mu$S/cm).
The lower-conductivity droplet is levitated at a fixed applied voltage, i.e.\ 11 $kV_{pp}$, and the frequency is adjusted to keep the droplet at critical stability. It is observed that, for a given set of parameters, the droplet ejects a thick jet at the north pole whose thickness varies with the jet length. The $J_d$ is measured using the ImageJ software, and the point of measurement is chosen as the intersection of two tangents drawn at the endpoint of the cone and the start of the jet. The different values of $Ca_{\Lambda}$ are obtained by levitating drops of different sizes. The change in $J_d$ with $Ca_{\Lambda}$ is shown in figure \ref{fig:Jd_vs_caL}; $J_d$ increases with an increase in $Ca_{\Lambda}$ and is observed to scale as $Ca_{\Lambda}^{0.41}$. Although no direct measurement of the progeny size could be made, owing to the limited resolution of the images, $J_d$ can be considered to be related to the progeny size. Since $J_d$ changes with $Ca_{\Lambda}$, the sizes of the progenies are also expected to change with $Ca_{\Lambda}$. This observation contradicts the prediction of \citet{collins13}, whose simulations of a drop undergoing breakup in a uniform field indicate that the deformation and subsequent progeny formation are independent of the size of the mother droplet. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{Jd_vs_caL} \caption{Effect of $Ca_\Lambda$ on the jet diameter. Parameters: $D_p$ varies from 100 to 260$\mu$m, $\Lambda_0$ varies from 2 to 3.6$\times$$10^7$V/$\text{m}^2$. } \label{fig:Jd_vs_caL} \end{center} \end{figure} \subsection{Effect of conductivity on $J_d$ } A continuous jet with measurable thickness can be observed for low-conductivity droplets. To the best of our knowledge, no experimental study is available that explores the effect of conductivity on the breakup characteristics of a levitated charged droplet. 
It should be noted that \citet{collins13} have looked at the effect of conductivity on the progeny size generated by the stretching of a liquid pool by a strong electric field, and not due to an inherent surface charge on the droplet. Hence in the present experimental study, the effect of conductivity is explored in terms of the non-dimensional Saville number ($Sa$), which is the ratio of the charge relaxation time scale $t_e$(=$\frac{\epsilon_i}{\sigma_i}$, where $\epsilon_i$ is the permittivity of the drop) to the hydrodynamic timescale $t_h(=\mu_i D_d/2\gamma)$, where $\mu_i$ and $\gamma$ are the viscosity and surface tension of the drop. The definition of $Sa$ implies that the charge relaxation time is lower when the conductivity of the solution is higher. In this limit the redistribution of charges is effectively instantaneous over the droplet deformation time scale, resulting in higher normal electrical stresses than tangential electric stresses on the drop surface. In the present work, the effect of conductivity is explored over two decades of change in the value of $Sa$, as shown in figure \ref{fig:Jd_vs_sa}. It can be observed that $J_d$ varies as $Sa^{0.12}$. Thus at a lower value of $Sa$, i.e.\ higher conductivity, the droplet ejects thinner jets than droplets with lower conductivity. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{Jd_vs_sa} \caption{Change in jet diameter with Sa. Parameters: $\sigma$ varies from 1 to 250 $\mu$S/cm, $\mu_d$=0.006Pa-S, $\gamma$=0.03mN/m, $D_p$$\sim$290$\mu$m, $\Lambda_0$=2$\times$$10^7$V/$\text{m}^2$} \label{fig:Jd_vs_sa} \end{center} \end{figure} In our experimental setup, with moderate values of $Ca_{\Lambda}$ and $z_{shift}$, it is observed that when the conductivity ($\sigma$) of the liquid drop is increased to a very high value, the jet cannot be detected within the optical resolution of the microscope-camera assembly. 
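As a rough numerical illustration of the definition above, the sketch below evaluates $Sa$ for conductivities spanning the experimental range. The relative permittivity $\epsilon_r\approx 41$ (typical of ethylene glycol) is an assumed value, not a measured one, and $\mu_i$, $D_d$, $\gamma$ are taken of the order quoted in the figure captions.

```python
# Evaluate the Saville number Sa = t_e / t_h for representative values.
# eps_r ~ 41 (ethylene glycol) is an assumed, not measured, quantity;
# mu, D and gamma are of the order quoted in the figure captions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def saville_number(sigma, eps_r=41.0, mu=0.006, D=290e-6, gamma=0.03):
    """Sa = t_e / t_h with t_e = eps_i / sigma_i and t_h = mu_i * D_d / (2*gamma)."""
    t_e = eps_r * EPS0 / sigma     # charge relaxation time, s
    t_h = mu * D / (2.0 * gamma)   # hydrodynamic time scale, s
    return t_e / t_h

for sigma_uS_cm in (1.0, 20.0, 250.0):   # conductivities from the text, uS/cm
    sigma = sigma_uS_cm * 1e-4           # 1 uS/cm = 1e-4 S/m
    print(f"sigma = {sigma_uS_cm:6.1f} uS/cm -> Sa = {saville_number(sigma):.2e}")
```

Consistent with the discussion, $Sa$ falls by two orders of magnitude as $\sigma$ increases over the experimental range, i.e.\ charge relaxation becomes increasingly fast compared with the drop deformation.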
At a very high value of conductivity, the drop forms a sharp tip at the north pole, from where it ejects considerable charge but negligible mass. The ejection of charge is confirmed by the direct observation of the immediate relaxation of the drop shape after jet ejection. The reason for the formation of a sharp cone at high conductivity is the reduction in the relative magnitude of the tangential stress as compared to the normal stresses. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{deformation1} \caption{Deformation, breakup and relaxation sequence of droplet in the breakup process for two different conductivities. a) 1.8 $\mu$S/cm b) 1000 $\mu$S/cm. Parameters: $\Lambda_0$=2$\times$$10^7$V/$\text{m}^2$, $\mu_d$=0.006Pa-S, $\gamma$=0.03mN/m, $D_p$$\sim$200$\mu$m. } \label{fig:deformation} \end{center} \end{figure} The detailed mechanism of droplet deformation, breakup and relaxation for two conductivities (figure \ref{fig:deformation}(a), $\sigma=20\mu S/cm$, and figure \ref{fig:deformation}(b), $\sigma=1000\mu S/cm$) is shown in figure \ref{fig:deformation}. The numbers below the frames indicate the evolution time in $\mu$s. It can be observed from figure \ref{fig:deformation} that at time t=0, marked as the breakup time, figure \ref{fig:deformation}(a) shows a visible jet. On the contrary, in the case of the high-conductivity droplet (figure \ref{fig:deformation}(b)) a sharp conical tip is formed and no jet can be observed. A similar qualitative observation of the dependence of the jet diameter on $Ca_\Lambda$ and the conductivity of the droplet is presented as a phase diagram in figure \ref{fig:phase diagram}. From figure \ref{fig:phase diagram} it can be observed that, as the conductivity of the drop increases, the jet thickness reduces, and at high conductivity, $\sigma \sim 1000$ $\mu$S/cm, the droplet breaks with the formation of a sharp tip and no visible jet. 
On the other hand, when the quadrupole potential is increased at lower conductivity ($\sigma \sim 20$ $\mu$S/cm), the extent of asymmetry increases with an increase in $\phi_0$, in agreement with BEM simulations (figure \ref{fig:Shift_vs_caL}). At a higher conductivity, an increase in $\phi_0$ only leads to a more oblate (and thereby fatter) contribution to the droplet shape at breakup. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{phase_diagram} \caption{Phase diagram of the effect of conductivity on the droplet breakup characteristics. The blue dots correspond to the actual coordinates in the $\sigma$-$\phi_0$ space.} \label{fig:phase diagram} \end{center} \end{figure} \section{Conclusions} The effect of applied voltage on the droplet breakup characteristics is reported for non-zero gravity, and it is observed that while the AD values increase with an increase in $Ca_\Lambda$, the AR decreases. The comparison of the average cone angle for symmetric and asymmetric breakup with experimentally obtained values shows that the cone angle remains constant at about $30^\circ$ at low $Ca_\Lambda$, while it increases with $Ca_{\Lambda}$ above a certain value. Thus at moderate values of the trap parameter it can be conjectured that the breakup is indeed Rayleigh breakup, which is only influenced by the external field, whereas at higher values of $Ca_{\Lambda}$ the instability could be induced by the external field itself. In the experiments, though, the instability appears to be of the former category. The change of $J_d$ with $Ca_\Lambda$ and $\sigma$ of the droplet is reported here for the first time. The magnitude of $J_d$ is observed to be higher in the case of low conductivity and high $Ca_\Lambda$, which results in the formation of larger progeny droplets. The experimental observations are validated with the perfect conductor model using BEM simulations. 
These results further the understanding of Rayleigh breakup of charged drops in quadrupole traps, and indicate that in actual experimental and technological setups, the progeny sizes, as well as charge ejection, could be significantly affected by the electrostatic conditions in these setups. \providecommand{\latin}[1]{#1} \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{21} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Rayleigh(1882)]{rayleigh1882} Rayleigh,~L. \emph{The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science} \textbf{1882}, \emph{14}, 184--186\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zeleny(1917)]{zeleny17} Zeleny,~J. \emph{Physical Review} \textbf{1917}, \emph{10}, 1\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Macky(1931)]{macky31} Macky,~W. \emph{Proceedings of the Royal Society of London. 
Series A, Mathematical and Physical Character} \textbf{1931}, 565--587\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Taylor(1964)]{taylor64} Taylor,~G. Disintegration of water drops in an electric field. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 1964; pp 383--397\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Doyle \latin{et~al.}(1964)Doyle, Moffett, and Vonnegut]{doyle64} Doyle,~A.; Moffett,~D.~R.; Vonnegut,~B. \emph{Journal of Colloid Science} \textbf{1964}, \emph{19}, 136--143\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Millikan(1935)]{millikan1935} Millikan,~R.~A. \emph{Electrons, Protons, Photons, Neutrons, and Cosmic Rays.}; 1935\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Abbas and Latham(1967)Abbas, and Latham]{abbas67} Abbas,~M.; Latham,~J. \emph{Journal of Fluid Mechanics} \textbf{1967}, \emph{30}, 663--670\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Duft \latin{et~al.}(2003)Duft, Achtzehn, M{\"u}ller, Huber, and Leisner]{duft03} Duft,~D.; Achtzehn,~T.; M{\"u}ller,~R.; Huber,~B.~A.; Leisner,~T. \emph{Nature} \textbf{2003}, \emph{421}, 128--128\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Duft \latin{et~al.}(2002)Duft, Lebius, Huber, Guet, and Leisner]{duft02} Duft,~D.; Lebius,~H.; Huber,~B.~A.; Guet,~C.; Leisner,~T. 
\emph{Physical Review Letters} \textbf{2002}, \emph{89}, 084503--084507\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Gomez and Tang(1994)Gomez, and Tang]{gomez1994} Gomez,~A.; Tang,~K. \emph{Physics of Fluids} \textbf{1994}, \emph{6}, 404--414\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singh \latin{et~al.}(2019)Singh, Gawande, Mayya, and Thaokar]{singh2019subcritical} Singh,~M.; Gawande,~N.; Mayya,~Y.; Thaokar,~R. \emph{arXiv preprint arXiv:1907.02294} \textbf{2019}, \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singh \latin{et~al.}(2018)Singh, Gawande, Mayya, and Thaokar]{singh2018surface} Singh,~M.; Gawande,~N.; Mayya,~Y.; Thaokar,~R. \emph{Physics of Fluids} \textbf{2018}, \emph{30}, 122105\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Achtzehn \latin{et~al.}(2005)Achtzehn, M{\"u}ller, Duft, and Leisner]{achtzehn05} Achtzehn,~T.; M{\"u}ller,~R.; Duft,~D.; Leisner,~T. \emph{The European Physical Journal D-Atomic, Molecular, Optical and Plasma Physics} \textbf{2005}, \emph{34}, 311--313\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singh \latin{et~al.}(2017)Singh, Mayya, Gaware, and Thaokar]{singh2017levitation} Singh,~M.; Mayya,~Y.; Gaware,~J.; Thaokar,~R.~M. 
\emph{Journal of Applied Physics} \textbf{2017}, \emph{121}, 054503\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Gawande \latin{et~al.}(2017)Gawande, Mayya, and Thaokar]{gawande2017} Gawande,~N.; Mayya,~Y.; Thaokar,~R. \emph{Physical Review Fluids} \textbf{2017}, \emph{2}, 113603\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Das \latin{et~al.}(2015)Das, Mayya, and Thaokar]{das15} Das,~S.; Mayya,~Y.; Thaokar,~R. \emph{EPL (Europhysics Letters)} \textbf{2015}, \emph{111}, 24006\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singh \latin{et~al.}(2018)Singh, Thaokar, Khan, and Mayya]{singh2018theoretical} Singh,~M.; Thaokar,~R.; Khan,~A.; Mayya,~Y. \emph{Physical Review E} \textbf{2018}, \emph{98}, 032202\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Grimm and Beauchamp(2005)Grimm, and Beauchamp]{grimm2005dynamics} Grimm,~R.~L.; Beauchamp,~J.~L. \emph{The Journal of Physical Chemistry B} \textbf{2005}, \emph{109}, 8244--8250\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Fontelos \latin{et~al.}(2008)Fontelos, Kindel{\'a}n, and Vantzos]{fontelos2008evolution} Fontelos,~M.~A.; Kindel{\'a}n,~U.; Vantzos,~O. 
\emph{Physics of Fluids} \textbf{2008}, \emph{20}, 092110\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Collins \latin{et~al.}(2013)Collins, Sambath, Harris, and Basaran]{collins13} Collins,~R.~T.; Sambath,~K.; Harris,~M.~T.; Basaran,~O.~A. \emph{Proceedings of the National Academy of Sciences} \textbf{2013}, \emph{110}, 4905--4910\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \end{mcitethebibliography} \end{document}
\section{Introduction}\label{sec_intro} Non-orthogonal multiple access (NOMA) is a key ingredient of recent multiple access techniques for the fifth generation (5G) mobile networks. By allocating several users to the same resource block, NOMA techniques realize high spectral efficiency and low latency even when a network is massively connected~\cite{NOMA}. Additionally, in future multiple access communications, \emph{overloaded} access is considered to be unavoidable because of spectral resource limitation. NOMA techniques are expected to deal with such an overloaded system, in which the number of active transmitters is larger than the signal dimension~\cite{SCMA}. Code division multiple access (CDMA)~\cite{Hara} is an orthogonal multiple access (OMA) system in which $n$ active users communicate with a base station (BS) simultaneously by spreading users' signals with their \emph{signature sequences} (or spreading sequences) of length $m$. Although orthogonality of signature sequences ensures reasonable multiuser detection performance if $m\ge n$, detection performance drops in overloaded cases ($m<n$). \emph{Sparsely spread CDMA} (SCDMA)~\cite{YT} is a promising NOMA technique based on CDMA. In SCDMA, data streams are modulated by randomly generated signature sequences which contain a small number of non-zero elements. The BS receives superimposed signals with additive noise and tries to detect the data streams from multiple users. Compared with conventional CDMA, sparse signature sequences in SCDMA allow low-complexity detection using a linear-time algorithm such as belief propagation (BP). Moreover, as a NOMA system, SCDMA potentially achieves reasonable detection performance even in overloaded cases. Recent studies on SCDMA have mainly focused on the design of detectors and signature sequences. As described above, BP is a detector suitable for the sparse structure of SCDMA~\cite{Guo}, which exhibits nearly optimal performance, as predicted theoretically~\cite{YT, Tse}. 
The computational complexity of the BP detector rapidly increases with the signature sparsity and the constellation size of the transmit signals. Since practical SCDMA systems use sufficiently large values of these parameters, the computational cost must be reduced for faster multiuser detection. Signature design is another crucial issue because detection performance depends on the superimposed signals spread by the signature sequences. In~\cite{Song}, a signature matrix family that improves BP detection performance is proposed. Recently,~\cite{Kim} and~\cite{Lin} proposed an alternative approach for related SCMA systems which designs signature sequences and a detector jointly by autoencoders. Although learned autoencoders provide signature sequences with reasonable performance, their high training cost is a drawback because they contain a large number of training parameters. In summary, a desirable detector and signature design should possess both high scalability for large systems and good adaptability to practical SCDMA systems with high signature sparsity, large signal constellations, and/or overloaded access. Rapid development of deep learning (DL) techniques has stimulated the design of wireless communication systems~\cite{ML}. Recently, \emph{deep unfolding} proposed by Gregor and LeCun~\cite{LISTA} has attracted great interest as another DL-based approach~\cite{DU}, in addition to the end-to-end approach~\cite{E2E}. In deep unfolding, the signal-flow graph of an existing iterative algorithm is expanded to a deep network architecture in which some parameters, such as step-size parameters, are embedded. These embedded parameters are treated as trainable parameters to tune the behavior of the algorithm. Learning the trainable parameters is accomplished by standard supervised learning techniques such as back propagation and stochastic gradient descent (SGD) if the original algorithm consists of differentiable processes. 
An advantage of deep unfolding is that the number of trainable parameters is much smaller than in conventional deep neural networks, which leads to a fast and stable training process and high scalability. Deep unfolding has been applied to various topics in signal processing and wireless communications: sparse signal recovery~\cite{LISTA,TISTA,TISTA2}, massive MIMO detection~\cite{He, TPG,TPG2}, signal detection for clipped OFDM systems~\cite{CTISTA}, and trainable decoders for LDPC codes~\cite{LDPC}. In this paper, we propose a trainable multiuser detector and signature design with high scalability and adaptability to overloaded SCDMA systems. To resolve the scalability issue of multiuser detection, we first introduce a novel SCDMA multiuser detector called the sparse trainable gradient projection (STPG) detector. The STPG detector is based on a projected gradient descent algorithm whose gradient can be computed efficiently. Combined with the deep unfolding technique, it yields a trainable detector with reasonable detection performance, high scalability, and adaptability to practical SCDMA systems. In addition, a scalable DL-based SCDMA signature design is proposed by learning a signature matrix and the STPG detector simultaneously. In the proposed method, the values of the non-zero elements in a signature matrix and the trainable parameters of the detector are jointly trained to improve detection performance based on an estimate from a temporal signature matrix and detector. Compared with existing DL-based approaches, the proposed method can be trained in huge systems. The outline of the paper is as follows. Section~\ref{sec_2} describes the system model and the conventional BP detector. In Section~\ref{sec_3}, we propose the STPG detector for SCDMA multiuser detection and compare its detection performance in large systems with a BP detector. Section~\ref{sec_4} describes signature design based on the STPG detector and demonstrates the resulting performance improvement. 
Section~\ref{sec_5} is a summary of this paper. \section{System model and BP detector}\label{sec_2} We first introduce the SCDMA system model and a conventional BP detector. \subsection{SCDMA system model} We consider an uplink SCDMA system where $n$ active users with a single antenna try to transmit their messages to a BS by using signature sequences of length $m$. The ratio $\beta := n/m$ is called the overload factor. From the definition, $\beta>1$ indicates that the system is overloaded, i.e., $m<n$. We assume that the ratio $\beta$ is a constant. Each user has a BPSK-modulated signal $x_i\in\{+1,-1\}$ ($i=1,\dots,n$) as transmit data. In addition, users have their own signature sequences $\bm{a}_i=(a_{1,i},\dots,a_{m,i})^T\in\mathbb{R}^m$. Then, the BS receives superimposed signals given by \begin{equation} \bm{y} = \sum_{i=1}^n \bm a_{i}x_i + \bm{w}, \label{eq_ch1} \end{equation} where $\bm w$ is a noise vector and $\bm y\in\mathbb{R}^m$ is the received signal at the BS. Letting $\bm A:=(\bm a_1, \dots, \bm a_n)\in \mathbb{R}^{m\times n}$ be a signature matrix, (\ref{eq_ch1}) has the alternative form \begin{equation} \bm{y} = \bm A \bm x + \bm{w}, \label{eq_ch2} \end{equation} where $\bm x := (x_1,\dots, x_n)^T$. In conventional CDMA systems, we assume orthogonality of signature sequences, $\bm{a}_i^T\bm{a}_j=0$ for any $i\neq j$. Instead, SCDMA systems require sparsity of the signature sequences, so that the number of non-zero elements in each signature sequence is constant with respect to $n$ and $m$. We consider the following typical SCDMA system. First, we assume an AWGN channel. Second, each row of the signature matrix $\bm A$ is assumed to have $k$ non-zero entries, which is called the signature sparsity in this paper. We also assume that the signature matrix is normalized such that $\|\bm A\|_F^2=km$, where $\|\cdot\|_F$ denotes the Frobenius norm. 
Under these assumptions, the signal-to-noise ratio (SNR) of the system, defined by $n_0:=\mathsf{E}_{\bm x}\|\bm A\bm x\|_2^2/\mathsf{E}_{\bm w}\|\bm w\|_2^2$, is calculated as $n_0=k/\sigma^2$, where $\sigma^2$ is the noise variance per symbol. Equivalently, the SCDMA model for a given SNR $n_0$ is defined by \begin{equation} \bm{y} = \sqrt{\frac{n_0}{k}}\bm A \bm x + \bm{w}_0, \label{eq_ch3} \end{equation} where $\bm{w}_0$ is an i.i.d. Gaussian random vector with zero mean and unit variance. We consider a multiuser detector and signature design for this system model. \subsection{BP detector}\label{sec_BP} We briefly describe the BP detector, a standard multiuser detector for SCDMA~\cite{Guo}. The recursive equations of the BP are constructed on a factor graph whose nodes are the variables $\bm x$ and $\bm y$ and whose edges are set according to the non-zero elements of $\bm{A}$. The message $U_{j\rightarrow i}(x)$ ($x\in\{+1,-1\}$) is a message from a chip node $y_j$ to a symbol node $x_i$, and $V_{i\rightarrow j}(x)$ is a message from a symbol node $x_i$ to a chip node $y_j$. Then, the BP recursive formula is given by \begin{align} V_{i\rightarrow j}(x) &= Z_{i\rightarrow j}^{-1}\prod_{l\in\partial i\backslash j} U_{l\rightarrow i}(x),\\ U_{j\rightarrow i}(x) &= Z_{j\rightarrow i}^{-1}\sum_{\bm{x}_{\partial j\backslash i}} \left(\prod_{k\in \partial j\backslash i} V_{k\rightarrow j}(x_k)\right)\nonumber\\ \times& \exp\left\{-\frac{1}{2}\left[y_j-\sqrt{\frac{n_0}{k}}\left(a_{j,i}x+\sum_{k\in \partial j\backslash i}a_{j,k}x_k\right)\right]^2 \right\}, \label{eq_BP} \end{align} where $Z_{i\rightarrow j}$ and $Z_{j\rightarrow i}$ are normalization constants, and $\partial i:=\{j\in \{1,\dots,m\}| a_{j,i}\neq0\}$ and $\partial j:=\{i\in \{1,\dots,n\}| a_{j,i}\neq0\}$ are neighboring node sets on the factor graph. 
After $T_{\mathrm{BP}}$ iterations, the probability that the $i$th transmit signal takes the value $x$ is estimated by \begin{equation} V_{i}(x) = Z_{i}^{-1}\prod_{j\in\partial i} U_{j\rightarrow i}(x), \end{equation} where $Z_{i}$ is a normalization constant. Finally, the $i$th transmit signal is detected as $x_i=1$ if $V_{i}(1)\ge V_{i}(-1)$, and $x_i=-1$ otherwise. The computational cost of the BP detector is $O(k^22^{k-1}n)$ because (\ref{eq_BP}) contains a sum over all possible combinations of the $k-1$ transmit signals $\bm{x}_{\partial j\backslash i}$. Similarly, if the detector is applied to a system with a higher-order modulation of size $|\mathcal M|$, the computational cost of the BP detector is $O(k^2|\mathcal{M}|^{k-1}n)$. This rapid increase with respect to $k$ and $|\mathcal{M}|$ is a drawback of the BP detector. \section{STPG detector}\label{sec_3} In this section, we propose a trainable multiuser detector for SCDMA using the idea of deep unfolding. \begin{table*}[t] \begin{center} \caption{Number of operations in STPG and BP detectors, and values for various $k$ when $n=m=1200$ ($\beta=1$).} \begin{tabular}{|c|c|c|c|c|c|} \hline & Number of operations & $k=2$ & $k=4$ & $k=6$ & Big-O notation \\ \hline \hline STPG additions & $(2\beta^{-1}k + \beta^{-1}+1)n$ & $7.20\times 10^3$ & $1.20\times 10^4$ & $1.68\times 10^4$ & $O(kn)$ \\ BP additions& $(k2^k + 2)\beta^{-1}kn$ & $2.40\times 10^4$ & $3.16\times 10^5$ & $2.77\times 10^6$ & $O(k^22^kn)$ \\ \hline STPG multiplications & $( \beta^{-1}k+ \beta^{-1} +2)n+1$ & $6.00\times 10^3$ & $8.40\times 10^3$ & $1.08\times 10^4$ & $O(kn)$ \\ BP multiplications & $\{(2 k + 3)2^k + 2 \beta^{-1}k\}\beta^{-1}kn$ & $7.68\times 10^4$ & $8.83\times 10^5$ & $6.99\times 10^6$ & $O(k^22^kn)$ \\ \hline \end{tabular} \label{tab_1} \end{center} \end{table*} \subsection{Structure of STPG detector} Deep unfolding is an efficient DL-based approach borrowing the structure of an iterative algorithm. 
Here, we employ a gradient descent-based detector instead of a message-passing algorithm such as BP. The maximum-likelihood (ML) estimator for the SCDMA system (\ref{eq_ch3}) is formulated as \begin{equation} \bm{\hat x} = \mathrm{argmin}_{\bm{x}\in\{+1,-1\}^n} \left\|\bm{y}-\sqrt{\frac{n_0}{k}}\bm A\bm x\right\|_2^2. \label{eq_ml} \end{equation} This ML estimator is formally similar to that for MIMO signal detection~\cite{TPG} and is computationally intractable for large $n$. Alternatively, a projected gradient descent (PG) algorithm can be used to solve (\ref{eq_ml}) approximately by replacing the constraint $\bm{x}\in\{+1,-1\}^n$ with the relaxed one $\bm{x}\in[-1,1]^n$. The recursive formula of the PG is given by \begin{align} \bm r_t &= \bm s_t + \gamma \sqrt{\frac{n_0}{k}}\bm A^T\left(\bm y- \sqrt{\frac{n_0}{k}}\bm A \bm s_t\right), \label{eq_pg_1}\\ \bm s_{t+1} &= \tanh(\alpha \bm r_t), \label{eq_pg_2} \end{align} where $\bm{s}_0=\bm 0$ is the initial vector. The first equation is called the gradient step because $\bm r_t$ is updated by a gradient descent method with step size $\gamma>0$. The second equation is called the projection step, with an element-wise soft projection function $\tanh(\cdot)$. The softness parameter $\alpha$ controls the shape of the soft projection function. In the large-$\alpha$ limit, the function becomes a step function, i.e., the original projection onto $\{+1,-1\}$. The detection performance of the PG is expected to depend on the choice of the parameters $\gamma$ and $\alpha$; as a disadvantage of the plain PG, these parameter values must be searched carefully to obtain reasonable performance. To introduce the STPG detector, we replace the parameter $\gamma$ with $\gamma_t$~\footnote{The parameter is implemented as {$\gamma_t^2$} to avoid a negative step size. } depending on the iteration step $t$. 
The proposed STPG detector is thus defined by \begin{align} \bm r_t &= \bm s_t + \gamma_t \sqrt{\frac{n_0}{k}}\bm A^T\left(\bm y- \sqrt{\frac{n_0}{k}}\bm A \bm s_t\right), \label{eq_tpg_1}\\ \bm s_{t+1} &= \tanh(\alpha \bm r_t), \label{eq_tpg_2} \end{align} where $\{\gamma_t\}_{t=1}^T$ and $\alpha$ are regarded as trainable parameters. The architecture of the $t$th iteration of the STPG detector is shown in Fig.~\ref{fig_ar}. Note that, although the trainable parameter $\alpha$ could be replaced by $\alpha_t$, a single trainable parameter $\alpha$ is used here to reduce the number of trainable parameters. The total number of trainable parameters is $T+1$ in $T$ iterations, which is independent of $n$ and $m$. This leads to high scalability and stable convergence in the training process. It is also emphasized that the STPG detector uses $\bm A^T$ in the gradient step, whereas a similar MIMO detector called the TPG-detector uses the pseudo-inverse matrix $\bm U:=( \bm A^T\bm A)^{-1}\bm A^T$~\cite{TPG} or $\bm U_\eta:=(\bm I+\eta \bm A^T\bm A)^{-1}\bm A^T$ with a trainable parameter $\eta$~\cite{TPG2}. This change reduces the computational complexity of the detector. In particular, the sparse structure of the signature matrix $\bm A$ in SCDMA enables us to calculate all the matrix-vector products in $O(n)$ time. On the other hand, even though $\bm A$ is a sparse matrix, a matrix-vector product involving $\bm U$ or $\bm U_{\eta}$ takes $O(n^2)$ operations because these matrices are dense in general. The details of the computational cost are described in the next subsection. \begin{figure}[t] \centering \includegraphics[width=7.5cm]{STPG} \caption{Architecture of the STPG detector at the $t$th iteration.} \label{fig_ar} \end{figure} \subsection{Computational complexity}\label{sec_com} A crucial property of SCDMA is the low computational cost of multiuser detection. 
Here, we count the number of additions and multiplications of the STPG and BP detectors in each iteration step. For simplicity, we neglect the evaluation of nonlinear functions such as $\tanh(\cdot)$ in the STPG detector and $\exp(\cdot)$ in the BP detector. Table~\ref{tab_1} shows the number of operations per iteration as a function of $n$, $\beta$, and $k$. In addition, we show the values for $n=m=1200$ ($\beta=1$) and $k=2,4,6$ for comparison. Both detectors are linear-time algorithms with respect to $n$. In particular, the use of $\bm A^T$ in the gradient step helps the STPG detector to reduce its complexity. We also find that the STPG detector requires far fewer operations than the BP detector in terms of the signature sparsity $k$. In fact, the STPG detector performs $O(kn)$ additions/multiplications in each iteration, whereas the BP detector needs $O(k^22^kn)$ operations, as discussed in Sec.~\ref{sec_BP}. As shown in Fig.~\ref{fig_3}, the constant $k$ should be large enough to ensure reasonable detection performance, which results in a rapid increase of the BP computational cost. It is noteworthy that the gap in computational complexity in terms of $k$ widens further for higher-order modulations: for a constellation of size $|\mathcal M|$, the number of operations of the BP detector is $O(k^2|\mathcal{M}|^{k-1}n)$ while that of the STPG detector remains $O(kn)$. This is a strong point of the STPG detector for practical SCDMA systems. \subsection{Simulation settings}\label{sec_ex} In the following subsections, we compare the proposed STPG detector with the original PG and the BP detector in terms of multiuser detection performance. In the numerical simulations, we consider a massive SCDMA system with $n=1200$ active users. 
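Before turning to the simulation settings, the per-iteration operation counts in Table~\ref{tab_1} can be reproduced directly from the closed-form expressions in its second column; the small helper below is a sanity check of those formulas, not part of the detector implementation.

```python
def stpg_ops(n, beta, k):
    """Per-iteration additions and multiplications of the STPG detector,
    using the closed forms (2*k/beta + 1/beta + 1)*n and (k/beta + 1/beta + 2)*n + 1."""
    adds = (2 * k / beta + 1 / beta + 1) * n
    muls = (k / beta + 1 / beta + 2) * n + 1
    return adds, muls

def bp_ops(n, beta, k):
    """Per-iteration additions and multiplications of the BP detector,
    using (k*2^k + 2)*k*n/beta and ((2k + 3)*2^k + 2k/beta)*k*n/beta."""
    adds = (k * 2 ** k + 2) * k * n / beta
    muls = ((2 * k + 3) * 2 ** k + 2 * k / beta) * k * n / beta
    return adds, muls

for k in (2, 4, 6):  # the sparsities tabulated for n = m = 1200, beta = 1
    print(f"k={k}: STPG {stpg_ops(1200, 1.0, k)}, BP {bp_ops(1200, 1.0, k)}")
```

Evaluating these at $n=m=1200$ and $k=2,4,6$ recovers the tabulated values (e.g.\ $1.68\times 10^4$ STPG additions versus $2.77\times 10^6$ BP additions at $k=6$), making the $O(kn)$ versus $O(k^2 2^k n)$ gap concrete.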
A signature matrix $\bm A$ is randomly generated by an element-wise product $\bm A= \bm H\odot \bm W$ where $\bm H\in \{0,1\}^{m\times n}$ is a \emph{mask matrix} and $\bm W\in \mathbb{R}^{m\times n}$ is a weight matrix. In numerical simulations, each weight of $\bm W$ is uniformly chosen from $\{+1,-1\}$. The mask matrix $\bm H$ is also randomly generated by Gallager's construction~\cite{Gal} {so that its row and column weights are exactly equal to $k$ and $k'=km/n(\in\mathbb{N})$, respectively.} For the PG and STPG detectors, we set $T=30$ as the number of iterations. The STPG detector is implemented in PyTorch 1.2~\cite{PyTorch}. {Initial values of the trainable parameters are set to $\gamma_t=0.01$ ($t=1,\dots,T$) and $\alpha=2$.} In the training process of the STPG detector, we can use mini-batch training with backpropagation and SGD. In addition, the use of incremental training~\cite{TISTA2, TPG2} is crucial to avoid a vanishing-gradient problem and obtain reasonable results. In incremental training, we begin by learning the trainable parameters $\gamma_1,\alpha$ assuming that $T=1$. This is called the first generation of training. After the first generation is finished, we next train the parameters $\gamma_1,\gamma_2,\alpha$ as if $T=2$, using the trained values of $\gamma_1,\alpha$ as their initial values. This generation-by-generation training is repeated incrementally until $T$ reaches the desired value. In the following simulations, we use $100$ mini-batches of size $200$. We use the Adam optimizer~\cite{Adam} with learning rate $0.0005$. The training process of the detector is executed for each SNR. Multiuser detection performance is measured by the bit error rate (BER). Since the outputs $\bm{s}_{T}$ of the PG and STPG detectors are continuous values, the sign function $\mathrm{sign}(x)=1$ ($x\ge 0$) and $-1$ ($x<0$) is applied to the outputs. Thus, the detected signal is given by $\bm{\hat x}=\mathrm{sign}(\bm s_T)$.
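The final hard decision and the BER measurement just described amount to the following straightforward sketch; the tie-breaking $\mathrm{sign}(0)=1$ follows the convention stated above.

```python
import numpy as np

def hard_decision(s):
    # sign(x) = 1 for x >= 0 and -1 for x < 0, as defined in the text
    return np.where(s >= 0, 1.0, -1.0)

def bit_error_rate(x_hat, x):
    # fraction of mis-detected symbols over all active users
    return float(np.mean(x_hat != x))
```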
\subsection{Acceleration of convergence in STPG} \begin{figure}[t] \centering \includegraphics[width=8cm]{ICC2020-6} \caption{{BER of the PG and STPG detectors as a function of the number of iterations $T$ with different SNRs; $n=1200$, $m=1000$ ($\beta=1.2$), and $k=6$. Parameters of the PG are set to $\gamma=0.01$ and $\alpha=2$.}} \label{fig_2} \end{figure} We first compare the STPG detector to the original PG to demonstrate the advantages of learning parameters by deep unfolding. Figure~\ref{fig_2} shows the BER performance of both detectors with different SNRs. {In the original PG, we choose $\gamma=0.01$ and $\alpha=2$, corresponding to the initial values of the STPG detector}. We find that the STPG detector exhibits better performance than the PG. For example, when the SNR is 11dB, the BER of the STPG detector ($T=30$) is about $1.0\times 10^{-3}$ while that of the PG is about $5.1\times 10^{-2}$. In addition, when {SNR}$=$14dB, the STPG detector shows faster convergence to a fixed point than the PG. These results indicate that training a constant number of parameters in the PG leads to better detection performance and fast convergence to a fixed point. Detection performance improvement and convergence acceleration are crucial advantages of deep unfolding, as shown in other signal detection problems~\cite{TISTA2, TPG2}.
\subsection{Performance comparison to BP detector} \begin{figure}[t] \centering \includegraphics[width=8cm]{ICC2020-9} \caption{{BER performance of the STPG (circles) and BP detectors (triangles) for SCDMA with signature sparsity $k=2$ (solid line) and $6$ (dotted line); $n=m=1200$ ($\beta=1$).}} \label{fig_3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8cm]{ICC2020-10} \caption{{BER performance of the STPG (circles) and BP detectors (triangles) for SCDMA with overloaded factor $\beta=1$ (dotted line) and $1.2$ (solid line); $n=1200$ and $k=6$.}} \label{fig_5} \end{figure} Next, we compare the STPG detector to a conventional BP detector. Figure~\ref{fig_3} shows the multiuser detection performance of the {STPG ($T=30$) and BP ($T_{\mathrm{BP}}=30$)} detectors {with $n=1200$ active users and signature sequence length $m=1200$}. Since $n=m$ ($\beta=1$), reasonable detection performance is expected when proper signature sequences are used. In fact, the two detectors exhibit nearly the same performance when $k=6$. When $k=2$, however, the overall BER performance of both detectors degrades, and the STPG detector is inferior to the BP detector. This suggests that a sufficiently large $k$ is preferable for reliable multiuser detection, which leads to a rapid increase of the computational cost of the BP detector. {When $k=6$, the computational cost of the BP detector is more than a hundred times as high as that of the STPG detector, as shown in Tab.~\ref{tab_1}.} {Figure~\ref{fig_5} shows the multiuser detection performance with different overloaded factors $\beta$ when $n\!=\!1200$ and $k\!=\!6$.} In the overloaded case where $\beta\!=\!1.2$ ($m\!=\!1000$), the two detectors exhibit similar BER performance. Although the overloaded system suffers from about $1$dB performance degradation compared with the $\beta=1$ case, both algorithms successfully detect the transmitted signals in the high SNR regime, which is an advantage of SCDMA as a NOMA scheme.
{In overloaded systems, the computational cost of a detector is still crucial because the signature sparsity $k$ should be sufficiently large, as in the $\beta=1$ case.} In summary, the STPG detector shows detection performance similar to that of the BP detector even in the overloaded case. From the discussion in Sec.~\ref{sec_com}, we can conclude that the STPG detector has an advantage in computational cost for sufficiently large signature sparsity $k$. \section{Signature design with STPG detector}\label{sec_4} As described in Sec.~\ref{sec_3}, a trained STPG detector shows reasonable SCDMA multiuser detection performance with low computational complexity. Moreover, we can train a signature matrix $\bm A$ jointly with the STPG detector. In this section, we propose a new signature design obtained by learning a signature matrix and the STPG detector simultaneously. \subsection{Joint learning of signature matrix and STPG detector} The structure of deep unfolding enables us to train the weights of a signature matrix $\bm A$ by backpropagation and SGD. The signature design with the STPG detector is shown in Alg.~1. For simplicity, we train the weight matrix $\bm W$ of a signature matrix while keeping the mask matrix $\bm{H}$ fixed. The trainable parameters are then the signature matrix $\bm A= \bm H\odot \bm W$ in addition to $\{\gamma_t\}_{t=1}^T$ and $\alpha$ of the STPG detector. The update rule for these parameters consists of three steps: (i) calculating a temporal $\bm A$, (ii) generating training data, and (iii) updating the trainable parameters. In step (i), the signature matrix is modified to satisfy the sparsity and normalization conditions that might be broken by the previous parameter update. In line $4$ of Alg.~1, the {signature sparsity $k$ of $\bm{A}$} is recovered by taking the element-wise product of the mask matrix $\bm H$ with the $\bm A$ updated in the last training step. The normalization condition $\|\bm A\|_{F}=\sqrt{km}$ is satisfied after line $5$.
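Step (i), i.e. lines 4 and 5 of Alg.~1, can be sketched as follows; this is a NumPy stand-in for the corresponding PyTorch update and is illustrative only.

```python
import numpy as np

def mask_and_normalize(A, H, k, m):
    """Lines 4-5 of Alg. 1: re-impose the sparsity pattern of the mask H,
    then rescale A so that its Frobenius norm equals sqrt(k*m)."""
    A = A * H  # line 4: A := A (element-wise product) H, restoring sparsity
    A = (np.sqrt(k * m) / np.linalg.norm(A)) * A  # line 5: ||A||_F = sqrt(km)
    return A
```

Note that `np.linalg.norm` applied to a matrix computes the Frobenius norm by default, matching the condition $\|\bm A\|_F = \sqrt{km}$.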
In step (ii), a mini-batch for a parameter update is generated according to the system model~(\ref{eq_ch3}) with the temporal $\bm A$. Then, in step (iii), the trainable parameters, including $\bm A$, are updated to reduce the loss value calculated from the mini-batch by the STPG detector with $t$ iterations. This training step belongs to the $t$th generation of incremental training. Due to the sparse signature sequences and the architecture of the STPG detector, the effective number of trainable parameters is $km+T+1$ in total. This enables sufficiently fast joint learning. In fact, the training process is executed within 20 minutes on a PC with an NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i9-9900KF CPU (3.6 GHz). \begin{figure}[!t]\label{alg} \removelatexerror \begin{algorithm}[H] \caption{Joint learning of signature matrix and STPG} \begin{algorithmic}[1] \INPUT $m,n,T,k,n_0$, mini-batch size $bs$, number of mini-batches $B$, mask matrix $\bm{H}$ \OUTPUT Trained params. $\{\gamma_t\}_{t=1}^T,\alpha, \bm{A}$ \State Initialize $\{\gamma_t\}_{t=1}^T$, $\alpha$, and $\bm{A}$. \For{$t=1$ to $T$} \Comment{Incremental training} \For{$b=1$ to $B$ } \Statex \LeftComment{1}{(i) Masking and normalization of $\bm A$.} \State{$\bm A := \bm A \odot \bm H$} \State{$\bm A := (\sqrt{km}/\|\bm A\|_F) \bm A$} \Statex \LeftComment{1}{(ii) Generating training data.} \State{Generate $\bm x\in\{\pm1\}^{n\times bs}$ and $\bm{w}$ randomly.} \State{Generate $\bm y$ by $\bm y = \sqrt{n_0/k} \bm A \bm x +\bm w$.} \Statex \LeftComment{1}{(iii) Update of trainable params.} \State{Estimate $\bm{\hat x}:= \bm{s}_{t}$ by the temporal STPG detector.} \State{Calculate the MSE loss between $\bm{x}$ and $\bm{\hat x}$.} \State{Update $\{\gamma_t\}_{t=1}^T$, $\alpha$, and $\bm{A}$ by an optimizer.} \EndFor \EndFor \end{algorithmic} \end{algorithm} \end{figure} \subsection{Multiuser detection performance} Now we evaluate the multiuser detection performance of the STPG detectors with/without learning a signature matrix.
In the training process of joint learning, we change the number of mini-batches to $1000$ because the number of trainable parameters increases. A mask matrix with signature sparsity $k=6$ is generated according to Gallager's method. Initial values of the weights of $\bm A$ are set to one, and signature matrices are trained independently for each given SNR. Other conditions follow the descriptions in Sec.~\ref{sec_ex}. For the STPG detector with fixed $\bm A$, the weights of $\bm A$ are chosen uniformly at random from $\{+1,-1\}$. \begin{figure}[t] \centering \includegraphics[width=8cm]{ICC2020-2} \caption{Multiuser detection performance of STPG with/without learning a signature matrix when $\beta=1$ ($m=1200$; dotted lines) and $1.2$ ($m=1000$; solid lines); $n=1200$ and $k=6$.} \label{fig_4} \end{figure} {Figure~\ref{fig_4} shows the BER performance of the STPG detectors with/without learning a signature matrix with overloaded factor $\beta=1$ and $1.2$ when $n=1200$. It is found that tuning $\bm A$ largely improves detection performance in the low SNR regime in both cases. When $\beta=1$ and BER$=1.0\times 10^{-2}$, the gain of learning a signature matrix is about $0.9$dB. } On the other hand, the gain vanishes as the SNR increases. Especially in the case of $\beta=1.2$, the joint learning shows worse detection performance than the STPG detector with a fixed signature matrix in the high SNR regime. This is because the gain of signature design is expected to be small and training the detector is sensitive to perturbations of the signature matrix when the noise level is relatively small. It is a future task to improve the joint learning method in the high SNR regime. These results suggest that the proposed signature design with the STPG detector improves multiuser detection performance with reasonable training costs, especially in the low SNR regime. \section{Concluding remarks}\label{sec_5} In this paper, we propose a trainable SCDMA multiuser detector called the STPG detector.
Applying the notion of deep unfolding to a computationally efficient PG detector, the STPG detector contains a constant number of trainable parameters, which can be trained by standard deep learning techniques. An advantage of the STPG detector is its low computational cost, which is proportional to the number of active users. Moreover, compared with a conventional BP detector, the STPG detector has lower computational complexity with respect to the signature sparsity $k$ and the signal constellation size, while its detection performance is fairly close to that of the BP detector. In addition, we demonstrate a DL-based signature design using the STPG detector. Numerical results show that the joint learning method improves multiuser detection performance, especially in the low SNR regime, with reasonable training costs. \section*{Acknowledgement} This work was partly supported by JSPS Grant-in-Aid for Scientific Research (B) Grant Number 16H02878 (TW) and Grant-in-Aid for Young Scientists (Start-up) Grant Number 17H06758 (ST), and the Telecommunications Advancement Foundation (ST).
\section[Introduction]{Introduction} In \cite{mat01} Mathias systematically studies and compares a variety of subsystems of $\mathrm{ZFC}$. One of the weakest systems studied in \cite{mat01} is the set theory $\mathbf{M}$ axiomatised by: extensionality, emptyset, pair, union, powerset, infinity, transitive containment, $\Delta_0$-separation and set foundation. This paper will expand upon some of the initial comparisons of extensions of $\mathbf{M}$ achieved in \cite{mat01} by studying the strengths of extensions of $\mathbf{M}$ obtained by adding fragments of the set-theoretic collection scheme. The fragments of the collection scheme considered in this paper will be obtained by restricting the following alternative versions of the collection scheme to the Takahashi class $\Delta_0^\mathcal{P}$ and the L\'{e}vy $\Pi_n$ classes: \begin{itemize} \item[](Collection) For all formulae $\phi(x, y, \vec{z})$ in the language of set theory, $$\forall \vec{z} \forall w((\forall x \in w) \exists y \phi(x, y, \vec{z}) \Rightarrow \exists C (\forall x \in w)(\exists y \in C) \phi(x, y, \vec{z})).$$ \item[](Strong Collection) For all formulae $\phi(x, y, \vec{z})$ in the language of set theory, $$\forall \vec{z} \forall w \exists C (\forall x \in w)(\exists y \phi(x, y, \vec{z}) \Rightarrow (\exists y \in C) \phi(x, y, \vec{z})).$$ \end{itemize} Both Collection and Strong Collection yield $\mathrm{ZF}$ when added to $\mathbf{M}$. In section \ref{Sec:Background} we note that, over $\mathbf{M}$, the restriction of the Strong Collection scheme to $\Pi_n$-formulae (strong $\Pi_n$-collection) is equivalent to the restriction of the Collection scheme to $\Pi_n$-formulae ($\Pi_n$-collection) plus separation for all $\Sigma_{n+1}$-formulae. This means that $\mathbf{M}$ plus $\Pi_{n+1}$-collection proves all instances of strong $\Pi_n$-collection. 
One of the many achievements of \cite{mat01} is showing that if $\mathbf{M}$ is consistent, then so is $\mathbf{M}$ plus the Axiom of Choice and strong $\Delta_0$-collection. In section \ref{Sec:BaseCaseSection} we investigate the strength of adding $\Delta_0^\mathcal{P}$-collection to four of the weak set theories studied in \cite{mat01}. We show that if $T$ is one of the theories $\mathbf{M}$, $\mathrm{Mac}$, $\mathbf{M}+\mathrm{H}$ or $\mathrm{MOST}$, then $T$ plus $\Delta_0^\mathcal{P}$-collection is $\Pi_2^\mathcal{P}$-conservative over $T$. As a consequence, we are able to extend the consistency results of \cite{mat01} by showing that if $\mathbf{M}$ is consistent, then so is $\mathbf{M}$ plus the Axiom of Choice plus $\Pi_1$-collection. The results of \cite{mat01} also show that the theory obtained by adding strong $\Pi_1$-collection to $\mathbf{M}$ is strictly stronger than $\mathbf{M}$. More specifically, $\mathbf{M}$ plus strong $\Pi_1$-collection proves the consistency of Zermelo Set Theory plus $\Delta_0$-collection. This result and the main result of section \ref{Sec:BaseCaseSection} are generalised in section \ref{Sec:GeneralCollectionResults} to show: For all $n \geq 1$, \begin{enumerate} \item $\mathbf{M}$ plus $\Pi_{n+1}$-collection and the scheme of induction on $\omega$ restricted to $\Sigma_{n+2}$-formulae proves that there exists a transitive model of Zermelo Set Theory plus $\Pi_n$-collection, \item the theory $\mathbf{M}+\Pi_{n+1}\textrm{-collection}$ is $\Pi_{n+3}$-conservative over the theory\\ $\mathbf{M}+\textrm{strong }\Pi_n \textrm{-collection}$. \end{enumerate} These comparisons are achieved using techniques, developed by Pino and Ressayre in \cite{res87} (see also \cite{flw16}), for building models of fragments of the collection scheme from chains of partially elementary submodels of the universe indexed by an ordinal, or a cut of a nonstandard ordinal, of a model of set theory. 
Finally, in section \ref{Sec:ResultsForKP} we consider replacing the base theory $\mathbf{M}$ by a theory, Kripke-Platek Set Theory with the Axiom of Infinity ($\mathrm{KPI}$) plus $V=L$, that does not include the powerset axiom. We indicate how the arguments in section \ref{Sec:GeneralCollectionResults} can be adapted to obtain the following analogues of (1) and (2) above: For all $n \in \omega$, \begin{enumerate} \item $\mathrm{KPI}+V=L$ plus $\Pi_{n+1}$-collection and the scheme of induction on $\omega$ restricted to $\Sigma_{n+2}$-formulae proves that there exists a transitive model of the theory $\mathrm{KPI}+V=L$ plus strong $\Pi_n$-collection and full class foundation, \item the theory $\mathrm{KPI}+V=L$ plus $\Pi_{n+1}$-collection is $\Pi_{n+3}$-conservative over the theory $\mathrm{KPI}+V=L$ plus strong $\Pi_{n}$-collection. \end{enumerate} \section[Background]{Background} \label{Sec:Background} Throughout this paper $\mathcal{L}$ will denote the language of set theory. Structures will usually be denoted using upper-case calligraphic letters ($\mathcal{M}, \mathcal{N}, \ldots$) and the corresponding plain-font letter ($M, N, \ldots$) will be used to denote the underlying set of that structure. If $\mathcal{M}$ is a structure, then we will use $\mathcal{L}(\mathcal{M})$ to denote the language of $\mathcal{M}$. If $\mathcal{M}$ is an $\mathcal{L}^\prime$-structure where $\mathcal{L}^\prime \supseteq \mathcal{L}$ and $a \in M$ then we will use $a^*$ to denote the class $\{x \in M \mid \mathcal{M} \models (x \in a)\}$. As usual $\Delta_0 (= \Sigma_0= \Pi_0), \Sigma_1, \Pi_1, \ldots$ will be used to denote the L\'{e}vy classes of $\mathcal{L}$-formulae, and we use $\Pi_\infty$ to denote the union of all of these classes (i.e. $\Pi_\infty= \bigcup_{n \in \omega} \Sigma_n= \bigcup_{n \in \omega} \Pi_n$). For all $n \in \omega$, $\Delta_n$ is the class of all formulae that are provably equivalent to both a $\Sigma_n$ formula and a $\Pi_n$ formula.
We will also have cause to consider the class $\Delta_0^\mathcal{P}$, which is the smallest class of $\mathcal{L}$-formulae that contains all atomic formulae, contains all compound formulae formed using the connectives of first-order logic, and is closed under quantification in the form $\mathcal{Q} x \in y$ and $\mathcal{Q} x \subseteq y$ where $x$ and $y$ are distinct variables, and $\mathcal{Q}$ is $\exists$ or $\forall$. The classes $\Sigma_1^\mathcal{P}, \Pi_1^\mathcal{P}, \Delta_1^\mathcal{P}, \ldots$ are defined inductively from the class $\Delta_0^\mathcal{P}$ in the same way that the classes $\Sigma_1, \Pi_1, \Delta_1, \ldots$ are defined from $\Delta_0$. If $\Gamma$ is a class of formulae and $T$ is a theory, then we write $\Gamma^T$ for the class of formulae that are provably equivalent in $T$ to a formula in $\Gamma$. If $\Gamma$ is a class of formulae, then we use $\mathbf{Bol}(\Gamma)$ to denote the smallest class of formulae that contains $\Gamma$, and contains all compound formulae formed using the connectives of first-order logic. Note that for all $n \in \omega$, $\mathbf{Bol}(\Sigma_n)^\emptyset= \mathbf{Bol}(\Pi_n)^\emptyset$ and $\mathbf{Bol}(\Sigma_n^\mathcal{P})^\emptyset= \mathbf{Bol}(\Pi_n^\mathcal{P})^\emptyset$. If $\Gamma$ is a class of formulae, then we write $\neg \Gamma$ for the class of negations of formulae in $\Gamma$. So, for all $n \in \omega$, $(\neg \Sigma_n)^\emptyset= \Pi_n^\emptyset$, $(\neg \Pi_n)^\emptyset= \Sigma_n^\emptyset$, $(\neg \Sigma_n^\mathcal{P})^\emptyset=(\Pi_n^\mathcal{P})^\emptyset$, and $(\neg \Pi_n^\mathcal{P})^\emptyset= (\Sigma_n^\mathcal{P})^\emptyset$. Let $T$ be an $\mathcal{L}^\prime$-theory and let $S$ be an $\mathcal{L}^{\prime\prime}$-theory where $\mathcal{L}^{\prime} \subseteq \mathcal{L}^{\prime\prime}$, and let $\Gamma$ be a class of $\mathcal{L}^\prime$-formulae. The theory $S$ is said to be {\it $\Gamma$-conservative} over $T$ if $S$ and $T$ prove the same $\Gamma$-sentences.
Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. If $\mathcal{M}$ is a substructure of $\mathcal{N}$ then we will write $\mathcal{M} \subseteq \mathcal{N}$. If $\Gamma$ is a class of $\mathcal{L}$-formulae then we will write $\mathcal{M} \prec_\Gamma \mathcal{N}$ if $\mathcal{M} \subseteq \mathcal{N}$ and for every $\vec{a} \in M$, $\vec{a}$ satisfies the same $\Gamma$-formulae in both $\mathcal{M}$ and $\mathcal{N}$. In the case that $\Gamma$ is $\Pi_\infty$ or $\Sigma_n$ then we will abbreviate this notation by writing $\mathcal{M} \prec \mathcal{N}$ and $\mathcal{M} \prec_n \mathcal{N}$ respectively. If $\mathcal{M} \subseteq \mathcal{N}$ and for all $x \in M$ and $y \in N$, $$\textrm{if } \mathcal{N} \models (y \in x) \textrm{ then } y \in M,$$ then we say that $\mathcal{N}$ is an \emph{end-extension} of $\mathcal{M}$ and write $\mathcal{M} \subseteq_e \mathcal{N}$. It is well-known that if $\mathcal{M} \subseteq_e \mathcal{N}$ then $\mathcal{M} \prec_0 \mathcal{N}$. The following is a slight generalisation of the notion of a powerset preserving end-extension that was first studied by Forster and Kaye in \cite{fk91}. \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. We say that $\mathcal{N}$ is a powerset preserving end-extension of $\mathcal{M}$, and write $\mathcal{M} \subseteq_e^\mathcal{P} \mathcal{N}$ if \begin{itemize} \item[(i)] $\mathcal{M} \subseteq_e \mathcal{N}$, \item[(ii)] for all $x \in N$ and for all $y \in M$, if $\mathcal{N} \models (x \subseteq y)$, then $x \in M$. \end{itemize} \end{Definitions1} Just as end-extensions preserve $\Delta_0$ properties, powerset preserving end-extensions preserve $\Delta_0^\mathcal{P}$ properties. The following is a slight modification of a result proved in \cite{fk91}: \begin{Lemma1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures that satisfy extensionality. 
If $\mathcal{M} \subseteq_e^\mathcal{P} \mathcal{N}$, then $\mathcal{M} \prec_{\Delta_0^\mathcal{P}} \mathcal{N}$. \Square \end{Lemma1} Let $\Gamma$ be a class of $\mathcal{L}$-formulae. The following define the restriction of some commonly encountered axiom and theorem schemes of $\mathrm{ZFC}$ to formulae in the class $\Gamma$: \begin{itemize} \item[]($\Gamma$-separation) For all $\phi(x, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w \exists y \forall x(x \in y \iff (x \in w) \land \phi(x, \vec{z})).$$ \item[]($\Gamma$-collection) For all $\phi(x, y, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w((\forall x \in w) \exists y \phi(x, y, \vec{z}) \Rightarrow \exists C (\forall x \in w)(\exists y \in C) \phi(x, y, \vec{z})).$$ \item[](strong $\Gamma$-collection) For all $\phi(x, y, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w \exists C (\forall x \in w)(\exists y \phi(x, y, \vec{z}) \Rightarrow (\exists y \in C) \phi(x, y, \vec{z})).$$ \item[]($\Gamma$-foundation) For all $\phi(x, \vec{z}) \in \Gamma$, $$\forall \vec{z}(\exists x \phi(x, \vec{z}) \Rightarrow \exists y(\phi(y, \vec{z}) \land (\forall x \in y) \neg \phi(x, \vec{z}))).$$ If $\Gamma= \{x \in z\}$ then we will refer to $\Gamma$-foundation as \emph{set foundation}. \item[]($\Gamma$-induction on $\omega$) For all $\phi(x, \vec{z}) \in \Gamma$, $$\forall \vec{z}(\phi(\emptyset, \vec{z}) \land (\forall n \in \omega)(\phi(n, \vec{z}) \Rightarrow \phi(n+1, \vec{z})) \Rightarrow (\forall n \in \omega) \phi(n, \vec{z})).$$ \end{itemize} We will use $\bigcup x \subseteq x$ to abbreviate the $\Delta_0$-formula that says that $x$ is transitive ($(\forall y \in x)(\forall z \in y)(z \in x)$). 
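Over the hereditarily finite sets, the $\Delta_0$ notion of transitivity just abbreviated, and the transitive supersets whose existence $\mathrm{TCo}$ asserts, can be illustrated concretely. The coding of sets as nested frozensets below is, of course, only an illustration and not part of the formal development.

```python
def is_transitive(x):
    """The Delta_0 condition (forall y in x)(forall z in y)(z in x),
    for hereditarily finite sets coded as nested frozensets."""
    return all(z in x for y in x for z in y)

def transitive_closure(x):
    """Least transitive superset of x, witnessing TCo over the
    hereditarily finite sets."""
    tc, frontier = set(x), set(x)
    while frontier:
        # add the members of members, then their members, and so on
        frontier = {z for y in frontier for z in y} - tc
        tc |= frontier
    return frozenset(tc)
```

For instance, the von Neumann ordinal $2 = \{\emptyset, \{\emptyset\}\}$ is transitive, whereas $\{\{\emptyset\}\}$ is not, and the transitive closure of the latter is exactly $2$.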
We will also make reference to the following axioms: \begin{itemize} \item[](Axiom H) $$\forall u \exists T \left(\bigcup T \subseteq T \land \forall z(\bigcup z \subseteq z \land |z| \leq |u| \Rightarrow z \subseteq T) \right).$$ \item[]($\mathrm{TCo}$) $$\forall x \exists y \left( \bigcup y \subseteq y \land x \subseteq y \right).$$ \end{itemize} The following weak subsystems of $\mathrm{ZFC}$ are studied by Mathias in \cite{mat01}: \begin{itemize} \item $\mathbf{S}_1$ is the $\mathcal{L}$-theory with axioms: extensionality, emptyset, pair, union, set difference, and powerset. \item $\mathbf{M}$ is obtained from $\mathbf{S}_1$ by adding $\mathrm{TCo}$, infinity, $\Delta_0$-separation, and set foundation. \item $\mathrm{Mac}$ is obtained from $\mathbf{M}$ by adding the axiom of choice. \item $\mathbf{M}+\mathrm{H}$ is obtained from $\mathbf{M}$ by adding Axiom H. \item $\mathrm{KPI}$ is obtained from $\mathbf{M}$ by removing powerset, and adding $\Delta_0$-collection and $\Pi_1$-foundation. \item $\mathrm{KP}^\mathcal{P}$ is obtained from $\mathbf{M}$ by adding $\Delta_0^\mathcal{P}$-collection and $\Pi_1^\mathcal{P}$-foundation. \item $\mathrm{MOST}$ is obtained from $\mathrm{Mac}$ by adding $\Sigma_1$-separation and $\Delta_0$-collection. \item $\mathrm{Z}$ is obtained from $\mathbf{M}$ by removing $\mathrm{TCo}$, and adding $\Pi_\infty$-separation. \item $\mathrm{ZC}$ is obtained from $\mathrm{Z}$ by adding the axiom of choice. \end{itemize} In addition to these theories, we will also use $\mathrm{MOST}^{-\mathrm{AC}}$ to refer to the theory obtained by removing the axiom of choice from $\mathrm{MOST}$, and $\mathbf{M}^-$ to refer to the theory obtained by removing the powerset axiom from $\mathbf{M}$. $\mathrm{ZF}$ and $\mathrm{ZFC}$ are obtained by adding $\Pi_\infty$-collection (or, equivalently, strong $\Pi_\infty$-collection) to $\mathbf{M}$ and $\mathrm{Mac}$ respectively. 
We begin by collecting together some well-known relationships between fragments of induction, separation, collection, and strong collection over the weak base theory $\mathbf{M}^-$. \begin{Lemma1} \label{Th:BasicRelationships} Let $\Gamma$ be a class of $\mathcal{L}$-formulae. Let $n \in \omega$. \begin{enumerate} \item $\mathbf{M}^- + \Gamma\textrm{-foundation} \vdash \neg \Gamma\textrm{-induction on } \omega$ \item $\mathbf{M}^- + \Gamma\textrm{-separation} \vdash \mathbf{Bol}(\Gamma)\textrm{-separation}$ \item $\mathbf{M}^- + \Gamma\textrm{-separation} \vdash \Gamma\textrm{-foundation}$ \item $\mathbf{M}^- + \textrm{[strong] } \Pi_n^{(\mathcal{P})}\textrm{-collection} \vdash \textrm{[strong] } \Sigma_{n+1}^{(\mathcal{P})}\textrm{-collection}$ \item $\mathbf{M}^- + \Pi_n\textrm{-collection} \vdash \Delta_{n+1}\textrm{-separation}$ \end{enumerate} \Square \end{Lemma1} Another well-known application of $\Pi_n$-collection is that, over $\mathbf{M}^-$, this scheme implies that the classes $\Sigma_{n+1}$ and $\Pi_{n+1}$ are essentially closed under bounded quantification. \begin{Lemma1} \label{Th:Normalisation} Let $\phi(x, \vec{z})$ be a $\Sigma_{n+1}$-formula, and let $\psi(x, \vec{z})$ be a $\Pi_{n+1}$-formula. The theory $\mathbf{M}^-+ \Pi_n\textrm{-collection}$ proves that $(\forall x \in y) \phi(x, \vec{z})$ is equivalent to a $\Sigma_{n+1}$-formula, and $(\exists x \in y) \psi(x, \vec{z})$ is equivalent to a $\Pi_{n+1}$-formula. \Square \end{Lemma1} We also observe that for all $n \in \omega$, strong $\Pi_n$-collection is equivalent, over $\mathbf{M}^-$, to $\Pi_n$-collection plus $\Sigma_{n+1}$-separation. The following lemma generalises one of the equivalences reported in \cite[Proposition 3.14]{mat01}. 
\begin{Lemma1} \label{Th:StrongCollectionVsCollectionSeparation} For all $n \in \omega$, \begin{enumerate} \item $\mathbf{M}^- + \textrm{strong } \Pi_n\textrm{-collection} \vdash \Pi_n\textrm{-collection and } \Sigma_{n+1}\textrm{-separation}$ \item $\mathbf{M}^- + \Pi_n\textrm{-collection}+\Sigma_{n+1}\textrm{-separation} \vdash \textrm{strong } \Pi_n \textrm{-collection}$. \end{enumerate} \end{Lemma1} \begin{proof} We first prove (1). The fact that $\mathbf{M}^- + \textrm{strong } \Pi_n\textrm{-collection}$ proves the scheme of $\Pi_n$-collection is clear. We need to prove that $\mathbf{M}^- + \textrm{strong } \Pi_n\textrm{-collection}$ proves the scheme of $\Sigma_{n+1}$-separation. It immediately follows from Lemma \ref{Th:BasicRelationships} that $\mathbf{M}^- + \textrm{strong } \Pi_n\textrm{-collection}$ proves the scheme of strong $\Sigma_{n+1}$-collection and $\Pi_n$-separation. Work in the theory $\mathbf{M}^- + \textrm{strong } \Pi_n\textrm{-collection}$. Consider $\exists y \phi(y, x, \vec{z})$ where $\phi(y, x, \vec{z})$ is $\Pi_n$. Let $\vec{a}, b$ be sets. By strong $\Pi_n$-collection, there exists a set $C$ such that $$(\forall x \in b)(\exists y \phi(y, x, \vec{a}) \Rightarrow (\exists y \in C)\phi(y, x, \vec{a})).$$ Therefore, using Lemma \ref{Th:Normalisation} and $\Pi_n$-separation, $$A= \{x \in b \mid \exists y \phi(y, x, \vec{a})\}= \{x \in b \mid (\exists y \in C)\phi(y, x, \vec{a})\}$$ is a set. This completes the proof of (1).\\ We turn our attention to (2). Work in the theory $\mathbf{M}^-+ \Pi_n\textrm{-collection}+\Sigma_{n+1}\textrm{-separation}$. Let $\phi(x, y, \vec{z})$ be a $\Pi_n$-formula, and let $\vec{a}, b$ be sets. Now, $\Sigma_{n+1}$-separation implies that $$A= \{x \in b \mid \exists y \phi(x, y, \vec{a})\}$$ is a set. And, $(\forall x \in A)\exists y \phi(x, y, \vec{a})$ holds. Therefore, we can apply $\Pi_n$-collection to obtain a set $C$ such that $(\forall x \in A)(\exists y \in C) \phi(x, y, \vec{a})$ holds. 
It now follows from the definition of $A$ that $$(\forall x \in b)(\exists y \phi(x, y, \vec{a}) \Rightarrow (\exists y \in C) \phi(x, y, \vec{a})).$$ This completes the proof of (2). \Square \end{proof} \begin{Coroll1} $\mathrm{MOST}$ ($\mathrm{MOST}^{-\mathrm{AC}}$, respectively) is the same theory as $\mathrm{Mac}+\textrm{strong } \Delta_0\textrm{-collection}$ ($\mathbf{M}+\textrm{strong } \Delta_0\textrm{-collection}$, respectively). \Square \end{Coroll1} Sufficiently rich set theories such as $\mathbf{M}$ and $\mathrm{KPI}$ allow us to express satisfaction in set structures. The following can be found in \cite{mat69} and \cite[Section III.1]{bar75}: \begin{Lemma1} \label{Th:SatisfactionForSetStructuresInKPI} In the theory $\mathrm{KPI}$, if $\mathcal{M}$ is a set structure, $\vec{a}$ is sequence of sets, and $\phi$ is an $\mathcal{L}(\mathcal{M})$-formula in the sense of the model whose arity agrees with the length of $\vec{a}$, then the predicate ``$\mathcal{M} \models \phi[\vec{v}/ \vec{a}]$" is definable by a $\Delta_1$-formula. \Square \end{Lemma1} It is noted in \cite{mat01} that when powerset is present the recursions involved in the definition of satisfaction can be contained in sets even without any collection. The following is a consequence \cite[Proposition 3.10]{mat01}: \begin{Lemma1} In the theory $\mathbf{M}$, if $\mathcal{M}$ is a set structure, $\vec{a}$ is sequence of sets, and $\phi$ is an $\mathcal{L}(\mathcal{M})$-formula in the sense of the model whose arity agrees with the length of $\vec{a}$, then the predicate ``$\mathcal{M} \models \phi[\vec{v}/ \vec{a}]$" is definable and $$\{\langle \ulcorner \phi \urcorner, \vec{a}\rangle \mid \vec{a} \in M \land \mathcal{M} \models \phi(\vec{a})\}$$ is a set. \end{Lemma1} Equipped with these results, we can now define formulae that, in the theories $\mathrm{KPI}$ and $\mathbf{M}$, express satisfaction in the universe for the L\'{e}vy classes of $\mathcal{L}$-formulae. 
\begin{Definitions1} \label{Df:Delta0Satisfaction} Define $\mathrm{Sat}_{\Delta_0}(n, x)$ to be the formula $$\begin{array}{c} (n \in \omega) \land (n= \ulcorner \phi(v_1, \ldots, v_m) \urcorner \textrm{ where } \phi \textrm{ is } \Delta_0) \land (x= \langle x_1, \ldots, x_m \rangle) \land\\ \exists N \left( \bigcup N \subseteq N \land (x_1, \ldots, x_m \in N) \land (\langle N, \in \rangle \models \phi[x_1, \ldots, x_m]) \right) \end{array}.$$ \end{Definitions1} The absoluteness of $\Delta_0$ properties between transitive structures and the universe, and the availability of $\mathrm{TCo}$ in $\mathrm{KPI}$ implies that the formula $\mathrm{Sat}_{\Delta_0}$ is equivalent, in the theory $\mathrm{KPI}$, to the formula $$\begin{array}{c} (n \in \omega) \land (n= \ulcorner \phi(v_1, \ldots, v_m) \urcorner \textrm{ where } \phi \textrm{ is } \Delta_0) \land (x= \langle x_1, \ldots, x_m \rangle) \land\\ \forall N \left( \bigcup N \subseteq N \land (x_1, \ldots, x_m \in N) \Rightarrow (\langle N, \in \rangle \models \phi[x_1, \ldots, x_m]) \right) \end{array}.$$ Therefore, Lemma \ref{Th:SatisfactionForSetStructuresInKPI} implies that $\mathrm{Sat}_{\Delta_0}(n, x)$ is $\Delta_1^{\mathrm{KPI}}$, and $\mathrm{Sat}_{\Delta_0}(n, x)$ expresses satisfaction for $\Delta_0$-formulae in the theories $\mathrm{KPI}$ and $\mathbf{M}$. We can now inductively define formulae $\mathrm{Sat}_{\Sigma_m}(n, x)$ and $\mathrm{Sat}_{\Pi_m}(n, x)$ that express satisfaction for formulae in the classes $\Sigma_m$ and $\Pi_m$. \begin{Definitions1} The formulae $\mathrm{Sat}_{\Sigma_m}(n, x)$ and $\mathrm{Sat}_{\Pi_m}(n, x)$ are defined inductively. 
Define $\mathrm{Sat}_{\Sigma_{m+1}}(n, x)$ to be the formula $$\exists \vec{y} \exists k \exists b \left( \begin{array}{c} (n= \ulcorner\exists \vec{u} \phi(\vec{u}, v_1, \ldots, v_l)\urcorner \textrm{ where } \phi \textrm{ is } \Pi_m)\land (x= \langle x_1, \ldots, x_l \rangle)\\ \land (b= \langle \vec{y}, x_1, \ldots, x_l \rangle) \land (k= \ulcorner \phi(\vec{u}, v_1, \ldots, v_l) \urcorner) \land \mathrm{Sat}_{\Pi_m}(k, b) \end{array}\right).$$ Define $\mathrm{Sat}_{\Pi_{m+1}}(n, x)$ to be the formula $$\forall \vec{y} \forall k \forall b \left( \begin{array}{c} (n= \ulcorner\forall \vec{u} \phi(\vec{u}, v_1, \ldots, v_l) \urcorner \textrm{ where } \phi \textrm{ is } \Sigma_m)\land (x= \langle x_1, \ldots, x_l \rangle)\\ \land ((b= \langle \vec{y}, x_1, \ldots, x_l \rangle) \land (k= \ulcorner\phi(\vec{u}, v_1, \ldots, v_l)\urcorner) \Rightarrow \mathrm{Sat}_{\Sigma_m}(k, b)) \end{array}\right).$$ \end{Definitions1} The formula $\mathrm{Sat}_{\Sigma_m}(n, x)$ (respectively $\mathrm{Sat}_{\Pi_m}(n, x)$) is $\Sigma_m^{\mathrm{KPI}}$ ($\Pi_m^{\mathrm{KPI}}$, respectively), and, in the theories $\mathrm{KPI}$ and $\mathbf{M}$, expresses satisfaction for $\Sigma_m$-formulae ($\Pi_m$-formulae, respectively). Another important feature of the theory $\mathrm{KPI}$ is its ability to construct $L$. The following can be found in \cite{mat69} and \cite[Chapter II]{bar75}: \begin{Theorems1} \label{Th:DefinabilityOfLInKPI} ($\mathrm{KPI}$) The function $\alpha \mapsto L_\alpha$, where $\alpha$ is an ordinal, is total and $\Delta_1$. \Square \end{Theorems1} As is usual, we use $V=L$ to abbreviate the expression that says that every set is a member of some $L_\alpha$ ($\forall x \exists \alpha((\alpha \textrm{ is an ordinal}) \land (x \in L_\alpha))$). We now turn to noting some of the properties of the theories $\mathbf{M}$, $\mathrm{Mac}$, $\mathbf{M}+\mathrm{H}$ and $\mathrm{MOST}$ that are established in \cite{mat01}.
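To illustrate these satisfaction predicates, take $m=0$ (so that $\Pi_0= \Delta_0$ and $\mathrm{Sat}_{\Pi_0}$ is $\mathrm{Sat}_{\Delta_0}$), and let $n= \ulcorner \exists u (u \in v_1) \urcorner$ and $x= \langle x_1 \rangle$. Then $\mathrm{Sat}_{\Sigma_1}(n, x)$ asserts that there exist $y$, $k$ and $b$ such that $b= \langle y, x_1 \rangle$, $k= \ulcorner u \in v_1 \urcorner$ and $\mathrm{Sat}_{\Delta_0}(k, b)$, which, by the absoluteness of $\Delta_0$-formulae, holds if and only if $x_1$ is nonempty.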
The following useful fact is a consequence of \cite[Theorem I.6.1.]{bar75}: \begin{Lemma1} The theory $\mathrm{KPI}$ proves $\mathrm{TCo}$. \Square \end{Lemma1} We also record the following consequence of \cite[Theorem Scheme 6.9(i)]{mat01}: \begin{Theorems1} \label{Th:Delta0PSeparationInM} The theory $\mathbf{M}$ proves all instances of $\Delta_0^\mathcal{P}$-separation. \Square \end{Theorems1} Section 2 of \cite{mat01} shows that by considering classes of well-founded extensional relations in a model of $\mathbf{M}$ one can obtain a model of $\mathbf{M}+\mathrm{H}$. \begin{Theorems1} \label{Th:MisConsistentWithH} (Mathias) If $\mathbf{M}$ is consistent, then so is $\mathbf{M}+\mathrm{H}$. \end{Theorems1} Section 3 of \cite{mat01} establishes a variety of consequences of Axiom H over the theories $\mathbf{M}$ and $\mathrm{Mac}$. A key observation of this section is that the theory $\mathrm{MOST}$ is exactly $\mathrm{Mac}$ plus Axiom H. \begin{Lemma1} \label{Th:MOSTIsMacAndH} $\mathrm{MOST}$ is the same theory as $\mathrm{Mac}+\textrm{Axiom }\mathrm{H}$. \Square \end{Lemma1} The following useful consequences of the theory $\mathrm{MOST}$ (=$\mathrm{Mac}+\textrm{strong } \Delta_0\textrm{-collection}$) are also proved in Section 3 of \cite{mat01}: \begin{Lemma1} \label{Th:BasicConsequencesMOST} The theory $\mathrm{MOST}$ proves \begin{itemize} \item[(i)] every well-ordering is isomorphic to an ordinal, \item[(ii)] every well-founded extensional relation is isomorphic to a transitive set, \item[(iii)] for all cardinals $\kappa$, $\kappa^+$ exists, \item[(iv)] for all cardinals $\kappa$, $H_\kappa$ exists. \end{itemize} \Square \end{Lemma1} Section 4 of \cite{mat01} establishes that the theory $\mathbf{M}+\mathrm{H}$ is capable of building G\"{o}del's $L$.
Combined with Theorems \ref{Th:MisConsistentWithH} and \ref{Th:MOSTIsMacAndH} this yields the following consistency result: \begin{Theorems1} \label{Th:MOSTplusVequalsLConsistentWithM} (Mathias \cite[Theorem 1]{mat01}) If $\mathbf{M}$ is consistent, then so is $\mathrm{MOST}+V=L$. \Square \end{Theorems1} The classes $\Delta_0^\mathcal{P}$, $\Sigma_1^\mathcal{P}$, $\Pi_1^\mathcal{P}$, \ldots are introduced and studied by Takahashi in \cite{tak72} where it is shown that for all $n \geq 1$, $(\Sigma_n^\mathcal{P})^{\mathrm{ZFC}}= \Sigma_{n+1}^\mathrm{ZFC}$, $(\Pi_n^\mathcal{P})^{\mathrm{ZFC}}= \Pi_{n+1}^\mathrm{ZFC}$, and $(\Delta_n^\mathcal{P})^{\mathrm{ZFC}}= \Delta_{n+1}^\mathrm{ZFC}$. The following calibration of Takahashi's result appears as Proposition Scheme 6.12 of \cite{mat01}: \begin{Lemma1} \label{Th:TakahashiBaseCase} (Takahashi) $\Sigma_1 \subseteq (\Delta_1^\mathcal{P})^{\mathrm{MOST}}$ and $\Delta_0^\mathcal{P} \subseteq \Delta_2^{\mathbf{S}_1}$. \Square \end{Lemma1} \noindent This yields the following refined version of Theorem 6 of \cite{tak72}: \begin{Theorems1} \label{Th:TakahashiVsLevy} (Takahashi) For all $n \geq 1$, $\Sigma_{n+1} \subseteq (\Sigma_n^\mathcal{P})^{\mathrm{MOST}}$, $\Pi_{n+1} \subseteq (\Pi_n^\mathcal{P})^{\mathrm{MOST}}$, $\Delta_{n+1} \subseteq (\Delta_n^\mathcal{P})^{\mathrm{MOST}}$, $\Sigma_n^\mathcal{P} \subseteq \Sigma_{n+1}^{\mathbf{S}_1}$, $\Pi_n^\mathcal{P} \subseteq \Pi_{n+1}^{\mathbf{S}_1}$, and $\Delta_n^\mathcal{P} \subseteq \Delta_{n+1}^{\mathbf{S}_1}$. \Square \end{Theorems1} Lemmas \ref{Th:BasicRelationships} and \ref{Th:StrongCollectionVsCollectionSeparation}, and Theorem \ref{Th:TakahashiVsLevy} now show: \begin{Coroll1} The theory $\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}$ proves every axiom of $\mathrm{KP}^\mathcal{P}$. \Square \end{Coroll1} In \cite{mat01}, Mathias proves a $\Sigma_1^\mathcal{P}$-Recursion Theorem in the theory $\mathrm{KP}^\mathcal{P}$.
The following appear as Lemma 6.25 and Theorem 6.26 in \cite{mat01}: \begin{Lemma1} If $F$ is a total $\Sigma_1^\mathcal{P}$-definable class function, then the formula $y=F(x)$ is $\Delta_1^\mathcal{P}$. \Square \end{Lemma1} \begin{Theorems1} \label{Th:Sigma1PRecursionTheorem} ($\mathrm{KP}^\mathcal{P}$) Let $G$ be a $\Sigma_1^\mathcal{P}$-definable class. If $G$ is a total function, then there exists a $\Sigma_1^\mathcal{P}$-definable total class function $F$ such that for all $x$, $F(x)= G(F\upharpoonright x)$. \Square \end{Theorems1} The fact that we have access to Theorem \ref{Th:Sigma1PRecursionTheorem} in the theory $\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}$ yields: \begin{Coroll1} The theory $\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}$ proves that for all ordinals $\alpha$, $V_\alpha$ is a set. Moreover, the formula ``$x= V_\alpha$" with free variables $x$ and $\alpha$ is equivalent to a $\Delta_1^\mathcal{P}$-formula. \Square \end{Coroll1} Results proved in \cite{mat01} also reveal that the theory $\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}$ is capable of proving the consistency of Zermelo Set Theory plus $\Delta_0$-collection. Mathias \cite[Lemma 6.31]{mat01} shows that the theory obtained by strengthening $\mathrm{KP}$ with an axiom that asserts the existence of $V_\alpha$ for every ordinal $\alpha$ is capable of proving the consistency of $\mathrm{Z}$. The fact that $\mathrm{KP}^\mathcal{P}$ is equipped with enough recursion to prove the existence of $V_\alpha$ for every $\alpha$ \cite[Proposition 6.28]{mat01} thus yields: \begin{Theorems1} \label{Th:KPPProvesConsistencyOfZ} (Mathias) The theory $\mathrm{KP}^\mathcal{P}$ proves that there exists a transitive model of $\mathrm{Z}$.\Square \end{Theorems1} Mathias \cite[Theorem 5]{mat01} also shows that all of the axioms of $\mathrm{KP}$ plus $V=L$ can be consistently added to $\mathrm{Z}$. 
In particular: \begin{Theorems1} \label{Th:ZisEquiconsistentWithKLZ} (Mathias) If $\mathrm{Z}$ is consistent, then so is $\mathrm{Z}+\Delta_0\textrm{-collection}+V=L$. \Square \end{Theorems1} Theorems \ref{Th:KPPProvesConsistencyOfZ} and \ref{Th:ZisEquiconsistentWithKLZ} now yield: \begin{Coroll1} $\mathrm{KP}^\mathcal{P} \vdash \mathrm{Con}(\mathrm{Z}+\Delta_0\textrm{-collection}+V=L)$. \Square \end{Coroll1} \section[The strength of $\Delta_0^\mathcal{P}$-collection]{The strength of $\Delta_0^\mathcal{P}$-collection}\label{Sec:BaseCaseSection} In this section we investigate the strength of adding $\Delta_0^\mathcal{P}$-collection to subsystems of set theory studied in \cite{mat01}. We show that if $T$ is one of the theories $\mathbf{M}$, $\mathbf{M}+\mathrm{H}$, $\mathrm{Mac}$ or $\mathrm{MOST}$, then the theory obtained by adding $\Delta_0^\mathcal{P}$-collection to $T$ is $\Pi_2^\mathcal{P}$-conservative over $T$. Combined with Theorems \ref{Th:MOSTplusVequalsLConsistentWithM} and \ref{Th:TakahashiVsLevy}, this shows that if $\mathbf{M}$ is consistent, then so is $\mathrm{MOST}+\Pi_1\textrm{-collection}$. If $u$ is a set, then we will write $H_{\leq|u|}$ to denote the set $$\{x \mid |\mathrm{TC}(\{x\})|\leq |u|\}.$$ \begin{Lemma1}\label{Th:HuExist} The theory $\mathbf{M}+\mathrm{H}$ proves that for all sets $u$, $H_{\leq|u|}$ exists. \end{Lemma1} \begin{proof} Work in the theory $\mathbf{M}+\mathrm{H}$. Let $u$ be a set. Using Axiom H, let $T$ be a set such that $$\forall z\left(\bigcup z \subseteq z \land |z|\leq |u| \Rightarrow z \subseteq T\right).$$ Note that if $x$ is a set such that $|\mathrm{TC}(\{x\})| \leq |u|$, then $\mathrm{TC}(\{x\}) \subseteq T$ and so $x \in T$. Moreover, if $|\mathrm{TC}(\{x\})| \leq |u|$, then $\mathrm{TC}(\{x\}) \in \mathcal{P}(T)$ and the injection witnessing $|\mathrm{TC}(\{x\})| \leq |u|$ is in $\mathcal{P}(T\times u)$. Therefore $\Delta_0$-separation implies that $H_{\leq|u|}$ exists.
\Square \end{proof} The following is immediate from the definition of $H_{\leq|u|}$: \begin{Lemma1}\label{Th:HuSuperTransitive} The theory $\mathbf{M}+\mathrm{H}$ proves that if $u, x, y$ are sets, then \begin{itemize} \item[(i)] if $x \in y \in H_{\leq|u|}$, then $x \in H_{\leq|u|}$, and \item[(ii)] if $x \subseteq y \in H_{\leq|u|}$, then $x \in H_{\leq|u|}$. \end{itemize} \Square \end{Lemma1} \begin{Definitions1} \label{Df:HApproximations} Let $n \in \omega$ and let $u$ be a set. We say that $f$ is an $n$-good $|u|$-$H$-approximation if \begin{itemize} \item[(i)] $f$ is a function and $\mathrm{dom}(f)= n+1$ \item[(ii)] $f(\emptyset)= H_{\leq|u|}$ \item[(iii)] $(\forall k \in n+1)\exists v (f(k)= H_{\leq|v|})$ \item[(iv)] $(\forall k \in n)(f(k) \in f(k+1))$. \end{itemize} \end{Definitions1} We first observe that in any model of $\mathbf{M}+\mathrm{H}$ there exists an $n$-good $|u|$-$H$-approximation for every externally finite $n$ and every set $u$ in the model. \begin{Lemma1} \label{Th:FiniteHApproximationsExist} Let $n \in \omega$. If $\mathcal{M} \models \mathbf{M}+\mathrm{H}$ and $u \in M$, then $$\mathcal{M} \models \exists f(f \textrm{ is an } n\textrm{-good } |u|\textrm{-}H\textrm{-approximation}).$$ \end{Lemma1} \begin{proof} Let $\mathcal{M}= \langle M, \in^\mathcal{M} \rangle$ be such that $\mathcal{M} \models \mathbf{M}+\mathrm{H}$ and let $u \in M$. We prove, by external induction on $\omega$, that for all $n \in \omega$, $$\mathcal{M} \models \exists f(f \textrm{ is an } n\textrm{-good } |u|\textrm{-}H\textrm{-approximation}).$$ It follows from Lemma \ref{Th:HuExist} that $$\mathcal{M} \models \exists f(f \textrm{ is a }0 \textrm{-good } |u|\textrm{-}H\textrm{-approximation}).$$ Suppose that the lemma is false, and $k \in \omega$ is least such that $$\mathcal{M} \models \neg \exists f(f \textrm{ is a } (k+1)\textrm{-good } |u|\textrm{-}H\textrm{-approximation}).$$ Work inside $\mathcal{M}$. Let $f$ be a $k$-good $|u|$-$H$-approximation. 
Let $v= f(k)\cup\{f(k)\}$. It follows from Definition \ref{Df:HApproximations}(iii) and Lemma \ref{Th:HuSuperTransitive} that $v= \mathrm{TC}(\{f(k)\})$. Therefore $g=f\cup \{\langle k+1, H_{\leq|v|}\rangle\}$ is a $(k+1)$-good $|u|$-$H$-approximation, which is a contradiction. \Square \end{proof} In the proof of the following result we obtain models of $\Delta_0^\mathcal{P}$-collection by considering a cut of an $n$-good $|u|$-$H$-approximation of nonstandard length. This idea of obtaining ``more'' collection from a cut of a nonstandard model of set theory also appears in Ressayre's work on limitations of extensions of Kripke-Platek Set Theory \cite{res87} (see also \cite{flw16}) and Friedman's work \cite{fri73} on the standard part of countable non-standard models of set theory. \begin{Theorems1}\label{Th:Delta0PCollectionConsistentWithMOST} \begin{itemize} \item[(I)] The theory $\mathbf{M}+\mathrm{H}+\Delta_0^\mathcal{P}\textrm{-collection}$ is $\Pi_2^\mathcal{P}$-conservative over the theory $\mathbf{M}+\mathrm{H}$. \item[(II)] The theory $\mathrm{MOST}+\Pi_1\textrm{-collection}$ is $\Pi_3$-conservative over the theory $\mathrm{MOST}$. \end{itemize} \end{Theorems1} \begin{proof} To prove (I) it is sufficient to show that every $\Sigma_2^\mathcal{P}$-sentence that is consistent with $\mathbf{M}+\mathrm{H}$ is also consistent with $\mathbf{M}+\mathrm{H}+\Delta_0^\mathcal{P}\textrm{-collection}$. Suppose that $\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$, where $\theta(\vec{x}, \vec{y})$ is a $\Delta_0^\mathcal{P}$-formula, is consistent with $\mathbf{M}+\mathrm{H}$. Let $\mathcal{M}= \langle M, \in^\mathcal{M} \rangle$ be a recursively saturated model of $\mathbf{M}+\mathrm{H}+\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. Let $\vec{a} \in M$ be such that $\mathcal{M}\models \forall \vec{y} \theta(\vec{a}, \vec{y})$ and let $u \in M$ be such that $\vec{a} \in u$.
Consider the type $$\Xi(x, u)= \{x \in \omega\}\cup\{x > n \mid n \in \omega\}\cup \{\exists f(f \textrm{ is an }x\textrm{-good } |u|\textrm{-}H\textrm{-approximation})\}.$$ By Lemma \ref{Th:FiniteHApproximationsExist}, $\Xi(x, u)$ is finitely realised in any model of $\mathbf{M}+\textrm{H}$, and so, since $\mathcal{M}$ is recursively saturated, there exists $k \in M$ such that $\Xi(k, u)$ is satisfied in $\mathcal{M}$. Note that $k$ is a nonstandard element of $\omega^{\mathcal{M}}$. Let $f \in M$ be such that $$\mathcal{M} \models (f \textrm{ is a }k\textrm{-good } |u|\textrm{-}H\textrm{-approximation}).$$ Define $\mathcal{N}= \langle N, \in^\mathcal{N} \rangle$ by $$N= \bigcup_{n \in \omega} f(n^\mathcal{M})^* \textrm{ and } \in^\mathcal{N} \textrm{ is the restriction of } \in^\mathcal{M} \textrm{ to }N.$$ We claim that $\mathcal{N}$ satisfies $\mathbf{M}+\textrm{H}+\Delta_0^\mathcal{P}\textrm{-collection}+\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. Note that $\mathcal{N} \subseteq_e^\mathcal{P} \mathcal{M}$ and $\vec{a} \in N$, so $\mathcal{N}\models \exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. Let $x \in N$. Let $n \in \omega$ be such that $\mathcal{M} \models (x \in f(n^\mathcal{M}))$. Therefore $\mathcal{M}\models (\mathcal{P}(x) \subseteq f(n^\mathcal{M}))$ and $f(n^{\mathcal{M}}) \in (f((n+1)^\mathcal{M}))^* \subseteq N$. It now follows from Definition \ref{Df:HApproximations} that $\mathcal{P}^\mathcal{M}(x) \in N$. Therefore $\mathcal{N}\models (\mathrm{powerset})$ and for all $x \in N$, $\mathcal{P}^\mathcal{N}(x)= \mathcal{P}^\mathcal{M}(x)$. It is now clear that $\mathcal{N} \models \mathbf{M}$. We turn to showing that Axiom H holds in $\mathcal{N}$. Let $u \in N$. Let $n \in \omega$ be such that $u \in f(n^\mathcal{M})^*$. By Definition \ref{Df:HApproximations}, there exists $v \in M$ such that $\mathcal{M} \models (f(n^\mathcal{M})= H_{\leq|v|})$, and so $\mathcal{M} \models (|u|\leq |v|)$.
Now, working inside $\mathcal{N}$, if $z$ is transitive with $|z| \leq |u|$, then $|z|\leq |v|$ and so $z \in f(n^{\mathcal{M}})$. Therefore $$\mathcal{N} \models \forall z\left(\bigcup z \subseteq z \land |z|\leq |u| \Rightarrow z \in f(n^{\mathcal{M}})\right)$$ and so Axiom H holds in $\mathcal{N}$. We are left to show that $\mathcal{N}$ satisfies $\Delta_0^\mathcal{P}\textrm{-collection}$. We make use of the following property of $\mathcal{N}$:\\ {\bf Claim:} If $C\in M$ and $C^* \subseteq N$, then $C \in N$.\\ We prove this claim. Suppose, for a contradiction, that $C \in M$, $C^* \subseteq N$ and $C \notin N$. Note that if $n \in k^*$ is nonstandard, then $C^* \subseteq f(n)^*$ and $\mathcal{M} \models (C \in f(n+1))$. Therefore, working inside $\mathcal{M}$, the set $$A= \{n \in k \mid C \notin f(n)\}$$ is exactly the standard $\omega$, which is a contradiction. This proves the claim.\\ Now, let $\phi(x, y, \vec{z})$ be a $\Delta_0^\mathcal{P}$-formula. Let $\vec{d}, b \in N$ be such that $$\mathcal{N}\models (\forall x \in b) \exists y \phi(x, y, \vec{d}).$$ The following formula is a $\Delta_0^\mathcal{P}$-formula with parameters $\vec{d}$, $k$ and $f$: $$\phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d})).$$ So, by $\Delta_0^\mathcal{P}$-absoluteness, $$\mathcal{M} \models (\forall x \in b)(\exists y \in f(k))(\phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d}))).$$ Working inside $\mathcal{M}$, $\Delta_0^\mathcal{P}$-separation (Theorem \ref{Th:Delta0PSeparationInM}) implies that $$C= \{\langle x, y \rangle \in b \times f(k) \mid \phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d}))\}$$ is a set. And $\Delta_0^\mathcal{P}$-absoluteness implies that $C^* \subseteq N$. Therefore $C \in N$. Working inside $\mathcal{N}$, let $B= \mathrm{rng}(C)$.
So, $$\mathcal{N} \models (\forall x \in b)(\exists y \in B) \phi(x, y, \vec{d}),$$ which shows that $\mathcal{N} \models \Delta_0^\mathcal{P}\textrm{-collection}$. To see that (II) holds observe that if the Axiom of Choice holds in $\mathcal{M}$ in the proof of (I), then it also holds in $\mathcal{N}$. It then follows from Theorem \ref{Th:TakahashiVsLevy} that $\mathcal{N}$ also satisfies $\Pi_1$-collection, and we get $\Pi_3$-conservativity. \Square \end{proof} Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} combined with Theorem \ref{Th:MOSTplusVequalsLConsistentWithM} shows that the consistency of $\mathbf{M}$ implies the consistency of $\mathrm{MOST}+\Pi_1\textrm{-collection}$. \begin{Coroll1}\label{Th:ExtensionOfEquiconsistencyOfM} If $\mathbf{M}$ is consistent, then so is $\mathrm{MOST}+\Pi_1\textrm{-collection}$ ($= \mathrm{Mac}+\Pi_1\textrm{-collection}$). \Square \end{Coroll1} The argument used in the proof of Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} can also be used to show that the theories $\mathbf{M}+\Delta_0^\mathcal{P}\textrm{-collection}$ and $\mathrm{Mac}+\Delta_0^\mathcal{P}\textrm{-collection}$ are $\Pi_2^\mathcal{P}$-conservative over the theories $\mathbf{M}$ and $\mathrm{Mac}$, respectively. To see this we introduce a modification of Definition \ref{Df:HApproximations}: \begin{Definitions1}\label{Df:PowersetApproximations} Let $n \in \omega$ and let $u$ be a set. We say that $f$ is an $n$-good $u$-$\mathcal{P}$-approximation if \begin{itemize} \item[(i)] $f$ is a function and $\mathrm{dom}(f)= n+1$ \item[(ii)] $f(\emptyset)= \mathrm{TC}(u)$ \item[(iii)] $(\forall k \in n)(f(k+1)= \mathcal{P}(f(k)))$. \end{itemize} \end{Definitions1} An $n$-good $u$-$\mathcal{P}$-approximation is a sequence $v$, $\mathcal{P}(v)$, $\mathcal{P}(\mathcal{P}(v)), \ldots$ where $v$ is the transitive closure of $u$.
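For example, by Definition \ref{Df:PowersetApproximations}, the $2$-good $u$-$\mathcal{P}$-approximations are exactly the functions $f$ with domain $3$ such that $f(0)= \mathrm{TC}(u)$, $f(1)= \mathcal{P}(\mathrm{TC}(u))$ and $f(2)= \mathcal{P}(\mathcal{P}(\mathrm{TC}(u)))$. In contrast to the $H$-approximations of Definition \ref{Df:HApproximations}, an $n$-good $u$-$\mathcal{P}$-approximation is uniquely determined by $n$ and $u$.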
The same argument that was used to prove Lemma \ref{Th:FiniteHApproximationsExist} shows that in any model of $\mathbf{M}$, any such sequence with externally finite length is guaranteed to exist. \begin{Lemma1}\label{Th:FinitePowersetApproximationsExist} Let $n \in \omega$. If $\mathcal{M} \models \mathbf{M}$ and $u \in M$, then $$\mathcal{M} \models \exists f(f \textrm{ is an } n\textrm{-good } u\textrm{-}\mathcal{P}\textrm{-approximation}).$$ \Square \end{Lemma1} Replacing the $n$-good $|u|$-$H$-approximations in the proof of Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} with $n$-good $u$-$\mathcal{P}$-approximations now shows that adding $\Delta_0^\mathcal{P}$-collection to $\mathbf{M}$ or $\mathrm{Mac}$ yields no new $\Pi_2^\mathcal{P}$-theorems. \begin{Theorems1}\label{Th:Delta0PCollectionConsistentWithMac} \begin{itemize} \item[(I)] The theory $\mathbf{M}+\Delta_0^\mathcal{P}\textrm{-collection}$ is $\Pi_2^\mathcal{P}$-conservative over the theory $\mathbf{M}$. \item[(II)] The theory $\mathrm{Mac}+\Delta_0^\mathcal{P}\textrm{-collection}$ is $\Pi_2^\mathcal{P}$-conservative over the theory $\mathrm{Mac}$. \end{itemize} \Square \end{Theorems1} \begin{Remark1} Theorems \ref{Th:Delta0PCollectionConsistentWithMOST} and \ref{Th:Delta0PCollectionConsistentWithMac} highlight a mistake in the final sentence of \cite[Metatheorem 9.41]{mat01} and the final clause, starting after the colon, of \cite[Theorem 16]{mat01} (which paraphrases \cite[Metatheorem 9.41]{mat01}). This erroneous assertion is used by the author in \cite{mck15} to claim that the theory $\mathrm{Mac}+\Delta_0^\mathcal{P}\textrm{-collection}$ represents a new lower-bound on the consistency strength of the theory $\mathrm{NFU}+\mathrm{AxCount}_\leq$. Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} now shows that $\mathrm{Mac}+\Delta_0^\mathcal{P}\textrm{-collection}$ does not represent an improvement on previously known lower-bounds on the consistency strength of $\mathrm{NFU}+\mathrm{AxCount}_\leq$.
\end{Remark1} \section[The strength of $\Pi_n$-collection over $\mathbf{M}$]{The strength of $\Pi_n$-collection over $\mathbf{M}$} \label{Sec:GeneralCollectionResults} In this section we generalise and expand upon Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} to show that for all $n \geq 1$, \begin{enumerate} \item the theory $\mathbf{M}+\Pi_{n+1}\textrm{-collection}$ is $\Pi_{n+3}$-conservative over the theory\\ $\mathbf{M}+\textrm{strong }\Pi_n\textrm{-collection}$, \item the theory $\mathbf{M}+\Pi_{n+1}\textrm{-collection}+ \Sigma_{n+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{Z}+ \Pi_n\textrm{-collection}$. \end{enumerate} The main tool used in the proof of these results will be the following modification and generalisation of Definition \ref{Df:HApproximations}: \begin{Definitions1} \label{Df:nGoodSubmodelApproximation} Let $n, m \in \omega$, and let $\alpha$ be an ordinal. We say that $f$ is an $n$-good $\langle m+1, \alpha \rangle$-submodel approximation if \begin{itemize} \item[(i)] $f$ is a function and $\mathrm{dom}(f)= n+1$ \item[(ii)] $f(\emptyset)= V_\alpha$ \item[(iii)] $(\forall k \in n+1)\exists \beta((\beta \textrm{ is an ordinal})\land f(k)= V_\beta)$ \item[(iv)] $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k+1))((\langle f(k+1), \in\rangle \models \mathrm{Sat}_{\Pi_m}(l, a)) \Rightarrow \mathrm{Sat}_{\Pi_m}(l, a))$$ \item[(v)] $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k))(\mathrm{Sat}_{\Sigma_{m+1}}(l, a) \Rightarrow (\langle f(k+1), \in \rangle \models \mathrm{Sat}_{\Sigma_{m+1}}(l, a)))$$ \end{itemize} \end{Definitions1} An $n$-good $\langle m+1, \alpha\rangle$-submodel approximation is a sequence $\langle V_{\beta_0}, \ldots, V_{\beta_n} \rangle$ such that $V_{\beta_0}= V_\alpha$ (condition (ii)), for all $0 \leq l < k$, $\beta_l \leq \beta_k$ (condition (v) applied to the $\Sigma_1$-formula ``$\exists v(a \in v)$''), each $V_{\beta_k}$ ($1 \leq k \leq n$) is a
$\Pi_m$-elementary submodel of the universe (condition (iv)), and each $V_{\beta_{k+1}}$ satisfies the same $\Sigma_{m+1}$-formulae with parameters from $V_{\beta_k}$ as the universe (condition (v)). Note that if an infinite sequence $\langle V_{\beta_0}, V_{\beta_1}, \ldots \rangle$ is such that for every $n \in \omega$, the first $n+1$ elements of this sequence form an $n$-good $\langle m+1, \alpha \rangle$-submodel approximation, then $\bigcup_{n \in \omega} V_{\beta_n}$ is a $\Pi_{m+1}$-elementary submodel of the universe. We make the following observations about the complexity of Definition \ref{Df:nGoodSubmodelApproximation}: \begin{enumerate} \item The formula ``$f$ is a function and $\mathrm{dom}(f)= n+1$'' is $\Delta_0$ with parameters $f$ and $n$. \item The formula ``$f(\emptyset)= V_\alpha$'' is $\Delta_0$ with parameters $f$ and $V_\alpha$. \item The formula ``$(\forall k \in n+1)\exists \beta((\beta \textrm{ is an ordinal})\land f(k)= V_\beta)$'' is both $\Sigma_2^{\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}}$ and $(\Sigma_1^\mathcal{P})^{\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}}$ with parameters $f$ and $n$. \item For all $m \in \omega$, the formula $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k+1))((\langle f(k+1), \in\rangle \models \mathrm{Sat}_{\Pi_m}(l, a)) \Rightarrow \mathrm{Sat}_{\Pi_m}(l, a))$$ is $\Pi_{\max(1, m)}^{\mathrm{KPI}}$ with parameters $f$ and $n$. \item For all $m \in \omega$, the formula $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k))(\mathrm{Sat}_{\Sigma_{m+1}}(l, a) \Rightarrow (\langle f(k+1), \in \rangle \models \mathrm{Sat}_{\Sigma_{m+1}}(l, a)))$$ is $\Pi_{m+1}^{\mathrm{KPI}}$ with parameters $f$ and $n$. \end{enumerate} In light of these observations we introduce specific notation for the formulae that say that $f$ is an $n$-good $\langle m+1, \alpha \rangle$-submodel approximation. \begin{Definitions1} Let $\alpha$ be an ordinal and let $m \in \omega$.
We write $\Psi_m(n, f, V_\alpha)$ for the formula, with free variables $f$ and $n$, and parameter $V_\alpha$, that, provably in the theory $\mathbf{M}+\textrm{strong } \Pi_1\textrm{-collection}$, asserts that $f$ is an $n$-good $\langle m+1, \alpha\rangle$-submodel approximation, and such that $\Psi_0(n, f, V_\alpha)$ is $\Sigma_2$, $\Psi_1(n, f, V_\alpha)$ is $\mathbf{Bol}(\Sigma_2)$, and if $m > 1$, $\Psi_m(n, f, V_\alpha)$ is $\Pi_{m+1}$. \end{Definitions1} \begin{Lemma1} \label{Th:BaseCaseIKeyLemma} The theory $\mathbf{M}+\textrm{strong } \Pi_{1}\textrm{-collection}$ proves that for all ordinals $\alpha$ and for all $n \in \omega$, there exists an $n$-good $\langle 1, \alpha\rangle$-submodel approximation. \end{Lemma1} \begin{proof} Work in the theory $\mathbf{M}+\textrm{strong } \Pi_{1}\textrm{-collection}$. Let $\alpha$ be an ordinal. We will use $\Sigma_2$-induction on $\omega$ to prove $(\forall n \in \omega) \exists f \Psi_0(n, f, V_\alpha)$. It is clear that $\exists f \Psi_0(\emptyset, f, V_\alpha)$ holds. Let $n \in \omega$ and suppose that $f$ is such that $\Psi_0(n, f, V_\alpha)$ holds. Let $\beta$ be the ordinal such that $f(n)= V_\beta$. Consider the $\Sigma_1$-formula $\psi(x, y)$ defined by $$\exists z \exists a \exists l((x= \langle a, l \rangle) \land (z= \langle y, a \rangle) \land (l= \ulcorner \phi(u, v) \urcorner \textrm{ where } \phi \textrm{ is } \Delta_0)\land \mathrm{Sat}_{\Delta_0}(l, z)).$$ Strong $\Sigma_1$-collection implies that there exists a $C$ such that $$(\forall x \in V_\beta \times \omega)(\exists y \psi(x, y) \Rightarrow (\exists y \in C) \psi(x, y)).$$ Let $\gamma > \beta$ be such that $C \subseteq V_\gamma$. Therefore, for all $l \in \omega$ and for all $a \in V_\beta$, $$\textrm{if } \mathrm{Sat}_{\Sigma_1}(l, a), \textrm{ then } \langle V_\gamma, \in \rangle \models \mathrm{Sat}_{\Sigma_1}(l, a).$$ It now follows that $g= f \cup \{\langle n+1, V_\gamma \rangle\}$ satisfies $\Psi_0(n+1, g, V_\alpha)$.
The fact that $(\forall n \in \omega) \exists f \Psi_0(n, f, V_\alpha)$ holds now follows by $\Sigma_2$-induction on $\omega$. \Square \end{proof} \begin{Lemma1} \label{Th:BaseCaseIIKeyLemma} The theory $\mathbf{M}+\textrm{strong } \Pi_{1}\textrm{-collection}$ proves that for all ordinals $\alpha$, there exists a function $f$ with $\mathrm{dom}(f)=\omega$ such that for all $n \in \omega$, $f \upharpoonright (n+1)$ is an $n$-good $\langle 1, \alpha\rangle$-submodel approximation. \end{Lemma1} \begin{proof} Work in the theory $\mathbf{M}+\textrm{strong } \Pi_{1}\textrm{-collection}$. Let $\alpha$ be an ordinal. Using Lemma \ref{Th:BaseCaseIKeyLemma} and strong $\Sigma_2$-collection, we can find a set $B$ such that $(\forall n \in \omega)(\exists f \in B) \Psi_0(n, f, V_\alpha)$ holds. Now, $\Sigma_2$-separation ensures that $$D= \{f \in B \mid (\exists n \in \omega)\Psi_0(n, f, V_\alpha)\}$$ is a set. Let $$G= \left\{f \in D \Big| (\forall k \in \mathrm{dom}(f))(\forall g \in D)\left(\begin{array}{c} (k \in \mathrm{dom}(g)) \land (g(k) \neq f(k))\\ \Rightarrow f(k) \in g(k) \end{array} \right)\right\},$$ which is a set. Now, for all $f_1, f_2 \in G$, $f_1$ and $f_2$ agree on their common domain. Moreover, a straightforward internal induction using the fact that Lemma \ref{Th:BaseCaseIKeyLemma} holds shows that for all $n \in \omega$, $(\exists f \in G)(\mathrm{dom}(f)=n+1)$ holds. Therefore $g= \bigcup G$ is a function with domain $\omega$ such that for all $n \in \omega$, $\Psi_0(n, g \upharpoonright (n+1), V_\alpha)$ holds. \Square \end{proof} We can now prove analogues of Lemmas \ref{Th:BaseCaseIKeyLemma} and \ref{Th:BaseCaseIIKeyLemma} for the theories $\mathbf{M}+ \Pi_{m}\textrm{-collection}+\Sigma_{m+1}\textrm{-induction on } \omega$ where $m \geq 2$. \begin{Lemma1} \label{Th:KeyLemmaForProvingConsistency} Let $m \geq 1$.
The theory $\mathbf{M}+ \Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves \begin{itemize} \item[(I)] for all ordinals $\alpha$ and for all $n \in \omega$, there exists an $n$-good $\langle m+1, \alpha\rangle$-submodel approximation, \item[(II)] for all ordinals $\alpha$, there exists a function $f$ with $\mathrm{dom}(f)=\omega$ such that for all $n \in \omega$, $f \upharpoonright (n+1)$ is an $n$-good $\langle m+1, \alpha\rangle$-submodel approximation. \end{itemize} \end{Lemma1} \begin{proof} We prove this lemma by external induction on $m$. We begin by proving the induction step. Suppose that (I) and (II) of the lemma hold for $m=p \geq 1$. Work in the theory $\mathbf{M}+\Pi_{p+2}\textrm{-collection}+\Sigma_{p+3}\textrm{-induction on } \omega$. Let $\alpha$ be an ordinal. We will use $\Sigma_{p+3}$-induction on $\omega$ to show that $(\forall n \in \omega) \exists f \Psi_{p+1}(n, f, V_\alpha)$ holds. It is clear that $\exists f \Psi_{p+1}(\emptyset, f, V_\alpha)$ holds. Let $n \in \omega$, and suppose that $\exists f \Psi_{p+1}(n, f, V_\alpha)$ holds. Let $f$ be such that $\Psi_{p+1}(n, f, V_\alpha)$. Let $\delta$ be the ordinal such that $f(n)= V_\delta$. Consider the $\Sigma_{p+2}$-formula $\psi(x, y)$ defined by $$\exists z \exists a \exists l((x= \langle a, l \rangle) \land (z= \langle y, a \rangle) \land (l= \ulcorner \phi(u, v) \urcorner \textrm{ where } \phi \textrm{ is } \Pi_{p+1})\land \mathrm{Sat}_{\Pi_{p+1}}(l, z)).$$ Strong $\Sigma_{p+2}$-collection implies that there exists a $C$ such that $$(\forall x \in V_\delta \times \omega)(\exists y \psi(x, y) \Rightarrow (\exists y \in C) \psi(x, y)).$$ Let $\beta > \delta$ be such that $C \subseteq V_\beta$. Now, using (II) of the induction hypothesis, we can find a function $g$ with $\mathrm{dom}(g)=\omega$ such that for all $q \in \omega$, $\Psi_p(q, g \upharpoonright (q+1), V_\beta)$. Now, let $\gamma > \beta$ be such that $V_\gamma= \bigcup \mathrm{rng}(g)$. 
It follows from (iv) and (v) of Definition \ref{Df:nGoodSubmodelApproximation} that for all $l \in \omega$ and for all $a \in V_\gamma$, $$\textrm{if } \langle V_\gamma, \in \rangle \models \mathrm{Sat}_{\Pi_{p+1}}(l, a), \textrm{ then } \mathrm{Sat}_{\Pi_{p+1}}(l, a).$$ And, since $C \subseteq V_\beta \subseteq V_\gamma$, for all $l \in \omega$ and for all $a \in V_\delta$, $$\textrm{if } \mathrm{Sat}_{\Sigma_{p+2}}(l, a), \textrm{ then } \langle V_\gamma, \in \rangle \models \mathrm{Sat}_{\Sigma_{p+2}}(l, a).$$ Therefore, the function $h=f \cup \{\langle n+1, V_\gamma \rangle\}$ satisfies $\Psi_{p+1}(n+1, h, V_\alpha)$. The fact that $(\forall n \in \omega)\exists f \Psi_{p+1}(n, f, V_\alpha)$ now follows from $\Sigma_{p+3}$-induction on $\omega$. This completes the induction step for (I). Turning our attention to (II), we can use $\Pi_{p+2}$-collection to find a set $B$ such that $(\forall n \in \omega)(\exists f \in B) \Psi_{p+1}(n, f, V_\alpha)$. Now, $\Pi_{p+2}$-separation ensures that $$D= \{f \in B \mid (\exists n \in \omega) \Psi_{p+1}(n, f, V_\alpha)\}$$ is a set. Let $$G= \left\{f \in D \Big| (\forall k \in \mathrm{dom}(f))(\forall g \in D)\left(\begin{array}{c} (k \in \mathrm{dom}(g)) \land (g(k) \neq f(k))\\ \Rightarrow f(k) \in g(k) \end{array} \right)\right\}.$$ As in the proof of Lemma \ref{Th:BaseCaseIIKeyLemma}, if $f_1, f_2 \in G$, then $f_1$ and $f_2$ agree on their common domain, and $(\forall n \in \omega)(\exists f \in G)(\mathrm{dom}(f)=n+1)$. Therefore, $g= \bigcup G$ is a function with $\mathrm{dom}(g)= \omega$ such that for all $n \in \omega$, $\Psi_{p+1}(n, g \upharpoonright (n+1), V_\alpha)$ holds. This completes the induction step for (II). The base case of the induction on $m$ ($m=1$) follows from the same arguments used to prove the induction step with Lemma \ref{Th:BaseCaseIIKeyLemma} replacing the induction hypothesis. This completes the proof of the lemma.
\Square \end{proof} Using Lemma \ref{Th:KeyLemmaForProvingConsistency} we can show that for $m \geq 1$, $\mathbf{M}+ \Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{Z}+\Pi_m\textrm{-collection}$. \begin{Theorems1} \label{Th:CollectionPlusInductionProvesConsistency} Let $m \geq 1$. The theory $\mathbf{M}+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{Z}+\Pi_m\textrm{-collection}$. \end{Theorems1} \begin{proof} Work in the theory $\mathbf{M}+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$. By Lemma \ref{Th:KeyLemmaForProvingConsistency}(II), there exists an $f$ such that $\mathrm{dom}(f)= \omega$, and for all $n \in \omega$, $f \upharpoonright (n+1)$ is an $n$-good $\langle m+1, \omega\rangle$-submodel approximation. Let $\beta$ be an ordinal such that $V_\beta= \bigcup \mathrm{rng}(f)$. We claim that $\langle V_\beta, \in \rangle$ is a set structure that satisfies $\mathrm{Z}+\Pi_m\textrm{-collection}$. Since $\beta$ is a limit ordinal $>\omega$, it is immediate that $\langle V_\beta, \in \rangle$ satisfies all of the axioms of $\mathrm{Z}$. Let $\phi(x, y, \vec{z})$ be a $\Pi_m$-formula. Let $\vec{a}, b \in V_\beta$. Note that Definition \ref{Df:nGoodSubmodelApproximation} implies that $V_\beta$ is a $\Pi_{m+1}$-elementary submodel of the universe, and for all $n \in \omega$, $\langle f(n), \in \rangle \prec_m \langle V_\beta, \in \rangle$. Let $k \in \omega$ be such that $\vec{a}, b \in f(k)$.
Now, it follows from Definition \ref{Df:nGoodSubmodelApproximation}(v) that for all $x \in b$, $$\langle V_\beta, \in \rangle \models \exists y \phi(x, y, \vec{a}) \textrm{ if and only if } \langle V_\beta, \in \rangle \models (\exists y \in f(k+1)) \phi^{\langle f(k+1), \in \rangle}(x, y, \vec{a})$$ $$\textrm{if and only if } \langle V_\beta, \in \rangle \models (\exists y \in f(k+1)) \phi(x, y, \vec{a}).$$ Therefore $$\langle V_\beta, \in \rangle \models (\forall x \in b)(\exists y \phi(x, y, \vec{a}) \Rightarrow (\exists y \in f(k+1)) \phi(x, y, \vec{a}))$$ and so $\langle V_\beta, \in \rangle$ satisfies strong $\Pi_m$-collection. Since $\langle V_\beta, \in \rangle$ is a transitive set structure, we can conclude that $\mathbf{M}+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{Z}+\Pi_m\textrm{-collection}$. \Square \end{proof} We now turn to generalising Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} to show that for all $m \geq 1$, the theories $\mathbf{M}+\textrm{strong } \Pi_m\textrm{-collection}$ and $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$ have the same consistency strength. The key ingredient for this result will be the fact that if $m \geq 1$ and $\mathcal{M}$ is a model of $\mathbf{M}+\textrm{strong } \Pi_m\textrm{-collection}$, then for every standard natural number $n$, there exists an $n$-good $\langle m+1, \omega\rangle$-submodel approximation in $\mathcal{M}$. \begin{Lemma1} \label{Th:GeneralFiniteSatisfiabilityLemma} Let $m \geq 1$ and let $\mathcal{M} \models \mathbf{M}+\textrm{strong } \Pi_m\textrm{-collection}$. For all $n \in \omega$ and for all $\alpha \in \mathrm{Ord}^\mathcal{M}$, $$\mathcal{M} \models \exists f(f \textrm{ is an }n\textrm{-good } \langle m+1, \alpha \rangle\textrm{-submodel approximation}).$$ \end{Lemma1} \begin{proof} Let $\alpha \in \mathrm{Ord}^\mathcal{M}$. We prove the lemma by external induction on $n$. 
It is clear that $$\mathcal{M} \models \exists f(f \textrm{ is a } 0\textrm{-good } \langle m+1, \alpha\rangle\textrm{-submodel approximation}).$$ Suppose that $p \in \omega$ and $f \in M$ are such that $$\mathcal{M} \models (f \textrm{ is a } p\textrm{-good } \langle m+1, \alpha\rangle\textrm{-submodel approximation}).$$ Work inside $\mathcal{M}$. Let $V_\delta$ be the rank such that $f(p)=V_\delta$. Consider the $\Pi_m$-formula $\psi(x, y)$ defined by $$(x=\langle a, l \rangle) \land (l= \ulcorner \phi(u, v) \urcorner \textrm{ where } \phi \textrm{ is } \Pi_m) \land \mathrm{Sat}_{\Pi_m}(l, \langle y, a \rangle).$$ Strong $\Pi_m$-collection implies that there is a set $C$ such that $$(\forall x \in V_\delta \times \omega)(\exists y \psi(x, y) \Rightarrow (\exists y \in C)\psi(x, y)).$$ Let $\gamma > \delta$ be such that $C \subseteq V_\gamma$. Using Lemma \ref{Th:BaseCaseIIKeyLemma} (if $m=1$) or Lemma \ref{Th:KeyLemmaForProvingConsistency} (if $m > 1$), we can find a function $g$ with $\mathrm{dom}(g)= \omega$ such that for all $k \in \omega$, $$g \upharpoonright (k+1) \textrm{ is a } k\textrm{-good } \langle m, \gamma \rangle \textrm{-submodel approximation}.$$ Let $\beta$ be such that $V_\beta= \bigcup \mathrm{rng}(g)$. It follows that for all $l \in \omega$ and for all $a \in V_\beta$, $$\textrm{if }\langle V_\beta, \in \rangle \models \mathrm{Sat}_{\Pi_m}(l, a), \textrm{ then } \mathrm{Sat}_{\Pi_m}(l, a).$$ And, since $C \subseteq V_\beta$, for all $l \in \omega$ and for all $a \in V_\delta$, $$\textrm{if } \mathrm{Sat}_{\Sigma_{m+1}}(l, a), \textrm{ then } \langle V_\beta, \in \rangle \models \mathrm{Sat}_{\Sigma_{m+1}}(l, a).$$ Therefore, $h= f \cup \{\langle p+1, V_\beta\rangle\}$ is a $p+1$-good $\langle m+1, \alpha \rangle$-submodel approximation. This concludes the proof of the induction step and the lemma. 
\Square \end{proof} We now use a generalisation of the construction used in the proof of Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} to obtain a model of $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$ from a model of $\mathbf{M}+\textrm{strong }\Pi_{m}\textrm{-collection}$. \begin{Theorems1} \label{Th:ConsistencyOfCollectionWithStrongCollection} Let $m \geq 1$. \begin{itemize} \item[(I)] The theory $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$ is $\Pi_{m+3}$-conservative over the theory\\ $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}$. \item[(II)] The theory $\mathrm{Mac}+\Pi_{m+1}\textrm{-collection}$ is $\Pi_{m+3}$-conservative over the theory $\mathrm{Mac}+\textrm{strong }\Pi_m\textrm{-collection}$. \end{itemize} \end{Theorems1} \begin{proof} To prove (I) it is sufficient to show that every $\Sigma_{m+3}$-sentence that is consistent with $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}$ is also consistent with $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$. Suppose that $\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$, where $\theta(\vec{x}, \vec{y})$ is a $\Sigma_{m+1}$-formula, is consistent with $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}$. Let $\mathcal{M}= \langle M, \in^\mathcal{M} \rangle$ be a recursively saturated model of $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}+\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. Let $\vec{a} \in M$ be such that $\mathcal{M}\models \forall \vec{y} \theta(\vec{a}, \vec{y})$ and let $\alpha \in M$ be an ordinal such that $\vec{a} \in (V_\alpha^{\mathcal{M}})^*$. Consider the type {\small $$\Xi(x, u)= \{x \in \omega\}\cup\{x > n \mid n \in \omega\}\cup \{\exists f(f \textrm{ is an }x\textrm{-good } \langle m+1, \alpha\rangle\textrm{-submodel approximation})\}.$$} By Lemma \ref{Th:GeneralFiniteSatisfiabilityLemma}, $\Xi(x, u)$ is finitely realised in $\mathcal{M}$, and so there exists $k \in M$ such that $\Xi(k, u)$ is satisfied in $\mathcal{M}$.
Note that $k$ is a nonstandard element of $\omega^{\mathcal{M}}$. Let $f \in M$ be such that $$\mathcal{M} \models (f \textrm{ is a }k\textrm{-good } \langle m+1, \alpha\rangle\textrm{-submodel approximation}).$$ Define $\mathcal{N}= \langle N, \in^\mathcal{N} \rangle$ by $$N= \bigcup_{n \in \omega} f(n^\mathcal{M})^* \textrm{ and } \in^\mathcal{N} \textrm{ is the restriction of } \in^\mathcal{M} \textrm{ to }N.$$ We claim that $\mathcal{N}$ satisfies $\mathbf{M}+\Pi_{m+1}\textrm{-collection}+\exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. Note that $\mathcal{N} \subseteq_e^\mathcal{P} \mathcal{M}$. It follows from the fact that $f$ is a $k$-good $\langle m+1, \alpha \rangle$-submodel approximation that $\mathcal{N} \models \mathbf{M}$ and for all $x \in N$, $\mathcal{P}^\mathcal{N}(x)=\mathcal{P}^\mathcal{M}(x)$. Moreover, Definition \ref{Df:nGoodSubmodelApproximation}(iv) implies that $\mathcal{N} \prec_{m+1} \mathcal{M}$. Therefore, since $\vec{a} \in N$, $\mathcal{N}\models \exists \vec{x} \forall \vec{y} \theta(\vec{x}, \vec{y})$. We are left to show that $\Pi_{m+1}$-collection holds in $\mathcal{N}$. Using exactly the same reasoning that was used in the proof of Theorem \ref{Th:Delta0PCollectionConsistentWithMOST}, we can see that if $C \in M$ is such that $C^* \subseteq N$, then $C \in N$. Now, let $\phi(x, y, \vec{z})$ be a $\Pi_{m+1}$-formula.
Let $\vec{d}, b \in N$ be such that $$\mathcal{N}\models (\forall x \in b) \exists y \phi(x, y, \vec{d}).$$ The following formula is a $\mathbf{Bol}(\Pi_{m+1})$-formula with parameters $\vec{d}$, $k$ and $f$: $$\phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d})).$$ And, since $\mathcal{N} \prec_{m+1} \mathcal{M}$, $$\mathcal{M} \models (\forall x \in b)(\exists y \in f(k))(\phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d}))).$$ Working inside $\mathcal{M}$, $\mathbf{Bol}(\Pi_{m+1})$-separation (Lemma \ref{Th:BasicRelationships}) implies that $$C= \{\langle x, y \rangle \in b \times f(k) \mid \phi(x, y, \vec{d}) \land (\forall n \in k)(y \notin f(n) \Rightarrow \neg (\exists w \in f(n))\phi(x, w, \vec{d}))\}$$ is a set. And, the fact that $\mathcal{N} \prec_{m+1} \mathcal{M}$ ensures that $C^* \subseteq N$. Therefore $C \in N$. Working inside $\mathcal{N}$, let $B= \mathrm{rng}(C)$. So, $$\mathcal{N} \models (\forall x \in b)(\exists y \in B) \phi(x, y, \vec{d}),$$ which shows that $\mathcal{N} \models \Pi_{m+1}\textrm{-collection}$. To see that (II) holds observe that if the Axiom of Choice holds in $\mathcal{M}$ in the proof of (I), then it also holds in $\mathcal{N}$. \Square \end{proof} \begin{Coroll1} \label{Th:ConsistencyOfCollectionWithStrongCollection2} If $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}$ is consistent, then so is $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$. \end{Coroll1} Theorem \ref{Th:CollectionPlusInductionProvesConsistency} and Corollary \ref{Th:ConsistencyOfCollectionWithStrongCollection2} yield: \begin{Coroll1} If $m \geq 1$, then $$\mathbf{M}+ \Pi_{m+1}\textrm{-collection} \vdash \mathrm{Con}(\mathbf{M}+\Pi_m\textrm{-collection})$$ \Square \end{Coroll1} These results also reveal the limitations of the theory $\mathbf{M}+\Pi_m\textrm{-collection}$ when $m \geq 2$.
\begin{Coroll1} \label{Th:LimitationsOfCollection} If $m \geq 1$, then $$\mathbf{M}+\Pi_{m+1}\textrm{-collection} \nvdash \Sigma_{m+2}\textrm{-induction on } \omega.$$ \end{Coroll1} \begin{proof} One can easily verify that by starting with a model of $\mathbf{M}+\textrm{strong }\Pi_m\textrm{-collection}+\neg\mathrm{Con}(\mathrm{Z}+\Pi_m\textrm{-collection})$ in the proof of Theorem \ref{Th:ConsistencyOfCollectionWithStrongCollection}, one obtains a model of $\mathbf{M}+\Pi_{m+1}\textrm{-collection}+\neg\mathrm{Con}(\mathrm{Z}+\Pi_m\textrm{-collection})$. If $\mathbf{M}+\Pi_{m+1}\textrm{-collection}$ proves $\Sigma_{m+2}$-induction on $\omega$, then, by Theorem \ref{Th:CollectionPlusInductionProvesConsistency}, this model would also satisfy $\mathrm{Con}(\mathrm{Z}+\Pi_m\textrm{-collection})$, which is a contradiction. \Square \end{proof} The proof of Proposition 9.20 of \cite{mat01} shows that there is an instance of $\Sigma_2$-induction on $\omega$ that, coupled with the theory $\mathbf{M}$, proves the consistency of $\mathrm{Mac}$. Therefore, by observing that the proof of Theorem \ref{Th:Delta0PCollectionConsistentWithMOST} can be used to obtain a model of $\mathrm{MOST}+\Pi_1\textrm{-collection}+\neg\mathrm{Con}(\mathrm{MOST})$, we can see that there is an instance of $\Sigma_2$-induction on $\omega$ that is not provable in $\mathrm{MOST}+\Pi_1\textrm{-collection}$. Therefore Corollary \ref{Th:LimitationsOfCollection} also holds when $m=0$. \section[The strength of $\Pi_n$-collection over $\mathrm{KPI}$]{The strength of $\Pi_n$-collection over $\mathrm{KPI}+V=L$} \label{Sec:ResultsForKP} In this section we show that the techniques developed in sections \ref{Sec:BaseCaseSection} and \ref{Sec:GeneralCollectionResults} can be adapted to reveal the relative strengths of fragments of the collection scheme over the base theory $\mathrm{KPI}+V=L$. This is achieved by replacing the levels of the $V$-hierarchy in Definition \ref{Df:nGoodSubmodelApproximation} by levels of the $L$-hierarchy.
\begin{Definitions1} \label{Df:nGoodLApproximation} Let $n, m \in \omega$, and let $\alpha$ be an ordinal. We say that $f$ is an $n$-good $\langle m+1, \alpha \rangle$-$L$-approximation if \begin{itemize} \item[(i)] $f$ is a function and $\mathrm{dom}(f)= n+1$ \item[(ii)] $f(\emptyset)= L_\alpha$ \item[(iii)] $(\forall k \in n+1)\exists \beta((\beta \textrm{ is an ordinal})\land f(k)= L_\beta)$ \item[(iv)] $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k+1))((\langle f(k+1), \in\rangle \models \mathrm{Sat}_{\Pi_m}(l, a)) \Rightarrow \mathrm{Sat}_{\Pi_m}(l, a))$$ \item[(v)] $$(\forall k \in n)(\forall l \in \omega)(\forall a \in f(k))(\mathrm{Sat}_{\Sigma_{m+1}}(l, a) \Rightarrow (\langle f(k+1), \in \rangle \models \mathrm{Sat}_{\Sigma_{m+1}}(l, a)))$$ \end{itemize} \end{Definitions1} Note that the only difference between Definitions \ref{Df:nGoodSubmodelApproximation} and \ref{Df:nGoodLApproximation} is that the references to levels of the $V$-hierarchy in clauses (ii) and (iii) of Definition \ref{Df:nGoodSubmodelApproximation} have been replaced by levels of the $L$-hierarchy in Definition \ref{Df:nGoodLApproximation}. It should be clear that the expression ``$f(\emptyset)= L_\alpha$" remains $\Delta_0$ with parameters $f$ and $L_\alpha$, and, in light of Theorem \ref{Th:DefinabilityOfLInKPI}, the expression ``$(\forall k \in n+1)\exists \beta((\beta \textrm{ is an ordinal})\land f(k)= L_\beta)$" is equivalent to a $\Sigma_1$-formula with parameters $f$ and $n$ in the theory $\mathrm{KPI}$. As we did in section \ref{Sec:GeneralCollectionResults}, we introduce specific notation for formulae that express that $f$ is an $n$-good $\langle m+1, \alpha\rangle$-$L$-approximation. \begin{Definitions1} Let $\alpha$ be an ordinal and let $m \in \omega$.
We write $\Psi^*_m(n, f, L_\alpha)$ for the formula, with free variables $f$ and $n$, and parameter $L_\alpha$, that the theory $\mathrm{KPI}$ proves asserts that $f$ is an $n$-good $\langle m+1, \alpha\rangle$-$L$-approximation, and such that $\Psi^*_0(n, f, L_\alpha)$ is $\mathbf{Bol}(\Sigma_2)$, and if $m > 0$, $\Psi^*_m(n, f, L_\alpha)$ is $\Pi_{m+1}$. \end{Definitions1} Using the same arguments as we used in the proofs of Lemmas \ref{Th:BaseCaseIKeyLemma} and \ref{Th:BaseCaseIIKeyLemma} we obtain: \begin{Lemma1} \label{Th:BaseCaseLemmaKPI1} The theory $\mathrm{KPI}+V=L+\Pi_1\textrm{-collection}+\Sigma_2\textrm{-induction on } \omega$ proves that for all ordinals $\alpha$ and for all $n \in \omega$, there exists an $n$-good $\langle 1, \alpha\rangle$-$L$-approximation. \Square \end{Lemma1} \begin{Lemma1} \label{Th:BaseCaseLemmaKPI2} The theory $\mathrm{KPI}+V=L+\Pi_1\textrm{-collection}+\Sigma_2\textrm{-induction on } \omega$ proves that for all ordinals $\alpha$, there exists a function $f$ with $\mathrm{dom}(f)=\omega$ such that for all $n \in \omega$, $f \upharpoonright (n+1)$ is an $n$-good $\langle 1, \alpha\rangle$-$L$-approximation. \Square \end{Lemma1} Lemmas \ref{Th:BaseCaseLemmaKPI1} and \ref{Th:BaseCaseLemmaKPI2} now provide the base case of an induction argument that proves an analogue of Lemma \ref{Th:KeyLemmaForProvingConsistency}. \begin{Lemma1} \label{Th:KeyLemmaForProvingConsistencyKPI} Let $m \in \omega$. The theory $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves \begin{itemize} \item[(I)] for all ordinals $\alpha$ and for all $n \in \omega$, there exists an $n$-good $\langle m+1, \alpha\rangle$-$L$-approximation, \item[(II)] for all ordinals $\alpha$, there exists a function $f$ with $\mathrm{dom}(f)= \omega$ such that for all $n \in \omega$, $f \upharpoonright (n+1)$ is an $n$-good $\langle m+1, \alpha\rangle$-$L$-approximation.
\end{itemize} \Square \end{Lemma1} Lemma \ref{Th:KeyLemmaForProvingConsistencyKPI} provides the key ingredient for showing that the theory\\ $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves the consistency of the theory $\mathrm{KPI}+V=L+\textrm{strong } \Pi_m\textrm{-collection}+\Pi_\infty\textrm{-foundation}$. \begin{Theorems1} \label{Th:ProofOfCosistencyForKPI} Let $m \in \omega$. The theory $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{KPI}+V=L+\textrm{strong }\Pi_m\textrm{-collection}+\Pi_\infty\textrm{-foundation}$. \end{Theorems1} \begin{proof} Work in the theory $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$. By Lemma \ref{Th:KeyLemmaForProvingConsistencyKPI}(II), there exists $f$ such that $\mathrm{dom}(f)=\omega$, and for all $n \in \omega$, $f\upharpoonright (n+1)$ is an $n$-good $\langle m+1, \omega\rangle$-$L$-approximation. Let $\beta$ be an ordinal such that $L_\beta= \bigcup \mathrm{rng}(f)$. We claim that $\langle L_\beta, \in \rangle$ is a set structure that satisfies $\mathrm{KPI}+\textrm{strong }\Pi_m\textrm{-collection}+\Pi_\infty\textrm{-foundation}$ (=$\mathbf{M}^-+\textrm{strong }\Pi_m\textrm{-collection}+\Pi_\infty\textrm{-foundation}$). Note that, since $\beta$ is a limit ordinal, $L_\beta$ is a transitive set that is closed under G\"{o}del operations. Therefore $\langle L_\beta, \in\rangle$ satisfies all of the axioms of $\mathbf{M}^-$. Let $\phi(x, \vec{z})$ be a $\Pi_\infty$-formula and let $\vec{a} \in L_\beta$. Separation in the theory $\mathrm{KPI}$ implies that $$A= \{ x \in L_\beta \mid \langle L_\beta, \in \rangle \models \phi(x, \vec{a})\}$$ is a set. Therefore, set foundation in $\mathrm{KPI}$ implies that if $A\neq \emptyset$, then $A$ has an $\in$-least element.
This shows that $\langle L_\beta, \in\rangle$ satisfies $\Pi_\infty$-foundation. Finally, identical reasoning to that used in the proof of Theorem \ref{Th:CollectionPlusInductionProvesConsistency} shows that $\langle L_\beta, \in\rangle$ satisfies $\textrm{strong }\Pi_m\textrm{-collection}$. Since $\langle L_\beta, \in\rangle$ is a transitive set structure, we can conclude that $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}+\Sigma_{m+2}\textrm{-induction on } \omega$ proves that there exists a transitive model of $\mathrm{KPI}+\textrm{strong }\Pi_m\textrm{-collection}+\Pi_\infty\textrm{-foundation}+V=L$. \Square \end{proof} We next turn to indicating how the proof of Theorem \ref{Th:ConsistencyOfCollectionWithStrongCollection} can be adapted to obtain an analogue of this result with the base theory $\mathbf{M}$ replaced by $\mathrm{KPI}+V=L$. The same argument used in the proof of Lemma \ref{Th:GeneralFiniteSatisfiabilityLemma} can be used to prove the following: \begin{Lemma1} \label{Th:FiniteSatisfiabilityLemmaKPI} Let $m \in \omega$ and let $\mathcal{M} \models\mathrm{KPI}+V=L+\textrm{strong }\Pi_m\textrm{-collection}$. For all $n \in \omega$ and for all $\alpha \in \mathrm{Ord}^\mathcal{M}$, $$\mathcal{M} \models \exists f(f \textrm{ is an } n \textrm{-good } \langle m+1, \alpha \rangle\textrm{-}L\textrm{-approximation}).$$ \Square \end{Lemma1} Lemma \ref{Th:FiniteSatisfiabilityLemmaKPI} yields an analogue of Theorem \ref{Th:ConsistencyOfCollectionWithStrongCollection}. \begin{Theorems1}\label{Th:ConsistencyOfStrongCollectionWithCollectionKPI} Let $m \in \omega$. \begin{itemize} \item[(I)] The theory $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}$ is $\Pi_{m+3}$-conservative over the theory $\mathrm{KPI}+V=L+\textrm{strong }\Pi_m\textrm{-collection}$. \item[(II)] If $\mathrm{KPI}+V=L+\textrm{strong }\Pi_m\textrm{-collection}$ is consistent, then so is $\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection}$.
\end{itemize} \Square \end{Theorems1} Theorems \ref{Th:ProofOfCosistencyForKPI} and \ref{Th:ConsistencyOfStrongCollectionWithCollectionKPI} yield: \begin{Coroll1} If $m \geq 1$, then $$\mathrm{KPI}+V=L+\Pi_{m+1}\textrm{-collection} \vdash \mathrm{Con}(\mathrm{KPI}+V=L+\Pi_m\textrm{-collection})$$ \Square \end{Coroll1} \begin{Quest1} Does the theory $\mathrm{KPI}+V=L+\textrm{strong }\Pi_0\textrm{-collection}$ prove the consistency of $\mathrm{KPI}$? \end{Quest1} I am grateful to Ali Enayat for the following observation: \begin{Remark1}\label{Th:AliRemark} The proofs of Theorems \ref{Th:Delta0PCollectionConsistentWithMOST}, \ref{Th:Delta0PCollectionConsistentWithMac}, \ref{Th:ConsistencyOfCollectionWithStrongCollection} and \ref{Th:ConsistencyOfStrongCollectionWithCollectionKPI} can all be formalised in the subsystem of second order arithmetic $\mathrm{WKL}_0$. The fact that $\mathrm{WKL}_0$ is conservative over Primitive Recursive Arithmetic ($\mathrm{PRA}$) for $\Pi_2$ sentences of arithmetic (see \cite[Theorem IX.3.16]{sim09}) then shows that all of these results are theorems of $\mathrm{PRA}$. \end{Remark1} \noindent{\bf Acknowledgements:} I am very grateful to Adrian Mathias and Ali Enayat for their helpful comments on earlier drafts of this paper. In particular, Ali Enayat's observations led to the strengthening of Theorems \ref{Th:Delta0PCollectionConsistentWithMOST}, \ref{Th:Delta0PCollectionConsistentWithMac}, \ref{Th:ConsistencyOfCollectionWithStrongCollection} and \ref{Th:ConsistencyOfStrongCollectionWithCollectionKPI}. I would also like to thank the anonymous referee for their careful reading of this paper and their thoughtful suggestions. \bibliographystyle{alpha}
\section{INTRODUCTION} \label{sec:introduction} Automatic scene classification aims to classify the input signal into one of several predefined scene labels and help machines understand the environments around them. It has enormous potential in the applications of human-computer interaction, intelligent robotics, smart video surveillance and autonomous driving~\cite{Xie2020}. However, enabling devices to accurately recognize the environment is a challenging task due to noise disturbance and the ambiguity of the input signal. In the past decades, a variety of algorithms have been proposed for the Acoustic Scene Classification (ASC)~\cite{Barchiesi2015} and Visual Scene Classification (VSC)~\cite{Wei2016} tasks. State-of-the-art solutions for ASC are based on spectral features, most commonly the Mel-scale filter bank coefficients (FBank), and convolutional neural network (CNN) architectures~\cite{Sakashita2018}\cite{Chen2019}\cite{Suh2020}. In addition, long-term windows capture scene information at multiple time scales, resulting in more discriminative features~\cite{Chen2021}. By contrast, VSC has a longer history and more types of approaches, e.g., global attribute descriptors~\cite{Oliva2001}, learning spatial layout patterns~\cite{Jiang2012} and discriminative region detection~\cite{Zuo2014}. Moreover, using powerful deep models pre-trained on large-scale image datasets as feature extractors has demonstrated promising performance~\cite{Damodaran2019}. In recent years, researchers have shown increasing interest in audio-visual scene classification (AVSC), which utilizes acoustic and visual information simultaneously to further improve the performance of scene classification~\cite{Wang2021audiovisual}. Prior works on the AVSC task mainly focused on training state-of-the-art models for ASC and VSC individually, then retrieving the intermediate embeddings of each model to train the scene classifier~\cite{Parekh2020}\cite{Wang2021b}\cite{Wang2021}\cite{Okazaki2021}.
We name this two-stage procedure “pipeline training strategy”. In this way, the extracted embeddings contain abundant information within each modality, and speed up the convergence of the classifier. However, the complementarity and redundancy across modalities are neglected. To model the interactions within and across modalities simultaneously, various fusion methods have been proposed in other multi-modal fields~\cite{Huang2020}\cite{Gao2019}\cite{Xu2019}. Most of these methods perform well on sequence-related tasks using recurrent neural networks (RNNs). However, the AVSC task is less sensitive to sequential relations. In this paper, we propose a “joint training strategy” for the AVSC task to model the interactions within and across modalities in a unified framework. The main contribution of this work is threefold. First, we introduce the long-term scalogram to the AVSC task, inspired by~\cite{Chen2021}, and explore a frame-level alignment for audio-visual streams at the front end. Second, we retrieve the bottom layers of pre-trained image models to serve as deep visual representation extractors to overcome the problem of image data sparsity. Third, we propose to jointly optimize the acoustic encoder and scene classifier. In this way, the acoustic encoder is able to interact with visual representations to construct more discriminative audio-visual embeddings for the scene classifier. This work extends our prior system~\cite{Wang2021b}, which ranked first in Task 1B of the 2021 Detection and Classification of Acoustic Scenes and Events (DCASE) challenge. \section{METHODOLOGY} \label{sec:methodology} \subsection{Model Structure} \label{ssec:model} Fig.~\ref{fig:overview} illustrates our proposed joint optimization framework for the AVSC task. The system is built with three modules: the acoustic encoder (AE), the visual encoder (VE) and the scene classification module (SC).
The bottom layers of a pre-trained image model are utilized as VE to extract deep representations, and the parameters are frozen during training. The structure of AE is based on 1D-DCNN~\cite{Chen2021}. We introduce residual learning~\cite{He2016} and name it “res-DCNN”. Specifically, we use a convolutional layer to extract features and an extra convolutional layer to match the dimension of input and output of each CNN block. SC is built with stacked fully connected layers followed by a SoftMax layer, and its parameters are updated along with those of AE during training. \begin{figure}[htb] \centering \includegraphics[width=8.cm]{framework.png} \caption{Block diagram of the proposed framework.} \label{fig:overview} \end{figure} \subsection{Transfer Learning Based Joint Optimization} \label{ssec:transfer} \subsubsection{Pre-processing of Audio-visual Stream} \label{sssec:prepro} The audio files in the development dataset~\cite{Wang2021dataset} are recorded in a binaural way. To extract features, the magnitude of the STFT spectrum is first calculated every 171 {\it ms} over 512 {\it ms} windows on the raw signal resampled at 16 kHz. Then the wavelet filters are applied directly to the STFT spectrum to obtain the scalogram feature. For more details of the wavelet filters, please refer to~\cite{Chen2021}. We also extract long-term FBank for comparison. The only difference is that we apply Mel-scale filters to the STFT spectrum. For all features, we use the average and difference channels instead of the left and right channels. On the video stream, we perform down-sampling with a frame rate of 1 fps since the images in the same video segment vary little. Then the acoustic frames and visual frames of the same video segment are time-aligned to construct our training dataset $ \mathcal{D} = \{(a_{1},v_{1},s_{1}),...,(a_{N},v_{N},s_{N})\}$, where $a_{i}, v_{i}, s_{i}$ denote the $i$th acoustic frame, visual frame and scene label, respectively.
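The front-end alignment described above can be sketched as follows. This is a minimal, illustrative NumPy sketch: the 512 {\it ms} window, 171 {\it ms} hop and 16 kHz sample rate come from the text, while the helper names and the Hann-windowed placeholder STFT are our own assumptions (the actual system applies wavelet or Mel filters to the STFT magnitudes).

```python
import numpy as np

SR = 16000                 # resampling rate from the text
WIN = int(0.512 * SR)      # 512 ms analysis window
HOP = int(0.171 * SR)      # 171 ms hop

def stft_magnitude(signal):
    """Magnitude STFT with a Hann window (placeholder front end).

    The actual system applies wavelet (scalogram) or Mel (FBank)
    filters to these magnitudes.
    """
    n = 1 + max(0, (len(signal) - WIN) // HOP)
    frames = np.stack([signal[i * HOP:i * HOP + WIN] for i in range(n)])
    return np.abs(np.fft.rfft(frames * np.hanning(WIN), axis=1))

def avg_diff_channels(left, right):
    """Average/difference channels used in place of left/right."""
    return np.stack([(left + right) / 2.0, (left - right) / 2.0])

def align_frames(num_audio_frames, clip_seconds):
    """Pair each acoustic frame with the 1 fps image covering its centre."""
    pairs = []
    for i in range(num_audio_frames):
        t = (i * HOP + WIN / 2) / SR        # frame centre in seconds
        pairs.append((i, min(int(t), clip_seconds - 1)))
    return pairs
```

With this framing convention, a 10-second clip yields 56 acoustic frames, each paired with one of the 10 images, giving the $(a_i, v_i, s_i)$ triples of $\mathcal{D}$.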
\subsubsection{Pre-training of Visual Encoder} \label{sssec:pretrain} In this stage, we utilize the powerful image models developed in the computer vision area to obtain robust representations of visual frames. To validate the generality of our approach, we adopt the recently released EfficientNetV2-Small~\cite{Tan2021} and the classic ResNet50~\cite{He2016}. After downloading the open-source models pre-trained on ImageNet~\cite{Deng2009}, we fine-tune them on Places365~\cite{Zhou2018}, a large image dataset for scene recognition, to adjust the models to be more suitable for scene classification. Different from the conventional pipeline training strategy, we do not extract visual embeddings in advance. \subsubsection{Joint Training of Acoustic Encoder and Scene Classifier} \label{sssec:e2e train} Given a training sample $(a_{i}, v_{i}, s_{i}) \in \mathcal{D}$, the intermediate representation $e_{i}$ can be calculated by \begin{equation} e_{i} = {\rm AE}(a_{i}) \oplus {\rm VE}(v_{i}), \end{equation} where $\oplus$ denotes the concatenation operation. The cross-entropy loss of SC’s output, denoted as $L_{scene}$, is defined by \begin{equation} L_{scene} = \sum_{i=1}^{N} -\log P(s_{i} | {\rm SC}(e_{i})). \end{equation} The performance of SC is optimized by minimizing $L_{scene}$. We propose to update the parameters of AE and SC simultaneously through gradient descent, while the parameters of VE are frozen. In this way, VE is initialized with the ability to extract the global scene attribute of images, while AE learns to cooperate with VE to generate a more discriminative and compact representation. Moreover, data augmentation methods could be applied to raw inputs during the training procedure. We have tried to randomly initialize and train the whole model including VE in a fully end-to-end way. However, it took much more time for the system to converge and the final results were not satisfactory.
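The joint update of AE and SC with a frozen VE, as in Eqs. (1) and (2), can be sketched with toy linear stand-ins for the three modules. All dimensions, the learning rate and the linear parameterisation are illustrative assumptions; only the structure mirrors the text: concatenated embeddings, cross-entropy on SC's output, and no gradient flowing into VE.

```python
import numpy as np

rng = np.random.default_rng(0)
D_A, D_V, D_E, N_CLS = 290, 128, 64, 10     # toy dimensions, not the paper's

# Linear stand-ins: AE and SC are trainable, VE is frozen (pre-trained).
W_ae = rng.normal(size=(D_A, D_E)) * 0.01
W_ve = rng.normal(size=(D_V, D_E)) * 0.01   # frozen: never updated below
W_sc = rng.normal(size=(2 * D_E, N_CLS)) * 0.01

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(a, v):
    e = np.concatenate([a @ W_ae, v @ W_ve], axis=-1)  # e_i = AE(a_i) (+) VE(v_i)
    return e, softmax(e @ W_sc)

def joint_step(a, v, s, lr=0.5):
    """One SGD step on L_scene, updating AE and SC only."""
    global W_ae, W_sc
    e, p = forward(a, v)
    n = len(s)
    d_logits = p.copy()
    d_logits[np.arange(n), s] -= 1.0
    d_logits /= n                            # grad of mean cross-entropy
    g_sc = e.T @ d_logits
    d_e = d_logits @ W_sc.T
    g_ae = a.T @ d_e[:, :D_E]                # the VE half of d_e is discarded
    W_sc -= lr * g_sc
    W_ae -= lr * g_ae
    return -np.mean(np.log(p[np.arange(n), s] + 1e-12))
```

Only `W_ae` and `W_sc` receive updates; `W_ve` plays the role of the frozen, pre-trained VE.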
To a certain extent, our approach is a trade-off between the conventional pipeline and end-to-end training strategies. \section{EXPERIMENTAL SETUP} \label{sec:experiments} \subsection{Dataset} \label{ssec:dataset} The development dataset of TAU Urban Audio-Visual Scenes 2021~\cite{Wang2021dataset} contains 34 hours of synchronized audio-visual data from 10 European cities, provided in files with a length of 10 seconds. It consists of 10 scene classes, including airport, metro station, public square, etc. The official training and test folds consist of 8646 and 3645 files, respectively. In our experiments, approximately 10\% of the training fold was randomly selected and reserved as the validation fold for hyperparameter fine-tuning and early stopping. \subsection{Implementation Details} \label{ssec:implement} The detailed hyperparameters of the model are shown in Table~\ref{tab:details}. The notation ``2-3 Conv(pad=0,stride=1)-4-BN-ReLU-AvgPooling(pad=1,stride=2)'' denotes a convolutional kernel with 2 input channels, 4 output channels and a size of 3, followed by batch normalization and ReLU activation, and finally an average pooling layer. We only used image data augmentation methods, including RandomResizedCrop, RandomHorizontalFlip, ColorJitter~\cite{paszke2019pytorch} and their combinations. Besides, we used stochastic gradient descent (SGD) with cosine annealing and warm restarts~\cite{Loshchilov2017} to optimize the model, with a batch size of 256 and a maximum of 150 epochs. The maximum and minimum learning rates were set to 1e-2 and 1e-5, respectively. The models with the best validation loss were retained.
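The learning-rate schedule, SGD with cosine annealing and warm restarts, follows the closed form of Loshchilov and Hutter~\cite{Loshchilov2017}. The sketch below uses the maximum and minimum rates stated above (1e-2 and 1e-5); the cycle lengths `t0` and `t_mult` are illustrative assumptions, since the text does not specify them.

```python
import math

LR_MAX, LR_MIN = 1e-2, 1e-5   # learning-rate range from the text

def cosine_warm_restart_lr(epoch, t0=10, t_mult=2):
    """Learning rate at `epoch` under SGDR (cosine annealing, warm restarts).

    t0 is the length of the first cycle; each restart multiplies the cycle
    length by t_mult. Within a cycle of length T_i, at position T_cur:
        lr = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * T_cur / T_i))
    """
    t_i, t_cur = t0, epoch
    while t_cur >= t_i:        # locate the current cycle
        t_cur -= t_i
        t_i *= t_mult
    return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * t_cur / t_i))
```

At each restart the rate jumps back to `LR_MAX`, which repeatedly kicks the optimizer out of sharp minima while the cosine decay anneals it towards `LR_MIN` within each cycle.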
\begin{table}[htbp] \centering \caption{\label{tab:details} Details of model structure and configuration.} \scriptsize \begin{tabular}{cl} \toprule Model & Settings \\ \midrule \multirow{2}{*}{Input} & Frame size of scalogram: 2*290; FBank: 2*256 \\ & Image size for EffNetV2-S: 3*288*288; ResNet50: 3*224*224 \\ \midrule \multirow{7}{*}{AE} & 2-3 Conv(pad=0,stride=1)-4-BN-ReLU-AvgPooling(pad=1,stride=2) \\ & 4-3 Conv(pad=0,stride=1)-8-BN-ReLU-AvgPooling(pad=1,stride=2) \\ & 8-3 Conv(pad=0,stride=1)-16-BN-ReLU-AvgPooling(pad=1,stride=2) \\ & 16-3 Conv(pad=0,stride=1)-32-BN-ReLU-AvgPooling(pad=1,stride=2) \\ & Flatten and concatenate input as well as Conv's output \\ & Linear(2048 units)-BN-ReLU-Dropout(p=0.5) \\ & Linear(1024 units)-BN-ReLU \\ \midrule VE & Refer to~\cite{He2016} and~\cite{Tan2021}, and remove the final classification layer \\ \midrule \multirow{2}{*}{SC} & Linear (1024 units)-BN-ReLU-Dropout(p=0.5) \\ & Linear (10 units)-SoftMax \\ \bottomrule \end{tabular} \end{table} In the test stage, we followed the official setup and split the test fold into 1-second segments, with a total of 36450 segments. Since the model was trained at the frame level, we took the average of the frame-wise probability distributions within the same segment as the final output. \subsection{Baseline Systems} \label{ssec:base} Three baseline systems were constructed to make a comparison with our proposed joint optimization audio-visual system.\\ \textbf{Audio-only system:} The model consists of two modules, AE and SC, as depicted in Fig.~\ref{fig:overview}. Only the audio files of the dataset were used to evaluate the system.\\ \textbf{Video-only system:} The model consists of two modules, VE and SC, as depicted in Fig.~\ref{fig:overview}. To make a fair comparison, VE was also pre-trained as described in Sec.~2.2, and its parameters were frozen during training.
Only the images of the dataset were used to evaluate the system.\\ \textbf{Pipeline audio-visual system:} First, we retrieved the trained AE of the above audio-only system and the pre-trained VE to extract acoustic and visual embeddings, respectively. The embeddings were then used to train SC. \section{RESULTS AND DISCUSSION} \label{sec:results} \subsection{Experimental Results} \label{ssec:exp_results} Evaluation of the systems was performed using two metrics as suggested in~\cite{Wang2021audiovisual}: multi-class cross-entropy (log-loss) and accuracy. Both metrics are calculated as the average of the class-wise performance, and the log-loss was the principal metric of DCASE2021 task 1B. \begin{table}[htbp] \centering \caption{\label{tab:evaluation of uni-modal systems}Evaluation of different uni-modal systems.} \footnotesize \begin{tabular}{ccccc} \toprule System & Input & Backbone & Log-loss & Acc/\% \\ \midrule \multirow{2}{*}{Audio-only} & FBank & Res-DCNN & 0.6968 & 76.08 \\ & scalogram & Res-DCNN & \textbf{0.6325} & \textbf{77.98} \\ \midrule \multirow{2}{*}{Video-only} & raw image & ResNet50 & 0.3939 & 87.03 \\ & raw image & EffNetV2-S & \textbf{0.2731} & \textbf{90.91} \\ \bottomrule \end{tabular} \end{table} As shown in Table~\ref{tab:evaluation of uni-modal systems}, in the case of the audio-only system, using the long-term scalogram as the acoustic feature achieved a lower log-loss and higher accuracy compared with FBank. For the video-only system, the EfficientNetV2-Small backbone proved more effective at extracting high-level representations from raw images.
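To make the metrics concrete: both log-loss and accuracy are computed per class and then averaged over classes. A minimal sketch is given below; the official DCASE evaluation script may differ in details such as probability clipping, which is an assumption here.

```python
import math

def classwise_metrics(y_true, probs, n_classes):
    # Group predictions by reference class, compute log-loss and accuracy
    # within each class, then average the class-wise values.
    # Assumes every class occurs at least once in y_true.
    loss = [[] for _ in range(n_classes)]
    hits = [[] for _ in range(n_classes)]
    for y, p in zip(y_true, probs):
        loss[y].append(-math.log(max(p[y], 1e-15)))  # clip to avoid log(0)
        hits[y].append(max(range(n_classes), key=p.__getitem__) == y)
    log_loss = sum(sum(l) / len(l) for l in loss) / n_classes
    acc = sum(sum(h) / len(h) for h in hits) / n_classes
    return log_loss, acc
```

Because the average is taken over classes rather than over segments, a class that is rare in the test fold weighs as much as a frequent one.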
\begin{table}[htbp] \centering \caption{\label{tab:evaluation of audio-visual systems}Evaluation of different audio-visual systems.} \footnotesize \begin{tabular}{ccccc} \toprule Training strategy & Feature(A) & Model(V) & Log-loss & Acc/\% \\ \midrule \multirow{4}{*}{Pipeline} & FBank & ResNet50 & 0.3808 & 88.87 \\ & FBank & EffNetV2-S & 0.2146 & 92.84 \\ & scalogram & ResNet50 & 0.3506 & 89.29 \\ & scalogram & EffNetV2-S & \textbf{0.2055} & \textbf{93.14} \\ \midrule \multirow{4}{*}{Joint} & FBank & ResNet50 & 0.2664 & 91.13 \\ & FBank & EffNetV2-S & 0.1788 & 93.45 \\ & scalogram & ResNet50 & 0.2495 & 91.55 \\ & scalogram & EffNetV2-S & \textbf{0.1517} & \textbf{94.59} \\ \bottomrule \end{tabular} \end{table} In Table~\ref{tab:evaluation of audio-visual systems}, we summarize the performance of different combinations of acoustic features and pre-trained image models. Compared with the pipeline training strategy, we observe a consistent performance gain for all combinations when using the joint training strategy. Moreover, the gains from better acoustic features and better pre-trained image models accumulate. Finally, the best single system turned out to be the joint optimization audio-visual system using the long-term scalogram as the acoustic feature and EfficientNetV2-Small as the pre-trained image model. All subsequent experiments were performed on this combination. \subsection{Ablation Study} \label{ssec:ablation} The key point of the joint training strategy lies in two aspects: (1) AE is trainable in the joint training. (2) Raw images are used as inputs in the joint training, and data augmentation methods are applied in the training procedure. Table~\ref{tab:ablation result} presents the evaluation results for these key factors. \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral4} represent the pipeline and joint training strategy, respectively.
Comparing \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}, introducing joint optimization without data augmentation led to performance degradation. The reason may be that the extracted embeddings were not diverse enough to train a robust system. Comparing \uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral3}, using augmented raw images as input significantly improved the performance. On this basis, comparing \uppercase\expandafter{\romannumeral3} and \uppercase\expandafter{\romannumeral4} shows that making AE trainable further boosts the performance of the system. \begin{table}[htbp] \centering \caption{\label{tab:ablation result}Ablation study of the audio-visual systems.} \small \begin{tabular}{ccccc} \toprule No. & Feature(V) & Model(A) & Log-loss & Acc/\% \\ \midrule \uppercase\expandafter{\romannumeral1} & embedding & pre-trained & 0.2055 & 93.14 \\ \uppercase\expandafter{\romannumeral2} & embedding & trainable & 0.2144 & 92.39 \\ \uppercase\expandafter{\romannumeral3} & raw image & pre-trained & 0.1591 & 94.31 \\ \uppercase\expandafter{\romannumeral4} & raw image & trainable & \textbf{0.1517} & \textbf{94.59} \\ \bottomrule \end{tabular} \end{table} \subsection{Visualization Analysis} \label{ssec:visualize} We use t-SNE~\cite{van2008visualizing} to visualize the embeddings of the test fold under different training strategies. As shown in Fig.~\ref{fig:tsne} (a) and (b), in the pipeline setting, the points of the metro, bus, and tram classes are entangled with each other. By contrast, we can observe clear boundaries between the clusters of these categories in the joint optimization setting.
However, comparing Fig.~\ref{fig:tsne} (c) and (d), the acoustic embeddings of different categories under the joint training strategy are more dispersed, which indicates that the optimization of the acoustic encoder has shifted toward constructing more discriminative audio-visual embeddings rather than better acoustic embeddings. \begin{figure}[htb] \centering \scalebox{0.9}{ \includegraphics[width=8.cm]{compare_new.png}} \caption{The t-SNE visualization of embedding distributions using different training strategies.} \label{fig:tsne} \end{figure} \subsection{Comparison With Prior Works} \label{ssec:comparison} In Table~\ref{tab:comparison}, we compare the performance of our proposed joint optimization framework with the state-of-the-art methods of DCASE2021 task 1B. For a fair comparison, we select the best audio-visual single systems from the technical reports~\cite{Wang2021b}~\cite{Wang2021}~\cite{Yang2021}. Our approach achieves the lowest log-loss of 0.152 and the highest accuracy of 94.6\%. All the other systems, including the official baseline, adopted the pipeline training strategy. The systems of~\cite{Wang2021} and~\cite{Yang2021} explored various state-of-the-art audio and visual models pre-trained on large-scale datasets, and applied numerous data augmentation methods to both acoustic features and images. The key difference in our approach is that we aim to directly derive the best audio-visual embeddings, instead of obtaining the best acoustic and visual embeddings separately.
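The fusion step itself is lightweight: the acoustic and visual embeddings are combined and passed through the shared classifier SC (two linear layers ending in a softmax, cf.\ Table~\ref{tab:details}). The toy sketch below assumes concatenation as the fusion operation and uses tiny random stand-in weights; batch normalization and dropout are omitted for brevity.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]  # subtract max for numerical stability
    s = sum(e)
    return [v / s for v in e]

def fused_scene_classifier(audio_emb, visual_emb, w1, w2):
    # Concatenate the two embeddings, apply one ReLU layer and one softmax
    # layer. BN and dropout of the SC module are omitted, and the weights
    # are random stand-ins rather than trained parameters.
    x = audio_emb + visual_emb  # list concatenation = feature concatenation
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    z = [sum(w * hi for w, hi in zip(row, h)) for row in w2]
    return softmax(z)
```

With 10 output rows in \texttt{w2}, the result is a probability distribution over the 10 scene classes, which is then averaged over the frames of a segment as described above.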
\begin{table}[htbp] \centering \caption{\label{tab:comparison}Comparison with previous state-of-the-art methods.} \small \begin{tabular}{cccc} \toprule Method & Training strategy & Log-loss & Acc/\% \\ \midrule Official baseline~\cite{Wang2021dataset} & Pipeline & 0.658 & 77.0 \\ \midrule Wang et al.~\cite{Wang2021b} & Pipeline & 0.159 & 94.1 \\ Wang et al.~\cite{Wang2021} & Pipeline & 0.183 & 93.8 \\ Yang et al.~\cite{Yang2021} & Pipeline & 0.223 & 93.9 \\ \midrule Our proposed & Joint & \textbf{0.152} & \textbf{94.6} \\ \bottomrule \end{tabular} \end{table} \section{CONCLUSION} \label{sec:conclusion} In this paper, we propose a joint training strategy for audio-visual scene classification. Compared with the conventional pipeline setting, our approach can derive more discriminative audio-visual embeddings. Besides, using the long-term scalogram as the acoustic feature demonstrates better performance than FBank in all of our experiments. Results of the ablation study show that the data augmentation methods applied to the raw images play a vital role in the joint training procedure. In the future, we intend to explore knowledge distillation methods across modalities to tackle modality asynchrony or missing-modality problems in the real world. \vfill\pagebreak \bibliographystyle{IEEEbib} \footnotesize
\section{Introduction} It is well known that any map $\phi:\mathbb{F}_q\rightarrow \mathbb{F}_q$ can be expressed uniquely as a polynomial $f \in \mathbb{F}_q[x]$ of degree less than $q$. We say that $f$ is the \emph{reduced polynomial} corresponding to $\phi$ and that the \emph{reduced degree} of $\phi$ is the degree of $f$. A polynomial $f \in \mathbb{F}_q[x]$ is a \emph{permutation polynomial} if the map $x \mapsto f(x)$ is a permutation of $\mathbb{F}_q$. For $q>2$ it is well known that the reduced degree of a permutation polynomial is at most $q - 2$, and using Lagrange interpolation it is easily verified that any transposition has reduced degree exactly $q - 2$. A permutation polynomial $f \in \mathbb{F}_q[x]$ is an \emph{orthomorphism polynomial} if the map $x \mapsto f(x) - x$ is also a permutation of $\mathbb{F}_q$. Orthomorphisms have many applications in design theory, especially to Latin squares \cite{evans2018orthogonal,wanless2004diagonally}. The following theorem was proven by Niederreiter and Robinson \cite{niederreiter1982complete} for fields of odd characteristic, and by Wan \cite{wan1986problem} for fields of even characteristic. \begin{thm}\label{t:reddegorth} For $q>3$ any orthomorphism polynomial over $\mathbb{F}_q$ has reduced degree at most $q - 3$. \end{thm} Our first goal is to establish when the bound in \tref{t:reddegorth} is achieved. It was known \cite{shallue2013permutation} that the bound in \tref{t:reddegorth} is not achieved when $q\in\{2, 3, 5, 8\}$. We show: \begin{thm}\label{t:bndach} There exists an orthomorphism polynomial of degree $q - 3$ over $\mathbb{F}_q$ if and only if $q \notin \{2, 3, 5, 8\}$. \end{thm} We define the \emph{Hamming distance} $H(f,g)$ between two polynomials $f,g\in\mathbb{F}_q[x]$ by $H(f,g)=\big|\{a\in\mathbb{F}_q:f(a)\ne g(a)\}\big|$. For two distinct permutations $f,g$ it is obvious that $H(f,g)\ge2$. Suppose that $H(f,g)=2$, with $f$ and $g$ differing exactly at $a$ and $b$. Since $f$ and $g$ are permutations that agree outside $\{a,b\}$, we must have $g(a)=f(b)$ and $g(b)=f(a)$. If $f$ and $g$ are both orthomorphisms then $x\mapsto g(x)-x$ is a permutation, so $\{f(b)-a,f(a)-b\}=\{f(a)-a,f(b)-b\}$; since $a\ne b$, this forces $f(b)-a=f(a)-a$, that is $f(a)=f(b)$, a contradiction.
It follows that if $f,g\in\mathbb{F}_q[x]$ are distinct orthomorphism polynomials then $H(f,g)\ge3$. We investigate when this bound is tight. Our second main result is as follows: \begin{thm}\label{t:Ham3} There exist orthomorphism polynomials $f,g\in\mathbb{F}_q[x]$ that satisfy $H(f,g)=3$ if and only if $q \notin \{2, 5, 8\}$. \end{thm} Cavenagh and Wanless \cite{cavenagh2010number} showed the special case of \tref{t:Ham3} in which $q$ is prime. Their motivation was an application to Latin bitrades that we discuss in the next section. Suppose that $q-1=nk$, for some positive integers $n, k$. Let $\gamma$ be a primitive element of $\mathbb{F}_q^*$. Then we define $C_{j, n} =\{\gamma^{ni+j} : 0 \leq i \leq k - 1\}$ to be a \emph{cyclotomic coset} of the unique subgroup $C_{0,n}$ of index $n$ in $\mathbb{F}_q^*$. A cyclotomic map $\psi_{a_0, \dots, a_{n - 1}}$ of index $n$ can then be defined by \begin{equation}\label{e:cyceqn} \psi_{a_0, \dots, a_{n - 1}}(x) = \begin{cases} 0 &\text{if } x = 0, \\ a_ix &\text{if } x \in C_{i,n}, \end{cases} \end{equation} where $a_0, \dots, a_{n - 1} \in \mathbb{F}_q$. An orthomorphism is \emph{non-cyclotomic} if it cannot be written as a cyclotomic map for any index $n<q-1$. We define a \emph{translation} $T_g$ of an orthomorphism $\theta$ to be the orthomorphism $T_g[\theta](x) = \theta(x + g) - \theta(g)$. We say that an orthomorphism $\theta$ is \emph{irregular} if $T_g[\theta]$ is non-cyclotomic for all $g\in\mathbb{F}_q$. It was conjectured in \cite{FW17} that irregular orthomorphisms exist over all sufficiently large fields. We prove this and more in our last main result: \begin{thm}\label{t:irreg} There are irregular orthomorphisms over $\mathbb{F}_q$ for $7<q\not\equiv1\bmod3$ and for even $q>4$. For fields of odd characteristic, asymptotically almost all orthomorphisms are irregular. \end{thm} Note that the $q=2^{2k+1}$ subcase of \tref{t:irreg} was already shown in \cite{FW17}. The structure of this paper is as follows. 
In the next section we provide several different constructions for orthomorphisms that are as close as possible to each other in Hamming distance. The proofs of our main results are given in \sref{s:summary}. Then in \sref{s:conclude} we offer two conjectures for future research. \section{Orthomorphisms at minimal Hamming distance} In this section we provide several different methods for producing pairs of orthomorphisms that are as close to each other as possible, in Hamming distance. None of our methods work for all fields, but together our methods will combine in \sref{s:summary} to prove \tref{t:Ham3}. \tref{t:bndach} will then follow immediately given the next observation. \begin{lem}\label{l:polydiff} Suppose that $f, g \in \mathbb{F}_q[x]$ are reduced orthomorphism polynomials, where $q>3$. If $H(f,g)=3$, then at least one of $f$ or $g$ must have degree $q-3$. \end{lem} \begin{proof} Let $h = f - g$. Then $\deg(h) \leq \max\{\deg(f), \deg(g)\}\leq q-3$ by \tref{t:reddegorth}. Now $h$ is nonzero but has $q - 3$ roots, so $\deg(h) \geq q - 3$. It follows that $\max\{\deg(f), \deg(g)\} = q-3$. \end{proof} Suppose that orthomorphism polynomials $f,g\in\mathbb{F}_q[x]$ satisfy $H(f,g)=k$. Define \begin{align*} L_1&=\big\{(i,f(j)-j+i,f(j)+i):i,j\in\mathbb{F}_q,\ f(j)\ne g(j)\big\},\\ L_2&=\big\{(i,g(j)-j+i,g(j)+i):i,j\in\mathbb{F}_q,\ f(j)\ne g(j)\big\}. \end{align*} Then it is easy to check that $L_1$ and $L_2$ are disjoint sets of $kq$ ordered triples each, such that \begin{itemize} \item The projection of $L_1$ onto any two coordinates is $1$-to-$1$, and its image equals the image of the same projection acting on $L_2$. \item The projection of $L_1$ (or $L_2$) onto any one coordinate is $k$-to-$1$. \end{itemize} These are exactly the conditions that mean that the pair $(L_1,L_2)$ forms what is called a \emph{$k$-homogeneous Latin bitrade} (cf.~\cite{cavenagh2010number}). 
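These conditions are finite and mechanically checkable. The following sketch brute-forces the orthomorphisms of $\mathbb{Z}_7$, picks a pair at the minimum Hamming distance $3$ (which exists by \tref{t:Ham3}), and builds the triple sets $L_1$ and $L_2$ defined above; the $3$-homogeneous Latin bitrade conditions can then be verified directly.

```python
from itertools import permutations

q = 7  # the prime field Z_7 is small enough for exhaustive search

# Orthomorphisms of Z_q: permutations f such that x -> f(x) - x is also one.
orthos = [f for f in permutations(range(q))
          if len({(f[x] - x) % q for x in range(q)}) == q]

# A pair of orthomorphisms at the minimum possible Hamming distance 3.
f, g = next((f, g) for f in orthos for g in orthos
            if sum(f[x] != g[x] for x in range(q)) == 3)

def triples(h):
    # Ordered triples (i, h(j) - j + i, h(j) + i), restricted to the
    # positions j where f and g differ, as in the definition of L1 and L2.
    return {(i, (h[j] - j + i) % q, (h[j] + i) % q)
            for i in range(q) for j in range(q) if f[j] != g[j]}

L1, L2 = triples(f), triples(g)
```

The checks below confirm disjointness, the $1$-to-$1$ two-coordinate projections with equal images, and the $3$-to-$1$ single-coordinate projections.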
Hence all the methods in the following subsections can be applied to construct $3$-homogeneous Latin bitrades. For most fields, we will end up giving several different construction methods. \subsection{Fields of characteristic $p \notin \{2, 5\}$} Our first method works when the characteristic of the field is not equal to $2$ or $5$. \begin{thm}\label{non25} Suppose that $\mathbb{F}_q$ has characteristic $p\notin\{2,5\}$. Then there exist orthomorphism polynomials $f,g\in\mathbb{F}_{q}[x]$ with $H(f,g)=3$. \end{thm} \begin{proof} For $q=3$ the orthomorphism polynomials $f=2x$ and $g=2x + 1$ suffice. Thus we may assume that $q=p^r>5$. Let $P$ be the prime subfield of $\mathbb{F}_q$. Cavenagh and Wanless \cite{cavenagh2010number} showed that there exist orthomorphisms $\phi, \theta\in P[x]$ such that $H(\phi,\theta)=3$. If $r=1$ then we are done, so assume that $r>1$. Define the following maps on $\mathbb{F}_q$, \begin{equation*} f(x)= \begin{cases} 2x &\text{if } x\notin P, \\ \phi(x) &\text{if } x\in P, \end{cases} \end{equation*} \begin{equation*} g(x)= \begin{cases} 2x &\text{if } x\notin P, \\ \theta(x) &\text{if } x\in P. \end{cases} \end{equation*} We know that the map $x\mapsto 2x$ permutes $\mathbb{F}_q$, and it clearly maps $P$ to $P$, so it follows that it also permutes $\mathbb{F}_q\backslash P$. By assumption $\phi$ permutes $P$. Hence $f$ is a permutation. By similar arguments $f$ is an orthomorphism, and so is $g$. Also $H(f,g)=H(\phi,\theta)=3$. \end{proof} \subsection{Fields of order $1\bmod 3$} Our next method works for fields of order $q\equiv 1 \bmod 3$. \begin{thm}\label{1mod3} Let $q \equiv 1 \bmod 3$. Then there exist orthomorphism polynomials $f,g\in\mathbb{F}_{q}[x]$ with $H(f,g)=3$. \end{thm} \begin{proof} Let $q - 1 = 3k$, for some positive $k \in \mathbb{Z}$.
Niederreiter and Winterhof \cite{niederreiter2005cyclotomic} showed that there exists a ``near-linear'' orthomorphism $f$ over $\mathbb{F}_q$, where \begin{equation*} f(x) = \begin{cases} a_0x & \text{if }x \in C_{0,k}, \\ a_1x & \text{if }x \notin C_{0,k}, \end{cases} \end{equation*} for distinct $a_0,a_1 \in \mathbb{F}_q\setminus\{0,1\}$. Now let $g$ be the map defined by $g(x)=a_1x$, and note that $g$ is an orthomorphism. Also $H(f,g)=|C_{0,k}|=3$. \end{proof} In many instances when we apply \lref{l:polydiff} we will not know which of the two polynomials has degree $q-3$ (plausibly they both have that degree). However, in \tref{1mod3} it is clear that $\deg(g)=1$, so $\deg(f)=q-3$. \subsection{Fields of large odd order} Our next method works in all sufficiently large fields of odd characteristic. We will use the following auxiliary result. \begin{lem}\label{l:partialorth} Suppose that $\theta$ is an orthomorphism satisfying $\theta(0) = 0$, $\theta(b) = c$ and $\theta(c) = c - b$, for some distinct $b,c \in \mathbb{F}_q^*$. Then there exists an orthomorphism polynomial $\phi\in\mathbb{F}_q[x]$ such that $H(\theta,\phi)=3$. \end{lem} \begin{proof} Define $\phi : \mathbb{F}_q \rightarrow \mathbb{F}_q$ by \begin{equation*} \phi(x) = \begin{cases} \theta(x) & \text{if }x \in \mathbb{F}_q \backslash \{0, b, c\}, \\ c - b & \text{if }x = 0, \\ c & \text{if }x = c, \\ 0 & \text{if }x = b. \end{cases} \end{equation*} Clearly $H(\theta,\phi)=3$ and $\{\phi(0), \phi(b), \phi(c)\} = \{c - b, 0, c\} = \{\theta(0), \theta(b), \theta(c)\}$. So $\phi$ is injective by the injectivity of $\theta$.
Similarly, \begin{equation*} \phi(x) - x = \begin{cases} \theta(x) - x & \text{if }x \in \mathbb{F}_q \backslash \{0, b, c\}, \\ c - b & \text{if }x = 0, \\ 0 & \text{if }x = c, \\ -b & \text{if }x = b, \end{cases} \end{equation*} and $\{\phi(0) - 0, \phi(b) - b, \phi(c) - c\}=\{c - b, -b, 0\} = \{\theta(b) - b, \theta(c) - c, \theta(0) - 0\}$, so $x \mapsto \phi(x) - x$ is injective because $\theta$ is an orthomorphism. The result follows. \end{proof} To make use of \lref{l:partialorth} we need to find orthomorphisms that have three specific values. Luckily, the next result does the work for us. It is due to Cavenagh, H{\"a}m{\"a}l{\"a}inen and Nelson \cite{cavenagh2009completing}, who stated it only for prime fields of odd order. Their proof generalises without change to all fields of odd order, so it will not be repeated here. \begin{thm}\label{t:parorth} Let $q \geq 191$ be an odd prime power. Let $z, k \in \mathbb{F}_q \backslash \{0, 1\}$ and $e \in \mathbb{F}_q \backslash \{0, z, k, k + z - 1\}$. Then there exists an orthomorphism $\theta$ over $\mathbb{F}_q$ satisfying $\theta(0) = 0$, $\theta(1) = z$ and $\theta(k) = e$. \end{thm} Applying \tref{t:parorth} in combination with \lref{l:partialorth}, we obtain the result for this subsection: \begin{thm}\label{largeq} Let $q \geq 191$ be an odd prime power. Then there exist orthomorphism polynomials $f,g\in\mathbb{F}_{q}[x]$ with $H(f,g)=3$. \end{thm} \subsection{Fields of order $q = 2^r$ for odd $r$} Our final method deals with the case of fields of order $2^r$ for odd integers $r$. We will need the following result of Williams \cite{williams1975note} regarding the reducibility of a cubic polynomial. \begin{lem}\label{l:cubred} Let $\mathbb{F}_q$ be a finite field of even order $q > 2$. Then the polynomial $f \in \mathbb{F}_q[x]$ defined by $f(x) = x^3 + ax + b$ has a unique root in $\mathbb{F}_q$ if and only if $\mathop{\mathrm{Tr}}(a^3b^{-2}) \neq \mathop{\mathrm{Tr}}(1)$. 
\end{lem} We will also need the following construction of an orthomorphism of a finite field of even order. Let $q = 2^r$ for some integer $r \geq 3$ and let $a \in \mathbb{F}_q \backslash \{0, 1\}$. Define $H = \{0, 1, a, a + 1\}$ and let $c \in \mathbb{F}_q \backslash H$. In \cite{FW17} it is shown that the map, \begin{equation}\label{e:orthchar2} \theta_a(x) = \begin{cases} ax + a(a + 1) & \text{if }x \in H + c, \\ ax & \text{otherwise}, \end{cases} \end{equation} is an orthomorphism over $\mathbb{F}_q$ satisfying $\theta_a(0)=0$. We now prove that when $q \geq 32$ is an odd power of $2$, there exist two orthomorphism polynomials over $\mathbb{F}_q$ at Hamming distance 3 from each other. \begin{thm}\label{t:oddtwo} Let $q = 2^r$ for some odd integer $r \geq 5$. Then there exist orthomorphism polynomials $f,g\in\mathbb{F}_{q}[x]$ with $H(f,g)=3$. \end{thm} \begin{proof} There are $2^{r - 1} - 1$ non-zero elements with zero trace in $\mathbb{F}_q$, and the map $x \mapsto x^{-3}$ is a permutation of $\mathbb{F}^*_q$. It follows that there are $2^{r - 1} - 1$ choices of an element $c \neq 0$ such that $\mathop{\mathrm{Tr}}(c^{-3})=0$. As $2^{r - 1}-1>3$, there exists some $c \in \mathbb{F}_q^*$ such that $c^3 + c + 1 \neq 0$ and $\mathop{\mathrm{Tr}}(c^{-3}) = 0$. Define the polynomial $g \in \mathbb{F}_q[x]$ by $g(x) = x^3 + (c^2 + c + 1)x + c^2$. Note that \begin{equation*} \begin{aligned} \mathop{\mathrm{Tr}}\big((c^2 + c + 1)^3c^{-4}\big) &= \mathop{\mathrm{Tr}}(c^2) + \mathop{\mathrm{Tr}}(c) + \mathop{\mathrm{Tr}}(c^{-1}) + \mathop{\mathrm{Tr}}(c^{-4}) + \mathop{\mathrm{Tr}}(c^{-3}) = 0 \neq \mathop{\mathrm{Tr}}(1), \end{aligned} \end{equation*} using the fact that $\mathop{\mathrm{Tr}}(a) = \mathop{\mathrm{Tr}}(a^2)= \mathop{\mathrm{Tr}}(a^4)$ for all $a \in \mathbb{F}_q$. Hence, \lref{l:cubred} implies that $g$ has a root. Let $f \in \mathbb{F}_q[x]$ be defined by $f(x) = g(x + c + 1) = x^3 + (c + 1)x^2 + cx + c$. It follows that $f$ also has a root, say $a$. 
Define $H = \{0, 1, a, a + 1\}$ and note that $a \notin \{0, 1, c, c + 1\}$ as none of these are roots of $f$. Since $a \notin \{0, 1\}$ and $c \notin H$, we can define an orthomorphism $\theta_a$ by \eref{e:orthchar2}. Define $b = \frac ca$. We claim that $b \notin H + c$. If $b = c$ then $a = 1$, a contradiction. If $b = c + 1$ then $a = \frac c{c + 1}$ and so $0 = f(a) = f(\frac c{c + 1}) = c(c^3 + c + 1)(c + 1)^{-3}$, hence $c^3 + c + 1 = 0$, a contradiction. If $b = a + c$ then it follows that $c = \frac{a^2}{a + 1}$ and so $f(a) = \frac{a^3}{a + 1} = 0$, thus $a = 0$, a contradiction. If $b = a + c + 1$ then $c = a$ or $a = 1$, a contradiction. Thus $\theta_a(b) = ab = c$. Now as $a$ is a root of $f$, we know that $a(ac + a(a + 1) + c + \frac ca) = 0$, and hence $\theta_a(c) = ac + a(a + 1) = c + \frac ca = c + b$. The result now follows from \lref{l:partialorth}. \end{proof} \section{Proof of the main results}\label{s:summary} We are now in a position to prove our main results. \begin{proof}[Proof of \tref{t:Ham3}] There clearly cannot exist polynomials which have Hamming distance $3$ over $\mathbb{F}_2$. By \lref{l:polydiff}, if there existed polynomials $f, g \in \mathbb{F}_q[x]$ with $H(f, g) = 3$ when $q \in \{5, 8\}$, then there would exist orthomorphism polynomials of degree $q-3$ over these fields, which we know is not the case from \cite{shallue2013permutation}. It remains to justify the claim that $f$ and $g$ exist for all prime powers $q = p^r \notin \{2, 5, 8\}$. If $p \notin \{2, 5\}$ then the claim is true by \tref{non25}. If $p \in \{2, 5\}$ and $r$ is even then the claim follows from \tref{1mod3}. If $r$ is odd and $p=2$ then the claim follows from \tref{t:oddtwo}. If $q=5^r \geq 191$, then the claim follows from \tref{largeq}. The only remaining case is $q = 125$. Consider $\mathbb{F}_{125}$ as $\mathbb{Z}_5[y] / (y^3 + 3y + 3)$, and note that $y$ is a primitive element of $\mathbb{F}_{125}^*$. 
Let $a = y^2 \notin C_{0, 4}$ and $b = y^2 + 4 = y^{75} \notin C_{0, 4}$. Then $f \in \mathbb{F}_{125}[x]$ defined by $f(x) = (a - b)^{-1}x^5 - b(a - b)^{-1}x$ is an orthomorphism polynomial by a result of Niederreiter and Robinson \cite{niederreiter1982complete}. Furthermore $f$ satisfies $f(0) = 0$, $f(y^2) = y^{118} = 4y^2 + 3$ and $f(y^{118}) = y^{40} = 3y^2 + 3 = y^{118} - y^2$, and so we are done, by \lref{l:partialorth}. \end{proof} The only fields not having orthomorphisms at Hamming distance 3 are $\mathbb{F}_2$, $\mathbb{F}_5$ and $\mathbb{F}_8$. There are no orthomorphisms at all over $\mathbb{F}_2$. Over $\mathbb{F}_5$, it was noted in \cite{cavenagh2010number} that the minimum Hamming distance between orthomorphisms is 4 (this distance is achieved by $f=2x$ and $g=3x$). The minimum Hamming distance between orthomorphisms over $\mathbb{F}_8$ is also $4$. To see this, consider $f=ax$ and $g=\theta_a$ from \eref{e:orthchar2}, and apply the logic behind \lref{l:polydiff}. \begin{proof}[Proof of \tref{t:bndach}] Orthomorphisms of degree $q-3$ do not exist when $q \in \{2, 3, 5, 8\}$ as shown in \cite{shallue2013permutation}. For all other prime powers $q$, existence of orthomorphisms of degree $q-3$ follows by combining \tref{t:Ham3} with \lref{l:polydiff}. \end{proof} Note that from \cite{shallue2013permutation}, the maximum reduced degree of any orthomorphism polynomial over $\mathbb{F}_3$, $\mathbb{F}_5$ and $\mathbb{F}_8$ is respectively $1$, $1$ and $4$. \begin{proof}[Proof of \tref{t:irreg}] Eberhard, Manners and Mrazovi\'{c} \cite{EMM19} showed that any abelian group of odd order $q$ has $(e^{-1/2}+o(1))q!^2q^{1-q}$ orthomorphisms. In particular, this is true for the additive group of $\mathbb{F}_q$ (when $q$ is odd). However, the number of cyclotomic maps of index $i$ over $\mathbb{F}_q$ is at most $q^i$, given that the map is determined by the values of $a_1,\dots,a_i$ in \eref{e:cyceqn}. 
Each such map has $q$ translations and the only relevant values of $i$ satisfy $1\le i<q/2$. Hence the number of orthomorphisms over $\mathbb{F}_q$ that are not irregular is at most $q^{q/2}q(q/2)=q^{q/2+2}/2$, which is asymptotically insignificant compared to the total number of orthomorphisms. We conclude that for fields of odd characteristic, asymptotically almost all orthomorphisms are irregular. Next, suppose that $q=2^r>4$ and consider the orthomorphism $\theta_a$ defined by \eref{e:orthchar2}. In \cite[Thm~5]{FW17} it was shown that $\theta_a$ is irregular if $r$ is odd. However, examining that proof reveals that $\theta_a$ will also be irregular for even $r$, provided that the set $X_g=\{g+1,g+a,g+a+1\}$ is not a union of cyclotomic cosets for any $g\in\mathbb{F}_q$. As $|X_g|=3$, the only possible problem is that $X_g=C_{i,n}$ for some $i$, where $n=(q-1)/3$. Note that $X_g$ contains two elements that differ by $1$ and the third element differs from one of those two elements by $a$. Hence for each $i$ there are at most 2 choices of $a$ that might allow $X_g=C_{i,n}$ to be satisfied for some $g$. Eliminating these choices for all $i$, we lose at most $2(q-1)/3<q-2$ choices for $a$. Hence, we can pick a value for $a$ such that $X_g\ne C_{i,n}$ for all $g\in\mathbb{F}_q$ and $0\le i<n$. In that case, $\theta_a$ is irregular. Finally, we consider the case when $7<q\not\equiv1\bmod3$. The previous case handled $q=8$, so assume $q>8$. Now, \tref{t:bndach} ensures the existence of an orthomorphism $\theta$ of reduced degree $q-3$. Suppose that $T_g[\theta]$ is cyclotomic of index $i<q-1$ for some $g\in\mathbb{F}_q$. Note that $T_g[\theta]$ also has reduced degree $q-3$. Hence, by \cite[Thm~1]{niederreiter2005cyclotomic} we must have $\gcd(q-1,3)>1$, but that contradicts the fact that $q\not\equiv1\bmod3$. 
\end{proof} \section{Concluding remarks}\label{s:conclude} For each finite field we have established what the minimum distance between two distinct orthomorphisms is, and what the largest degree of a reduced orthomorphism polynomial is. A direction for further research would be to investigate the proportion of orthomorphisms that have reduced degree $q-3$. It was shown in \cite{MR1933625} that asymptotically almost all permutation polynomials in $\mathbb{F}_q[x]$ have reduced degree $q-2$. The data for small fields presented in \cite{shallue2013permutation} is consistent with orthomorphisms displaying a similar trend. We propose: \begin{conj}\label{cj:mostq-3} Asymptotically almost all orthomorphism polynomials in $\mathbb{F}_q[x]$ have reduced degree $q-3$, as $q\to\infty$. \end{conj} On the subject of typical behaviour for orthomorphisms of large fields, we showed that almost all orthomorphisms of fields of odd order are irregular. We believe that fields of even order share this property. \begin{conj}\label{cj:irreg} Asymptotically almost all orthomorphisms are irregular. \end{conj} A related open question is to establish what is the largest field without irregular orthomorphisms. We know from \tref{t:irreg} that the answer is a field of odd order (it may well be $\mathbb{F}_7$). \bibliographystyle{plain}
\newcommand{\no}{\nonumber\\} \newcommand{\e}{{\rm e}} \newcommand{\ii}{{\rm i}} \newcommand{\dd}{{\rm d}} \newcommand{\eqn}[1]{(\ref{#1})} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\bea}{\begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}} \newcommand{\ba}{\begin{eqnarray}} \newcommand{\ea}{\end{eqnarray}} \newcommand{\ci}[1]{\cite{#1}} \newcommand{\bi}[1]{\bibitem{#1}} \newcommand{\la}[1]{\label{#1}} \def\bra#1{\left\langle #1\right|} \def\ket#1{\left| #1\right\rangle} \def\hs#1#2{\left\langle #1\right|\left. #2\right\rangle} \def\ketbra#1#2{\left| #1\right\rangle\left\langle #2\right|} \def\vev#1{\left\langle #1\right\rangle} \newcommand{\tr}[1]{\:{\rm tr}\,#1} \newcommand{\Tr}[1]{\:{\rm Tr}\,#1} \def\one{\mbox{1 \kern-.59em {\rm l}}} \newcommand{\del}{\partial} \newcommand{\complex}{{\mathbb C}} \newcommand{\quater}{{\mathbb H}} \newcommand{\complexs}{{\mathbb C}} \newcommand{\zed}{{\mathbb Z}} \newcommand{\nat}{{\mathbb N}} \newcommand{\nats}{{\mathbb N}} \newcommand{\natss}{{\mathbb N}} \newcommand{\real}{{\mathbb R}} \newcommand{\reals}{{\mathbb R}} \newcommand{\zeds}{{\mathbb Z}} \newcommand{\zedss}{{\mathbb Z}} \newcommand{\rat}{{\mathbb Q}} \newcommand{\mat}{{\mathbb M}} \newcommand{\idop}{{\mathbb I}} \abstract{ We show how the bosonic spectral action emerges from the fermionic action by the renormalization group flow in the presence of a dilaton and the Weyl anomaly. The induced action comes out to be basically the Chamseddine-Connes spectral action introduced in the context of noncommutative geometry. The entire spectral action describes gauge and Higgs fields coupled with gravity.
We then consider the effective potential and show that it has the desired features of a broken and an unbroken phase, with a roll down between them.} \keywords{Space-Time Symmetries, Spectral Action, Weyl anomaly, Noncommutative Geometry, Higgs-Dilaton Potential} \preprint{ICCUB-11-147\\DSF/7/2011} \begin{document} \tableofcontents \section{Introduction} In this note we will show the intimate relationships between Weyl anomalies, the dilaton and the Higgs field in the framework of spectral physics. The framework is the expression of a field theory in terms of the spectral properties of a (generalized) Dirac operator. In this respect this work can be seen in the framework of the noncommutative geometry approach to the standard model of Connes and collaborators~\cite{Connesbook, ConnesLott, SpectralAction, AC2M2}, as well as of Sakharov's induced gravity~\cite{Sakharov} (for a modern review see~\cite{Visser}). We start with a generic action for a chiral theory of fermions coupled to gauge fields and gravity. The considerations here apply to the standard model, but we will not need the details of the particular theory under consideration. It is known, and this is the essence of the noncommutative geometry approach to the standard model, that the theory is described by a fermionic action and a bosonic action, both of which can be expressed in terms of the spectrum of the Dirac operator. In~\cite{anlizzi} two of us have shown that if one starts from the classical fermionic action and proceeds to quantize the theory with a regularization based on the spectrum, an anomaly appears. It is possible to keep the full quantum theory invariant by correcting the path integral measure. This is tantamount to the addition of a term to the action, which makes the bosonic background interact with the dilaton field. The main result of that paper is that this term is a modification of the bosonic spectral action~\cite{SpectralAction}. In this case the theory is still invariant.
In this paper we shift the point of view. We still consider the theory to be regularized in the presence of a cutoff scale, but we consider this scale to have a physical meaning: that of the breaking of Weyl invariance. We then consider the flow of the theory at a renormalization scale, which is not necessarily the scale which breaks the invariance. The theory contains a dilaton and the Higgs field. The dilaton may involve a collective scalar mode of all fermions, accumulated in a Weyl-noninvariant dilaton action. Accordingly the spectral action arises as a part of the fermion effective action, divided into a Weyl-noninvariant and a Weyl-invariant part. We calculate the dilaton effective potential and discuss how it relates to the transition from the radiation phase, with zero vacuum expectation value of the Higgs field and massless particles, to the electroweak broken phase via condensation of Higgs fields. The collective dilaton field can provide the above-mentioned phase transition with EW symmetry breaking during the evolution of the universe. The next five Sections of the paper present the general framework: the Weyl invariance of fermions in a fixed background described via the (generalized) Dirac operator in Section 2, then the connections with noncommutative geometry, the Weyl invariance properties, the spectral action and the bosonic action in the following Sections. These Sections mostly follow reference~\cite{anlizzi}, although the point of view presented in Sect.~5 is different; in particular we show two possible ways to obtain the spectral action, which is briefly introduced in Sect.~6. The cosmological implications are discussed in Sect.~7. This material has not been previously published, but parts of it have been presented at a conference~\cite{corfuproc}. A final Section contains the conclusions.
\section{Fermions in a Fixed Background} Our starting point is a theory in which we have some matter fields, represented by fermions transforming under some (reducible) representation of a gauge group, such as the standard model group $SU(3)\times SU(2)\times U(1)$. We need not specify the group for the moment. The fermions will be spinors belonging to some Hilbert space $\cal H$ which we assume to be ``chiral'', i.e.\ split into a left and a right space: \be {\cal H}={\cal H}_L\oplus{\cal H}_R \ee A generic matter field will therefore be a spinor \be \Psi=\begin{pmatrix}\Psi_L\\ \Psi_R\end{pmatrix} \label{LRspinors} \ee and in this representation the chirality operator, which we call $\gamma$, is a two-by-two diagonal matrix with plus and minus one eigenvalues. The two components are spinors themselves, and we are not indicating the gauge indices, nor the flavor indices. We will assume that the fermions come in a number of identical generations, distinguished only by their masses (or more precisely their Yukawa couplings). The dynamics of the fermions is given by coupling them to a gauge and gravitational background. This coupling is performed by a classical action, which we schematically write as: \be S_F=\bra{\Psi}D\ket{\Psi} \label{fermionicaction} \ee The operator $D$~\cite{SpectralAction} is a $2\times 2$ matrix acting on spinors of the kind~\eqn{LRspinors} \be D=\left(\begin{array}{cc} i\gamma^\mu D_\mu + {\mathbb A} & \gamma_5 S\\ \gamma_5 S^\dagger & i\gamma^\mu D_\mu + {\mathbb A}\end{array}\right) \label{dirac22} \ee where \be D_\mu=\del_\mu+\omega_\mu, \ee the quantity $\omega_\mu$ is the spin connection, $\mathbb A$ contains all gauge fields of the theory and $S$ contains the information about the Higgs field, Yukawa couplings, mixings and all terms which couple the left and right parts of the spinors.
The gravitational background is in general nontrivial, and the metric is encoded in the anticommutator of the $\gamma$'s: $\{\gamma^\mu,\gamma^\nu\}=2 g^{\mu\nu}$. The quantity $\mathbb A$ represents instead a fixed gauge background, and the interaction of the spinors with it. We emphasize that at this stage we are just describing the classical dynamics of fermions in a fixed background. We are deliberately vague as to the details of the model at this stage, not discussing important elements of the theory, like chirality or charge conjugation. The scheme presented here is largely independent of the details of the model. In particular it applies to the standard model, especially in the approach based on noncommutative geometry introduced by Connes, which we briefly describe in Sect.~\ref{Connesspectral}. \section{Weyl invariance and the Fermionic Action} The fermionic action~\eqn{fermionicaction} is invariant under the transformation \bea \ket{\Psi }&\to&\e^{\frac\phi2}\ket{\Psi}\nonumber\\ D&\to& \e^{-\frac\phi2}D\e^{-\frac\phi2} \label{weyloperatorform} \eea where the operator $\phi$ is a function of the (operator) $x$, or in a simpler case a constant. The action~\eqn{fermionicaction} can be expressed in coordinates as\footnote{We use the following normalization of eigenstates $|x\rangle$ of the coordinate operator: $\langle x |y\rangle = \delta(x-y)$, which corresponds to $\int d^4 x |x\rangle\langle x | = 1$ (without $\sqrt{|g|}$); consequently such a normalized $|x\rangle$ does not transform under the Weyl transformation $g_{\mu\nu}\rightarrow e^{2\phi}g_{\mu\nu}$.} \be S_F=\int d^4x \sqrt{|g|} \psi(x)^\dagger D_x \psi(x);\quad (|g|)^{1/4}\psi(x) = \langle{x}\ket{\Psi} \ee where we introduced the subscript $x$ on $D$ to stress the fact that it is an operator acting on the $x$ coordinate.
The transformation~\eqn{weyloperatorform} can be seen as a (generalized) Weyl transformation\footnote{One has to pay attention to the measure in checking transformations and Hermiticity of the operators.}: \bea g_{\mu\nu}(x)&\to&\e^{2\phi(x)} g_{\mu\nu}(x) \nonumber\\ \psi(x)&\to& \e^{-\frac32\phi(x)} \psi(x)\nonumber\\ (D_x\psi(x))&\to& \e^{-\frac52\phi(x)}(D_x\psi(x)) \label{scaleinvariance} \eea where $\phi(x)$ is real. Note that since the rescaling involves also the matrix part of $D$, we must also rescale the masses of the fermions. In this sense we differ from the usual usage of Weyl (or conformal) invariance, which is only valid for massless fields. In our scheme the Yukawa couplings are an integral part of the Dirac operator, which encodes all metric properties of the ``noncommutative manifold'' described by the noncommutative matrix algebra. In the absence of a dimensional scale, this is an exact symmetry of the classical theory. We now proceed to quantize the theory. It can be proven~\cite{Fujikawabook} that while the classical theory is invariant, the measure in the quantum path integral is not. We have an anomaly: the classical theory is invariant under a symmetry transformation, but the quantum theory, due to unavoidable regularization, does not possess this symmetry anymore. If the quantum theory is also required to be symmetric, the symmetry can be restored by the addition of extra terms to the action; alternatively, one should have a fundamental length in the theory to explain the violation of Weyl invariance. A textbook introduction to anomalies can be found in~\cite{Fujikawabook}. The notion of Weyl anomaly is attached to the dilatation of coordinates, fields and mass-like parameters according to their dimensionalities, Eq.~\eqn{scaleinvariance}. Evidently, in the absence of UV divergences there is no Weyl anomaly, which can therefore be correlated with the rescaling of a cutoff in the theory.
In the case when the dilatation is not constant, $\phi$ becomes a quantum field called the \emph{dilaton}. The dilaton of this kind has been investigated in the context of the spectral action in~\cite{ChamseddineConnesscale}. We remark that there may also be an alternative realization of the dilaton as a collective scalar mode of all fermions, accumulated in a scale-noninvariant dilaton action (in the spirit of~\cite{aano}). We start from the partition function \be Z(D,\mu)=\int [\dd\psi] [\dd\bar\psi] e^{-S_\psi} =\det\left(\frac{D}{\mu}\right)\la{zinitial} \ee where we needed to introduce a normalization scale $\mu$ for dimensional reasons; the last equality is formal because the expression is divergent and needs regularizing. Writing the fermionic action in this form (as a Pfaffian) is instrumental in the solution of the fermion doubling problem in Connes' approach to the standard model~\cite{LMMS, G-BIS, AC2M2}. In order to regularize the expression~\eqn{zinitial} we need to introduce a \emph{cutoff} scale, which we call $\Lambda$. This cutoff scale may have the physical meaning of an energy at which the theory (seen as effective) has a phase transition, or at any rate a point at which the symmetries of the theory are fundamentally different (unification scale). We then have two scales\footnote{In principle we would also need an infrared regulator to render the spectrum of the Dirac operator discrete. We will not discuss infrared issues here.}, and we will keep them separated although in principle, at this stage, they could be identified. We will see in the course of this work that they cannot actually be identical, although they have to be of the same order of magnitude. We will regularize the theory in the ultraviolet using a procedure introduced by one of us, Bonora and Gamboa-Saravi in~\cite{AndrianovBonoraGamboa, AndrianovBonora1, AndrianovBonora2}, but leaving room for the normalization scale $\mu$.
Although this procedure predates the spectral action, it is very much in the spirit of spectral geometry, since it uses only the spectral data of the Dirac operator. The energy cutoff is enforced by considering only the first $N$ eigenvalues of $D$. Consider the projector \be P_N=\sum_{n=1}^N \ketbra{\lambda_n}{\lambda_n};\quad N=\max n \ \mbox{such that} \ |\lambda_n|\leq \Lambda \label{cuteigenvalues} \ee where $\lambda_n$ are the eigenvalues of $D$ arranged in increasing order of their absolute value (repeated according to possible multiplicities), $\ket{\lambda_n}$ a corresponding orthonormal basis, and the integer $N$ is a function of the cutoff. This means that we are effectively using the $N^{\mathrm{th}}$ eigenvalue as cutoff. Therefore this number and the corresponding spectral density depend on the coefficient functions of the Dirac operator, $N=N(D)$. Instead of this sharp cutoff, which takes fully into account all eigenvalues up to a certain energy and ignores the rest of the spectrum, it is also possible to consider a smooth cutoff enforced by a smooth function. Choosing a function $\chi$ which is a smoothened version of the characteristic function of the interval $[0,1]$, one can consider the operator \be P_\chi=\chi\left(\frac D\Lambda\right)= \sum_n \chi\left(\frac{\lambda_n}\Lambda\right)\ \ketbra{\lambda_n}{\lambda_n} . \ee This operator is not a projector anymore, and it coincides with $P_N$ for $\chi=\Theta$, where $\Theta$ is the Heaviside step function. The use of a smooth $\chi$ can be preferable in an expansion, such as the heat kernel expansion we will perform later in Sect.~\ref{standard} for the spectral action. Nevertheless, for the purposes of the present paper a sharp cutoff is adequate. In the framework of noncommutative geometry this is the most natural cutoff procedure, although as we said it was introduced before the formulation of the standard model in noncommutative geometry.
It makes no reference in principle to the underlying structure of spacetime, and it is based purely on spectral data; it is thus perfectly adequate for Connes' programme. This form of regularization could also be used for field theories which cannot be described on an ordinary spacetime, as long as there is a Dirac operator, or generically a wave operator, with a discrete spectrum. We define the regularized partition function\footnote{Although $P_N$ commutes with $D$ we prefer to use a more symmetric notation.} \bea Z(D,\mu)&=&\prod_{n=1}^N\frac{\lambda_n}{\mu} =\det\left(\one-P_N+P_N\frac{D}{\mu}P_N\right)\nonumber\\ &=& \det\left(\one-P_N+P_N\frac{D}{\Lambda}P_N\right) \det\left(\one-P_N+\frac{\Lambda}{\mu}P_N\right)\nonumber\\ &=& Z_\Lambda(D,\Lambda) \det\left(\one-P_N+\frac{\Lambda}{\mu}P_N\right). \label{3.7} \eea In this way we can define the fermionic action in an intrinsic way. The regularized partition function $Z_\Lambda(D,\Lambda)$ has a well defined meaning. We express $\psi$ and $\bar\psi$ as \bea \psi=\sum_{n=1}^\infty a_n\ket{\lambda_n};\qquad \bar\psi=\sum_{n=1}^\infty b_n\ket{\lambda_n} \eea with $a_n$ and $b_n$ anticommuting (Grassmann) quantities. Then $Z_\Lambda(D,\Lambda)$ becomes (performing the integration over the Grassmann variables in the last step) \be Z_\Lambda(D,\Lambda)=\int\prod_{n=1}^N \dfrac{\dd a_n \dd b_n}{\Lambda} \e^{-\sum_{n=1}^N b_n \lambda_n a_n}=\det\left(D_N\right) \ee where we defined \be D_N=1-P_N+P_N\frac{D}{\Lambda}P_N . \ee In the basis in which $D/\Lambda$ is diagonal this corresponds to setting to $\Lambda$ all eigenvalues of $D$ larger than $\Lambda$. Note that $D_N$ is dimensionless and depends on $\Lambda$ both explicitly and intrinsically, via the dependence of $N$ and $P_N$.
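The determinant of $D_N$ can be checked numerically (a toy illustration of ours, not part of the original derivation): in the eigenbasis, $D_N$ keeps the eigenvalues below the cutoff, divided by $\Lambda$, and replaces the rest by $1$, so $\det D_N=\prod_{|\lambda_n|\le\Lambda}\lambda_n/\Lambda$. The random Hermitian matrix below is an arbitrary stand-in for the Dirac operator.

```python
import numpy as np

# Toy stand-in for the Dirac operator: a random Hermitian matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
D = (A + A.T) / 2
Lam = 1.0  # cutoff Lambda

lam, U = np.linalg.eigh(D)

# Sharp cutoff: P_N projects on the eigenvectors with |lambda_n| <= Lambda.
keep = np.abs(lam) <= Lam
P = U[:, keep] @ U[:, keep].T

# D_N = 1 - P_N + P_N (D/Lambda) P_N
D_N = np.eye(8) - P + P @ (D / Lam) @ P

# In the eigenbasis D_N is diagonal: lambda_n/Lambda below the cutoff, 1 above,
# so its determinant is the product of the retained eigenvalues over Lambda.
assert np.isclose(np.linalg.det(D_N), np.prod(lam[keep] / Lam))
```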
It is possible to give an explicit functional expression to the projector in terms of the cutoff: \be P_N=\Theta\left(1-\frac{D^2}{\Lambda^2}\right)=\int\limits_{-\infty}^{\infty} \dd\alpha\,\frac1{2\pi\ii(\alpha- \ii\epsilon)} \e^{\ii\alpha\big(1-\frac{D^2}{\Lambda^2}\big)} \ee This integral is well defined for a compactified space volume. Actually $N$ depends also on the infrared cutoff and on the number of dimensions. \section{Bosonic Action from Weyl Anomaly} In this section we will see how the Weyl anomaly induces the bosonic part of the action. The induced action is the Chamseddine-Connes spectral action. The action $S_\psi$ is invariant under~\eqn{scaleinvariance}, but the partition function \eqn{zinitial} is not. The reason for this is the fact that the regularization procedure is not Weyl invariant. In~\cite{anlizzi} it was shown that the anomaly can in principle be absorbed by a change of the measure, which is equivalent to the addition of another term to the action. This term can compensate the change in the measure due to the regularization but, being in exponential form, it can also be seen as a further addition to the action, so that the final partition function is invariant. This calculation was originally performed in~\cite{AANN} in the QCD context, and applied to gravity in~\cite{NovozhilovVassilevich}. In the scenario we favor in this paper, however, Weyl symmetry is not an exact symmetry of the theory, and the bosonic part of the action is induced by the renormalization group flow. In the following we will mostly consider the case of $\phi$ constant (i.e.\ not depending on $x$). This simplifies things because we do not have to worry about the kinetic terms of the field, and it renders the functional integrals simple integrals.
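For constant $\phi$ the origin of the anomaly can be seen in a one-line numerical experiment (our toy illustration, with an arbitrary discrete spectrum standing in for the eigenvalues of $D$): since $D_\phi=\e^{-\phi}D$ for constant $\phi$, the sharp-cutoff counting function satisfies $N(\Lambda, D_\phi)=N(\e^{\phi}\Lambda, D)\neq N(\Lambda,D)$, i.e.\ the regularization is not Weyl invariant.

```python
import numpy as np

# Toy discrete spectrum standing in for the eigenvalues of D.
rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(0.1, 10.0, size=2000))

def N(spectrum, cutoff):
    """Counting function N(Lambda, D) = Tr Theta(1 - D^2/Lambda^2)."""
    return int(np.sum(np.abs(spectrum) <= cutoff))

Lam, phi = 5.0, 0.3
lam_phi = np.exp(-phi) * lam  # constant Weyl rescaling: D -> e^{-phi} D

# The sharp cutoff is not Weyl invariant: the count shifts with phi.
assert N(lam_phi, Lam) == N(lam, np.exp(phi) * Lam)
assert N(lam_phi, Lam) != N(lam, Lam)
```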
In order to make contact with the spectral action (to be discussed next), let us notice that $N$ is just the number of eigenvalues smaller (in absolute value) than $\Lambda$, so that \be \Tr\chi\left(\frac{D^2}{\Lambda^2}\right)= \Tr\Theta\left(1-\frac{D^2}{\Lambda^2}\right)=\Tr P_N= N(\Lambda,\ D). \label{spact} \ee where $\chi$ is a generic cutoff function, which in our case is a sharp cutoff at energy $\Lambda$, \be \chi(x)=\left\{\begin{array}{cc}0~ & x<0\\ 1~ & x\in[0,1]\\ 0~ & x>1 \end{array}\right. \label{sharpcutoff} \ee a consequence of the sharp cutoff on the eigenvalues used in~\eqn{cuteigenvalues}. Smoother cutoffs on the eigenvalues would be reflected in different forms of $\chi$. We will see in the next section that at the one loop level (the only tractable approximation) the actual form of the cutoff is not crucial. The latter form of~\eqn{spact} is valid provided that we take into account the functional dependence $N=N(\Lambda,\ D)$. It is worth recalling again that the integer $N$ depends on the cutoff $\Lambda$, on the Dirac operator $D$ and also on the function $\chi$, which we have chosen to be a sharp cutoff. If we want to obtain a partition function independent of $\phi$ we can integrate it out, i.e. \be Z_{\mathrm{inv}}(D,\mu)=\int\dd\phi Z(\e^{-\frac12\phi} D\e^{-\frac12\phi}, \mu)\equiv \int\dd\phi Z( D_\phi, \mu);\qquad D_\phi\equiv \e^{-\frac12\phi} D\e^{-\frac12\phi} . \label{caseA} \ee This was the procedure followed in~\cite{anlizzi}. Notice however that in principle we could have equally well defined \be \hat Z_{\mathrm{inv}}(D, \mu)= \left(\int\dd\phi \dfrac{1}{ Z ( D_\phi, \mu)}\right)^{-1} . \label{caseA1} \ee If we consider the non-Weyl-invariant partition function, we can split it into the product of a term invariant under Weyl transformations and another, noninvariant one, which will depend on the field $\phi$.
\be Z(D,\mu)=\hat{Z}_{\mathrm{inv}}(D,\mu)Z_{\mathrm{not}}(D,\mu) \label{splitting} \ee The terms in $Z_{\mathrm{not}}$ are due to the Weyl anomaly and we can calculate them. Using \be D_\phi=\e^{-\frac12\phi}D\e^{-\frac12\phi} \ee consider the identity \be Z(D)=\left(\int[d\phi]\frac1{Z(D_\phi)}\right)^{-1}\, \int [d\phi] \frac{Z(D)}{Z(D_\phi)} \ee Since the first factor is invariant by construction, the second is the noninvariant one: \be Z_{\mathrm{not}}(D)=\int[d\phi]e^{-S_{\mathrm{not}}}=\int [d\phi] \frac{Z(D)}{Z(D_\phi)} \ee To obtain the Weyl invariant partition function we need to multiply the regularized one by a compensating term, which we express in exponential form, as an addition to the action which we call the anomalous action: \be {Z_{\mathrm{inv}}}(D,\mu)=Z(D,\mu)\cdot Z_{\mathrm{anom}}(D,\mu);\quad Z_{\mathrm{anom}}(D,\mu) = \int\dd\phi\, \e^{-S_{\mathrm{anom}}} \la{zinvprod} \ee where the effective action depends on $N$, and hence on the cutoff $\Lambda$, and on $\phi$. Then \be S_{\mathrm{anom}}= \log\left( \dfrac{Z (D, \mu)}{Z( D_\phi, \mu)}\right) \label{sanom1} \ee Notice that the splitting in~\eqn{splitting} is of course not unique, but it is motivated by the following. We know from~\cite{anlizzi} that if we add to the classical action the term $S_{\mathrm{anom}}$ we restore the Weyl invariance of the partition function $Z$. It is therefore essential to have $S_{\mathrm{not}} = -S_{\mathrm{anom}}$. We shall see below (Eq.~\eqn{sanomcoll}) that the choice~\eqn{caseA1} provides such an equality.
Let us define \be Z_t=Z(D_{t\phi}, \mu) \ee so that $Z_0=Z(D, \mu)$ and \be \dfrac{Z_{\mathrm{inv}}(D, \mu)}{Z(D, \mu)}=\int\dd\phi \frac{Z_1}{Z_0} \ee and hence \be S_{\mathrm{anom}}=-\int_0^1\dd t \del_t \log Z_t =-\int_0^1\dd t \frac{\del_t Z_t}{Z_t} \ee We have the following relation, which can easily be proven: \bea \del_t Z_t=\del_t\det \left(\frac{D_{t\phi}}{\mu}\right)_N &=&\phi Z_t \left(-1 +\Lambda^2 \log\frac{\Lambda^2}{\mu^2} \partial_{\Lambda^2}\right) \tr P_N , \eea and therefore, for $\phi$ not dependent on $x$, \bea S_{\mathrm{anom}}&=&\int_0^\phi\dd t'\, \left(1 -\Lambda^2 \log\frac{\Lambda^2}{\mu^2} \partial_{\Lambda^2}\right)\Tr\Theta\left(1-\frac{D_{t'}^2}{\Lambda^2}\right)\nonumber\\ &=& \int_0^\phi\dd t'\, \left(1 -\Lambda^2 \log\frac{\Lambda^2}{\mu^2} \partial_{\Lambda^2}\right) N(\Lambda,\ D_{t'}).\ \label{Sanomal} \eea The presence of the bosonic action, given by the trace of the regularized Dirac operator, is a consequence of the renormalization flow of the partition function. Consider the change \be \mu\to\gamma\mu \ee with $\gamma$ real. From~\eqn{3.7} the partition function changes as \be Z(D,\mu)\to Z(D,\mu)e^{-(\log\gamma)\tr P_N} \ee with \be \tr P_N=N=\tr\chi\left(\frac D\Lambda\right) \label{inducedaction} \ee as always for the choice of $\chi$ as the characteristic function of the interval, a consequence of our sharp cutoff on the eigenvalues. The expression~\eqn{inducedaction} is nothing but the spectral action, which we will discuss in the next section. We see therefore that the renormalization group flow of the fermionic action induces the bosonic spectral action. The anomalous part of the action~\eqn{Sanomal} is a modification of the action~\eqn{inducedaction}. \section{The Spectral Action Principle \label{Connesspectral}} In this section we give a brief introduction to the relevant aspects of the spectral action principle. The reader conversant with the topic may skip this section.
A more thorough introduction can be found in~\cite{Schucker,AC2M2,CCintro}. \subsection{Fields, Hilbert Spaces, Dirac Operators and the (Non)commutative Geometry of Spacetime} The main idea of the whole programme of Connes' noncommutative geometry~\cite{Connesbook} is to describe ordinary mathematics, and physics, in terms of the spectral properties of operators. This programme has its roots in quantum mechanics and aims at the description of generalized spaces. The main ingredients are an algebra represented on a Hilbert space, and the generalized Dirac operator, which describes all metric aspects of the theory and, as we have seen, the behavior of the fundamental matter fields, represented by vectors of the Hilbert space. The fluctuations of the Dirac operator instead contain all boson fields, including the mediators of the forces (intermediate vector bosons) and the Higgs field. We have introduced a (Euclidean) spacetime, and therefore implicitly the algebra $\cal A$ of complex valued continuous functions on this spacetime. There is in fact a one-to-one correspondence between (topological Hausdorff) spaces and commutative $C^*$-algebras, i.e.\ associative normed algebras with an involution and a norm satisfying certain properties. This is the content of the Gelfand-Naimark theorem~\cite{FellDoran, Ticos}, which describes the topology of a space in terms of the algebra. In physicists' terms we may say that the properties of a space are encoded in the continuous fields defined on it. This concept, and its generalization to noncommutative algebras, is one of the starting points of Connes' noncommutative geometry programme~\cite{Connesbook}. The programme aims at the transcription of the usual concepts of differential geometry in algebraic terms, and a key role in this programme is played by the \emph{spectral triple}, which is composed of an algebra acting as operators on a Hilbert space and a (generalized) Dirac operator.
In our case we have these ingredients, but we have to consider, instead of the algebra of continuous complex valued functions, matrix valued functions. The underlying space in this case is still ordinary spacetime; technically the algebra is ``Morita equivalent'' to the commutative algebra, but the formalism is built in a general way, so as to be easily generalizable to the truly noncommutative case, when the underlying space may not be an ordinary geometry. The spectral triple contains the information on the geometry of spacetime. The algebra, as we said, is dual to the topology, and the Dirac operator enables the translation of the metric and differential structure of spaces into an algebraic form. There is no room here to describe this programme, and we refer to the literature for details~\cite{Connesbook, Landibook, Ticos, Madore}. Within this general programme a key role is played by the approach to the standard model. This is the attempt to understand which kind of (noncommutative) geometry gives rise to the standard model of elementary particles coupled with gravity. The root of this approach is to have the Higgs appear naturally as the ``vector'' boson of the internal noncommutative degrees of freedom~\cite{Madoreearly, D-VKM, ConnesLott}. The most complete formulation of this approach is given by the \emph{spectral action}, which in its most recent form is presented in~\cite{AC2M2}. \subsection{The Spectral Action and the Standard Model coupled to Gravity \label{standard}} The integrand in \eqref{Sanomal} is basically the Chamseddine-Connes spectral action introduced in~\cite{SpectralAction}, together with the fermionic action~\eqn{fermionicaction}. More precisely, the bosonic part of the spectral action is \be S_B = \Tr \chi\left(\frac{D^2}{\Lambda^2}\right) \ee The bosonic spectral action so introduced is always finite by its nature; it is purely spectral and it depends on the cutoff $\Lambda$.
For the choice of $\chi$ as a sharp cutoff the trace counts exactly the eigenvalues smaller (in absolute value) than $\Lambda$, and therefore \be S_B=N(D,\Lambda) \ee In the original work of Chamseddine and Connes the bosonic and fermionic parts of the action were treated differently. The fermionic action, by contrast, is divergent and requires renormalization. We have seen how the cancellation of the anomaly brings the two actions on the same footing, albeit with a modification of the bosonic part. We notice that already in~\cite{Sitarz} the two actions were proposed to ``unify'' in the bosonic action with the addition of the projection on the fermionic field to the covariant Dirac operator. This reproduces the full spectral action with some additional nonlinear terms for the fermions, which could have to do with fermionic masses. Recently Barrett~\cite{Barrett} has argued that the bosonic spectral action can be inferred from the fermionic action via a state sum model. His work has some points of contact with ours. To obtain the standard model one takes as algebra the product of the algebra of functions on spacetime with a finite dimensional matrix algebra \be \mathcal A =C({\mathbb R}^4)\otimes{\mathcal A}_F \ee Likewise the Hilbert space is the product of the space of spinors with a finite dimensional space which contains all matter degrees of freedom, \be \mathcal H ={\mathrm{Sp}({\mathbb R}^4)\otimes{\mathcal H}_F} \ee and the Dirac operator contains a continuous part and a discrete one, \be D_0=\gamma^\mu\del_\mu\otimes\mathbb I + \gamma\otimes D_F \ee In its most recent form, due to Chamseddine, Connes and Marcolli~\cite{AC2M2}, a crucial role is played by the mathematical requirement that the noncommutative algebra satisfies the conditions to be a manifold.
Then the internal algebra is almost uniquely derived to be \be {\mathcal A}_F=\complex\oplus{\mathbb H}\oplus M_3(\complex) \ee The bosonic spectral action can then be evaluated at one loop using standard heat kernel techniques~\cite{Vassilevich:2003xt}, and the final result gives the full action of the standard model coupled with gravity. We refrain from writing it since it takes more than one page in the original paper~\cite{AC2M2}. In the process, however, one does not need to input the mass of the Higgs, which comes out as a prediction. Its value comes out to be $\sim 170~\mathrm{GeV}$, a value which is experimentally disfavored. It must be said, however, that the present form of the model needs unification of the three coupling constants at a single energy point (given by $\Lambda$). The model also contains nonstandard gravitational terms (quadratic in the curvature), which are currently being investigated for their cosmological consequences~\cite{NelsonSakellariadu, MarcolliPierpaoli}. Technically the canonical bosonic spectral action is a sum of residues, and can be expanded in a power series in terms of $\Lambda^{-1}$ as \be S_B(\Lambda)=\sum_n f_n\, a_n(D^2/\Lambda^2) \ee where the $f_n$ are the momenta of $\chi$ \begin{eqnarray} f_0&=&\int_0^\infty \dd x\, x \chi(x)\nonumber\\ f_2&=&\int_0^\infty \dd x\, \chi(x)\nonumber\\ f_{2n+4}&=&(-1)^n \del^n_x \chi(x)\bigg|_{x=0} \ \ n\geq 0 \label{fcoeff} \end{eqnarray} and the $a_n$ are the Seeley-de Witt coefficients, which vanish for $n$ odd.
For $D^2$ of the form \be D^2=-(g^{\mu\nu}\del_\mu\del_\nu\one+\alpha^\mu\del_\mu+\beta) \ee defining \begin{eqnarray} \omega_\mu&=&\frac12 g_{\mu\nu}\left(\alpha^\nu+g^{\sigma\rho} \Gamma^\nu_{\sigma\rho}\one\right)\nonumber\\ \Omega_{\mu\nu}&=&\del_\mu\omega_\nu-\del_\nu\omega_\mu+[\omega_\mu,\omega_\nu]\nonumber\\ {\mathcal E}&=&\beta-g^{\mu\nu}\left(\del_\mu\omega_\nu+\omega_\mu\omega_\nu-\Gamma^\rho_{\mu\nu}\omega_\rho\right) \end{eqnarray} then \begin{eqnarray} a_0&=&\frac{\Lambda^4}{16\pi^2}\int\dd^4 x \sqrt{g} \tr\one_F\nonumber\\ a_2&=&\frac{\Lambda^2}{16\pi^2}\int\dd^4 x \sqrt{g} \tr\left(-\frac R6+{\mathcal E}\right)\nonumber\\ a_4&=&\frac{1}{16\pi^2}\frac{1}{360}\int\dd^4 x \sqrt{g} \tr(-12\nabla^\mu\nabla_\mu R +5R^2-2R_{\mu\nu}R^{\mu\nu}\nonumber\\ &&+2R_{\mu\nu\sigma\rho}R^{\mu\nu\sigma\rho}-60R{\mathcal E}+180{\mathcal E}^2+60\nabla^\mu\nabla_\mu {\mathcal E}+30\Omega_{\mu\nu}\Omega^{\mu\nu}) \label{spectralcoeff} \end{eqnarray} Here $\tr$ is the trace over the inner indices of the finite algebra $\mathcal A_F$, and $\Omega$ and $\mathcal E$ contain the gauge degrees of freedom, including the gauge stress energy tensors and the Higgs, which is given by the inner fluctuations of $D$. In our case, for $\phi$ constant, after performing the integration we find \bea S_{\mathrm{anom}}&=& \left(1 -\Lambda^2 \log\frac{\Lambda^2}{\mu^2} \partial_{\Lambda^2}\right) \int_0^\phi \dd t' S_B (\Lambda e^{t'}) \nonumber\\&=& \left(1 -\Lambda^2 \log\frac{\Lambda^2}{\mu^2} \partial_{\Lambda^2}\right) \int_0^\phi \dd t' \sum_n \e^{(4-n)t'} a_n f_n\nonumber\\&=& \frac{1}{8} (e^{4\phi}-1) a_0\left(1 - 2 \log\frac{\Lambda^2}{\mu^2} \right) + \frac{1}{2} (e^{2\phi}-1) a_2 \left(1 - \log\frac{\Lambda^2}{\mu^2} \right)+ \phi a_4 .\label{Sanomaaction} \eea There are just some numerical corrections to the first two Seeley-de Witt coefficients, due to the integration in $t' = t\phi$ and to the choice of the normalization scale $\mu$.
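The closed form in Eq.~\eqref{Sanomaaction} can be verified numerically (our toy check, not part of the paper): take the sharp cutoff, for which \eqref{fcoeff} gives $f_0=1/2$, $f_2=1$, $f_4=\chi(0)=1$, write $a_n=c_n\Lambda^{4-n}$ with arbitrary stand-in constants $c_n$, and compare the integral-plus-derivative definition of $S_{\mathrm{anom}}$ with the closed form.

```python
import math

# Arbitrary stand-ins for the Lambda-independent parts of the
# Seeley-de Witt coefficients: a_n = c_n * Lambda^(4-n).
c0, c2, c4 = 0.7, -1.3, 2.1
Lam, mu, phi = 3.0, 2.0, 0.4
f = {0: 0.5, 2: 1.0, 4: 1.0}  # sharp-cutoff momenta f_0, f_2, f_4

def S_B(L):
    """Bosonic spectral action S_B(Lambda) = sum_n f_n a_n(Lambda)."""
    return sum(f[n] * c * L**(4 - n) for n, c in ((0, c0), (2, c2), (4, c4)))

def integral(L, steps=20000):
    """int_0^phi dt' S_B(L e^{t'}) by the midpoint rule."""
    h = phi / steps
    return h * sum(S_B(L * math.exp(h * (i + 0.5))) for i in range(steps))

# S_anom = (1 - Lambda^2 log(Lambda^2/mu^2) d/dLambda^2) int_0^phi dt' S_B(...)
L2, eps = Lam**2, 1e-4
dI = (integral(math.sqrt(L2 + eps)) - integral(math.sqrt(L2 - eps))) / (2 * eps)
S_num = integral(Lam) - L2 * math.log(L2 / mu**2) * dI

# Closed form of the anomalous action, with a_0, a_2, a_4 evaluated at Lambda.
log = math.log(L2 / mu**2)
a0, a2, a4 = c0 * Lam**4, c2 * Lam**2, c4
S_closed = (0.125 * (math.exp(4 * phi) - 1) * a0 * (1 - 2 * log)
            + 0.5 * (math.exp(2 * phi) - 1) * a2 * (1 - log)
            + phi * a4)
assert abs(S_num - S_closed) < 1e-3 * abs(S_closed)
```

The factor $1/8$ is $f_0/4$ from $\int_0^\phi e^{4t'}\dd t'$, and the factors $(1-2\log)$, $(1-\log)$, $1$ follow from $\Lambda^2\partial_{\Lambda^2}$ acting on $\Lambda^4$, $\Lambda^2$, $\Lambda^0$ respectively.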
In the case of a non-sharp cutoff some numerical coefficients would change according to~\eqn{fcoeff}, and of course the series would not terminate at $a_4$. The corrections are however small, and the remaining terms are subdominant. Therefore the presence of a different cutoff would not alter the qualitative aspects of what follows. The sign with which this action appears in the partition function is of course crucial. We will see in the next section how the interpretation of $\phi$ as emerging from bosonization selects a sign, and later on, in Sect.~\ref{se:dilatoneffpot}, we will see that the sign chosen in this case gives a qualitatively realistic effective Higgs-dilaton potential. \section{Dilaton bosonization} In this section we will discuss the role of the dilaton, considering it as arising from a bosonization process of high energy degrees of freedom. We are interested in the effective potential, and therefore we will make the drastic assumption of considering only a \emph{constant} (i.e.\ not dependent on $x$) dilaton $\phi$ and Higgs field $H$. In this case $H$ is the only surviving term in the off-diagonal entries of~\eqn{dirac22}. In fact here by ``Higgs field'' we mean generically all degrees of freedom which connect left and right chiralities. The analysis carried out here is therefore quite solid and independent of the details of the model. The action after bosonization can be represented as \be Z(D, \mu)=\hat Z_{\mathrm{inv}}(D, \mu) \int\dd\phi\, \e^{-S_{\mathrm{coll}}} \label{caseB} \ee and then \be S_{\mathrm{coll}}= \log\left(\frac{Z( D_\phi, \mu)}{Z(D, \mu)}\right) = - S_{\mathrm{anom}},\la{sanomcoll} \ee which is to be compared with~\eqref{sanom1}. The Higgs mechanism of spontaneous symmetry breaking is not compatible with Weyl conformal invariance. Indeed, let us consider the dependence of the invariant partition function $Z_{\mathrm{inv}}$, given by (\ref{caseA}) or \eqref{caseA1}, on the Higgs field $H$.
\begin{equation} Z_{\mathrm{inv}} = e^{-W_{\mathrm{inv}}(H,g_{\mu\nu},...)}\label{inv1}, \end{equation} where \begin{equation} W_{\mathrm{inv}} = \int d^4 x \sqrt{|g|} (\lambda H^4 + \mbox{\rm terms with derivatives}) \label{inv2}. \end{equation} We omit on the right-hand side of~(\ref{inv2}) terms with derivatives of the Higgs field and powers of the Riemann curvature tensor, because in this work we are concerned only with the properties of the effective potential for the Higgs and dilaton fields. The form of $W_{\mathrm{inv}}$ could be guessed by dimensional analysis, but we show in the appendix how it emerges (with the correct sign). In~\eqref{inv2} there are no terms generating spontaneous symmetry breaking to supply the Higgs field with a mass, and accordingly we assume that the Higgs mass formation is related to the Weyl noninvariant part of the partition function. The latter is determined by the conformal anomaly term~\eqref{Sanomaaction}. Let us investigate how the composite dilaton field is related to the primary fields of the theory under consideration: $\psi,\bar{\psi},H$. For a fixed configuration of the Higgs field $H$, the dilaton field $\varphi$ appears as a result of bosonization of the fermions $\psi,\bar{\psi}$. The bosonization is defined by identifying \begin{equation} Z_{\mathrm{fermion}}(j) = Z_{\mathrm{boson}}(j) \label{bosonisation}, \end{equation} where \begin{eqnarray} &&Z_{\mathrm{fermion}}(j) = \int D \psi D\bar{\psi} e^{-W(\psi,\bar{\psi},j)},\nonumber\\ &&Z_{\mathrm{boson}} (j)\simeq \int D \varphi e^{-S_{\mathrm{coll}}(\varphi,j) - W_{\mathrm{inv}}(j)}= e^{-W_{\mathrm{coll}}(j) - W_{\mathrm{inv}}(j)}. \label{Zfb} \end{eqnarray} Herein $j$ denotes a set of sources. In our bosonization scheme the Higgs field $H$ is treated as a source for the scalar combination $\psi\bar{\psi}$, and therefore it is fixed in the process of dilaton bosonization (included into $j$).
In the definition of the bosonic partition function ``$\simeq$'' signifies that we neglect the bosonic fields with spin higher than 0 and retain only one scalar (dilaton) degree of freedom. We have already seen that the Higgs mechanism is presumably related to the violation of the Weyl symmetry, and the latter is given by the conformal anomaly term \eqref{Sanomaaction}, which involves only one zero-spin field besides the Higgs field itself. Thus we conclude that our simplification is reasonable. In order to investigate the full dynamics, one should deal with the total partition function \begin{equation} Z_{\mathrm{total}} = \int D H \left(Z_{\mathrm{fermion\ or\ boson}}(H,\tilde{j})\right), \end{equation} where $\tilde{j}$ is a set of sources for all quantum fields, excluding $H$. Varying both the left and the right hand sides of (\ref{bosonisation}) over $H$, one derives the equation that relates the fermion condensate $\langle \psi \bar{\psi} \rangle$ with the average values (over the bosonic vacuum) of a combination of the bosonic fields $H$ and $\varphi$, \begin{equation} \langle \psi \bar{\psi} \rangle \propto \frac{\delta \ln Z_{\mathrm{boson}}(H)}{\delta H} = -\left\langle \frac{\delta (S_{\mathrm{coll}}(H,\varphi)+W_{\mathrm{inv}}(H))}{\delta H} \right\rangle . \end{equation} This relation allows one to unravel the bosonic content of the fermion bilinear operator in the different phases: the symmetric one with $\langle H\rangle = 0$ and the symmetry-breaking one with $\langle H\rangle \not= 0$. Thus the two different choices of dilaton field correspond to two different interpretations of the $\phi$ degree of freedom. The different choices are described in the definitions~\eqn{caseA} and~\eqn{caseB}. From these descends the definition of the alternative $Z_{\mathrm{inv}}$ or $\hat Z_{\mathrm{inv}}$. The former choice~\eqn{caseA} is the natural one if one has a noninvariant partition function and wants to define an invariant one by including an extra fundamental degree of freedom.
The latter choice is instead the natural one in the case in which one starts from a non-invariant theory in which the dilaton is a composite object whose condensate restores a global symmetry. This bosonic degree of freedom can be some fermionic bilinear. In the following we will give some arguments in favor of this second choice, based on the interplay with the Higgs field. \section{The Dilaton and the effective potential \label{se:dilatoneffpot}} The full analysis of the model coupled with a dynamical dilaton is under way and will be published elsewhere. Nevertheless it is already possible to say something on the interplay between the dilaton and the Higgs, and in particular on the effective potential. This can be used to characterize cosmic evolution right after inflation starts. In particular, it may open the way to describe the transition from the radiation phase with massless particles to the EW symmetry breaking phase with spontaneous mass generation due to condensation of Higgs fields. \subsection{Mass generation from Higgs-dilaton potential during cosmic evolution} We will consider in the following only the potential terms relative to the complex Higgs doublet $H$ and the dilaton $\phi$. Because of Weyl invariance, within our approximations, the only allowed dependence on the Higgs field $H$ of $\hat Z_{\mathrm{inv}}$ in \eqref{caseB} is given by (see Eqs.~\eqref{inv1} and \eqref{inv2}) \be \hat Z_{\mathrm{inv}} = e^{-\hat W_{\mathrm{inv}}(H,g_{\mu\nu},...)}, \ee where \begin{equation} \hat W_{\mathrm{inv}} = \int d^4 x \sqrt{|g|} (-C\phi_0 H^4 + \mbox{\rm terms with derivatives}) \label{WINV}, \end{equation} with some (not yet defined) constant $\phi_0$ and a fixed positive constant $C$, which we define below in Eq.~\eqref{Ca}. Later we will see that only the choice $\phi_0 < 0$ supports the spontaneous EW symmetry breaking.
Let us define the effective Higgs-dilaton potential $V$ by the equality \be Z(D) = \int D\phi ~e^{-\hat W_{\mathrm{inv}}(H) - S_{\mathrm{coll}}(H,\phi)}\equiv \int D\phi ~e^{-\int d^4 x \sqrt{|g|}V(H,\phi)}.\la{Zscen2} \ee We can derive the form of the effective Higgs-dilaton potential. To focus on this goal we reduce the joint effective Higgs-dilaton (HD) potential by including only the real scalar component $H$ of the (complex) Higgs doublet $ (H_1, H_2) \to (0, H) $ subject to condensation. From the expression \eqref{Sanomaaction} for $S_{\mathrm{anom}}$ one obtains the following formula for the effective Higgs-dilaton potential $V$: \ba V &=& V_{\mathrm{coll}} + \hat W_{\mathrm{inv}},\label{01} \\ V_{\mathrm{coll}} &=& \tilde A \left(e^{4\phi}-1\right) + \tilde B H^2\left(e^{2\phi} - 1\right) - CH^4\phi. \la{vcoll} \ea The explicit form of the coefficients is given in the appendix. The quadratic term of the Higgs potential comes from the $a_2$ term of \eqn{Sanomaaction}, while the quartic one comes from the $a_4$ term and from $\hat W_{\mathrm{inv}}$. Evidently the constant $\phi_0$ can be eliminated by shifting the field $\phi \to \phi-\phi_0$ and rescaling the constants $\tilde A,\tilde B$. After performing renormalization the general form of the HD potential can be presented as, \be V = Ae^{4\phi} + BH^2e^{2\phi} - C\phi H^4 + EH^2 + V_0,\la{1} \ee where \ba A &=& \frac{45}{8\pi^2}\frac{\Lambda^4}{8}\left(2\log\frac{\Lambda^2}{\mu^2}-1\right)e^{-4\phi_0},\la{Aa}\\ B &=& \frac{3 y^2}{2\pi^2}\frac{\Lambda^2}{2}\left(1 - \log\frac{\Lambda^2}{\mu^2}\right)e^{-2\phi_0},\la{Ba} \\ C &=& \frac{3 z^2}{4\pi^2},\la{Ca}\\ E &=&-\frac{3 y^2}{2\pi^2}\frac{\Lambda^2}{2}\left(1 - \log\frac{\Lambda^2}{\mu^2}\right),\la{E}\\ V_0 &=& -\frac{45}{8\pi^2}\frac{\Lambda^4}{8}\left(2\log\frac{\Lambda^2}{\mu^2}-1\right).\la{V0} \ea In the formulas \eqref{Ba}, \eqref{Ca}, \eqref{E} the constants $y$ and $z$ depend on mixing and Yukawa couplings.
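As a cross-check of ours (not part of the original derivation), the elimination of $\phi_0$ can be verified symbolically. The following Python sketch, assuming \texttt{sympy} is available and writing \texttt{At}, \texttt{Bt} for $\tilde A,\tilde B$, confirms that the shift $\phi \to \phi - \phi_0$ maps $V_{\mathrm{coll}} + \hat W_{\mathrm{inv}}$ of \eqref{01}--\eqref{vcoll} into the form \eqref{1} with $A = \tilde A e^{-4\phi_0}$, $B = \tilde B e^{-2\phi_0}$, $E = -\tilde B$ and $V_0 = -\tilde A$, in agreement with the exponential factors appearing in \eqref{Aa}--\eqref{V0}.

```python
import sympy as sp

H, phi, phi0, At, Bt, C = sp.symbols('H phi phi0 At Bt C', real=True)

# V_coll + hat(W)_inv density, Eqs. (vcoll) and (WINV); At, Bt stand for A-tilde, B-tilde
V = (At * (sp.exp(4 * phi) - 1) + Bt * H**2 * (sp.exp(2 * phi) - 1)
     - C * H**4 * phi - C * phi0 * H**4)

# shift phi -> phi - phi0 and compare with the renormalized form, Eq. (1)
V_shifted = V.subs(phi, phi - phi0)
A, B, E, V0 = At * sp.exp(-4 * phi0), Bt * sp.exp(-2 * phi0), -Bt, -At
V_target = (A * sp.exp(4 * phi) + B * H**2 * sp.exp(2 * phi)
            - C * phi * H**4 + E * H**2 + V0)

# the difference vanishes identically: the phi0 pieces of the quartic term cancel
assert sp.expand(V_shifted - V_target, power_exp=True) == 0
```

The cancellation of the two $\phi_0 H^4$ pieces is the reason the quartic coupling $C$ carries no $\phi_0$ dependence in \eqref{Ca}.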
The exact definition of $y$ and $z$ can be found in \cite[Formula 3.17]{SpectralAction}. In~\eqref{1}, depending on the normalization scale $\mu$ of the fermion effective action compared with the cutoff $\Lambda$, one can in principle get any sign of the coefficients $A(\Lambda,\mu), B(\Lambda,\mu)\gtrless 0 $. Thus in general both signs and moduli of these constants $A,B$ are possible. Here we are interested in the evolution of the fields $\phi$ and $H$ and correspondingly neglect the additional cosmological constant $V_0$. We would like to apply the HD potential in the framework of the description of cosmic evolution. This evolution will depend principally on the signs of the constants, and on relations among their moduli. We therefore search for the combinations of signs which can provide the evolution from a symmetric phase to the EW symmetry broken phase, with the generation of fermion mass due to the Higgs fields. Thus one has to inquire whether the HD potential has local minima, and what restrictions on the coefficients provide the existence of such minima. Accordingly we are going to investigate all possible critical points\footnote{Here by critical point we mean a stationary one.} of this potential depending on the values of its coefficients. The potential~\eqref{1} has three arbitrary parameters $A,B,E$, but it must satisfy $\mathrm{sign}(B) = -\mathrm{sign}(E)$. The parameter $C$ is fixed and given by \eqref{Ca}. Nevertheless in the analysis of the extremal properties of $V$ performed below we shall consider arbitrary $C, B$ and $E$. We will see that, in order to have symmetry breaking, indeed the constant $C$ must be positive, and $E$ and $B$ must have opposite signs. This is a comforting result. Without loss of generality one can impose $C>0$. For the opposite sign of $C$ the set of critical points can be found by the reflection $V\to - V$. One can see that $V$ has no critical points at $H=0$.
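The sign ambiguity of $A$ and $B$ can be made concrete with a small illustrative check (ours): up to positive prefactors, \eqref{Aa} and \eqref{Ba} give $\mathrm{sign}(A) = \mathrm{sign}(2\log(\Lambda^2/\mu^2)-1)$ and $\mathrm{sign}(B) = \mathrm{sign}(1-\log(\Lambda^2/\mu^2))$, so $A$ flips sign at $\Lambda^2/\mu^2 = e^{1/2}$ and $B$ at $\Lambda^2/\mu^2 = e$. The sample ratios below are arbitrary.

```python
import math

# Signs of A and B from Eqs. (Aa), (Ba): up to positive prefactors
# (the exponentials e^{-4 phi0}, e^{-2 phi0} are always positive),
# sign(A) = sign(2 log(L^2/mu^2) - 1), sign(B) = sign(1 - log(L^2/mu^2)).
def sign_A(ratio):              # ratio = Lambda^2 / mu^2
    return math.copysign(1, 2 * math.log(ratio) - 1)

def sign_B(ratio):
    return math.copysign(1, 1 - math.log(ratio))

# A flips sign at Lambda^2/mu^2 = e^{1/2} ~ 1.65, B at Lambda^2/mu^2 = e ~ 2.72
assert (sign_A(1.0), sign_B(1.0)) == (-1.0, 1.0)     # mu of the order of Lambda
assert (sign_A(2.0), sign_B(2.0)) == (1.0, 1.0)      # e^{1/2} < ratio < e
assert (sign_A(10.0), sign_B(10.0)) == (1.0, -1.0)   # mu much below Lambda
```

All four sign combinations compatible with $\mathrm{sign}(B)=-\mathrm{sign}(E)$ are therefore reachable by tuning $\mu/\Lambda$, which is why the extremal analysis below is carried out for arbitrary signs.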
Let us perform the coordinate transformation to the variable $\eta$, \be H^2 = \eta e^{2\phi}.\label{13} \ee Such a transformation is non-degenerate at $H \neq 0$ and, since $V$ is symmetric under $H\to-H$, preserves all the information about the extremal properties of our potential. In the new variables the potential takes the form, \be V = e^{4\phi}\left(A + B\eta - C \phi \eta^2\right) + E e^{2\phi} \eta . \label{4} \ee The critical point coordinates obey the following equations, \ba 2A + B \eta - \frac{C}{2} \eta^2 &=& 0 \label{5}\\ \left(\frac{2C\eta}{E}\right)\phi - \frac{B}{E} &=& e^{-2\phi} \label{6} \ea with the additional requirement $\eta > 0$. From equation (\ref{5}) we immediately find, \be \eta_{1,2} = {\frac {4A}{-B\pm\sqrt {{B}^{2}+4\,AC}}} .\label{8} \ee It is known (for a quick introduction see e.g.~\cite{wiki}) that an equation of the type $a x + b = p^{c x + d}$, $a,c\neq 0$, can be solved exactly in terms of the Lambert $W(z)$ function~\cite{lambert}. By definition, the latter is a solution of the equation, \be z = W(z)e^{W(z)} . \label{lam} \ee The function $w e^{w}$ is not injective and $W$ is multivalued (except at 0). If we look for a real-valued $W$ then the relation \eqref{lam} is defined only for $z \geq -1/e$, and $W$ is double-valued on $(-1/e, 0)$. Let us introduce the notation $W_{0}(z)$ for the upper branch. It is defined on $-1/e \leq z < \infty$ and it is monotonically increasing from $-1$ to $+\infty$. The lower branch is usually denoted $W_{-1}(z)$. It is defined only on $-1/e\leq z< 0$ and it is monotonically decreasing from $-1$ to $-\infty$. In these terms the general solution of \eqref{6} is given by, \be \phi = \frac12 W \left( \frac{E e^{-\frac{B}{\eta C}}}{\eta C} \right) +{\frac {B}{2\eta\,C}} . \label{10} \ee Since we have two values of $\eta$ and the real $W$ is double-valued, the maximal number of critical points is four. However $\eta$ must be positive and real, and $\phi$ must be real.
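The solution \eqref{10} can be checked numerically. The following sketch (an illustration of ours, using \texttt{scipy}'s implementation of the Lambert function) verifies that it satisfies \eqref{6} on both real branches, for arbitrary sample values of the parameters chosen so that the argument of $W$ lies in $(-1/e,0)$.

```python
import numpy as np
from scipy.special import lambertw

def phi_critical(eta, B, C, E, k):
    """Candidate critical point, Eq. (10), on the real branch W_k (k = 0 or -1)."""
    z = E * np.exp(-B / (eta * C)) / (eta * C)
    return 0.5 * lambertw(z, k).real + B / (2 * eta * C)

def residual(phi, eta, B, C, E):
    """Left minus right side of Eq. (6): (2 C eta / E) phi - B/E - e^{-2 phi}."""
    return (2 * C * eta / E) * phi - B / E - np.exp(-2 * phi)

# arbitrary sample values with E < 0, for which z = E e^{-B/(eta C)}/(eta C)
# falls inside (-1/e, 0), where both real branches of W exist
eta, B, C, E = 5.0, 2.1, 1.0, -1.0
for k in (0, -1):                       # both branches solve Eq. (6)
    assert abs(residual(phi_critical(eta, B, C, E, k), eta, B, C, E)) < 1e-8
```

The substitution behind this check is elementary: with $\phi = \frac12 W(z) + \frac{B}{2\eta C}$ one has $2C\eta\phi - B = C\eta W(z)$ and $E e^{-2\phi} = E e^{-B/(\eta C)} e^{-W(z)}$, so \eqref{6} reduces to the defining relation $W e^{W} = z$.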
From the requirements that $\eta$ be positive and real and that $\phi$ be real, one obtains the restrictions on the coefficients which provide the existence of each critical point. We shall denote our critical points as $(m,n)$. Here the first index $m$ marks the sign $\pm$ and corresponds to the type of the chosen $\eta$ from~(\ref{8}). The index $n$ ranges over $-1,0$ and corresponds to the chosen branch of the $W$ function. We specify the type of each critical point with the help of the Hessian matrix eigenvalues, and find the following results for the acceptable compositions of coefficient signs. We seek combinations of signs of the coefficients $A,B,C,E$ which provide a \emph{minimum} triggering the spontaneous EW symmetry breaking at a final stage of cosmic evolution. There are 11 combinations of signs which are forbidden, as they do not provide the existence of a local minimum. \begin{table} \begin{center}\begin{tabular}{|c|c|c|c|}\hline sign (A)&sign(B)& sign(C)&sign(E)\\\hline $\pm$ & $\pm$& +&+\\\hline -&-&+&-\\\hline -&$\pm$&-&$\pm$\\\hline +&+&-&$\pm$\\\hline \end{tabular} \end{center} \caption{\sl Choice of signs which \emph{do not} give a local minimum to the potential. \label{Tab1}} \end{table} The only five combinations of signs which give the required minimum are shown in Table~\ref{Tab2}. \begin{table}[htp] \begin{center}\begin{tabular}{|c|c|c|c|}\hline sign (A)&sign(B)& sign(C)&sign(E)\\\hline + & $+$& +&-\\\hline +&-&+&-\\\hline -&$+$&+&-\\\hline +&-&-&$+$\\\hline +&-&-&$-$\\\hline \end{tabular} \end{center} \caption{\sl Choice of signs which \emph{do} give a local minimum to the potential. \label{Tab2}} \end{table} \subsection{Transition from the symmetric phase to the electroweak symmetry breaking phase and choice of signs} We now examine the possibility of a scenario where, at the first stage of the Universe's evolution, one deals with massless fermions and a vanishing vacuum expectation value of the Higgs field, $\langle H_{in} \rangle = 0$ (symmetric phase).
We consider an initial point $(\phi_{in}, H_{in} = 0)$ acceptable for starting evolution if the function $V|_{\phi = \phi_{in}}(H)$ has a local minimum at $H = 0$, and if we can roll down from the initial point to a final one which is a local minimum corresponding to the Higgs phase. We have listed in Table~\ref{Tab2} the five combinations of signs of the parameters $A$, $B$, $C$, $E$ which provide the existence of the local minimum. Nevertheless not all of these combinations support the above transition scenario. Indeed one can prove that this scenario can be realized only for positive $A,B,C$ and negative $E$. In this case the solution for the minimum belongs to the class $(+,-1)$ and the minimum (final-stage) coordinates are given by, \ba \eta_{fin} &=& {\frac {4A}{-B+\sqrt {{B}^{2}+4\,AC}}} > 0\label{20},\\ \phi_{fin} &=& \frac12 W_{-1} \left( \frac{E e^{-\frac{B}{\eta_{fin} C}}}{\eta_{fin} C} \right) +{\frac {B}{2\eta_{fin}\,C}} .\label{21} \ea The requirement for $\phi$ to be real leads to, \be {E_{min} < E < 0,~~~~~E_{min}\equiv - C\eta_{fin}\exp\left\{ -1 + \frac{B}{\eta_{fin} C} \right\}} \label{c6} \ee An additional bound exists on the coefficients, \be B e^{2\phi_{in}} + E > 0, \ee to guarantee that the initial point is in the symmetric phase. \begin{figure}[htb] \vskip -0.5cm \includegraphics [scale=.2]{pic_2_Nt.pdf} \centering \vskip -0.5cm \caption{The effective Higgs-dilaton potential in the vicinity of its two symmetric local minima: $H^2 = H^2_m = 0.31$ and $\phi = \phi_m = -1.38$. Black lines represent the sections of the plot of the potential by the surfaces of constant $\phi$ and constant $H$. Parameters are taken as follows: $A = 1$, $B = 2.1$, $C = 1$, $E = -1$.} \label{fig1} \vskip -0.5cm \end{figure} Evidently the phase transition point during evolution appears at $\phi_{crit} = (1/2) \ln(- E/B) < \phi_{in}$. It can be shown that $\phi_{fin} <\phi_{crit} <0$ and therefore $B+E > 0$.
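The coordinates of this minimum can be reproduced numerically. The sketch below (ours, for illustration, assuming \texttt{scipy} is available) evaluates \eqref{20} and \eqref{21} for the parameter values used in Fig.~\ref{fig1}, $A = 1$, $B = 2.1$, $C = 1$, $E = -1$, and recovers $\eta_{fin} = 5$, $\phi_{fin} \approx -1.38$ and $H^2 \approx 0.31$.

```python
import numpy as np
from scipy.special import lambertw

# Parameters of the worked example in Fig. 1 (A, B, C > 0, E < 0)
A, B, C, E = 1.0, 2.1, 1.0, -1.0

# Eq. (20): eta_fin = 4A / (-B + sqrt(B^2 + 4AC))
eta_fin = 4 * A / (-B + np.sqrt(B**2 + 4 * A * C))

# Eq. (21): phi_fin from the lower real branch W_{-1}
z = E * np.exp(-B / (eta_fin * C)) / (eta_fin * C)
phi_fin = 0.5 * lambertw(z, -1).real + B / (2 * eta_fin * C)

# Higgs v.e.v. squared from the change of variables (13): H^2 = eta e^{2 phi}
H2_fin = eta_fin * np.exp(2 * phi_fin)

assert abs(eta_fin - 5.0) < 1e-9
assert abs(phi_fin - (-1.38)) < 0.01   # the minimum quoted in Fig. 1
assert abs(H2_fin - 0.31) < 0.01
```

One can also check that $E$ lies inside the window \eqref{c6}: $E_{min} = -5\,e^{-1+0.42} \approx -2.8 < E < 0$.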
We remark that the inequality $B+E>0$ entails $|E_{min}| > |E|$. Moreover, in this case for $\phi_{in} \leq 0$ the Higgs potential is bounded from below for any value of the Higgs field. We notice in passing that, due to \eqref{Ba} and \eqref{E}, the critical point $\phi_{crit}$ coincides with $\phi_0$, and the latter comes from the invariant action $\hat W_{\mathrm{inv}}$ \eqref{WINV}. Thereby the requirement $\phi_{crit} = \phi_0 <0$ means that the invariant potential $-C\phi_0 H^4$ corresponding to $\hat W_{\mathrm{inv}}$ is bounded from below. We summarize our findings in Fig.~\ref{fig1}. One can see that for values of $\phi \simeq 0$ there is only one minimum of the function $V(H)|_{\phi = fixed}$, at $H = 0$. When we get closer to $\phi_m$, crossing $\phi_{crit}$, the phase transition occurs, and every function $V(H)|_{\phi = fixed}$ has two symmetric minima. The section of this three-dimensional plot at the initial point $\phi_{in} = -0.1$ is shown in Fig.~\ref{fig2} and reveals a single minimum in the Higgs field at $H = 0$. \begin{figure}[htb] \vskip -1.0cm \includegraphics [scale=.2]{sp.pdf}\centering \vskip -0.3cm \caption{$V(H)$ at the fixed value of $\phi = \phi_{in} = -0.1$ i.e. the profile of the potential in the symmetric phase. $A = 1$, $B = 2.1$, $C = 1$, $E = -1$. } \label{fig2} \end{figure} Such a choice of the parameters provides the existence of the local minimum at the late stage of the Universe's evolution, at $\phi_m = -1.38$ and $H_m^2 = 0.31$. So in the Higgs phase ($\phi = \phi_m $) one has the following potential behavior, $ V(H, \phi_m) = 0.0039 - 0.87H^2 + 1.38H^4$, see Fig.~\ref{fig3}. \begin{figure}[htb] \includegraphics [scale=.2]{pic_3_N.pdf}\centering \vskip -0.3cm \caption{$V(H)$ at the fixed value of $\phi = \phi_m = -1.38$ i.e. the profile of the potential in the Higgs phase. $A = 1$, $B = 2.1$, $C = 1$, $E = -1$.
}\label{fig3} \end{figure} In Fig.~\ref{fig4} \begin{figure}[htb] \includegraphics [scale=.64]{01.pdf}\centering \vskip -0.3cm \caption{A similar effective Higgs-dilaton potential in the vicinity of its local minimum: $H = H_m = 2.29$ and $\phi = \phi_m = -0.72$, chosen to display the saddle point. Colored lines represent the sections of the plot of the potential by the surfaces of constant $\phi$ and constant $H$. Parameters are taken as follows: $A = 1$, $B = 2.1$, $C = 0.2$, $E = -2$.} \label{fig4} \end{figure} another view of the plot of the effective potential is taken, in order to demonstrate that the saddle point lies aside of the steepest-descent path. \subsection{Remark} Let us notice that one is not allowed to identify the minimal value of the HD potential, $V_{min}$, with the cosmological constant, because for $A,B,C >0$, $-B < E< 0$ we can easily prove that $V_{min}<0$. Indeed, our potential \eqref{1} satisfies the following relation: \be V = \frac{1}{4}\left( \frac{\partial V}{\partial\phi} + H\frac{\partial V}{\partial H} \right)+ \frac{H^2}{4}\left(CH^2 + 2E \right) + V_0\label{relat} \ee and thereby its minimal value is given by \be V_{m} =V_0 +\frac{H_m^2 }{4}\left(CH_m^2 + 2E\right). \label{vmin} \ee For a given value of the Higgs v.e.v., with $H_m^2\equiv \eta_{fin}e^{2\phi_{fin}}$, one can present the coefficient $E$ in the form \be E = H_m^2 \cdot C W_{-1}\left(\frac{E e^{-\frac{B}{C\eta_{fin}}}}{C\eta_{fin}}\right) \la{E1}. \ee Substituting \eqref{E1} into \eqref{vmin} we have: \be V_m = V_0 + \frac{C H_m^4}{4}\left(1+2W_{-1}\left(\frac{E e^{-\frac{B}{C\eta_{fin}}}}{C\eta_{fin}}\right)\right) \ee and taking into account that $V_0 <0$ and $W_{-1} \leq -1$, we finally obtain: \be V_m < -\frac{CH_m^4}{4} < 0.
\ee Anyway we suppose that the observed cosmological constant is generated by both visible and dark matter, and that only visible matter participates in the dilaton-bosonization process considered above; hence $V_m$ can be identified with a (negative) contribution to the total (positive) cosmological constant. Let us first use the metric $g_{\mu\nu}$ as a background one, and therefore treat the dark and visible sectors independently, \[ Z_{\mathrm{total}}(g_{\mu\nu})=Z_{\mathrm{dark}}(g_{\mu\nu})\cdot Z_{\mathrm{SM}}(\tilde g_{\mu\nu}, H)\Big|_{\tilde g_{\mu\nu} = g_{\mu\nu}} . \] Performing the bosonization $\tilde g_{\mu\nu} \to \tilde g_{\mu\nu} \exp(2\phi);\ H \to H \exp(-\phi)$ in the SM sector only, one finds,\[ Z_{\mathrm{SM}}(\tilde g_{\mu\nu}, H) = \hat Z_{\mathrm{inv}}(\tilde g_{\mu\nu}, H) \int d\phi\, e^{-S_{\mathrm{coll}}(\tilde g_{\mu\nu}, H, \phi)} \simeq \int d\phi e^{-\int d^4x \sqrt{- g} V (\tilde g_{\mu\nu}, H, \phi)} . \] The total cosmological generating functional is produced after averaging over gravity, i.e.\ over the metrics, \[ Z_{\mathrm{cosm}}= \int {\cal D} g_{\mu\nu} \times [\mathrm{gauge \ fixing}]\times e^{- W_{grav}(g)} Z_{\mathrm{dark}}(g_{\mu\nu})\cdot Z_{\mathrm{SM}}(g_{\mu\nu}, H) . \] The latter integral, in the vacuum energy approximation for the matter fields, entails the determination of the cosmological constant, \[Z_{\mathrm{cosm}} \sim \int {\cal D} g_{\mu\nu} \times [\mathrm{gauge \ fixing}]\times e^{- W_{grav}(g) - \int d^4x \sqrt{- g} \frac{\Lambda_{cosm}}{8\pi G_N}}, \] which evidently yields, \[\frac{\Lambda_{cosm}}{8\pi G_N} = V_{0,SM} +V_{0,\mathrm{dark}}, \quad \int d^4x \sqrt{- g} V_{0,\mathrm{dark}} \simeq - \log Z_{\mathrm{dark}} . \] All formulas refer to Euclidean space-time and can easily be rewritten for the Minkowski one. \section{Conclusions} In this paper we have seen how the bosonic spectral action emerges from the fermionic action and the Weyl anomaly via the renormalization group flow.
In this sense we can say that the bosonic degrees of freedom are induced by the fermionic ones. The procedure followed is spectral and therefore well suited for the noncommutative approach to the standard model. In the presence of a fundamental scale, and therefore of a non-Weyl-invariant fundamental theory, the action emerges in terms of a composite dilaton. What we find particularly encouraging is the fact that, at the level of the effective potential, the theory gives rise to a Higgs-dilaton potential with desirable qualitative features, i.e.\ the presence of both a broken and an unbroken phase, and the possibility to roll from the latter to the former. We did so using just the bare essential ingredients of the spectral action, and therefore the results, while necessarily generic and qualitative, are to a large extent independent of the details of the model. We see a certain relevance of a Higgs-dilaton potential of our type for the realization of Higgs-field-assisted inflation and the further stages of the Universe's evolution studied in \cite{shaposh,Barvinsky:2009jd}. A refinement of this work taking other degrees of freedom into account is possible, and partially under way. \acknowledgments This work has been supported in part by CUR Generalitat de Catalunya under project 2009SGR502, by project FPA2010-20807-C02-01 and by Consolider CPAN. The work of A.A.A.\ and M.A.K.\ was supported by Grant RFBR 09-02-00073, 11-01-12103-ofi-m and SPbSU grant 11.0.64.2010. M.A.K.\ is supported by a Dynasty Foundation stipend. \setcounter{section}{0}
\section{Introduction} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{7364fig1.eps}} \caption{Light curves of Abell\,43 shown for the three long NOT runs. The solid lines are synthetic light curves constructed from the parameters listed in Table 2, and shifted downward by 0.005 units for clarity.} \label{Fig1} \end{figure} Hydrogen deficient hot white dwarf stars constitute the PG\,1159 class, and are often found at the centre of planetary nebulae. \object {Abell\,43\,(=\,WD\,1751+10)} is one of the four known hybrid PG\,1159 stars, which show evidence of a non-negligible hydrogen abundance in their atmospheres (Napiwotzki \& Sch\"onberner \cite{napiwotzki}). Miksa et al. (\cite{miksa}) recently published the following atmospheric parameters for Abell\,43: ${\mathrm{T}}_{\mathrm{eff}}$\,=\,110\,kK, log $g$\,=\,5.7, X(H)\,=\,0.35, X(He)\,=\,0.42, and X(C)\,=\,0.23 in mass fractions. Werner \& Herwig (\cite{werner}) found an even higher carbon abundance from HST ultra-violet observations of Abell\,43. Low carbon abundance determinations by Dreizler et al. (\cite{dreizler}) had initially led Quirion et al. (\cite{quirion04}) to rule out pulsations in Abell\,43 based on their model calculations. But the recent high carbon abundance measurements led Quirion et al. (\cite{quirion05}) to re-evaluate their previous conclusions about Abell\,43. They found that the revised models suggested that Abell\,43 should exhibit $\ell$\,=\,1 $g$-mode pulsations, driven by the classical $\kappa$-mechanism due to the ionization of carbon K-shell electrons, in the range of $\sim$2604--5529\,s. C\'orsico et al. (\cite{corsico}) carried out nonadiabatic pulsation calculations for a grid of evolutionary PG\,1159 models. They predicted that Abell\,43 should exhibit periods in the range of $\sim$2600--7000\,s adopting a 0.530M$_{\sun}$ stellar mass evolutionary track. 
Ciardullo \& Bond (\cite{ciardullo}) were the first to report possible variations in the light curve of Abell\,43 with a period of about 2473\,s. Schuh et al. (\cite{schuh}) detected a longer period of 5500\,s. They also suggested that the $\sim$2500\,s period reported by Ciardullo \& Bond (\cite{ciardullo}) could simply be an artifact of the 5500\,s period caused by the short duration of the observations. Vauclair et al. (\cite{vauclair}) discovered multiple modes in the pulsation spectrum of Abell\,43, and reported periods at 2600\,s and 3035\,s. Their observations were based on three individual runs, amounting to a net observation time of 7.5\,hr at the 2.6\,m Nordic Optical Telescope (NOT). In this paper, we report the results from 24 hours of time-series photometry acquired on Abell\,43 in 2005 with the 2.6\,m NOT and the 3.5\,m telescope at Apache Point Observatory (APO). We also report pulsation periods detected in the hybrid PG\,1159 star \object {NGC\,7094\, (=\,WD\,2134+125)}, which is almost like a spectroscopic twin of Abell\,43. \section{Observations} \begin{table} \begin{minipage}[t]{\columnwidth} \caption{Log of observations of Abell\,43} \label{tablelog} \centering \renewcommand{\footnoterule}{} \begin{tabular}{c c c c r c } \hline\hline Date & Start & Length& Cycle&Data& Observer\\ & (UT) &(hrs)& time(s)&points&\\ \hline 2005-06-29 & 21:28 &4.40 & 30 & 541 & JES\footnote{J.~-~E.~Solheim} \\ 2005-07-01 & 03:43 &1.56 & 20 & 283 & JES \\ 2005-07-01 & 09:14 &1.54& 37 & 51 & ASM\footnote{A.~S.~Mukadam} \\ 2005-07-01 & 21:08 &7.71& 20 & 1399 & JES \\ 2005-07-02 & 21:07 &7.32& 20&1319 & JES \\ 2005-07-10 & 07:24 & 1.71&41 & 110& ASM \\ \hline \end{tabular} \end{minipage} \end{table} We observed Abell\,43 for four nights, from the 29th of June to the 2nd of July, 2005 using the 2.6\,m NOT in addition to two short runs with the 3.5\,m telescope at APO on the 1st and 10th of July, 2005. The log of observations is given in Table~\ref{tablelog}. 
We conducted observations at the NOT using the Andalucia Faint Object Spectrograph and Camera (ALFOSC), which is equipped with a thinned 2048\,$\times$\,2048 E2V CCD\,42-40 chip. We observed with the broadband blue W92 filter, which has a FWHM of 275\,nm centered at 550\,nm; this enables us to gather a fair amount of flux from the target and minimize the sky contribution at the same time. We acquired suitable flat fields each night just after sunset and determined the dark current for each individual image using the overscan region. We controlled the ALFOSC data acquisition using the software interface \emph{tcpcom}, defining multiple windows to achieve fast readouts (\O stensen \cite{ostensen}). We achieved a net readout time of 4.8 to 5.8\,s for the six separate windows around the target star, three comparison stars, and two blank fields for determining the simultaneous sky brightness. We chose a time resolution of 30\,s during our first night at the NOT telescope, and our exposure time became 23.9\,s. We reduced the cycle time to 20\,s on subsequent nights, and achieved exposure times of about 14.1, 14.8, and 14.9\,s for slightly different windowing patterns. With real-time processing, we were able to display light curves of the raw and sky-subtracted data during acquisition. We followed the standard basic reduction procedure, subtracting a dark frame from each image and then flat-fielding it. After these preliminary steps, we computed light curves using aperture photometry for several aperture sizes. We selected the best light curve based on the highest S/N ratio. For the observations outlined above, we obtained the best result for an aperture diameter of 24 pixels, corresponding to 4.6\,arcseconds on the sky. We corrected for transparency variations and differential extinction by dividing the target star light curve by the average light curve of the comparison stars. In Fig. \ref{Fig1} we show the light curves from the three longest NOT runs.
The APO observations were acquired with the Seaver Prototype Imaging camera (SPIcam) and a Johnson B filter. We used 3x3 binning and windowing in order to reduce the readout time to about 25--35\,s. The instrument has a read noise of 5 electrons RMS, a gain of 1 e/ADU, and a plate scale of 0.42 arcsec/pixel with 3x3 binning. We used a standard IRAF reduction to extract sky-subtracted light curves from the CCD frames using weighted circular aperture photometry (O'Donoghue et al. \cite{odonoghue}). \section{Hybrid PG\,1159 star Abell\,43} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{7364fig2.eps}} \caption{Pulsation spectrum of the light curve of the NOT run from the 2nd of July, 2005. The inset is a blow-up of the low-frequency region, which shows the evident lack of excited frequencies above 0.5\,mHz.} \label{Fig2} \end{figure} \begin{table*} \begin{minipage}[t]{\columnwidth} \caption{Identified periodic signals in Abell\,43 in the three long NOT runs} \label{tableruns} \centering \renewcommand{\footnoterule}{} \begin{tabular}{c c r r c r r c r r c r r c c} \hline\hline Date & Res.&F1 & P1&A1 &F2&P2&A2&F3&P3&A3&F4&P4&A4&F3/F1\\ 2005&($\mu$Hz)&($\mu$Hz)&(s)&(mma\footnote{relative amplitude$\times$10$^{-3}$})&($\mu$Hz)&(s)&(mma)&($\mu$Hz)&(s)&(mma)&($\mu$Hz)&(s)&(mma)\\ \hline 06-29&63&183.3&5456&2.0&&&&363.9&2748&2.3&416.3&2402&0.8&1.98 \\ $\sigma$&&1.4&41&0.1&&&&1.2&10&0.1&3.6&21&0.1\\ 07-01&36&175.5&5698&3.1&321&3118&1.0&366.4&2729&2.7&408.3&2449&1.4&2.08\\ $\sigma$&&1.2&38&0.1&3.4&34&0.1&0.9&10&0.1&3.4&15&0.1\\ 07-02&38&185.2&5399&2.1&325&3075&0.8&368.8&2711&1.9&418.9&2387&2.0&1.99\\ $\sigma$&&1.6&47&0.1&4.4&41&0.1&1.8&1.4&0.1&3.3&19&0.1\\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{Single night observations} We initially computed Discrete Fourier Transforms (DFTs) of the three long light curves acquired using the NOT. We show only the DFT from one of the longest runs in Fig. \ref{Fig2}, as the other DFTs look similar.
We list the following observed features evident from the DFTs of our NOT data: \begin{itemize} \item Each pulsation spectrum shows three dominant components, which are relatively stable in frequency, but exhibit variable amplitudes. \item All pulsation spectra have a strong high-frequency cut-off above 0.5 mHz, beyond which we do not detect any signals. \item The light curves are non-stationary\footnote{When signals detected in a light curve are not present for its entire duration, it is said to be non-stationary in nature.} and we have to be cautious in drawing conclusions using standard tools for spectral analysis. \end{itemize} The dramatic cut-off implies that we only find pulsations with periods longer than $\sim$2000\,s, as predicted both by Quirion et al. (\cite{quirion05}) and C\'orsico et al. (\cite{corsico}). We determined the frequencies, amplitudes, and phases of the significant peaks in the pulsation spectra using a simultaneous non-linear least squares fit (\emph{Period04}; Lenz \& Breger \cite{lenz}). To determine which peaks in the DFT constituted real power, we used the method of prewhitening in conjunction with the non-linear least squares analysis. This involves fitting the dominant component and subtracting it from the light curve. If we now compute a DFT of this resultant light curve, the peak corresponding to the dominant mode will be absent. We repeat the procedure for the next highest peak in the prewhitened DFT, fitting both high-amplitude components simultaneously and then subtracting them from the light curve. The resultant DFT will not show power due to both the highest and second-highest peaks. We continue this process as long as there are well-resolved peaks in the DFT. When the prewhitened DFT does not show any significant power, we can be sure that we have determined the entire pulsation spectrum using the simultaneous non-linear least squares fit. Fig. \ref{finalFT} serves as a demonstration of the prewhitening method.
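The prewhitening procedure described above can be sketched in a few lines of Python. The following schematic is our own illustration, not the actual \emph{Period04} pipeline; the synthetic light curve, noise level and frequency grid are invented for the demonstration. It recovers two of the observed frequencies by iteratively locating the strongest DFT peak and refitting all components found so far by simultaneous linear least squares.

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """DFT amplitudes of the light curve y(t) on an arbitrary frequency grid (Hz)."""
    return 2.0 / len(t) * np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ y)

def prewhiten(t, y, freqs, n_modes):
    """Iteratively pick the strongest peak in the DFT of the residuals, then refit
    ALL modes found so far simultaneously (linear least squares) and subtract them."""
    found, resid = [], y.copy()
    for _ in range(n_modes):
        found.append(freqs[np.argmax(amplitude_spectrum(t, resid, freqs))])
        # design matrix: a sine/cosine pair per mode plus a constant offset
        X = np.column_stack([g(2 * np.pi * nu * t) for nu in found
                             for g in (np.sin, np.cos)] + [np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef          # prewhitened light curve
    return sorted(found)

# synthetic light curve with two of the observed periods (5442 s and 2680 s)
t = np.arange(0.0, 30000.0, 20.0)     # 20 s cycle time, roughly one long run
rng = np.random.default_rng(0)
y = (0.002 * np.sin(2 * np.pi * t / 5442.0)
     + 0.002 * np.sin(2 * np.pi * t / 2680.0 + 1.0)
     + 0.0002 * rng.standard_normal(t.size))
freqs = np.linspace(50e-6, 500e-6, 2000)   # 50-500 microhertz search grid
nu1, nu2 = prewhiten(t, y, freqs, 2)
```

With the parameters above the two recovered frequencies agree with the input periods to well within the formal frequency resolution of the simulated run.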
We show the results of our analysis for the three longest NOT runs in Table \ref{tableruns}, which lists the significant periods detected with a False Alarm Probability better than 1/100 (Scargle \cite{scargle}). The uncertainties listed in the table are 1\,$\sigma$ errors from the non-linear least squares fit. Fig. \ref{Fig1} also includes the synthetic light curves constructed from the frequencies, amplitudes and phases determined as above. It is evident from Table \ref{tableruns} that F1 varies most in frequency and amplitude from night to night. These variations may be caused by unresolved power. We also notice that the ratio of the frequencies F3:F1 is very close to two on the first and third nights, suggesting a harmonic relation. The ratio is close to 2.08 on the second night and may perhaps be due to F1 being an unresolved multiplet. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{7364fig3.eps}} \caption{We show the DFT computed from the combined light curve of five runs used in the analysis. The top curve shows the window function. The second curve shows the DFT of the light curve. The third curve shows the DFT after subtracting out the two highest amplitude peaks. The lower four curves show the DFT after subtracting three, four, five, and six frequencies respectively.} \label{finalFT}% \end{figure} If we compare the periods we detect to those observed earlier, we find that the period 2473\,s discovered by Ciardullo \& Bond (\cite{ciardullo}) is close to P4, while the period of 5500\,s found by Schuh et al. (\cite{schuh}) is close to P1. The two periods, 2600\,s and 3035\,s, discovered by Vauclair et al. (\cite{vauclair}) are not too far from P3 and P2. The discrepancies may either be due to insufficient resolution or may perhaps indicate real period changes between the different pulsation modes.
\subsection{Combining all useful observations} To investigate whether the nightly variations in frequencies and amplitudes were due to unresolved peaks, we re-analyzed the combined light curves with the addition of the APO observations from the 10th of July, 2005. Note that we could not utilize the APO observations from the 1st of July as bad weather rendered them far too noisy for inclusion in our analysis. We computed a combined DFT again, and used the prewhitening technique in conjunction with the non-linear least squares analysis to arrive at the values listed in Table \ref{tablecomb}. We show our final DFT in Fig. \ref{finalFT}, along with the different stages of prewhitening. We find that the power seen earlier as F1 split into two individual frequencies f1 and f2, while the power at F3 split into f4 and f5. Using the new frequency components, we find that { f3/f1\,=\,1.991 and (f4\,+\,f5)/2\,f2\,=\,2.005, giving us two ratios of frequencies close to 2. \begin{table} \begin{minipage}[t]{\columnwidth} \caption{Identified periodic signals in Abell\,43 from the combined data set} \label{tablecomb} \centering \renewcommand{\footnoterule}{} \begin{tabular}{c r r c c } \hline\hline Name& Freq & Period & Amplitude &Noise\\ & (${\mu}$Hz) & (s) & (mma\footnote{relative amplitude$\times$10$^{-3}$})&(mma) \\ \hline f1&164.62& 6075 & 0.8 & 0.15\\ $\sigma$&0.08&3&0.1\\ f2 & 183.76& 5442 & 1.9&0.15 \\ $\sigma$&0.03&1.0&0.1\\ f3& 327.70 &3051.6 &0.9&0.20 \\ $\sigma$&0.07&0.7&0.1\\ f4& 363.79& 2748.8 & 0.9&0.20\\ $\sigma$&0.08&0.6&0.1\\ f5&373.12&2680.1&2.0&0.20\\ $\sigma$&0.03&0.2&0.1\\ f6&420.23&2379.7&1.0&0.20\\ $\sigma$&0.06&0.4&0.1\\ \hline \end{tabular} \end{minipage} \end{table} Averaging the observations to a sampling time of 200\,s, we show the observed light curves along with the synthetic light curves constructed from our fit in Fig. \ref{lcp1}; an inspection of this figure reveals that the fit is quite good. 
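The quoted ratios follow directly from the central frequency values of Table \ref{tablecomb}; a quick arithmetic check (central values only, ignoring the uncertainties):

```python
# central frequency values in muHz from the combined data set
f1, f2, f3, f4, f5 = 164.62, 183.76, 327.70, 363.79, 373.12

r1 = f3 / f1               # harmonic-like ratio
r2 = (f4 + f5) / (2 * f2)  # mean of f4, f5 against f2
print(round(r1, 3), round(r2, 3))  # 1.991 2.005

# frequency-period consistency for f1 (muHz -> s)
print(round(1e6 / f1))  # 6075
```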
This tells us that Abell\,43 is more stable than the pulsating PNNi with shorter periods, investigated by Gonz\'alez P\'erez et al. (\cite{perez}). We also notice how the light curve can change dramatically on short time scales due to beating of the closely spaced modes; this suggests that observations spanning less than the beat cycle may give wrong results or show no pulsations at all. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{7364fig4.eps}} \caption{We show observed light curves, averaged over a sampling time of 200\,s, compared with synthetic curves constructed from parameters in Table \ref{tablecomb}.} \label{lcp1}% \end{figure} The shortest period we detect in our data is 2380\,s, which is quite close to the predicted lower limit of 2600\,s for $\ell$\,=\,1 $g$-mode pulsations induced by the $\kappa$-mechanism (Quirion et al. \cite{quirion05} and C\'orsico et al. \cite{corsico}). The longest period of 6075\,s is slightly longer than predicted by Quirion et al. (\cite{quirion05}) but within the range predicted by C\'orsico et al. (\cite{corsico}). The non-stationary nature of the light curves makes it difficult to draw firm conclusions about the frequency ratios close to 2.00. They may indicate simple harmonics or non-linear resonances as proposed by Buchler et al. (\cite{buchler}).} \section{Hybrid PG\,1159 star NGC\,7094: Almost a spectroscopic twin of Abell\,43} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{7364fig5.eps}} \caption{We show the observed light curve of NGC\,7094 with a simulated light curve based on the periods 34, 50, and 83\,min.} \label{n7094fig} \end{figure} The hybrid PG\,1159 star NGC\,7094 shows almost the same spectroscopic parameters as Abell\,43, except that the 1\% He in Abell\,43 has to be replaced by 1\% O (Miksa et al. \cite{miksa}). Quirion et al. (\cite{quirion05}) predicted that NGC\,7094 should pulsate with periods in the range of 2550\,-\,5413\,s. Previous searches by Ciardullo et al.
(\cite{ciardullo}) and Gonz\'alez P\'erez et al. (\cite{perez}) did not reveal any pulsations in the star. The latter observation lasted only 5220\,s and was probably too short to detect pulsations in the predicted range. To investigate the variability of NGC\,7094, we asked for a service run on the NOT, which can last at most 4\,hr. We obtained a light curve of length 13692\,s with ALFOSC on the 29th of November, 2006. We used the B filter and a window of size 1k\,$\times$\,1k pixels. We chose an exposure time of 30\,s and the readout time was about the same, giving us a time resolution of 60\,s. Fig. \ref{n7094fig} shows the light curve we obtained from our observations. With the false alarm probability criterion of 1/100 or better, we identified three peaks with periods between 2000\,s and 5000\,s. A synthetic light curve constructed from these peaks is also shown in Fig. \ref{n7094fig}. Longer observing runs are needed to characterize the pulsations better, but this paper serves to establish NGC\,7094 as a low-amplitude pulsator. The range of pulsation periods is shifted toward shorter periods relative to Abell\,43. This is predicted by Quirion et al. (\cite{quirion05}) and is a consequence of adding a trace of oxygen in their models. The observed lower period limit of 2000\,s is shorter than the theoretical limit of 2550\,s predicted by the same authors. \section{Conclusions} For the hybrid PG\,1159 star Abell\,43, we find six pulsation periods in a range close to that predicted for $\ell$\,=\,1 $g$-mode pulsations driven by the $\kappa$-mechanism due to partially ionized carbon (Quirion et al. \cite{quirion05}; C\'orsico et al. \cite{corsico}). The light curves are non-stationary, but the pulsation frequencies and amplitudes are relatively stable during the 11-day span of the observations.
This is different from pulsating PNNi, which showed changes in amplitudes from night to night, and also during the same night (Gonz\'alez P\'erez et al. \cite{perez}). We have also detected pulsations in the hybrid PG\,1159 star NGC\,7094, which is an approximate spectroscopic twin of Abell\,43. The observed pulsations lie in the range of 2000--5000\,s, in a band of periods shifted to shorter values than for Abell\,43. This agrees with theory and occurs when a small amount of oxygen is added to the model pulsator (Quirion et al. \cite{quirion05}), although the lowest period observed is somewhat shorter than predicted for NGC\,7094. \begin{acknowledgements} The data presented here have mostly been acquired using ALFOSC, which is owned by the Instituto de Astrofisica de Andalucia (IAA) and operated at the Nordic Optical Telescope under agreement between IAA and the NBIfAFG of the Astronomical Observatory of Copenhagen. We would like to specially thank the NOT staff for their service observations of NGC\,7094 resulting in the detection of a new pulsator. The rest of the data are based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which is owned and operated by the Astrophysical Research Consortium. Support for a fraction of this work was provided by NASA through the Hubble Fellowship grant HST-HF-01175.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. We also thank the anonymous referee for suggesting a number of improvements to the first version of this manuscript. \end{acknowledgements}
\section{Introduction} An $\text{SL}(3,\mathbb{C})$-structure on an oriented manifold of real dimension $6$ is defined by a definite real $3$-form $\rho$, i.e. by a stable $3$-form $\rho$ inducing an almost complex structure $J_{\rho}$ (see \cite{Hitchin,Reichel}). We shall say that the $\text{SL}(3,\mathbb{C})$-structure $\rho$ is \emph{closed} if $d\rho=0$. As remarked in \cite{Donaldson}, closed $\text{SL}(3,\mathbb{C})$-structures obey an $h$-principle, since any hypersurface in $\mathbb R^7$ acquires a closed $\text{SL}(3,\mathbb{C})$-structure. A special case of closed $\text{SL}(3,\mathbb{C})$-structure is given by a \emph{closed $\text{\normalfont SU}(3)$-structure}, i.e. by the data of an almost Hermitian structure $(J, g, \omega)$ and a $(3,0)$-form $\Psi$ of non-zero constant length satisfying $$ \frac{i}{2} \Psi \wedge \overline \Psi = \frac 23 \omega^3, \quad d (\text{Re}(\Psi) )=0.$$ Indeed, the $3$-form $\rho = \text{Re}(\Psi)$ defines a closed $\text{SL}(3,\mathbb{C})$-structure such that $J_{\rho} = J$. As shown in \cite{Donaldson}, a closed $\text{SL}(3,\mathbb{C})$-structure always determines a real $3$-form $ \hat \rho \coloneqq J_{\rho} \rho$ such that $d \hat \rho $ is of type $(2,2)$ with respect to $J_{\rho}$. Moreover, $\hat{\rho}$ is the imaginary part of a complex $(3,0)$-form $\Psi$. We shall say that a closed $\text{SL}(3,\mathbb{C})$-structure is \emph{mean convex} if the $(2,2)$-form $d \hat \rho$ is semi-positive. Note that $J_{\rho}$ is integrable if and only if $d (J_{\rho} \rho) = 0.$ A special class of mean convex closed $\text{SL}(3,\mathbb{C})$-structures is given by \emph{nearly-K\"ahler} structures.
Indeed, a nearly-K\"ahler structure can be defined as an $\text{SU}(3)$-structure $( \omega, \Psi)$ satisfying the following conditions: \[d\omega=-\frac{3}{2}\nu_0 \, \text{Re}(\Psi), \quad d (\text{Im}(\Psi))=\nu_0 \, \omega^2, \] where $\nu_0\in \mathbb{R}-\{0\}$; therefore, up to a change of sign of $\text{Re}(\Psi)$, we can suppose $\nu_0>0$. The nearly-K\"ahler condition forces the induced Riemannian metric $g$ to be Einstein and, up to now, very few examples of manifolds admitting complete nearly-K\"ahler structures are known \cite{Butruille, FoscoloHaskins, Gray1, Gray2, Nagy1, Nagy2}. More generally, an $\text{SU}(3)$-structure $(\omega, \Psi)$ such that $d (\text{Re}(\Psi)) =0$ and $d (\omega \wedge \omega)=0$ is called {\em{half-flat}}; see for instance \cite{BelCorFreGoe, Bryant, ChiossiSalamon, Conti, CorLeiSchSH, FidMinTom, GibLuPopSte, Hitchin2, Larfors} for general results on these types of structures. In particular, every oriented hypersurface of a Riemannian $7$-manifold with holonomy in $\text{G}_2$ is naturally endowed with a half-flat $\text{SU}(3)$-structure and, conversely, using the Hitchin flow equations, a $6$-manifold with a real analytic half-flat $\text{SU}(3)$-structure can be realized as a hypersurface of a $7$-manifold with holonomy in $\text{G}_2$ \cite{Bryant, Hitchin2}. Nilmanifolds, i.e. compact quotients $\Gamma \backslash G$ of connected, simply connected, nilpotent Lie groups $G$ by a lattice $\Gamma$, provide a large class of compact $6$-manifolds admitting invariant closed $\text{SL}(3,\mathbb{C})$-structures \cite{ChiossiSalamon,ChiossiSwann, Conti, ContiTomassini, FinoRaffero}, where by invariant we mean induced by a left-invariant one on the nilpotent Lie group $G$. Note that nilmanifolds cannot admit invariant nearly-K\"ahler structures, since by \cite{Milnor} the Ricci tensor of a left-invariant metric on a non-abelian nilpotent Lie group always has a strictly negative direction and a strictly positive direction.
Since a nilmanifold is parallelizable, its Stiefel-Whitney numbers and Pontryagin numbers are all zero; hence, by well-known theorems of Thom and Wall, it bounds orientably, i.e. it is diffeomorphic to the boundary of a compact connected manifold $N$. It is then natural to ask whether, given a $6$-dimensional nilmanifold endowed with an invariant mean convex closed $\text{SL}(3,\mathbb{C})$-structure $\rho$, there exists on $N$ a closed $\text{G}_2$-structure whose boundary value is an \lq \lq enhancement'' of $\rho$ (see \cite[Section 3.1]{Donaldson} for more details). According to \cite{Goze, Magnin} there are 34 isomorphism classes of $6$-dimensional real nilpotent Lie algebras $\mathfrak{g}_i$, $i =1, \ldots, 34,$ listed in Table \ref{table1}. In this paper we classify $6$-dimensional nilpotent Lie algebras admitting mean convex closed $\text{SL}(3,\mathbb{C})$-structures: \begin{theoremA} \label{theoremA} Let $M= \Gamma \backslash G$ be a $6$-dimensional nilmanifold. Then $M$ admits invariant mean convex closed $\normalfont \text{SL}(3,\mathbb{C})$-structures if and only if the Lie algebra $\mathfrak{g}$ of $G$ is not isomorphic to any of the six Lie algebras $\mathfrak{g}_i$, $i=1,2,4,9,12,34$, as listed in Table \ref{table1}. \end{theoremA} Using the classification of half-flat nilpotent Lie algebras (see \cite{Conti}), we can then prove the following: \begin{theoremB}\label{half-flat mean convex} A nilmanifold $M= \Gamma \backslash G$ has an invariant mean convex half-flat structure if and only if the Lie algebra $\mathfrak{g}$ of $G$ is isomorphic to one of the Lie algebras $\mathfrak{g}_i$, $i=6,7,8,10,13,15,16,22,24,25,28,29,30,31,32,33$, as listed in Table \ref{table1}. \end{theoremB} Moreover, in Section \ref{Section6} we show that the mean convex condition is preserved by the Hitchin flow equations in some special cases.
More generally, since in our examples the property is preserved for small times, it would be interesting to determine if this is always the case. Given a closed $\text{SL}(3,\mathbb{C})$-structure $\rho$ on a $6$-manifold, another natural condition to study is the existence of a symplectic form $\Omega$ taming $J_{\rho}$, i.e. such that $\Omega(X, J_{\rho} X)>0$ for each non-zero vector field $X$. This is equivalent to the positivity in the standard sense of the $(1,1)$-component $\Omega^{1,1}$ of $\Omega$. We shall say that a closed $\text{SL}(3,\mathbb{C})$-structure $\rho$ is \emph{tamed} if there exists a symplectic form $\Omega$ such that $\Omega^{1,1} >0$. As shown in \cite{Donaldson} a mean convex $\text{SL}(3,\mathbb{C})$-structure on a compact $6$-manifold cannot be tamed by any symplectic form. If we remove the assumption of mean convexity, examples of tamed closed $\text{SL}(3,\mathbb{C})$-structures are given by symplectic half-flat struc\-tures $(\omega, \Psi)$, i.e., by half-flat $\text{SU}(3)$-structures $(\omega,\Psi)$ with $d\omega=0$. In this case $\rho=\text{Re}(\Psi)$ is tamed by the symplectic form $\omega$, since $\omega$ is of type $(1,1)$ with respect to $J_{\rho}$. In \cite{ContiTomassini}, nilmanifolds admitting invariant symplectic half-flat structures were classified. Later, this classification was generalized to solvmanifolds, i.e. to compact quotients $\Gamma \backslash G$ of connected, simply connected, solvable Lie groups $G$ by lattices $\Gamma$ (for more details, see \cite{SHFsolvmanifolds}). We prove the following result: \begin{theoremC}\label{tamed} Let $\Gamma \backslash G$ be a $6$-dimensional solvmanifold, not a torus. Then $\Gamma \backslash G$ admits an invariant tamed closed $\normalfont \text{SL}(3,\mathbb{C})$-structure if and only if the Lie algebra $\mathfrak{g}$ of $G$ has symplectic half-flat structures. 
If $\mathfrak{g}$ is nilpotent, then it is isomorphic to $\mathfrak{g}_{24}$ or $\mathfrak{g}_{31}$ as listed in Table \ref{table1}. If $\mathfrak{g}$ is solvable, then it is isomorphic to one among $\mathfrak{g}_{6,38}^0$, $\mathfrak{g}_{6,54}^{0,-1}$, $\mathfrak{g}_{6,118}^{0,-1,-1}$, $\mathfrak{e}(1,1) \oplus \mathfrak{e}(1,1)$, $ A_{5,7}^{-1,\beta,-\beta}\oplus \mathbb{R}$, $A_{5,17}^{0,0,-1} \oplus \mathbb{R}$, $ A_{5,17}^{\alpha,-\alpha,1} \oplus \mathbb{R}$, as listed in Table \ref{table3}. Moreover, all nine Lie algebras admit closed $\text{\normalfont SL}(3,\mathbb{C})$-structures tamed by a symplectic form $\Omega$ such that $d\Omega^{1,1}\neq 0$. \end{theoremC} Explicit examples of closed $\text{SL}(3,\mathbb{C})$-structures tamed by a symplectic form $\Omega$ such that $d\Omega^{1,1}\neq 0$ are provided. These in turn yield new examples of closed $\text{G}_2$-structures on the product $M\times S^1$, where $M=\Gamma \backslash G$ is a $6$-dimensional solvmanifold endowed with an invariant tamed closed $\text{SL}(3,\mathbb{C})$-structure. It would be interesting to see whether there exist compact manifolds which have tamed closed $\text{SL}(3,\mathbb{C})$-structures but do not admit any symplectic half-flat struc\-tures. The paper is organized as follows. In Section \ref{section2} we review the general theory of semi-positive $(p,p)$-forms focusing on the case $p=2$. In Section \ref{section3} we study the intrinsic torsion of closed $\text{SU}(3)$-structures in relation to the mean convex condition. In Section \ref{section4} we prove Theorem A. Starting from this result, in Section \ref{section5} we prove Theorem B. In Section \ref{Section6} we study the behaviour of mean convex half-flat $\text{SU}(3)$-structures under the Hitchin flow equations. Finally, in Section \ref{Section7}, we prove Theorem C.
{\it Acknowledgements.} The authors are supported by the Project PRIN 2017 ``Real and complex manifolds: Topology, Geometry and Holomorphic Dynamics'' and by G.N.S.A.G.A. of I.N.d.A.M. The authors would like to thank Simon Chiossi and Alberto Raffero for useful discussions and comments. The authors are also grateful to an anonymous referee for useful comments. \section{Preliminaries on semi-positive differential forms}\label{section2} In this section we review the definition and main results regarding semi-positive $(p,p)$-forms on complex vector spaces. For more details we refer for instance to \cite{Demailly,HarveyKnapp}. Let $V$ be a complex vector space of complex dimension $n$, with coordinates $(z_1,\ldots,z_n)$. Note that $V$ can be considered also as a real vector space of dimension $2n$ endowed with the complex structure $J$ given by the multiplication by $i$. Consider the exterior algebra \[ \Lambda V^*\otimes \mathbb{C} = \bigoplus \Lambda^{p,q}V^*, \] where $\Lambda^{p,q}V^*$ is a shorthand for $\Lambda^pV^*\otimes \Lambda^q \overline{V}^*$. A canonical orientation for $V$ is given by the $(n,n)$-form \begin{equation}\label{volume} \tau(z)\coloneqq \frac{1}{2^n} i dz_1\wedge d\overline{z}_1 \wedge \ldots \wedge i dz_n \wedge d\overline{z}_n= dx_1 \wedge dy_1 \wedge \ldots \wedge dx_n \wedge dy_n, \end{equation} where $z_j = x_j + i y_j$. We shall say that a $(p, p)$-form $\gamma$ is real if $\gamma=\overline{\gamma}$. One can introduce a natural notion of positivity for real $(p, p)$-forms. \begin{definition}\label{semipositive} A real $(p, p)$-form $\gamma \in \Lambda^{p,p} V^*$ is said to be \emph{semi-positive} (resp. \emph{positive}) if, for all $\alpha_j \in \Lambda^{1,0}V^*$, $1\leq j\leq n-p$, \[ \gamma \wedge i \alpha_1\wedge \overline{\alpha}_1\wedge \ldots \wedge i \alpha_{n-p}\wedge \overline{\alpha}_{n-p}=\lambda \tau(z), \] where $\lambda\geq 0$ (resp. $\lambda>0$ when $\alpha_1,\ldots, \alpha_{n-p}$ are linearly independent).
\end{definition} We shall focus on the case $n = 3$ and we shall provide equivalent definitions for semi-positive real forms of type $(1,1)$ and $(2,2)$. For a more general discussion we refer the reader to \cite[Ch. III]{Demailly}. \begin{proposition} \label{prop2} Let $\alpha=\frac{i}{2} \sum_{j,k} a_{j\overline{k}} \, dz_j\wedge d\overline{z}_k$ be a real $(1,1)$-form on $V$. Then the following are equivalent: \begin{itemize} \item[(i)] $\alpha$ is semi-positive (resp. positive); \item[(ii)] the Hermitian matrix of coefficients $(a_{j\overline{k}})$ is positive semi-definite (resp. positive definite); \item[(iii)] there exist coordinates $\left(w_1,\ldots w_n\right)$ on $V$ such that \[ \alpha=\frac{i}{2}\sum_{k=1}^n \tilde a_{k\overline{k}} \, d w_k \wedge d\overline{w}_k, \] with $\tilde a_{k\overline{k}} \geq 0$ (resp. $\tilde a_{k\overline{k}}>0$), $\forall k = 1, \ldots n$. \end{itemize} \end{proposition} \begin{proof} \mbox{} \\ (i) $\Longleftrightarrow$ (ii) follows from \cite[Ch. III, Corollary 1.7]{Demailly} and its straightforward generalization for the case of positive $(1,1)$-forms; \\ (ii) $\Longleftrightarrow$ (iii) is achieved by diagonalizing the Hermitian matrix of coefficients $(a_{j\overline{k}})$. \\ \end{proof} The next result follows from \cite[Ch. III, Corollary 1.9, Proposition 1.11]{Demailly}. \begin{proposition}\label{product} If $\alpha_1, \alpha_2$ are semi-positive real $(1,1)$-forms, then $\alpha_1 \wedge \alpha_2$ is semi-positive. \end{proposition} Now, for $n =3$, we want to characterize the semi-positivity of real $(2,2)$-forms. Let $\gamma$ be a real $(2,2)$-form on $V$. We can write \begin{equation}\label{gamma} \gamma=-\frac{1}{4} \sum_{\substack{i<k \\j<l}} \gamma_{i\overline{j}k\overline{l}} dz_i\wedge d\overline{z}_j \wedge dz_k\wedge d\overline{z}_l, \end{equation} with respect to some coordinates $(z_1,z_2,z_3)$ on $V$. 
To $\gamma$ we can associate the real $(1,1)$-form $\beta$, given by \[ \beta= \frac{i}{2}\sum_{m,n}\beta_{m\overline{n}}dz_m\wedge d\overline{z}_n, \] where \begin{equation}\label{betacoef} \beta_{m\overline{n}}\coloneqq \frac{1}{4}\sum_{i,j,k,l} \gamma_{i\overline{j}k\overline{l}} \epsilon_{ikm}\epsilon_{jln}. \end{equation} Here $\epsilon_{abc}$ is the Levi-Civita symbol, with $\epsilon_{123}=1$. Using a change of basis $dz_i=\sum_p A^p_i dw_p$, the matrix $(\beta_{m\overline{n}})$ changes by congruence via the matrix $\tilde{A}= \text{det}(A) (A^t)^{-1}$, where $A=(A^p_i)$. Consequently, the semi-positivity of $\beta$ does not depend on the choice of coordinates on $V$. Notice that the matrix $(\beta_{m\overline{n}})$ is Hermitian, since $\gamma=\overline{\gamma}$ implies $\gamma_{i\overline{j}k\overline{l}}=\overline{\gamma_{j\overline{i}l\overline{k}}}$. \begin{proposition}\label{prop3} Let $\gamma\neq 0$ be a real $(2,2)$-form on $V$. Then the following are equivalent: \begin{itemize} \item[(i)] $\gamma$ is semi-positive, \item[(ii)] $\gamma\wedge \alpha >0$ for every positive real $(1,1)$-form $\alpha$, i.e. $\gamma\wedge \alpha=\lambda \tau(z)$ where $\lambda>0$, \item[(iii)] the associated $(1,1)$-form $\beta$ is semi-positive. \end{itemize} \end{proposition} \begin{proof} \mbox{} \\ (i) $\Longleftrightarrow$ (iii) Let $\gamma$ be a real $(2,2)$-form on $V$. Then $\gamma$ can be written as in \ref{gamma} with respect to a basis $(dz_1,dz_2,dz_3)$ of $\Lambda^{1,0}V^*$. By Definition \ref{semipositive}, $\gamma$ is semi-positive if and only if for all $\eta\in \Lambda^{1,0}V^*$ one has $\dfrac{i}{2}\gamma \wedge \eta \wedge \overline{\eta}\geq 0$. Set $\eta=\sum_m \eta_m dz_m$; then \[ \dfrac{i}{2}\gamma \wedge \eta\wedge \overline{\eta} = \sum_{m,n}\beta_{m\overline{n}}\eta_m\overline{\eta}_n \tau(z), \] where the coefficients $\beta_{m\overline{n}}$ are defined in \ref{betacoef}.
Therefore, since $\eta$ is arbitrary, $\gamma$ is semi-positive if and only if the matrix $(\beta_{m\overline{n}})$ is positive semi-definite.\\ (i) $\implies$ (ii) Let $\alpha$ be a positive $(1,1)$-form on $V$, then there exists a basis $\left(dz_1,dz_2,dz_3\right)$ of $\Lambda^{1,0}V^*$ such that \[ \alpha=\dfrac{i}{2}\sum_{k}a_{k\overline{k}}dz_k\wedge d\overline{z}_k \] with $a_{k\overline{k}}>0$. Let $\gamma$ be a semi-positive $(2,2)$-form on $V$. We can write \[ \gamma=-\frac{1}{4} \sum_{\substack{i<k \\j<l}} \gamma_{i\overline{j}k\overline{l}} dz_i\wedge d\overline{z}_j \wedge dz_k\wedge d\overline{z}_l. \] Then \[ \gamma\wedge \alpha=\sum_{r}a_{r\overline{r}}\beta_{r\overline{r}}\tau(z). \] Since $\gamma$ is semi-positive, by (iii) we have that $\beta_{r\overline{r}}\geq 0$ with at least one strictly positive. Therefore, since $a_{r\overline{r}}>0$, for each $r$, the claim follows. \\ (ii) $\implies$ (i) Let $\left(\alpha_1,\alpha_2,\alpha_3\right)$ be a basis of $\Lambda^{1,0}V^*$. We define \[ \alpha_\epsilon \coloneqq \dfrac{i}{2}(\alpha_1\wedge\overline{\alpha}_1+\epsilon(\alpha_2\wedge\overline{\alpha}_2+\alpha_3\wedge\overline{\alpha}_3)). \] We notice that, for any $\epsilon>0$, $\alpha_\epsilon$ is a positive $(1,1)$-form. Then, by hypothesis, $\gamma\wedge \alpha_\epsilon>0$. The claim follows by continuity since $\dfrac{i}{2}\gamma\wedge\alpha_1\wedge \overline{\alpha}_1=\lim_{\epsilon\to 0}(\gamma\wedge \alpha_\epsilon) \geq 0$. \end{proof} As shown in \cite[Theorem 1.2]{HarveyKnapp}, a real $(2,2)$-form $\gamma$ is always diagonalizable, i.e. there exist coordinates $(w_1,w_2,w_3)$ of $V$ such that \[ \gamma=-\frac{1}{4} \sum_{\substack{i<k}} \gamma_{i\overline{i}k\overline{k}} dw_i\wedge d\overline{w}_i\wedge dw_k\wedge d\overline{w}_k.\] By Proposition \ref{prop3}, $\gamma$ is semi-positive if and only if $ \gamma_{i\overline{i}k\overline{k}} \geq 0$, for every $i<k$. 
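Criterion (iii) of Proposition \ref{prop3} is easy to test numerically. The sketch below builds the coefficient tensor of a diagonalized $(2,2)$-form, computes the matrix $(\beta_{m\overline{n}})$ from \ref{betacoef}, and checks positive semi-definiteness; the coefficient values are arbitrary illustrations, and indices are $0$-based in the code.

```python
import numpy as np
from itertools import permutations

def levi_civita3():
    """The 3-index Levi-Civita symbol (0-indexed)."""
    eps = np.zeros((3, 3, 3))
    for p in permutations(range(3)):
        sign = 1
        for a in range(3):
            for b in range(a + 1, 3):
                if p[a] > p[b]:
                    sign = -sign
        eps[p] = sign
    return eps

def beta_from_gamma(gamma):
    """Matrix (beta_{mn}) associated to a (2,2)-form:
    beta_{mn} = (1/4) sum_{ijkl} gamma_{ijkl} eps_{ikm} eps_{jln}."""
    eps = levi_civita3()
    return 0.25 * np.einsum('ijkl,ikm,jln->mn', gamma, eps, eps)

def diagonal_gamma(c12, c13, c23):
    """Coefficient tensor of the diagonalized form
    gamma = -(1/4) sum_{i<k} c_ik dw_i ^ dwbar_i ^ dw_k ^ dwbar_k,
    extended antisymmetrically in (i,k) and in (j,l)."""
    g = np.zeros((3, 3, 3, 3))
    for (i, k), c in {(0, 1): c12, (0, 2): c13, (1, 2): c23}.items():
        g[i, i, k, k] = g[k, k, i, i] = c
        g[i, k, k, i] = g[k, i, i, k] = -c
    return g
```

For $(c_{12},c_{13},c_{23})=(1,2,3)$ this yields $\beta=\mathrm{diag}(3,2,1)$, which is positive semi-definite, while flipping the sign of one coefficient produces a negative eigenvalue, in agreement with the diagonal criterion above.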
In particular, the diagonal matrix $(\beta_{m\overline{n}})$ associated to $\gamma$ in these coordinates is positive semi-definite. Moreover, $\gamma$ is positive if and only if $ \gamma_{i\overline{i}k\overline{k}} > 0$, for every $i<k$. \begin{remark}\cite[formula (4.8)]{Michelsohn}\label{Michelsohn} A real $(2,2)$-form $\gamma$ on $V$ is positive if and only if $\gamma=\alpha^{2},$ where $\alpha$ is a positive $(1,1)$-form. \end{remark} \section{Mean convexity and intrinsic torsion of $\text{SU}(3)$-structures} \label{section3} In this section we study the mean convex property in the context of closed $\text{SU}(3)$-structures and provide necessary and sufficient conditions in terms of the intrinsic torsion of the $\text{SU}(3)$-structure. An $\text{SL}(3,\mathbb{C})$-structure on a $6$-manifold $M$ is a reduction to $\text{SL}(3,\mathbb{C})$ of the frame bundle of $M$ which is given by a definite real $3$-form $\rho$, i.e. by a stable $3$-form inducing an almost complex structure $J_{\rho}$. We recall that a 3-form $\rho$ on a real $6$-dimensional space $V$ is stable if its orbit under the action of $\text{GL}(V )$ is open. If we fix a volume form $\nu\in\Lambda^6V^*$ and denote by \[A:\Lambda^5 V^* \to V\otimes\Lambda^6 V^* \] the canonical isomorphism induced by the wedge product $ \wedge:V^*\otimes \Lambda^5V^* \to \Lambda^6 V^*, $ we can consider the map $$ K_{\rho} : V \to V\otimes\Lambda^6 V^*, \quad v \mapsto A((\iota_v \rho)\wedge \rho). $$ A $3$-form $\rho$ on $V$ is stable if and only if $\lambda(\rho) =\frac{1}{6}\, \text{Tr}(K^2_{\rho}) \neq 0$ (see \cite{Hitchin,Reichel} for further details). When $\lambda(\rho)<0$, the $3$-form $\rho$ induces an almost complex structure \[ J_{\rho} := -\frac{1}{\sqrt{-\lambda(\rho)}}K_{\rho} \] and we shall say that $\rho$ is \emph{definite}. A simple computation shows that $J_{\rho}$ does not change if $\rho$ is rescaled by a non-zero real constant, i.e., $J_{\rho}=J_{s \rho}$ for every $s\in \mathbb{R}-\{0\}$. 
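The invariants just introduced can be evaluated numerically on an explicit form. The following sketch implements the wedge product, the contraction $\iota_v$, and the map $K_{\rho}$ on $\mathbb{R}^6$, and applies them to the $3$-form $\rho = e^{135}-e^{146}-e^{236}-e^{245}$; the dictionary encoding of forms is our own bookkeeping device, not notation from the text.

```python
import numpy as np

# A k-form on R^6 is stored as {sorted index tuple: coefficient}.

def perm_sign(seq):
    """Sign of the permutation sorting `seq` (assumed repetition-free)."""
    sign = 1
    for a in range(len(seq)):
        for b in range(a + 1, len(seq)):
            if seq[a] > seq[b]:
                sign = -sign
    return sign

def wedge(a, b):
    """Wedge product of two forms in the dictionary representation."""
    out = {}
    for I, x in a.items():
        for Jt, y in b.items():
            if set(I) & set(Jt):
                continue  # repeated index: the term vanishes
            idx = I + Jt
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0.0) + perm_sign(idx) * x * y
    return out

def contract(v, form):
    """Interior product of the basis vector e_v with a form."""
    out = {}
    for I, x in form.items():
        if v in I:
            p = I.index(v)
            key = I[:p] + I[p + 1:]
            out[key] = out.get(key, 0.0) + (-1) ** p * x
    return out

# the definite 3-form rho = e^135 - e^146 - e^236 - e^245
rho = {(1, 3, 5): 1.0, (1, 4, 6): -1.0, (2, 3, 6): -1.0, (2, 4, 5): -1.0}

# K_rho in the basis e_1,...,e_6:  K^i_j e^123456 = e^i ^ (i_{e_j} rho) ^ rho
K = np.zeros((6, 6))
top = (1, 2, 3, 4, 5, 6)
for j in range(1, 7):
    five_form = wedge(contract(j, rho), rho)
    for i in range(1, 7):
        K[i - 1, j - 1] = wedge({(i,): 1.0}, five_form).get(top, 0.0)

lam = np.trace(K @ K) / 6.0   # Hitchin's invariant lambda(rho)
J = -K / np.sqrt(-lam)        # the induced almost complex structure
```

One finds $\lambda(\rho)=-4<0$, so $\rho$ is definite, and $J$ squares to $-\operatorname{Id}$ with $J e_1=e_2$, $J e_3=e_4$, $J e_5=e_6$.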
Moreover, defining $\hat \rho \coloneqq J_{\rho}\rho$, we have that $ \rho + i \hat \rho$ is a complex $(3,0)$-form with respect to $J_{\rho}$. We shall say that an $\text{SL}(3,\mathbb{C})$-structure $\rho$ is closed if $d \rho =0$. According to \cite{Donaldson}, $d\hat \rho$ is a real $(2,2)$-form and so we can introduce the following \begin{definition}\label{meanconvex} Let $\rho$ be a closed $\text{SL}(3,\mathbb{C})$-structure on $M$. We shall say that $\rho$ is \emph{mean convex} (resp. strictly mean convex) if $d\hat \rho$, pointwise, is a non-zero semi-positive (resp. positive) $(2,2)$-form. \end{definition} Given an $\text{SL}(3,\mathbb{C})$-structure $\rho$ and a non-degenerate positive $(1,1)$-form $\omega$ on a $6$-manifold $M$ such that $\rho \wedge \hat \rho=\frac{2}{3}\omega^3$, the pair $(\omega, \Psi)$, where $\Psi=\rho+i\hat{\rho}$, defines an $\text{SU}(3)$-structure and the associated almost $J_{\rho}$-Hermitian metric $g$ is given by $g (\cdot , \cdot) \coloneqq \omega(\cdot,J_{\rho}\cdot)$. Since $\Psi$ is completely determined by its real part $\rho$, we shall denote an $\text{SU}(3)$-structure simply by the pair $(\omega,\rho)$. In this case, at any point $p\in M$, one can always find a coframe $\left(f^1,\ldots,f^6\right)$, called an \emph{adapted basis} for the $\text{SU}(3)$-structure $(\omega, \rho)$, such that \begin{equation}\label{frameadattato} \omega=f^{12}+f^{34}+f^{56}, \quad \rho=f^{135}-f^{146}-f^{236}-f^{245}. \end{equation} Here $f^{ij\cdots k}$ stands for the wedge product $f^i\wedge f^j\wedge \cdots \wedge f^k$. We shall say that the $\text{SU}(3)$-structure $(\omega, \rho)$ is closed if $d \rho=0$ and in a similar way we can introduce the following \begin{definition} A closed $\text{SU}(3)$-structure $(\omega,\rho)$ on a $6$-manifold $M$ is (strictly) mean convex if the $\text{SL}(3,\mathbb{C})$-structure $\rho$ is (strictly) mean convex.
\end{definition} The intrinsic torsion of the $\text{SU}(3)$-structure $(\omega, \rho)$ can be identified with the pair $(\nabla \omega, \nabla \Psi)$, where $\nabla$ is the Levi-Civita connection of $g$, and it is a section of the vector bundle $T^*M\otimes \mathfrak{su}(3)^{\perp}$, where $\mathfrak{su}(3)^{\perp} \subset \mathfrak{so}(6)$ is the orthogonal complement of $\mathfrak{su}(3)$ with respect to the Killing Cartan form $\mathcal{B}$ of $\mathfrak{so}(6)$. Moreover, by \cite[Theorem 1.1]{ChiossiSalamon} the intrinsic torsion of $(\omega, \rho)$ is completely determined by $d\omega$, $d \rho$ and $d \hat \rho$. Indeed, there exist unique differential forms $\nu_0, \pi_0 \in C^{\infty}(M)$, $\nu_1,\pi_1 \in \Lambda^1(M)$, $\nu_2,\pi_2\in [\Lambda^{1,1}_0 M ], \nu_3 \in \llbracket \Lambda^{2,1}_0 M \rrbracket $ such that \begin{equation}\label{torsionforms} \begin{aligned} d\omega &=-\frac{3}{2} \nu_0 \, \rho + \frac{3}{2}\pi_0 \, \hat \rho + \nu_1 \wedge \omega + \nu_3, \\ d\rho &=\pi_0 \, \omega^2 + \pi_1 \wedge \rho - \pi_2\wedge \omega, \\ d\hat \rho &=\nu_0\,\omega^2-\nu_2\wedge \omega +J\pi_1\wedge \rho, \end{aligned} \end{equation} where $[\Lambda^{1,1}_0 M ]\coloneqq \{ \alpha \in [\Lambda^{1,1} M ] ~ | ~ \alpha\wedge \omega^2=0 \}$ is the space of primitive real $(1,1)$-forms and $\llbracket \Lambda^{2,1}_0 M \rrbracket \coloneqq \{ \eta \in \llbracket \Lambda^{2,1} M \rrbracket ~ | ~ \eta \wedge \omega=0 \}$ is the space of primitive real $(2,1)+(1,2)$-forms. The forms $\nu_i, \pi_j$ are called \emph{torsion forms} of the $\text{SU}(3)$-structure and they completely determine its intrinsic torsion, which vanishes if and only if all the torsion forms vanish identically. If $\rho$ is closed, as a consequence of \ref{torsionforms}, we have $d\hat \rho = \theta \wedge \omega,$ where $\theta$ is the $(1,1)$-form defined by $\theta \coloneqq\nu_0\,\omega -\nu_2$. 
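For clarity, we spell out the last step: the three summands in the expression of $d\rho$ in \ref{torsionforms} lie in inequivalent $\text{SU}(3)$-modules, so $d\rho=0$ forces $\pi_0=0$, $\pi_1=0$ and $\pi_2=0$, and the third equation in \ref{torsionforms} reduces to \[ d\hat \rho=\nu_0\,\omega^2-\nu_2\wedge \omega=(\nu_0\,\omega -\nu_2)\wedge \omega = \theta \wedge \omega. \]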
We recall that, given a real $(1,1)$-form $\alpha$, the trace $\normalfont \operatorname{Tr} (\alpha)$ of $\alpha$ is defined by $3\alpha\wedge\omega^{2}=\normalfont \operatorname{Tr} (\alpha)\omega^3.$ Then, in terms of $\nu_0$ and the $(1,1)$-form $\theta$, we can prove the following \begin{proposition}\label{prop5} Let $(\omega,\rho)$ be a closed $\text{\normalfont SU}(3)$-structure on $M$. Then \begin{itemize} \item[(i)] if $(\omega,\rho)$ is mean convex, then the torsion form $\nu_0$ is strictly positive and the $(1,1)$-form $\theta$ is not negative (semi-)definite; moreover, its trace $\normalfont \operatorname{Tr}(\theta)$ is strictly positive; \item[(ii)] if $\theta$ is semi-positive, then the $\text{\normalfont SU}(3)$-structure is mean convex. \end{itemize} \end{proposition} \begin{proof} Let us assume that $(\omega,\rho)$ is a mean convex closed $\text{SU}(3)$-structure on $M$. By \ref{torsionforms} we have $d\hat \rho=\theta \wedge \omega$. Now, Proposition \ref{prop3} implies $d\hat \rho \wedge \alpha>0 $ for every positive real $(1,1)$-form $\alpha$. Then (i) follows by choosing $\alpha=\omega$; indeed $d\hat \rho \wedge \omega=\nu_0 \omega^3$, since $\nu_2\in [\Lambda^{1,1}_0 M]$. In particular $\text{Tr}(\theta)=3 \nu_0>0$. (ii) follows from Proposition \ref{product}. \end{proof} A closed $\text{SU}(3)$-structure $(\omega,\rho)$ is called \emph{half-flat} if $d \omega^2 =0$ and we shall refer to it simply as a half-flat structure. Half-flat structures are strictly related to torsion free $\text{G}_2$-structures. We recall that a $\text{G}_2$-structure on a $7$-manifold $N$ is characterized by the existence of a $3$-form $\varphi$ inducing a Riemannian metric $g_{\varphi}$ and a volume form $dV_{\varphi}$ given by \[ g_{\varphi}(X,Y)dV_{\varphi}=\frac{1}{6}\iota_X \varphi \wedge \iota_Y \varphi \wedge \varphi, \quad X, Y \in \Gamma (TN). \] By \cite{FG}, the $\text{G}_2$-structure $\varphi$ is \emph{torsion free}, i.e.
$\varphi$ is parallel with respect to the Levi-Civita connection of $g_{\varphi}$, if and only if $\varphi$ is closed and co-closed, or equivalently if the holonomy group $\text{Hol}(g_\varphi)$ is contained in $ \text{G}_2$. A torsion free $\text{G}_2$-structure $\varphi$ on $N$ induces on each oriented hypersurface $\iota :M\hookrightarrow N$ a natural half-flat structure $(\omega,\rho)$ given by \[ \rho=\iota^*\varphi, \quad \omega^2=2 \, \iota^*(*_{\varphi} \varphi). \] Conversely, in \cite{Hitchin2}, the so-called Hitchin flow equations \begin{equation}\label{HitchinFlow} \begin{cases} \frac{\partial}{\partial t} \rho(t)=d\omega(t), \\ \frac{\partial}{\partial t} \omega(t)\wedge \omega(t)=-d\hat{\rho}(t), \end{cases} \end{equation} were introduced, and it was proved that every compact real analytic half-flat manifold $(M, \omega,\rho)$ can be embedded isometrically as a hypersurface in a $7$-manifold $N$ with a torsion free $\text{G}_2$-structure. Moreover, the intrinsic torsion of the half-flat structure can be identified with the second fundamental form $B\in \Gamma( S^2 T^*M)$ of $M$ with respect to a fixed unit normal vector field $\xi$. As in \cite{Donaldson}, with respect to $J_{\rho}$, we can write $B=B_{1,1}+B_C$, where $B_{1,1}$ is the real part of a Hermitian form and $B_C$ is the real part of a complex quadratic form. If we denote by $\beta_{1,1}=B_{1,1}(J_{\rho}\cdot,\cdot)$ the corresponding $(1,1)$-form on $M$, we have $\beta_{1,1}\wedge\omega=\frac{1}{2}d\hat{\rho}$, from which it follows that, if $(\omega,\rho)$ is mean convex, then the mean curvature $\mu$, given explicitly by $\frac{1}{4}\mu \rho\wedge \hat{\rho} =\frac{1}{2} d\hat{\rho}\wedge\omega$, is positive with respect to the normal direction (for more details see \cite[Prop. 1]{Donaldson}). Moreover, since the wedge product with $\omega$ defines an injective map on $2$-forms, comparing this with \ref{torsionforms} yields $\theta=2\beta_{1,1}$.
Then, by Proposition \ref{prop5}, if $B_{1,1}$ defines a positive semi-definite Hermitian product, then the half-flat structure $(\omega,\rho)$ is mean convex. \smallskip Special types of half-flat structures $(\omega,\rho)$ are called \emph{coupled}, when $d\omega=-\frac{3}{2}\nu_0 \,\rho$, and \emph{double}, when $ d\hat \rho=\nu_0 \,\omega^2. $ Notice that, by Proposition \ref{prop5}, double structures $(\omega, \rho)$ are trivially mean convex as long as $\nu_0>0$. However, it is straightforward to check that, if $(\omega,\rho)$ is a double structure such that $ \nu_0<0$, then $(\omega, -\rho )$ is mean convex. In \cite[Theorem 4.11]{ChiossiSwann}, a classification of $6$-dimensional nilpotent Lie algebras endowed with a double structure was given. Other examples of double structures on $S^3\times S^3$ were found in \cite{MadsenSalamon,schulte}. For a general Lie algebra we can show the following \begin{proposition} If a Lie algebra $\mathfrak{g}$ has a closed strictly mean convex $\normalfont \text{SL}(3,\mathbb{C})$-structure, then $\mathfrak{g}$ admits a double structure. \end{proposition} \begin{proof} Let $\rho$ be a closed strictly mean convex $\text{SL}(3,\mathbb{C})$-structure on $\mathfrak{g}$ and denote $\hat{\rho}= J_{\rho}\rho$ as usual. Then $d\hat{\rho}$ is a positive $(2,2)$-form and, as shown in \cite{Michelsohn} (see Remark \ref{Michelsohn}), there exists a positive $(1,1)$-form $\alpha$ such that $d\hat{\rho}=\alpha^2$. Moreover, since $\alpha$ is positive with respect to $J_{\rho}$, $\alpha^3$ is a positive multiple of the volume form $\rho\wedge \hat{\rho}$. Since $J_{\rho}$ does not change for a non-zero rescaling of $\rho$, this implies that there exists $b\neq 0$ such that $(b \rho ,\alpha) $ is a double structure on $\mathfrak{g}$. \end{proof} As a consequence, the classification of nilpotent Lie algebras admitting closed strictly mean convex $\text{SL}(3,\mathbb{C})$-structures reduces to Theorem 4.11 in \cite{ChiossiSwann}. 
Therefore, in the next two sections we weaken the condition asking for the existence of closed (non-strictly) mean convex $\text{SL}(3,\mathbb{C})$-structures. \section{Proof of Theorem A}\label{section4} We recall that a \emph{nilmanifold} $M= \Gamma \backslash G$ is a compact quotient of a connected, simply connected, nilpotent Lie group $G$ by a lattice $\Gamma$. We shall say that an $ \text{SL}(3,\mathbb{C})$-structure $\rho$ (resp. $\text{SU}(3)$-structure $(\omega,\rho)$) is \emph{invariant} if it is induced by a left-invariant one on the nilpotent Lie group $G$. Therefore, the study of these types of structures is equivalent to the study of $ \text{SL}(3,\mathbb{C})$-structures (resp. $\text{SU}(3)$-structures) on the Lie algebra $\mathfrak{g}$ of $G$ and we can work at the level of nilpotent Lie algebras. Six-dimensional nilpotent Lie algebras have been classified in \cite{Goze, Magnin}. Up to isomorphism, there are $34$ of them, including the abelian algebra (see Table \ref{table1} for the list). Using this classification we can prove Theorem A. \begin{proof}[Proof of Theorem A] Let $\mathfrak{g}$ be the Lie algebra of $G$. Every invariant $\normalfont \text{SL}(3,\mathbb{C})$-structure on $M$ is determined by an $\normalfont \text{SL}(3,\mathbb{C})$-structure on $\mathfrak{g}$ and vice versa. First note that the possibility that $\mathfrak{g}$ is abelian is precluded by Definition \ref{meanconvex}. Then, in order to prove the first part of the theorem, we first show the non-existence result for the five Lie algebras $\mathfrak{g}_{1}$, $\mathfrak{g}_{2}$, $\mathfrak{g}_{4}$, $\mathfrak{g}_{9}$ and $\mathfrak{g}_{12}$. For any of these Lie algebras, let us consider a generic closed 3-form \[ \rho=\sum_{i<j<k}p_{ijk}\, e^{ijk}, \quad p_{ijk}\in \mathbb{R}. \] Let us assume that $\rho$ is definite, i.e.\ stable with $\lambda(\rho)<0$. Then $\rho$ induces an almost complex structure $J_{\rho}$ and we may ask if the induced $(2,2)$-form $d\hat \rho$ is semi-positive.
Notice that the $1$-forms $\zeta^k=e^k-iJ_{\rho}e^k$, for $k=1,\ldots,6$, generate the space $\Lambda^{1,0}\mathfrak{g}_i^*$ of $(1,0)$-forms with respect to $J_{\rho}$ on $\frak g_i$, $i = 1,2,4,9,12.$ Here we are using the convention $J_{\rho}\alpha(v)=\alpha(J_{\rho}v)$ for any $\alpha\in \mathfrak{g}^*$, $v\in\mathfrak{g}$. So, for any closed definite $3$-form $\rho$, we extract a basis $(\xi^1,\xi^2,\xi^3)$ for $\Lambda^{1,0}\mathfrak{g}_i^*$, where $\xi^j=\zeta^{k_j}$ for some $k_j\in \{1,\ldots,6\}$ and $j=1,2,3$. Then, $(\xi^1,\xi^2,\xi^3, \overline{\xi}^1, \overline{\xi}^2, \overline{\xi}^3)$ is a complex basis for $\mathfrak{g}_i^*\otimes \mathbb{C}$ and we can write $d\hat{\rho}$ in this new basis as \[ d\hat \rho=-\frac{1}{4}\sum_{\substack{i<k\\j<l}}\gamma_{i\overline{j}k\overline{l}} \, \xi^i\overline{\xi}^j\xi^k\overline{\xi}^l, \] for some $\gamma_{i\overline{j}k\overline{l}}\in \mathbb{C}$. We note that the real one-forms \[ e^{k_j}=\frac{1}{2}(\xi^j+\overline{\xi}^j), \quad J_{\rho}(e^{k_j})=\frac{i}{2}(\xi^j-\overline{\xi}^j), \quad j=1,2,3, \] define a new real basis for $\mathfrak{g}_i^*.$ Now, following Section \ref{section2}, we consider the real $(1,1)$-form $\beta$ associated to $d\hat \rho$, given explicitly by \begin{equation} \label{exprbeta} \beta=\frac{i}{2}\sum_{m,n} \beta_{m\overline{n}} \, \xi^m\overline{\xi}^n, \quad \beta_{m\overline{n}}=\frac{1}{4}\sum_{i,j,k,l} \gamma_{i\overline{j}k\overline{l}}\epsilon_{ikm}\epsilon_{jln}, \end{equation} and we compute the expression of $\beta_{m\overline{n}}$ in terms of $p_{ijk}$. 
Therefore, $d\hat{\rho}$ is semi-positive (non-zero) if and only if the Hermitian matrix $(\beta_{m\overline{n}})$ is positive semi-definite, which occurs if and only if \begin{equation}\label{criterio} \begin{dcases} \beta_{k\overline{k}}\geq 0, &k=1,2,3, \\ \beta_{r\overline{r}}\beta_{k\overline{k}}-\lvert \beta_{r\overline{k}}\rvert^2\geq 0, &r<k, \, \,r,k=1,2,3, \\ \det(\beta_{m\overline{n}})\geq 0, \end{dcases} \end{equation} with $(\beta_{m\overline{n}})$ different from the zero matrix. Then it can be shown that, for every closed $3$-form $\rho$ such that $\lambda (\rho)<0$, the system \ref{criterio} in the variables $p_{ijk}$ has no solutions. Let us see this explicitly for $\mathfrak{g}_i$, $i=1,2$. By a direct computation, for the generic closed $3$-form $\rho$ on $\mathfrak{g}_1$ we have $$ \lambda(\rho)=\left[(p_{145}+2p_{235})p_{146}+p_{145}p_{236}+p_{245}^2\right]^2 +4p_{146}p_{236}\left(p_{126}-p_{145}p_{235}+p_{135}p_{245}\right) $$ and, for the generic closed $3$-form $\rho$ on $\mathfrak{g}_2$, we get $$ \lambda(\rho)=\left(p_{245}^2+p_{145}p_{236}+2p_{146}p_{235}\right)^2+4p_{146}p_{236}\left(-p_{145}p_{235}+p_{135}p_{245}+p_{125}p_{146} \right). $$ Notice that, if at least one of $p_{146}$ and $p_{236}$ vanishes, then $\lambda(\rho)\geq 0$. So let us assume that both $p_{146}$ and $p_{236}$ are non-zero. Then $(e^1,J_{\rho} e^1,e^2,J_{\rho} e^2,e^5,J_{\rho} e^5)$ defines a basis of $\mathfrak{g}_i^*$, for $i=1,2$, hence $(\xi^1=e^1-iJ_{\rho} e^1,\xi^2=e^2-iJ_{\rho} e^2,\xi^3=e^5-iJ_{\rho} e^5)$ is a basis of $(1,0)$-forms on $\mathfrak{g}_i$, $i=1,2$. By a direct computation, it can be shown that in these cases the matrix coefficient $\beta_{1\overline{1}}$ vanishes and so $ \beta_{1\overline{1}}\beta_{3\overline{3}}-\lvert \beta_{1\overline{3}}\rvert^2=-\lvert \beta_{1\overline{3}}\rvert^2 \leq 0$, but $\beta_{1\overline{3}}=0$ implies $\lambda(\rho)=0$, which is a contradiction.
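In the computations above, and in what follows, we repeatedly use the classical fact that a Hermitian matrix is positive semi-definite if and only if all of its principal minors (not only the leading ones) are non-negative; the system \ref{criterio} collects precisely the principal minors of the $3\times 3$ matrix $(\beta_{m\overline{n}})$. In particular, if a diagonal entry $\beta_{k\overline{k}}$ vanishes, then positive semi-definiteness forces
\[
\beta_{k\overline{k}}\beta_{l\overline{l}}-\lvert \beta_{k\overline{l}}\rvert^2=-\lvert \beta_{k\overline{l}}\rvert^2\geq 0,
\]
that is, $\beta_{k\overline{l}}=0$ for every $l$, which is the mechanism exploited for $\mathfrak{g}_1$ and $\mathfrak{g}_2$.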
By a very similar discussion, we may discard cases $\mathfrak{g}_4$, $\mathfrak{g}_9$ and $\mathfrak{g}_{12}$ as well. In order to prove the second part of the theorem, we construct an explicit mean convex closed $\text{SU}(3)$-structure $(\omega, \rho)$ on the remaining nilpotent Lie algebras (see Table \ref{table2}). \end{proof} \section{Proof of Theorem B}\label{section5} In \cite{Conti}, a classification up to isomorphism of $6$-dimensional real nilpotent Lie algebras admitting half-flat structures was given. The non-abelian ones are twenty-three and they are listed in Table \ref{table1}. So, in order to classify nilpotent Lie algebras admitting a mean convex half-flat structure, we restrict our attention to this list. An explicit example of mean convex half-flat structure on $\mathfrak{g}_i$, $i=6,7,8,10,13,15,16,22, 24,$ $25, 28,$ $29,30,31,32,33$, is already given in Table \ref{table2}. Therefore, we only need to prove non-existence of mean convex half-flat structures on the remaining Lie algebras $\mathfrak{g}_i$, $i=4,9,11,12,14,21,27$. By Theorem \hyperref[theoremA]{A}, we may immediately exclude the Lie algebras $\mathfrak{g}_i$, $i=4,9,12$, since mean convex half-flat structures are in particular mean convex closed $\text{SL}(3,\mathbb{C})$-structures. For the remaining Lie algebras $\mathfrak{g}_i$, $i=11,14,21,27$, whose first Betti number is $3$ or $4$, we first collect some necessary conditions for the existence of mean convex closed $\text{SU}(3)$-structures $(\omega, \rho)$ in terms of a filtration of $J_{\rho}$-invariant subspaces $U_i$ of $\mathfrak{g}^*$, and then, by working in an $\text{SU}(3)$-adapted basis, we exhibit further obstructions. Let us start by defining the filtration $\{ U_i \}$ as in \cite{ChiossiSwann}. Let $(\omega,\rho)$ be an $\text{SU}(3)$-structure on a $6$-dimensional nilpotent Lie algebra $\frak g$ and let $(g,J_{\rho})$ be the induced almost Hermitian structure on $\mathfrak{g}$.
By nilpotency there exists a basis $\left(\alpha^1,\ldots,\alpha^6\right)$ of $\mathfrak{g}^*$ such that, if we denote $V_j\coloneqq \left< \alpha^1,\ldots,\alpha^j\right>$, then $dV_j\subset \Lambda^2V_{j-1}$ and, by construction, $0\subset V_1\subset\ldots \subset V_5\subset V_6=\mathfrak{g}^*$. We notice that the basis $(e^i)$ whose corresponding structure equations are given in Table \ref{table1} satisfies the previous conditions and $V_i=\ker d$ when $b_1(\mathfrak{g})=i$. In the following, we consider $V_i= \left< e^1,\ldots,e^i\right>$. As in \cite{ChiossiSwann}, let $U_j\coloneqq V_j\cap J_{\rho} V_j$ be the maximal $J_{\rho}$-invariant subspace of $V_j$ for each $j$. Then, since $J_{\rho}$ is an automorphism of the vector space $\mathfrak{g}$, a simple dimensional computation shows that $\dim_{\mathbb{R}}U_2$, $\dim_{\mathbb{R}}U_3 \in \{0,2\}$, $\dim_{\mathbb{R}}U_4\in \{2,4\}$ and $\dim_{\mathbb{R}}U_5=4$. Note that the filtration $\{ U_i \}$ depends on $V_i$ and the almost complex structure $J_{\rho}$. We can prove the following \begin{lemma}\label{U3=2,U2=2} Let $\rho$ be a mean convex closed $\normalfont \text{SL}(3,\mathbb{C})$-structure on a nilpotent Lie algebra $\mathfrak{g}$. If $\mathfrak{g}$ is isomorphic to \[\mathfrak{g}_{11}=(0,0,0,e^{12},e^{14},e^{15}+e^{23}+e^{24}) \quad \text{or} \quad \mathfrak{g}_{14}=(0,0,0,e^{12},e^{13},e^{14}+e^{35}), \] then $U_3=U_4$. If $\mathfrak{g}$ is isomorphic to \[ \mathfrak{g}_{21}=(0,0,0,e^{12},e^{13},e^{14}+e^{23}) \quad \text{or} \quad \mathfrak{g}_{27}=(0,0,0,0,e^{12},e^{14}+e^{25}), \] then $\dim_{\mathbb{R}}U_2=2$, or equivalently $\left < e^1, e^2 \right > $ is $J_{\rho}$-invariant. Moreover, on $\mathfrak{g}_{21}$, up to isomorphism, we also have $\dim_{\mathbb{R}}U_4=4$. 
\end{lemma} \begin{proof} On each Lie algebra $\mathfrak{g}_i$, $i =11, 14, 21, 27$, we consider the generic closed 3-form \[ \rho=\sum_{i<j<k}p_{ijk}\, e^{ijk}, \quad p_{ijk}\in \mathbb{R} \] and we impose $ \lambda (\rho) <0$ and the mean convex condition. First, by a direct computation on each Lie algebra, we determine the expression of $\lambda (\rho)$ in terms of the coefficients $p_{ijk}$ and a basis of $(1,0)$-forms with respect to $J_{\rho}$. Then we exclude the cases where either $\lambda (\rho) \geq 0$ or the matrix $(\beta_{m\overline{n}})$ associated to $d\hat{\rho}$ is not positive semi-definite. As in the proof of Theorem \hyperref[theoremA]{A} we first extract a basis of $(1,0)$-forms from the set of generators $\{ \zeta^ i \}$ and we use \ref{exprbeta} to compute $(\beta_{m\overline{n}})$ in terms of $p_{ijk}$. We shall give all the details for the Lie algebra $\mathfrak{g}_{11}$. For the other cases the computations are similar and we only report the necessary conditions on $p_{ijk}$. The generic closed $3$-form $\rho$ on the Lie algebra $\mathfrak{g}_{11}$ has \begin{align*} \lambda(\rho)=& ( p_{126}p_{236}-p_{126}p_{146}-p_{135}p_{246}+p_{145}p_{236}+p_{146}p_{235}-p_{146}p_{245} +p_{234}p_{246} \\ & -p_{235}p_{245})^2+ 4p_{246}(p_{123}p_{236}p_{246}-p_{123}p_{246}^2-p_{124}p_{236}^2 +p_{124}p_{236}p_{246} \\ &+2p_{125}p_{146}p_{236} -p_{125}p_{146}p_{246}+p_{125}p_{235}p_{236}-p_{125}p_{235}p_{246}-p_{134}p_{235}p_{246} \\ & +p_{134}p_{236}p_{245}-p_{125}p_{146}p_{246}+p_{135}p_{234}p_{246}-p_{135}p_{235}p_{245}+p_{145}p_{146}p_{235} \\ &+p_{145}p_{235}^2-p_{145}p_{234}p_{236}) +4p_{146}p_{236}(-p_{125}p_{236}+p_{135}p_{235}-p_{145}p_{235}). \end{align*} Then we have the following possibilities: \begin{itemize} \item[(a)] $p_{246}\neq 0, p_{246}\neq p_{236}$. 
Then $\left(e^1-iJ_{\rho}e^1,e^2-iJ_{\rho}e^2,e^3-iJ_{\rho}e^3\right)$ is a basis for $\Lambda^{1,0}\mathfrak{g}_{11}^*$, but $(\beta_{m\overline{n}})$ being positive semi-definite implies $\lambda (\rho) =0$, a contradiction. \item[(b)] $p_{246}=0, p_{236}\neq 0, p_{146}\neq 0$. Taking $\left(e^1-iJ_{\rho}e^1,e^2-iJ_{\rho}e^2,e^5-iJ_{\rho}e^5\right)$ as a basis for $\Lambda^{1,0}\mathfrak{g}_{11}^*$, again we find that $(\beta_{m\overline{n}})$ being positive semi-definite implies $\lambda (\rho) =0$. \item[(c)] $p_{246}=p_{236}=0,$ or $p_{246}=p_{146}=0$, but then $\lambda (\rho) \geq 0$. \item[(d)] $p_{236}=p_{246}\neq 0$. In particular this implies that $V_2=\left<e^1,e^2\right>$ is $J_{\rho}$-invariant, i.e., $\dim_{\mathbb{R}}U_2=2$. Notice also that, since $J_{\rho} e^3 (e_6)=0$ if and only if $p_{236}=0$, we also have that $V_4=\left<e^1,e^2,e^3,e^4\right>$ is not $J_{\rho}$-invariant, hence $U_2=U_3=U_4$. \end{itemize} By a very similar discussion, one can show that a generic mean convex closed $\normalfont \text{SL}(3,\mathbb{C})$-structure $\rho$ on $\mathfrak{g}_{14}$ must have $p_{245}=0$ and $p_{356}\neq 0$. In particular, since $J_{\rho} e^1, J_{\rho} e^3 \in \left<e^1,e^3\right>$, we have $\dim_{\mathbb{R}} U_3=2$. Moreover, $J_{\rho} e^2 (e_6)\neq 0$, hence $\dim_{\mathbb{R}}U_2=0$ and $U_3=U_4$. Analogously, every mean convex closed $\normalfont \text{SL}(3,\mathbb{C})$-structure $\rho$ on $\mathfrak{g}_{21}$ must have $p_{345}=0$. This implies that $V_2$ and $V_4$ are $J_{\rho}$-invariant, so that $\dim_{\mathbb{R}} U_2=2$, $\dim_{\mathbb{R}}U_4=4$ and $U_2=U_3$. Finally, a mean convex closed $\normalfont \text{SL}(3,\mathbb{C})$-structure $\rho$ on $\mathfrak{g}_{27}$ must have $p_{345}=0$. In particular this implies that $V_2$ is $J_{\rho}$-invariant so that $U_2=U_3$. \end{proof} Now we can prove Theorem B. 
\begin{proof}[Proof of Theorem B] Starting from the classification of half-flat nilpotent Lie algebras given in \cite{Conti}, we divide the discussion depending on the first Betti number $b_1$ of $\mathfrak{g}$. When $b_1(\mathfrak{g})=2$, the claim follows directly by Theorem \hyperref[theoremA]{A}. In particular, we have seen that $\mathfrak{g}_4$ cannot admit mean convex closed $\text{SL}(3,\mathbb{C})$-structures and, for the remaining Lie algebras $\mathfrak{g}_6$, $\mathfrak{g}_7$ and $\mathfrak{g}_8$ from Table \ref{table1}, we provide explicit examples on the respective Lie algebras in Table \ref{table2}. Analogously, when $b_1(\mathfrak{g})=3$, an explicit example of mean convex half-flat structure on $\frak g_i$, $i=10,13,15,16,22,24$, is given in Table \ref{table2}. By Theorem \hyperref[theoremA]{A}, we may exclude the existence of mean convex half-flat structures on $\mathfrak{g}_9$ and $\mathfrak{g}_{12}$. For the remaining Lie algebras $\mathfrak{g}_{i}$, $i=11,14,21$, let $(\omega,\rho)$ be a mean convex half-flat structure on $\mathfrak{g}_i$. Then, by Lemma \ref{U3=2,U2=2}, with respect to the fixed nilpotent filtration $V_i = \left < e^1, \ldots, e^i \right >$, we may assume $\dim_{\mathbb{R}}U_3=2$. Using this and the information on $U_4$ we collected in Lemma \ref{U3=2,U2=2}, we shall show that on the three Lie algebras there exists an adapted basis $(f^i)$ with dual basis $(f_i)$ such that $df^1=df^2=0$ and $f_6\in \xi(\mathfrak{g}_i)$, where by $ \xi(\mathfrak{g}_i)$ we denote the center of $\frak g_i$. To see this, let us consider the case of $\mathfrak{g}_{21}$ first. Then we may assume $\dim_{\mathbb{R}}U_4=4$. This occurs if and only if $V_4=J_{\rho} V_4$.
In particular, we may choose a $g$-orthonormal basis $\left(f^1,f^2\right)$ of $U_3$ such that $J_{\rho} f^1=-f^2$, take $f^3,f^4\in U_3^{\perp}\cap U_4$ of unit norm such that $J_{\rho} f^3=-f^4$, and complete it to a basis for $\mathfrak{g}_{21}^*$ by choosing $f^5\in U_4^{\perp}\cap V_5$ and $f^6\in U_4^{\perp} \cap J_{\rho}V_5$ of unit norm such that $J_{\rho} f^5=-f^6$. Then, by construction, $\left(f^1,\ldots,f^6\right)$ is an adapted basis for the $\text{SU}(3)$-structure $(\omega,\rho)$. In particular, since $V_5=\left<f^1,f^2,f^3,f^4,f^5\right>$, the inclusion $dV_j\subset \Lambda^2(V_{j-1})$ implies $f_6\in \xi(\mathfrak{g}_{21})$. Therefore, since $f^1,f^2\in V_3=\ker d$, we have $df^1=df^2=0$. Now we consider $\mathfrak{g}_{11}$ and $\mathfrak{g}_{14}$. By Lemma \ref{U3=2,U2=2}, we can assume $\dim_{\mathbb{R}}U_4=2$ for both Lie algebras. As shown in \cite{ChiossiSwann}, since $U_4,V_3\subset V_4$, we have $\dim_{\mathbb{R}}(U_4\cap V_3)\geq 1$ and we may take $\left(f^1,f^2\right)$ to be a unitary basis of $U_4$ with $f^1\in V_3$. Then, since $U_3\subset V_3=\ker d$, we may suppose $df^1=df^2=0$. Analogously, since $\dim_{\mathbb{R}}(V_4\cap J_{\rho}V_5)\geq \dim_{\mathbb{R}}V_4+\dim_{\mathbb{R}}J_{\rho}V_5-6=3$ and $U_5\cap V_4=V_5 \cap J_{\rho}V_5\cap V_4= V_4\cap J_{\rho}V_5$, then $\dim_{\mathbb{R}}(U_5\cap V_4)\geq 3$, from which $\dim_{\mathbb{R}}(U_5\cap V_4\cap U_4^{\perp})\geq 1$ follows. Then we may take $\left(f^3,f^4\right)$ to be a unitary basis of $U_4^{\perp}\cap U_5$ with $f^3\in V_4$. Finally, since $\dim_{\mathbb{R}}(U_5^{\perp}\cap V_5)\geq 1$, we may take a unitary basis $\left(f^5,f^6\right)$ of $U_5^{\perp}$ with $f^5\in V_5$. By construction, $\left(f^1,f^2,\ldots,f^6\right)$ is an adapted basis for $(\omega,\rho)$. In particular, since $U_5\subset V_5$, we also have $V_5=\left<f^1,f^2,f^3,f^4,f^5\right>$, which implies $f_6\in \xi(\mathfrak{g}_i)$, for $i=11,14$. This proves our claim.
Now, we shall show that the three Lie algebras $\mathfrak{g}_{i}$, $i=11,14,21$, do not admit any mean convex half-flat structures. By contradiction, let us suppose there exists a nilpotent Lie algebra $\mathfrak{g}$ endowed with a mean convex half-flat structure $(\omega,\rho)$ which is isomorphic to $\mathfrak{g}_{11}$, $\mathfrak{g}_{14}$ or $\mathfrak{g}_{21}$. By the previous discussion, without loss of generality, we may assume that there exists an adapted basis $(f^i)$, i.e. satisfying $$ \omega=f^{12}+f^{34}+f^{56},\quad \rho=f^{135}-f^{146}-f^{236}-f^{245}, \quad \hat{\rho}=f^{136}+f^{145}+f^{235}-f^{246}, $$ and such that $df^1=df^2=0$, $f_6\in \xi(\mathfrak{g})$. In particular, $\frak g$ has structure equations \[ df^1=df^2=0, \quad \displaystyle df^k=-\sum_{\substack{ i<j\\ i,j=1}}^5 c_{ij}^k f^{ij}, \quad k=3,4,5,6. \] By imposing the unimodularity of $\frak g$, i.e. $ \sum_{j} c_{ij}^j=0$, for all $i=1,\ldots,6$, and that $(\omega, \rho)$ is half-flat, we can show by a direct computation that, if $c_{34}^5\neq 0$, then the Jacobi identities $d^2 f^i =0$, $i = 3, \ldots, 6$, are equivalent to the conditions \[ c_{15}^4=c_{25}^4=c_{25}^3=c_{15}^6=c_{13}^4=c_{14}^4=c_{13}^3=c_{23}^3=c_{24}^3=0, \] which imply $b_1(\mathfrak{g})\geq 4$, so we can exclude this case. Then we must have $c_{34}^5=0$. Let us assume $c_{12}^6\neq 0$. Again a straightforward computation shows that $d^2f^6=0$ implies \[ c_{25}^3=c_{25}^4=c_{15}^4=0, \quad c_{13}^3=-c_{14}^4, \quad c_{23}^3=-c_{13}^4-c_{15}^6. \] Now let us look at the mean convex condition. Since we are working in the adapted basis $(f^i)$, using \ref{exprbeta} we obtain that the matrix $(\beta_{m\overline{n}})$ associated to $d\hat{\rho}$, with respect to the basis $(\xi^1=f^1+if^2,\xi^2=f^3+if^4,\xi^3=f^5+if^6)$, is given by \[ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & c_{15}^6-i(c_{24}^3+c_{14}^4) \\ 0 & c_{15}^6+i(c_{24}^3+c_{14}^4) & -c_{14}^5-c_{13}^6+c_{24}^6-c_{23}^5 \end{pmatrix} . 
\] Therefore $d\hat{\rho}$ is semipositive if and only if $c_{15}^6=0$, $c_{24}^3=-c_{14}^4$ and $ -c_{14}^5-c_{13}^6+c_{24}^6-c_{23}^5>0$. In particular, $c_{15}^6=0$ and $c_{24}^3=-c_{14}^4$ imply that the Jacobi identities hold if and only if $c_{13}^4=c_{14}^4=0$. However, this also implies $df^3=df^4=0$ so that $b_1(\mathfrak{g})\geq 4$ and we have to discard this case as well. Therefore $c_{34}^5=c_{12}^6=0$ and, as a consequence, \begin{equation}\label{b1=3} \begin{aligned} df^3 = & -c_{13}^3f^{13}-(c_{13}^4+c_{15}^6)f^{14}-c_{25}^4f^{15}-c_{23}^3 f^{23}-c_{24}^3 f^{24}-c_{25}^3 f^{25}, \\ df^4 = & -c_{13}^4f^{13}-c_{14}^4 f^{14}-c_{15}^4 f^{15}-c_{13}^3 f^{23}-(c_{13}^4 + c_{15}^6) f^{24}-c_{25}^4 f^{25},\\ df^5 = & -(c_{14}^6+c_{23}^6+c_{24}^5)f^{13}-c_{14}^5 f^{14}+(c_{14}^4+c_{13}^3) f^{15}-c_{23}^5 f^{23}-c_{24}^5 f^{24} \\ &+(c_{23}^3+ c_{13}^4 +c_{15}^6) f^{25}, \\ df^6= & -c_{13}^6 f^{13}-c_{14}^6 f^{14}-c_{15}^6 f^{15}-c_{23}^6 f^{23}-c_{24}^6 f^{24}-(c_{24}^3-c_{13}^3) f^{25}. \end{aligned} \end{equation} In particular, $f^{12}$ is a non-exact $2$-form belonging to $\Lambda^2(\ker d)$ such that $f^{12} \wedge d\mathfrak{g}^*=0$. On the other hand, a simple computation shows that for any Lie algebra $\mathfrak{g}_i$, for $i=11,14,21$, a $2$-form $\alpha\in \Lambda^2(\ker d)$ such that $\alpha\wedge d\mathfrak{g}_i^*=0$ is necessarily exact, so we get a contradiction. This concludes the non-existence part of the proof in the case $b_1=3$. Now we consider the remaining case $b_1(\mathfrak{g})\geq 4$. An explicit example of mean convex half-flat structure on $\mathfrak{g}_i$, $i=25,28,29,30,31,32,\allowbreak33$, is given in Table \ref{table2}. Then, we only need to prove the non-existence of mean convex half-flat structures on $\mathfrak{g}_{27}$. Let $(\omega,\rho)$ be a mean convex half-flat structure on $\mathfrak{g}_{27}$. 
We claim that on $\mathfrak{g}_{27}$ there exists an adapted basis $(f^i)$ such that $df^1=df^2=df^3=0$ and $f_{6}\in \xi(\mathfrak{g}_{27})$. By Lemma \ref{U3=2,U2=2}, we can assume $U_2=U_3$ with $\dim_{\mathbb{R}}U_3=2$. We recall that $U_4$ has dimension $2$ or $4$. Let us first suppose $\dim_{\mathbb{R}}U_4=4$. We note that in this case the existence of an adapted basis $(f^i)$ for $(\omega,\rho)$ such that $f_6\in \xi(\mathfrak{g}_{27})$ and $V_4=U_4=\left<f^1,f^2,f^3,f^4\right>$ follows from the previous discussion on $\mathfrak{g}_{21}$, where we only used $\dim_{\mathbb{R}}U_2=2$ and $\dim_{\mathbb{R}}U_4=4$. In particular, since $V_4=\ker d$ on $\mathfrak{g}_{27}$, in this case we also have $df^1=df^2=df^3=df^4=0$. When $\dim_{\mathbb{R}}U_4=2$ instead, since $U_2=U_3=U_4$, the discussion is the same as for $\mathfrak{g}_{11}$ and $\mathfrak{g}_{14}$, where we only used $U_3=U_4$ to find an adapted basis such that $df^1=df^2=0$ and $f_6$ lying in the center. In particular, since by construction $f^1,f^2,f^3\in V_4$, on $\mathfrak{g}_{27}$ we also have $df^3=0$, since $V_4=\ker d$. This proves our claim on $\mathfrak{g}_{27}$. Now, using this claim we shall show that $\mathfrak{g}_{27}$ does not admit any mean convex half-flat structures. As in the previous cases, by contradiction, let us suppose there exists a nilpotent Lie algebra $\mathfrak{g}$ isomorphic to $\mathfrak{g}_{27}$ admitting a mean convex half-flat structure $(\omega,\rho)$. Then we may assume that there exists on $\frak g$ an adapted basis $(f^i)$ for $(\omega,\rho)$ such that $df^1=df^2=df^3=0$ and $V_5=\left<f^1,f^2,f^3,f^4,f^5\right>$, so that $f_6\in \xi(\mathfrak{g})$. Then \[ \displaystyle df^k=-\sum_{\substack{ i<j\\ i,j=1}}^5 c_{ij}^k f^{ij}, \quad k=4,5,6.
\] By imposing the unimodularity of $\frak g$ and that $(\omega,\rho)$ is half-flat, we get \begin{equation}\label{b1=4} \begin{aligned} df^4 = & c_{15}^6 f^{13}-c_{14}^4 f^{14}-c_{15}^4 f^{15},\\ df^5 = & c_{34}^5 f^{12}-(c_{24}^5+c_{14}^6+c_{23}^6) f^{13}-c_{14}^5 f^{14}+c_{14}^4 f^{15}-c_{23}^5 f^{23} \\ &-c_{24}^5 f^{24}-c_{34}^5 f^{34}, \\ df^6= & -c_{12}^6 f^{12}-c_{13}^6 f^{13}-c_{14}^6 f^{14}-c_{15}^6 f^{15}-c_{23}^6 f^{23}-c_{24}^6 f^{24}+c_{12}^6 f^{34}. \end{aligned} \end{equation} Since $b_1 (\frak g) =4$, there should exist a closed 1-form linearly independent from $f^1, f^2$ and $f^3$. Moreover, since $\ker d=V_4\subset V_5= \left<f^1,f^2,f^3,f^4,f^5\right>$, the matrix $C$ associated to \[ d:\left<f^4,f^5\right> \to \Lambda^2V_5 = \Lambda^2\left<f^1,f^2,f^3,f^4,f^5\right> \] must have rank equal to $1$. This is equivalent to requiring that $C$ is not the zero matrix and all the $2\times 2$ minors of $C$ vanish. After eliminating all the zero rows, we have \[ C=\begin{pmatrix} 0 & c_{34}^5 \\ c_{15}^6 & -c_{24}^5-c_{14}^6-c_{23}^6 \\ -c_{14}^4 & -c_{14}^5 \\ -c_{15}^4 & c_{14}^4 \\ 0 & -c_{23}^5 \\ 0 & -c_{24}^5 \\ 0 & -c_{34}^5 \end{pmatrix} . \] By using that $(f^i)$ is an adapted basis and \ref{exprbeta}, we get \[ (\beta_{m\overline{n}}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & c_{15}^4 & c_{15}^6-ic_{14}^4 \\ 0 & c_{15}^6+ic_{14}^4 & -c_{14}^5-c_{13}^6+c_{24}^6-c_{23}^5 \end{pmatrix}. \] Let us suppose $c_{15}^4=0$. Then $(\beta_{m\overline{n}})$ being positive semi-definite implies $c_{14}^4=c_{15}^6=0$, from which it follows that $\mathfrak{g}$ is $2$-step nilpotent, so that we can discard this case since $\mathfrak{g}_{27}$ is $3$-step nilpotent. Thus, we have to impose $c_{15}^4\neq 0$. 
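Explicitly, the exclusion of the case $c_{15}^4=0$ follows from the principal $2\times 2$ minor of $(\beta_{m\overline{n}})$ on the indices $\{2,3\}$: when $c_{15}^4=0$, positive semi-definiteness requires
\[
\beta_{2\overline{2}}\beta_{3\overline{3}}-\lvert \beta_{2\overline{3}}\rvert^2=-\left( (c_{15}^6)^2+(c_{14}^4)^2 \right)\geq 0,
\]
which forces $c_{14}^4=c_{15}^6=0$ and leads to the $2$-step nilpotent case discarded above. We therefore proceed with $c_{15}^4\neq 0$.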
As a consequence, $d^2f^i=0$, $i =4,5,6$, if and only if $c_{24}^5=c_{34}^5=c_{24}^6=c_{23}^5=c_{12}^6=0,$ from which it follows that $b_1(\mathfrak{g})=4$ holds if and only if \[ c_{14}^5=-\frac{c_{14}^4}{c_{15}^4}, \quad c_{14}^6=\frac{c_{14}^4 c_{15}^6-c_{15}^4 c_{23}^6}{c_{15}^4}. \] Then $\frak g$ must have structure equations \begin{equation}\label{g27} \begin{aligned} df^1=& df^2=df^3=0, \\ df^4=& c_{15}^6 f^{13}-c_{14}^4 f^{14}-c_{15}^4 f^{15}, \\ df^5=& -\frac{c_{14}^4 c_{15}^6}{c_{15}^4} f^{13}+\frac{(c_{14}^4)^2}{c_{15}^4} f^{14} + c_{14}^4 f^{15}, \\ df^6=& -c_{13}^6 f^{13}-\frac{c_{14}^4 c_{15}^6-c_{15}^4 c_{23}^6}{c_{15}^4} f^{14}-c_{15}^6 f^{15}-c_{23}^6 f^{23}. \end{aligned} \end{equation} Note that, by \ref{g27}, $\mathfrak{g}$ has the same central and derived series as $\mathfrak{g}_{27}$ and, if $c_{23}^6=0$, $\mathfrak{g}$ is almost abelian, so it cannot be isomorphic to $\mathfrak{g}_{27}$. Thus we can suppose $c_{23}^6\neq 0$. By \cite{Conti}, a $6$-dimensional 3-step nilpotent Lie algebra having $b_1=4$ and admitting a half-flat structure must be isomorphic to either $\mathfrak{g}_{25}$ or $\mathfrak{g}_{27}$. In addition, $b_2(\mathfrak{g}_{25})=6$, while $b_2(\mathfrak{g}_{27})=7$. We shall show that we cannot have $b_2(\mathfrak{g})=7$ and so we shall get a contradiction. To this aim we need to compute the space $Z^2$ of closed $2$-forms. By a direct computation using \ref{g27} and $c_{23}^6\neq 0$, it follows that $\dim Z^2=\dim \Lambda^2 V_4+2=8$. Therefore, in order to get $b_2(\mathfrak{g})=7$, we have to require that the space $B^2$ of exact 2-forms is one-dimensional. This is equivalent to asking that the linear map \[ d\rvert_{\left< f^4,f^5,f^6\right>}:\left< f^4,f^5,f^6\right> \to \Lambda^2 \mathfrak{g}^*, \] has rank equal to $1$. Let us denote by $E$ the matrix associated to $d\rvert_{\left< f^4,f^5,f^6\right>}$ in the induced basis ($f^{ij}$) of $\Lambda^2 \mathfrak{g}^*$. 
Eliminating all the zero rows, one has \[ E=\begin{pmatrix} c_{15}^6 & -\dfrac{c_{14}^4 c_{15}^6}{c_{15}^4} & -c_{13}^6 \\ -c_{14}^4 & \dfrac{(c_{14}^4)^2}{c_{15}^4} &-\dfrac{c_{14}^4 c_{15}^6-c_{15}^4 c_{23}^6}{c_{15}^4} \\ -c_{15}^4 & c_{14}^4 & -c_{15}^6 \\ 0 & 0 & -c_{23}^6 \end{pmatrix} . \] Then $E$ has rank $1$ if and only if $E$ is not the zero matrix and all the $2\times 2$ minors of $E$ vanish. Notice that the minor $c_{23}^6 c_{15}^4$ is different from zero, since we have already excluded both cases $c_{23}^6=0$ and $c_{15}^4=0$. Then $\mathfrak{g}$ cannot be isomorphic to $\mathfrak{g}_{27}$ and we obtain a contradiction. This concludes the case $b_1\geq 4$ and the proof of the theorem. \end{proof} \begin{remark} By Theorem \hyperref[half-flat mean convex]{B}, we notice that, on a $6$-dimensional nilpotent Lie algebra $\mathfrak{g}$ with $b_1(\mathfrak{g})=2$, whenever a mean convex half-flat $\text{SU}(3)$-structure exists, a double example can also be found (see Table \ref{table2}). This is not true for different values of the first Betti number. \end{remark} Under the hypothesis of exactness, we can prove the following \begin{theorem}\label{coupledmeanconvex} Let $\mathfrak{g}$ be a $6$-dimensional nilpotent Lie algebra admitting an exact mean convex $ \normalfont \text{SL}(3,\mathbb{C})$-structure. Then $\mathfrak{g}$ is isomorphic to $\mathfrak{g}_{18} $ or $\mathfrak{g}_{28}$. Moreover, up to a change of sign, every exact definite $3$-form $\rho$ on $\mathfrak{g}_{18}$ and $\mathfrak{g}_{28}$ is mean convex, and $\mathfrak{g}_{28}$ is the only nilpotent Lie algebra admitting mean convex coupled structures, up to isomorphism. \end{theorem} \begin{proof} Among the $6$-dimensional nilpotent Lie algebras admitting half-flat structures, as shown in the proof of \cite[Theorem 4.1]{FinoRaffero}, the only Lie algebras that can admit exact $\text{SL}(3,\mathbb{C})$-structures are isomorphic to $\mathfrak{g}_4$, $\mathfrak{g}_9$ or $\mathfrak{g}_{28}$. 
Therefore, by Theorem \hyperref[theoremA]{A}, $\mathfrak{g}_{28}$ is the only nilpotent Lie algebra among them which can admit a mean convex structure. In particular, a coupled mean convex structure on $\mathfrak{g}_{28}$ is given in Table \ref{table2}. This example was first found in \cite{FinoRaffero}, up to a change of sign of the definite $3$-form. For the remaining nilpotent Lie algebras $\mathfrak{g}_i$, for $i=3,5,17,18,19,20,23,26,$ which can admit mean convex $\text{SL}(3,\mathbb{C})$-structures by Theorem \hyperref[theoremA]{A}, we prove that $\mathfrak{g}_{18}$ is the only one that admits exact definite $3$-forms. To see this, let $(e^j)$ be the basis of $\mathfrak{g}_{i}^*$ as listed in Table \ref{table1}. Then the generic exact $3$-form $\rho$ on $\frak g_i$ is given by $d \eta$, where \begin{equation}\label{2-form} \eta=\sum_{i<j}p_{ij}e^{ij}, \quad p_{ij}\in \mathbb{R}. \end{equation} By an explicit computation, one can show that, on $\mathfrak{g}_i$, for $i=3,17,19,23,26$, $\lambda(\rho)=0$, while, on $\mathfrak{g}_{5}$ and $\mathfrak{g}_{20}$, $\lambda(\rho)=p_{56}^4>0$. Finally, on $\mathfrak{g}_{18}$, $\lambda(\rho)=-4 p_{56}^4$. Then, if $p_{56}\neq 0$, $\rho=d\eta$ is a definite $3$-form on $\mathfrak{g}_{18}$. Moreover, $(e^1-iJ_{\rho}e^1, e^3-iJ_{\rho}e^3, e^5-iJ_{\rho}e^5)$ is a basis for $\Lambda^{1,0}\mathfrak{g}_{18}^*$ and, with respect to this basis, the matrix $(\beta_{m\overline{n}})$ associated to the $(2,2)$-form $d \hat{\rho}$ is $\text{diag}(0,0,-4p_{56})$. Then, when $p_{56}<0$, $\rho$ is mean convex, otherwise $-\rho$ is. By a direct computation one can check that the same conclusions hold also for $\mathfrak{g}_{28}$. In particular, the generic exact $3$-form $\rho=d\eta$, with $\eta$ as in \ref{2-form}, is definite as long as $p_{56}\neq0$.
Moreover, $(e^1-iJ_{\rho}e^1, e^3-iJ_{\rho}e^3, e^5-iJ_{\rho}e^5)$ is a basis of $\Lambda^{1,1}\mathfrak{g}_{28}^*$, for every exact definite $\rho$ and, with respect to this basis, the matrix $(\beta_{m\overline{n}})$ associated to the $(2,2)$-form $d \hat{\rho}$ is $\text{diag}(0,0,-4p_{56})$. \end{proof} \section{Hitchin flow equations} \label{Section6} In this section we study the mean convex property in relation to the Hitchin flow equations \ref{HitchinFlow}. We recall that the solution $(\omega(t), \rho(t))$ of \ref{HitchinFlow} starting from a half-flat structure remains half-flat as long as it exists. However, the same does not hold in general for special classes of half-flat structures. A natural question is then whether the Hitchin flow equations preserve the mean convexity of the initial data $(\omega(0), \rho(0))$. A first example of a solution preserving the mean convexity of the initial data, up to a change of sign of $\rho(0)$, was found in \cite[Proposition 5.4]{FinoRaffero2}. In this case the initial structure is coupled. More generally, when the Hitchin flow solution $(\omega(t),\rho(t))$ preserves the coupled condition of the initial data, then $\rho(t)=f(t)\rho(0)$, where $f\colon I\to \mathbb{R}$ is a non-zero smooth function with $f(0)=1$ (for more details see \cite[Proposition 5.2]{FinoRaffero2}). Hence, a coupled solution preserves the mean convexity of the initial data as long as it exists. Some further remarks can be made in other special cases. If $(\omega(t),\rho(t))$ is a solution of \ref{HitchinFlow} starting from a strictly mean convex half-flat structure $(\omega,\rho)$, by continuity the solution remains mean convex, at least for small times. This occurs, for instance, for double structures. In particular cases, the mean convex property of the double initial data is preserved for all times: \begin{proposition} Let $M$ be a connected $6$-manifold endowed with a double structure $(\omega,\rho)$.
If $(\omega(t),\rho(t))$ is a double solution of \ref{HitchinFlow} defined on some $I\subseteq \mathbb{R}$, $0\in I$, i.e. $d\hat{\rho}(t)=\nu_0(t)\omega^2(t)$ for each $t\in I$ for some smooth nowhere vanishing function $\nu_0\colon I\to \mathbb{R} $, then there exists a nowhere vanishing smooth function $f:I\to \mathbb{R}$ such that $\omega(t)=f(t)\omega(0)$. Conversely, if $(\omega(t),\rho(t))$ is a solution of \ref{HitchinFlow} with $\omega(t)=f(t)\omega(0)$, then it is a double solution. \end{proposition} \begin{proof} Let $(\omega(t),\rho(t))$ be a solution with $\omega(t)=f(t)\omega(0)$. From \ref{HitchinFlow} one gets \[ d\hat{\rho}(t)=-\frac{1}{2}\frac{\partial}{\partial t}\left(\omega(t)^2\right) =-\frac{1}{2}\frac{\partial}{\partial t}\left( f^2(t)\omega(0)\wedge \omega(0)\right)=-f(t)\dot{f}(t)\omega(0)^2. \] Then $\omega(t)=f(t)\omega(0)$ is a double solution with $\nu_0(t)=- \frac{d}{d t} \ln f(t)$. Conversely, if $d\hat{\rho}(t)=\nu_0(t)\omega(t)^2$, then \[ \frac{\partial}{\partial t} \omega(t)\wedge \omega(t)=-d\hat{\rho}(t)=-\nu_0(t)\omega(t)^2. \] Since the wedge product with $\omega(t)$ is injective on $2$-forms, this is equivalent to $\frac{\partial}{\partial t}\omega(t)=-\nu_0(t)\omega(t)$, whose unique solution is $\omega(t)=f(t)\omega(0)$, with $f(t)=e^{-\int_0^t \nu_0(s)ds}$. \end{proof} We now provide an explicit example of double solution to \ref{HitchinFlow} and show that a double solution with double initial data may not exist. \begin{example} Consider the double $\text{SU}(3)$-structure $(\omega,\rho)$ given in Table \ref{table2} on $\mathfrak{g}_{24}$. The solution of the Hitchin flow equations with initial data $(\omega,\rho)$ is double and it is explicitly given by \begin{align*} \omega(t)&=\left( 1-\frac{5}{2}t \right)^{\frac{1}{5}}\omega, \\ \rho(t)&=-\left( 1-\frac{5}{2}t \right)^{\frac{6}{5}}e^{123}+e^{145}+e^{246}+e^{356}. 
\end{align*} In particular $d\hat{\rho}(t)=\nu_0(t)\omega^2(t)$ with $\nu_0(t) = (2-5t)^{-1}>0$ for each $t$ in the maximal interval of definition $I=(-\infty,\frac{2}{5})$. Consider now the double $\text{SU}(3)$-structure $(\omega,\rho)$ given in Table \ref{table2} on $\mathfrak{g}_{6}$. The solution of the Hitchin flow equations with initial data $(\omega,\rho)$ is given by \begin{align*} \omega(t)&=f_1(t)\left( e^{15}-e^{24} \right)-f_2(t)e^{36}, \\ \rho(t)&=h_1(t)e^{123}+\left(h_2(t)-1\right)e^{134}-e^{146}-e^{235}+e^{256}-e^{345}+h_2(t)e^{126}, \end{align*} where $f_1(t),f_2(t),h_1(t),h_2(t)$ satisfy the following autonomous \textsc{ode} system: \[ \begin{cases} \dot{f_1} =\frac{1}{2f_1^3f_2}\left( 2h_2-1 \right), \\[2pt] \dot{f_2} =-\frac{1}{2f_1^4f_2}\left( 2f_1+f_2\left(2h_2-1\right)\right), \\[2pt] \dot{h_1}=-2f_1,\\[2pt] \dot{h_2}=-f_2, \end{cases} \] with initial conditions $f_1(0)=f_2(0)=h_1(0)=1$, $h_2(0)=0$, which, by the classical existence and uniqueness theorem for \textsc{ode} systems, admits a unique solution with the given initial data. In particular, this solution is not a double solution. A direct computation shows that the eigenvalues $\lambda_i(t)$ of the matrix $(\beta_{m\overline{n}}(t))$ associated to $d\hat{\rho}(t)$ are \[ \lambda_1=\lambda_2=\sqrt{-h_2^2+h_1+h_2}, \quad \lambda_3=(1-2h_2)\sqrt{-h_2^2+h_1+h_2}. \] In particular, the mean convex property is preserved for small times, as expected. \end{example} To our knowledge, the question of whether the Hitchin flow preserves the mean convexity of the initial data when the $(2,2)$-form is not positive but just semi-positive is still open. Nonetheless, some elementary considerations can be made in order to obtain a better understanding of the problem. Let $M$ be a compact real analytic $6$-dimensional manifold endowed with a half-flat mean convex $\text{SU}(3)$-structure $(\omega,\rho)$.
Since the unique solution of \ref{HitchinFlow} starting from $(\omega,\rho)$ is a one-parameter family of half-flat structures $(\omega(t),\rho(t))$, we can write \[ d\hat{\rho}(t)=(\nu_0(t)\omega(t)-\nu_2(t))\wedge\omega(t), \] where $\nu_0(t)\in C^{\infty}(M)$ and $\nu_2(t)\in \Lambda^{1,1}_0M$ is a primitive $(1,1)$-form with respect to $J_{\rho(t)}$ for each $t\in I$, where $I$ is the maximal interval of definition of the flow. Then $d\hat{\rho}(t)\wedge\omega(t)=\nu_0(t)\omega(t)^3$ and, since $\nu_0(0)>0$ by the mean convexity of the initial data, by continuity we have $\nu_0(t)>0$ at least for small times. By \ref{HitchinFlow}, as long as $\nu_0(t)>0$, the volume form $\omega(t)^3$ is pointwise decreasing: \[\frac{\partial}{\partial t} (\omega(t)^3)=\frac{\partial}{\partial t}(\omega(t)^2)\wedge\omega(t)+ \frac{\partial}{\partial t}\omega(t)\wedge \omega(t)^2=-3d\hat{\rho}(t)\wedge\omega(t)=-3\nu_0(t)\omega(t)^3. \] Moreover, $\omega(t)^2$ is a positive $(2,2)$-form with respect to $J_{\rho(t)}$ for all $t\in I$ and, from the second equation in \ref{HitchinFlow}, we know that $-\partial_t(\omega^2(t))$ remains a $(2,2)$-form with respect to $J_{\rho(t)}$ for each $t\in I$, and that $-\partial_t(\omega^2(t))\big|_{t=0}=2 d\hat{\rho}(0)$ is semi-positive. Then the Hitchin flow solution preserves the mean convexity of the initial data if and only if $-\partial_t(\omega^2(t))=2 d\hat{\rho}(t)$ remains semi-positive. The essential difficulty is that the positivity of $\omega^2(t)$, together with the mean convexity of the initial data, is not sufficient to ensure the mean convexity of the solution, since the almost complex structure also evolves in a non-linear way under the equation $\partial_t(\rho(t))=d\omega(t)$. Let us look at the behaviour of \ref{HitchinFlow} on a specific example.
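Before passing to the example, we observe that, in the mean convex regime, the evolution equation for the volume form can be integrated explicitly: at each point of $M$, $\omega(t)^3$ satisfies the linear scalar equation $\partial_t(\omega(t)^3)=-3\nu_0(t)\,\omega(t)^3$, whence
\[
\omega(t)^3=e^{-3\int_0^t\nu_0(s)\,ds}\,\omega(0)^3,
\]
so that, as long as $\nu_0(t)>0$, the volume form decays exponentially in $\int_0^t\nu_0(s)\,ds$.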
\begin{example} Consider the mean convex half-flat structure $(\omega,\rho)$ given in Table \ref{table2} on $\mathfrak{g}_{25}$ and consider the family of solutions to the second equation in \ref{HitchinFlow}, starting from $(\omega,\rho)$: \begin{align*} \omega(t)&=-a_1(t)e^{13}+\frac{1}{a_2(t)}e^{45}+a_2(t)e^{26}, \\ \rho(t)&=e^{156}+b_1(t)e^{124}-e^{235}-e^{346}+b_2(t)(e^{125}-e^{234}), \end{align*} where $a_1(t),a_2(t),b_1(t),b_2(t)$ satisfy the following \textsc{ode} system: \begin{equation}\label{SecondEquation} \begin{cases} \dot{a_1}=-\frac{1}{2a_1 a_2}\left(2a_2^2b_2+1\right), \\ \dot{a_2}=\frac{1}{2a_1^2}\left(2a_2^2b_2-1\right), \end{cases} \end{equation} subject to the normalization condition $\sqrt{b_1-b_2^2}=a_1$, with initial data $a_1(0)=a_2(0)=b_1(0)=1$, $b_2(0)=0$. This system defines a family of solutions to $\frac{1}{2}\partial_t(\omega(t)^2)=-d\hat{\rho}(t)$ depending on $b_2(t)$. Then, if $b_2(t)=a_1(t)-1$, for instance, $d\hat{\rho}(t)$ is not semi-positive, at least for small times $t>0$. However, the unique solution to \ref{HitchinFlow} starting from $(\omega,\rho)$, given by \ref{SecondEquation} together with \[ \begin{cases} \dot{b_1}=-\frac{1}{a_2}, \\ \dot{b_2}=a_2, \end{cases} \] preserves the mean convexity of the initial data. \end{example} By a direct computation, one can show that the mean convexity of the initial data is preserved by \ref{HitchinFlow}, for small times, also in all the other examples of half-flat mean convex structures given in Table \ref{table2}. \section{Proof of Theorem C}\label{Section7} We recall that a symplectic form $\Omega$ is said to \emph{tame} an almost complex structure $J$ if its $(1,1)$-part $\Omega^{1,1}$ is positive. A closed $\text{SL}(3,\mathbb{C})$-structure $\rho$ is then called \emph{tamed} if there exists a symplectic form $\Omega$ taming the induced almost complex structure $J_{\rho}$.
As already observed in \cite{Donaldson}, compact $6$-manifolds cannot admit tamed mean convex $\text{SL}(3,\mathbb{C})$-structures. Notice that, if we denote as usual $\hat{\rho}= J_\rho \rho$, when the normalization condition $\rho\wedge\hat{\rho}=\frac{2}{3}\omega^3$ is satisfied and $d \omega =0$, then the pair $(\omega,\rho)$ defines a symplectic half-flat structure. Since we consider invariant tamed closed $\text{SL}(3,\mathbb{C})$-structures on solvmanifolds, we can work as in the previous sections at the level of solvable unimodular Lie algebras. \begin{proof}[Proof of Theorem C] First we prove the theorem in the nilpotent case. $6$-dimensional symplectic nilpotent Lie algebras were classified in \cite{Goze} (see also \cite{salamon}) and their structure equations are listed in Table \ref{table1}. For any such Lie algebra we consider a pair $(\rho,\Omega)\in \Lambda^3\mathfrak{g}_i^*\times \Lambda^2\mathfrak{g}_i^*$ explicitly given by \[ \rho=\sum_{i<j<k}p_{ijk}\, e^{ijk}, \quad \Omega=\sum_{r<s}h_{rs}\, e^{rs}, \] where $p_{ijk}, h_{rs}\in \mathbb{R}$, and impose the two conditions $d\rho=0$ and $d\Omega=0$, which are both linear in the coefficients $p_{ijk}, h_{rs}$. Then $\Omega$ is a symplectic form provided that it is non-degenerate, i.e. $\Omega^3\neq 0$. By \cite[Lemma 3.1]{EnriettiFinoVezzoni}, a real Lie algebra $\mathfrak{g}$ endowed with an almost complex structure $J$ such that $J\xi(\mathfrak{g}) \cap [\mathfrak{g},\mathfrak{g}] \neq \{0\} $, $\xi(\mathfrak{g})$ being the center of $\mathfrak{g}$, cannot admit a symplectic form $\Omega$ taming $J$. If we assume $\lambda(\rho)<0$, we may then apply this result on each $\mathfrak{g}_i$ by considering the almost complex structure $J_{\rho}$ induced by $\rho$. We notice that, for any $\mathfrak{g}_i$ listed in Table \ref{table1}, $e_6\in \xi(\mathfrak{g}_i)$. 
A direct computation on each $\mathfrak{g}_i$, for $i=3,4,5,6,7,8,9,10,13,18,19,20,28,29,30$, shows that $J_{\rho}e_6\in [\mathfrak{g}_i,\mathfrak{g}_i]$, for any $J_{\rho}$ induced by a closed $3$-form $\rho$. On $\mathfrak{g}_i$, for $i=23,26,33$, the same obstruction holds since an explicit computation shows that the map \[ \pi\circ J_{\rho}: \xi(\mathfrak{g}_i) \to \mathfrak{g}_i/[\mathfrak{g}_i,\mathfrak{g}_i], \] has non-trivial kernel, where $\pi$ denotes the projection onto $\mathfrak{g}_i/[\mathfrak{g}_i,\mathfrak{g}_i]$. This means that, for each $\rho$, one can find a non-zero element in the center of $\mathfrak{g}_i$ whose image under $J_{\rho}$ lies in $[\mathfrak{g}_i,\mathfrak{g}_i]$. For all the other cases, let $\Omega=\Omega^{1,1}+\Omega^{2,0}+\Omega^{0,2}$ be the decomposition of $\Omega$ into types with respect to $J_{\rho}$, and denote by $\omega$ the $(1,1)$-form $\Omega^{1,1} \coloneqq \frac{1}{2}\left(\Omega+J_{\rho}\Omega\right)$. Then, in order to have a closed $\text{SL}(3,\mathbb{C})$-structure tamed by $\Omega$, we have to require that $\omega$ is positive, i.e., that the symmetric $2$-tensor $g\coloneqq\omega(\cdot,J_{\rho}\cdot)$ is positive definite. Denote by $g_{ij}\coloneqq g(e_i,e_j)$ the coefficients of $g$ with respect to the basis $\left(e_1,\ldots,e_6\right)$ of $\mathfrak{g}$ dual to $\left(e^1,\ldots,e^6\right)$. Then, a direct computation on $\mathfrak{g}_i$, for $i=11,12,21,22,27$, shows that $g_{66}$ always vanishes, so we may discard these cases as well. We may then restrict our attention to the remaining Lie algebras $\mathfrak{g}_{24}$ and $\mathfrak{g}_{31}$, which, as shown in \cite[Theorem 2.4]{ContiTomassini}, are the only $6$-dimensional non-abelian nilpotent Lie algebras carrying a symplectic half-flat structure.
Explicit examples of closed $\text{SL}(3,\mathbb{C})$-structures tamed by a symplectic form $\Omega$ such that $d\Omega^{1,1}\neq 0$ are given by \[\rho= -e^{125}-e^{146}-e^{156}-e^{236}-e^{245}-e^{345}-e^{356}, \quad \Omega= e^{13}+\frac{1}{2} e^{14}-\frac{1}{2} e^{24}+e^{26}+e^{35}+e^{36}, \] on $\mathfrak{g}_{24}$, and by \[ \rho= e^{123}+2e^{145}+e^{156}+e^{235}+e^{246}+e^{345}, \quad \Omega=e^{16}-e^{25}-e^{34}+e^{36}, \] on $\mathfrak{g}_{31}$. This proves the first part of the theorem. Using the classification results in \cite[Th. 2]{Macri} for $6$-dimensional symplectic unimodular (non-nilpotent) solvable Lie algebras, for each Lie algebra one can compute the metric coefficients $g_{ij}$ of $g$ with respect to the basis $(e_1,\ldots,e_6)$ for $\mathfrak{g}$ as listed in Table \ref{table3}. It turns out that, if $\mathfrak{g}$ is one among $\mathfrak{g}_{6,3}^{0,-1}$, $\mathfrak{g}_{6,10}^{0,0}$, $\mathfrak{g}_{6,13}^{-1,\frac{1}{2},0}$, $\mathfrak{g}_{6,13}^{\frac{1}{2},-1,0}$, $\mathfrak{g}_{6,21}^0$, $\mathfrak{g}_{6,36}^{0,0}$, $\mathfrak{g}_{6,78}$, $A_{5,8}^{-1} \oplus \mathbb{R}$, $A_{5,13}^{-1,0,\gamma}$, $A_{5,14}^{0} \oplus \mathbb{R}$, $A_{5,15}^{-1} \oplus \mathbb{R}$, $A_{5,17}^{0,0,\gamma}\oplus\mathbb{R},$ $A_{5,18}^{0} \oplus \mathbb{R}$ or $A_{5,19}^{-1,2} \oplus \mathbb{R}$, each closed definite $3$-form $\rho$ induces a $J_{\rho}$ such that $g_{11}=0$. In a similar way, if $\mathfrak{g}$ is $\mathfrak{g}_{6,15}^{-1}$ or $\mathfrak{g}_{6,18}^{-1,-1}$, then $g_{44}=0$, while when $\mathfrak{g}$ is $\mathfrak{n}_{6,84}^{\pm 1}$, $\mathfrak{e}(2) \oplus \mathbb{R}^3$ or $\mathfrak{e}(1,1) \oplus \mathbb{R}^3$, $g_{33}=0$. Finally, when $\mathfrak{g}=\mathfrak{e}(1,1)\oplus \mathfrak{h}$, then $g_{66}=0$. In some other cases $g$ cannot ever be positive definite since, for each closed $\rho$ inducing an almost complex structure $J_{\rho}$, $g_{rr}=-g_{kk}$ for some $r\neq k$. 
In particular, when $\mathfrak{g}=\mathfrak{g}_{6,70}^{0,0}$, then $g_{11}=-g_{22}$, when $\mathfrak{g}=\mathfrak{e}(2)\oplus \mathfrak{e}(2)$, then $g_{55}=-g_{66}$, and when $\mathfrak{g}$ is $\mathfrak{e}(2)\oplus\mathfrak{e}(1,1)$ or $\mathfrak{e}(2)\oplus\mathfrak{h}$, then $g_{22}=-g_{33}$. As shown in \cite[Prop. 3.1, 4.1 and 4.3]{SHFsolvmanifolds}, for the remaining Lie algebras $\mathfrak{g}_{6,38}^0$, $\mathfrak{g}_{6,54}^{0,-1}$, $\mathfrak{g}_{6,118}^{0,-1,-1}$, $\mathfrak{e}(1,1) \oplus \mathfrak{e}(1,1)$, $A_{5,7}^{-1,\beta,-\beta}$, $A_{5,17}^{0,0,-1} \oplus \mathbb{R}$, $A_{5,17}^{\alpha,-\alpha,1} \oplus \mathbb{R}$, as listed in Table \ref{table3}, a symplectic half-flat structure always exists. Moreover, on these Lie algebras, an explicit example of closed $\text{SL}(3,\mathbb{C})$-structure tamed by a symplectic form $\Omega$ such that $d\Omega^{1,1}\neq 0$ is given in Table \ref{table3}. \end{proof} \begin{remark} \begin{enumerate} \item By \cite[Remarks 3.2 and 4.4]{SHFsolvmanifolds}, the solvable Lie groups corresponding to each solvable Lie algebra admitting closed tamed $\text{SL}(3,\mathbb{C})$-structures admit compact quotients by lattices (for further details see \cite{Bock, FernandezLeonSaralegui, TralleOprea, Yamada}). \item As shown in \cite{Donaldson}, given an $\text{SL}(3,\mathbb{C})$-structure $\rho$ tamed by a $2$-form $\Omega$ on a real $6$-dimensional vector space $V$, the $3$-form \[ \varphi=\rho+\Omega\wedge dt, \] defines a $\text{G}_2$-structure on $V\oplus \mathbb{R}$. Therefore, as an application of Theorem \hyperref[tamed]{C}, we classify decomposable solvable Lie algebras of the form $\mathfrak{g} \oplus \mathbb{R}$ admitting a closed $\text{G}_2$-structure. In particular, in the nilpotent case, this result was already obtained in \cite{ContiFernandez}. \end{enumerate} \end{remark} \medskip
\section{Main results} Throughout this paper, we work over the field of complex numbers. \subsection{Boundedness of singular Fano varieties} A normal, projective variety $X$ is called \emph{Fano} if a negative multiple of its canonical divisor class is Cartier and if the associated line bundle is ample. Fano varieties appear throughout geometry and have been studied intensely, in many contexts. For the purposes of this talk, we remark that Fanos with sufficiently mild singularities constitute one of the fundamental variety classes in birational geometry. In fact, given any projective manifold $X$, the Minimal Model Programme (MMP) predicts the existence of a sequence of rather special birational transformations, known as ``divisorial contractions'' and ``flips'', as follows, $$ \xymatrix{ % X = X^{(0)} \ar@{-->}[rr]^{α^{(1)}}_{\text{birational}} && X^{(1)} \ar@{-->}[rr]^{α^{(2)}}_{\text{birational}} && ⋯ \ar@{-->}[rr]^{α^{(n)}}_{\text{birational}} && X^{(n)}. } $$ The resulting variety $X^{(n)}$ is either canonically polarised (which is to say that a suitable power of its canonical sheaf is ample), or it has the structure of a fibre space whose general fibres are either Fano or have numerically trivial canonical class. The study of (families of) Fano varieties is thus one of the most fundamental problems in birational geometry. \begin{rem}[Singularities] Even though the starting variety $X$ is a manifold by assumption, it is well understood that we cannot expect the varieties $X^{(•)}$ to be smooth. Instead, they exhibit mild singularities, known as ``terminal'' or ``canonical'' --- we refer the reader to \cite[Sect.~2.3]{KM98} or \cite[Sect.~2]{MR3057950} for a discussion and for references. If $X^{(n)}$ admits the structure of a fibre space, its general fibres will also have terminal or canonical singularities. 
Even if one is primarily interested in the geometry of \emph{manifolds}, it is therefore necessary to include families of \emph{singular} Fanos in the discussion. \end{rem} In a series of two fundamental papers, \cite{Bir16a, Bir16b}, Birkar confirmed a long-standing conjecture of Alexeev and Borisov-Borisov, \cite{MR1298994, MR1166957}, asserting that for every $d ∈ ℕ$, the family of $d$-dimensional Fano varieties with terminal singularities is bounded: there exists a proper morphism of quasi-projective schemes over the complex numbers, $u : 𝕏 → Y$, and for every $d$-dimensional Fano $X$ with terminal singularities a closed point $y ∈ Y$ such that $X$ is isomorphic to the fibre $𝕏_y$. In fact, a much more general statement holds true. \begin{thm}[\protect{Boundedness of $ε$-lc Fanos, \cite[Thm.~1.1]{Bir16b}}]\label{thm:BAB} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, let $\mathcal X_{d,ε}$ be the family of projective varieties $X$ with dimension $\dim_ℂ X = d$ that admit an $ℝ$-divisor $B ∈ ℝ\Div(X)$ such that the following holds true. \begin{enumerate} \item\label{il:BAB1} The tuple $(X,B)$ forms a pair. In other words: $X$ is normal, the coefficients of $B$ are contained in the interval $[0,1]$ and $K_X+B$ is $ℝ$-Cartier. \item\label{il:BAB2} The pair $(X,B)$ is $ε$-lc. In other words, the total log discrepancy of $(X,B)$ is greater than or equal to $ε$. \item\label{il:BAB3} The $ℝ$-Cartier divisor $-(K_X+B)$ is nef and big. \end{enumerate} Then, the family $\mathcal X_{d,ε}$ is bounded. \end{thm} \begin{rem}[Terminal singularities] If $X$ has terminal singularities, then $(X,0)$ is $1$-lc. We refer to Section~\ref{sec:singofpairs}, to Birkar's original papers, or to \cite[Sect.~3.1]{MR3224718} for the relevant definitions concerning more general classes of singularities. \end{rem} For his proof of the boundedness of Fano varieties and for his contributions to the Minimal Model Programme, Caucher Birkar was awarded the Fields Medal at the ICM 2018 in Rio de Janeiro.
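\begin{rem}[Example: blow-up of a smooth point] To illustrate Condition~(\ref{il:BAB2}) of Theorem~\ref{thm:BAB} in the simplest setting, let $X$ be smooth of dimension $d ≥ 2$, let $B = 0$, and let $π : \widetilde{X} → X$ be the blow-up of a point, with exceptional divisor $E$. The standard formula for the canonical class of a blow-up reads
$$ K_{\widetilde{X}} = π^* K_X + (d-1)·E, $$
so the log discrepancy of $E$ equals $a_{\log}(E, X, 0) = 1 + (d-1) = d ≥ 2$. More generally, divisors exceptional over a smooth variety have log discrepancy at least two, while prime divisors on $X$ itself contribute the value one, so $(X,0)$ is terminal and $1$-lc, in accordance with the Remark above; the precise definitions are recalled in Section~\ref{sec:singofpairs}. \end{rem}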
\subsubsection{Where does boundedness come from?} \label{ssec:1-1-1v2} The brief answer is: ``From boundedness of volumes!'' In fact, if $(X_t, A_t)_{t ∈ T}$ is a family of tuples where the $X_t$ are normal, projective varieties of fixed dimension $d$ and $A_t ∈ \Div(X_t)$ are very ample, and if there exists a number $v ∈ ℕ$ such that $$ \vol(A_t) := \limsup_{n→∞} \frac{d!·h⁰\bigl( X_t,\, 𝒪_{X_t}(n·A_t) \bigr)}{n^d} < v $$ for all $t ∈ T$, then elementary arguments using Hilbert schemes show that the family $(X_t, A_t)_{t ∈ T}$ is bounded. For the application that we have in mind, the varieties $X_t$ are the Fano varieties whose boundedness we would like to show and the divisors $A_t$ will be chosen as fixed multiples of their anticanonical classes. To obtain boundedness results in this setting, Birkar needs to show that there exists one number $m$ that makes all $A_t := -m·K_{X_t}$ very ample, or (more modestly) ensures that the linear systems $|-m·K_{X_t}|$ define birational maps. Volume bounds for these divisors need to be established, and the singularities of the linear systems need to be controlled. \subsubsection{Earlier results, related results} \label{ssec:1-1-2v2} Boundedness results have a long history, which we cannot cover with any pretence of completeness. Boundedness of smooth Fano surfaces and threefolds follows from their classification. Boundedness of Fano \emph{manifolds} of arbitrary dimension was shown in the early 1990s, in an influential paper of Kollár, Miyaoka and Mori, \cite{KMM92}, by studying their geometry as rationally connected manifolds. Around the same time, Borisov-Borisov were able to handle the toric case using combinatorial methods, \cite{MR1166957}. For (singular) surfaces, Theorem~\ref{thm:BAB} is due to Alexeev, \cite{MR1298994}. Among the newer results, we will only mention the work of Hacon-McKernan-Xu. 
Using methods that are similar to those discussed here, but without the results on ``boundedness of complements'' ($→$ Section~\ref{sec:bcomp}), they are able to bound the volumes of klt pairs $(X, Δ)$, where $X$ is projective of fixed dimension, $K_X + Δ$ is numerically trivial and the coefficients of $Δ$ come from a fixed DCC set, \cite[Thm.~B]{MR3224718}. Boundedness of Fanos with klt singularities and fixed Cartier index follows, \cite[Cor.~1.8]{MR3224718}. In a subsequent paper \cite{MR3507257} these results are extended to give the boundedness result that we quote in Theorem~\ref{thm:boundednessCriterion}, and that Birkar builds on. We conclude with a reference to \cite{Jiang17, Chen18} for current results involving $K$-stability and $α$-invariants. The surveys \cite{MR2827803, MR3821154} give a more complete overview. \subsubsection{Positive characteristic} Apart from the above-mentioned results of Alexeev, \cite{MR1298994}, which hold over algebraically closed fields of arbitrary characteristic, little is known in the case where the characteristic of the base field is positive. \subsection{Applications} \label{ssec:1-2} As we will see in Section~\ref{sec:jordan} below, boundedness of Fanos can be used to prove the existence of fixed points for actions of finite groups on Fanos or, more generally, on rationally connected varieties. Recall that a variety $X$ is \emph{rationally connected} if every two points are connected by an irreducible, rational curve contained in $X$. This allows us to apply Theorem~\ref{thm:BAB} in the study of finite subgroups of birational automorphism groups. \subsubsection{The Jordan property of Cremona groups} Even before Theorem~\ref{thm:BAB} was known, it had been realised by Prokhorov and Shramov, \cite{MR3483470}, that boundedness of Fano varieties with terminal singularities would imply that the birational automorphism groups of projective spaces (= Cremona groups, $\Bir(ℙ^d)$) satisfy the \emph{Jordan property}.
Recall that a group $Γ$ is said to \emph{have the Jordan property} if there exists a number $j ∈ ℕ$ such that every finite subgroup $G ⊂ Γ$ contains a normal, Abelian subgroup $A ⊂ G$ of index $|G:A| ≤ j$. In fact, a stronger result holds. \begin{thm}[\protect{Jordan property of Cremona groups, \cite[Cor.~1.3]{Bir16b}, \cite[Thm.~1.8]{MR3483470}}]\label{thm:jordan} Given any number $d ∈ ℕ$, there exists $j ∈ ℕ$ such that for every complex, projective, rationally connected variety $X$ of dimension $\dim_{ℂ} X = d$, every finite subgroup $G ⊂ \Bir(X)$ contains a normal, Abelian subgroup $A ⊆ G$ of index $|G:A| ≤ j$. \end{thm} \begin{rem} Theorem~\ref{thm:jordan} answers a question of Serre \cite[6.1]{MR2567402} in the positive. A more detailed analysis establishes the Jordan property more generally for all varieties of vanishing irregularity, \cite[Thm.~1.8]{MR3292293}. \end{rem} Theorem~\ref{thm:jordan} ties in with the general philosophy that finite subgroups of $\Bir(ℙ^d)$ should in many ways be similar to finite linear groups, for which the property was established by Jordan more than a century ago. \begin{thm}[\protect{Jordan property of linear groups, \cite{Jordan1877}}]\label{thm:jordan-lin} Given any number $d ∈ ℕ$, there exists $j^{\Jordan}_d ∈ ℕ$ such that every finite subgroup $G ⊂ \GL_d(ℂ)$ contains a normal, Abelian subgroup $A ⊆ G$ of index $|G:A| ≤ j^{\Jordan}_d$. \qed \end{thm} \begin{rem}[Related results] For further information on Cremona groups and their subgroups, we refer the reader to the surveys \cite{MR3229352, MR3821147} and to the recent research paper \cite{Popov18}. For the maximally connected components of automorphism groups of projective varieties (rather than the full group of birational automorphisms), the Jordan property has recently been established by Meng and Zhang without any assumption on the nature of the varieties, \cite[Thm.~1.4]{MZ18}; their proof uses group-theoretic methods rather than birational geometry.
For related results (also in positive characteristic), see \cite{Hu18, MR3830471, SV18} and references there. \end{rem} \subsubsection{Boundedness of finite subgroups in birational transformation groups} Following similar lines of thought, Prokhorov and Shramov also deduce boundedness of finite subgroups in birational transformation groups, for arbitrary varieties defined over a finite field extension of $ℚ$. \begin{thm}[\protect{Bounds for finite groups of birational transformation, \cite[Thm.~1.4]{MR3292293}}] Let $k$ be a finitely generated field over $ℚ$. Let $X$ be a variety over $k$, and let $\Bir(X)$ denote the group of birational automorphisms of $X$ over $\Spec k$. Then, there exists $b ∈ ℕ$ such that any finite subgroup $G ⊂ \Bir(X)$ has order $|G| ≤ b$. \end{thm} As an immediate corollary, they answer another question of Serre\footnote{Unpublished problem list from the workshop ``Subgroups of Cremona groups: classification'', 29–30 March 2010, ICMS, Edinburgh. Available at \url{http://www.mi.ras.ru/~prokhoro/preprints/edi.pdf}. Serre's question is found on page 7.}, pertaining to finite subgroups in the automorphism group of a field. \begin{cor}[\protect{Boundedness for finite groups of field automorphisms, \cite[Cor.~1.5]{MR3292293}}] Let $k$ be a finitely generated field over $ℚ$. Then, there exists $b ∈ ℕ$ such that any finite subgroup $G ⊂ \Aut(k)$ has order $|G| ≤ b$. \end{cor} \subsubsection{Boundedness of links, quotients of the Cremona group} Birkar's result has further applications within birational geometry. Combined with work of Choi-Shokurov, it implies the boundedness of Sarkisov links in any given dimension, cf.~\cite[Cor.~7.1]{MR2784026}. In \cite{BLZ}, Blanc-Lamy-Zimmermann use Birkar's result to prove the existence of many quotients of the Cremona groups of dimension three or more. In particular, they show that these groups are not perfect and thus not simple. 
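\begin{rem}[The case of dimension one] As a classical illustration of Theorem~\ref{thm:jordan}, consider $X = ℙ^1$, the only rationally connected curve, where $\Bir(ℙ^1) = \Aut(ℙ^1) = \mathrm{PGL}_2(ℂ)$. By Klein's classification, the finite subgroups of $\mathrm{PGL}_2(ℂ)$ are the cyclic groups, the dihedral groups, and the polyhedral groups $A_4$, $S_4$ and $A_5$. Cyclic groups are Abelian; a dihedral group contains its rotation subgroup as a normal, Abelian subgroup of index $2$; and $A_4$ and $S_4$ contain the Klein four-group as a normal, Abelian subgroup of index $3$ and $6$, respectively. The extreme case is the simple group $A_5$, whose only normal, Abelian subgroup is the trivial one, of index $60$. The constant $j = 60$ therefore works in Theorem~\ref{thm:jordan} for $d = 1$, and it is optimal. \end{rem}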
\subsection{Outline of this paper} Paraphrasing \cite[p.~6]{Bir16a}, the main tools used in Birkar's work include the Minimal Model Programme \cite{KM98, BCHM10}, the theory of complements \cite{MR1892905, MR2448282, MR1794169}, the technique of creating families of non-klt centres using volumes \cite{MR3224718, MR3034294} and \cite[Sect.~6]{KollarSingsOfPairs}, and the theory of generalised polarised pairs \cite{MR3502099}. In fact, given the scope and difficulty of Birkar's work, and given the large number of technical concepts involved, it does not seem realistic to give more than a panoramic presentation of Birkar's proof here. Largely ignoring all technicalities, Sections~\ref{sec:bcomp}--\ref{sec:lcthres} highlight four core results, each of independent interest. We explain the statements in brief, sketch some ideas of proof and indicate how the results might fit together to give the desired boundedness result. Finally, Section~\ref{sec:jordan} discusses the application to the Jordan property in some detail. \subsection{Acknowledgements} The author would like to thank Florin Ambro, Serge Cantat, Enrica Floris, Christopher Hacon, Vladimir Lazić, Benjamin McDonnell, Vladimir Popov, Thomas Preu, Yuri Prokhorov, Vyacheslav Shokurov, Chenyang Xu and one anonymous reader, who answered my questions and/or suggested improvements. Yanning Xu was kind enough to visit Freiburg and patiently explain large parts of the material to me. He helped me out more than just once. His paper \cite{YXu18}, which summarises Birkar's results, has been helpful in preparing these notes. Even though our point of view is perhaps a little different, it goes without saying that this paper has substantial overlap with Birkar's own survey \cite{Bir18}. \section{Notation, standard facts and known results} \subsection{Varieties, divisors and pairs} We follow standard conventions concerning varieties, divisors and pairs.
In particular, the following notation will be used. \begin{defn}[Round-up, round-down and fractional part] If $X$ is a normal, quasi-projective variety and $B ∈ ℝ\Div(X)$ an $ℝ$-divisor on $X$, we write $⌊ B ⌋$, $⌈ B ⌉$ for the round-down and round-up of $B$, respectively. The divisor $\{B\} := B - ⌊ B ⌋$ is called the \emph{fractional part of $B$}. \end{defn} \begin{defn}[Pair] A \emph{pair} is a tuple $(X, B)$ consisting of a normal, quasi-projective variety $X$ and an effective $ℝ$-divisor $B$ such that $K_X + B$ is $ℝ$-Cartier. \end{defn} \begin{defn}[Couple] A \emph{couple} is a tuple $(X, B)$ consisting of a normal, projective variety $X$ and a divisor $B ∈ \Div(X)$ whose coefficients are all equal to one. The couple is called \emph{log-smooth} if $X$ is smooth and if $B$ has simple normal crossings support. \end{defn} \subsection{$ℝ$-divisors} While divisors with real coefficients had sporadically appeared in birational geometry for a long time, the importance of allowing real (rather than rational) coefficients was highlighted in the seminal paper \cite{BCHM10}, where continuity and compactness arguments for spaces of divisors were used in an essential manner. Almost all standard definitions for divisors have analogues for $ℝ$-divisors, but the generalised definitions are perhaps not always obvious. For the reader's convenience, we recall a few of the more important notions here. \begin{defn}[Big $ℝ$-divisors] Let $X$ be a normal, projective variety. A divisor $B ∈ ℝ\Div(X)$, which need not be $ℝ$-Cartier, is called \emph{big} if there exist an ample $H ∈ ℝ\Div(X)$, an effective $D ∈ ℝ\Div(X)$ and an $ℝ$-linear equivalence $B \sim_ℝ H + D$. \end{defn} \begin{defn}[Volume of an $ℝ$-divisor] Let $X$ be a normal, projective variety of dimension $d$. The \emph{volume} of an $ℝ$-divisor $D ∈ ℝ\Div(X)$ is defined as $$ \vol(D) := \limsup_{m→∞} \frac{d!·h⁰\bigl( X,\, 𝒪_X(⌊mD⌋) \bigr)}{m^d}.
$$ \end{defn} \begin{defn}[Linear system] Let $X$ be a normal, quasi-projective variety and let $M ∈ ℝ\Div(X)$. The \emph{$ℝ$-linear system} $|M|_ℝ$ is defined as $$ |M|_ℝ := \{ D ∈ ℝ\Div(X) \,|\, D \text{ is effective and } D \sim_{ℝ} M \}. $$ \end{defn} \subsection{Invariants of varieties and pairs} \label{sec:singofpairs} We briefly recall a number of standard definitions concerning singularities. In brief, if $X$ is smooth, and if $π : \widetilde{X} → X$ is any birational morphism, where $\widetilde{X}$ is smooth, then any top-form $σ ∈ H⁰ \bigl( X,\, ω_X \bigr)$ pulls back to a holomorphic differential form $τ ∈ H⁰ \bigl( \widetilde{X},\, ω_{\widetilde{X}} \bigr)$, with zeros along the positive-dimensional fibres of $π$. However, if $X$ is singular, if $π : \widetilde{X} → X$ is a resolution of singularities and if $σ ∈ H⁰ \bigl( X,\, ω_X \bigr)$ is any section in the (pre-)dualising sheaf, then the pull-back of $σ$ will only be a rational differential form on $\widetilde{X}$ which might well have poles along the positive-dimensional fibres of $π$. The idea in the definition of ``log discrepancy'' is to use this pole order to measure the ``badness'' of the singularities on $X$. We refer the reader to one of the standard references \cite[Sect.~2.3]{KM98} and \cite[Sect.~2]{MR3057950} for an in-depth discussion of these ideas and of the singularities of the Minimal Model Programme. Since the notation is not uniform across the literature\footnote{The papers \cite{Bir16a, Bir16b, BCHM10} denote the log discrepancy by $a(D, X, B)$, while the standard reference books \cite{KM98, MR3057950} write $a(D, X, B)$ for the standard (= ``non-log'') discrepancies.}, we spend a few lines to fix notation and briefly recall the central definitions of the field. \begin{defn}[Log discrepancy]\label{not:logdiscrep} Let $(X,B)$ be a pair and let $π : \widetilde{X} → X$ be a log resolution of singularities, with exceptional divisors $(E_i)_{1 ≤ i ≤ n}$.
Since $K_X+B$ is $ℝ$-Cartier by assumption, there exists a well-defined notion of pull-back, and a unique divisor $B_{\widetilde{X}} ∈ ℝ\Div(\widetilde{X})$ such that $K_{\widetilde{X}} + B_{\widetilde{X}} = π^* (K_X+B)$ in $ℝ\Div(\widetilde{X})$. If $D$ is any prime divisor on~$\widetilde{X}$, we consider the \emph{log discrepancy} $$ a_{\log}(D, X, B) := 1 - \mult_D B_{\widetilde{X}}. $$ The infimum over all such numbers, $$ a_{\log}(X, B) := \inf \{ a_{\log}(D, X, B) \:|\: π:\widetilde{X} → X \text{ a log resolution and } D ∈ \Div(\widetilde{X}) \text{ prime}\} $$ is called the \emph{total log discrepancy of the pair $(X,B)$}. \end{defn} The total log discrepancy measures how bad the singularities are: the smaller $a_{\log}(X, B)$ is, the worse the singularities are. Table~\vref{tab:x1} lists the classes of singularities that will be relevant in the sequel. In addition, $(X, B)$ is called \emph{plt} if $a_{\log}(D, X, B) > 0$ for every resolution $π : \widetilde{X} → X$ and every \emph{exceptional} divisor $D$ on~$\widetilde{X}$. The class of $ε$-lc singularities, which is perhaps the most relevant for our purposes, was introduced by Alexeev. \begin{table} \centering \begin{tabular}{ccc} \rowcolor{gray!20} If …, then & & $(X,B)$ is called ``…'' \\ \hline $a_{\log}(X, B) ≥ 0$ & … & \emph{log canonical (or ``lc'')} \\ $a_{\log}(X, B) > 0$ & … & \emph{Kawamata log terminal (or ``klt'')} \\ $a_{\log}(X, B) ≥ ε$ & … & \emph{$ε$-log canonical (or ``$ε$-lc'')} \\ $a_{\log}(X, B) ≥ 1$ & … & \emph{canonical} \\ $a_{\log}(X, B) > 1$ & … & \emph{terminal} \end{tabular} \bigskip \caption{Singularities of the Minimal Model Programme} \label{tab:x1} \end{table} \subsubsection{Places and centres} The divisors $D$ that appear in the definition of the log discrepancy deserve special attention, in particular if $a_{\log}(D, X, B) ≤ 0$. \begin{defn}[Non-klt places and centres] Let $(X,B)$ be a pair.
A \emph{non-klt place} of $(X, B)$ is a prime divisor $D$ on some birational model of $X$ such that $a_{\log}(D, X, B) ≤ 0$. A \emph{non-klt centre} is the image on $X$ of a non-klt place. When $(X, B)$ is lc, a non-klt centre is also called an \emph{lc centre}. \end{defn} \subsubsection{Thresholds} Suppose that $(X,B)$ is a klt pair, and that $D$ is an effective divisor on $X$. The pair $(X, B+t·D)$ will then be klt for all sufficiently small numbers $t ≥ 0$, but will in general fail to be lc once $t$ is large. The critical value of $t$ is called the \emph{log-canonical threshold}. \begin{defn}[\protect{LC threshold, compare \cite[Sect.~9.3.B]{Laz04-II}}]\label{def:lct} Let $(X,B)$ be a klt pair. If $D ∈ ℝ\Div(X)$ is effective, one defines the \emph{lc threshold of $D$ with respect to $(X,B)$} as \begin{align*} \lct \bigl(X,\, B,\, D \bigr) & := \sup \bigl\{ t ∈ ℝ \:\bigl|\: (X,B+t·D) \text{ is lc} \bigr\}. \\ \intertext{If $Δ ∈ ℝ\Div(X)$ is $ℝ$-Cartier with non-empty $ℝ$-linear system (but not necessarily effective itself), one defines the \emph{lc threshold of $|Δ|_{ℝ}$ with respect to $(X,B)$} as} \lct \bigl(X,\, B,\, |Δ|_ℝ \bigr) & := \inf \bigl\{ \lct(X,B,D) \:\bigl|\: D ∈ |Δ|_ℝ \bigr\}. \end{align*} \end{defn} \begin{rem}\label{rem:sflct} In the setting of Definition~\ref{def:lct}, it is a standard fact that $$ \lct \bigl(X,\, B,\, |Δ|_ℝ \bigr) = \sup \bigl\{ t ∈ ℝ \:\bigl|\: (X, B+t·D) \text{ is lc for every } D ∈ |Δ|_{ℝ} \bigr\}. $$ In particular, if $(X,B)$ is klt, then $(X, B+t'·D)$ is lc for every $D ∈ |Δ|_{ℝ}$ and every $0 < t' < t$, where $t := \lct \bigl(X,\, B,\, |Δ|_ℝ \bigr)$. \end{rem} \begin{notation} If $B = 0$, we omit it from the notation and write $\lct \bigl(X,\, |Δ|_ℝ \bigr)$ and $\lct \bigl(X,\, D \bigr)$ in short. \end{notation} \subsection{Fano varieties and pairs} Fano varieties come in many variants. For the purposes of this overview, the following classes of varieties will be the most relevant.
\begin{defn}[\protect{Fano and weak log Fano pairs, \cite[Sect.~2.10]{Bir16a}}] ~ \begin{itemize} \item A projective pair $(X, B)$ is called \emph{log Fano} if $(X, B)$ is lc and if $-(K_X+B)$ is ample. If $B = 0$, we just say that $X$ is Fano. \item A projective pair $(X, B)$ is called \emph{weak log Fano} if $(X, B)$ is lc and $-(K_X + B)$ is nef and big. If $B = 0$, we just say that $X$ is weak Fano. \end{itemize} \end{defn} \begin{rem}[Relative notions] There exist relative versions of the notions discussed above. If $(X, B)$ is any quasi-projective pair, if $Z$ is normal and if $X → Z$ is surjective, projective and with connected fibres, we say $(X, B)$ is log Fano over $Z$ if it is lc and if $-(K_X + B)$ is relatively ample over $Z$. Ditto with ``weak log Fano''. \end{rem} \subsection{Varieties of Fano type} Varieties $X$ that \emph{admit} a boundary $B$ that makes $(X,B)$ a Fano pair are said to be of \emph{Fano type}. This notion was introduced by Prokhorov and Shokurov in \cite{MR2448282}. We refer to that paper for basic properties of varieties of Fano type. \begin{defn}[\protect{Varieties of Fano type, \cite[Lem.\ and Def.~2.6]{MR2448282}}] A normal, projective variety $X$ is said to be \emph{of Fano type} if there exists an effective $ℚ$-divisor $B$ such that $(X,B)$ is a klt, weak log Fano pair. Equivalently: there exists a big $ℚ$-divisor $B$ such that $K_X + B \sim_ℚ 0$ and such that $(X, B)$ is a klt pair. \end{defn} \begin{rem}[Varieties of Fano type are Mori dream spaces]\label{rem:MoriDream} If $X$ is of Fano type, recall from \cite[Sect.~1.3]{BCHM10} that $X$ is a ``Mori dream space''. Given any $ℝ$-Cartier divisor $D ∈ ℝ\Div(X)$, we can then run the $D$-Minimal Model Programme and obtain a sequence of extremal contractions and flips, $X \dasharrow Y$. If the push-forward $D_Y$ of $D$ to $Y$ is nef, we call $Y$ a minimal model for $D$.
Otherwise, there exists a $D_Y$-negative extremal contraction $Y → T$ with $\dim Y > \dim T$, and we call $Y$ a Mori fibre space for $D$. \end{rem} \begin{rem}[Relative notions] As before, there exists an obvious relative version of the notion ``Fano type''. Remark~\ref{rem:MoriDream} generalises to this relative setting. \end{rem} Varieties of Fano type come in two flavours that often need to be treated differently. The following notion, which we recall for later use, has been introduced by Shokurov. \begin{defn}[Exceptional and non-exceptional pairs]\label{def:exceptional} Let $(X, B)$ be a projective pair, and assume that there exists an effective $P ∈ ℝ\Div(X)$ such that $K_X + B + P \sim_{ℝ} 0$. We say $(X, B)$ is \emph{non-exceptional} if we can choose $P$ so that $(X, B+P)$ is \emph{not} klt. We say that $(X, B)$ is \emph{exceptional} if $(X, B+P)$ is klt for every choice of $P$. \end{defn} \svnid{$Id: 03-genlPairs.tex 78 2019-01-07 12:48:44Z kebekus $} \section{b-Divisors and generalised pairs} In addition to the classical notions for singularities of pairs that we recalled in Section~\ref{sec:singofpairs} above, much of Birkar's work uses the notion of \emph{generalised polarised pairs}. The additional flexibility of this notion allows for inductive proofs, but adds substantial technical difficulties. Generalised pairs were introduced by Birkar and Zhang in \cite{MR3502099}. \subsubsection*{Disclaimer} The notion of generalised polarised pairs features prominently in Birkar's work, and should be presented in an adequate manner. The technical complications arising from this notion are however substantial and cannot be explained within a few pages. As a compromise, this section briefly explains what generalised pairs are, and how they come about in relevant settings. 
Section~\ref{ssec:BC} pinpoints one place in Birkar's inductive scheme of proof where generalised pairs appear naturally, and explains why most (if not all) of the material presented in this survey should in fact be formulated and proven for generalised pairs. For the purpose of exposition, we will however ignore this difficulty and discuss the classical case only. \subsection{Definition of generalised pairs} To begin, we only recall a minimal subset of the relevant definitions, and refer to \cite[Sect.~2]{Bir16a} and to \cite[Sect.~4]{MR3502099} for more details. We start with the notion of b-divisors, as introduced by Shokurov in \cite{MR1420223}, in the simplest case. \begin{defn}[b-divisor]\label{def:1v2} Let $X$ be a variety. We consider projective, birational morphisms $Y → X$ from normal varieties $Y$, and for each $Y$ a divisor $M_Y ∈ ℝ\Div(Y)$. The collection $M := (M_Y)_Y$ is called a \emph{b-divisor} if for any morphism $f : Y' → Y$ of birational models over $X$, we have $M_Y = f_* (M_{Y'})$. \end{defn} \begin{defn}[b-$ℝ$-Cartier and b-Cartier b-divisors] Setting as in Definition~\ref{def:1v2}. A b-divisor $M$ is called \emph{b-$ℝ$-Cartier} if there exists one $Y$ such that $M_Y$ is $ℝ$-Cartier and such that for any morphism $f : Y' → Y$ of birational models over $X$, we have $M_{Y'} = f^* (M_Y)$. Ditto for \emph{b-Cartier} b-divisors. \end{defn} \begin{defn}[\protect{Generalised polarised pair, \cite[Sect.~2.13]{Bir16a}, \cite[Def.~1.4]{MR3502099}}]\label{def:gpp} Let $Z$ be a variety. A \emph{generalised polarised pair over $Z$} is a tuple consisting of the following data: \begin{enumerate} \item a normal variety $X$ equipped with a projective morphism $X → Z$, \item an effective $ℝ$-divisor $B ∈ ℝ\Div(X)$, and \item a b-$ℝ$-Cartier b-divisor over $X$ represented as $(φ: X' → X, M')$, where $M' ∈ ℝ\Div(X')$ is nef over $Z$, and where $K_X + B + φ_* M'$ is $ℝ$-Cartier.
\end{enumerate} \end{defn} \begin{notation}[Generalised polarised pair] In the setup of Definition~\ref{def:gpp}, we usually write $M := φ_* M'$ and say that $(X, B+M)$ is a generalised pair with data $X' \overset{φ}{→} X → Z$ and $M'$. In contexts where $Z$ is not relevant, we usually drop it from the notation: in this case one can just assume $X → Z$ is the identity. When $Z$ is a point, we also drop it, but say that the pair is projective. \end{notation} \begin{obs} Following \cite[p.~286]{MR3502099} we remark that Definition~\ref{def:gpp} is flexible with respect to $X'$ and $M'$. To be more precise, if $g : X'' → X'$ is a projective birational morphism from a normal variety, then there is no harm in replacing $X'$ with $X''$ and replacing $M'$ with $g^* M'$. \end{obs} \subsection{Singularities of generalised pairs} All notions introduced in Section~\ref{sec:singofpairs} have analogues in the setting of generalised pairs. Again, we cover only the most basic definition here. \begin{defn}[Generalised log discrepancy, singularity classes] Consider a generalised polarised pair $(X, B+M)$ with data $X' \overset{φ}{→} X → Z$ and $M'$, where $φ$ is a log resolution of $(X, B)$. Then, there exists a uniquely determined divisor $B'$ on $X'$ such that $$ K_{X'} + B' + M' = φ^* (K_X + B + M). $$ If $D ∈ \Div(X')$ is any prime divisor, the \emph{generalised log discrepancy} is defined to be $$ a_{\log}(D, X, B+M) := 1 - \mult_D B'. $$ As before, we define the \emph{generalised total log discrepancy} $a_{\log}(X, B+M)$ by taking the infimum over all $D$ and all resolutions. In analogy to the definitions of Table~\ref{tab:x1}, we say that the generalised polarised pair is \emph{generalised lc} if $a_{\log}(X, B+M) ≥ 0$. Ditto for all the other definitions. \end{defn} \subsection{Example: Fibrations and the canonical bundle formula} \label{ssec:gpfib} We discuss a setting where generalised pairs appear naturally.
Let $Y$ be a normal, projective variety, and let $f : Y → X$ be a fibration: the space $X$ is projective, normal and of positive dimension, the morphism $f$ is surjective with connected fibres. Also, assume that $K_Y$ is $ℚ$-linearly equivalent to zero over $X$, so that there exists $L_X ∈ ℚ\Div(X)$ with $K_Y \sim_ℚ f^* L_X$. Ideally, one might hope that it would be possible to choose $L_X = K_X$, but this almost always fails --- compare Kodaira's formula for the canonical bundle of an elliptic fibration, \cite[Sect.~V.12]{HBPV}. To fix this issue, we define a first correction term $B ∈ ℚ\Div(X)$ as $$ B := \sum_{\mathclap{\substack{D ∈ \Div(X)\\\text{prime}}}} (1-t_D)·D \quad \text{where} \quad t_D := \lct° \bigl(Y,\, 0,\, f^*D \bigr). $$ The symbol $\lct°$ denotes a variant of the lc threshold introduced in Definition~\ref{def:lct}, which measures the singularities of $\bigl(Y, f^*D \bigr)$ only over the generic point of $D$. Since $X$ is smooth in codimension one, this also solves the problem of defining $f^* D$. Finally, one chooses $M ∈ ℚ\Div(X)$ such that $K_X + B + M$ is $ℚ$-Cartier and such that the desired $ℚ$-linear equivalence holds, $$ K_Y \sim_ℚ f^* ( K_X + B + M ). $$ The divisor $B$ is usually called the ``discriminant part'' of the correction term. It detects singularities of the fibration, such as multiple or otherwise singular fibres, over codimension one points of $X$. The divisor $M$ is called the ``moduli part''. It is harder to describe. While we have defined it only up to $ℚ$-linear equivalence, a more involved construction can be used to define it as an honest divisor. \begin{com*} Conjecturally, the moduli part carries information on the birational variation of the fibres of $f$, \cite{MR1646046}. We refer to \cite{MR2359346} and to the introduction of the recent research paper \cite{FL18} for an overview, but see also \cite{MR3329677}.
\end{com*} \subsubsection{Behaviour under birational modifications} We ask how the moduli part of the correction term behaves under birational modification. To this end, let $φ : X' → X$ be a birational morphism of normal, projective varieties. Choosing a resolution $Y'$ of $Y ⨯_X X'$, we find a diagram as follows, $$ \xymatrix{% Y' \ar[rr]^{Φ \text{, birational}} \ar[d]_{f'} && Y \ar[d]^f \\ X' \ar[rr]_{φ \text{, birational}} && X. } $$ Set $Δ_{Y'} := Φ^* K_Y - K_{Y'}$. Generalising the definition of $\lct°$ a little to allow for negative coefficients in $Δ_{Y'}$, one can then define $B'$ similarly to the construction above, $$ B' := \sum_{\mathclap{\substack{D ∈ \Div(X')\\\text{prime}}}} (1-t'_D)·D \quad \text{where} \quad t'_D := \lct° \bigl(Y',\, Δ_{Y'},\, (f')^*D \bigr). $$ Finally, one may then choose $M' ∈ ℚ\Div(X')$ such that \begin{align*} K_{Y'} + Δ_{Y'} & \sim_ℚ (f')^* (K_{X'} + B' + M' ), \\ K_{X'} + B' + M' & = φ^*(K_X+B+M) \end{align*} and $B = φ_* B'$ as well as $M = φ_* M'$. \subsubsection{Relation to generalised pairs} Now assume that $Y$ is lc. The divisor $B$ will then be effective. However, much more is true: after passing to a certain birational model $X'$ of $X$, the divisor $M_{X'}$ is nef and for any higher birational model $X'' → X'$, the induced $M_{X''}$ on $X''$ is the pullback of $M_{X'}$; see \cite{MR1646046, MR2153078, MR2359346}, summarised in \cite[Thm.~3.6]{Bir16a}. In other words, passing to a sufficiently high birational model $X'$ of $X$, the moduli parts $M'$ define a b-$ℝ$-Cartier b-divisor. Moreover, this b-divisor is b-nef. We obtain a generalised polarised pair $(X, B+M)$ with data $X' \overset{φ}{→} X → \Spec ℂ$ and $M'$. This generalised pair is generalised lc by definition. \begin{com*} A famous conjecture of Prokhorov and Shokurov \cite[Conj.~7.13]{MR2448282} asserts that the moduli divisor $M_{X''}$ is semiample on any sufficiently high birational model $X''$ of $X$.
More precisely, it is expected that a number $m$ exists that depends only on the general fibre of $f$ such that all divisors $m·M_{X''}$ are basepoint free. If this conjecture were solved, it is conceivable that Birkar's work could be rewritten in a manner that avoids the notion of generalised pairs. \end{com*} \begin{rem}[Outlook] The construction outlined in this section is used in the proof of ``Boundedness of complements'', as sketched in Section~\ref{ssec:BC} below. It generalises fairly directly to pairs $(Y, Δ_Y)$, and even to tuples where $Δ_Y$ is not necessarily effective, \cite[Sect.~3.4]{Bir16a}. \end{rem} \svnid{$Id: 04-boundedness-of-complements.tex 82 2019-01-08 08:34:53Z kebekus $} \section{Boundedness of complements} \label{sec:bcomp} \subsection{Statement of result} One of the central concepts in Birkar's papers \cite{Bir16a, Bir16b} is that of a \emph{complement}. The notion of a ``complement'' is an ingenious concept of Shokurov that was introduced in his investigation of threefold flips, \cite[Sect.~5]{zbMATH00146455}. We recall the definition in brief. \begin{defn}[\protect{Complement, \cite[Sect.~2.18]{Bir16a}}]\label{defn:comple} Let $(X, B)$ be a projective pair and $m ∈ ℕ$. An \emph{$m$-complement of $K_X + B$} is a $ℚ$-divisor $B^+$ with the following properties. \begin{enumerate} \item\label{il:c1} The tuple $(X, B^+)$ is an lc pair. \item\label{il:c2} The divisor $m·(K_X + B^+)$ is linearly equivalent to $0$. In particular, $m·B^+$ is integral. \item\label{il:c3} We have $m·B^+ ≥ m·⌊B⌋ +⌊(m+1)·\{B\}⌋$. \end{enumerate} \end{defn} \begin{rem}[Complements give sections]\label{rem:3-2} Setting as in Definition~\ref{defn:comple}. If $m$ can be chosen such that $m·⌊B⌋ +⌊(m+1)·\{B\}⌋ ≥ m·B$, then Item~\ref{il:c2} guarantees that $-m·(K_X+B)$ is linearly equivalent to the effective divisor $m·(B^+-B)$. In particular, the sheaf $𝒪_X\bigl(-m·(K_X+B)\bigr)$ admits a global section.
\end{rem} \begin{rem} In view of Item~\ref{il:c2}, Shokurov considers complements as divisors that make the lc pair $(X, B^+)$ ``Calabi-Yau'', hence ``flat''. \end{rem} The following result, which asserts the existence of complements with bounded $m$, is one of the core results in Birkar's paper \cite{Bir16a}. A proof of Theorem~\ref{thm:boundOfCompl} is sketched in Section~\vref{ssec:BC}. \begin{thm}[\protect{Boundedness of complements, \cite[Thm.~1.7]{Bir16a}}]\label{thm:boundOfCompl} Given $d ∈ ℕ$ and a finite set $\mathcal{R} ⊂ [0,1] ∩ ℚ$, there exists $m ∈ ℕ$ with the following property. If $(X,B)$ is any log canonical, projective pair, where \begin{enumerate} \item\label{il:r1} $X$ is of Fano type and $\dim X = d$, \item the coefficients of $B$ are of the form $\frac{l-r}{l}$, for $r ∈ \mathcal{R}$ and $l ∈ ℕ$, \item $-(K_X+B)$ is nef, \end{enumerate} then there exists an $m$-complement $B^+$ of $K_X+B$ that satisfies $B^+ ≥ B$. The divisor $B^+$ is also an $(m·l)$-complement, for every $l ∈ ℕ$. \end{thm} \begin{rem}[Complements give sections]\label{rem:3-4} Given a pair $(X,B)$ as in Theorem~\ref{thm:boundOfCompl} and a number $l ∈ ℕ$ such that $(ml)·B$ is integral, we have $ml·⌊B⌋ +⌊(ml+1)·\{B\}⌋ ≥ ml·B$, and Remark~\ref{rem:3-2} implies that $h⁰ \bigl( X,\, 𝒪_X(-ml·(K_X+B)) \bigr) > 0$. \end{rem} \subsection{Idea of application} We aim to show Theorem~\ref{thm:BAB}: under suitable assumptions on the singularities, the family of Fano varieties is bounded. The proof relies on the following boundedness criterion of Hacon and Xu that we quote without proof (but see Sections~\ref{ssec:1-1-1v2} and \ref{ssec:1-1-2v2} for a brief discussion). Recall that a set of real numbers satisfies the \emph{descending chain condition (DCC)} if it contains no infinite, strictly descending sequence.
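To make the DCC condition concrete, consider the standard example $$ I := \bigl\{ \textstyle{\frac{n-1}{n}} \:\bigl|\: n ∈ ℕ \bigr\} ∪ \{1\} = \bigl\{ 0,\, \textstyle{\frac{1}{2}},\, \textstyle{\frac{2}{3}},\, \textstyle{\frac{3}{4}},\, …,\, 1 \bigr\}. $$ This set satisfies the DCC: for any $a < 1$, only finitely many elements of $I$ are smaller than $a$, so a strictly descending sequence in $I$ is necessarily finite. By contrast, the set $\{ \frac{1}{n} \:|\: n ∈ ℕ \}$ does not satisfy the DCC, as witnessed by the infinite descending sequence $1 > \frac{1}{2} > \frac{1}{3} > ⋯$.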
\begin{thm}[\protect{Boundedness criterion, \cite[Thm.~1.3]{MR3507257}}]\label{thm:boundednessCriterion} Given $d ∈ ℕ$ and a $\DCC$ set $I ⊂ [0,1] ∩ ℚ$, let $\mathcal{Y}_{d,I}$ be the family of pairs $(X,B)$ such that the following holds true. \begin{enumerate} \item The pair $(X, B)$ is projective, klt, and of dimension $\dim_ℂ X = d$. \item The coefficients of $B$ are contained in $I$. The divisor $B$ is big and $K_X + B \sim_ℚ 0$. \end{enumerate} Then, the family $\mathcal{Y}_{d,I}$ is bounded. \qed \end{thm} With the boundedness criterion in place, the following observation relates ``boundedness of complements'' to ``boundedness of Fanos'' and explains what pieces are missing in order to obtain a full proof. \begin{obs}\label{obs:1} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, Theorem~\ref{thm:boundOfCompl} gives a number $m ∈ ℕ$ such that every $ε$-lc Fano variety $X$ with $-K_X$ nef admits an effective complement $B^+$ of $K_X = K_X+0$, with coefficients in the set $\{\frac{1}{m}, \frac{2}{m}, …, \frac{m}{m}\}$. If one could in addition always choose $B^+$ so that $(X,B^+)$ were klt rather than merely lc, then Theorem~\ref{thm:boundednessCriterion} would immediately apply to show that the family of $ε$-lc Fano varieties with $-K_X$ nef is bounded. \end{obs} As an important step towards boundedness of $ε$-lc Fanos, we will see in Section~\ref{sec:EB} how the theorem on ``effective birationality'' together with Theorem~\ref{thm:boundednessCriterion} and Observation~\ref{obs:1} can be used to find a boundedness criterion (=Proposition~\vref{prop:4.3a}) that applies to a relevant class of klt, weak Fano varieties. \subsection{Variants and generalisations} \label{sec:vandg} Theorem~\ref{thm:boundOfCompl} is in fact part of a much larger package, including boundedness of complements in the relative setting, \cite[Thm.~1.8]{Bir16a}, and boundedness of complements for generalised polarised pairs, \cite[Thm.~1.10]{Bir16a}.
To keep this survey reasonably short, we do not discuss these results here, even though they are of independent interest, and play a role in the proofs of Theorems~\ref{thm:boundOfCompl} and \ref{thm:BAB}. \subsection{Idea of proof for Theorem~\ref*{thm:boundOfCompl}} \label{ssec:BC} We sketch a proof of ``boundedness of complements'', following \cite[p.~6ff]{Bir16a} in broad strokes, and filling in some details now and then. In essence, the proof works by induction over the dimension, so assume that $d$ is given and that everything has already been shown for varieties of lower dimension. \subsubsection*{Simplification} Theorem~\ref{thm:boundOfCompl} considers a finite set $\mathcal R ⊂ [0,1] ∩ ℚ$, and log canonical pairs $(X,B)$, where the coefficients of $B$ are contained in the set $$ Φ(\mathcal R) := \bigl\{ \textstyle{\frac{l-r}{l}} \,|\, r ∈ \mathcal{R} \text{ and } l ∈ ℕ \bigr\}. $$ The set $Φ(\mathcal R)$ is infinite, and has $1 ∈ ℚ$ as its only accumulation point. Birkar shows that it suffices to treat the case where the coefficient set is finite. To this end, he constructs in \cite[Prop.~2.49 and Constr.~6.13]{Bir16a} a number $ε' ≪ 1$ and shows that it suffices to consider pairs with coefficients in the finite set $\bigl(Φ(\mathcal R) ∩ [0,1-ε']\bigr) ∪ \{1\}$. In fact, given any $(X,B)$, he considers the divisor $B'$ obtained by replacing those coefficients of $B$ that lie in the range $(1-ε',1)$ with $1$. Next, he constructs a birational model $(X'', B'')$ of $(X, B')$ that satisfies all assumptions of Theorem~\ref{thm:boundOfCompl}. His construction guarantees that finding an $n$-complement for $(X,B)$ is equivalent to finding an $n$-complement for $(X'', B'')$. Among other things, the proof involves carefully constructed runs of the Minimal Model Programme, Hacon-McKernan-Xu's local and global ACC for log canonical thresholds \cite[Thms.~1.1 and 1.5]{MR3224718}, and the extension of these results to generalised pairs \cite[Thm.~1.5 and 1.6]{MR3502099}.
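For concreteness, consider the simplest case $\mathcal R = \{1\}$: here the coefficient set above is $$ Φ(\{1\}) = \bigl\{ \textstyle{\frac{l-1}{l}} \:\bigl|\: l ∈ ℕ \bigr\} = \bigl\{ 0,\, \textstyle{\frac{1}{2}},\, \textstyle{\frac{2}{3}},\, \textstyle{\frac{3}{4}},\, … \bigr\}, $$ the set of ``standard'' coefficients, with $1$ as its only accumulation point. The truncation described above replaces this infinite set by the finite set $\bigl(Φ(\{1\}) ∩ [0,1-ε']\bigr) ∪ \{1\}$, at the price of rounding all coefficients in the interval $(1-ε',1)$ up to $1$.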
\begin{rem} Recall from Remark~\ref{rem:MoriDream} that Assumption~\ref{il:r1} (``$X$ is of Fano type'') allows us to run Minimal Model Programmes on arbitrary divisors. \end{rem} Along similar lines, Birkar is able to modify $(X'', B'')$ by further birational transformations, and eventually proves that it suffices to show boundedness of complements for pairs that satisfy the following additional assumptions. \begin{ass} The coefficient set of $(X, B)$ is contained in $\mathcal R$ rather than in $Φ(\mathcal R)$, and one of the following holds true. \begin{enumerate} \item\label{il:sc1} The divisor $-(K_X+B)$ is nef and big, and $B$ has a component $S$ with coefficient $1$ that is of Fano type. \item\label{il:sc2} There exists a fibration $f \colon X → T$, and $K_X+B\equiv 0$ holds along that fibration. \item\label{il:sc3} The pair $(X,B)$ is exceptional. \end{enumerate} \end{ass} \begin{com*} The main distinction is between Case~\ref{il:sc3} and Case~\ref{il:sc1}. In fact, if $(X,B)$ is not exceptional, recall from Definition~\ref{def:exceptional} that there exists an effective $P ∈ ℝ\Div(X)$ such that $K_X + B + P \sim_{ℝ} 0$ and such that $(X, B+P)$ is \emph{not} klt. This allows us to find a birational model whose boundary contains a divisor with multiplicity one. Case~\ref{il:sc2} comes up if the runs of the Minimal Model Programmes used in the construction of birational models terminate with a Mori fibre space. \end{com*} The three cases \ref{il:sc1}--\ref{il:sc3} require very different inductive treatments. \subsubsection*{Case~\ref{il:sc1}} We consider only the simple case where $S = ⌊ B ⌋$ is a normal prime divisor, where $(X,B)$ is plt near $S$ and where $-(K_X+B)$ is ample. Setting $B_S := (K_X+B)|_S - K_S$, the coefficients of $B_S$ are contained in a finite set $\mathcal R'$ of rational numbers that depends only on $\mathcal R$ and on $d$.
In summary, the pair $(S, B_S)$ reproduces the assumptions of Theorem~\ref{thm:boundOfCompl}, and by induction we obtain a number $n ∈ ℕ$ that depends only on $\mathcal R$ and $d$, such that \begin{enumerate} \item\label{il:i1} the divisor $n·B_S$ is integral, and \item\label{il:i2} there exists an $n$-complement $B^+_S$ of $K_S+B_S$. \end{enumerate} Following \cite[Prop.~6.7]{Bir16a}, we aim to extend $B^+_S$ from $S$ to a complement $B^+$ of $K_X+B$ on $X$. As we saw in Remark~\ref{rem:3-4}, Item~\ref{il:i1} guarantees that $n·(B^+_S-B_S)$ is effective, so that the complement $B^+_S$ gives rise to a section in $$ H⁰ \bigl( S,\, n·(B^+_S-B_S) \bigr) = H⁰ \bigl( S,\, -n·(K_S+B_S) \bigr). $$ But then, looking at the cohomology of the standard ideal sheaf sequence, $$ H⁰ \bigl( X,\, -n·(K_X+B) \bigr) → \underbrace{H⁰ \bigl( S,\, -n·(K_X+B)|_S \bigr)}_{\not = 0 \text{ by Rem.~\ref{rem:3-4}}} → \underbrace{H¹ \bigl( X,\, -n·(K_X+B) - S \bigr) }_{= 0 \text{ by Kawamata-Viehweg vanishing}} $$ we find that the section extends to $X$ and defines an associated divisor $B^+ ∈ \lvert-(K_X+B)\rvert_ℚ$. Using the connectedness principle for non-klt centres\footnote{For generalised pairs, this is \cite[Lem.~2.14]{Bir16a}}, one argues that $B^+$ is the desired complement. \subsubsection*{Case~\ref{il:sc2}} \label{ssect:sc2} Given a fibration $f: X → T$, we apply the construction of Section~\ref{ssec:gpfib}, in order to equip the base variety $T$ with the structure of a generalised polarised pair $(T, B+M)$, with data $T' \overset{φ}{→} T → \Spec ℂ$ and $M'$. Adding to the results explained in Section~\ref{ssec:gpfib}, Birkar shows that the coefficients of $B$ and $M$ are not arbitrary. The coefficients of $B$ are in $Φ(\mathcal S)$ for some fixed finite set $\mathcal S$ of rational numbers that depends only on $\mathcal R$ and $d$. Along similar lines, there exists a bounded number $p ∈ ℕ$ such that $p·M$ is integral.
The plan is now to use induction to find a bounded complement for $K_T + B + M$ and pull it back to $X$. This plan works out well, but requires us to formulate and prove all results pertaining to boundedness of complements in the setting of generalised polarised pairs. All the arguments sketched here continue to work, \emph{mutatis mutandis}, but the level of technical difficulty increases substantially. \subsubsection*{Case~\ref{il:sc3}} There is little that we can say in brief about this case. Still, assume for simplicity that $B=0$ and that $X$ is a Fano variety. If we could show that $X$ belongs to a bounded family, then we would be done. Actually we need something weaker: effective birationality. Assume we have already proved Theorem~\ref{thm:effBir}. Then there is a bounded number $m ∈ ℕ$ such that $|-mK_X|$ defines a birational map. Pick $M ∈ |-mK_X|$ and let $B^+ := \frac{1}{m}·M$. Since $X$ is exceptional, $(X, B^+)$ is automatically klt, hence $B^+$ is an $m$-complement of $K_X$. Although this gives some idea of how one may obtain a bounded complement, in practice we cannot give a complete proof of Theorem~\ref{thm:effBir} before proving Theorem~\ref{thm:boundOfCompl}. Contrary to the exposition of this survey paper, where ``boundedness of complements'' and ``effective birationality'' are treated as if they were separate, the proofs of the two theorems are in fact deeply intertwined, and this is one of the main points where they come together. Many of the results discussed in this overview (``Bound on anti-canonical volumes'', ``Bound on lc thresholds'') have separate proofs in the exceptional case. \svnid{$Id: 05-effective-birationality.tex 82 2019-01-08 08:34:53Z kebekus $} \section{Effective birationality} \label{sec:EB} \subsection{Statement of result} The second main ingredient in Birkar's proof of boundedness is the following result. A proof is sketched in Section~\vref{ssec:BC}.
\begin{thm}[\protect{Effective birationality, \cite[Thm.~1.2]{Bir16a}}]\label{thm:effBir} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, there exists $m ∈ ℕ$ with the following property. If $X$ is any $ε$-lc weak Fano variety of dimension $d$, then $\lvert -m·K_X \rvert$ defines a birational map. \end{thm} \begin{rem} The divisors $m·K_X$ in Theorem~\ref{thm:effBir} need not be Cartier. The linear system $\lvert -m·K_X \rvert$ is the space of effective Weil divisors on $X$ that are linearly equivalent to $-m·K_X$. \end{rem} \subsection{Idea of application} \label{ssec:EBA} In the framework of \cite{Bir16a}, effective birationality is used to improve the boundedness criterion spelled out in Theorem~\ref{thm:boundednessCriterion} above. \begin{prop}[\protect{Boundedness criterion, \cite[Prop.~7.13]{Bir16a}}]\label{prop:4.3a} Let $d, v ∈ ℕ$ and let $(t_ℓ)_{ℓ ∈ ℕ}$ be a sequence of positive real numbers. Let $\mathcal{X}$ be the family of projective varieties $X$ with the following properties. \begin{enumerate} \item The variety $X$ is a klt weak Fano variety of dimension $d$. \item\label{il:x3} The volume of the anticanonical class is bounded, $\vol(-K_X) ≤ v$. \item\label{il:x4} For every $ℓ ∈ ℕ$ and every $L ∈ \lvert -ℓ·K_X \rvert$, the pair $(X, t_ℓ·L)$ is klt. \end{enumerate} Then, $\mathcal{X}$ is a bounded family. \end{prop} \begin{rem} The formulation of Proposition~\ref{prop:4.3a} is meant to illustrate the application of Theorem~\ref{thm:effBir} to the boundedness problem. It is a simplified version of Birkar's formulation and defies the logic of his work. While we present Proposition~\ref{prop:4.3a} as a corollary to Theorem~\ref{thm:effBir}, and to all the results mentioned in Section~\ref{sec:bcomp}, Birkar uses \cite[Prop.~7.13]{Bir16a} as one step in the inductive proof of ``boundedness of complements'' and ``effective birationality''.
That requires him to explicitly list partial cases of ``boundedness of complements'' and ``effective birationality'' as assumptions to the proposition, and makes the formulation more involved. \end{rem} \begin{rem} Proposition~\ref{prop:4.3a} reduces the boundedness problem to solving the following two problems. \begin{itemize} \item Boundedness of volumes, as required in \ref{il:x3}. This is covered in the subsequent Section~\ref{sec:volumes}. \item Existence of numbers $t_ℓ$, as required in \ref{il:x4}. This amounts to bounding ``lc thresholds'' and is covered in Section~\ref{sec:lcthres}. \end{itemize} \end{rem} To prove Proposition~\ref{prop:4.3a}, Birkar uses effective birationality in the following form, as a log birational boundedness result. \begin{prop}[\protect{Log birational boundedness of certain pairs, \cite[Prop.~4.4]{Bir16a}}]\label{thm:lbb} Given $d,v ∈ ℕ$ and $ε ∈ ℝ^+$. Then, there exists $c ∈ ℝ^+$ and a bounded family $\mathcal P$ of couples with the following property. If $X$ is a normal projective variety of dimension $d$ and if $B ∈ ℝ\Div(X)$ and $M ∈ ℚ\Div(X)$ are divisors such that the following holds, \begin{enumerate} \item the divisor $B$ is effective, with coefficients in $\{0\} ∪ [ε,∞)$, \item the divisor $M$ is effective, nef and $|M|$ defines a birational map, \item the difference $M-(K_X+B)$ is pseudo-effective, \item the volume of $M$ is bounded, $\vol(M) < v$, \item if $D$ is any component of $M$, then $\mult_D (B+M) ≥ 1$, \end{enumerate} then there exists a log smooth couple $(X', Σ) ∈ \mathcal P$, a rational map $X' \dasharrow X$ and a resolution of singularities $r : \widetilde{X} → X$, with the following properties. \begin{enumerate} \item\label{il:y1} The divisor $Σ$ contains the birational transform of $M$, as well as the exceptional divisor of the birational map $β$. \item\label{il:y2} The movable part $A_{\widetilde{X}}$ of $r^* M$ is basepoint free.
\item\label{il:y3} If $\widetilde{X}'$ is any resolution of $X$ that factors via $X'$ and $\widetilde{X}$, $$ \xymatrix{ \widetilde{X}' \ar[d]_{s\text{, resolution}} \ar[rr]^{\widetilde{β}\text{, birational}} && \widetilde{X} \ar[d]^{r\text{, resolution}} \\ X' \ar@{-->}[rr]_{β\text{, birational}} && X } $$ then the coefficients of the $ℚ$-divisor $s_* (r ◦ \widetilde{β})^* M$ are at most $c$ and $\widetilde{β}^* A_{\widetilde{X}}$ is linearly equivalent to zero relative to $X'$. \end{enumerate} \end{prop} \begin{proof}[\protect{Sketch of proof for Proposition~\ref{thm:lbb}, following \cite[p.~42]{Bir16a}}] Since $|M|$ defines a birational map, there exists a resolution $r : \widetilde{X} → X$ such that $r^* M$ decomposes as the sum of a base point free movable part $A_{\widetilde{X}}$ and fixed part $R_{\widetilde{X}}$. The contraction $X → X''$ defined by $A_{\widetilde{X}}$ is birational. Since $\vol(M)$ is bounded, the varieties $X''$ obtained in this way are all members of one bounded family $\mathcal P'$. The family $\mathcal P'$ is however not yet the desired family $\mathcal P$, and the varieties in $\mathcal P'$ are not yet equipped with an appropriate boundary. To this end, one needs to invoke a criterion of Hacon-McKernan-Xu for ``log birationally boundedness'', \cite[Lem.~2.4.2(4)]{MR3034294}, and take an appropriate resolution of the elements in $\mathcal P'$. \end{proof} \begin{proof}[\protect{Sketch of proof for Proposition~\ref{prop:4.3a}, following \cite[p.~80]{Bir16a}}] Applying Theorems~\ref{thm:boundOfCompl} (``Boundedness of complements'') and \ref{thm:effBir} (``Effective birationality''), we find a number $m ∈ ℕ$ such that every $X ∈ \mathcal{X}$ admits an $m$-complement for $K_X$ and that $\lvert -m·K_X \rvert$ defines a birational map. If $m$-complements $B^+$ of $K_X$ could always be chosen such that $(X,B^+)$ were klt, we have seen in Observation~\ref{obs:1} that $\mathcal{X}$ is bounded. 
However, Theorem~\ref{thm:boundOfCompl} guarantees only the existence of an $m$-complement $B^+$ of $K_X$ where $(X,B^+)$ is lc. Using the bounded family $\mathcal P$ obtained when applying Proposition~\ref{thm:lbb} with $M = -m·K_X$ and $B = 0$, we aim to find a universal constant $ℓ$ and a finite set $\mathcal{R}$, and then perturb any given $(X,B^+)$ in order to find a boundary $B^{++}$ with coefficients in $\mathcal{R}$ that is $ℚ$-linearly equivalent to $-K_X$ and makes $(X, B^{++})$ klt. Boundedness will then again follow from Theorem~\ref{thm:boundednessCriterion}. To spell out a few more details of the proof, use boundedness of the family $\mathcal P$ to infer the existence of a universal constant $ℓ$ with the following property. \begin{quote} If $(X', Σ) ∈ \mathcal P$ and if $A_{X'} ∈ \Div(X')$ is contained in $Σ$ with coefficients bounded by $c$, and if $|A_{X'}|$ is basepoint free and defines a birational morphism, then there exists $G_{X'} ∈ |ℓ·A_{X'}|$ whose support contains $Σ$. \end{quote} Now assume that one $X ∈ \mathcal{X}$ is given. It suffices to consider the case where $X$ is $ℚ$-factorial and admits an $m$-complement of the form $B^+ = \frac{1}{m}·M$, for general $M ∈ \lvert -m·K_X \rvert$. To make use of $ℓ$, consider a diagram as discussed in Item~\ref{il:y3} of Proposition~\ref{thm:lbb} above and decompose $r^*M = A_{\widetilde{X}} + R_{\widetilde{X}}$ into its moving and its fixed part. Write $A := r_* A_{\widetilde{X}}$ and $R := r_* R_{\widetilde{X}}$. Item~\ref{il:y1} of Proposition~\ref{thm:lbb} implies that the divisor $A_{X'} := s_* \widetilde{β}^* A_{\widetilde{X}}$ is then contained in $Σ$, and Item~\ref{il:y3} asserts that it is basepoint free and defines a birational morphism. So, we find $G_{X'} ∈ |ℓ·A_{X'}|$ as above. Writing $G := r_* \widetilde{β}_*s^* G_{X'}$, we find that $G+ℓ·R ∈ |- mℓ·K_X|$, so that $(X, t_{mℓ} G)$ is klt by assumption. We may assume that $t_{mℓ}$ is rational and $t_{mℓ} < \frac{1}{mℓ}$.
If $(X, \frac{1}{mℓ}(G+ℓ·R))$ is lc, then set $B' := \frac{1}{mℓ}(G+ℓ·R)$. Otherwise, one needs to use the lower-dimensional versions of the variants and generalisations of boundedness of complements that we discussed in Section~\ref{sec:vandg} above. To be more precise, using \begin{enumerate} \item boundedness of complements for generalised polarised pairs for varieties of dimension $≤ d-1$, and \item boundedness of complements in the relative setting for varieties of dimension $d$, \end{enumerate} one can always find a universal number $n$ and a divisor $B' ≥ t_{mℓ}·(G+ℓ·R)$ where $(X,B')$ is lc and $n·(K_X+B') \sim 0$. Finally, set $$ B^{++} := \frac{1}{2}·B^+ + \frac{t}{2m}·A - \frac{t}{2mℓ}·G + \frac{1}{2}·B' $$ and then show by direct computation that all required properties hold. \end{proof} \subsection{Preparation for the proof of Theorem~\ref*{thm:effBir}} We prepare for the proof with the following proposition. In essence, it asserts that effective divisors with ``degree'' bounded from above cannot have too small lc thresholds, under appropriate assumptions. Since this proposition may look plausible, we do not go into details of the proof. Further below, Proposition~\ref{prop:lctbd} gives a substantially stronger result whose proof is sketched in some detail. \begin{prop}[\protect{Singularities in bounded families, \cite[Prop.~4.2]{Bir16a}}]\label{p-non-term-places} Given $ε' ∈ ℝ^+$ and given a bounded family $\mathcal P$ of couples, there exists a number $δ ∈ ℝ^{>0}$ such that the following holds. Given the following data, \begin{enumerate} \item an $ε'$-lc, projective pair $(\widehat{G}, \widehat{B})$, \item a reduced divisor $T ∈ \Div(\widehat{G})$ such that $\bigl( \widehat{G}, \supp (\widehat{B}+T) \bigr) ∈ \mathcal P$, and \item an $ℝ$-divisor $\widehat{N}$ whose support is contained in $T$, and whose coefficients have absolute values $≤ δ$, \end{enumerate} then $(\widehat{G}, \widehat{B}+\widehat{L})$ is klt, for all $\widehat{L} ∈ |\widehat{N}|_ℝ$.
\qed \end{prop} \subsection{Sketch of proof of Theorem~\ref*{thm:effBir}} Assume that numbers $d$ and $ε$ are given. Given an $ε$-lc Fano variety $X$ of dimension $d$, we will be interested in the following two main invariants, \begin{align*} m_X & := \min \{ \: m' ∈ ℕ \,|\, \text{the linear system } \lvert -m'·K_X \rvert \text{ defines a birational map }\} \\ n_X & := \min \{ \: n' ∈ ℕ\; \,|\, \vol(-n'·K_X) ≥ (2d)^d \:\} \end{align*} Eventually, it will turn out that both numbers are bounded from above. Our aim here is to bound the numbers $m_X$ by a constant that depends only on $d$ and $ε$. \subsection*{Bounding the quotient} Following \cite{Bir16a}, we will first find an upper bound for the quotients $m_X/n_X$ by a number that depends only on $d$ and $ε$. \subsubsection{Construction of non-klt centres} In the situation at hand, a standard method (``tie breaking'') allows us to find dominating families of non-klt centres; we refer to \cite[Sect.~6]{KollarSingsOfPairs} for an elementary discussion, but see also \cite[Sect.~2.31]{Bir16a}. Given an $ε$-lc Fano variety $X$ of dimension $d$, and using the assumption that $\vol(-n_X·K_X) ≥ (2d)^d$, the following has been shown by Hacon, McKernan and Xu. \begin{claim}[\protect{Dominating family of non-klt centres, \cite[Lem.~7.1]{MR3224718}}]\label{claim:dfnkc} Given any $ε$-lc Fano variety $X$, there exists a dominating family $\mathcal G_X$ of subvarieties in $X$ with the following property. If $(x, y) ∈ X ⨯ X$ is any general tuple of points, then there exists a divisor $Δ ∈ |-(n_X+1)·K_X|_ℝ$ such that the following holds. \begin{enumerate} \item\label{il:o1} The pair $(X,Δ)$ is not klt at $y$. \item\label{il:o2} The pair $(X,Δ)$ is lc near $x$ with a unique non-klt place. The associated non-klt centre is a subvariety of the family $\mathcal G_X$. 
\qed \end{enumerate} \end{claim} Given $X$, we may assume that the members of the families $\mathcal G_X$ all have the same dimension, and that this dimension is minimal among all families of subvarieties that satisfy \ref{il:o1} and \ref{il:o2}. \subsubsection{The case of isolated centres} If $X$ is given such that the members of $\mathcal G_X$ are points, then its elements are isolated non-klt centres. Given $G ∈ \mathcal G_X$, standard vanishing theorems for multiplier ideals will then show surjectivity of the restriction maps $$ H⁰ \bigl( X,\, 𝒪_X(K_X+Δ) \bigr) → \underbrace{H⁰ \bigl( G,\, 𝒪_X(K_X+Δ)|_G \bigr)}_{≅ ℂ}. $$ In particular, we find that $𝒪_X(K_X+Δ) ≅ 𝒪_X(-n_X·K_X)$ has non-trivial sections. Further investigation reveals that a bounded multiple of $-n_X·K_X$ will in fact give a birational map. \subsubsection{Non-isolated centres} It remains to consider varieties $X$ where the members of $\mathcal G_X$ are positive-dimensional. Following \cite[proofs of Prop.~4.6 and 4.8]{Bir16a}, we trace the arguments for that case in \emph{very} rough strokes, ignoring all of the (many) subtleties along the way. The main observation to handle this case is the following volume bound. \begin{claim}[\protect{Volume bound, \cite[Step 3 on p.~48]{Bir16a}}]\label{claim:vB} There exists a number $v ∈ ℝ^+$ that depends only on $d$ and $ε$, such that for all $X$ and all positive-dimensional $G ∈ \mathcal G_X$, we have $\vol(-m_X·K_X|_G) < v$. \end{claim} \begin{proof}[Idea of proof for Claim~\ref{claim:vB}] Going back and looking at the construction of non-klt centres (that is, the detailed proof of Claim~\ref{claim:dfnkc}), one finds that the construction can be improved to provide families of lower-dimensional centres if only the volumes are big enough. But this collides with our assumption that the varieties in $\mathcal G_X$ were of minimal dimension.
\qedhere~(Claim~\ref{claim:vB}) \end{proof} To make use of Claim~\ref{claim:vB}, look at one $X$ where the members of $\mathcal G_X$ are positive-dimensional. Choose a general divisor\footnote{the divisor $M$ should really be taken as the movable part, but we ignore this detail.} $M ∈ \lvert -m_X·K_X \rvert$, and let $(x,y) ∈ X ⨯ X$ be a general tuple of points with associated centre $G ∈ \mathcal G_X$. Since $G$ is a non-klt centre that has a unique place over it, adjunction (and inversion of adjunction) works rather well. Together with the bound on volumes, this allows us to define a natural boundary $\widehat{B}$ on a suitable birational modification $\widehat{G}$ of the normalisation of $G$, such that the following holds.\CounterStep \begin{enumerate} \item The pair $(\widehat{G}, \widehat{B})$ is $ε'$-lc, for some controllable number $ε'$. \item Writing $E$ for the exceptional divisor of $\widehat{G} → G$ and $T := (\widehat{B}+E)_{\red}$, the couple $\bigl(\widehat{G}, \supp (\widehat{B} + T) \bigr)$ belongs to a bounded family $\mathcal P$ that in turn depends only on the numbers $d$ and $ε$. \item The pull-back of $M$ to $\widehat{G}$ has support in $\supp (\widehat{B} + T)$. \end{enumerate} \subsubsection{End of proof} The idea now is of course to apply Proposition~\ref{p-non-term-places}, using the family $\mathcal P$. Arguing by contradiction, we assume that the numbers $m_X/n_X$ are unbounded. We can then find one $X$ where $n_X/m_X$ is really quite small when compared to the number $δ$ given by Proposition~\ref{p-non-term-places}. In fact, taking $\widehat{N}$ as the pull-back of $\frac{n_X}{m_X}·M$, it is possible to guarantee that the coefficients of $\widehat{N}$ are smaller than $δ$. Intertwining this proof with the proof of ``boundedness of complements'', we may use a partial result from that proof, and find $L ∈ |-n_X·K_X|_{ℚ}$, whose coefficients are $≥ 1$. 
Since the points $(x,y) ∈ X ⨯ X$ were chosen generically, the pull-back $\widehat{L}$ of $L$ to $\widehat{G}$ has coefficients $≥ 1$, and can therefore never appear in the boundary of a klt pair. But then, $\widehat{L} ∈ |\widehat{N}|_ℝ$, which contradicts Proposition~\ref{p-non-term-places} and ends the proof. In summary, we were able to bound the quotient $m_X/n_X$ by a constant that depends only on $d$ and $ε$. \qed\mbox{\quad(Boundedness of quotients)} \subsection*{Bounding the numbers $m_X$} Finally, we still need to bound $m_X$. This can be done by arguing that the volumes $\vol(-m_X·K_X)$ are bounded from above, and then applying the same set of ideas as discussed above, using $X$ instead of a birational model $\widehat{G}$ of its subvariety $G$. Since some of the core ideas that go into boundedness of volumes are discussed in more detail in Section~\ref{sec:volumes} below, we do not go into any details here. \qed \svnid{$Id: 06-volumina.tex 79 2019-01-07 13:21:51Z kebekus $} \section{Bounds for volumes} \label{sec:volumes} \subsection{Statement of result} Once Theorem~\ref{thm:BAB} (``Boundedness of Fanos'') is shown, the volumes of anticanonical divisors of $ε$-lc Fano varieties of any given dimension will clearly be bounded. Here, we discuss a weaker result, proving boundedness of volumes for Fanos of dimension $d$, assuming boundedness of Fanos in dimension $d-1$. \begin{thm}[\protect{Bound on volumes, \cite[Thm.~1.6]{Bir16a}}]\label{thm:bvol} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, if the $ε$-lc Fano varieties of dimension $d-1$ form a bounded family, then there is a number $v$ such that $\vol(-K_X) ≤ v$, for all $ε$-lc weak Fano varieties $X$ of dimension $d$. \end{thm} \subsection{Idea of application} We have seen in Section~\ref{ssec:EBA} how to obtain boundedness criteria for families of varieties from boundedness of volumes. This makes Theorem~\ref{thm:bvol} a key step in the inductive proof of Theorem~\ref{thm:BAB}.
\subsection{Idea of proof for boundedness of volumes, following \cite[Sect.~9]{Bir16a}} To illustrate the core idea of proof, we consider only the simplest cases and make numerous simplifying assumptions, no matter how unrealistic. The assumption that $ε$-lc Fano varieties of dimension $d-1$ form a bounded family will be used in the following form. \begin{lem}[\protect{Consequence of boundedness, \cite[Lem.~2.22]{Bir16a}}]\label{lem:5-2} There exists a finite set $I ⊂ ℝ$ with the following property. If $X$ is an $ε$-lc Fano variety of dimension $d-1$, if $r ∈ ℝ^{≥ 1}$ and if $D$ is any non-zero integral divisor on $X$ such that $K_X + r·D \equiv 0$, then $r ∈ I$. \qed \end{lem} We argue by contradiction and assume that there exists a sequence $(X_i)_{i ∈ ℕ}$ of $ε$-lc weak Fanos of dimension $d$ such that the sequence of anticanonical volumes is strictly increasing, with $\lim \vol(-K_{X_i}) = ∞$. For simplicity of the argument, assume that all $X_i$ are Fanos rather than weak Fanos, and that they are $ℚ$-factorial. For the general case, one needs to consider the maps defined by multiples of $-K_X$ and take small $ℚ$-factorialisations. Choose a rational $ε'$ in the interval $(0,ε)$. Using explicit discrepancy computations of boundaries of the form $\frac{1}{N}·B'_i$, for $B'_i ∈ \lvert -N·K_{X_i} \rvert$ general, \cite[Cor.~2.32]{KM98}, we find a decreasing sequence $(a_i)_{i ∈ ℕ}$ of rationals, with $\lim a_i = 0$, and boundaries $B_i ∈ ℚ\Div(X_i)$ with the following properties. \begin{enumerate} \item For each $i$, the divisor $B_i$ is $ℚ$-linearly equivalent to $-a_i·K_{X_i}$. \item The volumes of the $B_i$ are bounded from below, $(2d)^d < \vol(B_i)$. \item\label{il:5-1-3} The pair $(X_i, B_i)$ has total log discrepancy equal to $ε'$. \end{enumerate} Passing to a subsequence, we may assume that $a_i < 1$ for every $i$.
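Before continuing, let us spell out the simple numerology behind the choice of the numbers $a_i$; this is a simplification of Birkar's construction, pretending that $B_i$ is just a general element of the $ℚ$-linear equivalence class of $-a_i·K_{X_i}$. Since volumes scale with the $d$-th power,
$$
\vol(B_i) = \vol(-a_i·K_{X_i}) = a_i^d·\vol(-K_{X_i}),
$$
the assumption that the volumes of the anticanonical divisors tend to infinity allows us to choose $a_i$ with $\lim a_i = 0$ while still keeping $\vol(B_i) > (2d)^d$, as required in the list above.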
Again, discrepancy computations show that this allows us to find sufficiently general, ample $H_i ∈ ℚ\Div(X_i)$ that are $ℚ$-linearly equivalent to $-(1-a_i)·K_{X_i}$ and have the property that $(X_i, B_i+H_i)$ are still $ε'$-lc. Given any index $i$, Item~\ref{il:5-1-3} implies that there exists a prime divisor $D'_i$ on a birational model $X'_i$ that realises the total log discrepancy. For simplicity, consider only the case where one can choose $X_i = X'_i$ for every $i$, and therefore find prime divisors $D_i$ on $X_i$ that appear in $B_i$ with multiplicity $1-ε'$. Without that simplifying assumption one needs to invoke \cite[Cor.~1.4.3]{BCHM10}, in order to replace the variety $X_i$ by a model that ``extracts'' the divisor $D'_i$. In summary, we can write \begin{equation}\label{eq:5-2-4} -K_{X_i} \sim_{ℚ} \frac{1}{a_i}·B_i = \frac{1-ε'}{a_i}·D_i + (\text{effective}). \end{equation} As a next step, recall from Remark~\ref{rem:MoriDream} that the $X_i$ are Mori dream spaces. Given any $i$, we can therefore run the $-D_i$-MMP, which terminates with a Mori fibre space on which the push-forward of $D_i$ is relatively ample. Again, we ignore all technical difficulties and assume that $X_i$ itself is the Mori fibre space, and therefore admits a fibration $X_i → Z_i$ with relative Picard number $ρ(X_i/Z_i) = 1$ such that $D_i$ is relatively ample. Let $F_i ⊆ X_i$ be a general fibre. Adjunction and standard inequalities for discrepancies imply that $F_i$ is again $ε$-lc and Fano. The statement about the relative Picard number implies that any effective divisor on $X_i$ is either trivial or ample on $F_i$. In particular, Equation~\ref{eq:5-2-4} implies that $-K_{F_i} \equiv s_i·D_i|_{F_i}$, where $s_i ≥ \frac{1-ε'}{a_i}$ goes to infinity. If $\dim F_i = d-1$, or more generally if $\dim F_i < d$ for infinitely many indices $i$, this contradicts Lemma~\ref{lem:5-2} and therefore proves Theorem~\ref{thm:bvol}. It remains to consider the case where the $Z_i$ are points.
Birkar's proof in this case is similar in spirit to the argumentation above, but technically much \emph{more} demanding. He creates a covering family of non-klt centres, uses adjunction on these centres and the assumption that $ε$-lc Fano varieties of dimension $d-1$ form a bounded family to obtain a contradiction. \qed \svnid{$Id: 07-lct.tex 79 2019-01-07 13:21:51Z kebekus $} \section{Bounds for lc thresholds} \label{sec:lcthres} The last of Birkar's core results presented here pertains to log canonical thresholds of anti-canonical systems; this is the main result of Birkar's second paper \cite{Bir16b}. It gives a positive answer to a well-known conjecture of Ambro \cite[p.~4419]{MR3556423}. With the notation introduced in Section~\ref{sec:singofpairs}, the result is formulated as follows. \begin{thm}[\protect{Lower bound for lc thresholds, \cite[Thm.~1.4]{Bir16b}}]\label{thm:lct} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, there exists $t ∈ ℝ^+$ with the following property. If $(X,B)$ is any projective $ε$-lc pair of dimension $d$ and if $Δ := -(K_X+B)$ is nef and big, then $\lct \bigl(X,\, B,\, |Δ|_ℝ \bigr) ≥ t$. \end{thm} Though this is not exactly obvious, Theorem~\ref{thm:lct} can be derived from boundedness of $ε$-lc Fanos, Theorem~\ref{thm:BAB}. One of the core ideas in Birkar's paper \cite{Bir16b} is to go the other way and prove Theorem~\ref{thm:lct} using boundedness, but only for \emph{toric} Fano varieties, where the result has been established by Borisov-Borisov in \cite{MR1166957}. \subsection{Idea of application} As pointed out in Section~\ref{ssec:EBA}, bounding lc thresholds from below has immediate applications to the boundedness problem. To illustrate the application, consider the following corollary, which proves Theorem~\ref{thm:BAB} in part. \begin{cor}[Boundedness of $ε$-lc Fanos]\label{cor:bab} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, the family $\mathcal{X}^{\Fano}_{d,ε}$ of $ε$-lc Fanos of dimension $d$ is bounded.
\end{cor} \begin{proof} We aim to apply Proposition~\ref{prop:4.3a} to the family $\mathcal{X}^{\Fano}_{d,ε}$. With Theorem~\ref{thm:bvol} (``Bound on volumes'') in place, it remains to satisfy Condition~\ref{il:x4} of Proposition~\ref{prop:4.3a}: we need a sequence $(t_{ℓ})_{ℓ ∈ ℕ}$ such that the following holds. \begin{quotation} For every $ℓ ∈ ℕ$, for every $X ∈ \mathcal{X}^{\Fano}_{d,ε}$ and every $L ∈ \lvert -ℓ·K_X \rvert$, the pair $(X, t_ℓ·L)$ is klt. \end{quotation} But this is not so hard anymore. Let $t ∈ ℝ^+$ be the number obtained by applying Theorem~\ref{thm:lct}. Given a number $ℓ ∈ ℕ$, a variety $X ∈ \mathcal{X}^{\Fano}_{d,ε}$ and a divisor $L ∈ |-ℓ·K_X|$, observe that $\frac{1}{ℓ}·L ∈ |-K_X|_{ℝ}$ and recall from Remark~\vref{rem:sflct} that $(X, \frac{t}{2ℓ}·L)$ is klt. We can thus set $t_{ℓ} := \frac{t}{2ℓ}$. \end{proof} \subsection{Preparation for the proof of Theorem~\ref*{thm:lct}: $ℝ$-linear systems of bounded degrees} \label{ssec:lctbd} To prepare for the proof of Theorem~\ref{thm:lct}, we begin with a seemingly weaker result that provides bounds for lc thresholds, but only for $ℝ$-linear systems of bounded degrees. This result will be used in Section~\ref{ssec:soft} to prove Theorem~\ref{thm:lct} in an inductive manner. \begin{prop}[\protect{LC thresholds for $ℝ$-linear systems of bounded degrees, \cite[Thm.~1.6]{Bir16b}}]\label{prop:lctbd} Given $d$, $r ∈ ℕ$ and $ε ∈ ℝ^+$, there exists $t ∈ ℝ^+$ with the following property. If $(X,B)$ is any projective, $ε$-lc pair of dimension $d$, if $A ∈ \Div(X)$ is very ample with $A-B$ ample and $[A]^d ≤ r$, then $\lct \bigl(X,\, B,\, |A|_ℝ \bigr) ≥ t$. \end{prop} \begin{rem}\label{rem:6-4} The condition on the intersection number, $[A]^d ≤ r$, implies that $X$ belongs to a bounded family of varieties. More generally, if we choose $A$ general in its linear system, then $(X,A)$ belongs to a bounded family of pairs. \end{rem} The proof of Proposition~\ref{prop:lctbd} is sketched below.
It relies on two core ingredients. Because of their independent interest, we formulate them separately. \begin{setting}\label{set:6-5} Given $d$, $r ∈ ℕ$ and $ε ∈ ℝ^+$, we consider projective, $ε$-lc pairs $(X,B)$ of dimension $d$ where $X$ is $ℚ$-factorial, equipped with the following additional data. \begin{enumerate} \item A very ample divisor $A ∈ \Div(X)$, with $A-B$ ample and $[A]^d ≤ r$. \item An effective divisor $L ∈ ℝ\Div(X)$, with $A-L$ ample. \item A birational morphism $ν : Y → X$ of normal projective varieties, and a prime divisor $T ∈ \Div(Y)$ whose image is a point $x ∈ X$. \end{enumerate} \end{setting} \begin{lem}[\protect{Existence of complements, \cite[Prop.~5.9]{Bir16b}}]\label{lem:A6-6} Given $d$, $r ∈ ℕ$ and $ε ∈ ℝ^+$, assume that Proposition~\ref{prop:lctbd} holds for varieties of dimension $d-1$. Then, there exist integers $n$, $m ∈ ℕ$ and a real number $0 < ε' < ε$, with the following property. Whenever we are in Setting~\ref{set:6-5}, and whenever there exists a number $t < r$ such that \begin{enumerate} \item the pair $(X, B+t·L)$ is $ε'$-lc, and \item the log discrepancy is realised by $T$, that is $a_{\log}(T,X,B+t·L) = ε'$, \end{enumerate} then there exists an effective divisor $Λ ∈ ℚ\Div(X)$ such that \begin{enumerate} \item the divisor $n·Λ$ is integral, \item the tuple $(X,Λ)$ is lc near $x$, and $T$ is an lc place of $(X,Λ)$, and \item the divisor $m·A-Λ$ is ample. \qed \end{enumerate} \end{lem} \begin{com*} Lemma~\ref{lem:A6-6} is another existence-and-boundedness result for complements, very much in the spirit of what we have seen in Section~\ref{sec:bcomp}. The relation to complements is made precise in \cite[Thm.~1.7]{Bir16b}, which is a core ingredient in Birkar's proof. In fact, after some birational modification of $Y$, Birkar finds a divisor $Λ_Y ∈ \Div(Y)$ such that $(Y, Λ_Y)$ is lc near $T$ and such that $n·(K_Y + Λ_Y)$ is linearly equivalent to $0$, relative to $X$ and for some bounded number $n ∈ ℕ$.
As Birkar points out in \cite[p.~16]{Bir18}, one can think of $K_Y + Λ_Y$ as a local-global type of complement. He then takes $Λ$ to be the push-forward of $Λ_Y$ and proves all required properties. \end{com*} \begin{lem}[\protect{Bound on multiplicity at an lc place, \cite[Prop.~5.7]{Bir16b}}]\label{lem:A6-7} Given $d$, $r$ and $n ∈ ℕ$ and $ε ∈ ℝ^+$, assume that Proposition~\ref{prop:lctbd} holds for varieties of dimension $\le d-1$. Then, there exists $q ∈ ℝ^+$, with the following property. Whenever we are in Setting~\ref{set:6-5}, whenever $a(T,X,B)\le 1$, and whenever a divisor $Λ ∈ ℚ\Div(X)$ is given that satisfies the following conditions, \begin{enumerate} \item $Λ$ is effective and $n·Λ$ is integral, \item $A-Λ$ is ample, \item $(X, Λ)$ is lc near $x$, and $T$ is an lc place of $(X,Λ)$, \end{enumerate} then $T$ appears in the divisor $ν^* L$ with multiplicity $\mult_T ν^*L \le q$. \qed \end{lem} \begin{com*} Lemma~\ref{lem:A6-7} is perhaps the core of Birkar's paper \cite{Bir16b}. To begin, one needs to realise that the couples $\bigl( X, \supp(Λ) \bigr)$ that appear in Lemma~\ref{lem:A6-7} come from a bounded family. This allows us to consider common resolutions, and eventually to assume from the outset that $(X, Λ)$ is a log-smooth couple. In particular, $(X, Λ)$ is toroidal, and $T$ can be obtained by a sequence of blow-ups that are toroidal with respect to $(X, Λ)$. Given that toroidal blow-ups are rather well understood, Birkar finds that to bound the multiplicity $\mult_T ν^*L$, it suffices to bound the number of blow-ups involved. Bounding the number of blow-ups is hard, and the next few sentences simplify a very complicated argument to the extreme\footnote{see \cite[p.~16f]{Bir18} and \cite[Sect.~10]{YXu18} for a more realistic account of all that is involved.}.
Birkar establishes a Noether-normalisation theorem, showing that he may replace the couple $(X, Λ)$, which is log-smooth, by a pair of the form $(ℙ^d, \text{union of hyperplanes})$, which is toric rather than toroidal. Better still, applying surgery coming from the Minimal Model Programme, he is then able to replace $Y$ by a toric, Fano, $ε$-lc variety. But the family of such $Y$ is bounded by the classic result of Borisov-Borisov, \cite{MR1166957}, and a bound for the number of blowups follows. \end{com*} \begin{proof}[Sketch of proof for Proposition~\ref{prop:lctbd}] The proof of Proposition~\ref{prop:lctbd} proceeds by induction, so assume that $d$, $r$, and $ε$ are given and that everything was already shown in lower dimensions. Now, given a $d$-dimensional pair $(X, B)$ and a very ample $A ∈ \Div(X)$ as in Proposition~\ref{prop:lctbd}, we aim to apply Lemma~\ref{lem:A6-6} and \ref{lem:A6-7}. This is, however, not immediately possible because $X$ need not be $ℚ$-factorial. We know from minimal model theory that there exists a small $ℚ$-factorialisation, say $X' → X$, but then we need to compare lc thresholds of $X'$ and $X$, and show that the difference is bounded. To this end, recall from Remark~\ref{rem:6-4} that the family of all possible $X$ is bounded, which allows us to construct simultaneous $ℚ$-factorialisations in stratified families, and hence gives the desired bound for the differences. Bottom line: we may assume that $X$ is $ℚ$-factorial. Let $ε'$ be the number given by Lemma~\ref{lem:A6-6}. Next, given any divisor $L ∈ |A|_ℝ$, look at $$ s := \sup \{ s' ∈ ℝ \,|\, (X, B+s'·L) \text{ is $ε'$-lc} \}. $$ Following Remark~\ref{rem:sflct}, we would be done if we could bound $s$ from below, independently of $X$, $B$, $A$ and $L$. To this end, choose a resolution of singularities, $ν : Y → X$ and a prime divisor $T ∈ \Div(Y)$ such that $a_{\log}(T,X,B+s·L) = ε'$. 
For simplicity, we will only consider the case where $ν(T)$ is a point, say $x ∈ X$ --- if $ν(T)$ is not a point, Birkar cuts down with general hyperplanes from $|A|$, uses inversion of adjunction and invokes the induction hypothesis in order to proceed. In summary, we are now in a situation where we may apply Lemma~\ref{lem:A6-6} (``Existence of complements'') to find a divisor $Λ$ and then Lemma~\ref{lem:A6-7} (``Bound on multiplicity at an lc place'') to bound the multiplicity $\mult_T ν^*L$ from above, independently of $X$, $B$, $A$ and $L$. But then, a look at Definition~\ref{not:logdiscrep} (``log discrepancy'') shows that this already gives the desired bound on $s$. \end{proof} \subsection{Preparation for the proof of Theorem~\ref*{thm:lct}: varieties of Picard-number one} The second main ingredient in the proof of Theorem~\ref{thm:lct} is the following result, which essentially proves Theorem~\ref{thm:lct} in one special case. Its proof, which we do not cover in detail, combines all results discussed in the previous Sections~\ref{sec:bcomp}--\ref{sec:volumes}: boundedness of complements, effective birationality and bounds for volumes. \begin{prop}[\protect{Theorem~\ref{thm:lct} in a special case, \cite[Prop.~3.1]{Bir16b}}]\label{p-bnd-lct-global-weak} Given $d ∈ ℕ$ and $ε ∈ ℝ^+$, assume that Proposition~\ref{prop:lctbd} (``LC thresholds for $ℝ$-linear systems of bounded degrees'') holds in dimension $≤ d$ and that Theorem~\ref{thm:BAB} (``Boundedness of $ε$-lc Fanos'') holds in dimension $≤ d-1$. Then, there exists $v ∈ ℝ^+$ such that the following holds. If $X$ is any $ℚ$-factorial, $ε$-lc Fano variety of dimension $d$ and Picard number one, and if $L ∈ ℝ\Div(X)$ is effective with $L \sim_{ℝ} -K_X$, then each coefficient of $L$ is less than or equal to $v$. \qed \end{prop} \subsection{Sketch of proof of Theorem~\ref*{thm:lct}} \label{ssec:soft} Like other statements, Theorem~\ref{thm:lct} is shown using induction over the dimension.
The following key lemma provides the induction step. \begin{lem}[\protect{Implication Proposition~\ref{prop:lctbd} $⇒$ Theorem~\ref{thm:lct}, \cite[Lem.~3.2]{Bir16b}}]\label{l-local-lct-bab-to-global-lct} Given $d ∈ ℕ$, assume that Proposition~\ref{prop:lctbd} (``LC thresholds for $ℝ$-linear systems of bounded degrees'') holds in dimension $≤ d$ and that Theorem~\ref{thm:BAB} (``Boundedness of $ε$-lc Fanos'') holds in dimension $≤ d-1$. Then, Theorem~\ref{thm:lct} (``Lower bound for lc thresholds'') holds in dimension $d$. \end{lem} \begin{proof}[\protect{Sketch of proof following \cite[p.~13f]{Bir16b}}] The first steps in the proof are similar to the proof of Proposition~\ref{prop:lctbd}. Choose any number $ε'∈ (0,ε)$. Given any projective, $d$-dimensional, $ε$-lc pair $(X,B)$ as in Theorem~\ref{thm:lct} and any divisor $L ∈ |Δ|_{ℝ}$, let $s$ be the largest number such that $(X,B+s·L)$ is $ε'$-lc. We need to show that $s$ is bounded from below, away from zero. In particular, we may assume that $s<1$. As in the proof of Proposition~\ref{prop:lctbd}, we may also assume $X$ is $ℚ$-factorial. There is a birational modification $φ : Y → X$ and a prime divisor $T ∈ \Div(Y)$ with log discrepancy \begin{equation}\label{eq:ghjfg} a_{\log}(T,X,B+s·L)=ε'. \end{equation} Techniques of \cite{BCHM10} (``extracting a divisor'') allow us to assume that $φ$ is either the identity, or that the $φ$-exceptional set equals $T$ precisely. The assumption that $X$ is $ℚ$-factorial allows us to pull back divisors. Let $$ B_Y := φ^*(K_X+B)-K_Y \quad\text{and}\quad L_Y := φ^*L. $$ Using the definition of log discrepancy, Definition~\vref{not:logdiscrep}, the assumption that $(X,B)$ is $ε$-lc and Equation~\eqref{eq:ghjfg} can be formulated in terms of divisor multiplicities as $$ \mult_T B_Y ≤ 1-ε \quad\text{and}\quad \mult_T(B_Y+s·L_Y) = 1-ε', $$ hence $\mult_T (s·L_Y) ≥ ε-ε'$. The pair $(Y, B_Y+ s·L_Y)$ is klt and weak log Fano, which implies that $Y$ is of Fano type.
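For the reader's convenience, we spell out the elementary computation behind the multiplicity bound just obtained, since it drives the rest of the argument. Subtracting the two displayed multiplicities gives
$$
\mult_T (s·L_Y) = \mult_T(B_Y+s·L_Y) - \mult_T B_Y ≥ (1-ε')-(1-ε) = ε-ε',
$$
and consequently
$$
\mult_T \bigl( (1-s)·L_Y \bigr) = \frac{1-s}{s}·\mult_T (s·L_Y) ≥ \frac{(1-s)·(ε-ε')}{s}.
$$
The same lower bound persists for the push-forwards of these divisors under the birational surgery performed next, since the divisor $T$ is not contracted there.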
Recalling from Remark~\vref{rem:MoriDream} that $Y$ is thus a Mori dream space, we may run a $(-T)$-Minimal Model Programme and obtain rational maps, $$ \xymatrix{ % Y \ar@{-->}[rrrr]^{α\text{, extr.\ contractions and flips}} &&&& Y' \ar[rrrr]^{β\text{, Mori fibre space}} &&&& Z', } $$ where $-T$ is ample when restricted to general fibres of $β$. We write $B_{Y'} := α_* B_Y$ and $L_{Y'} := α_* L_Y$ and note that $$ -(K_{Y'}+B_{Y'}+s·L_{Y'}) \overset{\text{def.\ of $L$}}{\sim_ℝ} (1-s)L_{Y'} \overset{s < 1}{≥} 0. $$ Moreover, an explicit discrepancy computation along the lines of \cite[Cor.~2.32]{KM98} shows that $(Y', B_{Y'}+s·L_{Y'})$ is $ε'$-lc, because $(Y,B_Y+s·L_Y)$ is $ε'$-lc and because $-(K_Y+B_Y+s·L_Y)$ is semiample. There are two cases now. If $\dim Z' > 0$, then restricting to a general fibre of $Y' → Z'$ and applying Proposition~\ref{prop:lctbd} (``LC thresholds for $ℝ$-linear systems of bounded degrees'') in lower dimension\footnote{or applying Theorem~\ref{thm:BAB} (``Boundedness of $ε$-lc Fanos'')} shows that the coefficients of those components of $(1-s)·L_{Y'}$ that dominate $Z'$ are bounded from above. In particular, $\mult_{T'}(1-s)·L_{Y'}$ is bounded from above. Thus from the inequality $$ \mult_{T'}(1-s)·L_{Y'} ≥ \frac{(1-s)·(ε-ε')}{s}, $$ we deduce that $s$ is bounded from below away from zero: if $\mult_{T'}(1-s)·L_{Y'} ≤ M$, the inequality rearranges to $s ≥ (ε-ε')/(M+ε-ε')$. If $Z'$ is a point, then $Y'$ is a Fano variety with Picard number one. Now $$ -K_{Y'} \sim_ℝ (1-s)·L_{Y'} + B_{Y'} + s·L_{Y'}≥ (1-s)·L_{Y'}, $$ so by Proposition~\ref{p-bnd-lct-global-weak}, $\mult_{T'}(1-s)·L_{Y'}$ is bounded from above, which again gives a lower bound for $s$ as before. \end{proof} \svnid{$Id: 08-jordan.tex 84 2019-04-08 23:01:11Z kebekus $} \section{Application to the Jordan property} \label{sec:jordan} We explain in this section how the boundedness result for Fano varieties applies to the study of birational automorphism groups, and how it can be used to prove the Jordan property.
Several of the core ideas presented here go back to work of Serre, who solved the two-dimensional case \cite[Thm.~5.3]{MR2567402}, but see also \cite[Thm.~3.1]{MR2648675}. If one is only interested in the three-dimensional case, where birational geometry is particularly well-understood, most arguments presented here can be simplified. \subsection{Existence of subgroups with fixed points} If $X$ is any rationally connected variety, Theorem~\ref{thm:jordan} (``Jordan property of Cremona groups'') asks for the existence of finite Abelian subgroups of bounded index inside the finite subgroups of $\Bir(X)$. As we will see in the proof, this is almost equivalent to asking for finite groups of automorphisms that admit fixed points, and boundedness of Fanos is the key tool used to find such groups. The following lemma is the simplest result in this direction. Here, boundedness enters in a particularly transparent way. \begin{lem}[\protect{Fixed points on Fano varieties, \cite[Lem.~4.6]{MR3483470}}]\label{lem:j1} Given $d ∈ ℕ$, there exists a number $j^{\Fano}_d ∈ ℕ$ such that for any $d$-dimensional Fano variety $X$ with canonical singularities and any finite subgroup $G ⊆ \Aut(X)$, there exists a subgroup $F ⊆ G$ of index $|G:F| ≤ j^{\Fano}_d$ acting on $X$ with a fixed point. \end{lem} \begin{rem} To keep notation simple, Lemma~\ref{lem:j1} is formulated for Fanos with canonical singularities, which is the relevant case for our application. In fact, it suffices to consider Fanos that are $ε$-lc. \end{rem} \begin{proof}[Proof of Lemma~\ref{lem:j1}] As before, write $\mathcal{X}^{\Fano}_{d,0}$ for the family of $d$-dimensional Fano varieties with canonical singularities. It follows from boundedness, Theorem~\ref{thm:BAB} or Corollary~\ref{cor:bab}, that there exist numbers $m$, $v ∈ ℕ$ such that the following holds for every $X ∈ \mathcal{X}^{\Fano}_{d,0}$. \begin{enumerate} \item The divisor $-m·K_X$ is Cartier and very ample. \item The self-intersection number of $-m·K_X$ is bounded by $v$.
More precisely, $$ [-m·K_X]^{d} ≤ v. $$ \end{enumerate} Given $X$, observe that the associated line bundles $𝒪_X(-m·K_X)$ are $\Aut(X)$-linearised. Accordingly, there exists a number $N ∈ ℕ$, such that every $X ∈ \mathcal{X}^{\Fano}_{d,0}$ admits an $\Aut(X)$-equivariant embedding $X ↪ ℙ^N$. Let $j^{\Jordan}_{N+1}$ be the number obtained by applying the classical result of Jordan, Theorem~\ref{thm:jordan-lin}, to $\GL_{N+1}(ℂ)$, and set $j^{\Fano}_d := j^{\Jordan}_{N+1}·v$. Now, given any $X ∈ \mathcal{X}^{\Fano}_{d,0}$ and any finite subgroup $G ⊆ \Aut(X)$, the $G$-action extends to $ℙ^N$. The action is thus induced by a representation of a finite linear group $Γ$, say $$ \xymatrix{ % Γ \ar@{^(->}[r] \ar@{->>}[d] & \GL_{N+1}(ℂ) \ar@{->>}[d] \\ G \ar@{^(->}[r] & \PGL_{N+1}(ℂ). \\ } $$ By Theorem~\ref{thm:jordan-lin}, the classic result of Jordan, we find a finite Abelian subgroup $Φ ⊆ Γ$ of index $|Γ:Φ| ≤ j^{\Jordan}_{N+1}$. Since $Φ$ is Abelian, the $Φ$-representation space $ℂ^{N+1}$ is a direct sum of one-dimensional representations. Equivalently, we find $N+1$ linearly independent, $Φ$-invariant, linear hyperplanes $H_i ⊂ ℙ^{N}$. The intersection of suitably chosen $H_i$ with $X$ is then a finite, $Φ$-invariant subset $\{x_1, …, x_r\} ⊂ X$, of cardinality $r ≤ v$. The stabiliser of $x_1 ∈ X$ is a subgroup $Φ_{x_1} ⊂ Φ$ of index $|Φ:Φ_{x_1}| ≤ v$. Taking $F$ as the image of $Φ_{x_1} → G$, we obtain the claim. \end{proof} \begin{rem} The proof of Lemma~\ref{lem:j1} shows that the groups $G$ are close to Abelian. It also gives an estimate for $j^{\Fano}_d$ in terms of the volume bound (``$v$'') and the classical Jordan constant $j^{\Jordan}_{N+1}$. \end{rem} As a next step, we aim to generalise the results of Lemma~\ref{lem:j1} to varieties that are rationally connected, but not necessarily Fano. The following result makes this possible.
\begin{lem}[\protect{Rationally connected subvarieties on different models, \cite[Lem.~3.9]{MR3483470}}]\label{lem:xfgs} Let $X$ be a projective variety with an action of a finite group $G$. Suppose that $X$ is klt, with $Gℚ$-factorial singularities and let $f : X \dasharrow Y$ be a birational map obtained by running a $G$-Minimal Model Programme. Suppose that there exists a subgroup $F ⊂ G$ and an $F$-invariant, rationally connected subvariety $T ⊊ Y$. Then, there exists an $F$-invariant rationally connected subvariety $Z ⊊ X$. \qed \end{lem} Since we are mainly interested in seeing how boundedness applies to birational transformation groups, we will not explain the proof of Lemma~\ref{lem:xfgs} in detail. Instead, we merely list a few of the core ingredients, which all come from minimal model theory and birational geometry. \begin{itemize} \item Hacon-McKernan's solution \cite{HMcK07} to Shokurov's ``rational connectedness conjecture'', which guarantees in essence that the fibres of all morphisms appearing in the MMP are rationally chain connected. \item A fundamental result of Graber-Harris-Starr, \cite{GHS03}, which implies that if $f : X → Y$ is any dominant morphism of proper varieties, where both the target $Y$ and a general fibre are rationally connected, then $X$ is also rationally connected. \item Log-canonical centre techniques, in particular a relative version of Kawamata's subadjunction formula, \cite[Lem.~2.5]{MR3483470}. These results identify general fibres of minimal log-canonical centres under contraction morphisms as rationally connected varieties of Fano type. \end{itemize} \begin{prop}[\protect{Fixed points on rationally connected varieties, \cite[Lem.~4.7]{MR3483470}}]\label{prop:j2} Given $d ∈ ℕ$, there exists a number $j^{rc}_d ∈ ℕ$ such that for any $d$-dimensional, rationally connected projective variety $X$ and any finite subgroup $G ⊆ \Aut(X)$, there exists a subgroup $F ⊆ G$ of index $|G:F| ≤ j^{rc}_d$ acting on $X$ with a fixed point.
\end{prop} \begin{proof}[Sketch of proof] We argue by induction on the dimension. Since the case $d=1$ is trivial, assume that $d > 1$ is given, and that numbers $j^{rc}_1, …, j^{rc}_{d-1}$ have been found. Set $$ j_d := \max \{j^{rc}_1, …, j^{rc}_{d-1}, j^{\Fano}_d \} \quad\text{and}\quad j^{rc}_d := (j_d)². $$ Assume that a $d$-dimensional, rationally connected projective variety $X$ and a finite subgroup $G ⊆ \Aut(X)$ are given. By induction hypothesis, it suffices to find a subgroup $G' ⊆ G$ of index $|G:G'| ≤ j_d$ and a $G'$-invariant, rationally connected, proper subvariety $Z ⊊ X$. If $\widetilde{X} → X$ is the canonical resolution of singularities, as in \cite{BM96}, then $\widetilde{X}$ is likewise rationally connected, $G$ acts on $\widetilde{X}$ and the resolution morphism is equivariant. Since images of rationally connected, invariant subvarieties are rationally connected and invariant, we may assume from the outset that $X$ is smooth. But then we can run a $G$-equivariant Minimal Model Programme\footnote{The existence of an MMP terminating with a fibre space is \cite[Cor.~1.3.3]{BCHM10}, which we have quoted before. The fact that the MMP can be chosen in an equivariant manner is not explicitly stated there, but follows without much pain.} terminating with a $G$-Mori fibre space, $$ \xymatrix{ % X \ar@{-->}[rrr]^{G\text{-equivariant MMP}} &&& X' \ar[rrr]^{G\text{-Mori fibre space}} &&& Y. } $$ In the situation at hand, Lemma~\ref{lem:xfgs} asserts that in order to find proper, invariant, rationally connected subvarieties of $X$, it suffices to find them on $X'$. The fibre structure, however, makes that feasible. Indeed, if the base $Y$ of the fibration happens to be a point, then $X'$ is Fano with terminal singularities, and Lemma~\ref{lem:j1} applies. Otherwise, let $G_Y$ be the image of $G$ in $\Aut(Y)$, let $G_{X'/Y} ⊆ G$ be the ineffectivity of the $G$-action on $Y$, and consider the exact sequence $$ 1 → G_{X'/Y} → G → G_Y → 1.
$$ As the image of the rationally connected variety $X'$, the base $Y$ is itself rationally connected. By induction hypothesis, using that $\dim Y < \dim X$, there exists a subgroup $G'_Y ⊆ G_Y$ of index $|G_Y:G'_Y| ≤ j_d$ that acts on $Y$ with a fixed point, say $y ∈ Y$. Let $G' ⊂ G$ be the preimage of $G'_Y$. The fibre $X'_y$ is then invariant with respect to the action of $G'$ and rationally chain connected by \cite[Cor.~1.3]{HMcK07}. Better still, Prokhorov and Shramov show that it contains a rationally connected, $G'$-invariant subvariety. The induction applies. \end{proof} \subsection{Proof of Theorem~\ref*{thm:jordan} (``Jordan property of Cremona groups'')} \label{ssec:prof} Given a number $d ∈ ℕ$, we claim that the number $j := j^{rc}_d·j^{\Jordan}_d$ will work for us, where $j^{rc}_d$ is the number found in Proposition~\ref{prop:j2}, and $j^{\Jordan}_d$ comes from Jordan's Theorem~\ref{thm:jordan-lin}. To this end, let $X$ be any rationally connected variety of dimension $d$, and let $G ⊆ \Bir(X)$ be any finite group. Blowing up the indeterminacy loci of the birational transformations $g ∈ G$ in an appropriate manner, we find a birational, $G$-equivariant morphism $\widetilde{X} → X$ where the action of $G$ on $\widetilde{X}$ is regular rather than merely birational, see \cite[Thm.~3]{MR0337963}. Combining with the canonical resolution of singularities, we may assume that $\widetilde{X}$ is smooth. Proposition~\ref{prop:j2} then guarantees the existence of a subgroup $G' ⊆ G$ of index $|G:G'| ≤ j^{rc}_d$ acting on $\widetilde{X}$ with a fixed point $\widetilde{x}$. Standard arguments (``linearisation at a fixed point'') that go back to Minkowski show that the induced action of $G'$ on the Zariski tangent space $T_{\widetilde{x}}(\widetilde{X})$ is faithful, so that Jordan's Theorem~\ref{thm:jordan-lin} applies.
In fact, assuming that there exists an element $g ∈ G' ∖ \{e\}$ with $Dg|_{\widetilde{x}} = \Id_{T_{\widetilde{x}}(\widetilde{X})}$, choose coordinates and use a Taylor series expansion to write $$ g(\vec{x}) = \vec{x} + A_k(\vec{x}) + A_{k+1}(\vec{x}) + … $$ where each $A_m(\vec{x})$ is homogeneous of degree $m$, and $A_k$ is non-zero. Given any number $n$, observe that $$ g^n(\vec{x}) = \vec{x} + n·A_k(\vec{x}) + \text{(higher order terms)}, $$ which follows by induction: $g^{n+1}(\vec{x}) = g^n(\vec{x}) + A_k\bigl(g^n(\vec{x})\bigr) + … = \vec{x} + (n+1)·A_k(\vec{x}) + \text{(higher order terms)}$, since $A_k$ evaluated at $\vec{x}$ plus higher-order terms equals $A_k(\vec{x})$ up to higher-order terms. Since the base field has characteristic zero, $n·A_k$ never vanishes, and this contradicts the finite order of $g$. \qed
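The displayed formula for $g^n$ can also be checked symbolically in a one-variable toy case. The sketch below is an illustration, not part of the original argument: it iterates $g(x) = x + x^2$ (so $k = 2$) and confirms that the coefficient of $x^2$ in $g^n$ equals $n$, which is incompatible with $g^n = \mathrm{id}$ for any $n > 0$ in characteristic zero.

```python
import sympy as sp

x = sp.symbols('x')

def iterate(expr, n):
    # compose g(t) = t + t**2 with itself n times, expanding at each step
    for _ in range(n):
        expr = sp.expand(expr + expr**2)
    return expr

# the coefficient of x**2 in g^n grows linearly with n: it equals n
print([iterate(x, n).coeff(x, 2) for n in (1, 2, 3, 4)])  # [1, 2, 3, 4]
```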
\section{Introduction} This talk is based on Refs.~\cite{Bozek:2012gr,Bozek:2013uha,Bozek:2013ska}, where the details and more complete lists of references can be found (see also the mini-review~\cite{Bozek:2013yfa}). We wish to address here the most intriguing physics questions concerning the topic: \begin{itemize} \item Are the highest-multiplicity p-Pb collisions {\em collective?} \item What is the nature of the initial state and correlations therein? \item What are the limits in conditions on applicability of hydrodynamics? \end{itemize} Recall that the {\em collective flow} is one of the principal signatures of the strongly-interacting Quark-Gluon Plasma formed in ultra-relativistic A-A collisions at RHIC and the LHC. It manifests itself in harmonic components in the momentum spectra $v_n$, in specific structures in the correlation data {\em (ridges)}, in {\em mass hierarchy} of the $p_T$ spectra and $v_n$'s of identified particles, as well as in certain features of interferometry (femtoscopy). Since 1)~the ridges were found experimentally at the LHC in p-Pb collisions~\cite{CMS:2012qk,*Abelev:2012ola,*Aad:2012gla,*Aad:2013fja,*Chatrchyan:2013nka}, 2)~large elliptic and triangular flow was measured in p-Pb~\cite{Aad:2013fja,*Chatrchyan:2013nka}, 3)~strong mass hierarchy was recently detected in p-Pb~\cite{Chatrchyan:2013eya,*ABELEV:2013wsa,*Abelev:2013bla}, there are clear analogies between the ``collective'' A-A system and the ``small'' p-A system. Below we present the evidence for the collective interpretation of the highest-multiplicity p-A collisions. 
\section{Three-stage approach} To place our argumentation on a quantitative level, we use the three-stage approach consisting of 1)~modeling of the initial phase with the Glauber approach as implemented in GLISSANDO~\cite{Broniowski:2007nz,*Rybczynski:2013yba}, 2)~applying event-by-event 3+1D viscous hydrodynamics~\cite{Bozek:2011ua} to the intermediate evolution, and 3)~carrying out statistical hadronization at freezeout with THERMINATOR~\cite{Kisiel:2005hn,*Chojnacki:2011hb}. The details can be found in Refs.~\cite{Bozek:2012gr,Bozek:2013uha}. Here we only wish to point out the similarity of the initial conditions in high-multiplicity p-A collisions to those in peripheral A-A collisions, as seen from Fig.~\ref{f1}. This indicates that our approach should work with similar accuracy for the most central p-Pb collisions as it worked for the Pb-Pb collisions at centralities~$\sim 70\%$. \begin{figure} \centering \includegraphics[width=0.67\columnwidth]{dis_size_4.pdf} \caption{Event-by-event distribution of the rms size of the Glauber initial conditions for the {\em fixed} number of participants, $N_p=18$, for the standard source in p-Pb (thick solid line) obtained by placing the sources at the centers of participants, the compact source (dashed line), obtained by placing the sources in the center-of-mass of the colliding pair~\cite{Bzdak:2013zma}, and for the peripheral Pb-Pb collisions (thin solid line). The p-Pb sizes are smaller than the Pb-Pb sizes by no more than a factor of two. At the same time, the p-Pb system is denser. This allows one to analyze the p-Pb system with viscous hydrodynamics~\cite{Bozek:2013yfa}. \label{f1}} \end{figure} \section{The ridge} \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{surf.pdf} \caption{Creation of the near-side ridge: The surfers' motion is correlated even when they are widely separated along the shore.
\label{surf}} \end{figure} The emergence of the ridges in the two-particle correlations in the relative azimuth and pseudorapidity, $C(\Delta \eta, \Delta \phi) = {N^{\rm pairs}_{\rm phys}(\Delta \eta, \Delta \phi)}/{N^{\rm pairs}_{\rm mixed}(\Delta \eta)}$, finds a natural explanation in the correlated collective flow orientation within a long pseudorapidity span. This is cartooned in Fig.~\ref{surf}. Numerical calculation in our approach yields fair agreement with the data, as indicated in Fig.~\ref{atlas}, providing an alternative explanation to the color-glass approach~\cite{Dusling:2012iga,*Dusling:2013oia}. As shown below, the event-by-event hydrodynamics also yields the proper magnitude of the triangular component of the flow in a natural way. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{atlas.pdf} \\ \includegraphics[width=0.6\textwidth]{at_zyam.pdf} \caption{Correlation functions in the ATLAS kinematic conditions. Top:~$C(\Delta \phi,\Delta \eta)$. Bottom: the projected ($2\le |\Delta \eta| \le 5$) per-trigger correlation function $Y(\Delta \phi)={\int B(\Delta \phi) d(\Delta \phi)} C(\Delta\phi)/N - b_{\rm ZYAM}$, compared to the ATLAS data. The solid (dashed) lines correspond to the standard (compact) source. \label{atlas}} \end{figure} \section{Harmonic flow} The structure of the correlation data (similar to the top panel of Fig.~\ref{atlas}) indicates that one may get rid of most of the nonflow effects by excluding the central peak from the analysis, simply using pairs with $|\Delta \eta| > 2$. The flow coefficients ($v_n\{2, |\Delta \eta| > 2\}$) obtained that way from the experiment and from our model simulations are compared in Figs.~\ref{v2cms} and \ref{v3cms}. We note a very fair agreement with the data for the highest-multiplicity events. As the system becomes smaller, the simulations depart from the experiment, indicating that the dissipative effects or the direct production from the corona are becoming important.
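The two-particle flow coefficient $v_2\{2\}$ discussed above can be estimated from a sample of azimuthal angles with the standard $Q$-vector identity. The toy below is a sketch, not the analysis of the paper: it uses pure elliptic flow, one large combined "event", and no pseudorapidity information, so the $\Delta\eta$ gap that suppresses nonflow is omitted.

```python
import numpy as np

def v2_two_particle(phi):
    # two-particle estimate: v2{2}^2 = <cos 2(phi_i - phi_j)> over distinct
    # pairs, evaluated in O(N) with the Q-vector Q2 = sum_j exp(2i phi_j)
    n = len(phi)
    q2 = np.sum(np.exp(2j * phi))
    return np.sqrt((np.abs(q2) ** 2 - n) / (n * (n - 1)))

# toy sample drawn from dN/dphi ∝ 1 + 2 v2 cos(2 phi) by accept-reject
rng = np.random.default_rng(1)
v2_true = 0.1
phi = rng.uniform(0.0, 2.0 * np.pi, 400_000)
accept = rng.uniform(0.0, 1.2, phi.size) < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
print(v2_two_particle(phi[accept]))  # close to v2_true = 0.1
```

Subtracting $n$ from $|Q_2|^2$ removes the self-pairs, the one-line analogue of excluding the central peak in the correlation function.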
\begin{figure} \vspace{-2cm} \centering \includegraphics[width=0.8\textwidth]{v2cms.pdf} \vspace{-15mm} \caption{The elliptic flow coefficient $v_2\{2, |\Delta \eta| > 2\}$ from our model (points) compared to the CMS data. \label{v2cms}} \end{figure} \begin{figure} \vspace{-4mm} \centering \includegraphics[width=0.8\textwidth]{v3cms.pdf} \vspace{-15mm} \caption{Same as Fig.~\ref{v2cms} but for the triangular flow $v_3\{2, |\Delta \eta| > 2\}$. The departure of the model from the experiment for lower centrality classes indicates the limits of validity of the collective approach. \label{v3cms}} \end{figure} \section{Mass hierarchy} A very important effect of the presence of collective flow is the emergent mass hierarchy in certain heavy-ion observables~\cite{Bozek:2013ska,Werner:2013ipa}. The effect is kinematic: hadrons emitted from a moving fluid element acquire more momentum when they are more massive. For that reason, for instance, the average transverse momentum of the protons is significantly higher than for the kaons, which in turn is higher than for the pions. The results, showing agreement of our approach with the data, are presented in Fig.~\ref{ids}(a). As a benchmark with no flow, we show in Fig.~\ref{ids}(b) the results of the HIJING simulations, exhibiting much smaller splitting. A proper pattern in the differential identified-particle elliptic flow is also found, as seen from Fig.~\ref{idv}. We note that a very general argument in favor of collectivity, based on the failure of superposition in the p-A spectra, has been brought up in Ref.~\cite{Bzdak:2013lva}. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{avpt.pdf} \vspace{-9mm} \caption{Mean transverse momentum of identified particles produced in the p-Pb collisions, plotted as a function of the charged particle density. (a)~our model, and for comparison (b)~HIJING~2.1, where no collective effects are present.
The lines correspond to the model calculations, while the data points come from Ref.~\cite{Abelev:2013bla}. \label{ids}} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{v2id.pdf} \caption{$v_2\{2\}$ for pions, kaons and protons in p-Pb collisions calculated in our model, plotted as a function of the transverse momentum. The data come from Ref.~\cite{ABELEV:2013wsa}. \label{idv}} \end{figure} \section{Conclusions} The numerous experimental data from the LHC for the p-Pb collisions of highest multiplicity are compatible with the collective expansion scenario: the formation of the two ridges, large elliptic and triangular flow, and the mass hierarchy found in the average transverse momentum and in the differential elliptic flow \cite{Bozek:2013ska,Werner:2013ipa}. Thus the p-Pb system can be used as a test ground for the onset of collective dynamics. Certainly, lower-multiplicity events are ``contaminated'' with other effects, e.g., production from the corona nucleons, and their modeling must be more involved. Another signature of collectivity would be provided by the interferometric radii, where the model calculations for p+Pb place the results closer to the A+A lines and farther from the p+p lines~\cite{Bozek:2013df}. \section*{Acknowledgments} This work was supported by the Polish National Science Centre, grants DEC-2012/06/A/ST2/00390 and DEC-2011/01/D/ST2/00772, and PL-Grid infrastructure. \bibliographystyle{apsrev}
\section{Introduction} In the last decade, tracking algorithms have evolved significantly in both their sophistication and quality of results. However, tracking is still considered a very challenging task in computer vision, particularly because a slight mistake in one frame may be reinforced after an online learning step, resulting in the so-called model drift problem. Furthermore, occlusion of target objects occurs quite often in real world scenarios, and it is not clear how to model occlusions robustly. The state-of-the-art methods (\eg \cite{tld,mil,semiB,ct}) usually employ very powerful learning and energy minimization methods in the hopes of better handling these issues. Fortunately, we are moving into a 3D era for digital devices. Accurate and affordable depth sensors, such as Microsoft Kinect, Asus Xtion and PrimeSense, make depth acquisition easy and cheap. With an accurate depth map, many traditional computer vision tasks become significantly easier (\eg human pose estimation \cite{Jamie}). For tracking, the depth map can provide valuable additional information to significantly improve results with much more robust occlusion and model drift handling.
\begin{figure}[t] \centering \includegraphics[width=0.159\linewidth]{./image/sample/22.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/3.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/7.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/10.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/13.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/15.pdf} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/22.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/3.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/7.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/10.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/13.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/15.png} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/1.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/24.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/21.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/6.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/8.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/5.pdf} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/1.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/24.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/21.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/6.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/8.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/5.png} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/12.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/20.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/26.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/17.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/18.pdf}~% 
\includegraphics[width=0.159\linewidth]{./image/sample/19.pdf} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/12.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/20.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/26.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/17.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/18.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/19.png} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/2.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/4.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/28.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/27.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/16.pdf}~% \includegraphics[width=0.159\linewidth]{./image/sample/9.pdf} \vspace{1mm} \includegraphics[width=0.159\linewidth]{./image/sample/2.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/4.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/28.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/27.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/16.png}~% \includegraphics[width=0.159\linewidth]{./image/sample/9.png} \caption{Examples of our RGBD tracking benchmark dataset with manual annotation of all frames. } \label{fig:dataset} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth]{./image/overview.pdf} \caption{ Illustration of our baseline RGBD tracking algorithm. The 2D confidence map is the combined confidence map from classifier and optical flow tracker. The 1D depth distribution is a Gaussian estimated from target depth histogram. A 3D confidence map is computed by applying threshold from the 1D Gaussian on the 2D confidence map. In the output, the target location (the green bounding boxes) is position of the highest confidence. 
The occluder is recognized by its depth value.} \label{overview} \end{figure} How much does depth information help in tracking? What is the baseline performance for tracking given the depth information? How far are we from claiming that we have solved the tracking problem if reliable depth is available? What is a reasonable baseline algorithm for tracking with RGBD data, and how do the state-of-the-art RGB tracking algorithms perform compared with this RGBD baseline? This paper seeks to answer these questions by proposing a very simple but powerful baseline algorithm and conducting a quantitative benchmark evaluation. To build a reasonable baseline, we use state-of-the-art HOG features \cite{HOG,DPM} in a sliding-window detector with a linear SVM \cite{SVM,PrimalSVM}, incorporating depth information to prevent model drift, together with robust optical flow \cite{LargeFlow}, and propose a very simple model to represent the depth distribution for occlusion handling. To evaluate the algorithms, we construct a large RGBD video dataset of 100 videos with high diversity, including deformable objects, various occlusion conditions, and moving cameras, under different lighting conditions and in different scenes (Figure \ref{fig:dataset}). We aim to lay the foundation for further research in this task, for both RGB and RGBD tracking approaches, by providing a good benchmark and baseline. We will withhold the ground truth annotation for a portion of the dataset, provide instructions for submitting new models, and host an online evaluation server to allow public submission of the results from new models. \subsection{Related works} There are many noteworthy tracking algorithms which have been proposed in the last decade. Here we briefly summarize only a partial list of them, due to space constraints. \cite{mil} proposes a very robust system with online multiple instance learning. \cite{tld} designs a framework to integrate tracking, learning, and detection using P-N loops.
\cite{semiB} uses semi-supervised online boosting to increase tracking robustness. \cite{EigenTracking} learns a view-based representation to account for object articulations, while \cite{Adam06robustfragments-based} handles it using a fragments-based model. To address target appearance changes, \cite{robustonline} uses a Gaussian Mixture Model built from online expectation maximization (EM), and \cite{Ross08incrementallearning} presents an incremental subspace learning algorithm. More recently, \cite{ct} proposes using compressive sensing for real-time tracking, and \cite{DBLP:conf/iccv/HareST11} presents structured output prediction to avoid intermediate classification. There are also some important works on multiple target tracking and motion flow estimation, such as \cite{dahua,xiaogang}. There have also been some seminal works on tracking using RGBD cameras \cite{luber11learning,spinelloIROS11,luberIROS11,spinelloICRA11}, but they all focus on tracking the human body. The publicly available RGBD People Dataset \cite{spinelloIROS11,luberIROS11} contains only one sequence with 1132 frames captured with static cameras with only people moving, which is obviously not enough to evaluate tracking algorithms for general objects. There have been several great benchmarks for various computer vision tasks that help to advance the field and shape computer vision as a rigorous experimental science, \eg the two-view stereo matching benchmark \cite{stereoBenchmark}, multiple-view stereo reconstruction benchmark \cite{mviewBenchmark}, optical flow benchmark \cite{opticalFlowBenchmark}, Markov Random Field energy optimization benchmark \cite{MRFBenchmark}, object classification, detection, and segmentation benchmark \cite{voc}, scene classification benchmark \cite{SUNDB} and large scale image classification benchmark \cite{ImageNet}. This paper is an addition to the list to provide a benchmark of tracking, for both RGB and RGBD video.
\section{Baseline algorithm} The goal is to build a simple but strong baseline algorithm leveraging state-of-the-art feature, detection and optical flow algorithms with simple but reasonable occlusion handling. An overview of the baseline tracking algorithm is shown in Figure \ref{overview}. \subsection{Detection and optical flow} Our baseline algorithm includes a linear support vector machine classifier (SVM \cite{SVM,PrimalSVM}) based on RGBD features, an optical flow tracker \cite{LargeFlow} and a target depth distribution model, which are initialized by the input bounding box from the first frame and updated online. The RGBD feature we use is the histogram of oriented gradients (HOG \cite{HOG,DPM}) computed from both RGB and depth data (Figure \ref{rhogdhog2}). HOG for depth is obtained by treating the depth data as a gray scale image. This RGBD HOG feature describes local textures as well as 3D shapes, in which the target is more separable from the background as well as the occluder, and therefore improves the robustness against model drifting, especially when there is illumination variation, lack of texture, or high similarity between target and background color. In the subsequent frame, a HOG pyramid is computed, and a sliding window is run using a convolution of the SVM weights, which returns several possible target locations with their confidence. The confidences of these locations are then adjusted according to the bounding box estimated from the optical flow tracker \cite{LargeFlow} in the following way: \begin{equation} \label{eq:finalconf} c = c_{d} + \alpha c_{t} r_{(t,d)} \end{equation} in which $c_{d}$ is the confidence of detection, $c_t$ is the confidence of optical flow tracking, and $r_{(t,d)}$ is the ratio of overlap between the detection and optical flow tracker's resulting bounding boxes, \ie an indication of their consistency. $\alpha$ denotes the weight of the overlap ratio ($\alpha = 0.5$ in our experiment).
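Equation~\eqref{eq:finalconf} is a one-liner once the overlap ratio is fixed. The sketch below is illustrative: the $(x, y, w, h)$ box format and the use of intersection-over-union for $r_{(t,d)}$ are our assumptions, since the text does not specify the exact overlap measure.

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two boxes (x, y, w, h) -- one plausible
    choice for the overlap ratio r_(t,d)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def combined_confidence(c_d, c_t, box_d, box_t, alpha=0.5):
    # Eq. (1): c = c_d + alpha * c_t * r_(t,d)
    return c_d + alpha * c_t * overlap_ratio(box_d, box_t)
```

When the two cues agree (high overlap), the flow tracker's confidence reinforces the detection score; with no overlap the detection score is left unchanged.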
After this step, the target depth distribution model, a Gaussian distribution learned from previous frames, discards bounding boxes far from the estimated target depth. The most probable remaining bounding box is picked and re-centered towards the center of the nearby region whose depth agrees with the target depth model. Such re-centering helps prevent drifting of the output bounding boxes. Afterwards, the target models are updated using this bounding box with hard negative mining, and the tracker proceeds to the next frame. \subsection{Occlusion handling} \begin{figure}[t] \subfigure[HOG of RGB]{\includegraphics[width=0.33\linewidth]{./image/rhog.pdf}}~% \subfigure[RGB image]{\includegraphics[width=0.33\linewidth]{./image/rhogdhog.pdf}}~% \subfigure[HOG in Depth]{\includegraphics[width=0.33\linewidth]{./image/dhog.pdf}} \caption{The features we used for our baseline algorithm.} \label{rhogdhog2} \end{figure} In order to handle occlusions, some traditional RGB trackers like \cite{tld} use forward-backward error to indicate tracking failure caused by occlusion, and some others like \cite{Adam06robustfragments-based,mil} use a fragment-based model to reduce the models' sensitivity to partial occlusion. However, with depth information the solution to this issue becomes more straightforward. Here we propose a simple but effective occlusion handler which actively detects target occlusion and recovery during the tracking process. \paragraph{Occlusion detection} We assume that the target is the closest object that dominates the bounding box when not occluded. A new occluder in front of the target inside the bounding box indicates the beginning of an occlusion state. Therefore, the depth histogram inside the bounding box is expected to have a newly rising peak with a smaller depth value than the target, and/or a reduction in the size of the bins around the target depth, as illustrated in Figure \ref{occ_hist}.
The depth histogram $h_i$ of all pixels inside a bounding box can be approximated as a Gaussian distribution for the $i$-th frame: \begin{equation} \label{eq:targetdistribution} h_i \sim \mathcal{N} (\mu_i,\sigma_i^2). \end{equation} And we define the likelihood of occlusion for this frame as: \begin{equation} \label{eq:checkocc} O_i = \frac{\sum\limits_{d=0}^{\mu_i-\sigma_i} h_i(d)}{\sum\limits_{d} h_i(d)}, \end{equation} where $h_i(d)$ is the count in the $d$-th bin for the $i$-th frame, and $d=0$ is the depth of the camera. $\mu_i-\sigma_i$ is a threshold for a point to be considered as occluder. The number of pixels that have smaller depth value than target depth is considered the area of the occluder that has appeared in the bounding box. Hence, a larger $O_i$ indicates that an occlusion is more likely. The target depth value is updated online, so a target moving towards the camera will not be treated as an occlusion. \begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{./image/Normal_hist.pdf}~\includegraphics[height=0.29\linewidth]{./image/Normal_rgb.png} \includegraphics[width=0.65\linewidth]{./image/Occ_hist.pdf}~\includegraphics[height=0.29\linewidth]{./image/occ_rgb.png} \caption{Depth distribution inside the bounding box. The top row shows the distribution in normal state, and the bottom row shows the distribution when occlusion occurs. The red Gaussian denotes the target model, and the green denotes the occluder model.} \label{occ_hist} \vspace{-3mm} \end{figure} \paragraph{Under occlusion} Our occlusion model, \ie the occluder's depth distribution, is initialized when entering the occlusion state. In the following frames, the occluder's position is updated by the optical flow tracker. A list of possible target candidates are identified either by the RGBD detection or a local search around the occluder. 
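Before turning to the search itself, the occlusion likelihood $O_i$ above can be sketched as counting histogram mass in front of the target. A minimal sketch, assuming a fixed binning of the depth range; the bin count, the depth range, and the function name are our choices, not specified in the text:

```python
import numpy as np

def occlusion_likelihood(depths, mu, sigma, n_bins=100, d_max=10.0):
    """Fraction of bounding-box pixels whose depth bin lies in front of
    mu - sigma, i.e. the relative area of a potential occluder."""
    hist, edges = np.histogram(depths, bins=n_bins, range=(0.0, d_max))
    in_front = edges[1:] <= mu - sigma  # bins strictly in front of the target
    return hist[in_front].sum() / max(hist.sum(), 1)
```

With the target at depth $\mu_i = 3$ m and $\sigma_i = 0.5$ m, a patch in which 20\% of the pixels sit at 1 m yields $O_i = 0.2$; the online update of $\mu_i$ (so that an approaching target is not flagged) is not modeled here.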
With the color and depth distributions of target and occluder, the local search is done by performing segmentation on the RGB and depth data respectively and combining their results. The combined segmentation produces a list of target candidates (Figure \ref{recover}), whose validity is then judged by the SVM classifier. If there is no candidate in the search range or all of them are invalid, the tracker just tracks the occluder, preparing for the next frame. \paragraph{Recovery from occlusion} By examining the list of possible target candidates, the tracker declares target recovery when at least one candidate from the list satisfies the following conditions: (1) the candidate's visible area is large enough compared to the target area before entering occlusion, (2) the overlap between the occluder and the candidate is small, and (3) the SVM classifier reports a high confidence. The occlusion subroutine ends if the target is recovered from occlusion. \begin{figure}[t] \subfigure[]{\includegraphics[width=0.24\linewidth]{./image/recovery1.png}}~% \subfigure[]{\includegraphics[width=0.24\linewidth]{./image/recovery2.png}}~% \subfigure[]{\includegraphics[width=0.24\linewidth]{./image/recovery3.png}}~% \subfigure[]{\includegraphics[width=0.24\linewidth]{./image/recovery4.png}} \vspace{-1mm} \caption{Local search for target candidates by segmentation. (a) RGB image (the target location is indicated by the green bounding box, the occluder by the blue bounding box), (b) depth segmentation, (c) RGB segmentation, (d) final segmentation result along with possible target candidates.} \label{recover} \end{figure} \section{RGBD Tracking Benchmark} \subsection{Dataset construction} Several testing sets of RGB videos have been developed to measure the performance of different trackers. However, these datasets do not contain depth information and thus are not suitable for our purpose.
In order to evaluate the performance improvement from depth information, we recorded a benchmark dataset consisting of 100 video clips with both RGB and depth data, manually annotated with the ground truth. \paragraph{Hardware setup} Our testing dataset is captured using a standard Microsoft Kinect. It uses a paired infrared projector and camera to calculate depth values, thus its performance is severely impaired in an outdoor environment under direct sunlight. Also, Kinect requires a minimum and a maximum distance from the object to the camera in order to obtain accurate depth values. Due to the above constraints, our videos are captured indoors, with object depth values ranging mainly from 0.8 to 6 meters. \begin{figure}[t] \includegraphics[width=1\linewidth]{./image/dataset.pdf} \caption{Statistics of our RGBD tracking benchmark dataset.} \label{tab:statistics} \vspace{-3mm} \end{figure} \begin{figure*}[t] \subfigure[Test cases without occlusion]{\includegraphics[width=0.33\linewidth]{./image/normalErr.pdf}}~% \subfigure[Test cases with occlusion]{\includegraphics[width=0.33\linewidth]{./image/occErr.pdf}}~% \subfigure[All test cases]{\includegraphics[width=0.33\linewidth]{./image/totalErr.pdf}} \caption{Average error rate composed of three types evaluated on different categories of test cases.} \label{errRate} \end{figure*} \paragraph{Annotation} We manually annotate the ground truth (target location) of the dataset by drawing a bounding box on each frame as follows: a minimum bounding box covering the target is initialized on the first frame. On the next frame, if the target moves or its shape changes, the bounding box is adjusted accordingly; otherwise, it remains the same. One author manually annotated all frames to ensure high consistency. Because we manually annotate each frame, there is no interpolation or selection of key frames.
When occlusion occurs, the ground truth is defined as the minimum bounding box covering only the visible portion of the target. For example, if a person is occluded so that only his/her left arm can be seen, then we provide the bounding box of the left arm instead of a predicted position of the whole human body. When the target is completely occluded, there is no bounding box for this frame. The same labeling criterion is also used in the PASCAL VOC challenge. We annotate all following frames in this way. \subsection{Dataset statistics} Since the aim of the dataset is to cover as many scenarios as possible in real-world tracking applications, the diversity of the video clips is important. Figure \ref{tab:statistics} summarizes the statistics of our RGBD tracking dataset, which exhibits variety in the following aspects: \paragraph{Target type} We divide targets into three types: human, animal and relatively rigid object. Rigid objects, such as toys and human faces, can only translate or rotate. Animals include dogs, rabbits and turtles, whose movement usually consists of out-of-plane rotation and some deformation. The number of degrees of freedom of human body motion is very high, and body parts, such as arms and legs, are often slim, resulting in a variety of deformations which may increase the difficulty of tracking. \paragraph{Target speed} Tracking difficulty is often related to target speed. We denote target speed by $1- r_{(i,i+1)}$, where $r_{(i,i+1)}$ is the ratio of overlap between target bounding boxes in two consecutive frames when no occlusion occurs. The target speed of a video sequence is defined by its maximum during the sequence. Compared to the real speed of the target, this definition of speed has a more direct influence on tracking performance, as it takes into account differences in frame rate. The average target speed in our video dataset ranges from 0.057 to 0.599.
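The speed measure can be sketched as follows; boxes are assumed axis-aligned in $(x, y, w, h)$ form, the overlap is taken as intersection over union, and the per-sequence value follows the maximum-over-frames definition above (function names are ours):

```python
def overlap(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def frame_speed(box_prev, box_next):
    """Per-frame target speed: 1 - r_(i, i+1)."""
    return 1.0 - overlap(box_prev, box_next)

def sequence_speed(boxes):
    """Speed of a sequence: the maximum per-frame speed over the sequence."""
    return max(frame_speed(a, b) for a, b in zip(boxes, boxes[1:]))
```

A stationary target yields speed 0, and a jump to a completely disjoint box yields speed 1, so the measure is bounded and frame-rate aware.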
\paragraph{Scene type} Background clutter is also an important factor affecting tracking performance. In our dataset, we provide several types of scenes with different levels of background clutter. The scenes include cafe, concourse, library, living room, office, playground and sports field. The living room, for example, has a simple and mostly static background, while the background of a cafe is complex, with many people passing by. \paragraph{Presence of occlusion} Out of the 100 videos in our dataset, occlusion occurs in 63 videos, in which the targets are totally occluded in 16.3 frames on average. The videos cover several aspects that may affect tracking performance under occlusion, \eg how long the target is occluded, whether the target moves or changes in appearance during occlusion, and the similarity between the occluder and the target. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{./image/label.pdf} \vspace{-2mm} \subfigure[Test cases without occlusion]{\includegraphics[width=0.33\linewidth]{./image/successRateNoOcc.pdf}}~% \subfigure[Test cases with occlusion]{\includegraphics[width=0.33\linewidth]{./image/successRateOcc.pdf}}~% \subfigure[All test cases]{\includegraphics[width=0.33\linewidth]{./image/successRateTotal.pdf}} \caption{Average success rate vs. threshold of overlap ratio ($r_t$) evaluated on different categories of test cases.} \label{SR} \end{figure*} \subsection{Evaluation metric} We use two metrics to evaluate the proposed baseline algorithm against other state-of-the-art trackers. One metric is the center position error (CPE), which is the Euclidean distance between the centers of the output target bounding boxes and the ground truth. This metric shows how close the tracking results are to the ground truth in each frame. However, the overall performance of the trackers cannot be measured by averaging this distance.
When the trackers are misled by background clutter, such distances can be huge, so the average distance may be dominated by only a few frames. Also, this distance is undefined when a tracker fails to output a bounding box or there is no ground truth bounding box (target totally occluded). To evaluate the overall performance, we employ the criterion used in the PASCAL VOC challenge \cite{voc}, the ratio of overlap $r_i$ between the output and true bounding boxes: \begin{equation} \label{eq:overlap} r_{i}=\begin{cases} \frac{\text{area}(\text{ROI}_{T_{i}}\cap\text{ROI}_{G_{i}})}{\text{area}(\text{ROI}_{T_{i}}\cup\text{ROI}_{G_{i}})} & \text{if both \ensuremath{\text{ROI}_{T_{i}}} and \ensuremath{\text{ROI}_{G_{i}}} exist}\\ 1 & \text{if neither \ensuremath{\text{ROI}_{T_{i}}} nor \ensuremath{\text{ROI}_{G_{i}}} exists}\\ -1 & \text{otherwise} \end{cases} \end{equation} where $\text{ROI}_{T_i}$ is the target bounding box in the $i$-th frame and $\text{ROI}_{G_i}$ is the ground truth bounding box. By setting a minimum overlap ratio $r_t$, we can calculate the average success rate $R$ of each tracker as follows: \begin{align} R &= \dfrac{1}{N} \sum\limits_{i=1}^{N}u_i,\\ u_i &= \begin{cases} 1 & \text{if } r_i>r_t\\ 0 & \text{otherwise} \end{cases}, \end{align} where $u_i$ is an indicator denoting whether the output bounding box of the $i$-th frame is acceptable, and $N$ is the number of frames. According to the above calculation, the minimum overlap ratio $r_t$ makes a hard decision on whether an output is valid or not. Since some trackers may produce outputs that have a small overlap ratio over all frames, while others give large overlap on some frames and fail completely on the rest, $r_t$ must be treated as a variable to conduct a fair comparison.
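The overlap criterion with its $\pm 1$ convention and the resulting success rate can be sketched as below; missing boxes (no tracker output, or a fully occluded target) are encoded as None, boxes are $(x, y, w, h)$ tuples, and the names are ours:

```python
def overlap_ratio(roi_t, roi_g):
    """Per-frame overlap: IoU if both boxes exist, 1 if neither exists,
    -1 otherwise. Boxes are (x, y, w, h) tuples or None."""
    if roi_t is None and roi_g is None:
        return 1.0
    if roi_t is None or roi_g is None:
        return -1.0
    ax, ay, aw, ah = roi_t
    bx, by, bw, bh = roi_g
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(outputs, ground_truth, r_t):
    """Average success rate R for a given acceptance threshold r_t."""
    hits = sum(1 for t, g in zip(outputs, ground_truth)
               if overlap_ratio(t, g) > r_t)
    return hits / len(outputs)
```

Sweeping `r_t` over $[0, 1]$ and plotting `success_rate` reproduces the kind of curves shown in Figure \ref{SR}; note that frames with value $-1$ can never pass any non-negative threshold, so missed outputs and spurious outputs are both penalized.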
In Figure \ref{errRate}, we further divide tracking failures into three types: \begin{align*} \text{Type I }:& \text{ROI}_{T_i}\ne null\text{ and } \text{ROI}_{G_i}\ne null\text{ and }r_i<r_t\\ \text{Type II }:& \text{ROI}_{T_i}\ne null\text{ and } \text{ROI}_{G_i}=null \\ \text{Type III}:& \text{ROI}_{T_i}=null\text{ and } \text{ROI}_{G_i}\ne null \end{align*} Type I error is the case where the target is visible but the tracker's output is far away from the target. Type II error is where the target is invisible but the tracker outputs a bounding box. Type III error is where the target is visible but the tracker fails to give any output. The running time of each algorithm is not included in our metrics, because our main focus is on the performance of the tracking algorithms on RGBD data. In our current implementation of the baseline algorithm, we tried to keep the system as simple as possible, so the code is written in Matlab and is not optimized for speed. There are many potential ways to speed up the algorithm if we wish to use it in real-time applications. For example, instead of a naive convolution for the sliding window detector, we can use \cite{FastConv} for acceleration. Instead of an optical flow \cite{LargeFlow} running on the CPU, we can use an optical flow running on the GPU \cite{GPUFlow}. Furthermore, there are also many ways to maximize the efficiency using special hardware, such as FPGAs or customized ASIC circuits. \subsection{Evaluation results} To understand how much of the performance improvement is due to the use of depth data and how much is due to occlusion handling, we tested four versions of our proposed baseline tracker: \begin{description} \vspace{-2mm} \item[RGB] uses RGB features without occlusion handling. \vspace{-2mm} \item[RGBD] uses RGBD features without occlusion handling. \vspace{-2mm} \item[RGBOcc] uses RGB features with occlusion handling.
\vspace{-2mm} \item[RGBDOcc] uses RGBD features with occlusion handling enabled, which is our complete baseline algorithm. \end{description} We also compare the baseline algorithms to four state-of-the-art RGB trackers: TLD \cite{tld}, CT \cite{ct}, MIL \cite{mil}, and Semi-B \cite{semiB}. The performance measured by CPE and the corresponding snapshots are shown in Figure \ref{fig:result}, and the success rates measured by overlap ratio are shown in Figure \ref{SR}. The error decomposition of each tracker is shown in Figure \ref{errRate}. Furthermore, we define an average ranking of the different algorithms, based on a combination of several indicators, as shown in Table \ref{table:ranking}. We can clearly see that the proposed baseline RGBD tracker significantly outperforms all others, which indicates that the extra depth map with some occlusion reasoning provides valuable information that helps to achieve a better tracking result. The proposed methods use very powerful but more computationally expensive classifiers (with hard negative mining) as well as a state-of-the-art optical flow algorithm, while other trackers mainly focus on real-time performance. Thus our RGB tracker is expected to have higher accuracy at the cost of longer running time. However, the effect of using depth data can still be seen by comparing the results of the tracker with depth input (RGBD) and without (RGB). With depth data, the error is reduced by 10.9\%. After enabling the occlusion handler of the RGBD tracker, its error rate decreases by a further 12.3\%. When compared with other state-of-the-art trackers, the proposed algorithm achieves an average 42.3\% reduction in error rate. In particular, when occlusion is present, occlusion detection and handling are critical to reduce error, as shown in Figures \ref{errRate} and \ref{SR} (b). Distinguishing three types of error helps analyze different sources of error.
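The per-frame classification into these three failure types can be sketched as follows (a sketch with our naming; $r$ is the overlap ratio of the frame, $r_t$ the acceptance threshold, and boxes may be None):

```python
def error_type(roi_t, roi_g, r, r_t):
    """Classify a frame into failure types I/II/III as defined above;
    returns None for a frame that is not counted as a failure."""
    if roi_t is not None and roi_g is not None:
        return "I" if r < r_t else None   # output far from a visible target
    if roi_t is not None and roi_g is None:
        return "II"                       # output although target invisible
    if roi_t is None and roi_g is not None:
        return "III"                      # no output although target visible
    return None                           # neither exists: counted as correct
```

Accumulating these labels over all frames of a sequence yields the decomposed error bars of Figure \ref{errRate}.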
For example, TLD and SemiB have a relatively high Type III error, suggesting that their models are sensitive to target appearance change or partial occlusion, while MIL, CT, and RGBD have a high Type II error, resulting from the lack of an active occlusion detection mechanism. However, each error type cannot be considered separately as a direct indicator of performance. For example, MIL and CT use target models which are less sensitive to occlusion and thus have a very low Type III error at the cost of a high Type II error when the target is occluded, and a possibly high Type I error in the frames following occlusion if the trackers are misled by the occluder. Our proposed tracker robustly handles different scenarios and achieves the lowest overall error rate. \begin{table*}[t] \caption{Evaluation results: success rate \% and corresponding ranking (in parentheses) under different categorizations.} \vspace{1mm} \centering \setlength{\tabcolsep}{5.1pt} { \small \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{algorithm} & \multirow{2}{*}{\specialcell{avg.\\ rank}} & \multicolumn{3}{c|}{target type} & \multicolumn{2}{c|}{target size} & \multicolumn{2}{c|}{speed} & \multicolumn{2}{c|}{occlusion} & \multicolumn{2}{c}{motion type}\tabularnewline \cline{3-13} & & human & animal & rigid & large& small&slow & fast & occ & no occ & passive & active\tabularnewline \hline RGBDOcc & 1 & 82.0(1) & 60.2(1) & 81.9(1) &82.9(1)&74.3(1)& 77.5(1) & 77.3(1) & 76.1(1) & 79.5(1) & 81.5(1) & 75.7(1)\tabularnewline \hline RGBD & 2.36 & 66.0(2)&58.3(2)&68.2(3)&69.5(2)&62.2(2)&64.9(2)&65.8(2)&61.1(3)&73.1(2)&67.7(3)&63.7(2)\tabularnewline \hline RGBOcc&2.63& 65.3 (3)&49.2(3)&73.4(2)&64.9(3)&62.0(3)&64.8(3)&63.1(3)&61.5(2)&66.5(3)&75.4(2)&58.1(3)\tabularnewline \hline RGB &4 & 54.7 (4)&48.4(4)&56.8(4)&55.4(4)&53.5(4)&55.5(4)&53.2(4)&50.7(4)&60.9(4)&62.0(4)&51.0(4)\tabularnewline \hline TLD\cite{tld} &6.09 &28.2(7) & 30.7(8) &43.7(5)&32.4(7) &38.5(5) & 44.8(6) & 29.5(5) & 34.3(5) &
39.3 (7)& 47.5(5) & 31.8(7)\tabularnewline \hline CT\cite{ct}& 6.27& 33.0 (6) & 43.6(5) & 33.4(8) &41.3(5) &33.5(7) &47.3(5) & 27.5(8) & 23.8(8) & 59.2(5) & 38.9(7) & 35.3(5)\tabularnewline \hline MIL\cite{mil}& 6.54 &34.3(5)& 34.8(6)& 34.2 (7)& 39.6(6) & 32.9(8) & 40.4(8) & 29.5 (5)& 27.7 (7)& 48.1(6) & 38.8(8) &34.0(6)\tabularnewline \hline SemiB\cite{semiB}& 6.90 & 26.1(8) & 31.7(7) & 38.8(6) &27.6(8) &35.0(6) & 44.8(6) & 29.0(7) & 31.8(6) & 34.1(8) & 46.3(6)&26.7(8)\tabularnewline \hline \end{tabular} } \label{table:ranking} \vspace{-3mm} \end{table*} \subsection{Discussion} From the evaluation results obtained in the previous section, we observe that traditional RGB trackers produce relatively high errors in the following scenarios: \paragraph{Target rotation and deformation} Target rotation, especially out-of-plane rotation, and deformation are the main causes of model drifting for traditional RGB trackers. Target appearance can change significantly after rotation or deformation, making recognition difficult. In the video ``stuffed bear'' (Figure \ref{fig:result} Row 1), TLD, CT, MIL and Semi-B lose the target when the stuffed bear starts to rotate out of plane. In ``basketball player'' (Figure \ref{fig:result} Row 2), those trackers gradually fail to follow the player as he moves his arms and legs. However, the RGBD tracker is robust in these situations, as it uses depth information to identify the target. The depth feature remains distinguishable when the similarity in RGB vanishes. \paragraph{Different types of occlusion} There are several factors which may affect the difficulty of tracking under occlusion: the size of the target's occluded portion, target movement or appearance variations during occlusion, the similarity between occluder and target, and background clutter. When partially occluded, the target appearance is less similar to the pre-trained models and often cannot pass the threshold.
In the video ``human face'' (Figure \ref{fig:result} Row 4), if only RGB data is available, fragment-based trackers can locate the target but sometimes mistake background clutter for the target, because with only part of the target visible, the detection confidence drops. Conservative approaches, which produce no output when the confidence is very low, often lose the target. When the target is completely occluded (video ``sign'', Figure \ref{fig:result} Row 3), optical flow tracking becomes uninformative. However, from depth data, our method is able to identify the occluder and raise the confidence in its neighboring 3D region, compensating for the confidence loss due to partial occlusion, and thus identifies the target more accurately. If the occlusion happens gradually, the occluder, if not excluded, slowly grows inside the target bounding box and finally dominates it (video ``student with bag'', Figure \ref{fig:result} Row 5). In this case, optical flow trackers and classifiers are often misled to track or detect the occluder. It is difficult for the trackers to make corrections afterwards because their models have been updated incorrectly. Our method detects occlusion more reliably by using depth data to recognize the occluder. By examining only objects around the occluder, we avoid outputting the occluder as the result and update the models accordingly. \section{Conclusions} Thanks to the great popularity of low-cost depth sensors in the consumer market, tracking can be made easier by using reliable depth data as input. Object depth data provides information for discriminating between different objects which cannot be obtained from RGB data alone. However, there are many questions about how valuable such reliable depth information is for handling occlusion and preventing model drift. In this paper, we construct a benchmark dataset of 100 RGBD videos with high diversity, including deformable objects, various occlusion conditions and moving cameras.
We propose a very simple but strong baseline model for RGBD tracking and present a quantitative comparison of several state-of-the-art tracking algorithms. We have demonstrated that by incorporating depth data, trackers can achieve better performance and handle occlusion more easily as well as more accurately. With depth data, the baseline RGBD tracker outperforms current state-of-the-art RGB trackers significantly. We believe that this benchmark dataset and baseline algorithm can provide a better comparison of different tracking algorithms and start a new wave of research advances in the field by making experimental evaluation more standardized and easily accessible. The datasets, evaluation details, source code of the baseline algorithm, and instructions for submitting new models will be made available online after acceptance. In future work, we would like to investigate stronger models for both RGB and RGBD tracking, such as modeling deformable objects by parts \cite{DPM,treeDPM}. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{./image/label.pdf} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear3.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear4.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear5.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear7.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear8.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bear9.png}\hspace*{-3pt} \includegraphics[width=0.205\linewidth]{./image/testCasepic/bearplot.pdf} \vspace{1.5mm} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bk1.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bk2.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bk3.png}\hspace*{-3pt}
\includegraphics[width=0.135\linewidth]{./image/testCasepic/bk4.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bk5.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/bk6.png}\hspace*{-3pt} \includegraphics[width=0.205\linewidth]{./image/testCasepic/basketballplot.pdf}\hspace*{-3pt} \vspace{1.5mm} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static1.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static2.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static3.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static5.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static6.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/static7.png}\hspace*{-3pt} \includegraphics[width=0.205\linewidth]{./image/testCasepic/static.pdf}\hspace*{-3pt} \vspace{1.5mm} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face1.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face3.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face4.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face8.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face10.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/face11.png}\hspace*{-3pt} \includegraphics[width=0.205\linewidth]{./image/testCasepic/faceplot.pdf}\hspace*{-3pt} \vspace{1.5mm} \includegraphics[width=0.135\linewidth]{./image/testCasepic/sc1.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/sc2.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/sc3.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/sc5.png}\hspace*{-3pt} 
\includegraphics[width=0.135\linewidth]{./image/testCasepic/sc6.png}\hspace*{-3pt} \includegraphics[width=0.135\linewidth]{./image/testCasepic/sc7.png}\hspace*{-3pt} \includegraphics[width=0.205\linewidth]{./image/testCasepic/scplot.pdf}\hspace*{-3pt} \caption{Example results comparing several approaches. More results are available in the supplementary videos. For the sake of clarity we only plot the center position error (CPE) of RGBDOcc, TLD, CT, MIL, Semi-B. The CPE is undefined when trackers fail to output a bounding box or there is no ground truth bounding box (target is totally occluded).} \label{fig:result} \vspace{-3mm} \end{figure*} {\small \bibliographystyle{ieee}
\section{Introduction} Confocal quadrics in $\mathbb{R}^N$ belong to the favorite objects of classical mathematics, due to their beautiful geometric properties and numerous relations and applications to various branches of mathematics. To mention just a few well-known examples: \begin{itemize} \item Optical properties of quadrics and their confocal families were discovered by the ancient Greeks and continued to fascinate mathematicians for many centuries, culminating in the famous Ivory and Chasles theorems of the 19th century, given a modern interpretation by Arnold \cite{Arnold}. \item Dynamical systems: integrability of geodesic flows on quadrics (discovered by Jacobi) and of billiards in quadrics was given a far-reaching generalization, with applications to spectral theory, by Moser \cite{Moser}. \item Gravitational properties of ellipsoids were studied in detail starting with Newton and Ivory, see \cite[Appendix 15]{Arnold}, \cite[Part 8]{Fuchs-Tabachnikov}, and are based to a large extent on the geometric properties of confocal quadrics. \item Quadrics in general and confocal systems of quadrics in particular serve as favorite objects in differential geometry. They deliver a non-trivial example of isothermic surfaces, which form one of the most interesting classes of ``integrable'' surfaces, that is, surfaces described by integrable differential equations and possessing a rich theory of transformations with remarkable permutability properties. \item Confocal quadrics lie at the heart of the system of confocal coordinates, which allows for separation of variables in the Laplace operator. As such, they support a rich theory of special functions including Lam\'e functions and their generalizations \cite{Bateman-Erdelyi}. \end{itemize} In the present paper, we are interested in a discretization of a system of confocal quadrics, or, what is the same, of a system of confocal coordinates in $\mathbb{R}^N$.
According to the philosophy of structure preserving discretization \cite{bobenko-suris}, it is crucial not to follow the path of a straightforward discretization of differential equations, but rather to discretize a well chosen collection of essential geometric properties. In the case of confocal quadrics, the choice of properties to be preserved in the course of discretization becomes especially difficult, due to the above-mentioned abundance of complementary geometric and analytic features. A number of attempts to discretize quadrics in general and confocal systems of quadrics in particular are available in the literature. In \cite{tsukerman} a discretization of the defining property of a conic as an image of a circle under a projective transformation is considered. Since a natural discretization of a circle is a regular polygon, one ends up with a class of discrete curves which are projective images of regular polygons. More sophisticated geometric constructions are developed in \cite{Akopyan-Bobenko} and lead to a very interesting class of quadrilateral nets in a plane and in space, with all quadrilaterals possessing an incircle, resp. all hexahedra possessing an inscribed sphere. The rich geometric content of these constructions still waits for an adequate analytic description. Our approach here is based on a discretization of the classical Euler-Poisson-Darboux equation which has been introduced in \cite{konopelchenko-schief} in the context of discretization of semi-Hamiltonian systems of hydrodynamic type. The discrete Euler-Poisson-Darboux equation is integrable in the sense of multi-dimensional consistency \cite{bobenko-suris}, which, in turn, gives rise to Darboux-type transformations with remarkable permutability properties. As we will demonstrate, the integrable nature of the discrete Euler-Poisson-Darboux equation is reflected in the preservation of a suite of algebraic and geometric properties of the confocal coordinate systems. 
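As a small numerical illustration of the fact, used throughout this construction, that confocal coordinates satisfy the continuous Euler-Poisson-Darboux equation with $\gamma = 1/2$, the sketch below evaluates the classical explicit parametrization $x_k^2 = \prod_m (a_k + u_m) / \prod_{m \neq k} (a_k - a_m)$ (which follows from the factorized identity for the confocal family) at an admissible point and checks the equation by central finite differences; the semi-axes $a_k$ and the point $u$ are arbitrary admissible choices of ours.

```python
import numpy as np

# Semi-axes a_1 > a_2 > a_3 > 0 and an admissible point
# -a_1 < u_1 < -a_2 < u_2 < -a_3 < u_3 (our arbitrary choices).
a = np.array([3.0, 2.0, 1.0])
u0 = np.array([-2.5, -1.5, -0.5])

def x(u):
    """Classical confocal parametrization:
    x_k^2 = prod_m (a_k + u_m) / prod_{m != k} (a_k - a_m)."""
    N = len(a)
    out = np.empty(N)
    for k in range(N):
        num = np.prod(a[k] + u)
        den = np.prod([a[k] - a[m] for m in range(N) if m != k])
        out[k] = np.sqrt(num / den)
    return out

# Check the Euler-Poisson-Darboux equation with gamma = 1/2 for the first
# two coordinate directions (0-based indices i = 0, j = 1), approximating
# the derivatives by central finite differences of step h.
i, j, h, gamma = 0, 1, 1e-4, 0.5
e_i, e_j = np.eye(3)[i], np.eye(3)[j]
d_i = (x(u0 + h * e_i) - x(u0 - h * e_i)) / (2 * h)
d_j = (x(u0 + h * e_j) - x(u0 - h * e_j)) / (2 * h)
d_ij = (x(u0 + h * (e_i + e_j)) - x(u0 + h * (e_i - e_j))
        - x(u0 - h * (e_i - e_j)) + x(u0 - h * (e_i + e_j))) / (4 * h ** 2)
residual = d_ij - gamma / (u0[i] - u0[j]) * (d_j - d_i)
```

The residual vanishes up to finite-difference error only for $\gamma = 1/2$; repeating the computation with, say, $\gamma = 1$ leaves a visibly nonzero residual. The same parametrization also places $x(u_0)$ on all three confocal quadrics through the point, which the checks below confirm.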
Our proposal takes as a departure point two properties of the confocal coordinates: they are separable, and all two-dimensional coordinate subnets are isothermic surfaces (which is equivalent to being conjugate nets with equal Laplace invariants and with orthogonal coordinate curves). We propose here a novel concept of discrete isothermic nets. Remarkably, the incircular nets of \cite{Akopyan-Bobenko} turn out to be another instance of this geometry, see Appendix \ref{sect: incircular}. Discretization of confocal coordinate systems based on more general curvature line parametrizations will be addressed in \cite{BSST-II}. \medskip {\bf Acknowledgements.} This research is supported by the DFG Collaborative Research Center TRR 109 ``Discretization in Geometry and Dynamics''. We would like to acknowledge the stimulating role of the research visit to TU Berlin by I. Taimanov in summer 2014. In our construction, we combine the fact that the confocal coordinate system satisfies continuous Euler-Poisson-Darboux equations, which we learned from I. Taimanov, with the discretization of Euler-Poisson-Darboux equations recently proposed (in a different context) by one of the authors \cite{konopelchenko-schief}. The pictures in this paper were generated using {\tt blender}, {\tt matplotlib}, {\tt geogebra}, and {\tt inkscape}. \section{Euler-Poisson-Darboux equation} \begin{definition} Let $U \subset \mathbb{R}^M$ be open and connected. 
We say that a net \begin{equation*} \mbox{\boldmath $x$} : U \rightarrow \mathbb{R}^N, \quad (u_1, \ldots, u_M) \mapsto (x_1, \ldots, x_N) \end{equation*} satisfies the \emph{Euler-Poisson-Darboux system} if all its two-dimensional subnets satisfy the (vector) Euler-Poisson-Darboux equation with the same parameter $\gamma$: \begin{equation} \frac{\partial^2 \mbox{\boldmath $x$}}{\partial u_i \partial u_j} = \frac{\gamma}{u_i - u_j} \left( \frac{\partial \mbox{\boldmath $x$}}{\partial u_j} - \frac{\partial \mbox{\boldmath $x$}}{\partial u_i} \right) \tag{EPD$_\gamma$} \label{eq:Euler-Darboux} \end{equation} for all $i,j \in\{ 1, \ldots, M\}$, $i \neq j$. \end{definition} For any $s$ distinct indices $i_1, \ldots, i_s \in \{ 1, \ldots, M\}$, we write \begin{equation*} U_{i_1 \ldots i_s} \coloneqq \set{ (u_{i_1}, \ldots, u_{i_s}) \in \mathbb{R}^s }{ (u_1, \ldots, u_M) \in U }. \end{equation*} \begin{definition} A two-dimensional subnet of a net $\mbox{\boldmath $x$} : \mathbb{R}^M \supset U \rightarrow \mathbb{R}^N$ corresponding to the coordinate directions $i,j\in\{ 1, \ldots, M\}$, $i \neq j$, is called \emph{a Koenigs net}, or, classically, \emph{a conjugate net with equal Laplace invariants}, if there exists a function $ \nu: U_{ij} \rightarrow \mathbb{R}_+ $ such that \begin{equation} \frac{\partial^2 \mbox{\boldmath $x$}}{\partial u_i \partial u_j} = \frac{1}{\nu} \frac{\partial \nu}{\partial u_j} \frac{\partial \mbox{\boldmath $x$}}{\partial u_i} + \frac{1}{\nu} \frac{\partial \nu}{\partial u_i} \frac{\partial \mbox{\boldmath $x$}}{\partial u_j}. \label{eq:Koenigs-net} \end{equation} \end{definition} \begin{proposition} \label{prop:Koenigs} Let $\mbox{\boldmath $x$} : \mathbb{R}^M \supset U \rightarrow \mathbb{R}^N$ be a net satisfying the Euler-Poisson-Darboux system {\rm \refeq{eq:Euler-Darboux}}. Then all two-dimensional subnets of $\mbox{\boldmath $x$}$ are Koenigs nets. 
\end{proposition} \begin{proof} The function $ \nu(u_i, u_j) = |u_i - u_j|^\gamma $ solves \begin{equation} \label{eq:Koenigs-condition} \frac{1}{\nu} \frac{\partial \nu}{\partial u_i} = \frac{\gamma}{u_i - u_j},\qquad \frac{1}{\nu} \frac{\partial \nu}{\partial u_j} = \frac{\gamma}{u_j - u_i}, \end{equation} thus the Euler-Poisson-Darboux system \refeq{eq:Euler-Darboux} is of the Koenigs form \refeq{eq:Koenigs-net}. \end{proof} \begin{remark} Eisenhart classified conjugate nets in $\mathbb{R}^3$ with all two-dimensional coordinate surfaces being Koenigs nets \cite{eisenhart}. The generic case is described by solutions of the Euler-Poisson-Darboux system \refeq{eq:Euler-Darboux} with an arbitrary coefficient $\gamma$. \end{remark} \section{Confocal coordinates} For given $a_1 > a_2 > \cdots > a_N > 0$, we consider the one-parameter family of confocal quadrics in $\mathbb{R}^N$ given by \begin{equation} \label{eq:confocal-family} Q_\lambda=\left\{ \mbox{\boldmath $x$}=(x_1,\ldots,x_N)\in\mathbb R^N:\ \sum_{k=1}^N \frac{x_k^2}{a_k + \lambda} = 1\right\}, \quad \lambda \in \mathbb{R}. \end{equation} Note that the quadrics of this family are centered at the origin and have the principal axes aligned along the coordinate directions. For a given point $\mbox{\boldmath $x$}=(x_1, \ldots, x_N) \in \mathbb{R}^N$ with $x_1x_2\ldots x_N\neq 0$, equation $\sum_{k=1}^N x_k^2/(a_k + \lambda) = 1$ is, after clearing the denominators, a polynomial equation of degree $N$ in $\lambda$, with $N$ real roots $u_1,\ldots, u_N$ lying in the intervals \begin{equation*} -a_1 < u_1 < -a_2 < u_2 < \cdots < -a_N < u_N, \end{equation*} so that \begin{equation} \label{eq: equation factorized} \sum_{k=1}^N \frac{x_k^2}{\lambda+a_k} - 1= - \frac{\prod_{m=1}^N (\lambda - u_m)}{\prod_{m=1}^N (\lambda+a_m) }. 
\end{equation} These $N$ roots correspond to the $N$ confocal quadrics of the family \eqref{eq:confocal-family} that intersect at the point $\mbox{\boldmath $x$}=(x_1, \ldots, x_N)$: \begin{equation} \label{eq:confocal-quadrics} \sum_{k=1}^N \frac{x_k^2}{a_k + u_i} = 1, \quad i=1,\ldots,N \quad\Leftrightarrow\quad \mbox{\boldmath $x$}\in \bigcap_{i=1}^N Q_{u_i}. \end{equation} Each of the quadrics $Q_{u_i}$ is of a different signature. Evaluating the residue of the right-hand side of \eqref{eq: equation factorized} at $\lambda=-a_k$, one can easily express $x_k^2$ through $u_1, \ldots, u_N$: \begin{equation} \label{eq:elliptic-coordinates-squares} x_k^2 = \frac{\prod_{i=1}^N (u_i+a_k)}{\prod_{i \neq k} (a_k - a_i)}, \quad k = 1, \ldots, N. \end{equation} Thus, for each point $(x_1, \ldots, x_N) \in \mathbb{R}^N$ with $x_1x_2\ldots x_N\neq 0$, there is exactly one solution $(u_1, \ldots, u_N) \in \mathcal{U}$ of \refeq{eq:elliptic-coordinates-squares}, where \begin{equation} \mathcal{U} = \set{ (u_1,\ldots,u_N) \in \mathbb{R}^N }{ -a_1 < u_1 < -a_2 < u_2 < \ldots < -a_N < u_N }. \label{eq:domain} \end{equation} On the other hand, for each $(u_1, \ldots, u_N) \in \mathcal{U}$ there are exactly $2^N$ solutions $(x_1, \ldots, x_N) \in \mathbb{R}^N$, which are mirror symmetric with respect to the coordinate hyperplanes. In what follows, when we refer to a solution of \refeq{eq:elliptic-coordinates-squares}, we always mean the solution with values in \begin{equation*} \mathbb{R}_+^N = \set{ (x_1, \ldots, x_N) \in \mathbb{R}^N }{ x_1 > 0, \ldots, x_N > 0 }. 
\end{equation*} Thus, we are dealing with a parametrization of the first hyperoctant of $\mathbb{R}^N$, $\mbox{\boldmath $x$} : \mathcal{U}\ni (u_1, \ldots, u_N) \mapsto (x_1,\ldots,x_N)\in\mathbb{R}_+^N$, given by \begin{equation} \label{eq:elliptic-coordinates} x_k = \frac{\prod_{i=1}^{k-1} \sqrt{-(u_i+a_k)}\prod_{i=k}^N\sqrt{u_i+a_k}}{\prod_{i = 1}^{k-1} \sqrt{a_i - a_k}\prod_{i=k+1}^N\sqrt{a_k-a_i}}, \quad k = 1, \ldots, N, \end{equation} such that the coordinate hyperplanes $u_i = {\rm const}$ are mapped to the respective quadrics given by \refeq{eq:confocal-quadrics}. The coordinates $(u_1,\ldots,u_N)$ are called \emph{confocal coordinates} (or {\em elliptic coordinates}, following Jacobi \cite[Vorlesung 26]{jacobi}). \subsection{Confocal coordinates and isothermic surfaces} \begin{proposition} \label{prop:confocal-Euler-Darboux} The net $\mbox{\boldmath $x$} : \mathcal{U} \rightarrow \mathbb{R}_+^N$ given by \refeq{eq:elliptic-coordinates} satisfies the Euler-Poisson-Darboux system {\rm \refeq{eq:Euler-Darboux}} with $\gamma = \frac{1}{2}$. As a consequence, all two-dimensional subnets of $\mbox{\boldmath $x$}$ are Koenigs nets. \end{proposition} \begin{proof} The partial derivatives of \refeq{eq:elliptic-coordinates} satisfy \begin{equation} \label{eq:first-derivatives} \frac{\partial x_k}{\partial u_i} = \frac{1}{2} \frac{x_k}{(a_k + u_i)}. \end{equation} From this we compute the second order partial derivatives for $i \neq j$: \begin{align} \label{eq:second-derivatives} \frac{\partial^2 x_k}{\partial u_i \partial u_j} &= \frac{1}{2(a_k + u_i)} \frac{\partial x_k}{\partial u_j} \ = \ \frac{x_k}{4(a_k + u_i)(a_k + u_j)} \nonumber \\ &= \frac{x_k}{4(u_i-u_j)} \left( \frac{1}{a_k+u_j} - \frac{1}{a_k+u_i} \right) \nonumber \\ &= \frac{1}{2(u_i - u_j)} \left( \frac{\partial x_k}{\partial u_j} - \frac{\partial x_k}{\partial u_i} \right). 
\qedhere \end{align} \end{proof} \begin{proposition} \label{prop:orthogonal} The net $\mbox{\boldmath $x$} : \mathcal{U} \rightarrow \mathbb{R}_+^N$ given by \refeq{eq:elliptic-coordinates} is orthogonal, and thus gives a curvature line parametrization of any of its two-dimensional coordinate surfaces. \end{proposition} \begin{proof} With the help of \eqref{eq:first-derivatives}, we compute, for $i \neq j$, the scalar product \begin{align*} \left\langle \frac{\partial \mbox{\boldmath $x$}}{\partial u_i}, \frac{\partial \mbox{\boldmath $x$}}{\partial u_j} \right\rangle &= \frac{1}{4} \sum_{k=1}^{N} \frac{x_k^2}{(a_k + u_i)(a_k + u_j)}\\ &= \frac{1}{4(u_i - u_j)} \sum_{k=1}^{N} \left( \frac{x_k^2}{a_k + u_j} - \frac{x_k^2}{a_k + u_i} \right) = 0, \end{align*} since $\mbox{\boldmath $x$}(u_1, \ldots, u_N)$ satisfies \refeq{eq:confocal-quadrics} for $u_i$ and for $u_j$. \end{proof} We recall the following classical definition. \begin{definition} \label{dfn:is} A curvature line parametrized surface $\mbox{\boldmath $x$}: U_{ij}\to\mathbb R^N$ is called an {\em isothermic surface} if its first fundamental form is conformal, possibly upon a reparametrization of independent variables $u_i\mapsto\varphi_i(u_i)$, $u_j\mapsto\varphi_j(u_j)$, that is, if $$ \frac{|\partial \mbox{\boldmath $x$} /\partial u_i |^2}{|\partial \mbox{\boldmath $x$}/\partial u_ j |^2}=\frac{\alpha_i(u_i)}{\alpha_j(u_j)} $$ at every point $(u_i,u_j)\in U_{ij}$. 
\end{definition} In other words, isothermic surfaces are characterized by the relations $\partial^2 \mbox{\boldmath $x$}/\partial u_i\partial u_j\!\in{\rm span}(\partial \mbox{\boldmath $x$}/\partial u_i,\partial \mbox{\boldmath $x$}/\partial u_j)$ and \begin{equation}\label{eq:is prop} \left\langle\frac{\partial \mbox{\boldmath $x$}}{\partial u_i},\frac{\partial \mbox{\boldmath $x$}}{\partial u_j}\right\rangle=0,\quad \left|\frac{\partial \mbox{\boldmath $x$}}{\partial u_i}\right|^2=\alpha_i(u_i) s^2,\quad \left|\frac{\partial \mbox{\boldmath $x$}}{\partial u_j}\right|^2=\alpha_j(u_j) s^2, \end{equation} with a conformal metric coefficient $s:U_{ij}\to\mathbb R_+$ and with the functions $\alpha_i$, $\alpha_j$, each depending on the respective variable $u_i$, $u_j$ only. These conditions may be equivalently represented as \begin{equation}\label{eq:is prop1} \frac{\partial^2 \mbox{\boldmath $x$}}{\partial u_i\partial u_j}= \frac{1}{s}\frac{\partial s}{\partial u_j} \frac{\partial \mbox{\boldmath $x$}}{\partial u_i}+ \frac{1}{s}\frac{\partial s}{\partial u_i} \frac{\partial \mbox{\boldmath $x$}}{\partial u_j}, \quad \left\langle\frac{\partial \mbox{\boldmath $x$}}{\partial u_i}, \frac{\partial \mbox{\boldmath $x$}}{\partial u_j}\right\rangle=0. \end{equation} Comparison with \eqref{eq:Koenigs-net} shows that isothermic surfaces are nothing but orthogonal Koenigs nets. Thus, Propositions \ref{prop:confocal-Euler-Darboux}, \ref{prop:orthogonal} immediately imply the first statement of the following proposition. \begin{proposition} \label{prop: isothermic} All two-dimensional coordinate surfaces $\mbox{\boldmath $x$}:\mathcal U_{ij}\to\mathbb R^N$ (for fixed values of $u_m$, $m\neq i,j$) of a confocal coordinate system are isothermic. 
Specifically, one has \eqref{eq:is prop} with \begin{equation} \label{eq: s} s=s(u_i,u_j)=|u_i-u_j |^{1/2}, \end{equation} \begin{equation} \label{eq: alphas} \frac{\alpha_i(u_i)}{\alpha_j(u_j)} = -\frac{\prod_{m\neq i,j}(u_i - u_m)}{\prod_{m=1}^N (u_i + a_m)}\cdot \frac{\prod_{m=1}^N (u_j + a_m)}{\prod_{m\neq i,j}(u_j - u_m)}. \end{equation} \end{proposition} \begin{proof} Differentiate both sides of \eqref{eq: equation factorized} with respect to $u_i$. Taking into account \eqref{eq:first-derivatives}, we find: \begin{equation*} \sum_{k=1}^N \frac{x_k^2}{(u_i+a_k)(\lambda+a_k)} = \frac{\prod_{m\neq i} (\lambda - u_m)}{\prod_{m=1}^N (\lambda+a_m) }. \end{equation*} Setting $\lambda=u_i$, we finally arrive at \begin{equation} \label{eq: id squares} \left | \frac{\partial \mbox{\boldmath $x$}}{\partial u_i} \right |^2 =\sum_{k=1}^N\left(\frac{\partial x_k}{\partial u_i}\right)^2 =\frac{1}{4}\sum_{k=1}^N \frac{x_k^2}{(u_i+a_k)^2} = \frac{1}{4}\frac{\prod_{m\neq i} (u_i - u_m)}{\prod_{m=1}^N (u_i+a_m) }. \end{equation} This proves \eqref{eq:is prop} with \eqref{eq: s}, \eqref{eq: alphas}. \end{proof} \begin{remark} Darboux classified orthogonal nets in $\mathbb{R}^3$ whose two-dimensional coordinate surfaces are isothermic \cite[Livre II, Chap. III--V]{darboux} . He found several families, all satisfying the Euler-Poisson-Darboux system with coefficient $\gamma = \pm \frac{1}{2}, -1$, or $-2$. The family corresponding to $\gamma=\frac{1}{2}$ consists of confocal cyclides and includes the case of confocal quadrics (or their M\"obius images). \end{remark} \subsection{Confocal coordinates and separability} From \refeq{eq:elliptic-coordinates} we observe that confocal coordinates are described by very special (separable) solutions of Euler-Poisson-Darboux equations \refeq{eq:Euler-Darboux}. We will now show that the separability property is almost characteristic for confocal coordinates. 
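As a sanity check, the explicit formulas above lend themselves to direct numerical verification. The following Python sketch (the semi-axes $a_k$ and the point $(u_1,u_2,u_3)\in\mathcal U$ are our own sample choices) evaluates \eqref{eq:elliptic-coordinates-squares} and confirms that the resulting point lies on the three confocal quadrics \eqref{eq:confocal-quadrics}, that the coordinate directions are pairwise orthogonal, and that the squared metric coefficients agree with \eqref{eq: id squares}.

```python
from math import prod, sqrt

# Sample data (our own choice): semi-axes a_1 > a_2 > a_3 > 0 and a point
# of the parameter domain -a_1 < u_1 < -a_2 < u_2 < -a_3 < u_3.
a = [3.0, 2.0, 1.0]
u = [-2.5, -1.5, -0.2]
N = 3

# Squared coordinates from (eq:elliptic-coordinates-squares):
x2 = [prod(u[i] + a[k] for i in range(N))
      / prod(a[k] - a[i] for i in range(N) if i != k)
      for k in range(N)]
x = [sqrt(v) for v in x2]

# The point lies on the three confocal quadrics Q_{u_1}, Q_{u_2}, Q_{u_3}:
for i in range(N):
    assert abs(sum(x2[k] / (a[k] + u[i]) for k in range(N)) - 1.0) < 1e-12

# First derivatives from (eq:first-derivatives): dx_k/du_i = x_k/(2(a_k + u_i)).
def dx(i):
    return [x[k] / (2.0 * (a[k] + u[i])) for k in range(N)]

# Orthogonality of the coordinate directions (Proposition prop:orthogonal):
for i in range(N):
    for j in range(i + 1, N):
        assert abs(sum(p * q for p, q in zip(dx(i), dx(j)))) < 1e-12

# Squared metric coefficients, identity (eq: id squares):
for i in range(N):
    lhs = sum(v * v for v in dx(i))
    rhs = 0.25 * prod(u[i] - u[m] for m in range(N) if m != i) \
               / prod(u[i] + a[m] for m in range(N))
    assert abs(lhs - rhs) < 1e-12
```

All checks are exact up to floating-point round-off, reflecting the fact that the identities hold algebraically and not only approximately.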
\begin{proposition} \label{prop:Euler-Darboux-separable-solutions} A separable function $x : \mathbb{R}^N \supset U \rightarrow \mathbb{R}$, \begin{equation} \label{eq:separable-solution} x(u_1, \ldots, u_N) = \prod_{i=1}^N \rho_i(u_i) \end{equation} is a solution of the Euler-Poisson-Darboux system {\rm \refeq{eq:Euler-Darboux}} iff the functions $\rho_i : U_i \rightarrow \mathbb{R}$, $i = 1, \ldots, N$, satisfy \begin{equation} \label{eq:separable-solution-dgl} \frac{\rho_i^\prime(u_i)}{\rho_i(u_i)} = \frac{\gamma}{c + u_i} \end{equation} for some $c \in \mathbb{R}$ and for all $u_i \in U_i$. \end{proposition} \begin{proof} Computing the derivatives of \refeq{eq:separable-solution} for $i = 1,\ldots,N,$ we obtain: \begin{equation*} \frac{\partial x}{\partial u_i} = x\cdot \frac{\rho_i^{\prime}(u_i)}{\rho_i(u_i)}, \end{equation*} and for the second derivatives ($i \neq j$): \begin{equation} \label{eq:separable-second-derivatives1} \frac{\partial^2 x}{\partial u_i \partial u_j} = x\cdot \frac{\rho_i^\prime(u_i)}{\rho_i(u_i)} \frac{\rho_j^\prime(u_j)}{\rho_j(u_j)}. \end{equation} On the other hand, $x$ satisfies the Euler-Poisson-Darboux system \refeq{eq:Euler-Darboux}, which implies \begin{equation} \label{eq:separable-second-derivatives2} \frac{\partial^2 x}{\partial u_i \partial u_j} = \frac{\gamma}{u_i - u_j} \left( \frac{\rho_j^{\prime}(u_j)}{\rho_j(u_j)} - \frac{\rho_i^{\prime}(u_i)}{\rho_i(u_i)} \right) x. \end{equation} From \refeq{eq:separable-second-derivatives1} and \refeq{eq:separable-second-derivatives2} we obtain \[ u_i - u_j = \gamma \left( \frac{\rho_i(u_i)}{\rho_i^{\prime}(u_i)} - \frac{\rho_j(u_j)}{\rho_j^{\prime}(u_j)} \right), \] or \[ \gamma \frac{\rho_i(u_i)}{\rho_i^{\prime}(u_i)} - u_i = \gamma \frac{\rho_j(u_j)}{\rho_j^{\prime}(u_j)} - u_j \] for all $i, j = 1, ..., N$, $i \neq j$, and $(u_i, u_j) \in U_{ij}$. Thus, both the left-hand side and the right-hand side of the last equation do not depend on $u_i,u_j$. 
So, there exists a $c \in \mathbb{R}$ such that \refeq{eq:separable-solution-dgl} is satisfied. \end{proof} For $\gamma = \frac{1}{2}$ general solutions of \refeq{eq:separable-solution-dgl} are given, up to constant factors, by \begin{alignat*}{2} \rho_i(u_i, c) &= \sqrt{u_i +c} &&\quad \text{for} \quad u_i > - c, \intertext{respectively by } \rho_i(u_i,c) &=\sqrt{-(u_i + c)} &&\quad \text{for} \quad u_i < - c. \end{alignat*} A separable solution of the Euler-Poisson-Darboux system \refeq{eq:Euler-Darboux} with $\gamma = \frac{1}{2}$ finally takes the form \begin{equation*} x(u_1, \ldots, u_N) = D \prod_{i=1}^N \rho_i(u_i,c) \end{equation*} with some $c \in \mathbb{R}$, and with a constant $D \in \mathbb{R}$, which is the product of all the constant factors of $\rho_i(u_i,c)$ mentioned above. \begin{proposition} \label{prop:Euler-Darboux-confocal-quadrics} Let $a_1 > \cdots > a_N$ and set \begin{equation*} \mathcal{U} = \set{ (u_1,\ldots,u_N) \in \mathbb{R}^N }{ -a_1 < u_1 < -a_2 < u_2 < \cdots < -a_N < u_N }. \end{equation*} \begin{itemize} \item[{\rm a)}] Let $x_k : \mathcal{U} \rightarrow \mathbb{R}_+$, $k = 1, \ldots, N$, be $N$ independent separable solutions of the Euler-Poisson-Darboux system {\rm \refeq{eq:Euler-Darboux}} with $\gamma = \frac{1}{2}$ defined on $\mathcal U$ and satisfying there the following boundary conditions: \begin{align} \label{eq:boundary-conditions1} \lim_{u_k \searrow \ (- a_k)}x_k(u_1, \ldots, u_N) = 0 \quad &\text{for} \quad k = 1, \ldots, N,\\ \label{eq:boundary-conditions2} \lim_{u_{k-1} \nearrow \ (- a_k)}x_k(u_1, \ldots, u_N) = 0 \quad &\text{for} \quad k = 2, \ldots, N. 
\end{align} Then \begin{equation}\label{eq:Euler-Darboux-separable-solutions} x_k(u_1, \ldots, u_N) = D_k \prod_{i=1}^N \rho_i(u_i,a_k),\quad k = 1, \ldots, N, \end{equation} with some $D_1, \ldots, D_N > 0$ and with \begin{equation}\label{eq:rhos} \rho_i(u_i,a_k)=\left\{\begin{array}{ll} \sqrt{u_i+a_k} \quad & \text{for} \quad i\ge k, \\ \\ \sqrt{-(u_i+a_k)} \quad & \text{for} \quad i<k. \end{array}\right. \end{equation} Thus, the net $\mbox{\boldmath $x$} =(x_1, \ldots, x_N):\mathcal U\to\mathbb{R}_+^N$ coincides with the confocal coordinates \eqref{eq:elliptic-coordinates} on the positive hyperoctant, up to independent scaling along the coordinate axes $(x_1, \ldots, x_N) \mapsto (C_1 x_1, \ldots, C_N x_N)$ with some $C_1, \ldots, C_N >0$. \item[{\rm b)}] The choice of the constants $D_1,\ldots,D_N > 0$ that specifies the system of confocal coordinates \eqref{eq:elliptic-coordinates} among the separable solutions \eqref{eq:Euler-Darboux-separable-solutions}, namely \begin{equation} D_k^{-2}=\prod_{i<k}(a_i-a_k)\prod_{i>k}(a_k-a_i), \end{equation} is the unique scaling (up to a common factor) such that the parameter curves are pairwise orthogonal. \end{itemize} \end{proposition} \begin{proof} a) We have \begin{equation*} x_k(u_1, \ldots, u_N) = D_k\cdot \rho_1(u_1,c_k) \cdot \ldots \cdot \rho_N(u_N,c_k),\quad k = 1, \ldots, N. \end{equation*} Boundary conditions \refeq{eq:boundary-conditions1}, \eqref{eq:boundary-conditions2} yield that the constants are given by $c_k = a_k$, and that the solutions are actually given by \eqref{eq:Euler-Darboux-separable-solutions}. Formulas \refeq{eq:elliptic-coordinates} are now equivalent to a specific choice of the constants $D_k$. 
b) From \eqref{eq:first-derivatives} we compute: \begin{equation}\label{eq: cont scalar product} \left\langle \frac{\partial\mbox{\boldmath $x$}}{\partial u_i}, \frac{\partial\mbox{\boldmath $x$}}{\partial u_j}\right\rangle = \frac{1}{4}\sum_{k=1}^N \frac{x_k^2}{(u_i+a_k)(u_j+a_k)} = \frac{1}{4} \sum_{k=1}^N (-1)^{k-1} D_k^2 \prod_{l\neq i,j} (u_l + a_k). \end{equation} We have: \[ \prod_{l\neq i,j} (u_l + a_k) = \sum_{m=0}^{N-2} p^{(N-2-m)}_{ij}(\mbox{\boldmath $u$})a_k^m, \] where $p_{ij}^{(N-2-m)}(\mbox{\boldmath $u$})$ is the elementary symmetric polynomial of degree $N-2-m$ in the variables $u_l$, $l\neq i,j$. Thus, \begin{align*} \left\langle \frac{\partial\mbox{\boldmath $x$}}{\partial u_i},\frac{\partial\mbox{\boldmath $x$}}{\partial u_j} \right\rangle &= \frac{1}{4}\sum_{m=0}^{N-2} \left( \sum_{k=1}^N (-1)^{k-1} a_k^m D_k^2 \right) p^{(N-2-m)}_{ij}(\mbox{\boldmath $u$}). \end{align*} Since the polynomials $p^{(N-2-m)}_{ij}(\mbox{\boldmath $u$})$ are linearly independent as functions on $\mathcal{U}$, the latter expression is equal to zero if and only if \[ \sum_{k=1}^N (-1)^{k-1} a_k^m D_k^2 = 0, \qquad m= 0, \ldots, N-2. \] This system of $N-1$ linear homogeneous equations for the $N$ unknowns $D_k^2$ does not depend on $i,j$. Supplementing it with the non-homogeneous equation $\sum_{k=1}^N (-1)^{k-1} a_k^{N-1} D_k^2 = 1$, we find the unique solution of the resulting system as quotients of Vandermonde determinants, or finally $(-1)^{k-1}D_k^2=1/\prod_{i\neq k} (a_k-a_i)$. \end{proof} \begin{remark} The boundary conditions ensure that the $2N-1$ faces of the boundary of $\mathcal{U}$ are mapped into coordinate hyperplanes. Their images are degenerate quadrics of the confocal family \refeq{eq:confocal-family}.
\end{remark} \section{Discrete Koenigs nets} \label{sect: dKoenigs} For a function $f$ on $\mathbb{Z}^M$ we define the \emph{difference operator} in the standard way: \begin{equation*} \Delta_i f(\mbox{\boldmath $n$}) = f(\mbox{\boldmath $n$} + \mbox{\boldmath $e$}_i) - f(\mbox{\boldmath $n$}) \end{equation*} for all $\mbox{\boldmath $n$} \in \mathbb{Z}^M$, where $\mbox{\boldmath $e$}_i$ is the $i$-th coordinate vector of $\mathbb{Z}^M$. \begin{definition} A two-dimensional discrete net $\mbox{\boldmath $x$} : \mathbb{Z}^M \supset U_{ij} \rightarrow \mathbb{R}^N$ corresponding to the coordinate directions $i,j \in \{1, \ldots, M\}$, $i \neq j$, is called \emph{a discrete Koenigs net} if there exists a function $ \nu : U_{ij} \rightarrow \mathbb{R}_+ $ such that \begin{equation} \label{eq:discrete-Koenigs} \Delta_i \Delta_j \mbox{\boldmath $x$} = \frac{\nu_{(j)} \nu_{(ij)} - \nu \nu_{(i)}}{\nu(\nu_{(i)} + \nu_{(j)})} \Delta_i \mbox{\boldmath $x$} + \frac{\nu_{(i)} \nu_{(ij)} - \nu \nu_{(j)}}{\nu(\nu_{(i)} + \nu_{(j)})} \Delta_j \mbox{\boldmath $x$}. \end{equation} Here we use index notation to denote shifts of $\nu$: $$ \nu_{(i)}(\mbox{\boldmath $n$}) \coloneqq \nu(\mbox{\boldmath $n$} + \mbox{\boldmath $e$}_i),\qquad \nu_{(ij)}(\mbox{\boldmath $n$}) \coloneqq \nu(\mbox{\boldmath $n$} + \mbox{\boldmath $e$}_i + \mbox{\boldmath $e$}_j), \qquad \mbox{\boldmath $n$} \in \mathbb{Z}^M. $$ \end{definition} The geometric meaning of this algebraic definition is as follows. As in the continuous case, discrete Koenigs nets constitute a subclass of discrete conjugate nets (Q-nets), in the sense that all two-dimensional subnets have planar faces. See \cite{bobenko-suris} for more information on Q-nets, as well as on geometric properties of discrete Koenigs nets.
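The algebraic definition \eqref{eq:discrete-Koenigs} can be probed numerically. The following Python sketch (all vertex positions and values of $\nu$ are arbitrary sample data of our own choosing) constructs the fourth vertex of an elementary quadrilateral from the discrete Koenigs equation and confirms that the face is planar and that the intersection point of the diagonals divides them in the ratios $\nu_{(i)}:\nu_{(j)}$ and $\nu:\nu_{(ij)}$.

```python
# Sample data (our own choice): one vertex x, its two lattice neighbours,
# and positive values of the Koenigs function nu at the four vertices.
x  = (0.0, 0.0, 0.0)
xi = (1.0, 0.2, 0.1)    # neighbour of x in the i-th lattice direction
xj = (0.3, 1.0, -0.2)   # neighbour of x in the j-th lattice direction
nu, nui, nuj, nuij = 1.0, 1.5, 0.8, 2.0

# Coefficients A, B of the discrete Koenigs equation (eq:discrete-Koenigs):
A = (nuj * nuij - nu * nui) / (nu * (nui + nuj))
B = (nui * nuij - nu * nuj) / (nu * (nui + nuj))

# Fourth vertex from Delta_i Delta_j x = A Delta_i x + B Delta_j x:
xij = tuple(xi[m] + xj[m] - x[m] + A * (xi[m] - x[m]) + B * (xj[m] - x[m])
            for m in range(3))

def det3(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# The elementary quadrilateral has a planar face:
edges = [tuple(p[m] - x[m] for m in range(3)) for p in (xi, xj, xij)]
assert abs(det3(*edges)) < 1e-12

# The diagonals [x, x_ij] and [x_i, x_j] intersect in a single point M:
t = 1.0 / (A + B + 2.0)
s = (B + 1.0) / (A + B + 2.0)
M1 = tuple(x[m] + t * (xij[m] - x[m]) for m in range(3))
M2 = tuple(xi[m] + s * (xj[m] - xi[m]) for m in range(3))
assert all(abs(p - q) < 1e-12 for p, q in zip(M1, M2))

# M divides the diagonals in the ratios nu:nu_(ij) and nu_(i):nu_(j):
assert abs(t / (1.0 - t) - nu / nuij) < 1e-12
assert abs(s / (1.0 - s) - nui / nuj) < 1e-12
```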
Consider an elementary planar quadrilateral $(\mbox{\boldmath $x$},\mbox{\boldmath $x$}_{(i)},\mbox{\boldmath $x$}_{(ij)},\mbox{\boldmath $x$}_{(j)})$ of a Q-net governed by the discrete Darboux equation \begin{equation} \label{eq:discrete-darboux} \Delta_i \Delta_j \mbox{\boldmath $x$} = A \Delta_i \mbox{\boldmath $x$} + B \Delta_j \mbox{\boldmath $x$}. \end{equation} Let $M$ be the intersection point of its diagonals $[\mbox{\boldmath $x$},\mbox{\boldmath $x$}_{(ij)}]$ and $[\mbox{\boldmath $x$}_{(i)},\mbox{\boldmath $x$}_{(j)}]$. Then one can easily compute that $M$ divides the corresponding diagonals in the following ratios: \[ \overrightarrow{ \mbox{\boldmath $x$}_{(i)}M }: \overrightarrow{M\mbox{\boldmath $x$}_{(j)}} = (B+1) :(A+1), \quad \overrightarrow{\mbox{\boldmath $x$} M}: \overrightarrow{M\mbox{\boldmath $x$}_{(ij)}} = 1:(A+B+1). \] A Q-net is called a Koenigs net if there is a positive function $\nu$ defined at the vertices of the net such that \[ \overrightarrow{ \mbox{\boldmath $x$}_{(i)}M} : \overrightarrow{ M\mbox{\boldmath $x$}_{(j)}}=\nu_{(i)} : \nu_{(j)},\quad \overrightarrow{ \mbox{\boldmath $x$} M}: \overrightarrow{ M\mbox{\boldmath $x$}_{(ij)}}=\nu:\nu_{(ij)}. \] One can show \cite{bobenko-suris} that this happens if and only if the intersection points of the diagonals of four adjacent quadrilaterals are coplanar. The function $\nu$ must then satisfy \bela{Z2} (A+1)\nu_{(i)} = (B+1)\nu_{(j)},\quad \nu_{(ij)} = (A+B+1)\nu. \end{equation} This is clearly equivalent to \bela{Z1} A= \frac{\nu_{(j)} \nu_{(ij)} - \nu \nu_{(i)}}{\nu(\nu_{(i)} + \nu_{(j)})},\quad B= \frac{\nu_{(i)} \nu_{(ij)} - \nu \nu_{(j)}}{\nu(\nu_{(i)} + \nu_{(j)})}. \end{equation} The pair of linear equations \eqref{Z2} is compatible if and only if the following nonlinear equation is satisfied for the coefficients $A,B$ associated with four adjacent quadrilaterals: \bela{Z3} \frac{A_{(ij)}+1}{B_{(ij)}+1} = \frac{(A_{(j)}+B_{(j)}+1)}{(A_{(i)}+B_{(i)}+1)}\frac{(A+1)}{(B+1)}.
\end{equation} If this relation for the coefficients $A,B$ of the discrete Darboux equation \eqref{eq:discrete-darboux} holds true everywhere, then the linear equations (\ref{Z2}) determine a function $\nu$ uniquely, as soon as initial data are prescribed, consisting, for instance, of the values of $\nu$ at two neighboring vertices. The associated discrete Darboux equation is then of Koenigs type (\ref{eq:discrete-Koenigs}). \section{Discrete Euler-Poisson-Darboux equation} \begin{definition} Let $U \subset \mathbb{Z}^M$. We say that a discrete net \begin{equation*} \mbox{\boldmath $x$} : U \rightarrow \mathbb{R}^N, \quad (n_1, \ldots, n_M) \mapsto (x_1, \ldots, x_N) \end{equation*} satisfies the \emph{discrete Euler-Poisson-Darboux system} if all of its two-dimensional subnets satisfy the (vector) discrete Euler-Poisson-Darboux equation with the same parameter $\gamma$: \begin{equation} \label{eq:discrete-Euler-Darboux} \Delta_i \Delta_j \mbox{\boldmath $x$} = \frac{\gamma}{n_i + \epsilon_i - n_j - \epsilon_j} ( \Delta_j \mbox{\boldmath $x$} - \Delta_i \mbox{\boldmath $x$} ) \tag{dEPD$_\gamma$} \end{equation} for all $i, j \in \{1, \ldots, M\}$, $i \neq j$, and some $\gamma \in \mathbb{R}$, $\epsilon_1, \ldots, \epsilon_M \in \mathbb{R}$. \end{definition} \begin{remark} This discretization of the Euler-Poisson-Darboux system was introduced by Konopelchenko and Schief \cite{konopelchenko-schief}. \end{remark} \begin{proposition} Let $\mbox{\boldmath $x$} : \mathbb{Z}^M \supset U \rightarrow \mathbb{R}^N$ be a discrete net satisfying the discrete Euler-Poisson-Darboux system {\rm \refeq{eq:discrete-Euler-Darboux}}. Then all two-dimensional subnets of $\mbox{\boldmath $x$}$ are discrete Koenigs nets. \end{proposition} \begin{proof} It is straightforward to verify that the coefficients \bela{Z4} A = -B = \frac{\gamma}{n_i + \epsilon_i - n_j - \epsilon_j} \end{equation} indeed obey the Koenigs condition (\ref{Z3}).
\end{proof} We now show that for a discrete net satisfying the discrete Euler-Poisson-Darboux equation \eqref{eq:discrete-Euler-Darboux}, the function $\nu$ can be found explicitly. To this end, we use the ansatz \begin{equation*} \nu(n_i, n_j) = \mu(n_i - n_j), \end{equation*} so that $\nu_{(ij)} = \nu(n_i+1,n_j+1)=\nu(n_i,n_j)=\nu$. Under this ansatz, equation \refeq{eq:discrete-Koenigs} simplifies to \begin{equation*} \Delta_i \Delta_j \mbox{\boldmath $x$} = \frac{\nu_{(j)} - \nu_{(i)}}{\nu_{(i)} + \nu_{(j)}} \Delta_i \mbox{\boldmath $x$} + \frac{\nu_{(i)} - \nu_{(j)}}{\nu_{(i)} + \nu_{(j)}} \Delta_j \mbox{\boldmath $x$}. \end{equation*} Comparing with \refeq{eq:discrete-Euler-Darboux} we obtain \begin{align*} &\frac{\nu_{(i)} - \nu_{(j)}}{\nu_{(i)} + \nu_{(j)}} = \frac{\gamma}{n_i + \epsilon_i - n_j - \epsilon_j}\\ \Leftrightarrow~ &\nu_{(i)} \left( 1 - \frac{\gamma}{n_i + \epsilon_i - n_j - \epsilon_j} \right) = \nu_{(j)} \left( 1 + \frac{\gamma}{n_i + \epsilon_i - n_j - \epsilon_j} \right)\\ \Leftrightarrow~ &\nu(n_i+1, n_j ) = \nu(n_i, n_j + 1) \ \frac{n_i + \epsilon_i - n_j - \epsilon_j + \gamma}{n_i + \epsilon_i - n_j - \epsilon_j - \gamma}. \end{align*} Thus, the function $\mu$ must satisfy $$ \mu(m + 1) = \mu(m - 1) \ \frac{m + \epsilon_i - \epsilon_j + \gamma}{m + \epsilon_i - \epsilon_j - \gamma}. $$ This equation is easily solved: \begin{equation*} \mu(m) =\frac{\Gamma \left( \frac{1}{2} ( m + \epsilon_i - \epsilon_j + \gamma + 1 ) \right)} {\Gamma \left( \frac{1}{2} ( m + \epsilon_i - \epsilon_j - \gamma + 1 ) \right)}b(m), \end{equation*} where $\Gamma$ denotes the gamma function and $b$ is any function of period 2. Recall that $\Gamma(x+1)=x\Gamma(x)$.
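The explicit formula for $\mu$ is easily verified numerically. The following Python sketch (the sample values $\gamma=\frac{1}{2}$, $\epsilon_i-\epsilon_j=0.3$ and the choice $b\equiv 1$ are our own) checks the two-step recurrence against the gamma-function representation.

```python
from math import gamma

# Sketch: with e = eps_i - eps_j and parameter g = gamma (sample values,
# period-2 factor b taken to be 1),
#   mu(m) = Gamma((m + e + g + 1)/2) / Gamma((m + e - g + 1)/2)
# satisfies mu(m + 1) = mu(m - 1) * (m + e + g) / (m + e - g),
# as follows from the functional equation Gamma(x + 1) = x Gamma(x).
g, e = 0.5, 0.3

def mu(m):
    return gamma((m + e + g + 1.0) / 2.0) / gamma((m + e - g + 1.0) / 2.0)

for m in range(1, 6):
    lhs = mu(m + 1)
    rhs = mu(m - 1) * (m + e + g) / (m + e - g)
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)
```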
\section{Discrete confocal quadrics} \label{sect: discrete quadrics} We have seen in the continuous case (Proposition \ref{prop:Euler-Darboux-confocal-quadrics}) that confocal quadrics are described, up to a componentwise scaling, by separable solutions of the Euler-Poisson-Darboux system \refeq{eq:Euler-Darboux} with $\gamma = \frac{1}{2}$. It is therefore natural to consider separable solutions of the discrete Euler-Poisson-Darboux system. \subsection{Separability} \begin{proposition} {\rm \cite{konopelchenko-schief}} A separable function $x : \mathbb{Z}^N \supset U \rightarrow \mathbb{R}$, \begin{equation} \label{eq:discrete-separable-solution} x(n_1, \ldots, n_N) = \rho_1(n_1) \cdots \rho_N(n_N), \end{equation} is a solution of the discrete Euler-Poisson-Darboux system {\rm \refeq{eq:discrete-Euler-Darboux}} iff the functions $\rho_i: U_i \rightarrow \mathbb{R}$, $i = 1, \ldots, N$, satisfy \begin{equation} \label{eq:discrete-separable-solution-dgl-delta} \Delta\rho_i(n_i) = \frac{\gamma \rho_i(n_i)}{n_i + \epsilon_i + c} , \end{equation} or, equivalently, \begin{equation} \label{eq:discrete-separable-solution-dgl} \rho_i(n_i+1) = \rho_i(n_i) \frac{n_i + \epsilon_i + c + \gamma}{n_i + \epsilon_i + c} \end{equation} for some $c \in \mathbb{R}$ and for all $n_i \in U_i$. \end{proposition} \begin{proof} Substituting \refeq{eq:discrete-separable-solution} into \refeq{eq:discrete-Euler-Darboux} we obtain \begin{align*} &\big(\rho_i(n_i+1)-\rho_i(n_i)\big)\big(\rho_j(n_j+1) - \rho_j(n_j) \big)\\ &\quad= \frac{ \gamma}{ n_i + \epsilon_i - n_j - \epsilon_j } \Big( \rho_i(n_i)\big(\rho_j(n_j+1) -\rho_j(n_j)\big)-\rho_j(n_j) \big( \rho_i(n_i+1)-\rho_i(n_i)\big)\Big), \end{align*} which is equivalent to \[ n_i + \epsilon_i - n_j - \epsilon_j = \frac{\gamma \rho_i(n_i)}{\rho_i(n_i+1) - \rho_i(n_i)} - \frac{\gamma \rho_j(n_j)}{\rho_j(n_j+1) - \rho_j(n_j)}. 
\] So, the expression \[ \frac{\gamma \rho_i(n_i)}{\rho_i(n_i+1) - \rho_i(n_i)}-(n_i+\epsilon_i)=c \] depends neither on $n_i$ nor on $n_j$, i.e., is a constant. This is equivalent to \eqref{eq:discrete-separable-solution-dgl-delta} and thus to \eqref{eq:discrete-separable-solution-dgl}. \end{proof} If the constants $\gamma$, $c$ and $\epsilon_i$ are such that neither $\epsilon_i + c$ nor $\epsilon_i + c + \gamma$ is an integer, then the general solution of \refeq{eq:discrete-separable-solution-dgl} is given by \begin{equation*} \rho_i(n_i,\epsilon_i+c) = d_i \frac{\Gamma(n_i + \epsilon_i + c + \gamma)}{\Gamma(n_i + \epsilon_i + c)} = \tilde{d}_i \frac{\Gamma(-n_i - \epsilon_i - c + 1)}{\Gamma(-n_i - \epsilon_i - c - \gamma + 1)} \end{equation*} for all $n_i \in \mathbb{Z}$ with some constants $d_i, \tilde{d}_i \in \mathbb{R}$. In what follows, we will use the Pochhammer symbol $(u)_\gamma$ with a not necessarily integer index $\gamma$: \begin{equation}\label{eq: Pochhammer gamma} (u)_\gamma=\frac{\Gamma(u+\gamma)}{\Gamma(u)}, \quad u,\gamma>0. \end{equation} Because of the asymptotics $(u)_\gamma\sim u^\gamma$ as $u\to+\infty$, which can also be expressed as \[ \lim_{\varepsilon\to 0}\varepsilon^\gamma \left(\frac{u}{\varepsilon}\right)_\gamma=u^\gamma, \] the function $(u)_\gamma$ has been considered as a discretization of $u^\gamma$ in \cite[p. 47]{Gelfand_et_al}. With this notation, the above formulas take the form \begin{equation*} \rho_i(n_i,\epsilon_i+c) = d_i(n_i + \epsilon_i + c)_\gamma = \tilde{d}_i (-n_i - \epsilon_i - c - \gamma + 1)_\gamma. \end{equation*} \begin{definition} The {\em discrete square root function} is defined by \begin{equation} (u)_{1/2}=\frac{\Gamma(u+\frac{1}{2})}{\Gamma(u)}.
\end{equation} \end{definition} To achieve boundary conditions similar to \refeq{eq:boundary-conditions1} and \refeq{eq:boundary-conditions2}, we will be more interested in the cases where solutions are only defined on an integer half-axis, and vanish at its boundary point. For $\gamma = \frac{1}{2}$ this is the case if: \begin{itemize} \item either $\epsilon_i + c\in\mathbb{Z}$, and then the general solution to the right of $-c -\epsilon_i$ is given by multiples of \begin{equation} \label{eq:discrete-separable-solution-version1} \rho_i(n_i, \epsilon_i + c) = (n_i + \epsilon_i + c)_{1/2} \quad \text{for} \quad n_i \geq -c - \epsilon_i, \end{equation} \item or $\epsilon_i + c +\frac{1}{2}\in\mathbb{Z}$, and then the general solution to the left of $-c -\epsilon + \frac{1}{2}$ is given by multiples of \begin{equation} \label{eq:discrete-separable-solution-version2} \rho_i(n_i, \epsilon_i + c) = \sqr{-n_i - \epsilon_i - c + \tfrac{1}{2}} \quad \text{for} \quad n_i \leq -c - \epsilon_i + \frac{1}{2}. \end{equation} \end{itemize} In terms of discrete square roots, the expressions for the separable solutions of the discrete Euler-Poisson-Darboux system for $\gamma=\frac{1}{2}$ now resemble their classical counterparts. \begin{proposition}\label{prop:boundary} Let $\alpha_1, \ldots, \alpha_N \in \mathbb{Z}$ with $\alpha_1 > \alpha_2 > \cdots > \alpha_N$, and set \begin{equation*} \mathcal{U} =\set{ (n_1,\ldots,n_N) \in \mathbb{Z}^N }{ -\alpha_1 \leq n_1 \leq -\alpha_2 \leq n_2 \leq \cdots \leq -\alpha_N \leq n_N } . 
\end{equation*} Let ${x_k : \mathcal{U} \rightarrow \mathbb{R}_+}$, $k = 1, \ldots, N$, be $N$ independent separable solutions of the discrete Euler-Poisson-Darboux system {\rm \refeq{eq:discrete-Euler-Darboux}} with $\gamma = \frac{1}{2}$ defined on $\mathcal U$ and satisfying the following boundary conditions: \begin{alignat}{2} \label{eq:discrete-boundary-conditions1} &x_k|_{n_k=-\alpha_k} = 0 &&\quad \text{for} \quad k= 1, \ldots, N,\\ \label{eq:discrete-boundary-conditions2} &x_k |_{n_{k-1} = -\alpha_k} = 0 &&\quad \text{for} \quad k= 2, \ldots, N. \end{alignat} Then the shifts $\epsilon_i$ of the variables $n_i$ are given by \begin{equation} \label{eq:epsilons} \epsilon_i - \epsilon_j = \frac{j-i}{2} \quad\text{for} \quad i,j = 1, \ldots, N, \end{equation} and the solutions are expressed by \begin{equation} \label{eq:discrete-solutions-with-boundary-conditions1} x_k(n_1, \ldots, n_N) = D_k \prod_{i=1}^N \rho_i^{(k)}(n_i), \end{equation} for some constants $D_1,\ldots, D_N > 0$ and \begin{equation} \label{eq:discrete-solutions-with-boundary-conditions2} \rho_i^{(k)}(n_i) = \left\{ \begin{array}{ll} \sqr{n_i + \alpha_k + \frac{k-i}{2}} \quad & \text{for} \quad i \geq k, \\ \\ \sqr{-n_i - \alpha_k - \frac{k-i}{2} + \frac{1}{2}} \quad & \text{for} \quad i < k. \end{array}\right. \end{equation} \end{proposition} \begin{proof} Separable solutions of \refeq{eq:discrete-Euler-Darboux} with $\gamma=\frac{1}{2}$ are of the general form \eqref{eq:discrete-solutions-with-boundary-conditions1}, where each $\rho_i^{(k)}(n_i)=\rho_i(n_i,\epsilon_i+c_k)$ is defined by one of the formulas \eqref{eq:discrete-separable-solution-version1}, \eqref{eq:discrete-separable-solution-version2}, and all multiplicative constants are collected in $D_1,\ldots, D_N > 0$. We have to determine suitable constants $\epsilon_i$ and $c_k$. 
The boundary conditions \refeq{eq:discrete-boundary-conditions1} and \refeq{eq:discrete-boundary-conditions2} imply that $x_k$ is defined for $n_k \geq -\alpha_k$, while vanishing for $n_k = -\alpha_k$, and also that $x_k$ is defined for $n_{k-1} \leq -\alpha_k$, while vanishing for $n_{k-1} = -\alpha_k$. This shows that \[ \alpha_k = c_k + \epsilon_k=c_k + \epsilon_{k-1} - \frac{1}{2}. \] We obtain $\epsilon_k - \epsilon_{k-1} = -\frac{1}{2}$, and equation \eqref{eq:epsilons} follows. Together with $c_k + \epsilon_k = \alpha_k$ this implies that \begin{equation} \label{eq: a vs c} c_k + \epsilon_i = \alpha_k + \frac{k-i}{2}. \end{equation} It remains to substitute \eqref{eq: a vs c} into \refeq{eq:discrete-separable-solution-version1} and \refeq{eq:discrete-separable-solution-version2}. \end{proof} \subsection{Orthogonality} The remaining scaling freedom (multiplicative constants $D_k$) of the components $x_k$ as given by (\ref{eq:discrete-solutions-with-boundary-conditions1}) is the same as in the continuous case. As we have seen in Proposition \ref{prop:Euler-Darboux-confocal-quadrics}, in the continuous case, one can fix the scaling by imposing the orthogonality condition $(\partial \mbox{\boldmath $x$}/\partial u_i) \perp (\partial \mbox{\boldmath $x$}/\partial u_j)$. In the discrete case, it turns out to be possible to introduce a notion of orthogonality, which will allow us to fix the scaling in a similar way. \begin{definition}\label{def: d ortho} Let $\mathcal U\subset \mathbb{Z}^N$, $\mathcal U^*\subset {(\mathbb{Z}^N)}^*$, where ${(\mathbb{Z}^N)}^*=(\mathbb{Z}+\frac{1}{2})^N$. Consider a function \bela{Z6a} \mbox{\boldmath $x$} : \mathcal{U}\cup\mathcal{U}^* \rightarrow \mathbb{R}^N, \end{equation} such that both restrictions $\mbox{\boldmath $x$}(\mathcal U)$ and $\mbox{\boldmath $x$}(\mathcal U^*)$ are Q-nets. 
We say that the discrete net $\mbox{\boldmath $x$}$ is {\em orthogonal} if each edge of $\mbox{\boldmath $x$}(\mathcal U)$ is orthogonal to the dual facet of $\mbox{\boldmath $x$}(\mathcal U^*)$. \end{definition} \begin{figure}[H] \centering \input{pics/orthogonality.pdf_tex} \caption{Discrete orthogonality for a system of Q-nets defined on a square lattice and on its dual.} \label{fig:orthogonality} \end{figure} The (space of the) facet of $\mbox{\boldmath $x$}(\mathcal U^*)$ dual to the edge $[\mbox{\boldmath $x$}(\mbox{\boldmath $n$}),\mbox{\boldmath $x$}(\mbox{\boldmath $n$}+\mbox{\boldmath $e$}_i)]$ of $\mbox{\boldmath $x$}(\mathcal U)$ is spanned by the $N-1$ edges $[\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_j+\frac{1}{2}\mbox{\boldmath $f$}),\mbox{\boldmath $x$}(\mbox{\boldmath $n$}+\frac{1}{2}\mbox{\boldmath $f$})]$ with $j\neq i$, where $\mbox{\boldmath $f$}=(1,\ldots,1)$. Therefore, the orthogonality condition in the sense of Definition \ref{def: d ortho} reads: \begin{equation} \label{eq:orthoganality-condition} \left\langle \Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$}), \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_j+\tfrac{1}{2}\mbox{\boldmath $f$}) \right\rangle = 0 \end{equation} for all $i \neq j$ and $\mbox{\boldmath $n$} \in \mathbb{Z}^N$. From this it is easy to see that $\mbox{\boldmath $x$}(\mathcal U)$ and $\mbox{\boldmath $x$}(\mathcal U^*)$ actually play symmetric roles in Definition \ref{def: d ortho} (that is, each edge of $\mbox{\boldmath $x$}(\mathcal U^*)$ is orthogonal to the dual facet of $\mbox{\boldmath $x$}(\mathcal U)$, compare Fig.\ \ref{fig:orthogonality}). 
\medskip Turning to separable solutions of the discrete Euler-Poisson-Darboux system {\rm \refeq{eq:discrete-Euler-Darboux}} with $\gamma = \frac{1}{2}$, we extend the function $\mbox{\boldmath $x$} = (x_1,\ldots,x_N)$ defined in Proposition \ref{prop:boundary} to a bigger domain: \bela{Z5} \mbox{\boldmath $x$} : \mathcal{U}\cup\mathcal{U}^* \rightarrow \mathbb{R}^N_+, \end{equation} where \bela{Z6} \mathcal{U}^* =\set{ (n_1,\ldots,n_N) \in {(\mathbb{Z}^N)}^* }{ -\alpha_1 \leq n_1 \leq -\alpha_2 \leq n_2 \leq \cdots \leq -\alpha_N \leq n_N } . \end{equation} It is emphasized that the lattices $\mbox{\boldmath $x$}(\mathcal{U})$ and $\mbox{\boldmath $x$}(\mathcal{U}^*)$ are on equal footing except that the boundary conditions do not apply to $\mbox{\boldmath $x$}(\mathcal{U}^*)$. \begin{proposition} \label{prop:discrete-orthogonality} Let $\alpha_1, \ldots, \alpha_N \in \mathbb{Z}$ with $\alpha_1 > \alpha_2 > \cdots > \alpha_N$. Then the net $\mbox{\boldmath $x$} : \mathcal{U}\cup\mathcal{U}^* \rightarrow \mathbb{R}^N_+$ defined by (\ref{eq:discrete-solutions-with-boundary-conditions1}) and (\ref{eq:discrete-solutions-with-boundary-conditions2}) is orthogonal if and only if \begin{equation} \label{eq:D-solutions} D_k^{-2} = C \prod_{i < k} (\alpha_i - \alpha_k + \tfrac{i-k}{2}) \prod_{i > k} (\alpha_k - \alpha_i + \tfrac{k-i}{2}) \end{equation} with some $C \in \mathbb{R}_+$. \end{proposition} \begin{proof} We will use the following formulas for the ``discrete derivative'' of the ``discrete square root function'' $(u)_{1/2}=\Gamma(u+\tfrac{1}{2})/\Gamma(u)$, which are immediate consequences of the identity $\Gamma(u+1) = u\Gamma(u)$: \bela{E3} \Delta \big(\sqr{u}\big) = \frac{1}{2\sqr{u+\frac{1}{2}}}, \qquad \Delta\big(\sqr{-u}\big) = -\frac{1}{2\sqr{-u-\frac{1}{2}}}, \end{equation} where $\Delta f(u) = f(u+1) - f(u)$. 
We also note that the ``discrete squares'' of discrete square roots obey the relations \bela{E4a} \textstyle \sqr{u}\sqr{u+\frac{1}{2}} = u,\qquad \sqr{-u}\sqr{-u-\frac{1}{2}} = -u-\frac{1}{2}. \end{equation} Substituting (\ref{eq: a vs c}) and $\gamma = \frac{1}{2}$ into (\ref{eq:discrete-separable-solution-dgl-delta}), we obtain: \begin{equation}\label{eq: Delta rho} \Delta \rho^{(k)}_i(n_i) = \frac{\rho^{(k)}_i(n_i)}{2(n_i + \alpha_k + \frac{k - i}{2})}. \end{equation} Upon using property \eqref{E4a} and expressions \eqref{eq:discrete-solutions-with-boundary-conditions2}, we arrive at \begin{equation}\label{eq: rho rho} \rho_i^{(k)}(n_i)\rho_i^{(k)}(n_i+\tfrac{1}{2})=\left\{\begin{array}{ll} n_i+\alpha_k+\tfrac{k-i}{2}, & i\ge k, \\ \\ -(n_i+\alpha_k+\tfrac{k-i}{2}), & i<k. \end{array}\right. \end{equation} We use \eqref{eq:discrete-solutions-with-boundary-conditions1}, \eqref{eq: Delta rho}, \eqref{eq: rho rho} to compute the left-hand side of equation \eqref{eq:orthoganality-condition}: \[ \left\langle \Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$}) , \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_j+\tfrac{1}{2}\mbox{\boldmath $f$})\right\rangle = \frac{1}{4} \sum_{k=1}^N (-1)^{k-1} D_k^2 \prod_{l\neq i,j} \left( n_l + \alpha_k + \frac{k-l}{2} \right). \] Observe that this literally coincides with the analogous expression in the continuous case \eqref{eq: cont scalar product}, if we set \begin{equation} a_k=\alpha_k+\frac{k}{2}, \quad u_l=n_l-\frac{l}{2}. \end{equation} Therefore, the proof of part b) of Proposition \ref{prop:Euler-Darboux-confocal-quadrics} can be literally repeated, leading to the condition $D_k^{-2}=C\prod_{i<k} (a_i-a_k)\prod_{i>k} (a_k-a_i)$, which coincides with \eqref{eq:D-solutions}. \end{proof} \subsection{Definition of discrete confocal coordinates} \begin{definition} \label{def:discrete-elliptic-coordinates} Let $\alpha_1, \ldots, \alpha_N \in \mathbb{Z}$ with $\alpha_1 > \alpha_2 > \cdots > \alpha_N$. 
\emph{Discrete confocal coordinates} are given by the discrete net $\mbox{\boldmath $x$} : \mathcal{U}\cup\mathcal{U}^* \rightarrow \mathbb{R}^N_+$ defined by \begin{equation} \label{eq:discrete-elliptic-coordinates} x_k(\mbox{\boldmath $n$}) = D_k \prod_{i=1}^{k-1} \sqr{-n_i - \alpha_k - \tfrac{k-i}{2} + \tfrac{1}{2}} \prod_{i=k}^N \sqr{n_i + \alpha_k + \tfrac{k-i}{2}} \end{equation} with \begin{equation} \label{eq:D} D_k^{-1} = \prod_{i=1}^{k-1} \sqrt{\alpha_i - \alpha_k + \tfrac{i-k}{2}} \prod_{i=k+1}^{N} \sqrt{\alpha_k - \alpha_i + \tfrac{k-i}{2}} . \end{equation} \end{definition} The characteristic properties of this net can be summarized as follows. \begin{itemize} \item[(i)] Each two-dimensional subnet of $\mbox{\boldmath $x$}(\mathcal U)$ and of $\mbox{\boldmath $x$}(\mathcal U^*)$ satisfies \eqref{eq:discrete-Euler-Darboux} with $\gamma=\frac{1}{2}$; \item[(ii)] Therefore each two-dimensional subnet of $\mbox{\boldmath $x$}(\mathcal U)$ and of $\mbox{\boldmath $x$}(\mathcal U^*)$ is a Koenigs net; \item[(iii)] The net $\mbox{\boldmath $x$}(\mathcal U\cup\mathcal U^*)$ is orthogonal in the sense of Definition \ref{def: d ortho}; \item[(iv)] Both nets $\mbox{\boldmath $x$}(\mathcal U)$ and $\mbox{\boldmath $x$}(\mathcal U^*)$ are separable; \item[(v)] Boundary conditions \eqref{eq:discrete-boundary-conditions1}, \eqref{eq:discrete-boundary-conditions2} are satisfied. \end{itemize} Properties (ii) and (iii) lead to a novel discretization of the notion of isothermic surfaces. Property (v) allows us to define \emph{discrete confocal quadrics} by reflecting the net $\mbox{\boldmath $x$}$ in the coordinate hyperplanes. Like in the continuous case, the boundary conditions correspond to the $2N - 1$ degenerate quadrics of the confocal family lying in the coordinate hyperplanes. \begin{remark} In \cite{BSST-II} we will describe more general discrete confocal quadrics, corresponding to general reparametrizations of the curvature lines. 
They will be defined in a more geometric way, less based on integrable difference equations. \end{remark} \subsection{Further properties of discrete confocal coordinates} \label{sec:properties} We now obtain a variety of properties of discrete confocal quadrics and discrete confocal coordinates, which serve as discretizations of their respective continuous analogs. \begin{proposition} For any $N$-tuple of signs $\boldsymbol{\sigma}=(\sigma_1,\ldots,\sigma_N)$, $\sigma_i=\pm 1$, we have: \begin{equation} \label{eq:component-squares-general} x_k(\mbox{\boldmath $n$}) \cdot x_k(\mbox{\boldmath $n$}+\tfrac{1}{2}\boldsymbol{\sigma}) = \frac{\prod_{i=1}^N \left( n_i + \alpha_k + \frac{k-i}{2} - \frac{1}{4}(1-\sigma_i)\right)} {\prod_{i \neq k} \left( \alpha_k - \alpha_i + \frac{k-i}{2} \right)}, \end{equation} and therefore \begin{equation} \label{eq:discrete-confocal-quadric-equation-general} \sum_{k=1}^N \frac{ x_k(\mbox{\boldmath $n$}) x_k(\mbox{\boldmath $n$}+\tfrac{1}{2}\boldsymbol{\sigma}) }{ n_i + \alpha_k + \frac{k-i}{2} - \frac{1}{4} (1-\sigma_i)} = 1, \qquad i=1,\ldots,N. \end{equation} \end{proposition} \begin{proof} Equation \eqref{eq:component-squares-general} is obtained by straightforward computation. Using this result, equation \eqref{eq:discrete-confocal-quadric-equation-general} follows from the continuous equations \eqref{eq:confocal-quadrics}, \eqref{eq:elliptic-coordinates-squares} upon replacing $a_k = \alpha_k + \frac{k}{2}$ and $u_i = n_i - \frac{i}{2} - \frac{1}{4}(1-\sigma_i)$. \end{proof} We notice that \eqref{eq:component-squares-general} can be seen as a discrete version of the parametrization formulas \eqref{eq:elliptic-coordinates-squares}, while \eqref{eq:discrete-confocal-quadric-equation-general} can be seen as a discrete version of the quadric equation \eqref{eq:confocal-quadrics}. 
The above formulas take the simplest form for $\boldsymbol{\sigma}=\mbox{\boldmath $f$}=(1,\ldots,1)$: \begin{equation} \label{eq:component-squares} x_k(\mbox{\boldmath $n$}) \cdot x_k(\mbox{\boldmath $n$}+\tfrac{1}{2}\mbox{\boldmath $f$}) = \frac{\prod_{i=1}^N \left( n_i + \alpha_k + \frac{k-i}{2} \right)} {\prod_{i \neq k} \left( \alpha_k - \alpha_i + \frac{k-i}{2} \right)} \end{equation} and \begin{equation} \label{eq:discrete-confocal-quadric-equation} \sum_{k=1}^N \frac{ x_k(\mbox{\boldmath $n$}) x_k(\mbox{\boldmath $n$}+\tfrac{1}{2}\mbox{\boldmath $f$}) }{ n_i + \alpha_k + \frac{k-i}{2} } = 1, \qquad i=1,\ldots,N. \end{equation} In the continuous setting one can obtain from \eqref{eq:elliptic-coordinates-squares} \begin{equation} \label{eq:product1} \scalarprod{\mbox{\boldmath $x$}(\mbox{\boldmath $u$})}{\mbox{\boldmath $x$}(\mbox{\boldmath $u$})} = \sum_{k=1}^N x_k^2(\mbox{\boldmath $u$}) = \sum_{k=1}^N (u_k + a_k), \end{equation} so that the hypersurfaces $\sum_{k=1}^N u_k = {\rm const}$ are (parts of) spheres. In particular, this implies that \begin{equation} \label{eq:product2} \left\langle \mbox{\boldmath $x$},\frac{\partial \mbox{\boldmath $x$}}{\partial u_i}\right\rangle = \frac{1}{2}, \end{equation} for all $i=1,\ldots,N$, which can be regarded as a characterization of the particular parametrization of the confocal quadrics considered in this paper. 
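Identity \eqref{eq:product1} is exact and can be confirmed numerically directly from the squared coordinates of \eqref{eq:elliptic-coordinates-squares}; a minimal sketch (the values of the $a_k$ and $u_i$ below are an arbitrary admissible choice):

```python
a = [6.0, 3.5, 1.0]     # a_1 > a_2 > a_3 > 0, hypothetical values
u = [-4.0, -2.0, 0.5]   # -a_1 < u_1 < -a_2 < u_2 < -a_3 < u_3

def xk2(k):
    # squared confocal coordinate x_k^2, cf. eq. (eq:elliptic-coordinates-squares)
    num, den = 1.0, 1.0
    for i in range(3):
        num *= u[i] + a[k]
        if i != k:
            den *= a[k] - a[i]
    return num / den

# all squares are positive on the domain ...
assert all(xk2(k) > 0 for k in range(3))
# ... and identity (eq:product1) holds exactly:
assert abs(sum(xk2(k) for k in range(3)) - sum(u[k] + a[k] for k in range(3))) < 1e-12
```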
In the discrete case one obtains the following: \begin{proposition} For any $N$-tuple of signs $\boldsymbol{\sigma}=(\sigma_1,\ldots,\sigma_N)$, $\sigma_k=\pm 1$, we have: \begin{equation} \label{eq:discrete-product1} \scalarprod{\mbox{\boldmath $x$}(\mbox{\boldmath $n$})}{\mbox{\boldmath $x$}(\mbox{\boldmath $n$} + \tfrac{1}{2}\boldsymbol{\sigma})} = \sum_{k=1}^N \left( n_k + \alpha_k - \tfrac{1}{4}(1-\sigma_k) \right), \end{equation} and therefore, for any $i=1,\ldots,N$ and for any $\boldsymbol{\sigma}$ with $\sigma_i = -1$: \begin{equation} \label{eq:discrete-product2} \scalarprod{\mbox{\boldmath $x$}(\mbox{\boldmath $n$})}{\Delta_i\mbox{\boldmath $x$}(\mbox{\boldmath $n$}+ \tfrac{1}{2}\boldsymbol{\sigma})} = \frac{1}{2}. \end{equation} \end{proposition} \begin{proof} The right-hand sides of \eqref{eq:elliptic-coordinates-squares} and \eqref{eq:component-squares-general} may be identified by setting $a_k = \alpha_k + \frac{k}{2}$ and $u_i = n_i - \frac{i}{2} - \frac{1}{4}(1-\sigma_i)$ and hence the right-hand side of \eqref{eq:product1} also applies in the discrete case, leading upon the above identification to \eqref{eq:discrete-product1}. \end{proof} Finally, we obtain a factorization similar to \eqref{eq:is prop}, \eqref{eq: s}, \eqref{eq: alphas}, which characterizes isothermic surfaces in the continuous case. 
\begin{proposition} \label{prop: discrete factorization} For $i\neq j$ we have \begin{equation} \scalarprod{ \Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$}) }{ \Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$} + \tfrac{1}{2}\mbox{\boldmath $f$})}=s^2\phi_i(n_i), \end{equation} \begin{equation} \scalarprod{ \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$} - \mbox{\boldmath $e$}_j + \tfrac{1}{2}\mbox{\boldmath $f$}) }{ \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$} - \mbox{\boldmath $e$}_j + \mbox{\boldmath $f$}) } =s^2\phi_j(n_j), \end{equation} where \begin{equation} s(n_i,n_j)=\left | n_i-n_j+\frac{j-i}{2}+\frac{1}{2} \right |^{1/2}, \end{equation} and \begin{equation} \frac{\phi_i(n_i)}{\phi_j(n_j)} = -\frac{\prod_{m\neq i,j} (n_i - n_m + \frac{m-i}{2} + \frac{1}{2})}{\prod_{m=1}^N (n_i + \alpha_m + \frac{m-i}{2} + \frac{1}{2})} \cdot \frac{\prod_{m=1}^N (n_j + \alpha_m + \frac{m-j}{2})}{\prod_{m\neq i,j} (n_j - n_m + \frac{m-j}{2} - \frac{1}{2})}. \end{equation} \end{proposition} \begin{proof} For any $i, k$ we compute \begin{equation*} \Delta_i x_k(\mbox{\boldmath $n$}) \cdot \Delta_i x_k(\mbox{\boldmath $n$} + \tfrac{1}{2}\mbox{\boldmath $f$}) = \frac{1}{4}\frac{1}{\prod_{m\neq k}(\alpha_k-\alpha_m+\frac{k-m}{2})} \frac{\prod_{m\neq i}(n_m+ \alpha_k + \frac{k-m}{2})}{n_i + \alpha_k + \frac{k-i}{2} + \frac{1}{2}}. \end{equation*} Here, the right-hand side can be identified with the right-hand side of the corresponding continuous expression, put as $$ \left(\frac{\partial x_k}{\partial u_i} \right )^2 =\frac{1}{4} \frac{1}{\prod_{m \neq k} (a_k - a_m)}\frac{\prod_{m\neq i} (u_m+a_k)}{(u_i+a_k)}, $$ upon replacing $$ a_k = \alpha_k + \frac{k}{2}, \quad u_m = \begin{cases} n_m - \frac{m}{2}, &\text{for}~m\neq i,\\ n_i - \frac{i}{2} + \frac{1}{2}, &\text{for}~m = i. 
\end{cases} $$ Therefore, the continuous identity \eqref{eq: id squares} upon the above identification implies \begin{equation*} \scalarprod{\Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$})}{\Delta_i \mbox{\boldmath $x$}(\mbox{\boldmath $n$} + \tfrac{1}{2}\mbox{\boldmath $f$})} = \frac{\prod_{m\neq i}(n_i - n_m + \frac{m-i}{2} + \frac{1}{2})}{4 \prod_{m=1}^N(n_i + \alpha_m + \frac{m-i}{2} + \frac{1}{2})}. \end{equation*} Similarly, we find: \begin{equation*} \scalarprod{ \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$} - \mbox{\boldmath $e$}_j + \tfrac{1}{2}\mbox{\boldmath $f$}) }{ \Delta_j \mbox{\boldmath $x$}(\mbox{\boldmath $n$} - \mbox{\boldmath $e$}_j + \mbox{\boldmath $f$}) } = \frac{\prod_{m\neq j}(n_j - n_m + \frac{m-j}{2} - \frac{1}{2})}{4 \prod_{m=1}^N(n_j + \alpha_m + \frac{m-j}{2})}. \end{equation*} The observation that the only factors in each of the latter two expressions which depend on both $n_i$ and $n_j$ are equal (up to sign) finishes the proof. \end{proof} \begin{remark} Similar to equations \eqref{eq:component-squares}, \eqref{eq:discrete-confocal-quadric-equation} it is possible to generalize Proposition \ref{prop: discrete factorization} by replacing the vector $\mbox{\boldmath $f$}$ by a vector of signs $\boldsymbol{\sigma}$ with $\sigma_i=\sigma_j=1$ and all other components being arbitrary. \end{remark} \section{The case $N=2$} \label{sect: 2d} Here, and in the following section, we examine in greater detail the general theory set down in the preceding sections for the cases $N=2$ and $N=3$. For the benefit of the reader, these two sections are made as self-contained as possible. \subsection{Continuous confocal coordinates} Let $a > b > 0$. 
Then formulas \begin{equation} \label{eq:confocal2d-map} \begin{aligned} x(u_1, u_2) &= \frac{\sqrt{u_1 + a}\sqrt{u_2 + a}}{\sqrt{a-b}},\\ y(u_1, u_2) &= \frac{\sqrt{-(u_1 + b)}\sqrt{u_2 + b}}{\sqrt{a-b}}, \end{aligned} \end{equation} define a parametrization of the first quadrant of $\mathbb{R}^2$ by confocal coordinates \begin{equation*} \mathcal{U} = \set{ (u_1,u_2) \in \mathbb{R}^2 }{ -a < u_1 < -b < u_2 } \rightarrow \mathbb{R}_+^2. \end{equation*} A family of confocal conics is obtained by reflections in the coordinate axes. \begin{figure}[H] \centering \includegraphics[width=0.49\textwidth]{confocal2d-domain.png} \includegraphics[width=0.49\textwidth]{confocal2d-image.png} \caption{ Square grid on the domain $\mathcal{U}$ and its image under the map \refeq{eq:confocal2d-map}. The horizontal lines $u_2={\rm const}$ are mapped to ellipses with the degenerate case $u_2 \searrow -b$, which is mapped to a line segment on the $x$-axis. The vertical lines $u_1 = {\rm const}$ are mapped to hyperbolas with the degenerate cases $u_1 \nearrow -b$, which is mapped to a ray on the $x$-axis, and $u_1 \searrow -a$, which is mapped to the positive $y$-axis. } \end{figure} \subsection{Discrete confocal coordinates} We start with the general formula \begin{equation*} \begin{aligned} x(n_1,n_2) &= D_1\sqr{n_1+\epsilon_1+c_1} \sqr{n_2+\epsilon_2+c_1} ,\\[.6em] y(n_1,n_2) &=\textstyle D_2\sqr{-n_1-\epsilon_1-c_2+\frac{1}{2}}\sqr{n_2+\epsilon_2+c_2} \end{aligned} \end{equation*} for a separable solution of the discrete Euler-Poisson-Darboux system \refeq{eq:discrete-Euler-Darboux} with $\gamma=\frac{1}{2}$, where a suitable choice of solutions \refeq{eq:discrete-separable-solution-version1}, \refeq{eq:discrete-separable-solution-version2} has already been made according to the continuous case. 
We use the above ansatz to illustrate the choice of the coordinate shifts $\epsilon_i$ and $c_k$ according to boundary conditions \refeq{eq:discrete-boundary-conditions1} and \refeq{eq:discrete-boundary-conditions2}. For $\alpha, \beta \in \mathbb{Z}$ with $\alpha > \beta$, we define $c_1, c_2$ and $\epsilon_1, \epsilon_2$ such that we obtain a map \begin{equation*} \mathcal{U} = \set{ (n_1,n_2) \in \mathbb{Z}^2 }{ -\alpha \leq n_1 \leq -\beta \leq n_2 } \rightarrow \mathbb{R}_+^2\ , \end{equation*} where the boundary components $n_1 = -\alpha$, $n_1 = -\beta$, and $n_2 = -\beta$ correspond to degenerate conics that lie on the coordinate axes: \begin{align*} x |_{n_1 = -\alpha} &= 0 \quad \text{(degenerate hyperbola)},\\ y |_{n_1 = -\beta} &= 0 \quad \text{(degenerate hyperbola)},\\ y |_{n_2 = -\beta} &= 0 \quad \text{(degenerate ellipse)}. \end{align*} For this, the following linear system of equations has to be satisfied: \begin{align*} \epsilon_1 + c_1 &= \alpha,\\ \epsilon_1 + c_2 &= \beta + \frac{1}{2},\\ \epsilon_2 + c_2&= \beta. \end{align*} As a consequence, we find: \begin{equation*} \epsilon_2 + c_1= \alpha - \frac{1}{2}. \end{equation*} Thus, we end up with the formula \bela{E5} \mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \left(\bear{c} D_1\sqr{n_1 + \alpha}\sqr{n_2 + \alpha - \frac{1}{2}}\\[.6em] D_2\sqr{- n_1 - \beta}\sqr{n_2 + \beta}\end{array}\right). \end{equation} Up to scaling along the coordinate axes, the latter defines discrete confocal coordinates on the first quadrant of $\mathbb{R}^2$, if the domain $\mathcal U$ is extended to $\mathcal U\cup\, \mathcal{U}^*$ as demonstrated below. From this we generate a family of discrete confocal conics by reflections in the coordinate axes. 
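The boundary behaviour of \eqref{E5} can also be checked directly: the value $\sqr{0}=0$ follows from the pole of the $\Gamma$-function at $0$, so the three boundary components are indeed mapped onto the coordinate axes. A short Python sketch, not part of the construction ($\alpha=5$, $\beta=1$ are arbitrary admissible values; the guard at $u=0$ encodes $(0)_{1/2}=0$):

```python
import math

def sqr(u):
    # (u)_{1/2} = Gamma(u+1/2)/Gamma(u); the pole of Gamma at u = 0 gives (0)_{1/2} = 0
    return 0.0 if u == 0 else math.gamma(u + 0.5) / math.gamma(u)

alpha, beta = 5, 1            # hypothetical: alpha > beta, integers

def xy(n1, n2, d1=1.0, d2=1.0):
    # the map (E5); the constants D_1, D_2 are still free at this stage
    return (d1 * sqr(n1 + alpha) * sqr(n2 + alpha - 0.5),
            d2 * sqr(-n1 - beta) * sqr(n2 + beta))

# degenerate conics on the coordinate axes:
assert xy(-alpha, 2)[0] == 0.0     # x|_{n1 = -alpha} = 0
assert xy(-beta, 2)[1] == 0.0      # y|_{n1 = -beta}  = 0
assert xy(-3, -beta)[1] == 0.0     # y|_{n2 = -beta}  = 0
# interior points land in the open first quadrant:
assert all(v > 0 for v in xy(-3, 2))
```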
\begin{figure}[H] \centering \includegraphics[width=0.49\textwidth]{confocal2d_discrete_domain_01_integer-5-1.png} \includegraphics[width=0.49\textwidth]{confocal2d_discrete_image_01_integer-5-1.png} \caption{ Points of the square grid on the domain $\mathcal{U}$ and their images under the map \refeq{E5}, joined by straight line segments respectively. The horizontal lines $n_2={\rm const}$ are mapped to discrete ellipses with the degenerate case $n_2 = -\beta$, which is mapped to a line segment on the $x$-axis. The vertical lines $n_1 = {\rm const}$ are mapped to discrete hyperbolas with the degenerate cases $n_1 = -\beta$, which is mapped to a ray on the $x$-axis, and $n_1 = -\alpha$, which is mapped to the positive $y$-axis. } \end{figure} In order to implement the orthogonality condition, we extend $\mbox{\boldmath $x$}$ to $\mathcal U^*$, and compute the discrete derivatives of the extension of $\mbox{\boldmath $x$}$ along the dual edges of the two dual square lattices $\mathcal{U}$ and $\mathcal{U}^*$. Formulas \eqref{E3} for the ``discrete derivatives'' of the discrete square root immediately lead to \bela{E7} \Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \frac{1}{2}\left(\bear{c} \displaystyle D_1\frac{\sqr{n_2 + \alpha - \frac{1}{2}}}{\sqr{n_1 + \alpha + \frac{1}{2}}}\\[1.8em] \displaystyle-D_2\frac{\sqr{n_2 + \beta}}{\sqr{- n_1 - \beta - \frac{1}{2}}}\end{array}\right) \end{equation} and \bela{E8} \Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \frac{1}{2}\left(\bear{c} \displaystyle D_1\frac{\sqr{n_1 + \alpha}}{\sqr{n_2 + \alpha}}\\[1.8em] \displaystyle D_2\frac{\sqr{ - n_1 - \beta}}{\sqr{n_2 + \beta + \frac{1}{2}}}\end{array}\right). 
\end{equation} If we introduce the notation \bela{E8a} \textstyle \hn{\sigma_1,\sigma_2} = \mbox{\boldmath $n$} + \frac{1}{2}(\sigma_1,\sigma_2), \quad \sigma_i=\pm 1, \end{equation} then it turns out that \bela{E9} \textstyle\left\langle \Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}), \Delta_2\mbox{\boldmath $x$}(\hn{+-}) \right\rangle= \frac{1}{4}(D_1^2 - D_2^2), \end{equation} so that dual edges are orthogonal if and only if \bela{E10} D_1^2 = D_2^2. \end{equation} We make the choice \bela{E23b} D_1^2 = D_2^2 = \frac{1}{\alpha - \beta - \frac{1}{2}}. \end{equation} Formulas \eqref{E5} with the constants \eqref{E23b} constitute a discretization of the parametrization \eqref{eq:confocal2d-map}. \begin{figure}[H] \centering \includegraphics[width=0.49\textwidth]{confocal2d_discrete_domain_02_plus_dual-5-1.png} \includegraphics[width=0.49\textwidth]{confocal2d_discrete_image_02_plus_dual-5-1.png} \caption{Points of the square grid on the domain $\mathcal{U} \cup \mathcal{U}^*$ and their images under the map \refeq{E5}. 
All pairs of corresponding dual edges are mutually orthogonal.} \end{figure} It is readily verified that with the choice \eqref{E23b}, a lattice point $\mbox{\boldmath $x$}(\mbox{\boldmath $n$})$ and its nearest neighbours $\mbox{\boldmath $x$}(\hn{++})$ and $\mbox{\boldmath $x$}(\hn{+-})$ are related by \bela{E24b} \begin{aligned} x(\mbox{\boldmath $n$})x(\hn{++}) & = \displaystyle\frac{(n_1 + \alpha)(n_2 + \alpha - \frac{1}{2})}{\alpha-\beta-\frac{1}{2}},\\[.6em] y(\mbox{\boldmath $n$})y(\hn{++}) & = \displaystyle\frac{(n_1 + \beta + \frac{1}{2})(n_2 + \beta)}{\beta-\alpha+\frac{1}{2}}, \end{aligned} \end{equation} respectively by \bela{E24c} \begin{aligned} x(\mbox{\boldmath $n$})x(\hn{+-}) & = \displaystyle\frac{(n_1 + \alpha)(n_2 + \alpha - 1)}{\alpha-\beta-\frac{1}{2}},\\[.6em] y(\mbox{\boldmath $n$})y(\hn{+-}) & = \displaystyle\frac{(n_1 + \beta + \frac{1}{2})(n_2 + \beta -\frac{1}{2})}{\beta-\alpha+\frac{1}{2}}, \end{aligned} \end{equation} which are natural discretizations of the formulas \bela{E24cc} x^2 = \frac{(u_1 + a)(u_2 + a)}{a-b},\quad y^2 = \frac{(u_1 + b)(u_2 + b)}{b-a} \end{equation} for the squares of coordinates. 
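The orthogonality relation \eqref{E9} with the choice \eqref{E23b}, as well as the product relations \eqref{E24b}, can be verified numerically. A self-contained Python sketch, not part of the text ($\alpha=5$, $\beta=1$ are arbitrary admissible values):

```python
import math

def sqr(u):
    # discrete square root (u)_{1/2} = Gamma(u+1/2)/Gamma(u), u > 0
    return math.gamma(u + 0.5) / math.gamma(u)

alpha, beta = 5, 1                      # hypothetical: alpha > beta, integers
D = 1 / math.sqrt(alpha - beta - 0.5)   # choice (E23b): D_1 = D_2 = D

def x(n):
    # the map (E5)
    return (D * sqr(n[0] + alpha) * sqr(n[1] + alpha - 0.5),
            D * sqr(-n[0] - beta) * sqr(n[1] + beta))

def delta(i, n):
    # discrete derivative Delta_i x(n)
    ns = list(n); ns[i] += 1
    return [p - q for p, q in zip(x(ns), x(n))]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

n = (-3.0, 2.0)
# orthogonality (E9): dual edges are perpendicular since D_1^2 = D_2^2
ortho = dot(delta(0, n), delta(1, (n[0] + 0.5, n[1] - 0.5)))
assert abs(ortho) < 1e-12

# product relations (E24b) for the shift by f/2
x0, xp = x(n), x((n[0] + 0.5, n[1] + 0.5))
assert abs(x0[0] * xp[0] - (n[0] + alpha) * (n[1] + alpha - 0.5) / (alpha - beta - 0.5)) < 1e-12
assert abs(x0[1] * xp[1] - (n[0] + beta + 0.5) * (n[1] + beta) / (beta - alpha + 0.5)) < 1e-12
```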
From \eqref{E24b}, \eqref{E24c} one easily derives \bela{E23} \begin{aligned} \frac{x(\mbox{\boldmath $n$})x(\hn{++})}{n_1 + \alpha} + \frac{y(\mbox{\boldmath $n$})y(\hn{++})}{n_1 + \beta + \frac{1}{2}} &= 1, \\[.6em] \frac{x(\mbox{\boldmath $n$})x(\hn{++})}{n_2 + \alpha - \frac{1}{2}} + \frac{y(\mbox{\boldmath $n$})y(\hn{++})}{n_2 + \beta} &= 1, \end{aligned} \end{equation} and \bela{E23a} \begin{aligned} \frac{x(\mbox{\boldmath $n$})x(\hn{+-})}{n_1 + \alpha} + \frac{y(\mbox{\boldmath $n$})y(\hn{+-})}{n_1 + \beta + \frac{1}{2}} & = 1,\\[.6em] \frac{x(\mbox{\boldmath $n$})x(\hn{+-})}{n_2 + \alpha - 1} + \frac{y(\mbox{\boldmath $n$})y(\hn{+-})}{n_2 + \beta - \frac{1}{2}} & = 1, \end{aligned} \end{equation} which can be considered as discretizations of the defining equations of the two confocal conics through the point $(x,y)\in\mathbb{R}^2$: \bela{E23c} \begin{aligned} \frac{x^2}{u_1+a} + \frac{y^2}{u_1+b} & = 1, \\[.6em] \frac{x^2}{u_2+a} + \frac{y^2}{u_2+b} & = 1. \end{aligned} \end{equation} Observe that relations (\ref{E24b}) and (\ref{E24c}) may be regarded as two maps \bela{E24a} \tau^{\mbox{\tiny $++$}}:\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) \mapsto \mbox{\boldmath $x$}(\hn{++}),\quad\tau^{\mbox{\tiny $+-$}}: \mbox{\boldmath $x$}(\mbox{\boldmath $n$})\mapsto\mbox{\boldmath $x$}(\hn{+-}), \end{equation} whose commutativity $\tau^{\mbox{\tiny $++$}}\circ\tau^{\mbox{\tiny $+-$}}= \tau^{\mbox{\tiny $+-$}}\circ\tau^{\mbox{\tiny $++$}}$ is directly verified. Thus, the net $\mbox{\boldmath $x$}$ can be uniquely determined from its value at a single vertex. Proposition \ref{prop: discrete factorization} in the case $N=2$ can be verified by simple calculations starting either with the explicit parametrization (\ref{E5}) or the maps (\ref{E24b}), \eqref{E24c}. 
For instance, a factorization property associated with $\tau^{\mbox{\tiny $++$}}$ (shift by $\frac{1}{2}\mbox{\boldmath $f$}$) reads: \bela{E24ccc} \frac{\left\langle\Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}), \Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}+\frac{1}{2}\mbox{\boldmath $f$})\right\rangle} {\left\langle\Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_2+\frac{1}{2}\mbox{\boldmath $f$}),\Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_2+\mbox{\boldmath $f$})\right\rangle} = \frac{\phi_1(n_1)}{\phi_2(n_2)}, \end{equation} where \bela{E24ccca} \frac{\phi_1(n_1)}{\phi_2(n_2)} = -\frac{(n_2+\alpha-\frac{1}{2})(n_2+\beta)}{(n_1+\alpha+\frac{1}{2})(n_1+\beta+1)}, \end{equation} and a similar property is associated with the map $\tau^{\mbox{\tiny $+-$}}$. This can be seen as a discretization of the isothermicity property of the system of confocal conics, which reads \bela{E24cccc} \frac{|\partial \mbox{\boldmath $x$}/\partial u_1|^2}{|\partial \mbox{\boldmath $x$}/\partial u_2|^2} = \frac{\alpha_1(u_1)}{\alpha_2(u_2)}, \end{equation} where \bela{24ccccc} \frac{\alpha_1(u_1)}{\alpha_2(u_2)} = -\frac{(u_2+a)(u_2+b)}{(u_1+a)(u_1+b)}. \end{equation} The combinatorics of the factorization property \eqref{E24ccc} is illustrated in \linebreak Figure \ref{fig:isothermic-2d}. \begin{figure}[H] \centering \input{pics/isothermic-combinatorics.pdf_tex} \caption{ Combinatorics of the factorization property \eqref{E24ccc}.} \label{fig:isothermic-2d} \end{figure} \section{The case $N=3$} \label{sect: 3d} \subsection{Continuous confocal coordinates} Let $a > b > c > 0$. 
Then formulas \begin{align}\label{eq: param cont 3d} x(u_1, u_2, u_3) &= \frac{\sqrt{u_1+a}\sqrt{u_2+a}\sqrt{u_3+a}}{\sqrt{a-b}\sqrt{a-c}},\nonumber\\ y(u_1, u_2, u_3) &= \frac{\sqrt{-(u_1+b)}\sqrt{u_2+b}\sqrt{u_3+b}}{\sqrt{a-b}\sqrt{b-c}},\\ z(u_1, u_2, u_3) &= \frac{\sqrt{-(u_1+c)}\sqrt{-(u_2+c)}\sqrt{u_3+c}}{\sqrt{a-c}\sqrt{b-c}}\nonumber \end{align} define a parametrization of the first octant of $\mathbb{R}^3$ by confocal coordinates, \begin{equation*} \mathcal{U} = \set{ (u_1,u_2,u_3) }{ -a < u_1 < -b < u_2 < -c < u_3 } \rightarrow \mathbb{R}_+^3\ . \end{equation*} Confocal quadrics are obtained by reflections of the coordinate surfaces (corresponding to $u_i={\rm const}$ for $i=1,2$ or 3) in the coordinate planes of $\mathbb{R}^3$, see Figure \ref{fig:confocal3d}, left. \begin{itemize} \item The planes $u_3 = {\rm const}$ are mapped to ellipsoids. In the degenerate case $u_3 \searrow -c$ one has $z = 0$, while $x(u_1,u_2)$ and $y(u_1,u_2)$ exactly recover the two-dimensional case \refeq{eq:confocal2d-map} on the interior of an ellipse given by $u_2 \nearrow -c$. \item The planes $u_2 = {\rm const}$ are mapped to one-sheeted hyperboloids with the two degenerate cases corresponding to $u_2 \nearrow -c$ and $u_2 \searrow -b$. \item The planes $u_1 = {\rm const}$ are mapped to two-sheeted hyperboloids with the two degenerate cases corresponding to $u_1 \nearrow -b$ and $u_1 \searrow -a$. \end{itemize} \subsection{Discrete confocal quadrics} Let $\alpha, \beta, \gamma \in \mathbb{Z}$ with $\alpha > \beta > \gamma$. 
Then the formula \bela{E15} \mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \left(\bear{c} D_1\sqr{n_1 + \alpha}\sqr{n_2 + \alpha - \frac{1}{2}}\sqr{n_3 + \alpha - 1}\\[.6em] D_2\sqr{- n_1 - \beta}\sqr{n_2 + \beta}\sqr{n_3 + \beta - \frac{1}{2}}\\[.6em] D_3\sqr{- n_1 - \gamma - \frac{1}{2}}\sqr{- n_2 - \gamma}\sqr{n_3 + \gamma}\end{array}\right) \end{equation} with $\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = (x(\mbox{\boldmath $n$}),y(\mbox{\boldmath $n$}),z(\mbox{\boldmath $n$}))$ defines a discrete net in the first octant of $\mathbb{R}^3$ (discrete confocal coordinate system), that is, a map \begin{equation*} \mathcal{U} = \set{ (n_1,n_2,n_3) \in \mathbb{Z}^3 }{ -\alpha \leq n_1 \leq -\beta \leq n_2 \leq -\gamma \leq n_3 } \rightarrow \mathbb{R}_+^3 \end{equation*} which is a separable solution of (dEPD$_{1/2}$). If this net is extended to $\mathcal U \cup\, \mathcal U^*$ then discrete confocal quadrics are obtained by reflections of the coordinate surfaces ($n_i={\rm const}$ for $i=1,2$ or 3) in the coordinate planes of $\mathbb{R}^3$, see Figure \ref{fig:confocal3d}, right, provided that the constants $D_k$ are chosen in the manner described below. The five boundary components $n_1 = -\alpha$, $n_1 = -\beta$, $n_2 = -\beta$, $n_2= -\gamma$, and $n_3 = -\gamma$ are mapped to degenerate quadrics that lie in the coordinate planes of $\mathbb{R}^3$. 
One computes the discrete derivatives with the help of formulas \eqref{E3}: \bela{E17} \Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \frac{1}{2}\left(\bear{c} \displaystyle D_1\frac{\sqr{n_2 + \alpha - \frac{1}{2}}\sqr{n_3 + \alpha - 1}}{\sqr{n_1 + \alpha + \frac{1}{2}}}\\[1.8em] \displaystyle-D_2\frac{\sqr{n_2 + \beta}\sqr{n_3 + \beta - \frac{1}{2}}}{\sqr{- n_1 - \beta - \frac{1}{2}}}\\[1.8em] \displaystyle -D_3\frac{\sqr{- n_2 - \gamma}\sqr{n_3 + \gamma}}{\sqr{- n_1 - \gamma - 1}}\end{array}\right), \end{equation} \bela{E18} \Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \frac{1}{2}\left(\bear{c} \displaystyle D_1\frac{\sqr{n_1 + \alpha}\sqr{n_3 + \alpha - 1}}{\sqr{n_2 + \alpha}}\\[1.8em] \displaystyle D_2\frac{\sqr{ - n_1 - \beta}\sqr{n_3 + \beta - \frac{1}{2}}}{\sqr{n_2 + \beta + \frac{1}{2}}}\\[1.8em] \displaystyle -D_3\frac{\sqr{- n_1 - \gamma - \frac{1}{2}}\sqr{n_3 + \gamma}}{\sqr{- n_2 - \gamma - \frac{1}{2}}} \end{array}\right) \end{equation} and \bela{E18a} \Delta_3\mbox{\boldmath $x$}(\mbox{\boldmath $n$}) = \frac{1}{2}\left(\bear{c} \displaystyle D_1\frac{\sqr{n_1 + \alpha}\sqr{n_2 + \alpha - \frac{1}{2}}}{\sqr{n_3 + \alpha - \frac{1}{2}}}\\[1.8em] \displaystyle D_2\frac{\sqr{- n_1 - \beta}\sqr{n_2 + \beta}}{\sqr{n_3 + \beta}}\\[1.8em] \displaystyle D_3\frac{\sqr{- n_1 - \gamma - \frac{1}{2}}\sqr{- n_2 - \gamma}}{\sqr{n_3 + \gamma + \frac{1}{2}}}\end{array}\right). 
\end{equation} In accordance with the general orthogonality condition, we now demand that dual pairs of edges and faces of the nets $\mbox{\boldmath $x$}(\mathcal{U})$ and $\mbox{\boldmath $x$}(\mathcal{U}^*)$ be orthogonal, so that \bela{E19} \begin{aligned} \langle\Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}),\Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_2+\tfrac{1}{2}\mbox{\boldmath $f$})\rangle & = 0,\\ \langle\Delta_1\mbox{\boldmath $x$}(\mbox{\boldmath $n$}),\Delta_3\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_3+\tfrac{1}{2}\mbox{\boldmath $f$})\rangle & = 0,\\ \langle\Delta_2\mbox{\boldmath $x$}(\mbox{\boldmath $n$}),\Delta_3\mbox{\boldmath $x$}(\mbox{\boldmath $n$}-\mbox{\boldmath $e$}_3+\tfrac{1}{2}\mbox{\boldmath $f$})\rangle & = 0. \end{aligned} \end{equation} Evaluation of the above conditions leads to \bela{E20} \begin{aligned} \textstyle D_1^2(n_3 + a - \frac{3}{2}) - D_2^2(n_3 + b - \frac{3}{2}) + D_3^2(n_3 + c - \frac{3}{2}) & = 0,\\ \textstyle D_1^2(n_2 + a - 1) - D_2^2(n_2 + b - 1) + D_3^2(n_2 + c - 1) & = 0,\\ \textstyle D_1^2(n_1 + a - \frac{1}{2}) - D_2^2(n_1 + b - \frac{1}{2}) + D_3^2(n_1 + c - \frac{1}{2}) & = 0, \end{aligned} \end{equation} where \bela{Z7} \textstyle a = \alpha + \frac{1}{2},\quad b = \beta+1,\quad c = \gamma+\frac{3}{2}. \end{equation} These are, {\sl mutatis mutandis}, identical with their classical continuous counterparts as demonstrated in connection with the general case analyzed in Section \ref{sect: discrete quadrics}. Since the coefficients $D_i$ are independent of, for instance, $n_3$, the first condition in (\ref{E20}) splits into the pair \bela{E21} \begin{aligned} D_1^2 - D_2^2 + D_3^2 & = 0,\\ D_1^2 a - D_2^2 b + D_3^2 c & = 0, \end{aligned} \end{equation} and it is evident that the remaining two conditions constitute linear combinations thereof. 
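The step from \eqref{E21} to the scaling \eqref{E22} below is elementary linear algebra and can be verified symbolically; a minimal sympy sketch, with the symbols of \eqref{Z7}:

```python
import sympy as sp

a, b, c, D = sp.symbols('a b c D', positive=True)

# Candidate scaling (E22): D_1^2 = D^2 (b - c), D_2^2 = D^2 (a - c),
# D_3^2 = D^2 (a - b).
X1, X2, X3 = D**2 * (b - c), D**2 * (a - c), D**2 * (a - b)

# Both conditions of (E21) must vanish identically.
cond1 = sp.simplify(X1 - X2 + X3)
cond2 = sp.simplify(a * X1 - b * X2 + c * X3)
print(cond1, cond2)  # -> 0 0
```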
Accordingly, the orthogonality requirement leads to the unique relative scaling \bela{E22} \frac{D_1^2}{b-c} = \frac{D_2^2}{a-c} = \frac{D_3^2}{a-b} =: D^2. \end{equation} \begin{figure}[H] \centering \includegraphics[clip, trim={380 200 380 200}, width=0.8\textwidth]{confocal-e-shift2.png} \caption{ A discrete ellipsoid ($n_1, n_2$ integer, $n_3 = {\rm const}$) from the system \refeq{E15} and its two adjacent layers from the dual net ($n_1, n_2$ half-integer, $n_3 = {\rm const} \pm \frac{1}{2}$). All faces are planar and orthogonal to the corresponding edges of the other net.} \end{figure} \begin{remark} The curvature lines of a smooth surface are characterized by the following properties: they form a conjugate net, and along each curvature line two infinitesimally close normals intersect. In the case of discrete confocal coordinates the edges of the dual net $\mbox{\boldmath $x$}(\mathcal U^*)$ can be interpreted as normals to the faces of the net $\mbox{\boldmath $x$}(\mathcal U)$. Since both nets have planar faces, any two neighboring normals intersect. Thus, extended edges of $\mbox{\boldmath $x$}(\mathcal U^*)$ constitute a \emph{discrete line congruence} normal to the faces of the Q-net (discrete conjugate net) $\mbox{\boldmath $x$}(\mathcal U)$ (cf. \cite{bobenko-suris} for the notion of a discrete line congruence).
\end{remark} The bilinear relations between a lattice point $\mbox{\boldmath $x$}(\mbox{\boldmath $n$})$ and its nearest neighbours $\mbox{\boldmath $x$}(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})$ may be formulated as follows: \bela{E23d} \begin{aligned} \displaystyle\frac{x(\mbox{\boldmath $n$})x(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{u+a} + \frac{y(\mbox{\boldmath $n$})y(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{u+b} + \frac{z(\mbox{\boldmath $n$})z(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{u+c} & = 1,\\[.6em] \displaystyle\frac{x(\mbox{\boldmath $n$})x(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{v+a} + \frac{y(\mbox{\boldmath $n$})y(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{v+b} + \frac{z(\mbox{\boldmath $n$})z(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{v+c} & = 1, \\[.6em] \displaystyle\frac{x(\mbox{\boldmath $n$})x(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{w+a} + \frac{y(\mbox{\boldmath $n$})y(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{w+b} + \frac{z(\mbox{\boldmath $n$})z(\mbox{\boldmath $n$}+\frac{1}{2}\boldsymbol{\sigma})}{w+c} & = 1, \end{aligned} \end{equation} provided that \bela{E23e} D^2 = \frac{1}{(a-b)(a-c)(b-c)}, \end{equation} and \bela{Z8} u=n_1+\tfrac{1}{4}\sigma_1-\tfrac{3}{4}, \quad v=n_2+\tfrac{1}{4}\sigma_2-\tfrac{5}{4}, \quad w=n_3+\tfrac{1}{4}\sigma_3-\tfrac{7}{4}. \end{equation} \subsection{Discrete umbilics and discrete focal conics} An interesting feature of discrete confocal quadrics which is not present in the two-dimensional case is obtained by considering the ``discrete umbilics'' (that is, vertices of valence different from 4) of the discrete ellipsoids $n_3 =\mbox{const}$ and the discrete two-sheeted hyperboloids $n_1=\mbox{const}$. 
In the case of the discrete ellipsoids, these have valence 2 and are located at $n_1=n_2=-\beta$ so that (\ref{E15}) reduces to the planar discrete curve \bela{G1} \mbox{\boldmath $x$}(n_3) = \left(\bear{c} D_1(\alpha-\beta-\frac{1}{2})\sqr{n_3 + \alpha - 1}\\[.6em] 0\\ D_3(\beta - \gamma - \frac{1}{2})\sqr{n_3 + \gamma}\end{array}\right). \end{equation} Once again, it turns out convenient to extend the domain of this one-dimensional lattice to the appropriate subset of $\mathbb{Z}\cup\mathbb{Z}^*$ so that \bela{G2} \frac{x(n_3)x(n_3+\frac{1}{2})}{\alpha-\beta-\frac{1}{2}} - \frac{z(n_3)z(n_3+\frac{1}{2})}{\beta-\gamma-\frac{1}{2}} = 1. \end{equation} The latter constitutes a discretization of the focal hyperbola \cite{sommerville} \bela{G3} \frac{x^2}{a-b} - \frac{z^2}{b-c} = 1 \end{equation} which is known to be the locus of the umbilical points of confocal ellipsoids. Similarly, evaluation of (\ref{E15}) at $n_2=n_3=-\gamma$ produces the planar discrete curve \bela{G4} \mbox{\boldmath $x$}(n_1) = \left(\bear{c} D_1(\alpha - \gamma - 1)\sqr{n_1 + \alpha}\\[.6em] D_2(\beta - \gamma - \frac{1}{2})\sqr{-n_1 - \beta}\\[.6em] 0 \end{array}\right) \end{equation} which consists of the discrete umbilics of the discrete two-sheeted hyperboloids. Extension to half-integers yields \bela{G5} \frac{x(n_1)x(n_1+\frac{1}{2})}{\alpha-\gamma-1} + \frac{y(n_1)y(n_1+\frac{1}{2})}{\beta-\gamma-\frac{1}{2}} = 1, \end{equation} which reproduces, in the formal continuum limit, the classical focal ellipse \bela{G6} \frac{x^2}{a-c} + \frac{y^2}{b-c} = 1 \end{equation} as the locus of the umbilical points of confocal two-sheeted hyperboloids.
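As a cross-check of the continuum limits \eqref{G3} and \eqref{G6}, both focal conics arise as the degenerate members of the classical confocal family $\frac{x^2}{u+a}+\frac{y^2}{u+b}+\frac{z^2}{u+c}=1$ at $u=-b$ and $u=-c$ respectively. A short symbolic sketch (assuming $a>b>c$):

```python
import sympy as sp

x, y, z, u, a, b, c = sp.symbols('x y z u a b c')

# Classical confocal family Q_u = 0, written as an expression.
Q = x**2 / (u + a) + y**2 / (u + b) + z**2 / (u + c) - 1

# u -> -b in the plane y = 0: the focal hyperbola of (G3).
focal_hyperbola = Q.subs(y, 0).subs(u, -b)
assert sp.simplify(focal_hyperbola - (x**2/(a - b) - z**2/(b - c) - 1)) == 0

# u -> -c in the plane z = 0: the focal ellipse of (G6).
focal_ellipse = Q.subs(z, 0).subs(u, -c)
assert sp.simplify(focal_ellipse - (x**2/(a - c) + y**2/(b - c) - 1)) == 0
```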
\section{Introduction} Over the past decade, there has been significant consideration given to topological phases in 3+1d, in addition to the 1+1d and 2+1d cases that have been more heavily studied in the past. So far, results for 3+1d topological phases range from the construction of classes of commuting projector models \cite{Walker2012, Wan2015, Williamson2017, Bullivant2017} to a potential classification of the bosonic phases in the absence of symmetry \cite{Lan2018, Lan2019}. 3+1d phases are intriguing for several reasons. Firstly, we live in a 3+1d world, and so there is a natural interest in studying such phases. Secondly, the properties of 3+1d topological phases are quite different from their 2+1d cousins. Unlike their 2+1d counterparts, point-like particles in 3+1d are not expected to have non-trivial braiding with each other (outside of the usual bosonic and fermionic cases) \cite{Doplicher1971, Doplicher1974}. However, in these higher-dimensional phases there exist loop-like excitations with non-trivial loop-braiding properties, so that exchange processes involving the loop-like excitations can result in a non-trivial transformation for the state of the system. Unfortunately, at present there are few examples of toy models where these loop-like excitations are constructed explicitly and the properties of the excitations (both point-like and loop-like) are studied in more depth. In this series of papers, we study one such toy model, based on higher lattice gauge theory \cite{Bullivant2017}, and aim to provide a detailed description of the excitations, and the conserved topological charge which they carry. \hfill As we have already seen in Refs. \cite{HuxfordPaper1, HuxfordPaper2}, the Hamiltonian model for topological phases based on higher lattice gauge theory \cite{Bullivant2017} hosts rich physics, including non-trivial loop braiding (in the 3+1d case), condensation and confinement. In Ref.
\cite{HuxfordPaper1}, we gave a brief qualitative description of these features, while in Ref. \cite{HuxfordPaper2} we examined the 2+1d model. Now we will give a more explicit and mathematically detailed treatment of the 3+1d model, with full proofs presented in the Supplementary Material. Our approach focuses heavily on the so-called ribbon and membrane operators, which produce (as well as move and annihilate) the point-like and loop-like excitations of the model respectively. Because loops are extended objects, they sweep out a surface as they move, rather than just a line as point particles would. Therefore, to produce and move loop excitations we need an operator that is defined across a general membrane rather than a line or ribbon. The ribbon and membrane operators can be used to find the braiding statistics \cite{Kitaev2003, Levin2005} of the excitations, while closed ribbon and membrane operators can be used to measure topological charge \cite{Bombin2008}. In addition, certain properties of the excitations, such as whether they are confined or not, can be obtained directly from these operators. In this paper, we therefore aim to provide and justify the mathematical forms of the ribbon and membrane operators, and demonstrate how we can extract all of the previously mentioned information about the excitations from them. \subsection{Structure of This Paper} In this paper, we will consider the 3+1d model in various cases (a summary of these cases is presented in Section \ref{Section_Recap_3d}, along with a brief reminder of the Hamiltonian model). For each case we define the membrane operators and find the effects of braiding in turn, before moving on to the next case. In Sections \ref{Section_3D_MO_Tri_Trivial} and \ref{Section_3D_Braiding_Tri_Trivial} we consider the case where one of the maps describing the model, $\rhd$, is trivial (Case 1 from Table \ref{Table_Cases}).
In Section \ref{Section_3D_MO_Tri_Trivial} we construct the ribbon and membrane operators for the theory and discuss the pattern of condensation and confinement exhibited by the excitations that they produce. In Section \ref{Section_3D_Braiding_Tri_Trivial} we use these operators to work out the braiding properties of these excitations, which involves passing loop or point particles through loops. In Sections \ref{Section_3D_MO_Fake_Flat} and \ref{Section_3D_Braiding_Fake_Flat} we repeat the construction of ribbon operators and the braiding for another special case, called the fake-flat case (Case 3 from Table \ref{Table_Cases}), while in Sections \ref{Section_3D_MO_Central} and \ref{Section_3D_Braiding_Central} we repeat it for another special case (Case 2 from Table \ref{Table_Cases}), which generalizes the $\rhd$ trivial case. \hfill Having found the membrane operators and effects of braiding, in Section \ref{Section_3D_Topological_Sectors} we move on to consider the topological charges of the model. These topological charges are conserved quantities carried by the excitations of the model. In 2+1d, to measure the topological charge in a spatial region, we simply put an operator on the boundary of that region. This boundary will be topologically equivalent to a circle (or multiple circles). However, in 3+1d there are more topologically distinct surfaces which can enclose our regions of interest. We can use these different surfaces to measure the loop-like and point-like charge carried by the excitations. Using a sphere as our surface of measurement, we can determine the point-like charge contained within the sphere. We present the corresponding charge measurement operators in Section \ref{Section_Sphere_Charge_Reduced}, and also find the charge carried by some simple excitations. However, to measure loop-like charge we need some surface with non-contractible loops. 
An important example is the torus, which we look at in some detail in Section \ref{Section_Torus_Charge}. We compare the number of topological charges that we can measure with the torus to the ground state degeneracy of the 3-torus and find that they are equivalent, in the broad cases that we look at, as previously reported in Ref. \cite{Bullivant2020}. Finally, in Section \ref{Section_conclusion_3d}, we summarize our results and propose further avenues of research based on this work. \hfill In the Supplementary Material, we present the proofs of our results that were too lengthy to include in the main text. We demonstrate the commutation relation between the energy terms and the ribbon and membrane operators in Section \ref{Section_ribbon_membrane_energy_commutation} (using some results from the 2+1d case discussed in Ref. \cite{HuxfordPaper2}). Then in Section \ref{Section_topological_membrane_operators} we demonstrate that the non-confined ribbon and membrane operators are \textit{topological}, meaning that we can deform them through unexcited regions of space without affecting their action, provided that we keep the locations of any excitations they produce fixed. In Section \ref{Section_magnetic_condensation}, we show that some of the magnetic loop excitations are condensed, and can be produced by operators only acting near the excitations (meaning that they cannot carry loop-like topological charge). Next, in Section \ref{Section_braiding_supplement} we find the braiding relations of the various excitations by explicitly calculating the appropriate commutation relations between the membrane and ribbon operators. Finally, in Section \ref{Section_topological_sectors_supplement} we construct the measurement operators for topological charge and demonstrate that they are projectors. 
\section{Summary of the Model} \label{Section_Recap_3d} In this section we will remind the reader of the Hamiltonian model we are studying, the higher lattice gauge theory model introduced in Ref. \cite{Bullivant2017}. We hope that this will provide a convenient place for the reader to refer back to for definitions of the various terms in the Hamiltonian, along with several useful identities. \hfill We are considering the model defined on a 3d lattice, representing the spatial degrees of freedom, with a Hamiltonian controlling the time evolution. The edges of the lattice are directed, while the plaquettes have a circulation and a base-point (a privileged vertex which we can think of as the start of the circulation). The edges are labelled by elements of a group $G$, and the plaquettes are labelled by elements of a second group, $E$. These groups are part of a \textit{crossed module}, which consists of the two groups and two maps, $\partial$ and $\rhd$. Here $\partial$ is a group homomorphism from $E$ to $G$, while $\rhd$ is a group homomorphism from $G$ to the automorphisms on $E$. That is, for each element $g \in G$, $g \rhd$ is a group isomorphism from $E$ to itself (so for $e \in E$, $g \rhd e$ is an element of $E$). These maps satisfy two additional constraints, called the Peiffer conditions \cite{Pfeiffer2003, Baez2002, Bullivant2017}: \begin{align} \partial(g \rhd e) &= g \partial(e)g^{-1} \ \forall g \in G, e \in E \label{Equation_Peiffer_1}\\ \partial(e) \rhd f &= efe^{-1} \ \forall e,f \in E \label{Equation_Peiffer_2}. \end{align} The Hamiltonian is given by a sum of projectors, with terms for the vertices, edges, plaquettes and blobs (3-cells) of the lattice \cite{Bullivant2017}: \begin{equation} H = - \hspace{-0.1cm} \sum_{\text{vertices, }v} \hspace{-0.4cm} A_v \: - \sum_{\text{edges, } i} \hspace{-0.2cm} \mathcal{A}_i \: -\hspace{-0.3cm}\sum_{\text{plaquettes, }p} \hspace{-0.5cm} B_p \: - \sum_{\text{blobs, }b} \hspace{-0.2cm} \mathcal{B}_b. 
\label{Equation_Hamiltonian_3d} \end{equation} The vertex terms are a sum of vertex transforms, and can be thought of as projecting to states that are 1-gauge invariant. That is $$A_v = \frac{1}{|G|} \sum_{g \in G} A_v^g,$$ where the vertex transforms have the algebra $A_v^g A_v^h =A_v^{gh}$, which implies that $A_v^g A_v =A_v$. This ensures that the ground states (which are eigenstates of $A_v$ with eigenvalue one) are invariant under the vertex transforms: $$A_v^g \ket{GS} = A_v^g A_v \ket{GS} =A_v \ket{GS} = \ket{GS}.$$ As for the specific action of the vertex transforms, they act on the edges adjacent to the vertex, as well as any plaquette whose base-point is that vertex. For an edge $i$ (initially labelled by $g_i$) or plaquette $p$ (initially labelled by $e_p$), we have \begin{align} A_v^g: g_{i} &\rightarrow \begin{cases} gg_{i} &\text{ if $v$ is the start of $i$}\\ g_ig^{-1} &\text{ if $v$ is the end of $i$}\\ g_{i} &\text{ otherwise} \end{cases} \notag\\ A_v^g : e_p &\rightarrow \begin{cases} g \rhd e_p &\text{if $v$ is the base-point of $p$}\\ e_p &\text{otherwise.} \label{Equation_vertex_transform_definition}\end{cases} \end{align} Similarly, the edge term is a sum of edge transforms (2-gauge transforms) $$\mathcal{A}_i = \frac{1}{|E|} \sum_{e \in E} \mathcal{A}_i^e,$$ which satisfy a similar algebra to the vertex transforms: $\mathcal{A}_i^e \mathcal{A}_i^f =\mathcal{A}_i^{ef}$. This ensures that individual edge transforms can be absorbed into the corresponding edge term, and into the ground state: $\mathcal{A}_i^e \mathcal{A}_i = \mathcal{A}_i$ and $\mathcal{A}_i^e \ket{GS} = \ket{GS}$. 
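The algebra $A_v^g A_v^h = A_v^{gh}$ of the vertex transforms can be checked concretely on a toy configuration. The sketch below is an illustration only: it takes a single hypothetical vertex with one outgoing and one incoming edge, with labels in $S_3$ represented as permutation tuples, and applies the edge part of the transform \eqref{Equation_vertex_transform_definition}:

```python
from itertools import permutations

def mul(p, q):
    # compose permutations given as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def A(g, state):
    # vertex transform on (outgoing edge, incoming edge): left-multiply
    # outgoing labels by g, right-multiply incoming labels by g^{-1}
    out_edge, in_edge = state
    return (mul(g, out_edge), mul(in_edge, inv(g)))

S3 = list(permutations(range(3)))
state = ((1, 0, 2), (2, 0, 1))

# The transforms compose as the group does: A^g A^h = A^{gh}.
assert all(A(g, A(h, state)) == A(mul(g, h), state) for g in S3 for h in S3)
```

The same bookkeeping, extended to plaquette labels via $g \rhd$, reproduces the full transform of \eqref{Equation_vertex_transform_definition}.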
An edge transform $\mathcal{A}_i^e$ applied on an edge $i$ acts on the label of edge $i$ itself, as well as the labels of the adjacent plaquettes: \begin{align} \mathcal{A}_i^e: g_{j} &\rightarrow \begin{cases} \partial(e) g_{j} &\text{ if $i=j$}\\ g_{j} &\text{ otherwise} \end{cases} \notag \\ \mathcal{A}_i^e : e_p &\rightarrow \begin{cases} e_p [g(v_0(p) - s(i)) \rhd e^{-1}] &\text{if $i$ is on $p$ and}\\& \text{aligned with $p$}\\ [g(\overline{v_0(p) - s(i)}) \rhd e] e_p &\text{if $i$ is on $p$ and}\\& \text{aligned against $p$}\\ e_p &\text{otherwise.} \end{cases} \label{Equation_edge_transform_definition} \end{align} Here $s(i)$ is the source of edge $i$, which is the vertex attached to $i$ that $i$ points away from (with the vertex on the other end of $i$ being called the target). $g(v_0(p) - s(i))$ is the path element for the path from the base-point of plaquette $p$ to this source, running around the plaquette and aligned with the plaquette. On the other hand $g(\overline{v_0(p) - s(i)})$ is the path element for the path around the plaquette from $v_0(p)$ to $s(i)$, but this time anti-aligned with the plaquette. \hfill The next energy term is the plaquette term, $B_p$, which enforces so-called \textit{fake-flatness}. This is similar to the plaquette term from Kitaev's Quantum Double model, in that it restricts which labels the boundary of a plaquette can have. Unlike the term from the Quantum Double model however, the plaquette term in higher lattice gauge theory relates the label of the boundary to the surface label of the plaquette itself, rather than requiring the boundary label to be trivial as for the Quantum Double model. For a plaquette whose boundary (starting at the base-point and aligned with the circulation of the plaquette) has path label $\hat{g}_p$, and whose surface label is $\hat{e}_p$, the plaquette term $B_p$ acts as \begin{equation} B_p =\delta(\partial(\hat{e}_p)\hat{g}_p, 1_G). 
\end{equation} A plaquette which satisfies this Kronecker delta is called fake-flat, and a surface made from fake-flat plaquettes is also called fake-flat. Such a fake-flat surface will satisfy a similar condition on its surface and boundary labels. As we showed in Ref. \cite{HuxfordPaper2}, for a surface $m$ whose constituent plaquettes satisfy fake-flatness, the overall surface element will satisfy $$\partial(\hat{e}(m))\hat{g}_{dm}=1_G,$$ where $\hat{g}_{dm}$ is the group element associated to the boundary of $m$ and the total surface element is constructed by combining individual surface elements, as explained in Ref. \cite{Bullivant2017} and as we will summarize shortly. \hfill The final energy term is the blob term, $\mathcal{B}_b$, which enforces that the surface element of the boundary of the blob (calculated from the plaquettes on that blob using the rules for combining surfaces explained in Ref. \cite{Bullivant2017}, which we will describe shortly) is equal to the identity element $1_E$. That is \begin{equation} \mathcal{B}_b = \delta(\hat{e}(b),1_E), \end{equation} for blob $b$ with surface element $\hat{e}(b)$. \hfill A key idea in the higher lattice gauge theory model is that we can compose edges into paths and plaquettes into surfaces, with composite objects appearing throughout the description of the model as well as in the ribbon and membrane operators. We will therefore briefly review the rules for this kind of combination. First, consider composing edges into paths. If two edges (or more general paths) lie end-to-end, then we can combine them into one path, with a group label given by the product of the elements for the two edges. Then to combine multiple edges into a path, we take a product of all of the constituent edge labels, with the first edge on the path appearing on the left of the product. 
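These composition rules, together with the rule (illustrated in Figure \ref{path_image_paper_3}) that edges traversed against the path enter with an inverse, can be sketched in a few lines. The example uses hypothetical edge labels in $S_3$, stored as permutation tuples, with the second and third edges anti-aligned:

```python
def mul(p, q):
    # compose permutations given as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# (label, alignment): +1 if the edge points along the path, -1 if against.
edges = [((1, 0, 2), +1), ((0, 2, 1), -1), ((1, 2, 0), -1), ((2, 1, 0), +1)]

g_t = (0, 1, 2)                # identity element: the empty path
for g, sign in edges:          # earlier edges sit to the left of the product
    g_t = mul(g_t, g if sign == +1 else inv(g))

print(g_t)  # the path element g(t) = g1 g2^{-1} g3^{-1} g4
```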
If one or more of the edges points against the path (for example, the edges labelled by $g_2$ and $g_3$ in Figure \ref{path_image_paper_3}), then we include the edge label in the path element with an inverse. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.97\linewidth]{PathIndicationImageNoText.png} \put(19,30){\textcolor{gray}{\large $g_1$}} \put(38,30){\textcolor{gray}{\large $g_2^{-1}$}} \put(59,30){\textcolor{gray}{\large $g_3^{-1}$}} \put(79,30){\textcolor{gray}{\large $g_4$}} \put(20,17){\large $g_1$} \put(37,17){\large $g_2$} \put(60,17){\large $g_3$} \put(78,17){\large $g_4$} \put(1,10){start of path $t$} \put(79,10){end of path $t$} \put(30,2){\large $g(t)=g_1g_2^{-1}g_3^{-1}g_4$} \end{overpic} \caption{(Copy of Figure 41 from Ref. \cite{HuxfordPaper1}.) An electric ribbon operator measures the value of a path and assigns a weight to each possibility, creating excitations at the two ends of the path. In order to find the group element associated to the path, we must first find the contribution of each edge to the path. In this example, the edges along the path are shown in black. Some of the edges are anti-aligned with the path and so we must invert the elements associated to these edges to find their contribution to the path. This is represented by the grey dotted lines, which are labelled with the contribution of each edge to the path.} \label{path_image_paper_3} \end{center} \end{figure} \hfill Next consider composition of plaquettes, or more generally surfaces. Surfaces have both an orientation and a privileged vertex, called the base-point, which we can view as the start of the circulation. We represent this by drawing a circulating arrow in the plaquette which connects to the boundary at the base-point, as illustrated in Figure \ref{combine_two_surfaces_elements}. When we combine two adjacent plaquettes, we must ensure that the base-points and orientations of the plaquettes both agree. 
If they do, as in the example shown in Figure \ref{combine_two_surfaces_elements}, we can combine the plaquettes into a single surface whose label is a product of the two plaquettes. Contrary to the case of paths, the plaquette appearing first in the circulation is represented in the rightmost position of the product. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{combine_two_surfaces_image.png} \put(45,12){\Huge $\rightarrow$} \put(43,10){Combine} \put(11,13){$A$} \put(29,13){$B$} \put(81,13){$AB$} \end{overpic} \caption{Two adjacent surfaces can be combined into one if their base-point (represented by the yellow dot) and circulation (represented by the blue arrow) match. The label of the combined surface is given by the product of the two individual elements in reverse order. That is, if the surfaces $A$ and $B$ have labels $e_A$ and $e_B$ respectively, then the combined surface has label $e_{AB}=e_B e_A$. If two adjacent surfaces do not have the same base-point and orientation then we can still combine them by using a set of rules that describe what happens when we change the orientation or move the base-point of a surface.} \label{combine_two_surfaces_elements} \end{center} \end{figure} \hfill While this simple procedure works if the base-points and orientations of the two plaquettes agree, we will often want to combine adjacent plaquettes for which this is not the case (similar to how we want to combine edges into paths even if their orientations are not all aligned). In this case, we need a procedure for changing the base-point and orientation of a plaquette, and describing the label the plaquette would have with this new decoration. As described in Ref. \cite{HuxfordPaper1}, we can reverse the orientation of a plaquette (while keeping its base-point fixed) by inverting its group label, as shown in Figure \ref{flip_plaquette}. 
If we want to move the base-point along a path $t$, as shown in Figure \ref{move_basepoint}, then we must act on that plaquette element with $g(t)^{-1} \rhd$, so that the plaquette label goes from $e_p$ to $g(t)^{-1} \rhd e_p$. When moving the base-point in this way, we can either move it along the boundary of the plaquette, as shown in the bottom-left of Figure \ref{move_basepoint}, or we can move it away from the boundary, as shown in the top-right of Figure \ref{move_basepoint}. Combining these two procedures, we see that the general formula for the label of a composite surface $m$ is \begin{equation} \hat{e}(m)= \prod_{p \in m} g(v_0(m)-v_0(p)) \rhd e_p^{\sigma_p}, \end{equation} where the $p \in m$ are the constituent plaquettes; $v_0(p)$ is the original base-point of plaquette $p$; $v_0(m)$ is the base-point of the combined surface; and $\sigma_p$ is 1 if the circulation of plaquette $p$ matches the surface and $-1$ otherwise. Note that this formula hides certain complexities, such as the order of the product and the precise definition of the paths $(v_0(m)-v_0(p))$, but often we care about situations where these details do not matter (for example if $E$ is Abelian the order does not matter, and if the surface is also fake-flat then the paths only need to be defined up to deformation). \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{flip_plaquette_image.png} \put(43,20){\Huge $\rightarrow$} \put(16,20){$e_p$} \put(77,20){$e_p^{-1}$} \put(0,2){$v_0(p)$} \put(60,2){$v_0(p)$} \put(44,25){flip} \ \end{overpic} \caption{Given a plaquette with label $e_p$, the label of the corresponding plaquette with the opposite orientation is $e_p^{-1}$. 
Note that when we reverse the orientation of a plaquette, we leave its base-point, here $v_0(p)$, in the same position.} \label{flip_plaquette} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{overpic}[width=0.95\linewidth]{square_plaquette_move_base-point_image.png} \put(45,70){\Huge $\rightarrow$} \put(22,45){\Huge $\downarrow$} \put(40,80){\parbox{1.5cm}{move \\ base-point}} \put(0,48){\parbox{2cm}{move \\ base-point}} \put(19,76){$e_p$} \put(14,18){$g(t)^{-1} \rhd e_p$} \put(72,76){$g(t)^{-1} \rhd e_p$} \put(58,50){$t^{-1}$} \put(2,15){$t$} \put(2,58){$v_0(p)$} \put(2,35){$v_0(p)'$} \put(60,28){$v_0(p)'$} \end{overpic} \caption{We can move the base-point of a surface, either along the boundary of the surface (resulting in the case shown in the bottom-left) or away from the surface (in which case we say that we whisker the surface, and obtain the situation shown in the top-right). When we move the base-point of plaquette $p$ along a path $t$ in this way, the surface label goes from $e_p$ to $g(t)^{-1} \rhd e_p$.} \label{move_basepoint} \end{center} \end{figure} \hfill One useful way of checking whether we have correctly composed two surfaces is to examine the boundary of the combined surface. The boundary of a surface made by composing two other surfaces is the product of the two individual boundaries, once we have ensured that the orientations and base-points of the two surfaces agree. This product of the boundaries follows from the rules for composing paths given previously. As an example, consider Figure \ref{combine_boundaries}. The boundary of the left surface ($m_1$) in the top image is $i_1 i_2 i_3 i_4^{-1}$ (note that the boundary starts at the base-point and follows the orientation of the plaquette), while the boundary of the right surface ($m_2$) is $i_4i_5i_6 i_7^{-1}$. Here $i_x$ represents an edge, rather than an edge label. The boundary of the combined surface is therefore $i_1 i_2 i_3 i_4^{-1} i_4i_5i_6 i_7^{-1}$.
This is the path shown in red in the upper image of Figure \ref{combine_boundaries}. We can simplify this path by removing the section $i_4^{-1}i_4$, to give the boundary $i_1 i_2 i_3 i_5 i_6 i_7^{-1}$ shown in the lower image. This rule for combining boundaries ensures that the total surface satisfies fake-flatness, if the two constituent surfaces do. If the surface label of surface $m_x$ (for $x= 1$ or 2) is $e_x$, and the boundary bd$(m_x)$ has label $t_x$, then the surface $m_1$ satisfies fake-flatness when $$\partial(e_1)t_1=1_G$$ and the surface $m_2$ satisfies fake-flatness when $$\partial(e_2)t_2=1_G.$$ The combined surface has label $e_2e_1$ and boundary label $t_1t_2$ (note the opposite order of composition for the paths and surfaces). If the two constituent surfaces satisfy fake-flatness, then the combined surface label satisfies \begin{align*} \partial(e_2e_1)t_1t_2&= \partial(e_2)(\partial(e_1)t_1)t_2\\ &=\partial(e_2) 1_G t_2\\ &=1_G, \end{align*} so the total surface satisfies a fake-flatness condition, as we claimed earlier. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.95\linewidth]{combine_boundaries_image_2.png} \put(25,60){$i_1$} \put(2,75){$i_2$} \put(25,96){$i_3$} \put(38,75){$i_4$} \put(56,96){$i_5$} \put(70,75){$i_6$} \put(56,60){$i_7$} \put(25,7){$i_1$} \put(2,22){$i_2$} \put(25,43){$i_3$} \put(38,22){$i_4$} \put(56,43){$i_5$} \put(70,22){$i_6$} \put(56,7){$i_7$} \put(20,79){$m_1$} \put(53,79){$m_2$} \end{overpic} \caption{When we combine two surfaces (top image) into one (bottom image), the boundary of the combined surface is the product of the two individual boundaries (here the boundary path is represented in red). This boundary can be simplified by removing edges that appear twice consecutively in the boundary with opposite orientation.
In this case the combined boundary includes $i_4^{-1}i_4$ in the top image (this is the section that dips down in the image), which can be removed to give the boundary shown in the bottom image.} \label{combine_boundaries} \end{center} \end{figure} \hfill Next, we want to remind the reader of the various special cases in which we consider the model. In the most general case of the model, the projectors in the Hamiltonian (specifically the edge and blob terms) no longer commute \cite{Bullivant2017}. Furthermore, there are inconsistencies with regard to changing the branching structure of the lattice (reversing the orientation of edges or plaquettes, or moving the base-points of plaquettes around), as we showed in the Appendix of Ref. \cite{HuxfordPaper1}. We therefore consider the higher lattice gauge theory model in various special cases that remove these inconsistencies, or at least make them more manageable. In the first such special case (Case 1 in Table \ref{Table_Cases}), we take the map $\rhd$ to be trivial, so that each map $g \rhd$ is the identity map ($g \rhd e =e $ for all $g \in G$ and $e \in E$). This leads to a model very similar in character to Kitaev's Quantum Double model, but with some additional excitations (and with condensation and confinement). Because of the Peiffer conditions, taking this form for $\rhd$ enforces that $E$ be an Abelian group and that $\partial$ maps into the centre of $G$. In the second special case (Case 2 in Table \ref{Table_Cases}), we instead take these properties as our starting point, allowing $\rhd$ to be general, subject to the constraint that $E$ is Abelian and $\partial$ maps into the centre of $G$. In the final case (Case 3 in Table \ref{Table_Cases}), we do not place any conditions on our crossed module, but instead restrict the Hilbert space to only include fake-flat states (states where all of the plaquette terms are satisfied).
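The claim that a trivial $\rhd$ forces $E$ to be Abelian is a direct consequence of the second Peiffer condition \eqref{Equation_Peiffer_2}: with $g \rhd e = e$, it reads $f = efe^{-1}$ for all $e, f \in E$. A brute-force sketch with the deliberately non-Abelian candidate $E = S_3$ (represented as permutation tuples; an illustration, not a crossed module used in the text) confirms that such an $E$ violates the condition:

```python
from itertools import permutations

def mul(p, q):
    # compose permutations given as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

S3 = list(permutations(range(3)))

# With trivial |>, Peiffer 2 demands f == e f e^{-1} for all e, f in E.
violations = [(e, f) for e in S3 for f in S3
              if mul(mul(e, f), inv(e)) != f]

print(len(violations))  # 18 of the 36 ordered pairs fail: S_3 is non-Abelian
```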
\begin{table}[h] \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline & & & & Full\\ Case & $E$ & $\rhd$ & $\partial(E)$ & Hilbert \\ & & & &Space\\ \hline 1 & Abelian & Trivial & $\subset$ centre($G$) & Yes\\ 2 & Abelian & General & $\subset$ centre($G$) & Yes\\ 3 & General & General & General & No \\ \hline \end{tabular} \caption{A reminder of the special cases of the model.} \label{Table_Cases} \end{center} \end{table} \section{Ribbon and Membrane Operators in the $\rhd$ Trivial Case} \label{Section_3D_MO_Tri_Trivial} Firstly, we consider Case 1 from Table \ref{Table_Cases}, the case where $\rhd$ is trivial ($g \rhd e=e \: \: \forall e \in E, \: g \in G$), which enforces that $E$ is Abelian. \subsection{Electric Excitations} \label{Section_Electric_Tri_Trivial} The first excitations to consider are the electric excitations. The ribbon operators that produce the electric excitations in 3+1d have the same form as the ones for the 2+1d case that we considered in Ref. \cite{HuxfordPaper2}. That is, an electric ribbon operator measures the group element of a path and assigns a weight depending on the measured group element. As we claimed in Ref. \cite{HuxfordPaper1}, an electric ribbon operator applied on a path $t$ has the form \begin{equation*} \hat{S}^{\vec{\alpha}}(t)= \sum_{g \in G} \alpha_{g} \delta( \hat{g}(t), g), \end{equation*} where $\vec{\alpha}$ is an arbitrary set of coefficients $\alpha_g$, one for each group element $g \in G$, and different choices for these coefficients describe different operators in a space of ribbon operators. A useful basis for this space has basis operators that are labelled by irreps of the group $G$ and the matrix indices for that irrep. These basis electric ribbon operators have the form \begin{equation*} \hat{S}^{R,a,b}(t)= \sum_{g \in G} [D^{R}(g)]_{ab} \delta( \hat{g}(t), g), \end{equation*} where $R$ is an irrep of $G$, $D^{R}(g)$ is the associated matrix representation of element $g$, and $a$ and $b$ are the matrix indices.
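For a concrete example of the matrix weights $[D^{R}(g)]_{ab}$ appearing in these basis operators, the sketch below takes $G = S_3$ (our own illustrative choice, not singled out by the model) and builds its two-dimensional "standard" irrep by restricting the permutation representation on $\mathbb{R}^3$ to the invariant plane orthogonal to $(1,1,1)$.

```python
import numpy as np
from itertools import permutations

# Illustrative construction of the matrix weights [D^R(g)]_{ab} entering the
# basis electric ribbon operators, for the (hypothetically chosen) group
# G = S_3 and its 2d "standard" irrep: the permutation representation on R^3
# restricted to the invariant plane orthogonal to (1, 1, 1).
group = list(permutations(range(3)))  # the six elements of S_3

# Orthonormal basis of the invariant plane.
basis = np.array([[1.0, -1.0, 0.0], [1.0, 1.0, -2.0]])
basis /= np.linalg.norm(basis, axis=1, keepdims=True)

def D(g):
    """Irrep matrix of the permutation g (g maps i to g[i])."""
    P = np.zeros((3, 3))
    for i in range(3):
        P[g[i], i] = 1.0  # permutation matrix: e_i -> e_{g(i)}
    return basis @ P @ basis.T

def compose(g, h):
    """Group multiplication: (gh)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

# D is a homomorphism, as required of a representation ...
for g in group:
    for h in group:
        assert np.allclose(D(g) @ D(h), D(compose(g, h)))

# ... and the identity permutation is represented by the identity matrix,
# consistent with the trivial-irrep operator being the identity operator.
assert np.allclose(D((0, 1, 2)), np.eye(2))
```

The basis ribbon operator $\hat{S}^{R,a,b}(t)$ then weights a configuration whose path label is $g$ by the entry $\mathtt{D(g)[a,b]}$.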
As we proved in Ref. \cite{HuxfordPaper2} (in the 2+1d case, although the proof also holds for 3+1d), the operators labelled by non-trivial irreps excite the vertices at the ends of the path, whereas the operator labelled by the trivial irrep is the identity operator (which of course does not create any excitations). In addition, just as in the 2+1d case discussed in Ref. \cite{HuxfordPaper2}, the irreps that have non-trivial restriction to the image of $\partial$ label confined excitations. The electric ribbon operators labelled by such irreps cause the edges along the path to be excited, so the excitations produced at the ends of the ribbon are confined. \subsection{Magnetic Excitations} \label{Section_3D_Tri_Trivial_Magnetic_Excitations} Unlike the electric excitations, the magnetic excitations in 3+1d are significantly different from their counterparts in 2+1d. Whereas in 2+1d the magnetic excitations are point particles that are produced in pairs by a ribbon operator (as we described in Ref. \cite{HuxfordPaper2}), in 3+1d the elementary magnetic excitation is a ``flux tube" at the boundary of a membrane. That is, the magnetic excitations are loop-like. We can see that the magnetic excitations must be loop-like by trying to excite a single plaquette. We try changing the value of a single edge belonging to that plaquette. However, as shown in Figure \ref{magnetic_step_by_step_2}, in 3+1d each edge belongs to multiple plaquettes (as opposed to two plaquettes when there are only two spatial dimensions). Therefore, changing the label of an edge excites all of the plaquettes around that edge. We can then put one of these plaquettes back into a lower energy state by changing the label of another edge on that plaquette, but this in turn excites all of the other plaquettes attached to that edge (see the second image in Figure \ref{magnetic_step_by_step_2}). 
We see that these excited plaquettes lie on a closed loop that pierces their centres, as shown by the blue loops in Figure \ref{magnetic_step_by_step_2}. This is made clearer by considering changing more edges. Instead of changing edges along a line, we consider changing edges across some surface (such as the four edges shown in the third image of Figure \ref{magnetic_step_by_step_2}, which lie on a square). Changing these edges excites the plaquettes on the boundary of that surface, much as the ribbon operators in 2+1d excite particles at the ends of some path. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{plaquette_excitation_additional_step.png} \put(40,58){\huge $\rightarrow$} \put(73,40){\huge $\downarrow$} \put(45,20){\huge $\leftarrow$} \end{overpic} \caption{(Copy of Figure 44 from Ref. \cite{HuxfordPaper1}.) In order to excite one of the plaquettes in the lattice and produce a magnetic excitation, we change the label of one of the edges (black cylinders) on the boundary of the plaquette. However, this excites all of the plaquettes (excited plaquettes are shown in red) adjacent to that edge, as shown in the first image. Note that these plaquettes lie on a closed loop through their centres. If we change another edge label to try to prevent some of the plaquette excitations, we will excite the other plaquettes adjacent to that edge, as shown in the second image. Repeating the process by changing the edges shown in black excites plaquettes along a closed loop (blue). Changing more edges simply changes the shape of this loop (unless we change the edge labels back and shrink the loop to nothing). This tells us that the magnetic excitations are loop-like.} \label{magnetic_step_by_step_2} \end{center} \end{figure} \hfill The fact that we produce a loop excitation by changing edges across a surface rather than just along a path indicates that our creation operator is a membrane operator.
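This counting argument can be made concrete in a toy $\mathbb{Z}_2$ version (our own simplification, not the full higher gauge theory): indexing the cut edges by the cells of a flat patch of the dual membrane, every bulk plaquette sees two flipped edges, which cancel, so only plaquettes along the patch boundary are excited.

```python
# Toy Z_2 version (our own simplification) of the argument above. Cut edges
# are indexed by cells of a 3x3 patch of the dual membrane; a plaquette sits
# between each pair of neighbouring cells (including neighbours just outside
# the patch) and is excited iff exactly one of its two cells is flipped.
flipped = {(x, y) for x in range(3) for y in range(3)}

excited = set()
for x in range(-1, 4):
    for y in range(-1, 4):
        for dx, dy in ((1, 0), (0, 1)):
            a, b = (x, y), (x + dx, y + dy)
            if (a in flipped) != (b in flipped):
                excited.add((a, b))

# Bulk plaquettes see two flipped edges and cancel; the excited plaquettes
# trace out the boundary of the patch (perimeter of a 3x3 block = 12).
assert len(excited) == 12
print("excited plaquettes lie only on the patch boundary")
```

Enlarging or deforming the patch only reshapes this boundary loop, mirroring the discussion of Figure \ref{magnetic_step_by_step_2}.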
While we have given a rough idea of the action of the membrane operator in the above discussion, we will now be more specific. In order to define the operator that produces a magnetic excitation, we must specify the region (the ``membrane") that this operator acts on. First we specify a membrane that passes through the centres of plaquettes and cuts through edges. The edges cut by the membrane (``cut edges") are acted on by the operator, as shown in Figure \ref{fluxmembrane2}. This membrane is called the dual membrane, and is analogous to the dual path for the magnetic ribbon operator in 2+1d. In addition to the dual membrane, we must specify a ``direct membrane". The cut edges terminate on this membrane. That is, the direct membrane contains one vertex at the end of each of the cut edges, as shown in Figure \ref{fluxmembrane2} (in special cases, with tightly folded membranes, both ends may be on the direct membrane and an edge may be cut twice by the dual membrane). We must also specify a set of paths to the vertices on the direct membrane. These paths go from a common start-point to the base of each cut edge (that is, to the vertex that lies on the direct membrane). We call this common start-point of the paths the start-point of the membrane or of the membrane operator. These features of the membrane operator are illustrated in Figure \ref{fluxmembrane2}. \hfill The fact that we specify two membranes as part of the magnetic membrane operator indicates that our ``membrane operator" really acts on a ``thickened" membrane, much as the ribbon operators in 2+1d can be considered as acting on thickened strings, that is on ribbons (i.e. their support has some finite thickness). Regardless, we will continue to refer to these operators as membrane operators. This unfortunately means that our use of the term membrane is somewhat ambiguous. Sometimes we mean a ``thickened membrane" and sometimes just an unthickened membrane. 
Generally we try to use membrane to refer to the region on which our membrane operators act, whether those regions are thickened or otherwise. If we want to refer to a surface that may not be part of a membrane operator, we will call this a surface. If we want to refer to part of a thickened membrane, we will use the terms direct and dual membranes. \hfill Having specified these features of the membrane, we can now describe the action of the magnetic membrane operator. The membrane operator acts on the edges cut by the dual membrane, in a way that depends on the direct membrane and the paths we defined. This is analogous to how the action of the magnetic ribbon operator in the 2+1d case depends on a direct path and a dual path (see Ref. \cite{HuxfordPaper2}). The membrane operator is labelled by a group element $h \in G$, but the label of each cut edge $i$ is left-multiplied by $g(s.p-v_i)^{-1}hg(s.p-v_i)$ or right-multiplied by the inverse, where $g(s.p-v_i)$ is the group element associated to the path specified from the start-point $s.p$ to the vertex $v_i$ on the direct membrane that is attached to the cut edge $i$. Whether left-multiplication or right-multiplication by the inverse is used depends on the orientation of the edge. The edge label is left-multiplied if the edge points away from the direct membrane (as with the example edge in Figure \ref{fluxmembrane2}) and is right-multiplied by the inverse element if it points towards the membrane. 
That is, the action of the membrane operator on an edge $i$ with initial label $g_i$ is given by \begin{equation} C^h(m): g_i \mapsto \begin{cases} &g(s.p-v_i)^{-1}hg(s.p-v_i)g_i \\ & \hspace{0.4cm} \text{if $i$ points away from the} \\ & \hspace{0.5cm} \text{ direct membrane} \\ & g_ig(s.p-v_i)^{-1}h^{-1}g(s.p-v_i) \\& \hspace{0.4cm} \text{if $i$ points towards the} \\ & \hspace{0.5cm} \text{ direct membrane.} \end{cases} \label{Equation_magnetic_membrane_on_edges_main_text} \end{equation} The only difference compared to the action of the magnetic ribbon operator in 2+1d is that the operator acts on a general membrane, rather than just a ribbon. In particular, this means that instead of having a direct path along a ribbon, we have multiple paths across a membrane. For a given edge $i$, there are many potential choices for the path from the start-point to the edge. However, the action of the membrane operator is unaffected by deforming any of these paths over a region satisfying fake-flatness. This is because deforming the path in this way only changes the path element $g(s.p-v_i)$ by an element $\partial(e)$ in $\partial(E)$. This factor of $\partial(e)$ is in the centre of $G$ and so it does not affect the expression $g(s.p-v_i)^{-1}hg(s.p-v_i)$, which just gains a factor of $\partial(e)$ and a factor of $\partial(e)^{-1}$ that can be moved together and cancelled.
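This insensitivity to the choice of path can be checked directly in a small example. The sketch below models $G$ as the dihedral group $D_4$, encoded as $\mathbb{Z}_4 \rtimes \mathbb{Z}_2$; this particular group is a purely illustrative choice of ours, used because its centre contains a non-trivial element that can stand in for a factor $\partial(e)$.

```python
# Check that the conjugated flux label g(t)^{-1} h g(t) is unchanged when the
# path label g(t) picks up a central factor (standing in for partial(e)).
# G is modelled as the dihedral group D_4 = Z_4 x| Z_2 (illustrative choice);
# elements are pairs (r, s) with r a rotation and s a reflection flag.

def mul(a, b):
    r1, s1 = a
    r2, s2 = b
    return ((r1 + (r2 if s1 == 0 else -r2)) % 4, (s1 + s2) % 2)

def inv(a):
    r, s = a
    return ((-r) % 4, 0) if s == 0 else a  # reflections are involutions

def conj(g, h):
    """The element multiplied onto a cut edge: g^{-1} h g."""
    return mul(mul(inv(g), h), g)

elements = [(r, s) for r in range(4) for s in range(2)]
central = (2, 0)  # non-trivial central element, playing the role of partial(e)

# Deforming the path over a fake-flat region replaces g by central * g;
# the conjugated label, and hence the membrane operator's action, is unchanged.
for g in elements:
    for h in elements:
        assert conj(mul(central, g), h) == conj(g, h)
print("central path deformations leave the action invariant")
```

The same cancellation of $\partial(e)$ against $\partial(e)^{-1}$ described in the text is what makes every assertion in the double loop pass.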
\hfill \begin{figure}[h] \begin{center} \begin{overpic}[width=\linewidth]{Magnetic_Membrane_Alternate_3_cropped.png} \put(0,5){start-point} \put(60,5){\parbox{4cm}{\linespread{0.5}\selectfont\raggedright dual\\ membrane}} \put(76,13){\parbox{4cm}{\linespread{0.5}\selectfont\raggedright direct\\ membrane}} \put(78,90){ cut edges} \put(28,0){\parbox{4cm}{\linespread{0.5}\selectfont\raggedright example\\ path $t_i$}} \put(0,93){\parbox{4cm}{\linespread{0.5}\selectfont\raggedright excited\\ plaquettes}} \put(30,99){\parbox{4cm}{\linespread{0.5}\selectfont\raggedright example action: \\ $g_i \rightarrow g(t_i)^{-1}hg(t_i)g_i$}} \end{overpic} \caption{(Copy of Figure 45 from Ref. \cite{HuxfordPaper1}.) Here we give an example of the membranes for the flux creation operator (magnetic membrane operator). The dual membrane (green) cuts through the edges changed by the operator. The direct membrane (blue) contains a vertex at the end of each of these cut edges (such as the orange sphere). A path from a privileged start-point to the end of the edge (such as the example path, $t_i$) determines the action on the edge. This action leads to the plaquettes around the boundary of the membrane being excited.} \label{fluxmembrane2} \end{center} \end{figure} Now that we have described the action of the membrane operator, we can discuss which of the energy terms are excited by the membrane operator. Firstly, note that if the membrane operator is labelled by $1_G$ then the membrane operator is just the identity operator and so will not excite any of the energy terms. From now on we will assume that we are talking about a membrane operator labelled by some non-trivial element of $G$. In this case the membrane operator excites the ``boundary plaquettes", which are plaquettes where only one edge on the plaquette is changed by the membrane operator (rather than two for the ``bulk" plaquettes).
These boundary plaquettes lie around the perimeter of the membrane (they are the red plaquettes in Figure \ref{fluxmembrane2}) and we can construct a closed path around the boundary of the dual membrane which passes through these plaquettes. \hfill In addition to these plaquettes, the magnetic membrane operator may also excite the privileged start-point vertex that we defined previously, just as we saw in the 2+1d case for the magnetic ribbon operator in Ref. \cite{HuxfordPaper2}. In order for the magnetic membrane operator to produce an eigenstate of this vertex energy term when acting on an initially unexcited region, we need to construct a linear combination of the magnetic membrane operators labelled by elements of $G$. If the coefficients for this linear combination are a function of conjugacy class (that is we have an equal sum over all elements of the conjugacy class), the vertex is not excited. On the other hand, if the coefficients within each conjugacy class sum to zero, then the vertex is excited. In any other case (such as when we do not take a superposition of our operators), the start-point is neither definitely excited nor definitely unexcited, because we do not produce an energy eigenstate. While in Figure \ref{fluxmembrane2} the start-point is next to the loop-like excitation (at the edge of the membrane), the start-point can be displaced any distance from the excited loop (or even away from the membrane). The position of the start-point can be interpreted in terms of the picture of the ribbon and membrane operators creating and moving excitations. We can think of the membrane as corresponding to the process where we nucleate a loop at the start-point and then grow and move the loop along the membrane to its final position. The fact that the start-point may be excited far from the loop suggests that it can be treated as an additional particle. Therefore, when we produce a loop, we may also have to produce a point-particle. 
This is similar to how point-like charges must be produced in pairs in order to conserve topological charge. Indeed we will see in Section \ref{Section_Sphere_Charge_Reduced} that some loop-like excitations carry a non-trivial ``point-like" conserved charge, which must be balanced by the charge carried by the additional excitation at the start-point. \hfill The magnetic membrane operator described above is a creation operator for our flux tube, which runs around the perimeter of the membrane. The membrane operator creates a flux tube (and its associated vertex excitation) from the vacuum. Another relevant operator is the one that moves an existing flux tube to a new position. The movement operator can be thought of as an ordinary membrane operator, but with an additional hole whose boundary fits the loop that we wish to move. To see that this is a movement operator, consider splitting a creation operator into two parts, an inner part, which is another creation operator, and an outer part applied on a membrane equivalent to a tube or annulus, as shown in Figure \ref{membrane_splitting}. We know that the overall membrane operator produces and moves an excitation to the boundary of the outer part, while the inner part produces and moves an excitation to the boundary of the inner part. Because the overall membrane operator is a combination of the inner membrane operator and outer membrane operator, this means that the outer membrane operator must take the excitation from the boundary of the inner part and move it to the boundary of the outer part, to match the action of the total membrane operator. \hfill For instance, consider the example shown in Figure \ref{membrane_splitting}. In this figure, the yellow membranes indicate the membrane on which the operator is applied (so that if we zoomed in we would see the structure from Figure \ref{fluxmembrane2}). The yellow spheres are the start-points for the membranes (note that they are all in the same position). 
The opaque tori indicate the excitations they create. The two membranes on the right are displaced horizontally to indicate an order in which operators are applied, rather than spatial displacement. On the left-hand side of the Figure, we have an operator that simply creates and moves an excitation to the final position (indicated by the red torus). We can split this operator into the two operators shown on the right-hand side. The rightmost operator on the right-hand side is the inner part of the original membrane operator and so is another flux creation operator. Therefore, this operator also creates an excitation and moves it to its boundary, the lower red torus. In order for this decomposition of operators to agree with the total operator on the left-hand side, we see that the middle operator, which represents the outer part of the original membrane operator, must move an excitation from the lower position (the yellow torus) to the final position (the red torus). While in this context the outer membrane operator moves an existing excitation, we can also apply it when there are no existing excitations. In this case the operator instead creates two opposite fluxes at the two ends of the operator. We therefore do not need to distinguish between movement and creation operators, because the movement operators are also creation operators, although they create multiple loop-like excitations. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.95\linewidth]{membrane_splitting.png} \put(35,10){\Huge $=$} \put(75,10){\Huge $\cdot$} \end{overpic} \caption{Given a flux creation operator (left), we can split it into two parts. One of these parts (the rightmost picture, comprised of the inner part of the original membrane) is another flux creation operator that nucleates the loop and moves it part way along the membrane. 
The second part (the outer part) takes that existing loop and moves it to the final position.} \label{membrane_splitting} \end{center} \end{figure} \hfill More generally, we can put many holes in the membrane to produce many loop excitations. Indeed, we can think of the membrane operator that we originally defined as a closed, topologically spherical membrane with a single hole in it. Then the excited loops (or single loop in the ordinary case) are at the boundaries of these holes. A topologically spherical membrane operator would produce no excitations. Indeed, as we prove in Section \ref{Section_Topological_Magnetic_Tri_Trivial} in the Supplementary Material, such a spherical membrane operator will act trivially if it is contractible and encloses no other excitations. \hfill When producing an excitation in a given location, there are many choices for the position of the membrane. This is because the excitation is produced at the boundary of the membrane and many different membranes share the same boundary. However, the membrane operator is \textit{topological} in the following sense. We can freely deform the membrane on which the operator is applied through the lattice, while keeping the positions of any excitations produced by the operator fixed, as long as we do not deform the membrane over any existing excitations. When we do this, the action of the membrane operator is preserved. That is, given an initial state $\ket{\psi}$ and a magnetic membrane operator $C^h(m)$ applied on a membrane $m$, if we can deform the membrane $m$ into a new membrane $m'$ without crossing any excitations in $\ket{\psi}$, or moving the excitations produced by $C^h(m)$, then we have $C^h(m) \ket{\psi} =C^h(m') \ket{\psi}$. This means that, like the ribbons in 2+1d, the membrane is invisible when acting on the ground state; it does not matter precisely where we put the membrane. However, when we act on a state that already has excitations, the position may matter.
Indeed this fact is vital when considering braiding and leads to the non-trivial braiding relations that we will see in Section \ref{Section_3D_Braiding_Tri_Trivial}. \hfill It is not just the magnetic membrane operators that have this property under deformation, but all of the non-confined membrane and ribbon operators, as we will prove in Section \ref{Section_topological_membrane_operators} in the Supplementary Material. We therefore call the non-confined ribbon and membrane operators topological. However, in reality this topological nature is a combined property of the ground state and the operators, because we can only freely deform the membranes over a space that does not contain any excitations. \hfill Having obtained the membrane operators, we can find their algebra, which can give us the fusion rules for the excitations (although to formally obtain the fusion rules we should organise our excitations according to their topological charge first). Just as in 2+1d, two magnetic operators applied on the same space combine by multiplication of their flux labels. The precise way in which this occurs depends on the position of the start-point of the common membrane $m$. We may have $C^g(m) C^h(m)=C^{gh}(m)$, but we could also have $C^g(m) C^h(m)=C^{hg}(m)$ if the paths from the start-point to the membrane $m$ themselves intersect with the dual membrane. This is because in this case the action of the membrane operator $C^h(m)$ affects the path labels $g(s.p(m)-v_i)$ that determine the action of $C^g(m)$ (see Equation \ref{Equation_magnetic_membrane_on_edges_main_text}), leading to the membrane operator $C^g(m)$ instead acting like $C^{hgh^{-1}}(m)$. We also note that, just as we described for the $E$-valued membrane operators in 2+1d in Ref. \cite{HuxfordPaper2} (see Section III D), we can have partial fusion of the excitations. 
In this case the two magnetic membrane operators share part of their membrane and boundary, but the membranes are not completely identical (i.e. we have some $C^g(m_2)$ instead of $C^g(m)$), which can lead to only sections of the excited strings merging. \hfill As well as fusion, there is another important relationship between the different loop excitations. We can flip the orientation of a loop excitation and ask what the resulting label should be in terms of an unflipped loop. By flipping the loop, we mean that we turn the loop over during its motion using its membrane operator. Then we determine what membrane operator would produce an equivalent flux tube by producing a loop without flipping it over during its motion. An example of the relevant membranes is shown in Figure \ref{flipping_paths}. In the case of the magnetic excitations, flipping the loop over gives a loop labelled by the inverse of the original label. This indicates that to specify a flux, the orientation of the flux tube is important. This is a feature not seen in point-particles and highlights that measuring the topological charge of a loop excitation is not as simple as it is for point excitations. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{loop_flip_2.png} \end{overpic} \caption{Two membranes that would move a loop excitation from the same initial location to the same final location. The membrane on the left flips the loop during its motion (intermediate positions of the loop are shown along the membrane), whereas the right membrane moves the loop without flipping it. For the same original loop, measuring the flux along the blue path in the left figure gives us the inverse of the flux along the blue path on the right.} \label{flipping_paths} \end{center} \end{figure} \subsection{$E$-Valued Loop Excitations} \label{Section_E_Loop_Excitations_3D} Magnetic excitations are not the only loop-like excitations that we find in this model. In Ref.
\cite{HuxfordPaper2}, we showed that in 2+1d we could have loop-like excitations that arise when our group $E$ is non-trivial, which we called $E$-valued loop excitations. These excitations persist in the 3+1d case, and the membrane operators that produce these excitations in 3+1d are very similar to the operators in 2+1d. Just as in 2+1d, the membrane operator measures the surface label $\hat{e}(m)$ of some membrane $m$ and assigns a weight depending on the value measured. It is convenient to consider a basis for this space of membrane operators where the weights are given by the irreducible representations of the group $E$. That is, we can define basis operators \begin{equation} L^{\mu}(m) = \sum_{e \in E} \mu(e) \delta(\hat{e}(m),e) \label{Equation_E_membrane_irrep_Abelian} \end{equation} where $\mu$ is an irrep of $E$ and $\mu(e)$ is the phase representing the element $e\in E$. Note that the irreps of $E$ are 1D because $E$ is Abelian when $\rhd$ is trivial. Then any of these operators that are labelled by non-trivial representations produce a loop of excited edges on the boundary of the membrane, whereas the operator labelled by the trivial irrep is the identity operator. We can fuse these excitations, with the resulting label being given by the product of the irreps under the multiplication $(\mu \cdot \nu) (e)= \mu(e) \cdot \nu(e)$ for irreps $\mu$ and $\nu$ of $E$. \hfill Unlike in 2+1d, there are many different membranes that have the same boundary and so produce a loop excitation in the same location. However, much like the magnetic membrane operators, the $E$-valued membrane operators are topological, meaning that we can deform the membrane without changing the action of the membrane operator. For the $E$-valued membrane the topological nature is relatively intuitive and derives from the fact that closed, contractible surfaces are forced to have trivial label in the ground state by the blob condition in the Hamiltonian. 
The $E$-valued membrane measures the value of some surface. Given two such surfaces with the same boundary, we can consider the difference between their labels by inverting the orientation of one surface and combining it with the other surface by gluing the surfaces along their common boundary. If the two surfaces can be deformed into one another, this gluing procedure produces a contractible closed surface that encloses no excitations. However, in the ground state such a surface must have trivial label, due to the blob energy terms. Therefore, the two original surfaces must have the same label and so the two original operators give the same result. This is shown in Figure \ref{gluing_surfaces_1}. In the leftmost diagram we have one surface, the boundary of which is our loop excitation. This surface is labelled by $e_1$. In the next diagram, we have another surface with the same boundary, labelled by $e_2$. We can invert this surface, reversing its orientation and changing the label from $e_2$ to $e_2^{-1}$ as shown in the third picture. Then we can glue these surfaces together to obtain a sphere labelled by $e_1 e_2^{-1}$. However, this resulting surface is contractible, so its label must be $1_E$ if it encloses no excitations. Therefore, $e_1 e_2^{-1}=1_E$. This indicates that the two different surfaces have the same label. From this, we see that deforming the surface, without crossing an excitation, does not affect the action of the membrane operator.
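Because the basis $E$-valued membrane operators are diagonal in the surface-label basis, their identity and fusion properties are easy to verify directly. The sketch below uses the illustrative choice $E = \mathbb{Z}_5$ (ours, not from the text), whose irreps are the characters $\chi_k(e) = e^{2\pi i k e/5}$.

```python
import numpy as np

# The basis E-valued membrane operators are diagonal in the surface-label
# basis, weighting the label e by an irrep (character) of the abelian group E.
# Illustrative choice: E = Z_5, with characters chi_k(e) = exp(2*pi*i*k*e/5).
n = 5

def L(k):
    """L^{mu_k}(m) = sum over e of chi_k(e) |e><e| on the surface-label space."""
    return np.diag(np.exp(2j * np.pi * k * np.arange(n) / n))

# The trivial irrep gives the identity operator, which creates no excitations.
assert np.allclose(L(0), np.eye(n))

# Fusing two E-valued loops multiplies their characters pointwise:
# (mu . nu)(e) = mu(e) nu(e), so for Z_5 the irrep labels simply add.
assert np.allclose(L(2) @ L(4), L((2 + 4) % n))
```

The second assertion is the fusion rule $(\mu \cdot \nu)(e) = \mu(e)\nu(e)$ stated earlier, realised as matrix multiplication of the diagonal operators.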
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{glue_surfaces_1.png} \put(10,24){$e_1$} \put(38,26){$e_2$} \put(62,8){$e_2^{-1}$} \put(80,5){$e_1 e_2^{-1}=1_E$} \end{overpic} \caption{Given two different surfaces with the same boundary, if we can deform one into another without crossing any excitations then their labels must be the same.} \label{gluing_surfaces_1} \end{center} \end{figure} \subsection{Blob Excitations} \label{Section_3D_Blob_Excitation_Tri_Trivial} In addition to the three types of excitation we have considered so far (and which we already saw in the 2+1d case discussed in Ref. \cite{HuxfordPaper2}), we have a fourth simple excitation in 3+1d, called the blob excitation (or 2-flux excitation). The blob excitations correspond to violations of the 2-flatness of blobs, also called 3-cells (i.e. to violations of the blob energy terms). As we described in Ref. \cite{HuxfordPaper1} (in Section III B), we can consider creating two blob excitations by changing a chain of plaquettes along a dual path in the lattice. The blob energy term forces the total surface label around the blob (which is a certain product of surface elements around the blob) to be $1_E$ when the blob is unexcited. Then to excite a blob the naive thing to do is to multiply a single plaquette by some group element, $e^{-1}$ for example. However, each plaquette belongs to two adjacent blobs, both of which will be excited by changing the plaquette, as is shown in Figure \ref{blob_ribbon_operator_sequential}. We can correct the 2-holonomy of one of these blobs by changing the label of another plaquette on that blob, but this excites yet another blob, as shown in Figure \ref{blob_ribbon_operator_sequential}. We can repeat this process with another plaquette, but all this does is move one of the blob excitations around. 
That is, by changing the labels of a series of plaquettes appropriately, we can produce a pair of blob excitations and move one of these excitations along a path. This is exactly the behaviour we expect of a \textit{ribbon operator}. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.98\linewidth]{blob_operator_sequential_image.png} \put(25,7){\Huge $\rightarrow$} \put(62,7){\Huge $\rightarrow$} \put(3,2){1} \put(11,1){2} \put(19,0){3} \end{overpic} \caption{(Copy of Figure 42 from Ref. \cite{HuxfordPaper1}). We consider a series of blobs in the ground state (leftmost image). In the ground state, all of the blob terms are satisfied, which we represent here by colouring the blobs blue. Changing the label of the plaquette between blobs 1 and 2 excites both adjacent blobs, as can be seen in the middle image (we represent excited blobs by colouring them orange). Multiplying another plaquette label on blob 2 to try to correct it just moves the right-hand excitation from blob 2 to blob 3 (rightmost image). In each step, the plaquettes whose labels we changed are shown in red and their orientations are indicated by an arrow.} \label{blob_ribbon_operator_sequential} \end{center} \end{figure} \hfill Having discussed the rough idea behind the blob ribbon operator, we will now be somewhat more precise about the action of the operator. Each blob ribbon operator is labelled by an element of $E$, for example $e$. We must also specify a (dual) path for the blob ribbon operator to act on. We denote the blob ribbon operator labelled by $e$ and acting on the path $t$ by $B^{e}(t)$. The path passes between the centres of blobs, much as a path on the lattice passes from one vertex to the next. Because the path travels between blobs, the path will pierce plaquettes and it is these pierced plaquettes that the operator will act on. 
The operator does so by multiplication of the plaquette label by $e^{-1}$ if the orientation of the plaquette is aligned with the direction of the ribbon and $e$ if it is anti-aligned with the ribbon, where we use the right-hand rule to convert the clockwise or anticlockwise circulation of the plaquette into a direction in order to compare it with the orientation of the ribbon. An example of the action of the blob ribbon operator is illustrated in Figure \ref{Effect_blob_ribbon_tri_trivial}. This action excites the blob in which the ribbon originates and the blob in which it terminates. If the label of the operator $e$ is in $\text{ker}(\partial)$ then these two blob terms are the only excited energy terms. However, if $e$ is not in the kernel the plaquettes pierced by the ribbon are also excited, so the particles produced are confined (there is an energy cost that increases with the length of the ribbon). The plaquettes are excited because the plaquette operator checks that the image under $\partial$ of the plaquette element matches the path around the plaquette. Therefore, multiplying the plaquette label by an element $e$ with non-trivial $\partial(e)$ (i.e. an element outside the kernel of $\partial$) will cause this plaquette condition to be violated. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.5\linewidth]{Effect_Blob_Operator_Alternate_Simple.png} \put(52,19){\large $e_1 \rightarrow ee_1$} \put(52,34){\large $e_2 \rightarrow e e_2$} \put(52,53){\large $e_3 \rightarrow e_3 e^{-1}$} \put(40,44){\large $t$} \put(-30,58){Excited blob} \put(-30,10){Excited blob} \end{overpic} \caption{In the $\rhd$ trivial case, the blob ribbon operator multiplies the labels of the plaquettes pierced by the ribbon by $e$ or $e^{-1}$, depending on the orientation of the plaquette. Here the circulation of a plaquette is shown by the curved yellow arrows, and this can be converted into a direction using the right-hand rule.
The plaquette label is multiplied by $e^{-1}$ if the orientation of the plaquette matches that of the ribbon and by $e$ if the orientation is anti-aligned with the ribbon (note that the order of multiplication does not matter when $\rhd$ is trivial, because $E$ is Abelian, but the order is chosen in this figure to match the more general case).} \label{Effect_blob_ribbon_tri_trivial} \end{center} \end{figure} \hfill Much like the other operators we have considered so far, blob ribbon operators can be combined by applying one after the other on the same path. This process leads to fusion of the simple excitations produced by the operators. The excitations fuse in a similar way to the magnetic ones: the ribbon algebra is given by $B^e(t) B^f(t)=B^{ef}(t)$. \hfill Before we move on to summarize the excitations, it is worth mentioning that the blob excitations in 3+1d replace the single-plaquette excitations from 2+1d. Recall from Ref. \cite{HuxfordPaper2} that we create the single-plaquette excitations by multiplying a plaquette label by an element of $E$. Because there are no blobs to excite in 2+1d (where the lattice is two-dimensional) this creates no excitations other than the plaquette. However, in 3+1d such an action produces blob excitations, as we saw from the action of the blob ribbon operator. \subsection{Condensation and Confinement} \label{Section_3D_Condensation_Confinement_Tri_Trivial} In Refs. \cite{HuxfordPaper1} and \cite{HuxfordPaper2}, we described a type of transition between different higher lattice gauge theory models called condensation-confinement transitions. During this transition, some particle types become confined, so that it costs energy to separate a confined particle from its antiparticle, and others become ``condensed''. A condensed excitation can be produced \textit{locally} and so carries trivial topological charge in the condensed phase. We can consider a model with no confinement, in which $\partial$ maps every element of $E$ to $1_G$.
Then changing this $\partial$ so that it maps onto a non-trivial subgroup of $G$, while keeping the groups $G$ and $E$ constant, results in certain topological charges condensing. In particular, the magnetic excitations labelled by $h \in \partial(E)$ and the $E$-valued loop excitations that are labelled by trivial irreps of the kernel become condensed. To see what we mean by this, consider the $E$-valued membrane operators, which have the form $$\sum_{e \in E} \alpha_e \delta( e, \hat{e}(m)).$$ If the membrane $m$ satisfies fake-flatness, the surface label of the membrane is related to the label of its boundary $\text{bd}(m)$ through $\partial(\hat{e}(m))\hat{g}(\text{bd}(m))=1_G$. Then, if the coefficients $\alpha_e$ are only a function of $\partial(e)$, and so are not sensitive to the kernel of $\partial$, we can write the membrane operator (when acting on a fake-flat state) as \begin{align*} \sum_{e \in E} \alpha_e& \delta( e, \hat{e}(m))\\ &=\sum_{e_k \in \ker(\partial)} \sum_{q \in E/\ker(\partial)} \alpha_q \delta( qe_k, \hat{e}(m)), \end{align*} where the $q$ are representative elements from the cosets of $\ker(\partial)$ in $E$ and $\alpha_{qe_k}=\alpha_q$ because the coefficient is not sensitive to factors in the kernel. Then \begin{align*} \sum_{e \in E} \alpha_e& \delta( e, \hat{e}(m))\\ &= \sum_{q \in E/\ker(\partial)} \alpha_q \sum_{e_k \in \ker(\partial)}\delta( qe_k, \hat{e}(m))\\ &=\sum_{q \in E/\ker(\partial)} \alpha_q \delta(\partial(q),\partial(\hat{e}(m)))\\ &=\sum_{q \in E/\ker(\partial)} \alpha_q \delta(\partial(q),\hat{g}(\text{bd}(m))^{-1})\\ &= \sum_{g \in \partial(E)} \beta_g \delta(g, \hat{g}(\text{bd}(m))), \end{align*} where $\beta_{\partial(e)^{-1}}=\alpha_e$. This is just an electric ribbon operator applied on the boundary of $m$ and so is local to the excitation produced by the membrane operator.
Rather than local in the usual sense of being restricted to a small spatial region, we mean that the operator only acts near the excitation. We see that the excitation produced by the membrane operator can be produced locally and so is condensed. That is, the $E$-valued membrane operators which are not sensitive to the kernel of $\partial$ produce condensed excitations. When we consider our irrep basis for the membrane operators, the operators labelled by irreps with trivial restriction to the kernel correspond to condensed excitations. It is slightly more complicated to prove which magnetic excitations are condensed, and so we postpone this proof until Section \ref{Section_magnetic_condensation} of the Supplementary Material. \hfill As these magnetic and $E$-valued loops condense, some of the point-like particles in the model become confined. As we showed in Section S-I A of the Supplementary Material of Ref. \cite{HuxfordPaper2} (with the proof being the same for the 2+1d and 3+1d cases), the confined electric ribbon operators $\sum_g \alpha_g \delta( \hat{g}(t),g)$ are those for which $\sum_{e \in E} \alpha_{\partial(e)g}=0$ for all $g \in G$. This means that a basis ribbon operator, given by $$S^{R,a,b}(t) = \sum_{g \in G} [D^R(g)]_{ab} \delta( \hat{g}(t),g),$$ for irrep $R$ of $G$ and matrix indices $a$ and $b$, is confined if $R$ has a non-trivial restriction to the subgroup $\partial(E)$ of $G$. So far, this pattern of condensation and confinement is analogous to the 2+1d case described in Ref. \cite{HuxfordPaper2}, but in the 3+1d case we have an extra type of excitation, the blob excitation. As we discussed in the previous subsection, the blob excitations with label outside the kernel of $\partial$ are also confined, because the corresponding ribbon operators multiply the plaquette labels of the plaquettes pierced by the ribbon by a factor which breaks the fake-flatness condition. 
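To make these confinement and condensation criteria concrete, the following minimal sketch (an illustration of our own, with a hypothetical choice of groups rather than anything fixed by the model) checks both statements numerically for $E = G = \mathbb{Z}_4$, written additively, with the crossed-module map $\partial(e) = 2e \bmod 4$, so that $\partial(E) = \ker(\partial) = \{0,2\}$:

```python
import cmath

# Minimal numerical sketch (hypothetical example, not from the model above):
# E = G = Z_4 written additively, with crossed-module map d(e) = 2e mod 4,
# so that the image and kernel of d are both {0, 2}.
n = 4
partial = lambda e: (2 * e) % n                              # the map d
irrep = lambda k, g: cmath.exp(2j * cmath.pi * k * g / n)    # irreps of Z_n

kernel = [e for e in range(n) if partial(e) == 0]

for k in range(n):
    # Confinement: the electric ribbon operator labelled by irrep R_k of G
    # is confined iff sum_{e in E} alpha_{d(e) + g} = 0 for all g, with
    # alpha_g = R_k(g); this holds iff R_k is non-trivial on the image of d.
    confined = all(
        abs(sum(irrep(k, (partial(e) + g) % n) for e in range(n))) < 1e-9
        for g in range(n)
    )
    nontrivial_on_image = any(
        abs(irrep(k, partial(e)) - 1) > 1e-9 for e in range(n)
    )
    assert confined == nontrivial_on_image

    # Condensation: the coefficients alpha_e = chi_k(e) of an E-valued
    # membrane operator depend only on d(e) iff chi_k restricts trivially
    # to the kernel of d, i.e. iff the loop excitation is condensed.
    depends_only_on_image = all(
        abs(irrep(k, e) - irrep(k, (e + ek) % n)) < 1e-9
        for e in range(n) for ek in kernel
    )
    trivial_on_kernel = all(abs(irrep(k, ek) - 1) < 1e-9 for ek in kernel)
    assert depends_only_on_image == trivial_on_kernel
```

For this hypothetical choice the ribbon operators labelled by the odd irreps of $G$ are confined, while the $E$-valued membrane operators labelled by the even irreps of $E$ are condensed, in line with the criteria above.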
These confined blob excitations are important when discussing the condensation of the magnetic excitations. The magnetic condensation is slightly different when there are three spatial dimensions compared to the case where there are only two, because the magnetic excitations in 3+1d are loops. Rather than being equivalent to strictly local (i.e. unextended) operators when acting on the ground state, the magnetic membrane operator is instead equivalent to a (confined) blob excitation operator acting on a path that runs around the boundary of the magnetic membrane. This blob ribbon operator is not local in the usual sense, given that the operator is linearly extended, but it is instead local to the excitation. This is analogous to how the condensed $E$-valued membrane operators act equivalently to (confined) electric ribbon operators applied around the boundary of the membrane. \subsection{Summary of Excitations} Given the large number of excitations that we have seen so far, it may be useful to briefly summarize them. The simple excitations and their confinement and condensation properties for the case described above are summarized in Figure \ref{excitation_summary_tri_trivial}. 
\begin{figure}[h] \begin{center} \begin{overpic}[width=\linewidth]{excitation_summary_triangle_trivial_image.png} \put(10,95){\textbf{Electric}} \put(-3,65){\parbox{4.5cm}{\raggedright \begin{itemize} \item Point-like \item Labelled by irreps of $G$ \item Internal space described\\ by matrix indices \item Confined if non-trivial\\ restriction of irrep to $\partial(E)$ \end{itemize} }} \put(48,95){\textbf{Magnetic}} \put(36,66){\parbox{4.5cm}{ \raggedright\begin{itemize} \item Loop-like \item Labelled by conjugacy\\ classes of $G$ \item Internal space within\\ conjugacy class \item Condensed if conjugacy\\ class in $\partial(E)$\\ \end{itemize}}} \put(10,48){\textbf{Blob}} \put(-3,10){\parbox{4.8cm}{\raggedright \begin{itemize} \item Point-like \item Labelled by elements of $E$ \item No internal space \item Confined if element not in ker($\partial$) \end{itemize}}} \put(45,48){\textbf{$E$-valued loop}} \put(36,10){\parbox{4.8cm}{\raggedright \begin{itemize} \item Loop-like \item Labelled by irreps of $E$ \item No internal space \item Condensed if non-trivial restriction to ker($\partial$) \end{itemize}}} \end{overpic} \caption{A summary of the excitations when $\rhd$ is trivial} \label{excitation_summary_tri_trivial} \end{center} \end{figure} \section{Braiding in the $\rhd$ Trivial Case} \label{Section_3D_Braiding_Tri_Trivial} Now that we have obtained the membrane and ribbon operators that produce the various excitations of our theory, we can use these operators to obtain the braiding relations of the excitations. We find that the non-trivial braiding is between the magnetic flux tubes and the electric charges; between the flux tubes and other flux tubes (though this is only non-trivial if $G$ is non-Abelian); and between the blob excitations and $E$-valued loops. We will describe all of these in more detail in the following sections. First, we will look at the relations involving the magnetic fluxes and electric charges. 
To describe this braiding, it is convenient to separately consider the cases where $G$ is Abelian and non-Abelian, starting with the Abelian case. \subsection{Abelian Case} \subsubsection{Flux-Charge Braiding} \label{Section_Abelian_flux_charge_braiding} The first non-trivial braiding that we consider is the braiding involving our magnetic fluxes and electric charges. Recall that the magnetic fluxes are loop-like particles, whereas the electric charges are point-like. Because of this, the appropriate braiding between these two types of particle is to pass the electric charge through the loop and back around to its original position, as shown in Figure \ref{chargethroughloop2}. It is also possible to pass the electric charge around the loop (without passing through), just as if the loop were a point particle, but this process is found to be trivial in this model. Indeed, generally we find that in the higher lattice gauge theory model, any braiding where one particle (loop-like or otherwise) is moved around another particle (but not through a loop-particle) is trivial. This is because such an operation can always be performed by membrane (and ribbon) operators which never intersect and so commute. This means that each excitation is oblivious to the presence of the other and so the motion has the same result as moving through the vacuum. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{Particle_loop_braiding.png} \put(80,45) {\large Flux tube $h$} \put(50,13) {\large Charge $R$} \end{overpic} \caption{(Copy of Figure 39 from Ref. \cite{HuxfordPaper1}.) Schematic view of braiding a charge through a loop. The red line tracks the motion of the charge.} \label{chargethroughloop2} \end{center} \end{figure} \hfill Now that we have discussed what the relevant braiding move is, we need to find how the excitations transform under such a move. The braiding relation is conveniently calculated by considering a commutation of operators as follows. 
Consider starting with a state that has no excitations and then applying a magnetic membrane operator that produces a flux tube. Then consider acting with an electric string operator to produce a pair of electric charges and move one along the path of the string, with this path passing through the loop excitation. In this case the electric excitation has undergone the braiding move we described earlier. We want to compare this situation to a similar one in which the electric excitation has not braided with the loop excitation. To do so, consider reversing the order of operators that we apply. Instead of first acting with the magnetic membrane and then with the electric string operator, we first apply the electric operator. This produces a pair of electric excitations and moves one of them along the ribbon. However, there is no magnetic excitation present at this stage, so no braiding occurs. Then we act with the magnetic membrane to produce our flux tube. In this situation, the excitations end up in the same location as when we applied the operators in the original order, but no braiding has occurred. Comparing these two situations therefore gives us the braiding relation. This means that to describe the braiding, we just need to find the relationship between the two possible orderings of the operators. That is, we need to calculate the commutation relations of the magnetic membrane operator and the electric ribbon operator. \hfill In the case where $G$ is Abelian, it is simple to calculate the commutation relation described above. Let the path of the electric ribbon operator be $t$ and consider a magnetic membrane operator $C^h(m)$ applied on a membrane which intersects with the path $t$. The path $t$ intersects with the membrane $m$ at some edge $i$ in $t$. The label of the path up to edge $i$ is not affected by the magnetic membrane, because it does not intersect with it. We denote this part of the path by $t_1$. 
The path after $i$, which we call $t_2$, is similarly unaffected. However, the label of the edge $i$ itself is multiplied by either $h$ or $h^{-1}$, depending on the relative orientation of the edge and the membrane. Then the total path $t$ is the composition of $t_1$, the edge $i$ and $t_2$, which we write as $t=t_1it_2$ (if the edge $i$ points along the path, otherwise $t=t_1i^{-1}t_2$). This means that the path label operator satisfies the following commutation relation with the magnetic membrane operator: \begin{align*} \hat{g}(t) C^h(m)&=\hat{g}(t_1)\hat{g}_i\hat{g}(t_2)C^h(m)\\ &=C^h(m) \hat{g}(t_1)h^{\pm 1}\hat{g}_i\hat{g}(t_2), \end{align*} where the inverse depends on the orientation of the membrane. Because we are looking at the case where $G$ is Abelian, we can extract the factor $h^{\pm 1}$ to the front of the path operator and combine the sections of path to obtain $$\hat{g}(t)C^{h}(m)\ket{GS}=C^{h}(m) h^{\pm 1} \hat{g}(t) \ket{GS}.$$ Now consider an electric ribbon operator, which has the form $$\sum_{g \in G} \alpha_g \delta(g, \hat{g}(t)),$$ where $\alpha_g$ is an arbitrary set of coefficients. This ribbon operator then satisfies the commutation relation \begin{align} \sum_g \alpha_g& \delta(g,\hat{g}(t))C^{h}(m)\ket{GS} \notag \\ &= C^{h}(m) \sum_g \alpha_g\delta( g,h^{\pm 1}\hat{g}(t))\ket{GS} \notag \\ &=C^{h}(m) \sum_{g'=h^{\mp 1}g}\alpha_{h^{\pm 1}g'} \delta( g',\hat{g}(t))\ket{GS} \label{Equation_braiding_electric_magnetic_Abelian_1} \end{align} with the magnetic membrane operator. This relation is simplified when we consider an electric ribbon operator whose coefficients are described by an irrep $R$ of $G$. As discussed in Section \ref{Section_Electric_Tri_Trivial}, the electric ribbon operators labelled by irreps of $G$ form a basis for the space of electric ribbon operators, and so we can decompose any electric ribbon operator into a sum of such irrep-labelled ribbon operators. 
When $G$ is Abelian, all of the irreps are 1D, and so the basis ribbon operator labelled by irrep $R$ is given by $$S^R(t)= \sum_{g \in G} R(g) \delta(g, \hat{g}(t)),$$ where $R(g)$ is the representation of element $g$ in the irrep, and is a phase because the irreps are 1D when $G$ is Abelian. Substituting this into Equation \ref{Equation_braiding_electric_magnetic_Abelian_1}, we see that the electric ribbon operator labelled by an irrep $R$ and the magnetic membrane operator satisfy the commutation relation \begin{align*} S^R(t)&C^{h}(m)\ket{GS}\\ &=\sum_g R(g) \delta(g,\hat{g}(t))C^{h}(m)\ket{GS}\\ &=C^{h}(m) \sum_{g'=h^{\mp 1}g} R(h^{\pm 1}g') \delta( g',\hat{g}(t))\ket{GS}\\ &= C^{h}(m) \sum_{g'} R(h^{\pm 1})R(g') \delta( g',\hat{g}(t))\ket{GS}\\ &= R(h^{\pm 1}) C^{h}(m) \sum_{g'} R(g') \delta( g',\hat{g}(t))\ket{GS}\\ &= R(h^{\pm 1}) C^{h}(m)S^R(t)\ket{GS}. \end{align*} This is the same as the unbraided version $C^{h}(m) S^R(t)$, except that we have gained a phase of $R(h)$ or $R(h^{-1})$. Therefore, under braiding the state obtains a simple phase $R(h)$ (or the inverse) and so the braiding is Abelian. The phase is $R(h)$ if the electric ribbon's path meets the direct membrane before the dual membrane, and the inverse otherwise, as we show in Section \ref{Section_electric_magnetic_braiding_3D_tri_trivial}. Note that this is the same result that we would expect for conventional discrete gauge theory (see e.g. Refs. \cite{Bais1980,Bucher1992} and our discussion of how this relates to higher gauge theory in Section II of Ref. \cite{HuxfordPaper1}). \subsubsection{Flux-Flux Braiding} \label{Section_Flux_Flux_Braiding_Tri_Trivial_Abelian} Just as a point particle can be braided with a loop-like excitation in two ways, so can two loops be braided in multiple ways. The allowed patterns of motion can be built from two types of movement. 
Firstly, we can move the loops around each other (in the same way as we can move two point particles around each other), which we call permutation. Secondly, we can pull a loop through another loop (or over it, which is equivalent to pulling the second loop through the first), just as we saw when braiding point particles with loops. These two moves are shown in Figure \ref{LoopMoves}. In this figure, in each case the red loop is moved, with the path of its motion being represented by the yellow membrane. The arrows indicate the direction of motion. In this model, the permutation move is trivial in that it is the same regardless of whether the green loop is present or not. This is because the motion can be performed by membrane operators acting on membranes that never intersect. Then because the membranes do not intersect, the membrane operators commute. Even if we choose to use membrane operators that do intersect, as in Figure \ref{Exchange_deform}, the membranes can be deformed so that they do not intersect, by using the topological property of membrane operators. In the example shown in Figure \ref{Exchange_deform}, a loop (shown as a small red torus in the figure) is moved along a surface (indicated by the red surface attached to the loop) that intersects twice with a green membrane. Although the red loop intersects the green membrane, the red loop does not pass through the larger green loop excitation created by this green membrane. This motion is performed by an operator placed on the red surface. The red and green membranes can be deformed so that one goes around the other, using the topological property of the membrane operators. Then because the membranes do not intersect (and indeed can be deformed so that they never come close), the corresponding membrane operators commute and so permutation is trivial.
This is also true for permutation involving non-confined point particles or the $E$-valued loops: in 3+1d the permutation can be performed by ribbon or membrane operators which do not intersect. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{loop_moves_2_image.png} \put(-1,0) {A's initial position} \put(8,13){B} \put(7,22){A} \put(65,0) {A's initial position} \put(99,11){B} \put(98,26){A} \end{overpic} \caption{Schematic of a braid move (left) and a permutation move (right). The path of motion of the red loop, or equivalently the membrane for the operator used to move it, is shown in yellow} \label{LoopMoves} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{overpic}[width=0.75\linewidth]{membrane_around_through_membrane_image.png} \put(44,12){\Huge $\rightarrow$} \put(43,20){deform} \end{overpic} \caption{Exchange of two excitations is implemented by membranes which can be freely deformed so that they do not intersect. This means that the corresponding commutation relation of operators is trivial.} \label{Exchange_deform} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{overpic}[width=0.85\linewidth]{Loop_Braiding_Order_commutation_Needs_Psi_alternate_3.png} \put(87,59){\Huge $\ket{\Psi}$} \put(87,12){\Huge $\ket{\Psi}$} \put(40,59){\Huge $\cdot$} \put(45,12){\Huge $\cdot$} \put(82,59){\Huge $\cdot$} \put(82,12){\Huge $\cdot$} \end{overpic} \caption{(Copy of Figure 47 from Ref. \cite{HuxfordPaper1}.) The commutation of operators used to calculate the braiding. The partially transparent surfaces indicate the membranes for the operators, while the opaque loops indicate the excited regions, which are the boundaries of the membranes.} \label{LoopLoopCommutation} \end{center} \end{figure} As we did with the flux-charge braiding, we can express the braiding relation between two loops in terms of the commutation relation between creation operators. 
The flux tubes are created and moved by membrane operators, so the appropriate commutation relation is between two membrane operators, as indicated in Figure \ref{LoopLoopCommutation}. In the Abelian case, the two magnetic membrane operators commute, and so the loop braiding between two magnetic fluxes is trivial. This is because in this case, the magnetic membrane operator simply multiplies each cut edge by the label of the magnetic operator. This is in contrast to the non-Abelian case, where the action of the membrane operator on each edge depends on the value of the path from the start-point to that edge. This means that, in the Abelian case, the two membrane operators only share support when their membranes cut some of the same edges, so that the two membrane operators directly change the same edge label. Even then, the action on a shared edge is the same regardless of the order in which the operators act. Consider the action of two membrane operators $C^{h_1}(m_1)$ and $C^{h_2}(m_2)$ on such a shared edge $i$ (i.e. one cut by the dual membranes of both operators). If we first act with the membrane operator labelled by $h_2$ and then by the operator labelled by $h_1$, the edge label goes from $g_i$ to $h_1 h_2 g_i$ (possibly with inverses on $h_1$ or $h_2$, depending on the relative orientation of the membranes and the edge). On the other hand, when $C^{h_1}(m_1)$ acts first on the edge, followed by $C^{h_2}(m_2)$, the total effect on the edge label is given by $g_i \rightarrow h_2 h_1 g_i$. This is the same as $h_1h_2g_i$ (because $G$ is Abelian), so the two membrane operators commute and the braiding is trivial. \subsection{Non-Abelian Case} \subsubsection{Flux-Charge Braiding} \label{Section_Flux_Charge_Braiding} In the case where $G$ is non-Abelian, the braiding relations between the magnetic fluxes and electric charges are a little more complex, although they still match our expectations from conventional gauge theory (see e.g. Refs. 
\cite{Bais1980,Bucher1992} and our discussion in Section II of Ref. \cite{HuxfordPaper1}). Recall from the Abelian case that the electric string operator fails to commute with the magnetic membrane operator because the latter operator changes the label of one (or possibly more) of the edges along the path of the electric ribbon. In the Abelian case, the action on the path element was simple. The affected edge was multiplied by a fixed element $h$ or the inverse, and this factor could be brought to the front of the product of group elements that make up the path element, so that the entire path element was also multiplied by $h$ or the inverse. In the non-Abelian case, this is no longer true. Firstly, any changes to the edge cannot simply be extracted to the front of the path element by commutation. Secondly, the action of the membrane on the individual edge that is changed is more complex, depending on the path from the start-point of the magnetic membrane to the affected edge. This means that, rather than multiply the affected edge label by a fixed element $h$, we multiply the label by an element within the conjugacy class of $h$, with this element depending on the path element from the start-point of the membrane to the edge. However, this path element depends on the state that we act on, and even in the ground state the element is not generally fixed (the ground state is made of a superposition of states with different values of this path element) and so we must leave this path element as an operator. Therefore, the braiding relation is not generally well defined. To illustrate this idea, consider performing exactly the same braiding as in the Abelian case, by passing an electric ribbon operator through a magnetic membrane operator. Again we split the path $t$ of the electric ribbon into the path $t_1$ before the intersection; the edge $i$ along which the ribbon and membrane intersect; and the path $t_2$ after the intersection. 
Of these parts, only the group element $\hat{g}_i$ assigned to the edge $i$ is affected. For now, assume that the edge $i$ points along the path $t$, so that $t=t_1 i t_2$. If this path passes through the direct membrane of the magnetic membrane operator before the dual membrane (i.e. for a particular choice of relative orientation of ribbon and membrane), we have that $$\hat{g}_iC^h(m)=C^h(m) \hat{g}(t_s)^{-1}h\hat{g}(t_s)\hat{g}_i,$$ where $t_s$ is the path from the start-point of the membrane to the crossing point and we have assumed that the ribbon is aligned so that the path reaches the direct membrane of the magnetic membrane operator before the dual membrane. Then for the entire path element, we have that \begin{align*} \hat{g}(t)C^h(m)&= \hat{g}(t_1)\hat{g}_i \hat{g}(t_2)C^h(m)\\ &=C^h(m)\hat{g}(t_1) \hat{g}(t_s)^{-1}h\hat{g}(t_s) \hat{g}_i \hat{g}(t_2). \end{align*} If we had taken edge $i$ to point against the path, we would have a similar result, because the edge element $\hat{g}_i$ would appear with an inverse in the path element, but the edge element would be right-multiplied by the inverse factor $\hat{g}(t_s)^{-1}h^{-1}\hat{g}(t_s)$ by the membrane operator, so we would obtain \begin{align*} \hat{g}(t)C^h(m)&= \hat{g}(t_1)\hat{g}_i^{-1} \hat{g}(t_2)C^h(m)\\ &=C^h(m)\hat{g}(t_1) (\hat{g}_i\hat{g}(t_s)^{-1}h^{-1}\hat{g}(t_s))^{-1} \hat{g}(t_2)\\ &=C^h(m)\hat{g}(t_1)\hat{g}(t_s)^{-1}h\hat{g}(t_s) \hat{g}_i^{-1} \hat{g}(t_2). \end{align*} We can combine these cases by introducing $\hat{g}_i^{\sigma_i}$, where $\sigma_i$ is 1 if the edge and path align and $-1$ otherwise. Then we have \begin{align*} \hat{g}(t)C^h(m)&= \hat{g}(t_1)\hat{g}_i^{\sigma_i} \hat{g}(t_2)C^h(m) \\ &=C^h(m)\hat{g}(t_1) \hat{g}(t_s)^{-1}h\hat{g}(t_s) \hat{g}_i^{\sigma_i} \hat{g}(t_2).
\end{align*} By inserting the identity in the form $\hat{g}(t_1)^{-1} \hat{g}(t_1)$, we can write the commutation relation as \begin{align} \hat{g}(t)&C^h(m) \notag\\ &=C^h(m)\hat{g}(t_1) \hat{g}(t_s)^{-1}h\hat{g}(t_s)\hat{g}(t_1)^{-1} \hat{g}(t_1) \hat{g}_i^{\sigma_i} \hat{g}(t_2) \notag \\ &=C^h(m)(\hat{g}(t_s)\hat{g}(t_1)^{-1})^{-1} h(\hat{g}(t_s)\hat{g}(t_1)^{-1}) \hat{g}(t). \end{align} This is similar to the commutation relation from the Abelian case, except that the path element gains a factor of $$(\hat{g}(t_s)\hat{g}(t_1)^{-1})^{-1} h(\hat{g}(t_s)\hat{g}(t_1)^{-1}),$$ instead of simply $h$. This factor is an operator and has no definite value in general, so the effect on the electric excitation depends on which configuration within the ground state we consider and we cannot extract a definite braiding relation. However, there is one special case where we can obtain a definite braiding relation. When the electric string starts at the start-point of the magnetic membrane, then the path sections $t_s$ and $t_1$ start and end at the same points as each other. Provided that these path sections can be deformed into one another without crossing over any excitations, the fake-flatness condition imposed by the plaquette energy terms ensures that $\hat{g}(t_s)=\hat{g}(t_1)$ up to a potential factor of $\partial(e)$ for some $e \in E$. Such factors of $\partial(e)$ do not affect $$(\hat{g}(t_s)\hat{g}(t_1)^{-1})^{-1} h(\hat{g}(t_s)\hat{g}(t_1)^{-1})$$ because $\partial(e)$ is in the centre of $G$, so the factor of $\partial(e)$ and $\partial(e)^{-1}$ from $\hat{g}(t_s)$ and $\hat{g}(t_s)^{-1}$ cancel. Therefore, \begin{align} (\hat{g}(t_s)\hat{g}(t_1)^{-1})^{-1} h(\hat{g}(t_s)\hat{g}(t_1)^{-1}) \hat{g}(t)&= \partial(e)^{-1}h \partial(e)\hat{g}(t) \notag\\ &=h\hat{g}(t). \label{Equation_path_magnetic_commutation_same_start_point} \end{align} This relation then gives us a simple braiding relation in the ``same-site'' (or same start-point) case.
Note that we gave this braiding relation for a particular choice of relative orientation for the ribbon and membrane operator, and if we reversed this relative orientation then the element $\hat{g}(t)$ would instead be multiplied by $h^{-1}$ (as we show in Section \ref{Section_electric_magnetic_braiding_3D_tri_trivial} in the Supplementary Material). This braiding relation is a simple extension of the Abelian case. Having said that, the non-Abelian nature of the group does still have some relevance when we use our irrep basis for the electric operators. Recall that the electric excitations are labelled by irreps of the group $G$, combined with matrix indices for the irrep. When $G$ is Abelian, these representations are 1D and we need not worry about matrix indices. However, when $G$ is non-Abelian some of these irreps are not 1D. We can look at the effect of braiding on an electric ribbon labelled by a representation $R$ and its indices, for which we find that: \begin{align} &\sum_{g \in G} [D^{R}(g)]_{ab} \delta (g, \hat{g}(t)) C^h(m) \ket{GS} \notag \\ & \ = \sum_{g \in G} C^h(m) [D^{R}(g)]_{ab} \delta (g, h\hat{g}(t))\ket{GS} \notag \\ & \ =\sum_{g \in G} C^h(m) [D^{R}(g)]_{ab} \delta (h^{-1}g, \hat{g}(t))\ket{GS} \notag \\ & \ =\sum_{g'=h^{-1}g} C^h(m) [D^{R}(hg')]_{ab} \delta (g', \hat{g}(t))\ket{GS} \notag \\ & \ = C^h(m) \sum_{c=1}^{|R|} [D^{R}(h)]_{ac} \sum_{g' \in G} [D^{R}(g')]_{cb} \delta (g', \hat{g}(t)) \ket{GS}. \label{Equation_electric_magnetic_irrep_braiding} \end{align} We see from Equation \ref{Equation_electric_magnetic_irrep_braiding} that the braiding mixes electric ribbon operators labelled by different matrix indices but the same representation. The fact that the representation is left invariant suggests that the representations label the purely electric topological sectors, given that braiding cannot mix different sectors.
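The index mixing in Equation \ref{Equation_electric_magnetic_irrep_braiding} rests on the homomorphism property $[D^{R}(hg')]_{ab}=\sum_{c}[D^{R}(h)]_{ac}[D^{R}(g')]_{cb}$. As a hedged numerical illustration (the encoding of the group and the explicit matrices are our own choice, not part of the model), the following sketch verifies this property for the 2D irrep of $S_3$, realised as $\mathbb{Z}_3 \rtimes \mathbb{Z}_2$:

```python
import numpy as np

# Hedged illustration (our own encoding, not the paper's): verify the
# homomorphism property D(h g') = D(h) D(g') underlying the index mixing,
# for the 2D irrep of S_3 realised as the semidirect product Z_3 x| Z_2.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R_rot = np.array([[c, -s], [s, c]])          # order-3 rotation
R_ref = np.array([[1.0, 0.0], [0.0, -1.0]])  # order-2 reflection

def mul(x, y):
    """Group product: (a1, b1)(a2, b2) = (a1 + (-1)^{b1} a2 mod 3, b1 + b2 mod 2)."""
    (a1, b1), (a2, b2) = x, y
    return ((a1 + (-1) ** b1 * a2) % 3, (b1 + b2) % 2)

def D(x):
    """The 2D irrep: D(a, b) = R_rot^a R_ref^b."""
    a, b = x
    return np.linalg.matrix_power(R_rot, a) @ np.linalg.matrix_power(R_ref, b)

group = [(a, b) for a in range(3) for b in range(2)]
for h in group:
    for g in group:
        # Entrywise, this is D(hg)_{ab} = sum_c D(h)_{ac} D(g)_{cb}.
        assert np.allclose(D(mul(h, g)), D(h) @ D(g))
```

Under braiding with a flux labelled $h$, the basis operator $S^{R,a,b}$ is therefore replaced by $\sum_c [D^{R}(h)]_{ac} S^{R,c,b}$: a rotation within the block of matrix indices for a fixed irrep, never between different irreps.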
\hfill The importance of the start-points of the membrane and ribbon operators when it comes to braiding can be interpreted in terms of gauge theory. Just as we discussed for the 2+1d case in Ref. \cite{HuxfordPaper2}, the start-point of the magnetic membrane can be seen as a unique point in that the flux tube produced by the magnetic membrane operator has a definite flux label with respect to this point even within the conjugacy class. When we give a flux tube a flux label, we must specify the path with respect to which we measure this flux. The path must link with the flux tube, but smoothly deforming the path should not change the flux label measured (where by smoothly deform, we mean pulling through the space represented by the lattice to another position on the lattice). An exception to this is the start-point of the path. Moving this start-point can change the flux label that we would assign to the flux tube (see e.g. Ref. \cite{Alford1992}). Suppose that the flux label measured with respect to a particular start-point is $h$. Then if we measure the flux label of the flux tube starting from a different point, the result is related to $h$ by conjugation by a path element for a path between the two start-points. In our model this path element is not usually well-defined, because the energy eigenstates are usually linear combinations of states with different group elements assigned to the path. We say that the path element is generally operator valued. Therefore, the flux tube does not generally have definite flux with respect to points other than the start-point (with an exception if the flux label is in the centre of $G$). This non-definite flux is reflected in the flux-flux braiding. This idea is explained more clearly in the context of field theory in Ref. \cite{Alford1992}. \hfill A second interpretation of this start-point dependence comes from anyon theory. We only have definite fusion in the case where the membrane operators share a start-point.
Then, as in the 2+1d case, we may expect that we only have definite braiding when the fusion channel is definite: that is, when the operators involved have a common start-point. \subsubsection{Flux-Flux Braiding} \label{Section_Flux_Flux_Braiding_Tri_Trivial} As with the flux-charge braiding, the general result of braiding two magnetic fluxes is more complicated when $G$ is non-Abelian. However, as with the flux-charge braiding, there is a special case with simple braiding relations, when the two magnetic membrane operators have the same start-point. This makes sense if we think of the start-point as the point of definite flux, because we only expect a definite braiding result if the two flux tubes have definite flux when measured with respect to the same point. \hfill We consider the case where one flux tube (which we will call the inner loop) is passed through another loop (which we call the outer loop). The inner loop takes the role of the red loop in the left diagram of Figure \ref{LoopMoves}. Then, in the same start-point case, the label of the inner loop is conjugated by the label of the outer loop, while the label of the outer loop is unchanged. That is, if the label of the inner membrane operator is $k$ and the outer membrane operator is labelled by $h$, under braiding the label of the excitation from the inner membrane becomes $h^{-1}kh$ and the label of the outer membrane is unchanged. The simplest way to obtain this braiding relation is to use the topological nature of the operators. We can freely deform the membranes, as long as they do not cross an excitation and no excitations are moved in doing so. We can therefore pull the membrane that produces the inner loop fully through the other membrane (as there are no excitations within the membrane itself). However, when we do this we must keep the start-point fixed (because it may be excited), so the start-point is not moved through the outer membrane.
However, recall that the action of the magnetic membrane operator depends on a set of paths from the start-point to the edges being changed by the membrane operator. As we deform the membrane, the start of these paths (the start-point) is fixed, while the ends are pulled through the outer membrane (if these ends were not already on the other side of the membrane). Therefore, all of these paths intersect the outer membrane. This means that the group elements of these paths will be changed by the action of the outer membrane operator, which will in turn affect the action of the inner membrane operator. To see how the inner membrane operator changes under the commutation relation we need to know how the labels of these paths are affected. \hfill As an example, consider Figure \ref{loop_loop_braiding_deform}. In the left side of the figure we have the membranes in their original position, with the red membrane nucleating a loop at the common start-point and moving the red loop through the green one. To calculate the commutation relation, we deform the red membrane so that it is entirely pulled through the green membrane, as in the right side of Figure \ref{loop_loop_braiding_deform}. However, when we deform a membrane we must keep any excitations fixed, including the potentially excited start-point. Therefore, the start-point is fixed. This means that the paths from the start-point to the red membrane, like the example path to the red membrane in the right-hand side of the figure, pass through the green membrane and so can be affected by the green membrane operator. This is significant because these paths determine the action of the membrane operator, as explained in Section \ref{Section_3D_Tri_Trivial_Magnetic_Excitations}. 
An edge $i$ cut by the dual membrane of the magnetic membrane operator $C^k(m_1)$ has its edge label $g_i$ multiplied by a factor $g(s.p(m_1)-v_i)^{-1}kg(s.p(m_1)-v_i)$ (or the inverse), where $v_i$ is the vertex on the direct membrane that is attached to $i$. These factors have the form $g(t)^{-1}kg(t)$, where each path $t$ now intersects with the green membrane operator $C^h(m_2)$. We must therefore find the commutation relation of such a path label operator $g(t)$ with the membrane operator. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.98\linewidth]{loop_loop_braid_deform_alternate_start_point_arrows_image.png} \put(25,42){$C^{k}(m_1)$} \put(0,32){$C^h(m_2)$} \put(18,0.5){\parbox{2cm}{common\\ start-point}} \put(3,8){\parbox{2cm}{example\\ paths}} \put(45,22){\Huge $\rightarrow$} \put(44,28){deform} \end{overpic} \caption{To calculate the braiding of two loops, we consider the situation shown on the left where we first apply a magnetic membrane operator $C^h(m_2)$, then apply a membrane operator $C^k(m_1)$ which intersects with the first membrane and pushes a loop excitation through that first membrane. We choose the start-points of these two membranes to be the same. Example paths from the common start-point to the two membranes are shown as yellow cylinders. We can use the topological property of the membrane operators to deform the inner (red) membrane, $m_1$, and pull it through the green membrane $m_2$ to obtain the image on the right-hand side. However, when we do so we must leave the start-point fixed. Therefore, the paths from the start-point to each point on the membrane $m_1$, such as the example path shown here, must pass through $m_2$. This leads to a non-trivial braiding relation in general.} \label{loop_loop_braiding_deform} \end{center} \end{figure} \hfill We already saw how a magnetic membrane operator affects the label of paths that pierce the membrane when we looked at the flux-charge braiding.
From the charge braiding calculation we know that the label of a path $t$ that starts at the common start-point and intersects the outer membrane changes from $g(t)$ to $hg(t)$, where $h$ is the label of the outer membrane operator being intersected (see Equation \ref{Equation_path_magnetic_commutation_same_start_point}). As discussed previously, this path element $g(t)$ appears in the action of the other (inner) membrane operator. If the inner membrane has label $k$, the membrane operator acts on an edge cut by the membrane by multiplying the edge label by $g(t)^{-1}kg(t)$, where $t$ is the path from the start-point to that edge. Under commutation with the outer membrane, when $g(t)$ changes to $hg(t)$, this action becomes multiplication by $g(t)^{-1} h^{-1}k h g(t)$, which is equivalent to the action of an unbraided membrane of label $h^{-1}k h$. Therefore, we see that the label of one of the flux tubes is conjugated by braiding. As we expect, the conjugacy class of the flux is invariant under braiding, but the flux element within the conjugacy class can be changed. \hfill It is worth noting that if we had not deformed one membrane to pull it entirely through the other, some of the paths from the start-point would not pierce the other membrane and so would be unaffected by the commutation relation. This means that the action of the membrane in the region from the start-point to the intersection of the membrane is unaltered. This reflects the fact that a membrane operator moves the loop excitation associated to it. Before the intersection, the membrane operator is moving the excitation before it has braided, so its label is the original label of the loop. After the intersection, braiding has occurred and so the label of the membrane (and the excitation) has changed. 
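\hfill To make this conjugation concrete, take $G = S_3$ as an example and compose permutations from right to left. If the inner flux is $k = (1\,2)$ and the outer flux is $h = (1\,2\,3)$, then braiding the inner loop through the outer one sends \begin{equation*} k = (1\,2) \ \rightarrow \ h^{-1} k h = (1\,3\,2)\,(1\,2)\,(1\,2\,3) = (1\,3), \end{equation*} a different transposition in the same conjugacy class, consistent with the conjugacy class of the flux being invariant under braiding.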
The precise point at which braiding has occurred is somewhat arbitrary in an anyon theory (although whether braiding has occurred is unambiguous if the excitations start and end in the same position). Similarly, the choice of location for the membranes is somewhat arbitrary when the membrane operators act on the ground state, because we can deform the membranes without affecting their action and so can change the location where the membrane operators intersect. This reflects the freedom in considering at which point during the motion the braiding transformation is applied (although if the excitations start and end in the same position, then the membrane operators will definitely intersect if braiding occurs, regardless of how we deform the membranes). \hfill Having obtained the braiding relation, it is useful to consider how braiding affects a linear combination of magnetic membrane operators with label within a certain conjugacy class, such as $\sum_h \alpha_h C^h(m)$. If the magnetic membrane operator is an equal superposition of operators labelled by each element of a conjugacy class, then the conjugation of the labels by the braiding only permutes the labels within the conjugacy class, which has no effect when the coefficient for each element is the same. Therefore, the overall membrane operator transforms trivially under braiding. For a general superposition, the conjugation (and so permutation of the labels) does affect the operator and so the braiding is non-trivial. These conditions match the conditions for the start-point of the membrane operator to be unexcited or excited. A magnetic operator with an unexcited start-point is an equal superposition of magnetic operators with labels within a conjugacy class, and so is unaffected by braiding through other magnetic excitations. On the other hand, a magnetic operator with an excited start-point will transform non-trivially when it braids through some other magnetic excitations.
This is because, if the start-point is excited, the magnetic excitation carries a point-like charge, which enables it to braid non-trivially with other magnetic excitations when passed through them. Note that the same condition does not hold for the outer membrane operator, which can affect the inner membrane operator even if it does not have an excited start-point. This is because we can shrink the inner loop to a point before braiding it without affecting the braiding relation, whereas the loop-like character of the outer loop is essential for the braiding. \hfill It is important to note that the precise form of the braiding relation depends on the orientation of the loops involved. Flipping the orientation of a magnetic excitation is equivalent to changing its label from $h$ to $h^{-1}$. Therefore, if we were to flip the orientation of the outer membrane from our earlier calculation, then the label of the inner membrane would change from $k$ to $hkh^{-1}$ under braiding rather than changing to $h^{-1}kh$. Flipping the orientation of the inner one does not change the expression, because the transformation is the same when we invert both sides: $k^{-1} \rightarrow h^{-1}k^{-1}h \implies k \rightarrow h^{-1}kh$. Therefore, if the orientation of both loops is flipped, the braiding transformation is \begin{equation} k \rightarrow hkh^{-1}. \label{braid_relation_magnetic_flipped} \end{equation} \subsubsection{Linking} \label{Section_linking} In addition to the non-trivial loop-loop braiding, there is another feature of loop excitations not present for point excitations. Two loop excitations may be \textit{linked}. In this case, depending on the labels of the two excitations, there may be an energetically costly ``linking string'' that joins the two loops. This situation is indicated in Figure \ref{linking}.
As we show in Section \ref{Section_linking_appendix} in the Supplementary Material, this linking string is present between two linked magnetic excitations when their labels do not commute. If the two loops are labelled by $g$ and $h$ and their membranes have the same start-point, then the two loops are linked by a string with a label similar to $ghg^{-1}h^{-1}$. The exact label depends on the relative orientations of the two loops and which path we choose to use to define the flux of the linking string (so $g$ or $h$ could appear with an inverse in the label of the linking string, or the label could be conjugated by $g$ or $h$). \begin{figure}[h] \begin{center} \begin{overpic}[width=0.7\linewidth]{Linking_cropped.png} \end{overpic} \caption{Two linked loops may have an energetically costly linking string between them (indicated by the short yellow string).} \label{linking} \end{center} \end{figure} \hfill This linking string indicates that there is an obstruction to pushing the two loops through one another: attempting to do so results in an energetically costly linking string. One way of viewing this is that the two strings are unable to pass through each other. Therefore, instead of the loops being pushed through each other, one is deformed to accommodate the other, as shown in Figure \ref{linking_process}. In Figure \ref{linking_process}, part of one of the loops envelops the other one and folds back on itself, as seen in the bottom-right of the figure (this becomes the pink string in the lower-left of the figure). We can consider this part of the two loops as the linking string. It is possible to work out the flux label of the linking string from this picture by writing a path that links with the thin section as a combination of the paths defining our two original fluxes (recall from Section \ref{Section_Flux_Charge_Braiding} that a flux tube is defined along with a path linking with that flux tube, which measures the flux value).
A more complete argument for this (in terms of generic fluxes, rather than in terms of this specific model) is given in Ref. \cite{Alford1992}. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{linking_process_c_image.png} \put(48,43){\Huge $\rightarrow$} \put(70,29){\Huge $\downarrow$} \put(45,16){\Huge $\leftarrow$} \end{overpic} \caption{Given two loops (blue and red here) that cannot pass through each other, we can push them together. To do this we must deform the boundary of one of the loops (second figure). The deformation then encloses the other loop (third figure). We can consider this deformed section of the loop as a new object, which is the linking string (coloured pink in the fourth figure), whose flux label will depend on the labels of the two linked loops.} \label{linking_process} \end{center} \end{figure} The presence of the linking string indicates that further relative motion of the two loops is non-trivial, because such motion will move the string (the position of which can be detected through the energy terms). For example, we can consider rotating one of the loops by a full rotation. This would be a trivial motion if the loops were unlinked. However, when the loops are linked the linking string will follow this rotation, as shown in Figure \ref{linkrotation}, indicating that the motion is non-trivial. In order to implement this rotation, we make the direct membrane paths (the paths that we defined when constructing the membrane operator) spiral outwards, wrapping around the linking (blue) loop. If we wanted to write the label of these paths in terms of the label of an unwrapped path, we would gain a factor that accounts for the flux of the blue loop. This in turn affects the action of the membrane operator, changing the effective label of parts of the membrane operator by conjugation by the flux of the blue loop.
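\hfill To make the linking-string label concrete, take $G = S_3$ as an example. Two linked fluxes $g = (1\,2)$ and $h = (1\,3)$ have the non-trivial commutator \begin{equation*} g h g^{-1} h^{-1} = (1\,2)\,(1\,3)\,(1\,2)\,(1\,3) = (1\,2\,3) \end{equation*} (composing from right to left), so the two loops are joined by an energetically costly linking string. If instead the two fluxes commute, for example $g = h = (1\,2\,3)$, the commutator is trivial and there is no linking string.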
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.7\linewidth]{Link_rotation_cropped.png} \end{overpic} \caption{Rotating a linked loop drags the linking string and conjugates the label of part of the membrane of the rotating loop.} \label{linkrotation} \end{center} \end{figure} \subsubsection{Three-Loop Braiding} It has become clear \cite{Wang2014, Jiang2014} that the simple case we have described so far, where two loops pass through each other (two-loop braiding, shown in the left side of Figure \ref{LoopMoves}), does not fully describe the general topological properties of loops. A more general example of braiding is where two loops pass through each other while both loops are linked to a third loop (see Figure \ref{threeloopbraiding}). This is known as three-loop \cite{Wang2014} or necklace braiding \cite{Bullivant2020a}. \hfill In this model, however, the result of three-loop braiding is similar to that of the ordinary braiding. The only difference is that the two loops may also drag linking strings with the third loop. This is shown in Figure \ref{threeloopbraiding}. The transformation of the loop labels is otherwise the same as in the ordinary case. One exception is that the linking string of two magnetic excitations can cancel the confining string of a confined blob excitation (provided that the labels of the linking string and blob excitation agree, as described in Section \ref{Section_linking_appendix} in the Supplementary Material), which enables those blob excitations to move and braid freely while attached to a linked flux, which they would not normally be able to do. \begin{figure}[h] \begin{center} \begin{overpic}[width=\linewidth]{3-loop-braid-alternate_arrows_2_image.png} \put(63,75){C} \put(91,68){A} \put(96,46){B} \put(40,3){A's initial position} \put(74,13){linking strings} \end{overpic} \caption{Three-loop braiding.
The open strings (orange, purple and cyan) are possible linking strings.} \label{threeloopbraiding} \end{center} \end{figure} \subsection{Loop-Blob Braiding} \label{Section_Loop_Blob_Braiding_Tri_Trivial} So far we have considered the excitations that are described by the group $G$. The final non-trivial braiding is between the two types of excitation that are associated to the group $E$, the $E$-valued loops and blob excitations. In this case it is easy to find the braiding relations by looking at the effect of the blob ribbon operator on the surface measured by the operator that produces the loop. We consider a situation where the blob excitation passes through the loop excitation. To implement this situation on our lattice, we apply both an $E$-valued membrane operator and a blob ribbon operator whose path intersects with that membrane, as shown in Figure \ref{blobloop}. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{Blob_Loop_Braiding_Tri_Trivial.png} \end{overpic} \caption{Schematic of blob-loop braiding. The blue cubes represent the blob excitations at the ends of the ribbon operator (whose ribbon is represented by the translucent blue cuboid and red arrow). The ribbon operator moves one of the blob excitations through the green loop-like excitation produced by an $E$-valued membrane operator on the translucent green membrane.} \label{blobloop} \end{center} \end{figure} \hfill In order to compute the braiding relation, we compare the situation where we first create the loop and then push the blob through, thus performing the braiding move, with the one where we push the blob through empty space before producing the loop. The relevant commutation relation is shown in Figure \ref{blobloopdetail}.
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{Blob_Loop_With_Direction.png} \put(32,46){\Huge $\cdot$} \put(40,10){\Huge $\cdot$} \put(80,46){\Huge $\ket{\Psi}$} \put(80,10){\Huge $\ket{\Psi}$} \end{overpic} \caption{In order to determine the blob-loop braiding relations, we compare the situation shown in the top line, where we first apply an $E$-valued membrane operator and then a blob ribbon operator that intersects with that membrane, to the situation shown in the bottom line, where we apply the operators in the opposite order. Here $\ket{\Psi}$ is any state with no other excitations near the support of the two operators (e.g. a ground state).} \label{blobloopdetail} \end{center} \end{figure} \hfill As we saw in Section \ref{Section_3D_Blob_Excitation_Tri_Trivial}, the blob ribbon operator with label $e$ multiplies the labels of the plaquettes pierced by its path by $e$ or $e^{-1}$, depending on the relative orientation of the plaquette and ribbon. This action is not sensitive to the presence of the $E$-valued loop excitation and so the blob ribbon operator is unaffected by the commutation. On the other hand, the membrane operator for the loop excitation is affected by the action of the blob ribbon operator. Recall that the $E$-valued membrane operator measures the surface element of the membrane on which it is applied. This surface element, $\hat{e}(m)$, is a product of the elements of individual plaquettes: $\hat{e}(m)=\prod_{\text{plaquettes in } m} \hat{e}_{\text{plaquette}}^{\pm 1}$, where the $\pm$ accounts for the relative orientation of the surface and plaquette. We defined the blob ribbon operator to intersect the membrane, and so it will affect the label of one of the plaquettes in this product. If the blob ribbon operator pierces the membrane $m$ through a plaquette $q$, then the label $e_q$ of that plaquette is multiplied by $e$ or $e^{-1}$, depending on the relative orientation of the plaquette and the ribbon.
This in turn means that the contribution $e_q^{\pm 1}$ of the plaquette to the surface $m$ will be multiplied by $e$ or $e^{-1}$, depending on the relative orientation of the membrane and the ribbon. The orientation of the membrane matters rather than that of the plaquette, because the $\pm 1$ in the expression for the surface label accounts for the relative orientation of plaquette and membrane (if the plaquette is anti-aligned with the membrane, the inverse in $e_q^{- 1}$ converts a factor of $e$ from the ribbon operator into $e^{-1}$ if the orientation of the ribbon opposes the plaquette but matches the membrane, or vice-versa if the orientation of the ribbon matches the plaquette but opposes the membrane). If the orientation of the membrane matches the orientation of the blob ribbon operator, $e_q^{\pm 1}$ will be multiplied by $e^{-1}$. This indicates that $\hat{e}(m)B^e(t) =B^e(t) e^{- 1} \hat{e}(m)$ in this case. Then, considering the basis operator for our space of $E$-valued membrane operators labelled by an irrep $\gamma$ of $E$, the commutation relation with the blob ribbon operator $B^e(t)$ is given by \begin{align*} B^e(t) \sum_{e' \in E} &\gamma({e'}) \delta(e',\hat{e}(m)) \ket{GS}\\ &= \sum_{e' \in E} \gamma({e'}) \delta(e',e\hat{e}(m)) B^e(t) \ket{GS}\\ &=\sum_{e' \in E} \gamma({e'}) \delta(e^{-1} e',\hat{e}(m)) B^e(t) \ket{GS}\\ &=\sum_{e''=e^{-1}e'} \gamma(ee'') \delta(e'',\hat{e}(m)) B^e(t) \ket{GS}\\ &=\gamma(e) \sum_{e'' \in E} \gamma(e'') \delta(e'',\hat{e}(m)) B^e(t) \ket{GS}, \end{align*} where we used the fact that $E$ is Abelian to take $\gamma$ as a 1D irrep and separate $\gamma(ee'')$ into $\gamma(e)$ and $\gamma(e'')$. Having the $E$-valued membrane operator on the left of the product (and the blob ribbon operator on the right) corresponds to the unbraided case (because in this case the blob excitation moved before the loop excitation is present), so the braiding of our two excitations results in accumulating a phase of $\gamma(e)$. 
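\hfill A simple example of this phase is given by taking $E = \mathbb{Z}_n$ (written additively, and chosen purely for illustration). The irreps are then $\gamma_p(e') = e^{2 \pi i p e' / n}$ for $p \in \{0, 1, \ldots, n-1\}$, so pushing a blob excitation with label $e$ through an $E$-valued loop labelled by the irrep $\gamma_p$ produces the phase \begin{equation*} \gamma_p(e) = e^{2 \pi i p e / n}, \end{equation*} which reproduces the familiar mutual statistics of a charge $p$ and a flux $e$ in ordinary $\mathbb{Z}_n$ gauge theory, with the roles here played by the $E$-valued loop and the blob excitation.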
A similar argument holds in the case where the membrane operator and blob ribbon operator are anti-aligned, except that we should replace $e$ by its inverse. \hfill It is worth noting that the blob excitations with label not in the kernel of $\partial$ (that is, the confined blob excitations, as we saw in Section \ref{Section_3D_Blob_Excitation_Tri_Trivial}) braid non-trivially with the condensed $E$-valued loop excitations (those with trivial representation of the kernel), while those with label in the kernel braid trivially with them. This is because the condensed $E$-valued loop excitations have trivial representation of the kernel: $\gamma(e_K)=1$ for $e_K$ in the kernel. Therefore, the phase gained is 1 when a condensed loop braids with an unconfined blob (which carries a label in the kernel). This matches our expectation that only the confined excitations can braid non-trivially with the condensed excitations. \subsection{Summary of Braiding When $\rhd$ Is Trivial} For convenience, we summarize the excitations that braid non-trivially with each other in Table \ref{Table_Braiding_Tri_Trivial}. We can see that the excitations split into two sets. The electric and magnetic excitations (the excitations corresponding to the group $G$) braid non-trivially with each other, and the blob and $E$-valued loop excitations (the excitations corresponding to $E$) braid non-trivially with each other, but there is no non-trivial braiding between the two sets.
\begin{table}[h] \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline Non-Trivial& &Magnetic & & $E$-valued \\ Braiding?& Electric & flux & Blob & loop \\ \hline Electric & \ding{55} & \ding{51} & \ding{55} & \ding{55}\\ \hline Magnetic & && &\\ flux & \ding{51} & \ding{51} & \ding{55} & \ding{55} \\ \hline Blob& \ding{55} & \ding{55} & \ding{55} & \ding{51}\\ \hline $E$-valued & & & & \\ loop & \ding{55} & \ding{55} & \ding{51} & \ding{55}\\ \hline \end{tabular} \caption{A summary of which excitations braid non-trivially in Case 1, where $\rhd$ is trivial. A tick indicates that at least some of the excitations of each type braid non-trivially with each-other, while a cross indicates that there is no non-trivial braiding between the two types. Notice that the table has a block-diagonal structure, with non-trivial braiding only in the blocks.} \label{Table_Braiding_Tri_Trivial} \end{center} \end{table} \section{Ribbon and Membrane Operators in the Fake-Flat Case} Now that we have considered the first of our special cases, where $\rhd$ is trivial, we move on to another of our special cases (Case 3). We consider the case where our groups $G$ and $E$, as well as our maps $\rhd$ and $\partial$, are completely general, but we restrict our Hilbert space to only allow fake-flat configurations. Many of the features of the excitations are common between the two cases, so we will examine the differences between them rather than repeating our previous discussion entirely. \label{Section_3D_MO_Fake_Flat} \subsection{Electric Excitations} The electric excitations are unchanged by taking $\rhd$ non-trivial. Just as in the $\rhd$ trivial case, we measure the value of a path and assign a weight according to the value of the path. This creates two point-like excitations at the ends of the path. The operators are best labelled by irreps of the group $G$, with non-trivial irreps giving the excitations and the trivial irrep giving the identity operator. 
The excitations labelled by irreps with a non-trivial restriction to the image of $\partial$ are confined. \subsection{$E$-Valued Loop Excitations} \label{Section_3D_Loop_Tri_Non_Trivial} Next we consider the $E$-valued loop excitations, which are produced by membrane operators that measure the surface label of a membrane: $$L^{\vec{\alpha}}(m) = \sum_{e \in E} \alpha_e \delta(\hat{e}(m),e).$$ Here $\hat{e}(m)$ is the surface label of the membrane $m$ and the $\alpha_e$ are a general set of coefficients. This operator has the same form as the corresponding operator for the $\rhd$ trivial case. However, there is a slight difference when we consider our irrep basis for the space of $E$-valued membrane operators (see Equation \ref{Equation_E_membrane_irrep_Abelian} for the $\rhd$ trivial case). Because our group $E$ may now be non-Abelian, the irreps are generally not 1D. This means that our basis for the $E$-valued membrane operators must include the matrix indices for those irreps, so that our general basis operator takes the form \begin{equation} L^{\mu,a,b}(m)= \sum_{e \in E} [D^{\mu}(e)]_{ab} \delta(\hat{e}(m),e), \label{Equation_E_membrane_irrep_basis_non_Abelian} \end{equation} where $\mu$ is an irrep of $E$, and $a$ and $b$ are the matrix indices. \hfill In addition to this slight difference in the presentation of the basis operators, we also have some physical differences compared to the $\rhd$ trivial case. When $\rhd$ is non-trivial, whenever we measure a surface we must specify a base-point with respect to which we measure that surface label \cite{Bullivant2017}. Because the membrane operator for the $E$-valued loop excitations involves measuring a surface, we must specify a base-point for the measurement. We call that base-point the start-point of our membrane operator. Similarly to the start-point of the magnetic membrane operator, this start-point may be excited by the action of the $E$-valued membrane operator. 
Recall from Section \ref{Section_Recap_3d} that a vertex transform at the base-point of a surface affects the value of that surface, with $A_v^g$ taking the surface label from $e$ to $g \rhd e$. This means that the vertex term (which is made of a sum of vertex transforms) at the start-point of our membrane operator may not commute with our membrane operator, which may result in the vertex being excited (while the other vertex terms are still left unexcited). Whether the start-point vertex is excited or not depends on whether the coefficients of $\delta(\hat{e}(m),e)$ are a function of the $\rhd$-classes of $E$, where two elements $e_1$ and $e_2$ are in the same $\rhd$-class if there exists a $g \in G$ such that $e_2 = g \rhd e_1$ (this is an equivalence relation). If the coefficient for each element $e \in E$ is equal to the coefficient for each element related by the $\rhd$ action, such as $g \rhd e$, then the start-point is not excited. On the other hand, if for each $\rhd$-class the coefficients for the elements within that $\rhd$-class sum to zero, then the start-point is excited. We note that the irrep basis given in Equation \ref{Equation_E_membrane_irrep_basis_non_Abelian} does not provide a good description for this phenomenon, because the coefficients $[D^{\mu}(e)]_{ab}$ given by the matrix elements of an irrep do not transform in a particular way under an $\rhd$ action, and this action can even cause mixing between irreps. Generally, to get membrane operators which either definitely excite the start-point or definitely leave it unexcited, we must consider linear combinations of the basis operators given in Equation \ref{Equation_E_membrane_irrep_basis_non_Abelian}. \hfill In addition to the start-point, we also need to be careful when determining the boundary of the surface, which supports the loop-like excitation. 
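\hfill A minimal example of these start-point conditions (with a crossed module chosen purely for illustration) is given by $G = \mathbb{Z}_2 = \{1, x\}$ and $E = \mathbb{Z}_3$ written additively, with $\partial$ trivial and $x \rhd e = -e$. The Peiffer conditions are easily checked, and the $\rhd$-classes are \begin{equation*} \{0\} \quad \text{and} \quad \{1, 2\}. \end{equation*} A membrane operator with coefficients satisfying $\alpha_1 = \alpha_2$ therefore leaves the start-point unexcited, whereas one with $\alpha_0 = 0$ and $\alpha_1 = -\alpha_2$ excites it.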
The boundary of the surface always starts and ends at the start-point of the membrane operator, because this start-point is the base-point of the surface. This means that if we nucleate a loop-like excitation at the start-point and then try to move it away from the start-point, part of the boundary still connects to the start-point, as shown in Figure \ref{E_membrane_sp_away}. This section of the boundary is attached to the start-point and the edges in this section appear twice in the boundary, with opposite orientation each time (see Section S-I C in the Supplementary Material of Ref. \cite{HuxfordPaper2} for more details about the boundary of surfaces), just like a whiskering path of the type considered in Section I D of Ref. \cite{HuxfordPaper1} (also shown in Figure \ref{move_basepoint}). For this reason, we will refer to such a section of boundary as a whiskered section. As we showed in Section S-I C of the Supplementary Material of Ref. \cite{HuxfordPaper2}, the edges along such sections of boundary may be excited if $E$ is non-Abelian (whereas if $E$ is Abelian these edges are never excited). Whether these edges are excited by a particular membrane operator or not depends on the coefficients associated to that membrane operator. For an $E$-valued membrane operator $$L^{\vec{\alpha}}(m) = \sum_{e \in E} \alpha_e \delta(\hat{e}(m),e)$$ with a general set of coefficients $\alpha_e$, the edges on the whiskered section of boundary are left unexcited if the coefficients are a function of conjugacy class, so that $\alpha_e = \alpha_{fef^{-1}}$ for all $e, f \in E$. For example, if the coefficients are given by the characters of an irrep $\mu$ of $E$ (which are invariant under conjugation), then the edges will be unexcited. On the other hand, the edges will be excited if the coefficients within each conjugacy class sum to zero. 
One important thing to note is that if the start-point is not excited, then the edges from the start-point to the loop are not excited either (as we may expect, because the start-point becomes unimportant when it is unexcited). To see this, we note that if the start-point is unexcited, then the coefficients of the membrane operator are invariant under the $\rhd$ action: $\alpha_e = \alpha_{g \rhd e}$ for all $g \in G$ and $e \in E$. In particular, this means that the coefficients are invariant under an action of the form $\partial(f) \rhd$ for any $f\in E$: $\alpha_e = \alpha_{\partial(f) \rhd e}$. However, because of the Peiffer condition Equation \ref{Equation_Peiffer_2}, $\partial(f) \rhd e = fef^{-1}$. That is, the $\rhd$ action from an element in $\partial(E)$ is equivalent to conjugation. Therefore, if the coefficient is invariant under any $\rhd$ action, it is also invariant under conjugation, and so if the start-point is unexcited then so are any edges on the whiskered section of the boundary. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{3D_membrane_start_point_away.png} \end{overpic} \caption{The boundary of the surface (green) measured by the membrane operator starts and ends at the start-point (yellow sphere). If this start-point is away from the naive boundary of the membrane (represented by the red torus) there may be a line of excited edges (blue path) connecting the start-point to the loop-like excitation, because the surface label measured by the membrane operator generally transforms non-trivially under such edge transforms when $E$ is non-Abelian. This indicates that for some loop-like excitations there is an energetic cost to moving the excitation away from the start-point.} \label{E_membrane_sp_away} \end{center} \end{figure} \hfill This idea that there may be a string of excited edges from the start-point to the loop-like excitation is significant for the behaviour of the excitation. 
If the edges are not excited then it indicates that we can move the loop-like excitation created by the membrane operator without dragging any excitations from the start-point. That is, the excitation is not confined. On the other hand, if the edges are excited then we cannot move the loop-like excitation away from the start-point without incurring an energy cost. This means that the excitation is confined. However, note that the additional energy cost is associated to a confining string which connects the loop to the start-point. This additional energy does not depend on the size of the loop itself. As we discuss in Section \ref{Section_Sphere_Charge_Reduced} (although we do not consider the fake-flat case in that section, we would expect a similar result), the loop-like excitations of this model can also carry point-like charge (which is balanced by the charge carried by the start-point). The fact that the confining energy cost depends on the separation of the loop and start-point (rather than the area of the membrane enclosed by the loop) suggests that it is the point-like charge that is confined, rather than the loop-like charge. This is further supported by the fact that the membrane operators that do not produce an excitation at the start-point never produce confined loop-like excitations. \subsection{Blob Excitations} \label{Section_3D_Blob_Fake_Flat} The blob excitations are changed significantly by taking $\rhd$ non-trivial and enforcing fake-flatness at the level of the Hilbert space. Firstly, as we saw in Section \ref{Section_3D_Blob_Excitation_Tri_Trivial}, in the $\rhd$ trivial case some of the blob ribbon operators excite the plaquettes along their length (and would still do so when we take $\rhd$ to be non-trivial). These confined ribbon operators must be thrown out in the fake-flat case because they violate fake-flatness. This means that the labels of the blob ribbon operators are restricted to lie in the kernel of $\partial$. 
Secondly, the action of each blob ribbon operator is more complicated. In addition to the path between the centres of blobs, we must specify a path on the lattice from a privileged vertex, called the start-point of the operator, to the base-points of each affected plaquette. We call this path the direct path, and call the original path (which pierces the affected plaquettes) the dual path. We can either have a single direct path that runs through each of these base-points (so that the path to each base-point is an extension of the path to the previous base-point), or instead have a set of paths, one for each pierced plaquette. Now instead of simply multiplying the plaquette labels by $e$ or $e^{-1}$, the blob ribbon operator left-multiplies the label of each plaquette $p$ pierced by the dual path by $g(s.p-v_0(p))^{-1} \rhd e$, or right-multiplies it by the inverse of this element, where $(s.p-v_0(p))$ is the path from the start-point of the ribbon operator to the base-point of the affected plaquette and $g(s.p-v_0(p))$ is the corresponding group element. As in the $\rhd$ trivial case, whether the element or its inverse is used depends on the orientation of the plaquette with respect to the dual path of the blob ribbon operator, with the inverse used if the plaquette aligns with the dual path. A simple example of this action is shown in Figure \ref{effectbloboperator}.

\begin{figure}[h] \begin{center} \begin{overpic}[width=0.7\linewidth]{Effect_Blob_Operator_Alternate.png} \put(52,19){\large $e_1 \rightarrow ee_1$} \put(52,35){\large $e_2 \rightarrow [g_1^{-1} \rhd e] \: e_2$} \put(52,52){\large $e_3 \rightarrow e_3 [(g_2^{-1}g_1^{-1}) \rhd e^{-1}]$} \put(24,25){\large $g_1$} \put(24,43){\large $g_2$} \end{overpic} \caption{When $\rhd$ is non-trivial, the effect of the blob ribbon operator on a plaquette depends on the (inverse of the) path label for a path from a designated start-point of the operator (yellow sphere labelled $s.p$) to the base-point of the plaquette (the grey sphere attached to each plaquette). As in the $\rhd$ trivial case, the orientation of the plaquette (indicated by the curved yellow arrows) determines whether the plaquette label is left-multiplied by $e$ or right-multiplied by $e^{-1}$.} \label{effectbloboperator} \end{center} \end{figure} The way the direct path affects the action of the blob ribbon operator is similar to how the direct path affects the action of the magnetic ribbon in 2+1d (as we saw in Ref. \cite{HuxfordPaper2}), except that instead of conjugation by the path element we have this $\rhd$ action. If we were allowed blob ribbon operators labelled by a general element of $E$ (rather than just an element in the kernel of $\partial$), the precise path chosen for this direct path would be significant. This is because when we deform a path $t$ over a fake-flat surface, while keeping the start and end points fixed, the path label $g(t)$ is altered by a factor of the form $\partial(f)$ for some $f \in E$. When $E$ is general, this additional factor of $\partial(f)$ causes a non-trivial difference in expressions of the form $g(t)^{-1} \rhd e$, such as those which appear in the action of the blob ribbon operator. 
Specifically, from the Peiffer conditions (Equations \ref{Equation_Peiffer_1} and \ref{Equation_Peiffer_2} in Section \ref{Section_Recap_3d}) we have $(\partial(f)g(t)^{-1}) \rhd e = f [g(t)^{-1} \rhd e] f^{-1}$. However, when we restrict the element $e$ labelling the blob ribbon operator to be in the kernel of $\partial$, we also ensure that $e$ is in the centre of the group $E$. To see that elements in the kernel of $\partial$ are also in the centre of $E$, we again use the Peiffer conditions. Given that $e_k$ is an element of the kernel of $\partial$, the second Peiffer condition (Equation \ref{Equation_Peiffer_2}) tells us that \begin{align*} e_k f e_k^{-1}=\partial(e_k) \rhd f &= 1_G \rhd f =f\\ & \implies e_k f = f e_k \: \: \forall f \in E. \end{align*} That is, elements in the kernel must commute with all elements of $E$. We also note that if $e_k$ is an element of the kernel, then so is $g \rhd e_k$ for any $g \in G$. This is because $\partial(g \rhd e_k)=g \partial(e_k) g^{-1}=g g^{-1}=1_G$, from the first Peiffer condition (Equation \ref{Equation_Peiffer_1}). Therefore, for any element $e_k$ in the kernel of $\partial$, $g \rhd e_k$ is also in the kernel of $\partial$ and so is in the centre of $E$. This means that in our earlier expression, $f [g(t)^{-1} \rhd e] f^{-1}$ is equal to $g(t)^{-1} \rhd e$ when $e$ is in the kernel, and so the additional factor of $\partial(f)$ from deforming the path $t$ is irrelevant, at least when we act on states that obey fake-flatness. Therefore, when we restrict our blob excitations to the non-confined ones, which are labelled by elements of the kernel of $\partial$, the precise choice of paths from the start-point to the base-points of the affected plaquettes does not matter. 
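The algebraic fact used above, that elements of $\ker \partial$ lie in the centre of $E$, can be verified in a small example. The following sketch assumes the crossed module with $E=D_4$, $G=D_4/Z(D_4)\cong \mathbb{Z}_2\times\mathbb{Z}_2$ and $\partial$ the quotient map (a hypothetical choice made only for illustration):

```python
# Toy check that ker(partial) lies in the centre of E. Assumed example:
# E = D4 with elements (r, s), r in Z4 a rotation and s in {0, 1} a
# reflection flag; G = D4/Z(D4) ~ Z2 x Z2 and partial the quotient map.
D4 = [(r, s) for r in range(4) for s in range(2)]

def mul(a, b):          # dihedral multiplication
    r1, s1 = a
    r2, s2 = b
    return ((r1 + (r2 if s1 == 0 else -r2)) % 4, s1 ^ s2)

def partial(e):         # quotient by the centre {(0, 0), (2, 0)}
    r, s = e
    return (r % 2, s)   # an element of G = Z2 x Z2

def gmul(x, y):         # multiplication in G
    return ((x[0] + y[0]) % 2, x[1] ^ y[1])

# partial is a group homomorphism from E to G...
assert all(partial(mul(a, b)) == gmul(partial(a), partial(b))
           for a in D4 for b in D4)

# ...and its kernel coincides with the centre of E, so every kernel
# element commutes with all of E, as the Peiffer argument requires.
kernel = [e for e in D4 if partial(e) == (0, 0)]
centre = [e for e in D4 if all(mul(e, f) == mul(f, e) for f in D4)]
assert set(kernel) == set(centre) == {(0, 0), (2, 0)}
```

In general the Peiffer argument only guarantees $\ker \partial \subseteq Z(E)$; in this particular example the two subgroups happen to coincide.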
This insensitivity to the path is only for smooth deformation over fake-flat regions, so if the lattice supports non-contractible cycles then different choices of path may give different actions for the ribbon operator, with these different actions being equivalent to taking different labels for all or part of the ribbon operator. \hfill Much as we saw with the magnetic excitation in 2+1d, this dependence of the action of the blob ribbon operator on the value of (sections of) the direct path may lead to the blob ribbon operator exciting the start-point of the operator. This is because vertex transforms at the start-point can affect the path label of the direct path. As we show in Section \ref{Section_Blob_Ribbon_Fake_Flat} of the Supplementary Material, the start-point vertex is not excited if the ribbon operator is an equal superposition of all the ribbon operators labelled by elements in an $\rhd$-class (the sets of elements related by the $\rhd$ action), but is excited if the coefficients for the elements in each $\rhd$-class sum to zero. \subsection{Condensation and Confinement} In Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}, we discussed the pattern of confinement and condensation exhibited by the excitations of the higher lattice gauge theory model when $\rhd$ is trivial. In addition, we explained that for any pair of groups $G$ and $E$ that form a valid crossed module with $\rhd$ trivial, there is a family of crossed modules (and so lattice models) differentiated from each other by different maps $\partial$ (assuming that the two groups can support different homomorphisms $\partial$). In each family, the model described by the crossed module for which $\partial$ maps only to the identity of $G$ is an ``uncondensed model" where there is no condensation or confinement present, and the transition to other models described by the same groups $G$ and $E$ (but different $\partial$) is a condensation-confinement transition.
When $\rhd$ is non-trivial, however, the picture is less clear. This is because, given a model described by an arbitrary crossed module, we cannot necessarily construct a corresponding uncondensed model. To see this, consider a generic crossed module $(G,E, \partial, \rhd)$. Now suppose that $E$ is a non-Abelian group. This means that there are some pairs of elements $e,f \in E$ such that $efe^{-1} \neq f$. From the Peiffer condition Equation \ref{Equation_Peiffer_2} (see Section \ref{Section_Recap_3d}), this means that $\partial(e) \rhd f \neq f$ for this pair. However, there is condensation and confinement when $\partial(E)$ is not the trivial group (as we discussed in the $\rhd$ trivial case in Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial} and will describe in the fake-flat case later in this section). If there were an ``uncondensed" model with $\partial \rightarrow 1_G$, then the relation $\partial(e) \rhd f \neq f$, which follows from the non-Abelian nature of the group $E$, would imply that $1_G \rhd f \neq f$. However, this is incompatible with the definition of $\rhd$ as a group homomorphism from $G$ to endomorphisms on $E$ (see Section \ref{Section_Recap_3d}), because this definition means that the identity element $1_G$ should be mapped to the identity map on $E$. Therefore, there is no such crossed module and so no ``uncondensed" model corresponding to the pair of groups $G$ and $E$. Because the Hilbert space is fixed by the lattice and the groups $G$ and $E$, for a model described by a general crossed module there does not seem to be a corresponding uncondensed model with the same Hilbert space (at least not in the space of higher lattice gauge theory models, though there may well be another model giving the ``uncondensed" phase). This means that we are unable to describe the general model in terms of a condensation-confinement transition in this work. However, we can still describe the pattern of confinement (i.e.
which excitations cost energy to separate from their antiparticle) and condensation (i.e. which operators act equivalently to ``local" operators on the ground state), which we aim to do briefly in this section. \hfill The first excitations to consider are the electric excitations, some of which are confined due to their ribbon operators exciting the edge terms along the ribbon. These excitations have the same pattern of confinement as in the $\rhd$ trivial case considered in Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}. Namely, the confined electric ribbon operators $\sum_g \alpha_g \delta( \hat{g}(t),g)$ have coefficients which satisfy $\sum_{e \in E} \alpha_{\partial(e)g}=0$ for all $g \in G$ and the unconfined ribbon operators have coefficients which satisfy $\alpha_{\partial(e)g}= \alpha_g$ for all $g \in G$ and $e \in E$ (while general ribbon operators can be split into contributions from the two cases and leave the edges along the ribbon in a superposition of excited and unexcited states). \hfill We next consider the blob excitations. Some of these, namely those created by blob ribbon operators with label outside the kernel of $\partial$, would be confined, but because the mechanism for this confinement is the violation of the plaquette terms that enforce fake-flatness, these ribbon operators must be excluded from the fake-flat model. \hfill So far, this pattern of confinement is the same as for the $\rhd$ trivial case described in Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}. However, unlike in that case, some of the $E$-valued loops are also confined, as we described in Section \ref{Section_3D_Loop_Tri_Non_Trivial}. That is, the $E$-valued membrane operators $$\sum_{e \in E} \alpha_e \delta( \hat{e}(m),e)$$ whose coefficients $\alpha_e$ are sensitive to conjugation (i.e.
$\alpha_e$ does not equal $\alpha_{fef^{-1}}$ for some pair $e,f \in E$) may produce an excited string (in addition to the loop excitation itself) as the loop is moved away from the start-point of the membrane. In particular, if the coefficients satisfy $\sum_{f \in E} \alpha_{fef^{-1}} =0$ for each $e \in E$, then the string is definitely excited (whereas the string is definitely not excited if $\alpha_e = \alpha_{fef^{-1}}$ for all $e, f \in E$). As we noted in Section \ref{Section_3D_Loop_Tri_Non_Trivial}, this confinement appears to correspond to confinement of the point-like charge carried by the loop excitation, rather than of the loop-like charge, because the confinement energy does not depend on the area of the loop, and the confinement can only occur when the start-point of the membrane is excited (an excited start-point seems to indicate the presence of a non-trivial point-like charge, as we are able to show in Section \ref{Section_3D_sphere_charge_examples} for the less general case where $\partial$ maps to the centre of $G$ and $E$ is Abelian). \hfill Next we consider the pattern of condensation evident in the model in this fake-flat case (Case 3 from Table \ref{Table_Cases}). By this, we mean that we want to look at which excitations can be produced by operators that are local to the excitation itself. That is, condensed point-like excitations can be produced by operators that act only on a few degrees of freedom near the excitations, and condensed loop-like excitations can be produced by operators near the loops (which have linear extent, so the operators need not be local in the traditional sense). Because we are unable to construct the magnetic excitations when we restrict to fake-flatness, the only condensed excitations remaining are $E$-valued loop excitations. 
Just as in the $\rhd$ trivial case, the condensed $E$-valued loop excitations are those that are produced by membrane operators which are not sensitive to surface elements in the kernel of $\partial$. This is because if the membrane operator is only sensitive to $\partial(\hat{e}(m))$ (i.e. is not sensitive to the kernel of $\partial$), then it has the same action as an electric ribbon operator measuring the boundary label (whereas a membrane operator sensitive to the kernel of $\partial$ can resolve information not obtainable from the boundary path element). \hfill One interesting fact about this pattern of condensation is that it can coexist with the confinement of the $E$-valued loop excitations. Recall that the condition for the $E$-valued loop excitation to be confined is that $ \sum_{f \in E} \alpha_{fef^{-1}} = 0$ for all $e \in E$. This is not mutually exclusive with the condition that the excitation is condensed ($\alpha_e = \alpha_{e e_k}$ for any $e \in E$ and $e_k \in \ker(\partial)$). For example, consider a crossed module of the form $(G,E=G, \partial = \text{id}, \rhd \rightarrow \text{ conj.})$, for which the two groups $G$ and $E$ are the same, with $\partial$ being the identity map, while $\rhd$ maps to conjugation (i.e. $g \rhd e =g e g^{-1}$). In this case, the only element of the kernel of $\partial$ is the identity element $1_G=1_E$. Therefore, any $E$-valued membrane operator will satisfy the condensation condition (i.e. any coefficients $\alpha_e$ satisfy $\alpha_e = \alpha_{ee_k}$ for all $e_k$ in the kernel of $\partial$, because $e_k$ can only be the identity element). This means that any coefficients satisfying the confinement condition $ \sum_{f \in E} \alpha_{fef^{-1}} = 0$ for this crossed module will also trivially satisfy the condensation condition. How can it be that an excitation is simultaneously confined and condensed? 
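The coexistence described above can be checked concretely. The following sketch (taking $G=E=S_3$ as an assumed example of the crossed module $(G,E=G,\partial=\text{id},\rhd=\text{conj.})$) exhibits coefficients satisfying the confinement condition $\sum_{f \in E} \alpha_{fef^{-1}}=0$ while trivially satisfying the condensation condition:

```python
from itertools import permutations

# Toy check of simultaneous confinement and condensation for the
# crossed module (G, E = G, partial = id, |> = conjugation), taking
# G = E = S3 as an assumed example (permutation tuples as elements).
E = list(permutations(range(3)))

def mul(a, b):          # composition: (a * b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inv(a):             # inverse permutation
    r = [0, 0, 0]
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

# Hypothetical coefficients: +1 on one transposition, -1 on another.
alpha = {e: 0 for e in E}
alpha[(1, 0, 2)] = 1     # the transposition swapping 0 and 1
alpha[(0, 2, 1)] = -1    # the transposition swapping 1 and 2

# Confinement condition: sum_f alpha(f e f^{-1}) = 0 for every e.
assert all(sum(alpha[mul(mul(f, e), inv(f))] for f in E) == 0 for e in E)

# Condensation condition: alpha(e) = alpha(e e_k) for e_k in ker(partial).
# Since partial = id here, the kernel is just the identity permutation,
# so the condition holds trivially for any coefficients.
identity = tuple(range(3))
assert all(alpha[e] == alpha[mul(e, identity)] for e in E)
```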
This is because, as we discussed previously in this section, it is the point-like charge carried by the excitation that is confined. On the other hand, the way we have defined condensation means that the condensation must be of the loop-like charge: we have shown that the loop excitation can be produced by an operator local to the excitation (which has linear extent), but this does not mean that the point-like charge can be produced locally in a point-like sense (i.e. with support only on a few degrees of freedom). This fact demonstrates that in future study of condensation and confinement in 3+1d, we must be careful to consider what exactly we mean by condensation or confinement, and which charges (not just excitations) undergo condensation. \section{Braiding in the Fake-Flat Case} \label{Section_3D_Braiding_Fake_Flat} Next we discuss the braiding relations in this special case, Case 3 from Table \ref{Table_Cases}, in which we restrict to fake-flat configurations. The fact that we are unable to include the magnetic excitations (because they violate fake-flatness) means that the braiding relations are rather simple. Indeed, the only remaining non-trivial braiding is between the $E$-valued loop excitations and the blob excitations. Just as in the 2+1d case considered in Ref. \cite{HuxfordPaper2}, however, the signatures of the magnetic excitations are still present in the ground states of manifolds with non-contractible cycles, which can have labels outside of $\partial(E)$. Before we discuss the braiding proper, we will briefly describe how the excitations transform as they are moved around such non-contractible cycles. \subsection{Moving Excitations Around Non-Contractible Cycles} The first type of excitation that we wish to consider moving around a non-contractible cycle is the electric excitation. The transformation obtained by moving an electric excitation around such a cycle is the same in the 3+1d case as in the 2+1d case considered in Ref. 
\cite{HuxfordPaper2}. Namely, if we compare an electric ribbon operator applied on a path $s$ to one applied on the path $s \cdot t$, obtained by concatenating the original path with a non-contractible closed path $t$, we find \begin{align*} S^{R,a,b}(s \cdot t) &= \sum_{c=1}^{|R|} [D^R(\hat{g}(t))]_{cb} S^{R,a,c}(s) \end{align*} where $S^{R,a,b}(s \cdot t)$ is the electric ribbon operator labelled by irrep $R$ of $G$ and matrix indices $a$ and $b$. We see that there is mixing between different electric ribbon operators labelled by the same irrep $R$, with this mixing controlled by the matrix $D^R(\hat{g}(t))$ representing the path element $\hat{g}(t)$ in irrep $R$. The path element $\hat{g}(t)$ is an operator, and the ground states are not typically eigenstates of this operator even for closed paths $t$. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{membrane_deform_whisker_complete_2.png} \put(28,64){\Huge $\downarrow$} \put(31,66){deform} \put(28,24){\Huge $\downarrow$} \put(31,26){deform} \end{overpic} \caption{Given an $E$-valued membrane operator (green) wrapping around a closed cycle, we can deform the section wrapping around the cycle and shrink it down to nothing. This just leaves a whiskering string (yellow) connecting the start-point (yellow sphere) to the small part of the membrane remaining (the green disk in the final image).} \label{E_valued_membrane_whisker_deform} \end{center} \end{figure} \hfill In a similar way, we can find how the $E$-valued loop-like excitations transform as they are moved around a non-contractible cycle. In order to do so, we compare $E$-valued membrane operators applied on two membranes $m$ and $m'$ which are the same except that $m'$ is whiskered around the non-contractible cycle $t$.
This is because the membrane operators are topological, and so a membrane travelling around the cycle can be deformed by shrinking the section of the membrane around the cycle down to nothing, so that only a whiskering path $t$ remains, as shown in Figure \ref{E_valued_membrane_whisker_deform}. Then the $E$-valued membrane operator $L^{\mu,a,b}(m')$ applied on this whiskered membrane is given by $$L^{\mu,a,b}(m') = \sum_{e \in E} [D^{\mu}(e)]_{ab} \delta(e, \hat{e}(m')), $$ where $\mu$ is the irrep of $E$ labelling the membrane operator (and $a$ and $b$ are the matrix indices labelling the operator). The surface element $\hat{e}(m')$ can be written in terms of the surface element of the unwhiskered membrane $m$ using the rules for whiskering surfaces given in Ref. \cite{Bullivant2017}. We have $$\hat{e}(m') =\hat{g}(t) \rhd \hat{e}(m),$$ where $t$ is the closed cycle, which is also the path $(s.p(m')-s.p(m))$ between the start-points of the two membranes. Then we have \begin{align*} L^{\mu,a,b}(m') &= \sum_{e \in E} [D^{\mu}(e)]_{ab} \delta(e, \hat{e}(m'))\\ &= \sum_{e \in E} [D^{\mu}(e)]_{ab} \delta(e, \hat{g}(t) \rhd \hat{e}(m))\\ &= \sum_{e \in E} [D^{\mu}(e)]_{ab} \delta( \hat{g}(t)^{-1} \rhd e, \hat{e}(m))\\ &= \sum_{e'= \hat{g}(t)^{-1} \rhd e \in E} [D^{\mu}(\hat{g}(t) \rhd e')]_{ab} \delta( e', \hat{e}(m)). \end{align*} The matrix $D^{\mu}(\hat{g}(t) \rhd e')$ can be considered as the matrix representation for element $e'$ in a new irrep, $\hat{g}(t) \rhd \mu$. Therefore, we see that moving the $E$-valued loop excitation around the path $t$ mixes the irreps related by this $\rhd$ action. We say that irreps related by the action of $g \rhd$ for some $g \in G$ belong to the same $\rhd$-Rep class of irreps. \hfill The final type of excitation to consider passing around a non-contractible cycle is the blob excitation. 
Recall from Section \ref{Section_3D_Blob_Fake_Flat} that the action of a blob ribbon operator $B^e(r)$ on a plaquette $p$ pierced by the ribbon $r$ is (choosing the plaquette to be aligned with $r$ for simplicity) $$B^e(r) :e_p = e_p [g(s.p(r)-v_0(p))^{-1} \rhd e^{-1}].$$ Taking the path $s.p(r)-v_0(p)$ to be $t \cdot s$, where $t$ is a closed non-contractible cycle, we can write this action as \begin{align*} B^e(r) :e_p &= e_p [g(t \cdot s)^{-1} \rhd e^{-1}]\\ &= e_p [(g(s)^{-1}g(t)^{-1}) \rhd e^{-1}]\\ &= e_p [g(s)^{-1} \rhd (g(t)^{-1} \rhd e^{-1})] \end{align*} which is the same as the action of a blob ribbon operator that does not wrap around the cycle (so that the path $(s.p(r)-v_0(p))$ is just $s$) except with $e$ replaced by $g(t)^{-1} \rhd e$. Note that here we have taken the direct path of the ribbon operator to wrap around the non-contractible cycle, but not the dual path. If we also let the dual path wrap around the cycle then the plaquettes pierced by $s$ obtain two factors acting on the plaquette label, one from the ribbon before it wraps the cycle (corresponding to the label $e$) and one from the ribbon after it wraps the cycle (corresponding to the label $g(t)^{-1} \rhd e$ as above). \subsection{Loop-Blob Braiding} \label{Section_3D_Loop_Blob_Braiding_Fake_Flat} Compared to the loop-blob braiding that we saw in Section \ref{Section_Loop_Blob_Braiding_Tri_Trivial}, the loop-blob braiding in this special case is slightly more complicated. This is because the action of the ribbon and membrane operators involved now depends on the values of various paths on the lattice. For example, as we saw in Section \ref{Section_3D_Blob_Fake_Flat}, the blob ribbon operator multiplies plaquette elements by a group element $\hat{g}(s.p-v_0(p))^{-1} \rhd e$ which depends on the label of a path. This label is really an operator, because the value of the path label depends on what state we are acting on. 
In particular, the ground state does not have a definite value of this label, instead being made up of a linear combination of states with different path labels. Because of this, we may expect that the braiding does not generally give us a definite result and that the braiding relation may depend on such operator-valued labels. However, as with the braiding of the flux tubes that we saw in Section \ref{Section_Flux_Flux_Braiding_Tri_Trivial}, the braiding relations are simple for particular cases where the start-points of the operators match. Therefore, we are most interested in these same start-point commutation relations. To understand these, it will be useful to first discuss the interpretation of the blob and loop excitations in 2-gauge theory. \hfill Similar to lattice gauge theory, it is useful to consider the gauge invariants of higher lattice gauge theory, which can be built from quantities associated to closed loops and surfaces \cite{Bullivant2017}. In addition to the ``1-flux" of loops, which is also present in ordinary gauge theory, we have the ``2-flux" or 2-holonomy of closed surfaces. The 2-flux itself, described by an element of $E$, is not a gauge-invariant quantity. However, the 2-flux of a closed surface is only changed within certain equivalence classes of elements by the gauge transforms \cite{Bullivant2017} (as we will see for tori and spheres in Sections \ref{Section_3D_Topological_Charge_Torus_Tri_nontrivial} and \ref{Section_sphere_topological_charge_appendix_full} in the Supplementary Material), and so these classes in $E$ are gauge-invariant, with the identity element in particular belonging to a class of its own. This means that the 2-flux is still a useful quantity. In this model, the blob excitations are associated to non-trivial 2-flux on a surface enclosing the blob excitation. 
The boundary of the excited blob is itself a surface with non-trivial 2-flux, because an excited blob by definition has a non-trivial surface label on its boundary. To measure this 2-flux, we must pass a loop over the closed surface whose 2-flux we wish to measure. When we measure the 2-flux, we must specify the base-point with respect to which we measure the 2-flux. The choice of base-point is equivalent to a gauge choice, so choosing a different base-point can give a different element for the 2-flux, within the same $\rhd$-class (i.e. the new element is related to the old one by a $\rhd$ action). Once we have chosen the base-point, our measurement loop must be nucleated at that base-point before being passed over the surface, as shown in Figure \ref{surfaceholonomy1}. In this model, the $E$-valued loop excitations measure 2-flux, as we can see from the fact that the corresponding measurement operators assign a weight to each possible surface label. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.6\linewidth]{Blob_Holonomy_Measurement.png} \end{overpic} \caption{(Copy of Figure 19 from Ref. \cite{HuxfordPaper1}.) The 2-holonomy of a surface (in this case a sphere) can be measured by a transport process. A small loop is created at the base-point (the small red sphere), then dragged over the surface (the larger blue sphere), as indicated by the arrow.} \label{surfaceholonomy1} \end{center} \end{figure} \hfill This idea about measuring a 2-flux has important ramifications for our braiding. We have seen that the blob excitations are non-trivial 2-fluxes and the $E$-valued loop excitations measure surface elements, so these loop excitations can measure the 2-flux of the blob excitations. This is why there is non-trivial braiding between these two types of excitation. When we compare the situation where the loop excitation is passed over the blob to the situation where it is not (i.e. 
we compare the braided case to the unbraided case), we measure the 2-flux of the blob excitation. However, the blob ribbon operator produces excitations that have definite 2-flux only with respect to the start-point of the blob ribbon operator. Similarly, the $E$-valued loop measures 2-flux with respect to the start-point of the membrane operator that creates the loop excitation. Therefore, we expect a definite braiding relation when the start-point for our blob ribbon operator matches the start-point for our $E$-valued loop membrane operator. Note that when $\rhd$ is trivial, the start-points lose meaning (we do not need a direct path for the blob ribbon operators and the surface does not need a base-point for the $E$-valued loop) and the loop can be nucleated at any point before being passed over the blob excitation (rather than at the specified start-point of the blob ribbon operator). This is why the braiding is simple when $\rhd$ is trivial (as discussed in Section \ref{Section_Loop_Blob_Braiding_Tri_Trivial}) and it is not necessary to fix the positions of the start-points in that case. \hfill While we have discussed braiding of the blobs and the loops so far in terms of passing the loop over the blob, we can equally move the blob excitation through the loop instead. These are equivalent, but it is slightly easier to calculate the latter situation. The relevant commutation relation to calculate this braiding relation is shown in Figure \ref{blobloopdetail}. Again, we must ensure that the start-points of each operator are in the same location. \hfill The result of this same-site braiding, where we pass a ribbon operator $B^e(t)$ through an $E$-valued membrane operator $\delta(e_m, \hat{e}(m))$, is then $$B^e(t) \delta(e_m, \hat{e}(m)) = \delta(e_m e^{-1}, \hat{e}(m)) B^e(t),$$ as illustrated in Figure \ref{blobloopresult} (and proven in Section \ref{Section_braiding_fake-flat_appendix} in the Supplementary Material). 
The operators $\delta(e_m, \hat{e}(m))$ for each label $e_m \in E$ form a basis for our space of $E$-valued membrane operators, but we want to consider the commutation of one of the basis operators labelled by an irrep of $E$ instead. We have that \begin{align} B^e(t)& \sum_{e_m \in E} [D^\alpha(e_m)]_{ab} \delta(\hat{e}(m),e_m) \notag \\ &= \sum_{e_m \in E} [D^\alpha(e_m)]_{ab} \delta(\hat{e}(m),e_m e^{-1}) B^e(t)\notag \\ &= \sum_{e'=e_m e^{-1} \in E} [D^\alpha(e' e)]_{ab} \delta(\hat{e}(m),e')B^e(t) \notag\\ &= \sum_{e' \in E} \sum_{c=1}^{|\alpha|} [D^{\alpha}(e')]_{ac} [D^{\alpha}(e)]_{cb} \delta(\hat{e}(m),e')B^e(t) \notag\\ &=\sum_{c=1}^{|\alpha|} [D^{\alpha}(e)]_{cb} \sum_{e' \in E} [D^{\alpha}(e')]_{ac} \delta(\hat{e}(m),e')B^e(t). \label{Equation_loop_blob_braiding_fake_flat} \end{align} If $\alpha$ is a 1D representation the braiding therefore results in the accumulation of a phase of $\alpha(e)$. If $\alpha$ is higher-dimensional, then it would seem that there is mixing between the different matrix indices. However, recall from Section \ref{Section_3D_Blob_Fake_Flat} that in order to ensure fake-flatness we restricted the label $e$ of the blob ribbon operators to be in the kernel of $\partial$ and therefore the centre of $E$. The matrix representation of an element of the centre is a scalar multiple of the identity from Schur's Lemma, so we can write $[D^{\alpha}(e)]_{cb} = \delta_{cb} [D^{\alpha}(e)]_{11}$ (where the index $1$ could be replaced with any index). 
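This Schur's-lemma step can be illustrated with a toy computation (using the two-dimensional irrep of $D_4$, an assumed example in which the central element $r^2$ plays the role of a kernel-valued label $e$):

```python
# Toy check of the Schur's-lemma step: in the 2d irrep of D4, the
# central element r^2 is represented by a scalar multiple of the
# identity, so [D(e)]_{cb} = delta_{cb} [D(e)]_{11} for central e.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[0, -1], [1, 0]]    # rotation generator r
S = [[1, 0], [0, -1]]    # reflection generator s

# These matrices satisfy the dihedral relation s r s = r^{-1} = r^3.
R3 = matmul(matmul(R, R), R)
assert matmul(S, matmul(R, S)) == R3

# The central element r^2 is represented by -1 times the identity,
# which commutes with everything (consistent with Schur's lemma).
R2 = matmul(R, R)
assert R2 == [[-1, 0], [0, -1]]
assert matmul(R2, S) == matmul(S, R2)
```

Here the scalar is $[D^{\alpha}(r^2)]_{11} = -1$, a phase, as the argument in the text requires.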
This means that the braiding relation \ref{Equation_loop_blob_braiding_fake_flat} simplifies to \begin{align} B^e(t) \sum_{e_m \in E}& [D^\alpha(e_m)]_{ab} \delta(\hat{e}(m),e_m) \notag \\ &= [D^{\alpha}(e)]_{11} \sum_{e' \in E} [D^{\alpha}(e')]_{ab} \delta(\hat{e}(m),e')B^e(t), \label{Equation_llop_brod_braiding_fake_flat_2} \end{align} so again we only accumulate a phase $[D^{\alpha}(e)]_{11}$ (this matrix element must be a phase because the matrix is diagonal and unitary, and in fact the matrix element can be used to define an irrep of the kernel of $\partial$). \hfill We see that the braiding relation between the $E$-valued loops and blob excitations is similar to the result we found for the braiding of magnetic fluxes and charges that we discussed in Section \ref{Section_Flux_Charge_Braiding}, except that in this case the irrep labels the loop-like, rather than point-like, excitation. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{Blob_Loop_With_Direction.png} \put(34,46){\large $\cdot$} \put(72,46){\large $\cdot \ket{\Psi}$} \put(39,10){\large $\cdot$} \put(72,10){\large $\cdot \ket{\Psi}$} \put(61,30){\large $B^e(t)$} \put(15,18){\large $\delta(e_m e^{-1},\hat{e}(m))$} \put(24,65){\large $B^e(t)$} \put(45,55){\large $\delta( e_m, \hat{e}(m))$} \put(0,10){\large $=$} \end{overpic} \caption{The result of loop-blob braiding ($\ket{\Psi}$ is a state with no excitations near the two operators).} \label{blobloopresult} \end{center} \end{figure} \subsection{Summary of Braiding in the Fake-Flat Case} In Table \ref{Table_Braiding_Fake_Flat}, we summarize the braiding in the fake-flat case by indicating which excitations can braid non-trivially. Note that when we enforce fake-flatness on the level of the Hilbert space, there are no magnetic excitations, although we can still have non-trivial flux labels around non-contractible loops on manifolds that are not simply connected (e.g. the 3-torus).
\begin{table}[h] \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline Non-Trivial& & & $E$-valued & Around\\ Braiding?& Electric & Blob & loop & handle \\ \hline Electric & \ding{55} & \ding{55} & \ding{55}& \ding{51} \\ \hline Blob& \ding{55} & \ding{55} & \ding{51}& \ding{51} \\ \hline $E$-valued & & & & \\ loop & \ding{55} & \ding{51} & \ding{55}& \ding{51}\\ \hline \end{tabular} \caption{A summary of the non-trivial braiding in the fake-flat case. In the fake-flat case, there are no magnetic excitations and the non-trivial braiding between excitations only involves the blob excitations and $E$-valued loops. However there are non-trivial results from moving excitations around handles (non-contractible cycles), which can support non-trivial 1-flux.} \label{Table_Braiding_Fake_Flat} \end{center} \end{table} \section{Ribbon and Membrane Operators in the Case Where $\partial \rightarrow$ Centre($G$) and $E$ Is Abelian} \label{Section_3D_MO_Central} So far, we have examined the excitations of the higher lattice gauge theory model in two cases. Firstly, we looked at the case where $\rhd$ is trivial, where we can find all of the excitations of the model. However, in this case the membrane operators corresponding to the 2-gauge field (labelled by the group $E$) are simple, and produce no excitations at the start-point of the corresponding operators. Secondly, we looked at the case where $\rhd$ is general, but where we restrict our Hilbert space to the fake-flat subspace, thus having to exclude the magnetic excitations. This gave us more interesting excitations from our 2-gauge field, allowing the $E$-valued membrane operators and blob ribbon operators to produce additional excitations at the start-points of the operators. However, excluding the magnetic excitations removes the interesting features from the 1-gauge field excitations. 
In the following sections, we consider a generalization of the $\rhd$ trivial case, which will allow us to keep all of the excitations while also gaining many of the features from the $\rhd$ general case. This means that we will be able to see how the magnetic excitation interacts with the more general 2-gauge excitations. We consider the case where $E$ is Abelian and $\partial$ maps onto the centre of $G$ (Case 2 in Table \ref{Table_Cases}). Note that this case includes the $\rhd$ trivial case as a sub-case, so this is a strict generalization of the situation considered in Sections \ref{Section_3D_MO_Tri_Trivial} and \ref{Section_3D_Braiding_Tri_Trivial}. \hfill In this case, many of the features of the general crossed module (but fake-flat) case are preserved, despite our restrictions on the crossed module. The electric, $E$-valued loop and blob excitation creation operators are all the same as in the general crossed module (but fake-flat) case (see Section \ref{Section_3D_MO_Fake_Flat}), except that we also allow blob ribbon operators with labels outside the kernel of $\partial$, and that the irreps labelling the basis $E$-valued membrane operators are 1D. Because of this, we will not describe the operators that produce these excitations again. On the other hand, we can include excitations analogous to the magnetic excitations from the $\rhd$ trivial case. However, the membrane operators that produce these magnetic excitations are significantly altered from the $\rhd$ trivial case that we considered in Section \ref{Section_3D_Tri_Trivial_Magnetic_Excitations}. This alteration of the membrane operators is necessary to ensure that each membrane operator commutes with the various energy terms, apart from those near the boundary of the membrane. The magnetic membrane operators affect the edge labels in the same way as in the $\rhd$ trivial case, but they also affect the plaquette labels near the membrane.
Recall that to specify our magnetic membrane we had to define a dual membrane, with the operator changing the labels of the edges cut by the dual membrane, and a direct membrane, with paths on the direct membrane controlling how the cut edges were changed (see Figure \ref{fluxmembrane2}). As well as edges, the dual membrane cuts through the plaquettes between these edges (such as the vertical plaquettes in Figure \ref{modmembranecutplaquettespart1}). In this new case, the magnetic membrane operator changes the label of the ``cut'' plaquettes if their base-points lie on the direct membrane. We say that plaquettes whose base-points lie on the direct membrane are based on the direct membrane. The action of the operator on these plaquettes depends on paths on the direct lattice, in a similar way to the action on the edges. Given a cut plaquette based on the direct membrane, the action of the membrane operator on the plaquette depends on the label of a path from the start-point of the membrane to the base-point of the plaquette. An example of this type of path is shown in Figure \ref{modmembranecutplaquettespart1}. We denote the group element assigned to the path between the start-point and the base-point of plaquette $p$ by $g(s.p-v_0(p))$, where $s.p$ is the privileged start-point of the membrane operator and $v_0(p)$ is the base-point of the plaquette $p$. If $p$ is cut by the dual membrane and based on the direct membrane, then its label is changed from $e_p$ to $(g(s.p-v_0(p))^{-1}hg(s.p-v_0(p))) \rhd e_p$. As we mentioned previously, the magnetic membrane operator only acts on a cut plaquette in this way if the plaquette is based on the direct membrane. That is, if the plaquette has its base-point away from the direct membrane, then the label of the plaquette is not affected by this $\rhd$ action. It may seem arbitrary that the plaquette label is only changed in this way if its base-point is on the direct membrane.
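As a small computational sketch of this update rule (using an illustrative toy crossed module of our own choosing, not one singled out by the text: $E = \mathbb{Z}_3$ written additively, $G = S_3$ acting as permutations of $\{0,1,2\}$, $\partial$ trivial, and $g \rhd e = \pm e$ according to the sign of $g$), the plaquette update $e_p \rightarrow (g(s.p-v_0(p))^{-1}hg(s.p-v_0(p))) \rhd e_p$ can be written as:

```python
# Sketch of the plaquette update e_p -> (g_path^{-1} h g_path) |> e_p for a
# cut plaquette based on the direct membrane, in a toy crossed module:
# E = Z_3 (additive), G = S_3 (tuples of images of 0,1,2), with
# g |> e = +e for even permutations and -e for odd ones. Names illustrative.

def compose(g, h):
    """Composition of permutations: (g o h)(i) = g[h[i]]."""
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    """Inverse permutation."""
    inv = [0, 0, 0]
    for i, image in enumerate(g):
        inv[image] = i
    return tuple(inv)

def sign(g):
    """Sign of a permutation of {0,1,2}."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if g[i] > g[j]:
                s = -s
    return s

def act(g, e):
    """The |> action: even permutations fix E = Z_3, odd ones invert it."""
    return e % 3 if sign(g) == 1 else (-e) % 3

def plaquette_update(h, g_path, e_p):
    """New label of a cut plaquette based on the direct membrane, where
    g_path stands in for the path label g(s.p - v_0(p))."""
    conj = compose(inverse(g_path), compose(h, g_path))
    return act(conj, e_p)
```

In this toy model the conjugation preserves the sign of $h$, so the plaquette label is inverted exactly when $h$ is an odd permutation, and is untouched when $h$ acts trivially, consistent with the $\rhd$ trivial case.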
However, this is analogous to the action of vertex transforms, which only affect plaquettes that are based at the vertex on which we apply the transform (except that, instead of affecting plaquettes based at a particular vertex, the membrane operator affects plaquettes based on a particular surface). Indeed, we will see that closed magnetic membrane operators are closely related to the vertex transforms in Sections \ref{Section_Topological_Magnetic_Tri_Trivial} and \ref{Section_Topological_Magnetic_Tri_Nontrivial} of the Supplementary Material. \begin{figure*}[t!] \begin{center} \begin{overpic}[width=0.8\linewidth]{Magnetic_Membrane_Cut_Plaquettes_arrows.png} \put(8,13){start-point} \put(25,6){\parbox{3cm}{\raggedright example path, $t$, to base-point}} \put(50,6){base-point on direct membrane} \put(70,18){direct membrane} \put(83,54){dual membrane} \put(60,73){\parbox{5cm}{\raggedright example plaquette based on \\ direct membrane: $e_p \rightarrow (g(t)^{-1}hg(t)) \rhd e_p$}} \put(0,76){\parbox{5cm}{\raggedright example plaquette based away from direct membrane: $e_p \rightarrow e_p$}} \put(-5,66){\parbox{4cm}{\raggedright base-point away from direct membrane}} \end{overpic} \caption{In addition to changing the edges cut by the dual membrane, when $\rhd$ is non-trivial the magnetic membrane operator affects the plaquettes cut by the dual membrane if their base-points lie on the direct membrane} \label{modmembranecutplaquettespart1} \end{center} \end{figure*} \hfill The $\rhd$ action on the plaquettes is not the only additional feature of the magnetic membrane operator in this new special case. Changing some of the edge labels by multiplication and plaquette labels by this $\rhd$ action leaves the blob conditions for blobs cut by the dual membrane unsatisfied (recall that the blob condition enforces that the total surface label of the blob is trivial).
To correct this and ensure that the membrane operator commutes with the blob energy terms near the bulk of the membrane, blob ribbon operators (of the type considered in Section \ref{Section_3D_Blob_Fake_Flat}) are added to the membrane operator. For every plaquette that is entirely on the direct membrane (not cut by the dual membrane, but instead lying flat on the direct membrane), we have one such blob ribbon operator associated to that plaquette. We call the associated plaquette the base plaquette for that blob ribbon operator. The blob ribbon operators all start at the same privileged blob, which we call blob 0 and must define when specifying the magnetic membrane operator. The blob ribbon operators end at the blob that is connected to the base plaquette and cut by the dual membrane, as shown in Figure \ref{modmembrane}. An example of the path of the blob ribbon operators is shown in Figure \ref{blobsonmem}. The label of this blob ribbon operator, for a base plaquette $b$ on the direct membrane and with orientation away from the dual membrane (downwards in Figure \ref{modmembrane}, as shown in Figure \ref{modmembraneorientation}), is given by $f(b)=[g(s.p-v_{0}(b))\rhd e_b] [(h^{-1}g(s.p-v_{0}(b)))\rhd e_b^{-1}]$, where $e_b$ is the label of the base plaquette and $v_0(b)$ is the base-point of the plaquette. If the plaquette has the opposite orientation, we must invert the label $e_b$ in this expression. After incorporating this additional action for the magnetic membrane operator, we write the total magnetic membrane operator (denoted by $C^h_T(m)$, where $T$ indicates that it is the total operator) as \begin{equation} C^h_T(m)=C^h_{\rhd}(m) \prod_{\text{plaquette }b \in m} B^{f(b)}(\text{blob }0 \rightarrow \text{blob }b), \label{total_magnetic_membrane_operator} \end{equation} where blob $b$ is the blob attached to base plaquette $b$ and cut by the dual membrane (note, however, that the same blob may be attached to multiple base plaquettes). 
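The blob ribbon label $f(b)$ can be sketched in the same illustrative toy crossed module ($E = \mathbb{Z}_3$ additive, $G = S_3$ with the sign action; the helper names are ours, not the text's). A useful check is that when $h$ acts trivially the label reduces to $1_E$, consistent with the $\rhd$ trivial case, in which no blob ribbon operators were needed:

```python
# Sketch of the blob ribbon label f(b) = [g |> e_b][(h^{-1} g) |> e_b^{-1}]
# in a toy crossed module: E = Z_3 (additive), G = S_3, g |> e = sign(g) e.
# Here g stands in for the path label g(s.p - v_0(b)). Names illustrative.

def compose(g, h):
    """Composition of permutations: (g o h)(i) = g[h[i]]."""
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    """Inverse permutation."""
    inv = [0, 0, 0]
    for i, image in enumerate(g):
        inv[image] = i
    return tuple(inv)

def sign(g):
    """Sign of a permutation of {0,1,2}."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if g[i] > g[j]:
                s = -s
    return s

def act(g, e):
    """g |> e = +e or -e according to the sign of g."""
    return e % 3 if sign(g) == 1 else (-e) % 3

def blob_label(h, g_path, e_b):
    """f(b) = [g_path |> e_b] + [(h^{-1} g_path) |> (-e_b)], additively."""
    return (act(g_path, e_b) + act(compose(inverse(h), g_path), -e_b)) % 3
```

When $h$ is even (so that $h \rhd$ is trivial in this toy model), the two factors cancel and $f(b) = 0 = 1_E$: no compensating blob ribbon operators are required, as in the $\rhd$ trivial case.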
In Equation \ref{total_magnetic_membrane_operator}, $C^h_{\rhd}(m)$ performs the action of the membrane operator on the edges and the $\rhd$ action on the plaquettes, while the $B^{f(b)}(\text{blob }0 \rightarrow \text{blob }b)$ operators are the added blob ribbon operators (see Section \ref{Section_3D_Blob_Fake_Flat} for a description of blob ribbon operators). \begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{modified_membrane_blob_arrows_alt.png} \put(8,5){start-point} \put(0,69){cut edges} \put(20,9){example path} \put(73,19){plaquette $b$} \put(30,79){blob associated to plaquette $b$} \put(48,12){base-point of $b$} \end{overpic} \caption{The membrane and relevant features. The direct membrane is shown in pale blue. The yellow sphere in the corner of it is the start-point ($s.p$) of the membrane. The red square is a plaquette on the direct membrane, which is based at the orange sphere, with the green cube as the associated blob. The path from the start-point to the plaquette is also shown in orange.} \label{modmembrane} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{modified_membrane_orientation_alt.png} \put(13,90){cut edges} \put(18,50){ \parbox{1.5cm}{plaquette circulation}} \put(50,20){ plaquette orientation} \end{overpic} \caption{We consider the case where the plaquettes on the membrane point downwards, away from the cut edges. To obtain the case where some of the plaquettes point upwards, we must invert the labels of those plaquettes. 
Note that the orientation of the plaquette is related to the circulation by the right-hand rule.} \label{modmembraneorientation} \end{center} \end{figure} \begin{figure}[h] \begin{center} \begin{overpic}[width=0.75\linewidth]{modified_membrane_blob_arrows_2_image.png} \put(9,9){$s.p$} \put(4,63){blob 0} \put(18,2){\parbox{3cm}{\raggedright direct path for\\ blob ribbon}} \put(10,75){dual path for blob ribbon} \put(65,7){example plaquette $p$} \put(34,16){base-point of $p$} \put(60,70){blob $p$} \end{overpic} \caption{When we define the magnetic membrane operator $C^h_T(m)$, we must include blob ribbon operators. There is one blob ribbon operator per plaquette on the direct membrane (large blue surface here). Here we show an example, corresponding to the plaquette $p$ (red square). The dual path for the blob ribbon operator runs from the privileged blob 0 to the blob, blob $p$, which is attached to the plaquette $p$ and cut by the dual membrane (not shown for clarity, but it would be above the direct membrane and bisect the vertical edges). The direct path for the ribbon operator runs from the start-point of the membrane, $s.p$, to the base-point of plaquette $p$.} \label{blobsonmem} \end{center} \end{figure} \hfill Even with these modifications, the magnetic membrane operator still excites more energy terms than in the $\rhd$ trivial case (i.e. more than just the boundary plaquettes and potentially the start-point vertex). Firstly, the privileged blob, blob 0, is not generally left in an energy eigenstate. This is because the blob ribbon operators that we added to the magnetic membrane all originate in this blob and so change the surface label of the blob, from $1_E$ in the ground state to $[h \rhd \hat{e}(m)^{-1}] \hat{e}(m)$, where $\hat{e}(m)$ is the total surface label of the direct membrane. $\hat{e}(m)$ is an operator, which means that blob 0 is not generally left in an energy eigenstate. 
Secondly, the edges around the boundary of the direct membrane are potentially excited. This is because the labels of the added blob ribbon operators depend on the labels of the plaquettes on our direct membrane. Edges in the bulk of the membrane are attached to two plaquettes on the membrane and so the edge transform affects the blob ribbon operators associated with both plaquettes. These effects on the labels of the two ribbon operators (together with a contribution from the edge transform on the plaquettes cut by the dual membrane) cancel out, so that the edge transform commutes with the membrane operator, as we show in Section \ref{Section_Magnetic_Tri_Nontrivial_Commutation}. On the other hand, edges on the boundary of the membrane are only attached to one plaquette on the membrane, so there is no such cancellation. These boundary edges are therefore not generally left in an energy eigenstate. \subsection{Condensation and Confinement} \label{Section_condensation_confinement_partial_central} In the previously considered $\rhd$ trivial (see Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}) and fake-flat cases (see Section \ref{Section_3D_MO_Fake_Flat}), we saw that many of the excitations are confined, meaning that it costs energy to separate a pair of excitations, in addition to the energy required to produce the pair. We also found that other particle types are condensed, meaning that they can be produced by local operators (local to the excitation, in the sense discussed in Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}) and so carry trivial topological charge. The pattern of condensation and confinement in the $\partial \rightarrow$ Centre($G$) case is the same as in the $\rhd$ trivial case discussed in Section \ref{Section_3D_Condensation_Confinement_Tri_Trivial}. 
The blob excitations with label not in the kernel of $\partial$ are confined, as are the electric excitations labelled by irreps of $G$ that have non-trivial restriction to the subgroup $\partial(E)$ of $G$. On the other hand, the condensed excitations are the magnetic excitations with label in the image of $\partial$ and the $E$-valued loops that are labelled by irreps of $E$ which are trivial on the kernel of $\partial$. These properties, along with the other properties of the excitations in this case, are summarised in Figure \ref{Excitation_summary}. \begin{figure*}[ht] \begin{center} \begin{overpic}[width=0.8\linewidth]{braiding_summary_3_image.png} \put(20,63){\textbf{Electric}} \put(70,63){\textbf{Magnetic}} \put(20,28){\textbf{Blob}} \put(68,28){\textbf{$E$-valued loop}} \put(0,40){\parbox{6cm}{\raggedright \begin{itemize} \item Point-like \item Labelled by irreps of $G$ \item Internal space described by matrix indices \item Confined if non-trivial restriction of irrep to $\partial(E)$ \end{itemize}}} \put(58,40){\parbox{6cm}{\raggedright \begin{itemize} \item Loop-like \item Labelled by conjugacy classes of $G$ \item Internal space within conjugacy class \item Condensed if conjugacy class in $\partial(E)$ \end{itemize}}} \put(0,5){\parbox{6cm}{\raggedright \begin{itemize} \item Point-like \item Labelled by $\rhd$-classes of $E$ \item Internal space within class \item Confined if class not in ker($\partial$) \end{itemize} }} \put(58,5){\parbox{6cm}{\raggedright \begin{itemize} \item Loop-like \item Labelled by $\rhd$-Rep classes of irreps of $E$ \item Internal space within class \item Condensed if non-trivial restriction to ker($\partial$) \end{itemize}}} \end{overpic} \caption{A summary of the excitations in the $E$ Abelian, $\partial \rightarrow \text{centre}(G)$ case} \label{Excitation_summary} \end{center} \end{figure*} \section{Braiding in the Case Where $\partial \rightarrow$ Centre($G$) and $E$ Is Abelian} \label{Section_3D_Braiding_Central} Now that 
we have described the membrane and ribbon operators that produce our excitations, we can consider the braiding relations of these excitations. Any braiding not involving the magnetic excitations is the same as in the fake-flat case described in Section \ref{Section_3D_Braiding_Fake_Flat}. Namely, there is non-trivial braiding between the blob excitations and the $E$-valued loops, with the same braiding relation when the start-points of the operators coincide, resulting in the accumulation of a phase. The result is a phase, rather than the more general transformation given in Equation \ref{Equation_loop_blob_braiding_fake_flat} for the case considered in Section \ref{Section_3D_Loop_Blob_Braiding_Fake_Flat}, because the irreps of $E$ are 1D when $E$ is Abelian. \hfill Unlike for the fake-flat case, we can find the magnetic excitations and so describe their braiding relations. However, rather than using the magnetic membrane operator directly, it is convenient when considering braiding to combine the magnetic membrane operator with an $E$-valued membrane operator. We multiply the magnetic membrane operator by an $E$-valued membrane operator such as $\delta(e_m,\hat{e}(m))$, acting before the magnetic membrane operator. That is, we construct membrane operators of the form $C^h_T(m)\delta(e_m,\hat{e}(m))$, which we denote by $C^{h,e_m}_T(m)$. We note that combining the magnetic membrane operator with this $E$-valued membrane operator in this way does not excite regions of the lattice not already excited by the magnetic membrane operator, because both membrane operators only cause excitations near the boundary of the membrane, and possibly at blob 0 and the start-point of the membrane. We will shortly explain why we perform this combination of membrane operators (in essence, it gives the loop-like excitation a well-defined 2-flux), but before we discuss this we shall consider the combination of the membrane operators in more detail.
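The role of the $\delta(e_m,\hat{e}(m))$ factor can be sketched as a projector onto states of definite surface label. The following toy model (with illustrative names of our own; states are represented as dictionaries from surface-label values to amplitudes) checks that these projectors are idempotent, mutually orthogonal, and complete:

```python
# Sketch: delta(e_m, e-hat(m)) as a projector onto the component of a state
# with surface label e_m, for a toy Abelian group E = Z_3. A state is
# modelled as a dict {surface label: amplitude}. Names are illustrative.

E = [0, 1, 2]   # toy Abelian group E = Z_3

def delta_project(e_m, state):
    """Apply delta(e_m, e-hat(m)): keep only terms with surface label e_m."""
    return {e: amp for e, amp in state.items() if e == e_m}

def add_states(states):
    """Superpose a list of states, dropping terms with zero amplitude."""
    total = {}
    for state in states:
        for e, amp in state.items():
            total[e] = total.get(e, 0) + amp
    return {e: amp for e, amp in total.items() if amp != 0}

state = {0: 0.5, 2: -0.5}   # an arbitrary superposition of surface labels
projections = [delta_project(e_m, state) for e_m in E]
```

Completeness of the projectors ($\sum_{e_m} \delta(e_m,\hat{e}(m)) = 1$) is what later lets us recover the bare magnetic membrane operator by summing the combined operators over $e_m$.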
\hfill In order to combine the magnetic and $E$-valued membrane operators, there are some details that we must specify. The first of these is the relative orientation of the two membrane operators. We take the orientation of the $E$-valued membrane operator to point away from the dual membrane of the magnetic membrane operator. The second detail is more subtle. In addition to combining this $E$-valued membrane operator with the magnetic membrane operator, we move blob 0 and the start-point of the membrane operator so that they are displaced slightly away from the membrane itself, as shown in Figure \ref{higher_flux_displacement_main_text}. We do this because the $E$-valued membrane may cause an excitation at the start-point, which would prevent us from using the topological nature of the magnetic membrane operator to deform the membrane. It is not necessary to move blob 0 in this way, but it will be convenient when considering topological charge to have a clear separation between the point-like excitations (blob 0 and the start-point) and the loop-like excitation at the boundary of the membrane. A third detail is our convention for the overall orientation of the total membrane operator. Because we have displaced the start-point away from the membrane in a particular direction, it is sensible to define the orientation of the membrane to be consistent with this displacement. That is, we imagine that the loop excitation is nucleated at the start-point and moves away from the start-point along the membrane. Therefore, the loop excitation would be oriented downwards in Figure \ref{higher_flux_displacement_main_text}, matching the orientation of the $E$-valued membrane operator.
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.7\linewidth]{higher_flux_displacement_2.png} \put(0,92){blob 0} \put(1,80){$s.p$} \put(28,50){\Huge $\downarrow$} \put(31,53){displace} \put(0,42){blob 0} \put(1,30){$s.p$} \end{overpic} \caption{Rather than place blob 0 and the start-point (represented by the yellow cube and sphere respectively) of the membrane operator on the direct membrane (green) itself, as in the upper image, we displace them away from the membrane as shown in the lower image. Then blob 0 and the start-point are on the other side of the edges cut by the dual membrane (where the edges are represented by the green cylinders). This allows us to deform the membrane away from the start-point and blob 0 (downwards in the figure) using the topological property of the magnetic and $E$-valued membrane operators which make up the higher-flux membrane operator.} \label{higher_flux_displacement_main_text} \end{center} \end{figure} \hfill Having considered these details about the combined membrane operator, we now explain why this combination was useful. As we mentioned in Section \ref{Section_3D_MO_Central}, the action of the magnetic membrane operator generally causes the privileged blob, blob 0, of the membrane to acquire a non-trivial 2-flux (non-trivial surface label). However, this surface label is given in terms of an operator (the surface label $\hat{e}(m)$ of the membrane itself) and so is not well-defined. Including the $E$-valued membrane operator ensures that the 2-flux of the privileged blob 0 after the action of the magnetic membrane operator is well-defined. Giving blob 0 a definite 2-flux is significant because we expect the loop-like excitation to also carry a non-trivial 2-flux to balance the 2-flux of blob 0, and we expect the value of this 2-flux to be important in braiding relations. 
As we show in Section \ref{Section_braiding_higher_flux} of the Supplementary Material, the surface label of blob 0 after the action of the combined membrane operator (with displaced start-point, which does affect the label) is given by $e_m^{-1} [h^{-1} \rhd e_m]$. As shown in the same section (see Equation \ref{Equation_2_flux_higher_flux_excitation_appendix}), the 2-flux of the loop excitation labelled by $h$ and $e_m$ is given by \begin{equation} \tilde{e}_m = e_m [h^{-1} \rhd e_m^{-1}], \end{equation} which is the inverse of the 2-flux carried by blob 0. Because the excitations produced by this combined membrane operator carry both ordinary magnetic flux and this 2-flux, we call the combined membrane operator a ``higher-flux membrane operator'' and call the excitations higher-flux excitations. If we wish to instead consider the original magnetic membrane operator (without the attached $E$-valued membrane operator), we can simply sum over each value of $e_m$, because this gives us a complete sum of projectors $\delta(e_m,\hat{e}(m))$. That is, $\sum_{e_m \in E} \delta(e_m,\hat{e}(m)) =1$, and so \begin{equation} \sum_{e_m \in E} C^{h,e_m}_T(m) = C^h_T(m).\label{Equation_higher_flux_to_magnetic} \end{equation} Having constructed this higher-flux membrane operator, we can now use it to find the braiding relations involving the higher-flux excitations. Because the action of the higher-flux membrane operator on the edges is the same as that of the magnetic membrane operator from the $\rhd$ trivial case, the braiding relation between the magnetic and electric excitations is the same as in that case (which is described in Section \ref{Section_3D_Braiding_Tri_Trivial}). However, as we will see shortly, the braiding between the higher-flux loop excitation and the other excitations is significantly altered.
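In the illustrative toy crossed module used earlier ($E = \mathbb{Z}_3$ additive, $G = S_3$ with the sign action; a choice of ours, not of the text), one can check directly that the 2-flux of the loop excitation inverts the surface label of blob 0:

```python
# Check, in a toy crossed module (E = Z_3 additive, G = S_3, g |> e =
# sign(g) e), that the loop 2-flux e_m [h^{-1} |> e_m^{-1}] is the inverse
# of the blob 0 surface label e_m^{-1} [h^{-1} |> e_m]. Names illustrative.

def sign(g):
    """Sign of a permutation of {0,1,2}."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if g[i] > g[j]:
                s = -s
    return s

def act(g, e):
    """g |> e in the toy model: odd permutations invert E = Z_3."""
    return e % 3 if sign(g) == 1 else (-e) % 3

def inverse(g):
    """Inverse permutation."""
    inv = [0, 0, 0]
    for i, image in enumerate(g):
        inv[image] = i
    return tuple(inv)

def blob0_flux(h, e_m):
    """e_m^{-1} [h^{-1} |> e_m], written additively."""
    return (-e_m + act(inverse(h), e_m)) % 3

def loop_flux(h, e_m):
    """e_m [h^{-1} |> e_m^{-1}], written additively."""
    return (e_m + act(inverse(h), -e_m)) % 3
```

Because $E$ is Abelian and $h^{-1} \rhd$ is an automorphism, the two expressions cancel for every $h$ and $e_m$, so the total 2-flux of blob 0 and the loop excitation is trivial, as expected for a pair created from the ground state.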
In particular, because the higher-flux excitations can carry a non-trivial 2-flux, we expect non-trivial braiding relations with the $E$-valued loops, which measure 2-flux. \subsection{Braiding of the Higher-Flux Excitations With Blob Excitations} \label{Section_magnetic_blob_braiding} The first braiding relation we examine is between the higher-flux excitations and the blob excitations. We consider a blob ribbon operator $B^e(t)$, applied on a ribbon $t$, piercing the membrane of a higher-flux membrane operator $C^{h,e_m}_T(m)$, applied on a membrane $m$. The ribbon $t$ intersects the membrane $m$ at a plaquette $q$, as shown in Figure \ref{blob_operator_through_magnetic_1}. Note that the orientation of the operators is significant. We first look at the case where the blob ribbon operator pierces the direct membrane of the magnetic membrane operator before the dual membrane. If the membrane is oriented downwards, as in Figure \ref{blob_operator_through_magnetic_1} (note that the point-like excitation is above the loop) then the ribbon is oriented upwards (at the point of intersection at least). \hfill \begin{figure}[h] \begin{center} \begin{overpic}[width=0.75\linewidth]{blob_ribbon_through_higher_flux_detail_0_arrows_displaced_image.png} \put(-8,65){ $s.p(m)$} \put(18,12){ $s.p(t)$} \put(85,48){ membrane $m$} \put(13,79){ blob 0 of $m$} \put(13,26){\parbox{1.7cm}{ direct path of $t$}} \put(80,18){dual path of $t$} \put(70,72){\parbox{2.2cm}{ intersection at plaquette $q$}} \end{overpic} \caption{We consider a blob ribbon operator $B^e(t)$ (between the two red blobs) that passes through a higher-flux membrane operator, $C^{h,e_m}_T(m)$ (where $m$ is shown in green). The ribbon $t$ pierces the membrane $m$ through a plaquette $q$ (blue square).} \label{blob_operator_through_magnetic_1} \end{center} \end{figure} We have seen in previous cases that braiding is frequently well defined only when the start-points of the membrane and ribbon operators match. 
However, we shall first examine the general case where the start-points are arbitrary. As usual, we can relate the braiding relation to a commutation relation between the two operators. We compare the case where the magnetic membrane is produced first, and the blob excitation is then moved through it, to the reverse case. We find that, as demonstrated in Section \ref{Section_braiding_higher_flux_blob} in the Supplementary Material, \begin{align} B^e&(t)C^{h,e_m}_T(m)\ket{GS} \notag\\ &= C^{h,e_m[\hat{g}(s.p(m)-s.p(t)) \rhd e] }_T(m) B^e(t_1') \notag\\ & \hspace{0.5cm} B^{(\hat{g}(s.p(t)-s.p(m)) h^{-1} \hat{g}(s.p(t)-s.p(m))^{-1}) \rhd e}(t_2') \ket{GS}. \label{higher_flux_blob_commutation_1} \end{align} In this expression, we note that the original ribbon operator is split into two parts, on ribbons $t_1'$ and $t_2'$, which transform differently under the braiding. Here $t_1'$ starts at the original origin of ribbon $t$ and ends at blob 0 of the membrane $m$ (corresponding to the part of the ribbon before the intersection with the membrane, except that it is diverted to end at blob 0 of $m$), while $t_2'$ starts at blob 0 and ends at the original end of ribbon $t$ (corresponding to the part of the ribbon after the intersection), as shown in Figure \ref{blob_ops_after_commutation}. We therefore see that, under commutation, the ribbon $t$ is diverted to pass through blob 0 of the membrane, and after passing through this blob it changes label from $e$ to $(\hat{g}(s.p(t)-s.p(m)) h^{-1} \hat{g}(s.p(t)-s.p(m))^{-1}) \rhd e$.
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.8\linewidth]{blob_through_mod_mem_detail_2.png} \put(61,10){$B^e(t_1')$} \put(21,74){$B^{(g(s.p(m)-s.p(t))^{-1}h^{-1} g(s.p(m)-s.p(t))) \rhd e}(t_2')$} \put(4,50){blob 0} \end{overpic} \caption{The blob ribbon operators after commutation} \label{blob_ops_after_commutation} \end{center} \end{figure} \hfill The fact that the label of the blob ribbon operator before the intersection is unaffected by the commutation relation is perhaps unsurprising, because this part of the operator corresponds to the motion of the blob excitation before it braids with the magnetic excitation and so before it has undergone its transformation. This can be seen from the fact that the blob ribbon operator actually creates two excitations, and the one that is not moved should not be affected by the other one moving through the loop excitation. Another thing to note is that, as long as the blob ribbon operator is not confined, we can deform the ribbons of the blob ribbon operators without changing their action, provided that we keep the end-points fixed. Because of this, it does not matter at which plaquette $q$ our magnetic membrane and blob ribbon operators intersect. \hfill In addition to the transformation undergone by the blob ribbon operator, the $E$ label of the higher-flux excitation changes from $e_m$ to $e_m [\hat{g}(s.p(m)-s.p(t))\rhd e]$. This transformation of the $E$-label of the membrane operator is simply the standard braiding relation between a blob excitation and an $E$-valued membrane, as we saw in Section \ref{Section_3D_Braiding_Fake_Flat}. We note that this result, and the other results given in this section, are proven fully in Section \ref{Section_braiding_higher_flux_blob} in the Supplementary Material.
\hfill If we give the magnetic membrane operator and the blob ribbon operator the same start-point, the braiding relation that we explained above simplifies and we are able to remove the operator $\hat{g}(s.p(m)-s.p(t))$ from the relation. We move the start-points together, without moving them through the higher-flux membrane (which would alter the commutation relations found so far). In this case the blob label goes from $e$ before the braiding to $h^{-1} \rhd e$ afterwards (at least in the part after the intersection) and the $E$ label of the higher-flux membrane operator goes from $e_m$ beforehand to $e_m e$. If we had used the opposite orientation, obtained by reversing the direction of the blob ribbon operator, then the blob label $e$ instead becomes $h \rhd e$ and the membrane label $e_m$ becomes $e_m [h \rhd e^{-1}]$. Again, this result is proven in Section \ref{Section_braiding_higher_flux_blob} of the Supplementary Material. \hfill If we want to consider the braiding of the original magnetic excitation, produced by the membrane operator $C^h_T(m) = \sum_{e_m \in E} C^{h,e_m}_T(m)$, we simply need to sum over the $E$-valued label $e_m$ of the higher-flux membrane operator.
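As a consistency sketch of the two same-start-point rules just stated (again in our illustrative toy crossed module, $E = \mathbb{Z}_3$ additive and $G = S_3$ with the sign action), the two orientations should undo one another: pushing the blob excitation through the loop and then pulling it back should restore both labels:

```python
# Consistency sketch for the same-start-point braiding rules in a toy
# crossed module (E = Z_3 additive, G = S_3, g |> e = sign(g) e).
# Forward braiding:  e -> h^{-1} |> e,  e_m -> e_m e.
# Reversed ribbon:   e -> h |> e,       e_m -> e_m [h |> e^{-1}].
# All names are illustrative.

def sign(g):
    """Sign of a permutation of {0,1,2}."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if g[i] > g[j]:
                s = -s
    return s

def act(g, e):
    """g |> e in the toy model: odd permutations invert E = Z_3."""
    return e % 3 if sign(g) == 1 else (-e) % 3

def inverse(g):
    """Inverse permutation."""
    inv = [0, 0, 0]
    for i, image in enumerate(g):
        inv[image] = i
    return tuple(inv)

def braid_forward(h, e, e_m):
    """Blob and membrane labels after the blob passes through the loop."""
    return act(inverse(h), e), (e_m + e) % 3

def braid_reversed(h, e, e_m):
    """The same braiding with the ribbon orientation reversed."""
    return act(h, e), (e_m - act(h, e)) % 3
```

Composing the two maps returns the original pair $(e, e_m)$ for every $h$, as expected if reversing the ribbon direction undoes the braiding.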
Then we have, from Equation \ref{higher_flux_blob_commutation_1}, \begin{align} B^e&(t)C^{h}_T(m)\ket{GS} \notag\\ &= \sum_{e_m \in E} B^e(t) C^{h, e_m}_T(m)\ket{GS} \notag\\ &= \sum_{e_m \in E} C^{h,e_m[\hat{g}(s.p(m)-s.p(t)) \rhd e] }_T(m) B^e(t_1') \notag \\ & \hspace{0.5cm} B^{(\hat{g}(s.p(t)-s.p(m)) h^{-1}\hat{g}(s.p(t)-s.p(m))^{-1}) \rhd e}(t_2') \ket{GS} \notag\\ &= \sum_{e'_m =e_m [\hat{g}(s.p(m)-s.p(t))\rhd e]} C^{h,e'_m}_T(m)B^e(t_1') \notag \\ & \hspace{0.5cm} B^{(\hat{g}(s.p(t)-s.p(m)) h^{-1}\hat{g}(s.p(t)-s.p(m))^{-1}) \rhd e}(t_2') \ket{GS} \notag\\ &=C^h_T(m)B^e(t_1') \notag \\ & \hspace{0.5cm} B^{(\hat{g}(s.p(t)-s.p(m)) h^{-1}\hat{g}(s.p(t)-s.p(m))^{-1}) \rhd e}(t_2') \ket{GS}, \label{higher_flux_blob_commutation_2} \end{align} from which we see that the magnetic excitation is unchanged by the braiding, whereas the blob excitation is affected in the same way as in the braiding with the higher-flux excitation. \subsection{Braiding With Other Higher-Flux Excitations} \label{Section_higher_flux_higher_flux_braiding} Next, we consider the braiding between two higher-flux excitations. As we described in Section \ref{Section_Flux_Flux_Braiding_Tri_Trivial_Abelian}, there are two kinds of braiding for loops. The first, which we call permutation, involves moving two loops around each other without passing through one another. The other, which we term braiding, involves passing one through the other. As we discussed in Section \ref{Section_Flux_Flux_Braiding_Tri_Trivial_Abelian}, the permutation move is trivial in this model. Therefore, we just consider the braiding move. In this motion, shown in Figure \ref{Braid_move_loops}, one of the magnetic loop excitations (indicated by a small red ring) is moved along the red surface and through another loop (indicated by a large green loop attached to a large green surface). To calculate the braiding relation we apply membrane operators on these surfaces and examine the commutation relations between the membrane operators.
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.75\linewidth]{higher_flux_braid_1_displaced.png} \put(-5,34){$C_T^{h,e_1}(m_1)$} \put(22,75){$C_T^{g,e_2}(m_2)$} \put(89,22){blob 0$(m_2)$} \put(90,28){$s.p(m_2)$} \put(25,10){$s.p(m_1)$} \put(16,4){blob 0$(m_1)$} \end{overpic} \caption{We consider the braiding move where we pull one higher-flux loop excitation (red torus) through another (green torus). This can be implemented using higher-flux membranes applied on the green and red membranes in the figure. If we first apply the membrane operator $C^{h,e_1}_T(m_1)$ on the green membrane and then $C_T^{g,e_2}(m_2)$ on the red membrane, we are considering the case where we first produce the green loop excitation and then move the red one through it. Comparing this to the opposite order of operators gives us the braiding relation. } \label{Braid_move_loops} \end{center} \end{figure} \hfill We define the membrane operators as indicated in Figure \ref{Braid_move_loops}, but then we use the topological nature of the magnetic membrane operators to pull $m_2$ through $m_1$ (as we did in the $\rhd$ trivial case in Section \ref{Section_Flux_Flux_Braiding_Tri_Trivial}), while keeping the start-point and blob 0 fixed.
In Section \ref{Section_braiding_higher_flux_higher_flux} in the Supplementary Material, we show that this leads to the commutation relation \begin{align} C&^{g,e_2}_T(m_2)C^{h,e_1}_T(m_1) \ket{GS} \notag \\ &=C^{h,e_1 \big[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}] [(h_{[2-1]} g^{-1}) \rhd e_2])\big]}_T(m_1) \notag \\ &\hspace{0.5cm} C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2) \notag \\ &\hspace{0.5cm} \bigg(\prod_{\substack{\text{plaquette }\\ p \in m_2}} B^{[h_{[2-1]}^{-1}\rhd e_{p|2} ] [(g^{-1}h_{[2-1]}^{-1})\rhd e_{p|2}^{-1}]}((2)-(1))\notag \\ &\hspace{0.5cm} B^{e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}]}((1)-p) \bigg) \notag \\ &\hspace{0.5cm} \delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2))\ket{GS}, \label{Equation_higher_flux_braiding_main_text} \end{align} where $g((1)-(2))$ is the path element for the path between the two start-points of the membranes and \begin{align} h_{[2-1]}&=g((1)-(2))^{-1}hg((1)-(2)) \notag \\ & =g((2)-(1))hg((2)-(1))^{-1}. \end{align} To simplify the expression, we used $e_{p|2}$ to denote the label of the plaquette $p$ when we move its base-point to the start-point of $m_2$. This quantity is equivalent to $g(s.p(m_2)-v_0(p)) \rhd e_p$. Furthermore, we used $B^{...}((2)-(1))$ to denote a blob ribbon operator that runs from blob 0 of $m_2$ to blob 0 of $m_1$ and $B^{...}((1)-p)$ to denote a blob ribbon operator running from blob 0 of $m_1$ to the blob on $m_2$ that is attached to plaquette $p$. These blob ribbon operators may seem complicated, but the situation is analogous to the braiding of blob ribbon operators with the magnetic membranes. Each of the blob ribbon operators that we added to the magnetic membrane operator has a similar commutation relation with the magnetic membrane operator as an ordinary blob ribbon operator. 
Namely, the blob ribbon operator splits into two parts, one that runs from blob 0 of $m_2$ to blob 0 of $m_1$ and one which runs from blob 0 of $m_1$ to the final destination of the original blob ribbon operator. The only difference from the ordinary blob ribbon operator braiding is that the label of the blob ribbon operator is an operator (depending on the label of a plaquette on $m_2$), which causes an additional apparent change to the label even for the part of the blob ribbon operator before the intersection. However, this does not reflect a real change to the blob ribbon operators before the intersection. This is because we can combine the blob ribbon operators that pass from blob 0 of $m_2$ to blob 0 of $m_1$ into a single blob ribbon operator, and use $\delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2))$ to fix its label in terms of $e_2$ instead of an operator. As shown in Section \ref{Section_braiding_higher_flux_higher_flux} in the Supplementary Material, this gives us \begin{align} C&^{g,e_2}_T(m_2)C^{h,e_1}_T(m_1) \ket{GS} \notag \\ &=C^{h,e_1 \big[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}] [(h_{[2-1]} g^{-1}) \rhd e_2])\big]}_T(m_1) \notag \\ &\hspace{0.5cm} C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2) \notag \\ &\hspace{0.5cm} B^{e_2 [g^{-1}\rhd e_2^{-1}]}((2)-(1))\notag \\ &\hspace{0.5cm} \bigg(\prod_{\substack{\text{plaquette }\\ p \in m_2}} B^{e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}]}((1)-p) \bigg) \notag \\ &\hspace{0.5cm}\delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2))\ket{GS}. \label{Equation_higher_flux_braiding_main_text_3} \end{align} Now the section of blob ribbon operator between blob 0 of each membrane, which is the part of the ribbon operator before the intersection of the membranes, is labelled by $e_2 [g^{-1}\rhd e_2^{-1}]$. This is the same label as it would have if we combined the blob ribbons on these sections in the absence of the second higher flux membrane $C^{h,e_1}_T(m_1)$. 
That is, this part of the blob ribbon operator is unaffected by the braiding, as we may expect given that this section of ribbon operator is the part before the intersection of the membranes. We can contrast this with the sections of the blob ribbon operators after the intersection (which should be affected by the braiding), whose labels change from $$e_{p|2} [g^{-1} \rhd e_{p|2}^{-1}]$$ to $$e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}].$$ We see that the only change is that $g^{-1}$ is replaced by $h_{[2-1]} g^{-1} h_{[2-1]}^{-1}$, or equivalently $g$ is replaced by $h_{[2-1]} g h_{[2-1]}^{-1}$. This matches how the label $g$ of the higher-flux membrane operator transforms under braiding (as we see from the operator $C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2)$ in Equation \ref{Equation_higher_flux_braiding_main_text_3}). That is, the labels of the blob ribbon operators after the intersection (i.e. after braiding) are the labels we expect given the label of the magnetic part of the membrane operator after intersection. \hfill Apart from this splitting of the blob ribbon operators at the intersection of the membranes, we see that the labels of the two membrane operators change (as we mentioned previously for $g$). We have that \begin{align*} h &\rightarrow h,\\ e_1 &\rightarrow e_1 \bigl[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}]\\ & \hspace{1cm} [(h_{[2-1]}g^{-1}) \rhd e_2])\bigr],\\ g &\rightarrow h_{[2-1]}gh_{[2-1]}^{-1},\\ e_2 &\rightarrow h_{[2-1]} \rhd e_2. \end{align*} As usual for our braiding, when the start-points of the operators are not the same, the braiding relations involve path operators, such as $g((1)-(2))$, rather than fixed group elements. When we take the start-points to be the same, these relations simplify to \begin{align} h &\rightarrow h, \notag\\ e_1 &\rightarrow e_1 [h \rhd e_2^{-1}] [(hg^{-1}) \rhd e_2], \notag\\ g &\rightarrow hgh^{-1}, \notag\\ e_2 &\rightarrow h \rhd e_2. \label{Loop_Braiding_Central} \end{align} This removes any operators from the labels, so that we have definite braiding.
Note that the transformation of the 1-flux labels ($h$ and $g$) is the same as for the braiding of two magnetic excitations in the $\rhd$-trivial case, as given in Equation \ref{braid_relation_magnetic_flipped} in Section \ref{Section_Flux_Flux_Braiding_Tri_Trivial} with $k$ replaced by $g$ (we compare to that equation in particular because we have used the same orientation of the loops that was used to derive it). However, the expression for the change of the $E$ labels is not so easy to interpret. It is easier to understand these results if we change variables, from the surface labels of the membranes to the 2-fluxes possessed by the loop-like excitations, as we discussed at the start of Section \ref{Section_3D_Braiding_Central}. The 2-flux of the loop excitation, $\tilde{e}_1$, is related to the $1$-flux label, $h$, and the surface label $e_1$ of the membrane operator by $\tilde{e}_1 = e_1 [h^{-1} \rhd e_1^{-1}]$. Therefore, we define $$\tilde{e}_1=e_1 [h^{-1} \rhd e_1^{-1}],$$ $$\tilde{e}_2=e_2 [g^{-1} \rhd e_2^{-1}].$$ Then, from our braiding relations in Eqs. \ref{Loop_Braiding_Central}, under braiding these 2-fluxes transform according to \begin{align} \tilde{e}_1 &\rightarrow e_1 [h \rhd \tilde{e}_2^{-1}] [h^{-1} \rhd e_1^{-1}]\tilde{e}_2 \notag\\ & = \tilde{e}_1 [h \rhd \tilde{e}_2^{-1}] \tilde{e}_2, \notag\\ \tilde{e}_2 &\rightarrow [h \rhd e_2] [(hg^{-1}h^{-1}) \rhd (h \rhd e_2^{-1})] \notag\\ &= h \rhd (e_2 [g^{-1} \rhd e_2^{-1}]) \notag\\ &= h \rhd \tilde{e}_2. \label{Loop_Braiding_Central_Flux} \end{align} This means that the product of the two fluxes transforms as $$\tilde{e}_1 \tilde{e}_2\rightarrow \tilde{e}_1 [h \rhd \tilde{e}_2^{-1}] \tilde{e}_2 [h \rhd \tilde{e}_2]= \tilde{e}_1 \tilde{e}_2$$ under the braiding, which indicates that the product of these 2-fluxes is conserved.
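The relations in Eqs. \ref{Loop_Braiding_Central} and \ref{Loop_Braiding_Central_Flux} can be checked numerically on a small example. The following sketch is our own illustration: the crossed module (with $G=\mathbb{Z}_2$ acting on $E=\mathbb{Z}_3$ by negation and trivial boundary map $\partial$, so that $E$ is Abelian and $\partial$ maps into the centre of $G$, as Case 2 requires) and all function names are choices made for this check and do not appear in the model's definition. Both groups are written additively.

```python
# Numerical check of the same-start-point loop braiding relations
# (Eqs. Loop_Braiding_Central and Loop_Braiding_Central_Flux) for an
# illustrative crossed module: G = Z_2 acting on E = Z_3 by negation,
# with trivial boundary map.  Group operations are written additively.

G = [0, 1]          # Z_2, addition mod 2; every element is its own inverse
E = [0, 1, 2]       # Z_3, addition mod 3

def act(g, e):
    """The action g |> e: the non-trivial element of Z_2 negates e."""
    return e % 3 if g % 2 == 0 else (-e) % 3

def braid(h, e1, g, e2):
    """Same-start-point braiding of loop (h, e1) past loop (g, e2)."""
    h_new = h
    # e1 -> e1 [h |> e2^{-1}] [(h g^{-1}) |> e2]
    e1_new = (e1 - act(h, e2) + act((h - g) % 2, e2)) % 3
    g_new = (h + g - h) % 2          # h g h^{-1} (trivial here, G Abelian)
    e2_new = act(h, e2)              # e2 -> h |> e2
    return h_new, e1_new, g_new, e2_new

def two_flux(h, e):
    """2-flux of the loop: e [h^{-1} |> e^{-1}]."""
    return (e - act((-h) % 2, e)) % 3

for h in G:
    for g in G:
        for e1 in E:
            for e2 in E:
                hp, e1p, gp, e2p = braid(h, e1, g, e2)
                t1, t2 = two_flux(h, e1), two_flux(g, e2)
                t1p, t2p = two_flux(hp, e1p), two_flux(gp, e2p)
                # Eq. Loop_Braiding_Central_Flux:
                assert t2p == act(h, t2)
                assert t1p == (t1 - act(h, t2) + t2) % 3
                # total 1-flux and 2-flux are conserved:
                assert (hp + gp) % 2 == (h + g) % 2
                assert (t1p + t2p) % 3 == (t1 + t2) % 3
print("all braiding checks passed")
```

The loop over all label assignments confirms both the 2-flux transformation and the conservation of the total 1-flux and 2-flux in this example.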
\hfill Putting this together, we can see that the 1-fluxes and 2-fluxes of the loop-like excitations transform as $$((g,\tilde{e}_2),(h,\tilde{e}_1))\rightarrow ((h,\tilde{e}_1\tilde{e}_2 [h\rhd \tilde{e}_2^{-1}]), (hgh^{-1}, h \rhd \tilde{e}_2))$$ under braiding, where the fact that one loop is moved past the other during the braiding is represented by swapping the order of their labels in the brackets. \hfill We also wish to work out the inverse transformation, which describes the reversed braiding process. Denoting the result of the forwards transformations as primed versions, we have from Eqs. \ref{Loop_Braiding_Central_Flux}: \begin{align*} &\tilde{e}_2'= h \rhd \tilde{e}_2\text{, with } h'=h \implies \tilde{e}_2={h'}^{-1} \rhd \tilde{e}_2'\\ &g'=hgh^{-1} \implies g={h'}^{-1}g'h'\\ &\tilde{e}_1'=\tilde{e}_1 [h \rhd \tilde{e}_2^{-1}] \tilde{e}_2 \implies \tilde{e}_1=\tilde{e}_1'\tilde{e}_2' [{h'}^{-1} \rhd {\tilde{e}_2}^{\prime -1}] \end{align*} So the inverse transformation is \begin{align*} ((h'&,\tilde{e}_1'),(g',\tilde{e}_2')) \rightarrow \\ & (({h'}^{-1}g'h',{h'}^{-1}\rhd \tilde{e}_2'),(h', \tilde{e}_1'\tilde{e}_2' {h'}^{-1}\rhd {\tilde{e}_2}^{ \prime -1})). \end{align*} This matches the braiding proposed in Ref. \cite{Bullivant2018} for higher gauge theory based on discussions of the loop braid group. It is important to note that the braiding depends on the result of fusing the two excitations. Given two loops with 1-flux and 2-flux given by $(h,\tilde{e}_1)$ for the first loop and $(g, \tilde{e}_2)$ for the second loop, there are many possible fusion products. The fact that the products of 1-fluxes $hg \rightarrow hgh^{-1} h =hg$ (swapping the order after braiding to account for the swapping of loop positions) and of 2-fluxes $\tilde{e}_1 \tilde{e}_2$ are conserved indicates that these are the total 1-flux and 2-flux of the combined loops, and so we have obtained the braiding when they fuse to give the labels $(hg, \tilde{e}_1 \tilde{e}_2)$. 
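We can also verify numerically that this inverse transformation undoes the forward braiding. The sketch below reuses the illustrative toy crossed module from before ($G=\mathbb{Z}_2$, $E=\mathbb{Z}_3$ with negation action and trivial $\partial$, written additively; the function names are our own) and checks the round trip in both directions for every assignment of 1-flux and 2-flux labels.

```python
# Round-trip check of the inverse braiding transformation for the toy
# crossed module G = Z_2 acting on E = Z_3 by negation (illustrative
# choice, not from the model).  Labels are pairs (1-flux, 2-flux), and a
# state is a pair of such pairs, ordered as in the text.

def act(g, e):
    """g |> e: the non-trivial element of Z_2 negates e (mod 3)."""
    return e % 3 if g % 2 == 0 else (-e) % 3

def forward(labels):
    """((g, t2), (h, t1)) -> ((h, t1 t2 [h |> t2^{-1}]), (h g h^{-1}, h |> t2))."""
    (g, t2), (h, t1) = labels
    return ((h, (t1 + t2 - act(h, t2)) % 3),
            ((h + g - h) % 2, act(h, t2)))

def inverse(labels):
    """((h', t1'), (g', t2')) ->
    ((h'^{-1} g' h', h'^{-1} |> t2'), (h', t1' t2' [h'^{-1} |> t2'^{-1}]))."""
    (h, t1), (g, t2) = labels
    return (((-h + g + h) % 2, act((-h) % 2, t2)),
            (h, (t1 + t2 - act((-h) % 2, t2)) % 3))

for h in range(2):
    for g in range(2):
        for t1 in range(3):
            for t2 in range(3):
                start = ((g, t2), (h, t1))
                assert inverse(forward(start)) == start
                assert forward(inverse(start)) == start
print("inverse braiding round-trip passed")
```

Since the check passes for every label assignment, the proposed inverse is a two-sided inverse of the forward braiding on this example.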
We could equally have considered the braiding in a different situation, such as when the start-points of the two operators are in different positions, for which the loops $(h,\tilde{e}_1)$ and $(g, \tilde{e}_2)$ fuse to give different total quantum numbers than $(hg, \tilde{e}_1 \tilde{e}_2)$. \hfill As we did when considering the braiding of the higher-flux with the blob excitation, we can also consider the braiding of our original magnetic excitation, before we pinned an additional $E$-valued loop to it. As described by Equation \ref{Equation_higher_flux_to_magnetic}, we can obtain the original magnetic membrane operators from the higher-flux membrane operators by summing over all possible values of the $E$-valued label. That is, we consider \begin{align*} C^g_T&(m_2)C^h_T(m_1) \ket{GS}\\ &= C^g_T(m_2)\sum_{e_2 \in E} \delta(e_2, \hat{e}(m_2))\\ & \hspace{0.5cm} C^h_T(m_1) \sum_{e_1 \in E} \delta(e_1, \hat{e}(m_1))\ket{GS}\\ &=\sum_{e_2 \in E}\sum_{e_1 \in E}C^{g,e_2}_T(m_2)C^{h,e_1}_T(m_1) \ket{GS}. \end{align*} Using Equation \ref{Equation_higher_flux_braiding_main_text}, we see that this gives us \begin{align*} &C^g_T(m_2)C^h_T(m_1) \ket{GS}\\ &=\sum_{e_1 \in E} \sum_{e_2 \in E} \\ & \hspace{0.5cm} C^{h,e_1 \big[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}] [(h_{[2-1]} g^{-1}) \rhd e_2])\big]}_T(m_1) \notag \\ &\hspace{0.5cm} C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2) \notag \\ &\hspace{0.5cm} \bigg(\prod_{\substack{\text{plaquette }\\ p \in m_2}} B^{[h_{[2-1]}^{-1}\rhd e_{p|2} ] [(g^{-1}h_{[2-1]}^{-1})\rhd e_{p|2}^{-1}]}((2)-(1))\notag \\ &\hspace{0.5cm} B^{e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}]}((1)-p) \bigg) \notag \\ &\hspace{0.5cm} \delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2))\ket{GS}.
\end{align*} Then we have \begin{align*} \sum_{e_1 \in E}C&^{h,e_1 \big[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}] [(h_{[2-1]} g^{-1}) \rhd e_2])\big]}_T(m_1) \\ &=C^h_T(m_1), \end{align*} because as $e_1$ runs over $E$, the full label $e_1 \big[g((1)-(2)) \rhd ([h_{[2-1]} \rhd e_2^{-1}] [(h_{[2-1]} g^{-1}) \rhd e_2])\big]$ also runs over each element of $E$ exactly once, regardless of the actual value of $e_2$. This gives us \begin{align*} C^g_T&(m_2)C^h_T(m_1) \ket{GS}\\ &=\sum_{e_2 \in E} C^{h}_T(m_1) C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2) \notag \\ &\hspace{0.5cm} \bigg( \prod_{\substack{\text{plaquette }\\ p \in m_2}} B^{[h_{[2-1]}^{-1}\rhd e_{p|2} ] [(g^{-1}h_{[2-1]}^{-1})\rhd e_{p|2}^{-1}]}((2)-(1))\notag \\ &\hspace{0.5cm} B^{e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}]}((1)-p) \bigg) \notag \\ &\hspace{0.5cm} \delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2))\ket{GS}. \end{align*} We can similarly use the sum over $e_2 \in E$ to remove the other Kronecker delta. We have $$\sum_{e_2 \in E}\delta(h_{[2-1]} \rhd e_2, \hat{e}(m_2)) = 1$$ because $h_{[2-1]} \rhd e_2$ runs over all $e \in E$. This gives us the final result \begin{align*} C^g_T&(m_2)C^h_T(m_1) \ket{GS}\\ &= C^{h}_T(m_1) C_\rhd^{h_{[2-1]}gh_{[2-1]}^{-1}}(m_2) \notag \\ &\hspace{0.5cm} \bigg( \prod_{\substack{\text{plaquette }\\ p \in m_2}} B^{[h_{[2-1]}^{-1}\rhd e_{p|2} ] [(g^{-1}h_{[2-1]}^{-1})\rhd e_{p|2}^{-1}]}((2)-(1))\notag \\ &\hspace{0.5cm} B^{e_{p|2} [(h_{[2-1]} g^{-1} h_{[2-1]}^{-1}) \rhd e_{p|2}^{-1}]}((1)-p) \bigg) \ket{GS}. \end{align*} Then looking at the effect of braiding on the $G$-valued label, we see that the result is simply conjugation of one of the magnetic flux labels by the other. The labels of the blob ribbons corresponding to $m_2$ are also changed before and after the intersection of the two membranes.
Just as we discussed for the higher-flux membrane operators earlier in this section, the label after the intersection reflects the change to the label $g$ of the membrane operator applied on $m_2$, while the change to the label before the intersection is only an apparent change due to the operator label. \subsection{Braiding With $E$-Valued Loops} \label{Section_mod_mem_E_loop_braiding} The final braiding relation to consider is the braiding between these higher-flux excitations and the $E$-valued loops. We can obtain this relation from the calculation for two higher-flux membrane operators in the previous section, because the $E$-valued loops are simply higher-flux excitations with trivial $G$-label. We therefore simply need to take the special case of that calculation when one of the $G$ elements is $1_G$. Rather than repeating the full equations, we will only present the results in the same-start-point braiding case. \hfill First we consider the case where the red excitation shown in Figure \ref{Braid_move_loops} is a higher-flux excitation, produced by a membrane operator $C^{g, e_{\text{mag}}}_T(m_2)$, while the green excitation is a pure $E$-valued loop, labelled by $e_m$. In this case the label of the $E$-valued loop transforms as $e_m \rightarrow e_m e_{\text{mag}}^{-1} [g^{-1} \rhd e_{\text{mag}}]$ under the braiding, while the labels of the magnetic membrane operator are unaffected by the braiding. When we change to consider our irrep basis for the $E$-valued loops (given in Equation \ref{Equation_E_membrane_irrep_Abelian}) this transformation gives us a phase of $\gamma( e_{\text{mag}}^{-1} [g^{-1} \rhd e_{\text{mag}}])^{-1}$, where $\gamma$ is the irrep of $E$ labelling the $E$-valued loop. Note that if we consider the ordinary magnetic excitation by averaging over $ e_{\text{mag}}$, the braiding relation is different for each value of $ e_{\text{mag}}$, so the different terms in the sum accumulate different transformations. 
This is part of the reason why it was necessary to consider the higher-flux membrane instead of the magnetic one in the first place. The result of this different transformation for the different $E$-labels is that, even if the excitation is initially an ordinary magnetic excitation, with an equal superposition of the different $E$-labels, it will not necessarily remain so after braiding, instead becoming an uneven superposition of the different higher-flux excitations, with labels coupled to the state of the $E$-valued loop. \hfill Now we consider the opposite case where the $E$-valued loop passes through the magnetic one. In this case, the red excitation from Figure \ref{Braid_move_loops} is a pure $E$-valued loop excitation produced by the membrane operator $\delta(\hat{e}(m_2),e_m)$, while the green excitation is a higher-flux loop excitation produced by the membrane operator $C^{h,e_1}_T(m_1)$. In this case the label $e_m$ of the $E$-valued loop transforms as $e_m \rightarrow h \rhd e_m$ under the braiding, while the labels of the higher-flux operator are again unaffected by the braiding move. In our irrep basis for the $E$-valued membrane operators, this transformation actually changes the irrep $\gamma$ labelling the membrane operator to a different irrep $h^{-1} \rhd \gamma$ in the same $\rhd$-Rep class of irreps (where two irreps of $E$, $\alpha$ and $\beta$, are in the same $\rhd$-Rep class if there exists a $g \in G$ such that $\alpha(g \rhd e)= \beta(e)$ for all $e \in E$). This suggests that the irreps of $E$ are not by themselves good labels for the topological charge, because the irreps are not invariant under braiding, and instead the $\rhd$-Rep classes should be important (although the condensation further affects the topological charge). 
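As a small illustration of these $\rhd$-Rep classes (using, as before, an example of our own choosing rather than anything from the model), take $E=\mathbb{Z}_3$ with $G=\mathbb{Z}_2$ acting by negation. The irreps of $\mathbb{Z}_3$ are $\gamma_k(e)=e^{2\pi i k e/3}$, and the non-trivial group element sends $\gamma_k$ to $\gamma_{-k}$, so the $\rhd$-Rep classes are $\{\gamma_0\}$ and $\{\gamma_1,\gamma_2\}$. The sketch below computes these orbits directly, with the action on irreps taken to be $(g\rhd\gamma)(e)=\gamma(g^{-1}\rhd e)$.

```python
# Compute the |>-Rep classes of the irreps of E = Z_3 under the Z_2
# negation action (an illustrative example; labels k index the irreps
# gamma_k(e) = exp(2 pi i k e / 3)).
import cmath

E = range(3)

def irrep(k, e):
    return cmath.exp(2j * cmath.pi * k * e / 3)

def act(g, e):          # g |> e for G = Z_2 acting by negation
    return e % 3 if g % 2 == 0 else (-e) % 3

def act_on_irrep(g, k):
    """(g |> gamma_k)(e) = gamma_k(g^{-1} |> e); return its irrep label."""
    for kp in range(3):
        if all(abs(irrep(kp, e) - irrep(k, act((-g) % 2, e))) < 1e-12
               for e in E):
            return kp

# |>-Rep classes are the orbits of the irrep labels under the G action.
classes = {frozenset(act_on_irrep(g, k) for g in range(2)) for k in range(3)}
print(sorted(sorted(c) for c in classes))   # [[0], [1, 2]]
```

In particular $\gamma_1$ and $\gamma_2$ sit in one class, matching the statement that braiding can exchange irreps within a $\rhd$-Rep class but never move between classes.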
However, as we see in Section \ref{Section_3D_sphere_charge_examples}, the point-like charge of the loop-like excitations actually depends on how the coefficients of the membrane operator transform under the $\rhd$ action, and so this topological charge has some dependence on quantities within the $\rhd$ class as well. We will discuss the topological charge in more detail in Section \ref{Section_3D_Topological_Sectors}. \subsection{Summary of Braiding in This Case} Table \ref{Table_Braiding_Central} summarizes which types of excitation can have non-trivial braiding relations in the case where $E$ is Abelian and $\partial$ maps to the centre of $G$, where non-trivial braiding between the types of excitation is indicated by ticks. Note that the higher-flux excitations have potentially non-trivial braiding relations with every class of excitation. \begin{table}[h] \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline Non-Trivial& &Higher- & & $E$-valued \\ Braiding?& Electric & flux & Blob & loop \\ \hline Electric & \ding{55} & \ding{51} & \ding{55} & \ding{55}\\ \hline Higher- & && &\\ flux & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ \hline Blob& \ding{55} & \ding{51} & \ding{55} & \ding{51}\\ \hline $E$-valued & & & & \\ loop & \ding{55}& \ding{51} & \ding{51}& \ding{55} \\ \hline \end{tabular} \caption{A summary of which excitations braid non-trivially in Case 2, where the group $E$ is Abelian and $\partial$ maps onto the centre of $G$. A tick indicates that at least some of the excitations of each type braid non-trivially with each other, while a cross indicates that there is no non-trivial braiding between the two types. } \label{Table_Braiding_Central} \end{center} \end{table} \section{Topological Charge} \label{Section_3D_Topological_Sectors} In Ref. \cite{HuxfordPaper1} we explained the concept of topological charge and in Ref.
\cite{HuxfordPaper2} we presented a detailed construction of the measurement operators for topological charge in the 2+1d case. To briefly restate our explanation from Ref. \cite{HuxfordPaper1}, topological charge is a quantity that is conserved, so that the only way to change the topological charge in a region is to apply an operator that connects this region to the rest of our lattice. Further conditions are imposed on the topological charge relevant to our model by requiring that the ground state is the topological vacuum. There is a significant difference between the 2+1d and 3+1d cases, however. Whereas we consider the topological charge in regions isomorphic to disks (or unions and differences of disks, like annuli) when there are only two spatial dimensions, there are more topologically distinct regions to consider when there are three spatial dimensions. For example, we have both topological balls and solid tori. The charge in these regions should be measured by operators on the surfaces of the regions, i.e. on spheres and tori. This variety of regions is related to our excitations. We have both point-like and loop-like excitations, both of which should carry topological charge. While we expect point-like charges to be fully measured by spheres, the sphere has no features that would allow it to distinguish between a loop and a point. On the other hand, a torus has handles which can link with a loop-like excitation and we expect this to allow the torus to distinguish between point particles and loops. We therefore expect that the loop excitations should carry a charge that is not measured by the sphere (in addition to some charge that can be measured by the sphere). Therefore, we need to include the toroidal measurement surfaces as well. \hfill In order to measure the topological charge held within or without a particular surface, we follow a similar procedure to the one used for the 2+1d case in Ref. 
\cite{HuxfordPaper2}, except that the boundary for our region is a surface rather than a path. We take our surface of interest and apply every closed membrane or ribbon operator that we can on this surface, before considering only the sums of these operators that will commute with the Hamiltonian. Combinations of ribbon and membrane operators can produce any linear operator on the Hilbert space. This is because we can consider ribbon and membrane operators that act only on a single edge or plaquette. An electric ribbon operator acting on a single edge can measure any value of that edge's label, while a magnetic membrane operator acting only on a single edge can multiply the label of that edge by any group element. Together these allow us to freely control the label of any edge. Similarly, an $E$-valued membrane operator can measure the label of a single plaquette and a blob ribbon operator can multiply its label by any value. Combinations of these operators can therefore control the label of every edge and plaquette in the lattice. However, when we restrict to operators that commute with the Hamiltonian, we are left only with closed ribbon and membrane operators. This restriction of commuting with the Hamiltonian is because our measurement operator should not by itself produce or move topological charge. \hfill We consider this process of measuring the charge for sphere-like and torus-like surfaces. Theoretically, we could do the same for an arbitrary surface. However, because the simple excitations of the model are either point-like or loop-like, it does not appear necessary to consider the charge measured by higher-genus surfaces. Nevertheless, this may not be the case, and it would be interesting to construct the higher-genus measurement operators, but we leave this for future study. One subtlety with measuring a loop-like excitation is that, because the loops are extended, the excitations may pierce the measurement surface and not be wholly contained within or without the surface.
As an example, consider the situation shown in Figure \ref{measurement_intersected}. As part of the measurement procedure, we must choose closed paths on which to measure any magnetic flux enclosed by the torus. However, in the presence of excitations on the surface itself, two choices of loops to measure on (for example, the blue or yellow paths in the figure) would give different results. This is because different loops may or may not link with the excitation (the thicker red torus). Because both choices are supposed to measure the charge within the torus (the partially transparent green torus), this leads to a contradiction. If we want to measure the charge held within a surface without knowing what excitations are present and where they are, this presents a difficulty. Therefore, we include in our charge measurement operators a projector to the space where the surface has no excitations. This sidesteps the above issue, but it does mean that we cannot measure the charge of confined excitations (which always cause excitations on a surface enclosing them) using this procedure. \begin{figure}[h] \begin{center} \begin{overpic}[width=\linewidth]{measurement_intersected_arrows_2_image.png} \put(80,68){excitation} \put(60,6){measurement surface} \put(0,61){two potential flux measurement loops} \put(15,6){start-point} \end{overpic} \caption{If an excitation pierces the measurement surface, then the charge within the surface is ill-defined. 
In the case shown in this figure, measuring the 1-flux along the two potential loops may give different results (even to the point of indicating different topological charges).} \label{measurement_intersected} \end{center} \end{figure} \subsection{Topological Charge Within a Sphere in the Case Where $\partial \rightarrow$ Centre($G$) and $E$ Is Abelian} \label{Section_Sphere_Charge_Reduced} Before we look at the charge measured by a torus, which is sensitive to both loop-like and point-like charge, we will first examine the charge measured by a sphere. We will do this in the case where $E$ is Abelian and $\partial$ maps onto the centre of $G$ (Case 2 in Table \ref{Table_Cases}), which includes the $\rhd$ trivial case (Case 1 in Table \ref{Table_Cases}) as a sub-case. Because a sphere has no non-contractible cycles, the sphere should only be sensitive to point-like charge. Nonetheless, the sphere charge is interesting, not only because it lets us look at the properties of point particles, but because loop excitations also possess point-like charge. As we explained in the previous section, to measure the charge within a sphere we first project to the case where there are no excitations on the measurement surface. Then we consider which independent closed ribbon and membrane operators we can apply on this surface. \hfill While it may seem that we can independently apply ribbon operators around any closed loop on the surface of the sphere, this is not the case. Any ribbon operator is either topological or confined (or can be written as a linear combination of ribbon operators of the two types), as we show in Section \ref{Section_topological_membrane_operators} in the Supplementary Material. If a ribbon operator is confined, then applying it leads to excitations on the measurement surface, which we do not allow.
On the other hand, if a ribbon operator is topological, then because all closed paths on the sphere are contractible on the spherical surface (and we do not allow excitations on the surface), the ribbon can be contracted to nothing without affecting the action of the ribbon operator. This means that applying a closed topological ribbon operator on the surface of the sphere is equivalent to applying the identity operator (at least in the subspace on which we apply measurement operators). Therefore, any ribbon operators that we are allowed to apply (the non-confined ones) act trivially. \hfill This leaves us only with the membrane operators $C^{h}_T(m)$ and $L^e(m)$, where $C^h_T(m)$ is the total magnetic membrane operator defined in Section \ref{Section_3D_MO_Central} (see Eq. \ref{total_magnetic_membrane_operator}) and $L^e(m)$ is the $E$-valued membrane operator $\delta(e, \hat{e}(m))$. We consider applying these two operators over the sphere. Although we apply both operators on the same sphere, when we define the membrane $m$ for each operator to act on we need to define a start-point for the membrane. It would seem that we could choose the start-points of the membranes to be different for the two membrane operators, giving us many potential measurement operators. However, this is not the case because of the requirement that the total measurement operator commute with the energy terms on the sphere. As we have discussed previously, and prove in Section \ref{Section_Magnetic_Tri_Nontrivial_Commutation} in the Supplementary Material and Section S-I C in the Supplementary Material for Ref. \cite{HuxfordPaper2}, both the magnetic membrane operator and $E$-valued membrane operator commute with all vertex transforms except those at their start-points. If the two operators have different start-points, then each must individually commute with its own start-point vertex transform (rather than their combination having to commute with a mutual start-point transform).
However, when a membrane operator commutes with the start-point vertex transforms, the start-point of the operator becomes arbitrary. That is, if the start-point is not excited we can move the start-point without affecting the action of the membrane operator, because parallel transport of a vertex is equivalent to applying a vertex transform (see Section \ref{Section_magnetic_membrane_central_change_sp} in the Supplementary Material for a proof of this for the magnetic membrane operator and Section S-I C in the Supplementary Material of Ref. \cite{HuxfordPaper2} for a proof for the $E$-valued membrane operator). This means that we can move the start-points to be in the same location anyway, without affecting the action of the two membrane operators. Therefore, without loss of generality, we can consider the two start-points of the membrane operators to be in the same location. \hfill The most general operator we can apply is a linear combination of terms with the form $C^{h}_T(m) L^e(m)$ for different labels $h$ and $e$, where $m$ is the spherical membrane that we are measuring the charge within. We might also consider products that include multiples of one or more of the two types of operators, such as $C^h_T(m) L^e(m)L^f(m) C^g_T(m)$. However, because the two types of operator commute, we can always collect the separate instances of each type of operator, to give us terms like $C^h_T(m) C^g_T(m) L^e(m)L^f(m)$. Then we can use the algebra of the membrane operators to combine them, which just gives us $\delta(e,f)C^{hg}_T(m) L^e(m)$ for the above example. This is just an example of a linear combination of terms of the form $C^{h}_T(m) L^e(m)$, so we only need consider such terms. We take the membrane $m$ to be oriented inwards to match the orientation of the surface label of the direct membrane used in $C^{h}_T(m)$. Taking the opposite orientation would be equivalent to using $e^{-1}$ instead of $e$. 
We also do not displace blob 0 and the start-point of $C^h_T(m)$ and $L^e(m)$ from the membrane, contrary to the approach used in Section \ref{Section_3D_Braiding_Central} when considering the braiding of the higher-flux excitations. This choice does not matter, because the fact that we enforce the start-point and blob 0 to be unexcited by the combined action of the measurement operator means that we can freely move the start-point and blob 0 around without affecting the total action of the measurement operator. Moving the start-point is equivalent to applying a vertex transform at the start-point, which is trivial when the start-point is unexcited, and we show in Section \ref{Section_magnetic_change_blob_0} of the Supplementary Material that moving blob 0 is trivial when that blob is unexcited. \hfill Having found that the operator we apply must have the form $\sum_{h \in G} \sum_{e \in E} \alpha_{h,e} C^{h}_T(m) L^e(m)$, where $\alpha_{h,e}$ are a set of coefficients, we next have to find which coefficients lead to the operator commuting with the energy terms on the sphere. In Section \ref{Section_sphere_topological_charge_appendix_full} of the Supplementary Material, we show that requiring commutation with the energy terms leads to two types of restrictions for the coefficients. Some of these restrictions enforce that the coefficient $\alpha_{h,e}$ must be zero for certain labels (i.e. certain pairs of label $(h,e)$ for $C^{h}_T(m)L^{e}(m)$ are disallowed), while other conditions mean that the coefficients of two pairs $(h_1,e_1)$ and $(h_2,e_2)$ must be the same (i.e. $\alpha_{h_1,e_1}= \alpha_{h_2,e_2}$). As a shorthand, we write $(h_1,e_1) \overset{\mathrm{S}}{\sim} (h_2,e_2)$ for two pairs that are subject to this latter type of restriction (must have equal coefficients). 
Then the restrictions that we find are \begin{equation} h \rhd e =e, \end{equation} \begin{equation} \partial(e)=1_G, \end{equation} \begin{equation} h \overset{\mathrm{S1}}{\sim} \partial(f)h \ \forall f \in E, \end{equation} and \begin{equation} (h,e) \overset{\mathrm{S2}}{\sim} (ghg^{-1}, g \rhd e) \ \forall g \in G. \end{equation} The two equivalence relations $\overset{\mathrm{S1}}{\sim}$ and $\overset{\mathrm{S2}}{\sim}$ together form the equivalence relation $\overset{\mathrm{S}}{\sim}$, where any pairs of labels $(h_1,e_1)$ and $(h_2,e_2)$ related by $\overset{\mathrm{S}}{\sim}$ must have equal coefficients. We can write $\overset{\mathrm{S}}{\sim}$ explicitly as \begin{equation} (h,e) \overset{\mathrm{S}}{\sim} (\partial(f) ghg^{-1}, g \rhd e), \end{equation} for each $g \in G$ and $f \in E$ (i.e. $(h,e)$ is in the same equivalence class as $(h',e')$ if there exist $g \in G$ and $f \in E$ such that $(h',e') = (\partial(f) ghg^{-1}, g \rhd e)$). Given all of these conditions for our measurement operators, we can construct a basis for the space of allowed measurement operators. As we show in Section \ref{Section_sphere_topological_charge_appendix_full} of the Supplementary Material, one such basis is given by a set of operators labelled by two objects. The first object, $C$, is a $\rhd$-class of the kernel of $\partial$. A $\rhd$-class of the kernel is a subset of the kernel consisting of elements related by the equivalence relation $e \overset{\rhd}{\sim} f$ if there exists a $g \in G$ such that $g \rhd e=f$. It is convenient to pick a representative element $r_C$ for each such class $C$. Then we define the centralizer of the class $C$ as $Z_{\rhd, {r_C}}=\set{ h \in G | h \rhd r_C = r_C}$.
The second object that labels our basis operators is a class within this centralizer, this time described by the equivalence relation \begin{equation} h \overset{Z_{\rhd, {r_C}}}{\sim} xhx^{-1} \partial(w) \label{union_coset_relation_main_text} \end{equation} for any elements $x \in Z_{\rhd, {r_C}}$ and $w \in E$. Note that this equivalence relation is similar to $\overset{\mathrm{S}}{\sim}$, in that it gives the same form of relation, but only for elements $x \in Z_{\rhd,r_C}$ (for which $x \rhd r_C=r_C$) rather than general elements $g \in G$. The basis operator corresponding to a particular $\rhd$-class $C$ of the kernel and equivalence class $D$ (defined by Equation \ref{union_coset_relation_main_text}) of the associated centralizer is $$T^{D,C}(m)= \sum_{q \in Q_C} \sum_{d \in D} C^{qdq^{-1}}_T(m) L^{q \rhd r_C}(m),$$ where $Q_C$ is a set of elements of $G$ that move us between the elements of the $\rhd$-class $C$, so that each element $e_i \in C$ has a unique $q_i \in Q_C$ such that $e_i = q_i \rhd r_C$. As we show in Section \ref{Section_sphere_topological_charge_appendix_full}, any element $g\in G$ can be uniquely decomposed as a product of an element of $Q_C$ and an element of $Z_{\rhd,r_C}$. We can use this basis of operators to construct the projectors to definite topological charge within the sphere. These projectors are given by \begin{equation} T^{R,C}(m)=\frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) T^{D,C}(m), \label{Equation_sphere_projector_definition_main_text} \end{equation} where $R$ is an irrep of the quotient group $Z_{\rhd,r_C}/\partial(E)$ with dimension $|R|$ and $(Z_{\rhd,r_C})_{cl}$ is the set of classes in the centralizer defined by the equivalence relation Equation \ref{union_coset_relation_main_text}. 
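As a concrete (and purely illustrative) check of this construction, the sketch below works through a small toy crossed module of our own choosing, with $G = \mathbb{Z}_2$ acting on $E = \mathbb{Z}_3$ by inversion and a trivial boundary map $\partial$. It enumerates the allowed pairs $(h,e)$ and their $\overset{\mathrm{S}}{\sim}$-classes, writes the corresponding projectors as coefficient dictionaries over pairs $(h,e)$ using the algebra quoted earlier ($C^h_T C^g_T = C^{hg}_T$, $L^e L^f = \delta(e,f)L^e$, with the two types commuting), and verifies idempotency, mutual orthogonality and completeness. All group choices and names here are assumptions made for the example, not part of the model's formal development:

```python
from fractions import Fraction

# Toy crossed module (our own choice, purely for illustration):
# G = Z_2 = {0, 1} (additive), E = Z_3 = {0, 1, 2} (additive),
# trivial boundary map d(e) = 0, action g |> e = e or -e mod 3.
G = [0, 1]
E = [0, 1, 2]
d = lambda e: 0
act = lambda g, e: e if g == 0 else (-e) % 3

# Allowed pairs (h, e): h |> e = e and d(e) = 1_G.
allowed = [(h, e) for h in G for e in E if act(h, e) == e and d(e) == 0]

# S-equivalence: (h, e) ~ (d(f) + h, g |> e), since conjugation is trivial
# in an abelian G. We build the classes as orbits under these moves.
def orbit(pair):
    seen, frontier = {pair}, [pair]
    while frontier:
        h, e = frontier.pop()
        for g in G:
            for f in E:
                new = ((d(f) + h) % 2, act(g, e))
                if new not in seen:
                    seen.add(new)
                    frontier.append(new)
    return frozenset(seen)

classes = {orbit(p) for p in allowed}

# Operators sum_{h,e} c_{h,e} C^h_T L^e as dicts {(h, e): c}, multiplied
# with C^h C^g = C^{hg} and L^e L^f = delta(e, f) L^e.
def mult(A, B):
    out = {}
    for (h1, e1), a in A.items():
        for (h2, e2), b in B.items():
            if e1 == e2:
                key = ((h1 + h2) % 2, e1)
                out[key] = out.get(key, Fraction(0)) + a * b
    return {k: v for k, v in out.items() if v != 0}

# The projectors T^{R,C} for this toy model, written out explicitly.
# C = {0}: Z_{|>,r_C} = G, irreps with characters (1,1) and (1,-1);
# C = {1, 2}: Z_{|>,r_C} trivial, only the trivial irrep.
half = Fraction(1, 2)
T = {
    ('triv', 'C0'): {(0, 0): half, (1, 0): half},
    ('sign', 'C0'): {(0, 0): half, (1, 0): -half},
    ('triv', 'C12'): {(0, 1): Fraction(1), (0, 2): Fraction(1)},
}
assert len(T) == len(classes)  # one projector per S-class

identity = {(0, e): Fraction(1) for e in E}  # C^{1_G} sum_e L^e
for k1, P1 in T.items():
    for k2, P2 in T.items():
        assert mult(P1, P2) == (P1 if k1 == k2 else {})

total = {}
for P in T.values():
    for k, v in P.items():
        total[k] = total.get(k, Fraction(0)) + v
total = {k: v for k, v in total.items() if v != 0}
assert total == identity
print("idempotency, orthogonality and completeness all hold")
```

The count of projectors matching the count of $\overset{\mathrm{S}}{\sim}$-classes reflects the completeness of the basis described above.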
Note that the character $\chi_R$ of irrep $R$ is independent of the element $d \in D$ (because characters are a function of conjugacy class, and $R$ being an irrep of the quotient group means that it is also insensitive to factors of $\partial(w)$ from Equation \ref{union_coset_relation_main_text}). In Section \ref{Section_sphere_topological_charge_appendix_full} of the Supplementary Material we prove that the operators defined by Equation \ref{Equation_sphere_projector_definition_main_text} are indeed projectors and are orthogonal and complete in our space. \subsubsection{The Point-Like Charge of Simple Excitations} \label{Section_3D_sphere_charge_examples} Having worked out the projectors for the charges, it will be instructive to use them to check the topological charge of some of our simple excitations (those produced by single ribbon or membrane operators). To do this we try enclosing these charges with our measurement operators. \hfill We first consider measuring the charge of an electric excitation at the end-point of a ribbon operator. To do this, we first need to create our electric excitation, by applying an electric ribbon operator to our ground state. Considering a ribbon operator labelled by irrep $X$ of $G$ and matrix indices $a$ and $b$, we obtain the state $$\sum_{g \in G} [D^{X}(g)]_{ab} \delta(\hat{g}(t),g) \ket{GS}.$$ \begin{figure}[h] \begin{center} \begin{overpic}[width=0.6\linewidth]{Point_measurement_1_arrows.png} \put(10,91){Excitation to be measured} \put(0,5){Measurement surface} \put(72,80){Path of ribbon} \end{overpic} \caption{We measure the charge held at the end of an electric ribbon, using our spherical surface (large green sphere)} \label{3D_electric_measurement_2} \end{center} \end{figure} Next we want to measure this charge, by applying a measurement operator, as shown in Figure \ref{3D_electric_measurement_2}. 
Therefore, we want to calculate \begin{align*} T&^{R,C}(m) \sum_{g \in G} [D^{X}(g)]_{ab} \delta(\hat{g}(t),g) \ket{GS}\\ & = \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} C_T^{qdq^{-1}}(m)\\ & \hspace{0.5cm} \delta(\hat{e}(m), q \rhd r_C) \sum_{g \in G} \delta(\hat{g}(t),g) [D^{X}(g)]_{ab} \ket{GS}. \end{align*} The operator $\delta(\hat{e}(m), q \rhd r_C)$ commutes with $\delta(\hat{g}(t),g)$, so we can commute $\delta(\hat{e}(m), q \rhd r_C)$ all the way to the right, so that it acts directly on the ground state. Then we have $$\delta(\hat{e}(m), q \rhd r_C)\ket{GS}=\delta(1_E, q\rhd r_C) \ket{GS},$$ because $m$ is a sphere, and any contractible sphere in the ground state must have a surface label of $1_E$ due to the blob energy terms. We can write $\delta(q\rhd r_C,1_E)$ as $\delta(r_C,1_E)=\delta(C,\set{1_E})$ (using the fact that the identity is invariant under the $\rhd$ action and so is the only element of its $\rhd$-class). Therefore, we find that the result of measurement is zero unless the class that we are trying to measure is the trivial one. This is as we expect, because the electric excitations do not possess non-trivial 2-flux. \hfill Having found that the class $C$ must be trivial for a non-zero result, we can also simplify the other mathematical objects appearing in the projector. When $r_C=1_E$, we have that $h \rhd 1_E=1_E \ \forall h \in G$, which implies that $Z_{\rhd,r_C}=G$ and the quotient group $Z_{\rhd,r_C}/ \partial(E)$ is simply $G/\partial(E)$. In addition, the set $Q_C$ contains just the identity element, so we may drop the sum over $q \in Q_C$.
Then the result of our measurement is \begin{align*} \delta&(C,\set{1_E}) \frac{|R|}{|G|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} C_T^d(m)\\ & \hspace{3cm}\sum_{g \in G} \delta(\hat{g}(t),g) [D^{X}(g)]_{ab} \ket{GS}\\ &= \delta(C,\set{1_E}) \frac{|R|}{|G|} \sum_{d \in G} \sum_{g \in G} \chi_R(d) [D^{X}(g)]_{ab}\\ & \hspace{3cm} C_T^d(m) \delta(\hat{g}(t),g) \ket{GS}. \end{align*} Then we just need to find the commutation relation between $C^d_T(m)$ and $\delta(\hat{g}(t),g)$. The calculation of this is analogous to the calculation performed to find the braiding relation between the electric and magnetic excitations (see Section \ref{Section_Flux_Charge_Braiding}), except that we have the opposite orientation of the magnetic membrane operator. We find that \begin{align*} C_T^d(m) &\delta(\hat{g}(t),g) \ket{GS}\\ & = \delta(\hat{g}(t-m)d\hat{g}(t-m)^{-1}\hat{g}(t),g) C_T^d(m) \ket{GS}, \end{align*} where $\hat{g}(t-m)$ is shorthand for $\hat{g}(s.p(t)-s.p(m))$, the path element for the path from the start-point of $t$ to the start-point of $m$. We also have that $C_T^d(m) \ket{GS}=\ket{GS}$ because the sphere is contractible and the operator is topological (so that we can deform the operator to nothing). Using these results in our previous expression gives \begin{align*} T^{R,C}(m) &\sum_{g \in G} [D^{X}(g)]_{ab} \delta(\hat{g}(t),g) \ket{GS}\\ &= \delta(C, \set{1_E}) \frac{|R|}{|G|}\sum_{d \in G} \sum_{g \in G} \chi_R(d) [D^{X}(g)]_{ab}\\ & \hspace{0.5cm} \delta(\hat{g}(t),\hat{g}(t-m)d^{-1}\hat{g}(t-m)^{-1}g) \ket{GS}. \end{align*} We then rewrite $\hat{g}(t-m)d\hat{g}(t-m)^{-1}$ as $d'$ and replace the sum over the dummy index $d$ with a sum of $d'$. 
Noting that the character $ \chi_R$ is a function of conjugacy class, so that $\chi_R(d') = \chi_R(d)$, we then see that \begin{align*} T^{R,C}(m) &\sum_{g \in G} [D^{X}(g)]_{ab} \delta(\hat{g}(t),g) \ket{GS}\\ &=\delta(C, \set{1_E}) \frac{|R|}{|G|}\sum_{d' \in G} \sum_{g \in G} \chi_R(d') [D^{X}(g)]_{ab}\\ & \hspace{0.5cm} \delta(\hat{g}(t),{d'}^{-1}g) \ket{GS}\\ &= \delta(C, \set{1_E}) \frac{|R|}{|G|}\sum_{d' \in G} \sum_{g'={d'}^{-1}g \in G} \chi_R(d')\\ & \hspace{0.5cm} [D^{X}(d'g')]_{ab} \delta(\hat{g}(t),g') \ket{GS}\\ &= \delta(C, \set{1_E}) \frac{|R|}{|G|}\sum_{d',g' \in G} \sum_{c=1}^{|X|} \sum_{e=1}^{|R|} [D^R(d')]_{ee}\\ & \hspace{0.5cm} [D^{X}(d')]_{ac} [D^{X}(g')]_{cb} \delta(\hat{g}(t),g')\ket{GS}. \end{align*} We now want to use the orthogonality relations for irreps of a group to simplify this. There is a slight complication in that $X$ is an irrep of $G$ whereas $R$ is an irrep of $G/\partial(E)$. However, $R$ induces a representation $R_G$ of $G$ defined by $R_G(g) =R(\tilde{g}\partial(E))$, where $\tilde{g}\partial(E)$ is the coset to which $g$ belongs. Therefore, each matrix from $R$ is copied $|\partial(E)|$ times in $R_G$. Given that $R$ is an irrep of $G/\partial(E)$, $R_G$ must also be irreducible (as a representation of $G$). This is because the same matrices appear in the two representations $R$ and $R_G$, so if $R$ cannot be reduced to a block diagonal form then neither can $R_G$. Then we can use $R_G$ instead of $R$ and apply the standard irrep orthogonality relations to obtain \begin{align*} T^{R,C}(m) &\sum_{g \in G} [D^{X}(g)]_{ab} \delta(\hat{g}(t),g) \ket{GS}\\ &= \delta(C, \set{1_E}) \frac{|R|}{|G|} \sum_{g' \in G} \sum_{c=1}^{|X|} \sum_{e=1}^{|R|} \frac{|G|}{|R|} \delta_{e a}\delta_{e c} \delta(\overline{R}_G, X)\\ & \hspace{0.5cm} [D^{X}(g')]_{cb} \delta(\hat{g}(t),g')\ket{GS}\\ &= \delta(C,\set{1_E}) \delta(R_G, \overline{X}) \sum_{g' \in G}[D^{X}(g')]_{ab}\\ & \hspace{0.5cm} \delta(\hat{g}(t),g') \ket{GS}.
\end{align*} We see that the final result of applying the measurement operator is that we recover our original electric operator acting on the ground state, multiplied by $\delta(C,\set{1_E}) \delta(R_G, \overline{X})$. This indicates that the charge of the excitation is $(\set{1_E},\overline{X})$. If the irrep $X$ corresponds to a confined excitation, none of our measurement operators will give a non-zero result, because $R_G$ derives from an irrep of the quotient group, and so cannot have a non-trivial restriction to the image of $\partial$. This is a result of our requirement that we project out the states that have excitations on the measurement membrane itself, which naturally precludes the measurement of any confined excitations. \hfill We can similarly check the charge of a blob excitation. We expect that the corresponding charge will be labelled by a non-trivial class $C$, because this class is associated to the 2-flux measured by the measurement operator. We might think that, because the blob excitation can have an excited vertex in addition to the excited blob, the representation labelling the charge should be non-trivial, just as with the electric excitation. However, we will see that the vertex excitation does not in this case result in a non-trivial representation. We measure the topological charge of the blob excitation at the start of the path of a blob ribbon operator, as indicated in Figure \ref{3D_blob_measurement}. We choose to measure the charge of the excitation at the start of the ribbon, rather than the end as we did with the electric excitation, because this will highlight the fact that the excited vertex enclosed by the measurement surface does not lead to a non-trivial representation $R$.
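Before moving on, the induced representation $R_G$ used in the electric calculation can be checked in a toy case of our own devising (all choices below are assumptions for illustration): take $G = \mathbb{Z}_4$ with $\partial(E) = \{0, 2\}$, so that $G/\partial(E) \cong \mathbb{Z}_2$. The sketch confirms that the induced character is constant on cosets, that $R_G$ is irreducible as a representation of $G$, and that the character overlap reproduces the $\delta(R_G, \overline{X})$ selection rule:

```python
import cmath

# Assumed toy setting: G = Z_4 with image of the boundary map dE = {0, 2},
# so G/dE = Z_2. The irreps of Z_4 have characters chi_k(g) = i^{kg}.
G = range(4)
def chi(k, g):
    return cmath.exp(2j * cmath.pi * k * g / 4)

# Induce R_G from the sign irrep of the Z_2 quotient: R_G(g) = R(g mod 2).
chi_RG = [(-1) ** (g % 2) for g in G]

# Constant on cosets of dE, and irreducible as a representation of G
# (the squared character norm equals |G|):
assert all(chi_RG[g] == chi_RG[(g + 2) % 4] for g in G)
assert sum(abs(x) ** 2 for x in chi_RG) == 4

# The overlap (1/|G|) |sum_g chi_RG(g) chi_X(g)| is non-zero only when X is
# the conjugate of R_G, mirroring the delta(R_G, X-bar) selection rule:
overlaps = [abs(sum(chi_RG[g] * chi(k, g) for g in G)) / 4 for k in range(4)]
print([round(o, 10) for o in overlaps])  # -> [0.0, 0.0, 1.0, 0.0]
```

Here the surviving irrep is $k=2$, whose character $i^{2g} = (-1)^g$ is its own conjugate, as expected.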
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.6\linewidth]{point_measurement_2_arrows.png} \put(10,91){Blob excitation to be measured} \put(0,5){Measurement surface} \put(68,77){Path of ribbon} \put(28,20){Start-point of ribbon} \end{overpic} \caption{We measure the topological charge of the blob excitation at the start of the blob ribbon operator.} \label{3D_blob_measurement} \end{center} \end{figure} We consider measuring the charge of a blob excitation produced by the blob ribbon operator $$\sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t),$$ where $[\tilde{e}]_{\rhd}$ is the $\rhd$-class containing $\tilde{e}$, which is defined by \begin{equation} e \in [\tilde{e}]_{\rhd} \iff \exists g \in G \text{ s.t. } e = g \rhd \tilde{e}. \label{rhd_class_def} \end{equation} In order to measure the charge, we apply a measurement operator $T^{R,C}$ on the state produced by acting with the blob ribbon operator on the ground state. That is, we examine a state \begin{align} T^{R,C}&(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS} \notag\\ &= \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in(Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} C_T^{qdq^{-1}}(m)\notag\\ & \hspace{1.5cm} \delta(\hat{e}(m), q \rhd r_C) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}, \label{blob_charge_measure_1} \end{align} where the $\alpha_e$ are a set of coefficients that we keep general, so that we can show that the topological charge does not depend on the coefficients within the $\rhd$-class, only on the class itself.
\hfill From a calculation analogous to the one for braiding between higher-flux excitations and blob excitations in Section \ref{Section_magnetic_blob_braiding}, we know that \begin{align} &C_T^{qdq^{-1}}(m) \delta(\hat{e}(m), q \rhd r_C) B^e(t) \ket{GS} \notag\\ & = B^e(\text{start}(t)-\text{ blob }0) \notag \\ & \hspace{0.5cm} B^{(g(m-t)^{-1}qdq^{-1}g(m-t)) \rhd e}(\text{blob }0- \text{end}(t)) C^{qdq^{-1}}_T(m) \notag\\ & \hspace{0.5cm} \delta( \hat{e}(m),[q \rhd r_C] [\hat{g}(s.p(m)-s.p(t)) \rhd e^{-1}]) \ket{GS}, \label{blob_charge_measure_2} \end{align} where all of the blob ribbon operators have the same start-point as $t$ and $g(m-t)$ is shorthand for the path element $g(s.p(m)-s.p(t))$. Then, because the membrane $m$ is contractible, its surface element must be the identity in the ground state. Therefore, we have \begin{align} \delta( \hat{e}&(m),[q \rhd r_C] [\hat{g}(s.p(m)-s.p(t)) \rhd e^{-1}]) \ket{GS} \notag\\ &= \delta([q \rhd r_C] [\hat{g}(s.p(m)-s.p(t)) \rhd e^{-1}], 1_E ) \ket{GS}. \label{higher_flux_on_ground_state} \end{align} Substituting the relations from Equations \ref{blob_charge_measure_2} and \ref{higher_flux_on_ground_state} into Equation \ref{blob_charge_measure_1}, we have \begin{align} &T^{R,C}(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS} \notag \\ &= \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e \notag \\ & B^e(\text{start}(t) - \text{ blob }0) B^{(g(m-t)^{-1}qd) \rhd r_C}(\text{blob }0- \text{end}(t)) \notag\\ & \delta(q \rhd r_C, \hat{g}(m-t) \rhd e) \ket{GS}, \label{blob_charge_measure_3} \end{align} where we used the Kronecker delta to rewrite the label of the second blob ribbon operator in terms of $r_C$. But then $d$ is an element of $Z_{\rhd,r_C}$, so $d \rhd r_C =r_C$. This means that $(\hat{g}(m-t)^{-1}qd) \rhd r_C=(\hat{g}(m-t)^{-1}q) \rhd r_C$. 
Then the Kronecker delta enforces that $q \rhd r_C= \hat{g}(m-t) \rhd e,$ so that \begin{align*} (\hat{g}(m-t)^{-1}q) \rhd r_C &= \hat{g}(m-t)^{-1} \rhd (q \rhd r_C)\\ &=\hat{g}(m-t)^{-1} \rhd (\hat{g}(m-t) \rhd e)\\ &=e. \end{align*} Substituting this into Equation \ref{blob_charge_measure_3}, we see that the result of our measurement is \begin{align*} T^{R,C}&(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &=\frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e\\ & \hspace{0.5cm} B^e(\text{start}(t) - \text{ blob }0) B^{e}(\text{blob }0- \text{end}(t)) \\ & \hspace{0.5cm} \delta(q \rhd r_C, \hat{g}(m-t) \rhd e) \ket{GS}. \end{align*} We see that the labels of the two sections of the blob ribbon operator are the same, and so we can recombine them into a single ribbon operator applied on the original ribbon, $t$. We then have \begin{align*} T^{R,C}&(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &=\frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e\\ & \hspace{0.5cm} B^e(t) \delta(q \rhd r_C, \hat{g}(m-t) \rhd e) \ket{GS}. \end{align*} Next, note that if the Kronecker delta $$\delta(q \rhd r_C, \hat{g}(m-t) \rhd e)$$ is satisfied, then $e$ and $r_C$ must be in the same $\rhd$-class (they are related by the action of $q^{-1}\hat{g}(m-t)$), and so we can extract $\delta([\tilde{e}]_{\rhd},C)$ from the Kronecker delta to obtain \begin{align*} T^{R,C}&(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &=\frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e\\ & \hspace{0.5cm} B^e(t) \delta([\tilde{e}]_{\rhd},C)\delta(q \rhd r_C, \hat{g}(m-t) \rhd e) \ket{GS}. 
\end{align*} In addition, the dummy variable $q$ now only appears in the expression $$\delta(q \rhd r_C, \hat{g}(m-t) \rhd e).$$ However, provided that $r_C$ and $e$ are in the same $\rhd$-class (as enforced by $\delta([\tilde{e}]_{\rhd},C)$), there is precisely one value of $q \in Q_C$ that satisfies $q \rhd r_C= \hat{g}(m-t) \rhd e$, and so we can remove the sum over $q$ along with the Kronecker delta, to obtain \begin{align*} T^{R,C}(m) \sum_{e \in [\tilde{e}]_{\rhd}}& \alpha_e B^e(t) \ket{GS}\\ &=\delta([\tilde{e}]_{\rhd},C) \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{d \in Z_{\rhd,r_C}} \chi_R(d)\\ & \hspace{0.5cm} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}. \end{align*} Next, we wish to use orthogonality of characters to find the irrep $R$. To do so, we note that the character of the trivial irrep is one for all elements, and so $$\sum_{d \in Z_{\rhd,r_C}} \chi_R(d)=\sum_{d \in Z_{\rhd,r_C}} \chi_R(d) \chi_{1_{\text{Rep}}}(d^{-1}).$$ The index $d$ appears only in this expression, and so we can use orthogonality of characters to write \begin{align*} T^{R,C}&(m) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &= \delta([\tilde{e}]_{\rhd},C) \frac{|R|}{|Z_{\rhd,r_C}|} \big(\sum_{d \in Z_{\rhd,r_C}} \chi_R(d) \chi_{1_{\text{Rep}}}(d^{-1})\big)\\ & \hspace{0.5cm} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &= \delta([\tilde{e}]_{\rhd},C) \frac{|R|}{|Z_{\rhd,r_C}|} \big(\delta(R,1_{\text{Rep}}) |Z_{\rhd,r_C}|\big) \\ & \hspace{0.5cm} \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}\\ &= \delta([\tilde{e}]_{\rhd},C) \delta(R,1_{\text{Rep}}) \sum_{e \in [\tilde{e}]_{\rhd}} \alpha_e B^e(t) \ket{GS}. \end{align*} This expression is just our original blob ribbon operator acting on the ground state, multiplied by $\delta([\tilde{e}]_{\rhd},C) \delta(R,1_{\text{Rep}})$. This indicates that our blob excitation has charge $([\tilde{e}]_{\rhd},1_{\text{Rep}})$. 
Note that because our measurement operator only runs over classes $C$ in the kernel of $\partial$, if the blob ribbon operator is confined we will always get zero when we act with our measurement operator. We also note that the representation $R$ is always the trivial representation, regardless of which set of coefficients $\alpha_e$ we have. This means that, as we stated earlier, even if the coefficients $\alpha_e$ are such that the blob ribbon operator excites the start-point vertex, this is not reflected in the charge of the excitation. The idea that the extra vertex excitation on an object may not correspond to an additional charge is something that is familiar from Kitaev's Quantum Double model in 2+1d \cite{Kitaev2003, Komar2017}. \hfill We previously mentioned that the loop-like excitations of this model may also carry a point-like topological charge that can be measured by the spherical measurement operators. As an example, consider the $E$-valued loop excitations (we also examine the higher-flux excitations, but this is left to Section \ref{Section_point_like_charge_higher_flux} in the Supplementary Material due to the increased mathematical complexity of the calculation). We wish to measure the topological charge of such a loop excitation using our spherical measurement operator, as shown in Figure \ref{sphere_charge_of_loop}. Note that if the start-point were inside the measurement sphere, the entire membrane operator would be wholly within the sphere (or could be deformed to be within the sphere), so the membrane operator would commute with any measurement operator applied on that sphere. Therefore, the measurement operator would just measure the charge of the ground state, i.e. the vacuum charge. This means that the combined point-like charge of the start-point and the loop excitation is trivial, and so any point-like charge carried by the loop must be balanced by a charge belonging to the start-point. 
\begin{figure}[h] \begin{center} \begin{overpic}[width=0.9\linewidth]{Sphere_measurement_loop_total_arrows.png} \put(10,60){measurement operator $T^{R,C}(n)$} \put(0,1){$E$-valued membrane operator on membrane $m$} \put(30,53){common start-point} \put(45,30){\Huge $\rightarrow$} \put(37,36){\parbox{2cm}{\raggedright pull through}} \end{overpic} \caption{We measure the spherical charge of an $E$-valued loop. To simplify the calculation, we deform the $E$-valued membrane to pull it inside the measurement operator, while keeping the start-point outside.} \label{sphere_charge_of_loop} \end{center} \end{figure} In order to find the charge of the loop-like excitation, we want to calculate $$T^{R,C}(n) \sum_{e \in E} a_e \delta(e, \hat{e}(m)) \ket{GS}.$$ To do so, we must first evaluate $$C^{h}_T(n) L^{e_n}(n) \delta(\hat{e}(m),e) \ket{GS},$$ where $h$ and $e_n$ are stand-ins for any label that can appear for the individual operators in our measurement operators. Firstly, we note that $L^{e_n}(n)= \delta(\hat{e}(n),e_n)$ commutes with $\delta(\hat{e}(m),e)$, because both are diagonal in the configuration basis (the basis where each edge is labelled by an element of $G$ and each plaquette is labelled by an element of $E$). On the other hand $\delta(\hat{e}(m),e)$ does not commute with the magnetic membrane operator $C^h_T(n)$. We can see this by writing the surface element $\hat{e}(m)$ in terms of the constituent plaquettes, as $$\hat{e}(m) = \prod_{p \in m} \hat{g}(s.p(m)-v_0(p)) \rhd \hat{e}_p,$$ where $e_p$ is the label of plaquette $p$ and we have assumed each plaquette aligns with $m$ (otherwise we must replace the plaquette label with the inverse). We see that this depends on the group element associated to the path $(s.p(m)-v_0(p))$ from the start-point of the membrane to the base-point of the plaquette. This path passes through the membrane $n$ and so is affected by the magnetic membrane operator. 
As we prove in Section \ref{Section_electric_magnetic_braiding_3D_tri_trivial} of the Supplementary Material (see Equation \ref{Magnetic_electric_3D_braid_reverse}), such a path element satisfies the commutation relation \begin{align*} \hat{g}&(s.p(m)-v_0(p)) C^h_T(n)\\ & = C^h_T(n)\hat{g}(s.p(m)-s.p(n))h^{-1}\hat{g}(s.p(m)-s.p(n))^{-1}\\ & \hspace{0.5cm} \hat{g}(s.p(m)-v_0(p)). \end{align*} Defining $$h_{[m-n]}^{\phantom {-1}}=\hat{g}(s.p(m)-s.p(n))h\hat{g}(s.p(m)-s.p(n))^{-1},$$ this leads to the surface label satisfying the commutation relation \begin{align*} &C^h_T(n) \hat{e}(m) = h_{[m-n]}^{\phantom {-1}} \rhd \hat{e}(m) C^h_T(n), \end{align*} so that \begin{align*} C^h_T(n) \delta(\hat{e}(m),e)&= \delta(h_{[m-n]}^{\phantom {-1}} \rhd \hat{e}(m),e) C^h_T(n)\\ &=\delta( \hat{e}(m),h_{[m-n]}^{-1} \rhd e) C^h_T(n). \end{align*} Then if we take the start-points of $m$ and $n$ to be the same (which has no effect on the result, because the start-point of the measurement operator can be changed without affecting the measurement operator), this becomes \begin{align*} C^{h}_T(n) L^{e_n}(n) &\delta(\hat{e}(m),e) \ket{GS}\\ &= \delta(\hat{e}(m),h^{-1} \rhd e) C^{h}_T(n) L^{e_n}(n) \ket{GS} \\ &= \delta(\hat{e}(m),h^{-1} \rhd e) \delta(e_n,1_E) \ket{GS}, \end{align*} where in the last line we used the fact that the contractible closed surface $n$ must have trivial label in the ground state, while the magnetic membrane operator applied on $n$ acts trivially on the ground state (again, because $n$ is closed and contractible).
We can then use this result to evaluate the action of the measurement operator, to obtain \begin{align*} T^{R,C}&(n) \sum_{e \in E} a_e \delta(e, \hat{e}(m)) \ket{GS} \\ &= \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in (Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C}\\ & \hspace{1cm} C^{qdq^{-1}}_T(n)L^{q \rhd r_C}(n) \sum_{e \in E} a_e \delta(e, \hat{e}(m)) \ket{GS}\\ &= \frac{|R|}{|Z_{\rhd,r_C}|} \sum_{D \in(Z_{\rhd,r_C})_{cl}} \chi_R(D) \sum_{d \in D} \sum_{q \in Q_C}\sum_{e \in E} a_e\\ & \hspace{1cm} \delta(qd^{-1}q^{-1} \rhd e, \hat{e}(m)) \delta(q \rhd r_C,1_E) \ket{GS}. \end{align*} Then $\delta(q \rhd r_C,1_E)$ enforces that $C=\set{1_E}$ and so, just as with the calculation for the charge of the electric excitation, the groups involved in the projector simplify greatly. $Z_{\rhd,r_C}$, the group of elements in $G$ which stabilise $r_C$, becomes the whole group when $r_C=1_E$. This also means that the sum over representatives $q \in Q_C$ becomes trivial, with $q=1_G$ being the only element. This gives us \begin{align*} T^{R,C}(n) &\sum_{e \in E} a_e \delta(e, \hat{e}(m)) \ket{GS} \\ &=\delta(C,\set{1_E}) \frac{|R|}{|G|} \sum_{d \in G} \chi_R (d) \sum_{e \in E} a_e\\ &\hspace{1cm}\delta(d^{-1} \rhd e, \hat{e}(m)) \ket{GS}\\ &=\delta(C,\set{1_E}) \frac{|R|}{|G|} \sum_{d \in G} \chi_R (d) \sum_{e'=d^{-1} \rhd e} a_{d \rhd e'}\\ &\hspace{1cm} \delta(e', \hat{e}(m))\ket{GS}\\ &= \delta(C,\set{1_E}) \frac{|R|}{|G|} \sum_{e' \in E} (\sum_{d \in G} \chi_R (d) a_{d \rhd e'})\\ & \hspace{1cm}\delta(e', \hat{e}(m))\ket{GS}. \end{align*} Examining the term in brackets, $(\sum_{d \in G} \chi_R (d) a_{d \rhd e'})$, we see that treating $a_{d \rhd e'}$ as a coefficient for $d$ will result in this coefficient being decomposed into irreps of $G$, describing the way in which the group $G$ acts on the coefficients by the $\rhd$ action.
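To make this decomposition concrete, the following sketch (again using an assumed toy crossed module of our own, with $G = \mathbb{Z}_2$ acting on $E = \mathbb{Z}_3$ by inversion) evaluates the bracketed term $\sum_{d \in G} \chi_R(d) a_{d \rhd e'}$, first for $\rhd$-invariant coefficients and then for coefficients whose sums over $\rhd$-orbits vanish:

```python
# Our own toy illustration of the bracketed term sum_d chi_R(d) a_{d |> e'}:
# G = Z_2 = {0, 1} acts on E = Z_3 = {0, 1, 2} by inversion. The two irreps
# of Z_2 have characters (1, 1) (trivial) and (1, -1) (sign).
G = [0, 1]
E = [0, 1, 2]
act = lambda g, e: e if g == 0 else (-e) % 3
chars = {'triv': [1, 1], 'sign': [1, -1]}

def bracket(R, a):
    """Return the map e' -> sum_{d in G} chi_R(d) a_{d |> e'}."""
    return [sum(chars[R][d] * a[act(d, e)] for d in G) for e in E]

# Invariant coefficients (a_{d |> e'} = a_{e'}): only the trivial irrep
# contributes, so the loop carries no point-like charge.
a_inv = [5, 2, 2]                  # invariant since a_1 = a_2
assert bracket('sign', a_inv) == [0, 0, 0]
assert bracket('triv', a_inv) != [0, 0, 0]

# Coefficients with sum_d a_{d |> e'} = 0 for all e' (start-point excited):
# the trivial-irrep contribution vanishes, leaving only non-trivial charge.
a_exc = [0, 3, -3]                 # a_0 + a_0 = 0 and a_1 + a_2 = 0
assert bracket('triv', a_exc) == [0, 0, 0]
assert bracket('sign', a_exc) != [0, 0, 0]
print("coefficient decomposition checks passed")
```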
Only the contribution from $\overline{R}$ survives, due to orthogonality of characters, meaning that the measurement only gives a non-zero result if the $a$ coefficients contain the irrep $\overline{R}$. Therefore, the $E$-valued loop can have a non-trivial spherical topological charge labelled by a representation of $G$, depending on the choice of $a$ coefficients. If the $a$ coefficients are invariant under the $\rhd$ action, so that $a_{d \rhd e'}=a_{e'}$ for all $d \in G$ and $e' \in E$, then the term in brackets is zero unless $R$ is the trivial irrep, because $a$ must only contain the trivial irrep of $G$. We note that this is the same condition for the start-point to be unexcited, as discussed in Section \ref{Section_3D_Loop_Tri_Non_Trivial}, and so if the start-point is not excited then there is no point-like charge for the loop (this is to be expected, because if the start-point is not excited then it should not carry a charge to be balanced by the loop). However, if $a_{d \rhd e'} \neq a_{e'}$ in general, then there will be some contribution from a non-trivial irrep. In particular, if $\sum_{d \in G} a_{d \rhd e'} =0$ for all $e' \in E$ (which is the condition for the start-point to definitely be excited, as proven in Section S-I C of the Supplementary Material of Ref. \cite{HuxfordPaper2}) then the contribution from the trivial irrep is proportional to $$\sum_{d \in G} \chi_{1_\text{Rep}} (d) a_{d \rhd e'}= \sum_{d \in G} a_{d \rhd e'} =0$$ and so there is no contribution from the trivial charge measurement operator. This means that the point-like charge is definitely non-trivial if the start-point is excited. \subsection{Topological Charge Within a Torus} \label{Section_Torus_Charge} Now we consider measuring the topological charge using a toroidal surface. To do this, we first choose such a surface to measure on.
Then we project onto the space where the surface has no excitations on it, so that we only measure the charge if no objects intersect the surface itself. This is to avoid the case where a loop excitation is only partially inside the measurement surface, because then we cannot unambiguously define the charge within the torus, as explained earlier in Section \ref{Section_3D_Topological_Sectors}. A torus will allow us to measure both loop-like and point-like charge. One important thing to note is that the excitations that we measure need not lie inside the torus itself. Indeed, we measure the loop-like charge of loop excitations that link with one of the cycles of the torus. For the meridian of the torus, those excitations will live inside the torus. However, the excitations that link with the longitude will be outside of the torus. This means that the torus surface can actually measure link-like excitations, made from loops outside the torus linking with those inside, rather than just loop-like excitations. \hfill The topological charges measured by the torus are more numerous and mathematically complicated to derive than those measured by the sphere. We therefore first consider the case where $\rhd$ is trivial (Case 1 from Table \ref{Table_Cases}) as an introduction. As illustrated in Figure \ref{unfoldedtorus1}, we represent the torus surface as a rectangle with opposite sides identified. These sides are then one particular choice for the two independent cycles of the torus. We will choose to apply any membrane operators on this rectangle with the boundary at the cycles, before closing the rectangle by gluing the opposite edges. This leaves ``seams'' at the two cycles, which may have special properties because the action of the membrane operators on either side of the seam may not match. We can understand this by imagining taking a membrane operator applied on a rectangle and folding it up to glue the opposite edges together.
There is no guarantee that a membrane operator acts the same on opposite sides of the rectangle, and this disparity may remain when we glue the sides together. We will see that this leads to additional joining conditions, which are required to prevent excitations from appearing at these seams. \hfill To find the measurement operators, we first have to project onto the case where the surface itself is not excited, and then see what degrees of freedom are left over. After projecting onto the subspace in which all of the plaquettes on the surface are flat, the labels of the two cycles of the torus are still undetermined. We therefore apply two closed electric operators $\delta(\hat{g}(c_1), g_{c_1})$ and $\delta(\hat{g}(c_2), g_{c_2})$, where $c_1$ and $c_2$ are the two cycles of the torus. We also apply a closed membrane operator $\delta(\hat{e}(m),e_m)$ on the torus, with the glued boundary of this torus being $c_1c_2c_1^{-1}c_2^{-1}$. \begin{figure}[h] \begin{center} \begin{overpic}[width=0.3\linewidth]{unfolded_torus_image.png} \put(50,95){\large $c_1$} \put(100,50){\large $c_2$} \put(50,-10){\large $c_1^{-1}$} \put(-15,50){\large $c_2^{-1}$} \put(0,95){\large $s.p$} \put(46,46){\large $m$} \end{overpic} \caption{The surface of the torus is conveniently represented by a square with periodic boundary conditions. The edges of this square (which are glued due to the periodic boundary conditions) are referred to as the seams of the torus. We apply electric ribbon operators along these seams to measure the non-contractible cycles of the torus and apply an $E$-valued membrane operator on the surface. We will also apply blob ribbon operators around the two cycles and a magnetic membrane operator over the surface.
The edges cut by the dual membrane of the magnetic membrane point outwards from the page.} \label{unfoldedtorus1} \end{center} \end{figure} \hfill Requiring fake-flatness on the torus (this requirement follows from the plaquette terms) leads to the following constraint on the surface label $e_m$ of the torus and the labels $g_{c_1}$ and $g_{c_2}$ of the two cycles: $$\partial(e_m) g_{c_1} g_{c_2}g_{c_1}^{-1} g_{c_2}^{-1}=1_G.$$ Together with the other conditions that we will discuss in this section, we prove this constraint in Section \ref{Section_3D_Topological_Charge_Torus_Tri_trivial} in the Supplementary Material. We can use the fact that $\rhd$ is trivial to rewrite this constraint in a simpler way. When $\rhd$ is trivial, conjugating an element $\partial(e) \in \partial(E)$ by any element $g \in G$ is trivial, because $g\partial(e)g^{-1}=\partial(g \rhd e)=\partial(e)$ for all $g \in G$ and $e \in E$. Then defining $[g,h]=ghg^{-1}h^{-1}$, we can write the above constraint in various ways. For example, we have \begin{align*} \partial(e_m)& g_{c_1} g_{c_2}g_{c_1}^{-1} g_{c_2}^{-1}=1_G\\ &\implies \partial(e_m) =g_{c_2}g_{c_1}g_{c_2}^{-1}g_{c_1}^{-1}=[g_{c_2},g_{c_1}]\\ &\implies (g_{c_2}g_{c_1})^{-1}\partial(e_m)g_{c_2}g_{c_1} =g_{c_2}^{-1}g_{c_1}^{-1}g_{c_2}g_{c_1}\\ &\implies \partial((g_{c_2}g_{c_1})^{-1} \rhd e_m)=[g_{c_2}^{-1},g_{c_1}^{-1}]\\ &\implies \partial(e_m)=[g_{c_2}^{-1},g_{c_1}^{-1}], \end{align*} where in the fourth line we used one of the Peiffer conditions (Equation \ref{Equation_Peiffer_1} in Section \ref{Section_Recap_3d}) and in the last line we used the fact that $\rhd$ is trivial. We can also write the condition as \begin{equation} \partial(e_m)^{-1}=[g_{c_1}^{-1},g_{c_2}^{-1}]. \end{equation} Next we apply our magnetic membrane operator $C^h(m)$. 
Because we already projected to the subspace where the torus satisfies fake-flatness, some of the details of the operator are arbitrary; in particular the set of paths on the direct membrane, which affect the action on the edges, can be freely chosen, as long as these paths do not cross the seams of the membrane (we take this convention because two choices of path that differ by a non-contractible cycle may give different results and this gives us a consistent way of choosing the paths). \hfill Finally, we apply blob ribbon operators around the cycles, so that our measurement operator so far is given by \begin{align*} B^{e_{c_1}}(c_1)B^{e_{c_2}}(c_2)&C^h(m) \delta(\hat{e}(m),e_m)\\ & \delta(\hat{g}(c_1),g_{c_1}) \delta(\hat{g}(c_2),g_{c_2}). \end{align*} In principle we could put closed blob ribbon operators anywhere on the membrane, rather than just on the cycles $c_1$ and $c_2$. However, any blob ribbon operators with label in the kernel of $\partial$ can be freely deformed on the surface without affecting the action of the ribbon operators, because these operators are topological and there are no edge excitations on the surface (edge excitations in particular are relevant, because we deform blob ribbon operators by applying edge transforms, which are trivial when the edges are unexcited). This means that any such blob ribbon operator that does not wrap around a non-contractible cycle may be contracted into nothing, while an operator that does wrap around a non-contractible cycle on the torus may be deformed to wrap around the chosen cycles $c_1$ and $c_2$ (if the ribbon operator wraps both cycles, or wraps one multiple times, we split it into multiple ribbon operators on the cycles). This is not true for the other blob ribbon operators (those with label outside the kernel), because they are confined and so are not topological. 
Instead we find that this confinement leads to their position being fixed and their labels being restricted (as described in Section \ref{Section_3D_Topological_Charge_Torus_Tri_trivial} of the Supplementary Material), in order not to create any excitations. This is because the magnetic membrane operator may create plaquette excitations on the seams of the torus, but these excitations can be removed if the confined blob ribbon operators lie along the seam and have appropriate labels to cancel the effect of the magnetic membrane operator. We cannot place confined ribbon operators elsewhere (away from these seams) without producing new excitations. The appropriate labels for blob ribbon operators $B^{e_{c_1}}(c_1)$ and $B^{e_{c_2}}(c_2)$, applied around the cycles $c_1$ and $c_2$ respectively, satisfy \begin{align} \partial(e_{c_2})&=[g_{c_1},h] \\ \partial(e_{c_1})&=[h,g_{c_2}]. \end{align} \hfill So far we have restricted the labels by requiring that our operator does not violate the plaquette conditions. However, we also need the combined operator to commute with the vertex and edge transforms on the surface so that the operator does not create vertex and edge excitations. This forces us to use linear combinations of operators with different labels. In particular, we show in Section \ref{Section_3D_Topological_Charge_Torus_Tri_trivial} of the Supplementary Material that we need an equal sum of the operators with sets of labels in certain equivalence classes. If two sets of labels must appear with equal coefficients in the linear combination, we denote this with an equivalence relation. We find the relations \begin{align} (g_{c_1},g_{c_2},h)& \sim (x^{-1}g_{c_1}x,x^{-1}g_{c_2}x,x^{-1}hx) \: \: \forall x \in G \\ g_{c_1}& \sim \partial(e) g_{c_1} \: \: \forall e \in E\\ g_{c_2}& \sim \partial(e') g_{c_2} \: \: \forall e' \in E\\ h &\sim \partial(e'')h \: \: \forall e'' \in E. 
\end{align} These conditions show a striking resemblance to the relations that appear in the calculation of the ground state degeneracy of the 3-torus in Ref. \cite{Bullivant2017} and indeed they map perfectly onto them in the $\rhd$ trivial case. This indicates that the number of topological charges we can measure within a 2-torus is the same as the ground state degeneracy of the 3-torus in the $\rhd$-trivial case, as found more generally in Ref. \cite{Bullivant2020}. \hfill We can repeat this calculation for the special case (Case 2 from Table \ref{Table_Cases}) where we only enforce that $E$ is Abelian and $\partial$ maps into the centre of $G$. Following the same argument as for the previous case (with full proofs given in Section \ref{Section_3D_Topological_Charge_Torus_Tri_nontrivial} in the Supplementary Material), we obtain the restrictions \begin{align} \partial(e_m)&=[g_{c_2},g_{c_1}];\\ \partial(e_{c_2})&=[g_{c_1},h];\\ \partial(e_{c_1})&=[h,g_{c_2}];\\ 1_E &=[h \rhd e_m^{-1}] \: e_m e_{c_1}^{-1} [g_{c_1}^{-1} \rhd e_{c_1}] e_{c_2}^{-1} [g_{c_2}^{-1} \rhd e_{c_2}], \end{align} together with the equivalence relations \begin{align} &((g_{c_1},g_{c_2},h),(e_{c_1},e_{c_2},e_m)) \notag \\ & \sim (g(g_{c_1},g_{c_2},h)g^{-1},g \rhd(e_{c_1},e_{c_2},e_m))\\ &(g_{c_1},e_{c_2},e_m) \notag \\ & \sim (\partial(e)^{-1}g_{c_1},e_{c_2} [h \rhd e] \: e^{-1}, e_m e^{-1} [g_{c_2}^{-1} \rhd e])\\ &(g_{c_2},e_{c_1},e_m) \notag \\ & \sim (\partial(r)g_{c_2},e_{c_1} [h \rhd r] \: r^{-1}, e_m r^{-1} [g_{c_1}^{-1} \rhd r])\intertext{and} &(h, e_{c_1},e_{c_2}) \notag \\ & \sim (\partial(e)h, e_{c_1} [g_{c_2}^{-1} \rhd e] \: e^{-1}, e_{c_2} [g_{c_1}^{-1}\rhd e^{-1}] \: e). \end{align} These restrictions can again be mapped onto the ground state conditions given in Ref. \cite{Bullivant2017}, as we demonstrate in Section \ref{Section_3D_Topological_Charge_Torus_Tri_nontrivial} in the Supplementary Material.
This indicates that again there are the same number of ground states on the 3-torus as there are charges that can be measured by the 2-torus. We note that this relationship between the ground state degeneracy on the manifold $M \times S^1$ and the charge sectors measured by the surface $M$ (with $M$ in this case being the 2-torus $T^2= S^1 \times S^1$ and $M \times S^1$ being the 3-torus) is something we may expect for a TQFT \cite{Atiyah1988}. \hfill We can use these conditions for the measurement operators to produce a set of projection operators that span the space of allowed measurement operators, just as we did for the spherical topological charge. Each such projector then corresponds to a particular value of topological charge. We find that these projectors are labelled by certain mathematical objects that were used by Bullivant et al. \cite{Bullivant2020} when examining the ground states of the HLGT model. Specifically, each projector is labelled by a class $C$ of a particular space (to be described shortly) and an irrep $R$ of a particular group. To define these objects, we must follow some of the workings from Ref. \cite{Bullivant2020}. We note that some of our notation is slightly different from that paper, in order to match notation that we have previously used (and that was used in Ref. \cite{Bullivant2017}). To understand $C$, we must first define boundary $\mathcal{G}$-colourings. These are sets of three elements $(g_y,g_z,e_x)$, where $g_y,g_z \in G$ and $e_x \in E$. If this set satisfies $g_z=g_y^{-1} \partial(e_x^{-1})g_zg_y$, it is called a boundary $\mathcal{G}$-colouring \cite{Bullivant2020}. These sets are then divided into classes, by the equivalence relation \cite{Bullivant2020} \begin{align} (g_y,g_z,e_x) \sim (a^{-1}&\partial(b_2^{-1})g_ya, \: a^{-1}\partial(b_1^{-1})g_za, \notag\\ & a^{-1} \rhd (b_1^{-1}(g_z \rhd b_2^{-1})e_x(g_y \rhd b_1) b_2)), \label{Equation_sim_relation_Bullivant} \end{align} for each $a \in G$ and $b_1,b_2 \in E$. 
Then the label $C$ of the projector is one of these classes. The elements in $C$ are denoted by $(c_{y,i},c_{z,i},d_{x,i})$ and $(c_{y,1},c_{z,1},d_{x,1})$ is called the representative element of the class ($i$ is an index that runs from 1 to the size $|C|$ of the class $C$) \cite{Bullivant2020}. \hfill Expressions similar to the right-hand side of Equation \ref{Equation_sim_relation_Bullivant} will appear frequently in this section, so we introduce some shorthand from Ref. \cite{Bullivant2020}, defining \begin{equation} g^{k;f}= k^{-1}\partial(f)^{-1}g k, \label{Equation_superscript_notation_1_main_text} \end{equation} where $g$ and $k$ are elements of $G$ and $f$ is an element of $E$. We also introduce the notation from Ref. \cite{Bullivant2020} that \begin{equation} e^{k,h_1,h_2;f_1,f_2}= k^{-1} \rhd \big(f_{1}^{-1} (h_2 \rhd f_{2})^{-1} e [h_{1} \rhd f_{1}] f_{2} ), \label{Equation_superscript_notation_2_main_text} \end{equation} where $k$, $h_1$ and $h_2$ are elements of $G$ and $f_1$ and $f_2$ are elements of $E$. Then using this notation, Equation \ref{Equation_sim_relation_Bullivant} can be written as \begin{align} (g_y,g_z,e_x) \sim (g_y^{a;b_2}, \: g_z^{a;b_1},e_x^{a, g_y,g_z; b_1,b_2} ). \label{Equation_sim_relation_Bullivant_2} \end{align} In addition to the boundary $\mathcal{G}$-colourings, Bullivant et al. introduce sets of three elements $(g_x,e_y,e_z)$, where $g_x \in G$ and $e_y,e_z \in E$, with these sets of elements being called ``bulk $\mathcal{G}$-colourings" \cite{Bullivant2020}. These colourings are also divided into classes, this time using an equivalence relation that depends on the boundary colouring. The equivalence relation is \cite{Bullivant2020} $$(g_x,e_y,e_z) \underset{g_y,g_z}{\sim} (\partial( \lambda) g_x,[g_z \rhd \lambda] e_y \lambda^{-1}, [g_y \rhd \lambda] e_z \lambda^{-1})$$ for each $\lambda \in E$. The corresponding set of equivalence classes is denoted by $\mathfrak{B}_{g_y,g_z}$. 
Then for a class $\mathcal{E}_{g_y,g_z}$ in the set $\mathfrak{B}_{g_y,g_z}$, the elements in $\mathcal{E}_{g_y,g_z}$ are denoted by $$(s_{x,i},f_{y,i},f_{z,i}),$$ for $i=1,2,...,|\mathcal{E}_{g_y,g_z}|.$ The element $(s_{x,1},f_{y,1},f_{z,1})$ is called the representative element for this class. A subset of these classes form a group called the stabiliser group of the class $C$, $Z_C$ \cite{Bullivant2020}: \begin{align*} Z_C :=& \{\mathcal{E}_C \in\mathfrak{B}_C |(c_{y,1},c_{z,1},d_{x,1}) =\\ & (c_{y,1}^{s_{x,1};f_{z,1}},c_{z,1}^{s_{x,1};f_{y,1}} d_{x,1}^{s_{x,1},c_{y,1},c_{z,1};f_{y,1},f_{z,1}})\}, \end{align*} where $\mathfrak{B}_C = \mathfrak{B}_{c_{y,1},c_{z,1}}$ and the subscript $C$ in $\mathcal{E}_C$ is to remind us that $\mathcal{E}_C$ belongs to $\mathfrak{B}_C$ (and is interchangeable with the subscript ${c_{y,1},c_{z,1}}$). The product for this group is defined so that the product of two classes $\mathcal{E}_C$ and $\mathcal{E}'_C \in Z_C$, $\mathcal{E}_C \cdot \mathcal{E}'_C$, is the equivalence class in $\mathfrak{B}_C$ whose representative element is $$(s_{x,1} s_{x,1}', f_{y,1} (s_{x,1} \rhd f_{y,1}'),f_{z,1}(s_{x,1} \rhd f_{z,1}')).$$ The label $R$ of a projector to definite topological charge is an irrep of this stabilizer group. \hfill Using the objects that we have discussed so far, we can finally define our projectors. First we define our product of individual membrane and ribbon operators: \begin{align} Y&^{(h, \: g_{c_1}, \: g_{c_2}, \: e_m, \: e_{c_1}, \: e_{c_2}^{-1})}(m) \notag\\ &=B^{e_{c_1}}(c_1)B^{e_{c_2}^{-1}}(c_2)C^h_T(m)\notag\\ & \hspace{0.4cm} \delta(\hat{e}(m),e_m) \delta(\hat{g}(c_1),g_{c_1}^{-1}) \delta(\hat{g}(c_2),g_{c_2}^{-1}). 
\end{align} Then we take appropriate linear combinations to construct the projector labelled by $R$ and $C$: \begin{align*} P&^{R,C}(m)\\ &= \sum_{i=1}^{|C|} \sum_{\substack{\mathcal{E}_{c_{y,i},c_{z,i}}\\ \in \mathfrak{B}_{c_{y,i},c_{z,i}}}} \sum_{m=1}^{|R|} \sum_{\lambda \in E} \delta(c_{y,i},c_{y,i}^{s_{x,1};f_{z,1}})\\ &\delta(c_{z,i},c_{z,i}^{s_{x,1};f_{y,1}})\delta(d_{x,i},d_{x,i}^{s_{x,1},c_{y,i},c_{z,i};f_{y,1},f_{z,1}})\\ &D^R_{m,m}([\mathcal{E}_C^{\text{stab.}}]_{i,i}) \\ &Y^{(\partial(\lambda) s_{x,1},\:c_{y,i}, \: c_{z,i}, \:d_{x,i},\: [c_{z,i} \rhd \lambda] f_{y,1} \lambda^{-1}, \: [c_{y,i} \rhd \lambda] f_{z,1} \lambda^{-1} )}(m). \end{align*} In this expression, $[\mathcal{E}_C^{\text{stab.}}]_{i,i}$ is the class in $\mathfrak{B}_C$ with representative element \begin{align*} \big(p_{x,i}^{-1}s_{x,1}p_{x,i}, \: \: p_{x,i}^{-1}& \rhd (q_{z,i}^{-1} f_{z,1}(s_{x,1} \rhd q_{z,i})),\\ & p_{x,i}^{-1} \rhd (q_{y,i}^{-1} f_{y,1} [s_{x,1} \rhd q_{y,i}])\big), \end{align*} where the $p$ and $q$ elements are defined as representatives which satisfy $$(c_{y,1},c_{z,1},d_{x,1})=(c_{y,i}^{p_{x,i};q_{z,i}},c_{z,i}^{p_{x,i};q_{y,i}},d_{x,i}^{p_{x,i},c_{y,i},c_{z,i};q_{y,i},q_{z,i}})$$ and $(p_{x,1},q_{y,1},q_{z,1})=(1_G,1_E,1_E)$ (that is the $p$ and $q$ move us around in the class $C$ to get from element $i$ to the representative element labelled by 1). In Section \ref{Section_3D_Topological_Charge_Torus_Projectors} of the Supplementary Material we perform the lengthy algebraic task of proving that the operators $P^{R,C}(m)$ labelled by the objects $R$ and $C$ form an orthogonal and complete set of projectors, indicating that the topological charges that we can measure with the torus are appropriately labelled by a class $C$ of boundary $\mathcal{G}$-colourings and an irrep $R$ of the corresponding stabiliser group. 
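To make these definitions concrete, the following sketch works through them for a small toy crossed module of our own choosing (purely illustrative, not one of the examples from this paper): $G = Q_8$ realised as unit quaternions, $E = \mathbb{Z}_2$ with $\partial(1) = -1$ and trivial $\rhd$. It verifies the chain of fake-flatness constraint rewritings from earlier in this section by brute force, then enumerates the boundary $\mathcal{G}$-colourings and partitions them into the classes $C$ that label the projectors.

```python
from itertools import product

# Toy crossed module, chosen by us purely for illustration:
# G = Q8, realised as unit quaternions (a, b, c, d) <-> a + bi + cj + dk,
# E = Z2 = {0, 1} (written additively), boundary d(0) = +1, d(1) = -1,
# and trivial |> (so E is Abelian and d maps into the centre of G).
def qmul(p, q):
    """Hamilton product of two quaternions."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def inv(p):
    """Inverse of a unit quaternion (its conjugate)."""
    return (p[0], -p[1], -p[2], -p[3])

def comm(g, h):
    """Group commutator [g, h] = g h g^{-1} h^{-1}."""
    return qmul(qmul(g, h), qmul(inv(g), inv(h)))

Q8 = [tuple(s * x for x in e)
      for e in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
      for s in (1, -1)]
bdy = {0: (1, 0, 0, 0), 1: (-1, 0, 0, 0)}   # the boundary map d: E -> G

# 1) Fake-flatness rewritings for trivial |>:
#    d(e_m) = [g_c2, g_c1] = [g_c2^{-1}, g_c1^{-1}] and
#    d(e_m)^{-1} = [g_c1^{-1}, g_c2^{-1}].
for g1, g2 in product(Q8, repeat=2):
    c = comm(g2, g1)
    assert c in bdy.values()                 # some e_m solves the constraint
    assert c == comm(inv(g2), inv(g1))
    assert inv(c) == comm(inv(g1), inv(g2))

# 2) Boundary G-colourings: (g_y, g_z, e_x) with
#    g_z = g_y^{-1} d(e_x^{-1}) g_z g_y  (in Z2, e_x^{-1} = e_x).
colourings = [(gy, gz, ex)
              for gy, gz in product(Q8, repeat=2) for ex in (0, 1)
              if gz == qmul(qmul(inv(gy), bdy[ex]), qmul(gz, gy))]

# 3) With trivial |> and E Abelian, the equivalence relation reduces to
#    (g_y, g_z, e_x) ~ (a^{-1} d(b2) g_y a, a^{-1} d(b1) g_z a, e_x).
def neighbours(col):
    gy, gz, ex = col
    for a, b1, b2 in product(Q8, (0, 1), (0, 1)):
        yield (qmul(qmul(inv(a), bdy[b2]), qmul(gy, a)),
               qmul(qmul(inv(a), bdy[b1]), qmul(gz, a)), ex)

remaining, classes = set(colourings), []
while remaining:                             # partition into classes by BFS
    frontier = {next(iter(remaining))}
    cls = set(frontier)
    while frontier:
        frontier = {n for c in frontier for n in neighbours(c)} - cls
        cls |= frontier
    classes.append(cls)
    remaining -= cls

print(len(colourings), "boundary colourings fall into", len(classes), "classes C")
```

For this toy choice every commutator in $Q_8$ lies in the image of $\partial$, each pair $(g_y, g_z)$ admits exactly one compatible $e_x$, and the 64 boundary colourings collapse into 16 classes; in the general case one would also construct the stabiliser groups $Z_C$ and their irreps $R$ to complete the projector labels.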
\section{Conclusion} \label{Section_conclusion_3d} In this work, we have discussed in detail the features of the higher lattice gauge theory model in 3+1d. We started by constructing the ribbon and membrane operators which create the simple excitations of the model. We found that there were two categories of excitation, those best labelled by objects related to the group $G$ and those best labelled by objects related to the other group, $E$. The former type of excitation, consisting of point-like electric excitations and loop-like magnetic flux tubes, is analogous to the excitations we expect from ordinary lattice gauge theory. The other type, consisting of point-like 2-gauge fluxes and loop-like 2-gauge charges, is related to properties of the surfaces of the lattice, rather than its paths. We then considered the braiding properties of these excitations. When the map $\rhd$, defined as part of the crossed module, is trivial, these two types of excitations form separate sectors that do not have non-trivial braiding between them (only within each sector). However, when $\rhd$ is non-trivial (although we had to restrict to the case where $E$ is Abelian and $\partial$ maps to the centre of $G$) some magnetic excitations acquire a 2-flux and can braid non-trivially with all other types of excitation. This is reflected in the fact that the membrane operator that produces the magnetic excitation must be modified to depend on the surface elements of the lattice. \hfill Another feature that we looked at was the condensation of certain excitations, and the accompanying confinement of others. We found that this was controlled by the map $\partial$, with no condensation or confinement when $\partial$ maps only to the identity element (at least in the case where $E$ is Abelian).
By altering $\partial$ while keeping the groups fixed, we introduce condensation for some of the loop-like (magnetic and $E$-valued loop) excitations while causing some of the point-like (electric and blob) excitations to become confined. This can be thought of as a condensation-confinement transition, where the confined excitations are those that had non-trivial braiding with the condensed excitations. We also looked at the topological charge carried by the excitations, by constructing projectors to definite topological charge. The available charges depend on the surface of measurement, and we constructed the projectors for a spherical and toroidal surface. Similar to results found in Ref. \cite{Bullivant2020}, we found that the number of charges measured by the 2-torus surface matched the ground state degeneracy on the 3-torus. We saw that these 2-torus surfaces generally measured links, rather than simple loop-like excitations, suggesting that the number of inequivalent link-like excitations is equal to the ground state degeneracy. \hfill In Ref. \cite{HuxfordPaper1}, we already mentioned several potential avenues of interest for further study based on this work. Rather than repeat ourselves here, we would like to discuss one of these directions further. In this paper, we gave braiding relations in terms of the simple excitations produced by the membrane and ribbon operators. However, it would be useful to obtain the braiding relations and other properties in terms of the topological charge. To do so, it would first be necessary to find the fusion rules for the various topological charges. Because the topological charge depends on the measurement surface, we would need additional machinery to describe how different charges can fuse in a simple way.
Once this has been done, we can consider a braiding process where the individual excitations are projected onto states of definite charge, and the overall system is similarly projected to definite total charge, in order to find the braiding relations satisfied by the charges (analogous to the approach used for 2+1d theories). In addition to better understanding this particular model, creating this machinery would give us a structure with which to study different models for topological phases in 3+1d and understand what properties we expect. We note that, as mentioned in Ref. \cite{HuxfordPaper1}, from some preliminary calculations we can see that different torus charges can only fuse if they have the same value of a certain ``threading flux", meaning that the quantum numbers passing through the two loops must be the same (for example if they are linked to the same excitation). When this threading flux is non-trivial, we are considering the fusion of two loop-like excitations while both are linked to another excitation. This means that the braiding of these charges would correspond to so-called ``necklace braiding" \cite{Bullivant2020a} (or three-loop braiding \cite{Wang2014}), meaning the braiding of two loops while linked to another. For general models this braiding can give different results compared to the usual two-loop braiding \cite{Wang2014, Jiang2014}. Therefore, having a structure in which to consider this process for generic 3+1d topological models would be most useful. \begin{acknowledgments} The authors would once again like to thank Jo\~{a}o Faria Martins and Alex Bullivant for helpful discussions about the higher lattice gauge theory model. We are also grateful to Paul Fendley for advice on the preparation of this series of papers. We acknowledge support from EPSRC Grant EP/S020527/1. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data. 
\end{acknowledgments}
\section{Introduction}\label{sec1} A deconfined, strongly interacting state of matter called Quark-Gluon Plasma (QGP) is conjectured to be produced at high temperatures and densities in collisions recorded at high-energy physics experiments. QGP formation is probed through several proposed signatures, including the production and ratios of strange particles. The transverse momentum ({$p_{\rm T}$}) spectra of strange particles also play an essential role in the determination of freezeout parameters~\cite{1, 1a, 1b}. Strange hadron production in relativistic high-energy collisions is an invaluable tool to investigate the properties of the collision phases, since strange quarks are not part of the colliding nuclei in the incoming beams. Strange particles have been studied extensively in high energy particle physics, as their production and distribution in phase space provide information on the fireball and the properties of the created medium~\cite{2}. In the initial stages after a collision, flavor creation and excitation are mainly responsible for strangeness production at high {$p_{\rm T}$} (gluon splittings dominate in the later evolution), while at low {$p_{\rm T}$} non-perturbative partonic processes contribute predominantly to strangeness production~\cite{3, 4}. High energy proton-proton $(pp)$ collisions provide a simple system to investigate the structure of nuclear matter~\cite{4a, 4b} and the basic workings of the universe. Many recent developments in fundamental particle physics, including the discovery of the Higgs boson, have demonstrated the importance of $pp$ collision systems. Additionally, the $pp$ collision system serves as a baseline for the investigation of the more complex proton-nucleus $(pA)$ and nucleus-nucleus $(AB)$ collision systems. Transverse momentum ({$p_{\rm T}$}) spectra in $pp$ collisions are used as a reference in heavy-ion $(pA, AB)$ collisions for studies of initial state effects.
High-energy heavy-ion collisions are important for characterizing the quark-gluon plasma (QGP). However, many signatures typical of heavy-ion collisions have also been observed in high multiplicity $pp$ collisions~\cite{1n, 2n}, including the recent observation of enhanced production of strange and multistrange particles~\cite{3n}. Monte Carlo studies of the production of strange and multistrange particles in $pp$ collisions are thus important not only to characterize the $pp$ system but can also be extended to compare the enhanced production of strangeness in different collision systems. Perturbative Quantum Chromodynamics (pQCD) describes particle production at high transverse momentum ({$p_{\rm T}$}) well. In the low-{$p_{\rm T}$} region, where soft processes dominate, phenomenological models are commonly employed. Baryon production in this region notably lacks a full QCD-based description. The ambiguity lies in whether the baryon number should be associated with the valence quarks of a hadron or with its gluonic field. Phenomenological processes involving (anti-)string junctions and exchanges with negative C-parity may give rise to differences between the spectra of particles and the corresponding anti-particles. Spectra of (anti-)baryons can provide information regarding the competing mechanisms responsible for baryon production in $pp$ collisions. The anti-baryon to baryon ratio of hadrons with different valence quark content at different collision energies is considered one of the most direct methods to constrain the baryon production mechanisms~\cite{ALICEn}. Particle ratios simplify the comparison of various experiments performed under different trigger and centrality conditions, and the systematic errors associated with absolute particle yields are reduced when working with ratios.
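The cancellation of common systematic effects in ratios can be illustrated with a minimal sketch (the yields below are hypothetical, not taken from any experiment): a multiplicative factor shared by particle and anti-particle, such as a common reconstruction efficiency, drops out of the ratio, while the independent Poisson errors on the raw counts combine in quadrature.

```python
import math

# Minimal sketch with hypothetical yields (not from any experiment): a
# multiplicative factor common to particle and anti-particle cancels in
# the ratio, while independent Poisson errors combine in quadrature.
def ratio_with_error(n_antibaryon, n_baryon, common_efficiency=1.0):
    scaled_anti = n_antibaryon * common_efficiency
    scaled_part = n_baryon * common_efficiency
    r = scaled_anti / scaled_part            # the efficiency cancels here
    # relative Poisson errors of numerator and denominator add in quadrature
    sigma = r * math.sqrt(1.0 / n_antibaryon + 1.0 / n_baryon)
    return r, sigma

r, dr = ratio_with_error(4500, 5000)         # hypothetical Lambda-bar / Lambda counts
print(f"ratio = {r:.3f} +/- {dr:.3f}")       # prints ratio = 0.900 +/- 0.018
```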
Strange to non-strange and mixed particle ratios play an essential role in the quantitative analysis of strangeness production and serve as crucial probes for strangeness enhancement studies~\cite{10}. Particle ratios, such as ($p/\pi$) and ($\Lambda$/$K_S^0$), provide essential insights regarding production mechanisms and spectral shapes, especially in the intermediate transverse momentum region~\cite{9}. Confronting predictions from phenomenological models with experimental observations provides insights into parameter tuning for further improvement of the models~\cite{4}. QGP features and other insights gained from the study of strange particle production ratios may prove helpful in the parameter tuning of Monte Carlo models. In this study, anti-baryon to baryon ratios are presented as a function of the strangeness number: 0 $(\bar{p}/p)$, 1 $(\bar{\Lambda}/\Lambda)$, 2 $(\bar{\Xi}/\Xi)$ and 3 $(\bar{\Omega}/\Omega)$. The ratios $(\bar{\Lambda}/\Lambda)$ and $(\bar{\Xi}/\Xi)$ as a function of {$p_{\rm T}$} are calculated with hadronic models and contrasted with the corresponding experimental data. The ratio $(\bar{\Lambda}/\Lambda)$ gives information on the baryon-number transport from the initial $pp$ collision state to the final-state hadrons. Furthermore, the ratio ($\Lambda$/$K_S^0$) shows the suppression of baryons relative to mesons in strange quark hadronization~\cite{LHCbn}. In this paper, the ratios ($\Lambda$/$K_S^0$) are presented as a function of {$p_{\rm T}$}, rapidity $(y)$ and rapidity loss $(\Delta{y})$ in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~= 200 GeV, 900 GeV and 7 TeV, as calculated with the HIJING, Sibyll2.3d and QGSJETII-04 models. A thorough comparison is made between the experimental and simulation results, and the underlying features giving rise to differences among the models are discussed.
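Since the ratios are presented against rapidity and rapidity loss, a short sketch of these kinematic variables may help (the masses and the example kinematics below are our own illustrative inputs, not values from this work): the rapidity is $y = \frac{1}{2}\ln\frac{E+p_z}{E-p_z}$, and the rapidity loss is measured relative to the beam rapidity, $\Delta y = y_{\rm beam} - y$.

```python
import math

# Kinematics sketch; the masses and example kinematics are illustrative
# inputs of our own, not values quoted in the text.
M_P = 0.9383        # proton mass [GeV/c^2]
M_LAMBDA = 1.1157   # Lambda mass [GeV/c^2]

def rapidity(E, pz):
    """y = (1/2) ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def beam_rapidity(sqrt_s, m=M_P):
    """Rapidity of a beam proton carrying E = sqrt(s) / 2."""
    E = sqrt_s / 2.0
    pz = math.sqrt(E * E - m * m)
    return rapidity(E, pz)

def rapidity_loss(sqrt_s, E, pz):
    """Delta y = y_beam - y."""
    return beam_rapidity(sqrt_s) - rapidity(E, pz)

# Example: a Lambda with pT = 1 GeV/c at y = 3 in pp collisions at 7 TeV.
pt, y = 1.0, 3.0
mt = math.sqrt(M_LAMBDA**2 + pt**2)          # transverse mass
E, pz = mt * math.cosh(y), mt * math.sinh(y)
# y_beam is about 8.92 at sqrt(s) = 7 TeV for these inputs
print(f"y_beam = {beam_rapidity(7000.0):.2f}, Delta y = {rapidity_loss(7000.0, E, pz):.2f}")
```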
Particle production in high energy collisions includes contributions from both the hard processes (explained with perturbative Quantum Chromodynamics, pQCD) and the soft processes that are currently understood with phenomenological models~\cite{9a, 9b}. Confronting predictions from such hadronic models with experimental observations provides insights into parameter tuning for further improvement of the models~\cite{4, 9c, 9d, 9e, 9f, 9g, 9h, 9i, 9j, 9k}. The paper is organized into four sections: In section 2, the phenomenological models used for the study are described briefly. Section 3 gives the results and discussion, and finally, conclusions are drawn in section 4. \section{Methods and Models} The hadronic models used in this study, HIJING, Sibyll2.3d, and QGSJETII-04, are briefly described here. The {\bf HIJING} model is a Monte Carlo framework for heavy-ion collisions, built on C++ with CPU-based parallelism. HIJING is designed to study jet and multi-particle production in $pp$, $pA$, and $AA$ high energy collisions. HIJING can adjust the parameters for multi-particle production in $pp$ collisions to retrieve essential information on the initial conditions in the energy range ($\sqrt{s_{NN}}= 5-2000 $~GeV)~\cite{11}. HIJING, a heavy-ion jet interaction generator, utilizes the Lund model for jet fragmentation and QCD-inspired models for jet production. To study nuclear effects, shadowing is also incorporated in the parton structure functions~\cite{12}. In addition, the HIJING model includes multi-string phenomenology for low-$p_{T}$ interactions, thereby combining the physics of fragmentation at intermediate energies with perturbative physics at high collision energies~\cite{13}. The model also includes the effects of multiple mini-jet production, soft excitations, and jet interactions in dense hadronic matter~\cite{12, 14}.
The {\bf Sibyll2.3d} model is based on mini-jet modeling with the ability to handle multiple hard parton interactions per hadron collision. In this model, the effects of energy-momentum conservation drive the change in the distribution of leading particles. In the Sibyll model, the average particle multiplicity is high at higher energies~\cite{15}. The primary purpose of the event generator Sibyll2.3d is to take into account the main characteristics of hadronic particle production and strong interactions. These characteristics are required for the study of air-shower cascades and fluxes of secondary particles, which are produced via the interaction of cosmic rays with the Earth's atmosphere~\cite{16}. The model combines Gribov-Regge theory and perturbative QCD, with elastic and total cross-sections for p-p, K-p, and $\pi$-p interactions included to reproduce the new LHC data~\cite{17}. In this version of the Sibyll model, charm-hadron production is also included for studies of atmospheric neutrinos at high energies. In addition to charm hadrons, the abundance of muons is increased compared to previous versions to reduce the difference between simulation and data~\cite{16}. The Sibyll model also describes the diffractive production of excited projectile and target states, leading-particle distributions at different energies, and particle production in the forward phase-space~\cite{17}. {\bf QGSJETII-04} is a Monte Carlo event generator based on the Quark-Gluon String model developed for hadronic interactions. The generator relies on the Quark-Gluon String model, the effective field-theoretical framework of Gribov-Regge, and the Lund algorithm to explain the interactions of high-energy particles and to study multiple-scattering phenomena. For nucleus-nucleus interactions and semi-hard processes, the QGSJET model employs the ``semi-hard pomerons" approach~\cite{17, 18}.
The pomeron coupling in the QGSJETII-04 model is derived from the framework of Gribov-Regge theory. The pomeron-pomeron coupling represents the cascade of parton interactions when pomerons overlap in phase-space. The QGSJETII-04 event generator requires fewer parameters for a simulation~\cite{18} and can incorporate mini-jets. The new version of QGSJETII-04 is tuned to experimental data from the LHC experiments. The QGSJETII-04 model reproduces the experimental data at higher momentum to a good extent, while the particle distributions are overestimated at lower momentum~\cite{19, 20}. \section{Results and discussion}\label{sec3} The events have been generated for various strange particles, {$\mathrm{K}^{0}_{\mathrm S}$}, {$\Lambda$ ($\overline{\Lambda}$)} and {$\Xi^-$ {$\overline{\Xi}^+$}}, using different models including HIJING, Sibyll and QGSJET at \mbox{$\sqrt{\mathrm{s}}$}~= 0.2, 0.9, and 7 TeV. The pQCD-based model HIJING and the cosmic-ray air-shower models Sibyll and QGSJET give different predictions at all energies. In order to validate the MC simulation results, a detailed comparison has been made with experimental results from $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~= 200 GeV from the STAR experiment at RHIC~\cite{21} and from the LHCb experiment at \mbox{$\sqrt{\mathrm{s}}$}~= 0.9 and 7 TeV~\cite{22}. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{2b2.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} Ratio as a function of {$p_{\rm T}$}} \label{fig1a} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{2b3.pdf} \caption{{$\overline{\Xi}^+$}/{$\Xi^-$} Ratio as a function of {$p_{\rm T}$}} \label{fig1b} \end{subfigure} \caption{Anti-baryon to baryon ratios as a function of {$p_{\rm T}$} in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$} = 200 GeV from the STAR experiment in comparison to the model predictions.
Black solid markers are the data points and lines of different colors show different model predictions. } \label{fig1} \end{figure} For a gluon jet, there is no leading baryon relative to the anti-baryon, while for a quark jet this expectation is reversed, as there is a leading baryon relative to the anti-baryon. Therefore, the hadron production mechanisms at high {$p_{\rm T}$} are dominated by jet fragmentation, and it is reasonable to expect that with increasing {$p_{\rm T}$} the $\bar B/ B$ ratio will start to decrease. This decreasing trend in $\bar B/ B$ ratios has been predicted previously~\cite{23}. Figures~\ref{fig1a} and~\ref{fig1b} show the anti-baryon to baryon {$\overline{\Lambda}$}/{$\Lambda$} and {$\overline{\Xi}^+$}/{$\Xi^-$} ratios as a function of {$p_{\rm T}$}, respectively. It has been observed that all models describe the experimental data with reasonable agreement, taking into account the error bars and the $\chi^2$/$n$ calculated in each case, given in Table 1. Among the three models, Sibyll gives a better description of the data for both ratios than the other models; for the {$\overline{\Xi}^+$}/{$\Xi^-$} ratio in particular, QGSJET does not produce any result. However, the large error bars in the data, as well as in the models, make it difficult to observe whether the ratios follow a decreasing trend at high {$p_{\rm T}$}. \begin{figure}[ht!] \centering \includegraphics[width=0.49\textwidth]{2b1.pdf} \caption{Particle ratios and {$p_{\rm T}$} vs mass at \mbox{$\sqrt{\mathrm{s}}$} = 200 GeV} \label{fig2} \end{figure} The mean anti-baryon to baryon ($\bar B/ B$) ratio as a function of strangeness content from $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 200 GeV from STAR compared with different models can be seen in figure~\ref{fig2}. The ratio shows a slightly increasing trend with increasing strangeness content in the case of the experimental data, while the models show a completely different trend.
In the case of the experimental data, for protons and {$\Lambda$} baryons, the $\bar B/ B$ ratio is not unity, and different parton distribution functions for light quarks may explain this deviation~\cite{24}. On the other hand, the ratios from all models do not show a strong dependence on the strangeness content. This means that the models still need improvement in their treatment of the strangeness content. A possible explanation for the experimental data as well as the model predictions in fig.~\ref{fig1} and fig.~\ref{fig2} is that the particles are not predominantly produced from quark-jet fragmentation over the measured {$p_{\rm T}$} range. In order to understand the model behavior for different particle ratios, a detailed study has been performed at LHC energies as well. The LHCb experiment at the LHC reported the anti-baryon to baryon and baryon to meson ratios as a function of rapidity and {$p_{\rm T}$} in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 0.9 and 7 TeV~\cite{22}. The {$p_{\rm T}$} is divided into various regions: $0.25 < p_T < 0.65$ GeV/$c$, $0.65 < p_T < 1.0$ GeV/$c$ and $1.0 < p_T < 2.5$ GeV/$c$, in the rapidity region $2 < y < 4$. The anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) and baryon to meson ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios are then compared with different model predictions in the given {$p_{\rm T}$} and $y$ regions. Figure~\ref{fig3} (left column) shows the model predictions of the anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) ratios as a function of rapidity ($y$) in comparison to the data from the LHCb experiment~\cite{22}. A $y$ dependence can be seen in the experimental data. It has been observed that in all {$p_{\rm T}$} regions, the HIJING and Sibyll models do not show a strong rapidity ($y$) dependence at all. The ratio is therefore about unity, which means that in HIJING and Sibyll the same number of {$\overline{\Lambda}$} are produced as {$\Lambda$}.
However, the QGSJET model describes the distribution of the {$\overline{\Lambda}$}/{$\Lambda$} ratios reasonably well. In the $0.65 < p_T < 1.0$ GeV/$c$ region, below $y < 3$, QGSJET slightly overpredicts the experimental results, but at high $y$ it is in good agreement with the experimental observations. Overall, QGSJET is in good agreement with the data in all the {$p_{\rm T}$} regions. Figure~\ref{fig3} (right column) depicts the model predictions of the baryon to meson ({$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios as a function of rapidity ($y$) compared with the experimental results. In all {$p_{\rm T}$} regions, it can be seen that, again, the predictions of HIJING and Sibyll show no strong rapidity ($y$) dependence. However, HIJING predicts the experimental results reasonably well compared to Sibyll and QGSJET. Sibyll completely underpredicts the experimental as well as the HIJING and QGSJET results in the $0.25 < p_T < 0.65$ GeV/$c$ and $0.65 < p_T < 1.0$ GeV/$c$ regions. QGSJET slightly underpredicts the experimental data at $0.25 < p_T < 0.65$ GeV/$c$, while it gives a reasonable description in the $0.65 < p_T < 1.0$ GeV/$c$ region. At $1.0 < p_T < 2.5$ GeV/$c$, all of the model predictions are roughly similar and are consistent with the experimental data. \begin{figure}[ht!] \centering \includegraphics[width=0.49\textwidth]{9a1.pdf} \includegraphics[width=0.49\textwidth]{9a4.pdf} \includegraphics[width=0.49\textwidth]{9a2.pdf} \includegraphics[width=0.49\textwidth]{9a5.pdf} \includegraphics[width=0.49\textwidth]{9a3.pdf} \includegraphics[width=0.49\textwidth]{9a6.pdf} \caption{(Left column) anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) ratios and (right column) anti-baryon to meson ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios as a function of rapidity ($y$) in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 0.9 TeV in different {$p_{\rm T}$} regions.
Black solid markers are the data points and lines of different colors show different model predictions.} \label{fig3} \end{figure} We have also combined all the {$p_{\rm T}$} regions, $0.25 < p_T < 2.5$ GeV/$c$, and studied the model predictions of various ratios as a function of {$p_{\rm T}$}, rapidity ($y$) and rapidity loss ($\Delta y$) in comparison with the experimental results. Figure~\ref{fig4a} shows the {$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity ($y$) over the full {$p_{\rm T}$} range. Only the QGSJET model is consistent with the experimental data, while HIJING and Sibyll do not show a strong rapidity dependence. The {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($y$) is shown in fig.~\ref{fig4b}. The HIJING results are slightly closer to the data but do not show a rapidity dependence. On the other hand, Sibyll and QGSJET underpredict the experimental data and the HIJING results significantly. Figure~\ref{fig4c} depicts the {$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of {$p_{\rm T}$} in the rapidity range $2.0 < y < 4.0$. Only QGSJET agrees with the data at {$p_{\rm T}$} $> 0.8$ GeV/$c$, while HIJING and Sibyll significantly overshoot the data. \begin{figure}[ht!]
\centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a7.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity ($y$)} \label{fig4a} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a8.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($y$)} \label{fig4b} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a9.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of {$p_{\rm T}$}} \label{fig4c} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a10.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$}} \label{fig4d} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a11.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity loss ($\Delta y$)} \label{fig4e} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{9a12.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity loss ($\Delta y$)} \label{fig4f} \end{subfigure} \caption{Different ratios as a function of {$p_{\rm T}$}, rapidity ($y$) and rapidity loss ($\Delta y$) in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 0.9 TeV from the LHCb experiment in comparison to the model predictions. Black solid markers are the data points and lines of different colors show different model predictions. } \label{fig4} \end{figure} HIJING does not show any {$p_{\rm T}$} dependence, while Sibyll and QGSJET do. The {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$} can be seen in fig.~\ref{fig4d}.
QGSJET and HIJING reasonably describe the data as well as the ratio distribution in all {$p_{\rm T}$} regions, while Sibyll slightly underpredicts the data in almost all the {$p_{\rm T}$} regions. Figures~\ref{fig4e} and~\ref{fig4f} show the {$\overline{\Lambda}$}/{$\Lambda$} and {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios as a function of rapidity loss ($\Delta y$), respectively. In the case of {$\overline{\Lambda}$}/{$\Lambda$} in fig.~\ref{fig4e}, only QGSJET describes the data and the distribution, while HIJING and Sibyll overshoot the data. HIJING also does not show any $\Delta y$ dependence. For the {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios in fig.~\ref{fig4f}, almost all of the models underpredict the data significantly, except HIJING, whose predictions are close to the experimental observations. \begin{figure}[ht!] \centering \includegraphics[width=0.49\textwidth]{7a1.pdf} \includegraphics[width=0.49\textwidth]{7a4.pdf} \includegraphics[width=0.49\textwidth]{7a2.pdf} \includegraphics[width=0.49\textwidth]{7a5.pdf} \includegraphics[width=0.49\textwidth]{7a3.pdf} \includegraphics[width=0.49\textwidth]{7a6.pdf} \caption{(Left column) anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) ratios and (right column) anti-baryon to meson ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios as a function of rapidity ($y$) in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV in different {$p_{\rm T}$} regions. Black solid markers are the data points and lines of different colors show different model predictions.} \label{fig5} \end{figure} For the comparison study of various particle ratios in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV, the {$p_{\rm T}$} is divided into various regions: $0.15 < p_T < 0.65$ GeV/$c$, $0.65 < p_T < 1.0$ GeV/$c$ and $1.0 < p_T < 2.5$ GeV/$c$, in the rapidity region $2 < y < 4$.
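The ratios compared throughout this section are formed from particle yields counted in ({$p_{\rm T}$}, $y$) bins. The following is a minimal sketch of such a bin-by-bin ratio with simple Poisson error propagation; the counts are hypothetical illustrations, not the yields used for the figures:

```python
import numpy as np

def ratio_with_errors(n_num, n_den):
    """Bin-by-bin ratio of two yield histograms, with uncorrelated
    Poisson errors on the counts propagated in quadrature."""
    n_num = np.asarray(n_num, dtype=float)
    n_den = np.asarray(n_den, dtype=float)
    r = n_num / n_den
    # sigma_r / r = sqrt(1/N_num + 1/N_den) for independent Poisson counts
    err = r * np.sqrt(1.0 / n_num + 1.0 / n_den)
    return r, err

# Hypothetical anti-Lambda and Lambda yields in four rapidity bins
anti_lam = np.array([400, 380, 360, 340])
lam = np.array([420, 410, 395, 385])
r, e = ratio_with_errors(anti_lam, lam)
```

A decreasing $\bar B / B$ trend, as discussed above, would appear here as `r` falling with the bin index while the error bars `e` set the significance of that trend.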
The anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) and baryon to meson ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios are then compared with different model predictions in the given {$p_{\rm T}$} and $y$ regions. Figure~\ref{fig5} (left column) shows the model predictions of the anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) ratios as a function of rapidity ($y$) in comparison to the data from the LHCb experiment in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV~\cite{22}. A slight $y$ dependence can be observed in the experimental measurements. In the $0.15 < p_T < 0.65$ GeV/$c$ region, the ratio predictions of the models are close to unity, while the experimental data lie below the model predictions. In the $0.65 < p_T < 1.0$ GeV/$c$ region, the Sibyll and QGSJET predictions are close to unity, while a small deviation is observed in the HIJING predictions. HIJING partially describes the shape of the ratio distribution. In the $1.0 < p_T < 2.5$ GeV/$c$ region, the HIJING model behaves differently and overshoots the data, and hence does not describe the distribution shape correctly. On the other hand, the Sibyll and QGSJET ratios are close to unity. Overall, none of the models describes the experimental data and the distribution shape accurately in all {$p_{\rm T}$} regions. Figure~\ref{fig5} (right column) depicts the model predictions of the baryon to meson ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}) ratios as a function of rapidity ($y$) compared with the experimental results. In the $0.15 < p_T < 0.65$ GeV/$c$ region, HIJING and QGSJET describe the experimental data within uncertainties; however, Sibyll completely fails to predict the experimental results and hence underpredicts the data. In the $0.65 < p_T < 1.0$ GeV/$c$ region, the QGSJET predictions match the data within uncertainties, while HIJING slightly undershoots the data. Sibyll again completely fails to predict the experimental results and undershoots the experimental observations.
In the $1.0 < p_T < 2.5$ GeV/$c$ region, the Sibyll predictions move slightly closer to HIJING but remain lower than the HIJING and QGSJET predictions, and hence the experimental data. Only the QGSJET model completely describes the experimental observations. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a7.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity ($y$)} \label{fig6a} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a8.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($y$)} \label{fig6b} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a9.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of {$p_{\rm T}$}} \label{fig6c} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a10.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$}} \label{fig6d} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a11.pdf} \caption{{$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity loss ($\Delta y$)} \label{fig6e} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7a12.pdf} \caption{{$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity loss ($\Delta y$)} \label{fig6f} \end{subfigure} \caption{Different ratios as a function of {$p_{\rm T}$}, rapidity ($y$) and rapidity loss ($\Delta y$) in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV from the LHCb experiment in comparison to the model predictions. Black solid markers are the data points and lines of different colors show different model predictions.
} \label{fig6} \end{figure} Figure~\ref{fig6a} shows the {$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of rapidity ($y$) over the full {$p_{\rm T}$} range, $0.15 < p_T < 2.50$ GeV/$c$. All the model predictions are close to unity and roughly the same, while the data lie below the model predictions and show a decreasing trend with rapidity $y$. None of the models shows any rapidity dependence. The {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($y$) is shown in fig.~\ref{fig6b}. Only the prediction of QGSJET is in good agreement with the data, while the HIJING and Sibyll predictions undershoot the data and QGSJET significantly. Figure~\ref{fig6c} depicts the {$\overline{\Lambda}$}/{$\Lambda$} ratio as a function of {$p_{\rm T}$} in the rapidity range $2.0 < y < 4.0$. None of the models describes the experimental data. The model predictions are close to unity, while the data lie below, around 0.95. Neither the data nor the models show a {$p_{\rm T}$} dependence. The {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$} can be seen in fig.~\ref{fig6d}. QGSJET and HIJING reasonably describe the data within uncertainties, as well as the ratio distribution, in all {$p_{\rm T}$} regions, while Sibyll slightly underpredicts the data in almost all the {$p_{\rm T}$} regions. Figures~\ref{fig6e} and~\ref{fig6f} show the {$\overline{\Lambda}$}/{$\Lambda$} and {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios as a function of rapidity loss ($\Delta y$), respectively. In the case of {$\overline{\Lambda}$}/{$\Lambda$} in fig.~\ref{fig6e}, all of the model predictions are close to unity, and the data lie significantly below the model predictions. None of the models explains this distribution well.
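For reference, the rapidity loss plotted here is measured relative to the beam rapidity; assuming the standard convention used in the LHCb measurements (it is not spelled out in the text),
\begin{equation*}
\Delta y = y_{\mathrm{beam}} - y, \qquad y_{\mathrm{beam}} \simeq \ln\!\left(\sqrt{s}/m_p\right),
\end{equation*}
so that larger $\Delta y$ corresponds to particles produced further away from the beam rapidity.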
For the {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios in fig.~\ref{fig6f}, almost all of the models underpredict the data significantly, except QGSJET, whose predictions are consistent with the experimental observations. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7b9.pdf} \caption{{$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($|y|$)} \label{fig7a} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7b7.pdf} \caption{{$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$}} \label{fig7b} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7b10.pdf} \caption{{$\Xi^-$}/{$\Lambda$} ratio as a function of rapidity ($|y|$)} \label{fig7c} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{7b8.pdf} \caption{{$\Xi^-$}/{$\Lambda$} ratio as a function of {$p_{\rm T}$}} \label{fig7d} \end{subfigure} \caption{Different ratios as a function of {$p_{\rm T}$} and rapidity ($y$) in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV from the LHCb experiment in comparison to the model predictions. Black solid markers are the data points and lines of different colors show different model predictions. } \label{fig7} \end{figure} Figure~\ref{fig7a} shows the {$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of rapidity ($|y|$) from $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~ = 7 TeV in comparison with the experimental results. The data show no rapidity dependence and almost follow a straight line. It can also be seen that none of the models shows any rapidity dependence either, but QGSJET is the only model that predicts the same results as the data within uncertainties.
The Sibyll and HIJING predictions are very close to each other but underpredict the experimental data and QGSJET by roughly 15\% and 10\%, respectively. Figure~\ref{fig7b} presents the {$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratio as a function of {$p_{\rm T}$}. It is observed that the {$\Lambda$} spectrum rises more sharply at low {$p_{\rm T}$} ({$p_{\rm T}$} $< 1.5$ GeV/$c$), and that {$\Lambda$} production is relatively larger in the intermediate {$p_{\rm T}$} region ($1.5 \le$ {$p_{\rm T}$} $\le 4.0$ GeV/$c$). The same shape difference between {$\Lambda$} and {$\mathrm{K}^{0}_{\mathrm S}$} can be seen as reported in baryon to meson ratios~\cite{25}. HIJING is observed to predict an almost similar shape and the experimental results from low to intermediate {$p_{\rm T}$}, i.e., up to {$p_{\rm T}$} $\le 2.5$ GeV/$c$. Sibyll slightly underpredicts the data up to {$p_{\rm T}$} $< 1.5$ GeV/$c$ and overshoots at higher {$p_{\rm T}$}. QGSJET, on the other hand, slightly overpredicts the experimental data and both other models up to {$p_{\rm T}$} $< 1.5$ GeV/$c$ and undershoots at higher {$p_{\rm T}$}. This difference in the shape of the experimental results and the model predictions may be related to the fact that there is a significantly larger contribution from fragmentation processes to meson production than to baryon production, based on mass and energy arguments. Furthermore, measurements of the {$\Xi^-$}/{$\Lambda$} ratios have also been performed with these models. Figure~\ref{fig7c} shows the {$\Xi^-$}/{$\Lambda$} ratio as a function of rapidity $|y|$. The QGSJET model does not include the definition of the $\Xi$ particle; therefore, no comparison can be made. However, HIJING and Sibyll predict results similar to the experimental data within uncertainties. Figure~\ref{fig7d} presents the {$\Xi^-$}/{$\Lambda$} ratios as a function of {$p_{\rm T}$}.
Both Sibyll and HIJING describe the data up to {$p_{\rm T}$} $< 1.5$ GeV/$c$, while they underpredict the data at higher {$p_{\rm T}$}. Sibyll shows a smooth distribution at higher {$p_{\rm T}$}, while fluctuations are observed in the HIJING predictions, which may be due to limited statistics. For a quantitative description of the particle ratios, we have calculated the values of $\chi^2/n$ for all models at the energies above. The values are tabulated in Table 1, where the first column shows the energy, followed by the figures and particle ratios in columns 2 and 3, respectively. The following three columns show the values of $\chi^2$/$n$ for the HIJING, Sibyll2.3d, and QGSJETII-04 models, respectively. The last column shows the numerical value of $n$, where $n$ indicates the number of measured points on the x-axis in each case. Taking into account the statistical error bars in the model predictions and the values of $\chi^2$/$n$ for Fig. 1, it is clear that the models are in good agreement with the experimental data, while the large values of $\chi^2$/$n$ for Fig. 2 show that the models fail to reproduce the measurements. At 0.9 and 7 TeV, the QGSJET model consistently has the lowest values of $\chi^2$/$n$ compared to the other two models. For other particle ratios ({$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$}, {$\Xi^-$}/{$\Lambda$}), the HIJING model performs even better than the QGSJET model, but it fails completely to predict the {$\overline{\Lambda}$}/{$\Lambda$} ratio at 0.9 and 7 TeV. For the {$\overline{\Lambda}$}/{$\Lambda$} ratio, the $\chi^2$/$n$ values of the Sibyll model are lower than those of HIJING but comparatively higher than those of the QGSJET model, and they are still very high. \begin{table*}[hbt!] {\scriptsize Table 1. The values of the $\chi^2/n$ for the HIJING, Sibyll2.3d and QGSJETII-04 models at $\sqrt{s}$ = 0.2, 0.9 and 7 TeV.
\begin{center} \begin{tabular} {cccccccccccc}\\ \hline\hline &$Energy$ &$Figure$ & ratio & $\chi^2/n$ & $\chi^2/n$ & $\chi^2/n$\\ &&&& $HIJING$ & $Sibyll2.3d$ & $QGSJETII-04$ & $n$\\ \hline & & 1(a) & {$\overline{\Lambda}$}/{$\Lambda$} vs {$p_{\rm T}$} & 2.29 & 1.78 & 1.99 & 21\\ & 200 GeV & 1(b) & {$\overline{\Xi}^+$}/{$\Xi^-$} vs {$p_{\rm T}$} & 0.99 & 0.71 & - & 11\\ & & 2 & $\bar B/ B$ vs strangeness & 18.75 & 63.23 & 75.76 & 4\\ \hline && 3(a) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 18.86 & 8.19 & 5.62 & 4\\ && 3(b) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 0.52 & 5.00 & 1.85 & 4\\ &900& 3(c) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 22.10 & 12.72 & 4.40 & 4\\ &GeV& 3(d) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 0.48 & 9.62 & 1.56 & 4\\ && 3(e) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 23.43 & 30.52 & 2.41 & 4\\ && 3(f) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 1.70 & 0.67 & 1.45 & 4\\ \hline && 4(a) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 39.88 & 20.92 & 3.70 & 4\\ && 4(b) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 1.32 & 8.12 & 3.83 & 6\\ &900& 4(c) & {$\overline{\Lambda}$}/{$\Lambda$} vs {$p_{\rm T}$} & 26.88 & 20.81 & 3.14 & 6\\ &GeV& 4(d)& {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs {$p_{\rm T}$} & 0.18 & 7.22 & 1.41 & 6\\ && 4(e) & {$\overline{\Lambda}$}/{$\Lambda$} vs $\Delta y$ & 39.88 & 20.92 & 3.70 & 4\\ && 4(f) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $\Delta y$ & 1.32 & 8.12 & 3.83 & 4\\ \hline && 5(a) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 1.40 & 2.74 & 2.41 & 5\\ && 5(b) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 0.21 & 6.03 & 0.45 & 5\\ &7000& 5(c) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 0.76 & 3.43 & 3.39 & 5\\ &GeV& 5(d) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 1.52 & 11.58 & 0.90 & 5\\ && 5(e) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 2.86 &
6.17 & 5.81 & 5\\ && 5(f) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 1.44 & 2.71 & 0.71 & 5\\ \hline && 6(a) & {$\overline{\Lambda}$}/{$\Lambda$} vs $y$ & 4.44 & 6.48 & 6.36 & 5\\ && 6(b) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $y$ & 3.47 & 8.20 & 0.44 & 5\\ &7000& 6(c) & {$\overline{\Lambda}$}/{$\Lambda$} vs {$p_{\rm T}$} & 2.13 & 3.19 & 3.89 & 6\\ &GeV& 6(d) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs {$p_{\rm T}$} & 0.90 & 9.60 & 0.57 & 6\\ && 6(e) & {$\overline{\Lambda}$}/{$\Lambda$} vs $\Delta y$ & 4.44 & 6.48 & 6.36 & 5\\ && 6(f) & {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $\Delta y$ & 3.47 & 8.20 & 0.44 & 5\\ \hline && 7(a) & {$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs $|y|$ & 2.97 & 5.17 & 0.52 & 10\\ && 7(b) & {$\Lambda$}/{$\mathrm{K}^{0}_{\mathrm S}$} vs {$p_{\rm T}$} & 1.29 & 9.67 & 9.66 & 24\\ &7000& 7(c) & {$\Xi^-$}/{$\Lambda$} vs $|y|$ & 0.97 & 0.47 & - & 10\\ &GeV& 7(d) & {$\Xi^-$}/{$\Lambda$} vs {$p_{\rm T}$} & 2.36 & 2.47 & - & 22\\ \hline \end{tabular}% \end{center}} \end{table*} \section{Conclusion}\label{sec4} We have performed a systematic study of different particle ratios, {$\overline{\Lambda}$}/{$\Lambda$}, {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} and {$\Xi^-$}/{$\Lambda$}, as a function of rapidity $y$ and {$p_{\rm T}$} in $pp$ collisions at \mbox{$\sqrt{\mathrm{s}}$}~= 0.2, 0.9, and 7 TeV using Sibyll, HIJING and QGSJET, and a detailed comparison has been made with the available experimental data. The anti-baryon to baryon ({$\overline{\Lambda}$}/{$\Lambda$}) ratio is considered a measure of baryon-number transport to final-state hadrons in $pp$ collisions. It has been observed from the model comparison that no single model completely predicts the experimental observations. However, the QGSJET model predictions are close to the experimental results in the case of the ratios as a function of rapidity $y$.
HIJING, on the other hand, is in good agreement with the {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios as a function of {$p_{\rm T}$}. The increasing trend of the {$\overline{\Lambda}$}/{$\mathrm{K}^{0}_{\mathrm S}$} ratios as a function of {$p_{\rm T}$} at \mbox{$\sqrt{\mathrm{s}}$}~= 0.9 and 7 TeV suggests that, in data, more baryons are expected to be produced compared to mesons in strange hadronisation, specifically at high {$p_{\rm T}$}. The HIJING and QGSJET models support the experimental data and the above statement; however, Sibyll does not. Further improvement is required to fully understand the strange-hadron production mechanisms, particularly in the low {$p_{\rm T}$} regions. These ratios are further studied as a function of rapidity loss ($\Delta y$); all models fail to reproduce the experimental results, and the models appear to be largely independent of rapidity. The present study of $V^0$ production ratios using various pQCD-based models and cosmic-ray air-shower based MC generators will be helpful for model development, and hence for further improving the predictions of Standard Model physics at the RHIC and LHC experiments; it may also be beneficial in defining the baseline for discoveries. {\bf Data availability} The data used to support the findings of this study are included within the article and are cited at relevant places within the text as references. {\bf Compliance with Ethical Standards} The authors declare that they are in compliance with ethical standards regarding the content of this paper. \input bib.tex% \end{document}
\section{Introduction} \label{sec:intro} Empirical Bayes methods enable frequentist estimation that emulates a Bayesian oracle. Suppose we observe $X$ generated as below, and want to estimate $\theta(x)$, \begin{equation} \label{eq:EB} \mu \sim G, \ \ X \sim \nn\p{\mu, \, \sigma^2}, \ \ \theta(x) = \EE{h(\mu) \cond X = x}, \end{equation} for some function $h(\cdot)$. Given knowledge of $G$, $\theta(x)$ can be directly evaluated via Bayes' rule. An empirical Bayesian does not know $G$, but seeks an approximately optimal estimator $\smash{\htheta(x) \approx \theta(x)}$ using independent draws $X_1, \, X_2, \, ..., \, X_m$ from the distribution \eqref{eq:EB}. The empirical Bayes approach was first introduced by \citet{robbins1956empirical} and, along with the closely related problem of compound estimation, has been the topic of considerable interest over the past decades \citep{efron2012large}. Empirical Bayes methods have proven to be successful in a wide variety of settings with repeated observations of similar phenomena including genomics \citep*{efron2001empirical, love2014moderated}, economics \citep{gu2017unobserved, abadie2018choosing}, and sports statistics \citep*{efron1975data, ragain2018improving}; and there is by now a large literature proposing a suite of estimators \smash{$\htheta(x)$} for $\theta(x)$ and characterizing minimax rates for estimation error \citep[e.g.,][]{brown2009nonparametric,butucea2009adaptive,efron1973stein, james1961estimation, jiang2009general,pensky2017minimax}. The goal of this paper is to move past point estimation, and develop confidence intervals for $\theta(x)$, i.e., intervals with the following property: \begin{equation} \label{eq:CI} \ii_\alpha(x) = \sqb{\htheta^-_\alpha(x), \, \htheta^+_\alpha(x)}, \ \ \liminf_{n \rightarrow \infty} \PP{\theta(x) \in \ii_\alpha(x)} \geq 1 - \alpha. 
\end{equation} The main challenge in building such intervals is in accurately accounting for the potential bias of point estimates $\htheta(x)$. As concrete motivation, consider the problem of estimating the local false-sign rate $\theta(x) = \PP{\mu_i X_i \leq 0 \cond X_i = x}$, i.e., the posterior probability that $\mu_i$ has a different sign than $X_i$; see Section \ref{sec:lfsr_intro} for a discussion and references. In the existing literature, the predominant approach to local false sign rate estimation involves first getting an estimate $\hG$ of the effect size distribution $G$ from \eqref{eq:EB} by maximum likelihood over some appropriate regularity class---for example, \citet{stephens2016false} uses a mixture distribution whereas \citet{efron2014two} uses a log-spline---and then estimating \smash{$\htheta(x)$} via a plug-in Bayes rule on \smash{$\hG$}. While these methods often have reasonably good estimation error, these procedures are highly non-linear so it is not clear how to accurately characterize their bias in a way that would allow for confidence intervals as in \eqref{eq:CI}. Furthermore, it is well known that minimax rates for estimation of $\theta(x)$ in nonparametric empirical Bayes problems are extremely slow \citep{butucea2009adaptive,pensky2017minimax}, e.g., they are polynomial in $\log(n)$ for local false sign rate estimation; thus, eliminating bias via techniques like undersmoothing may be prohibitively costly even in large samples. Here, instead of explicitly estimating the effect size distribution $G$, we use tools from convex optimization to design estimators \smash{$\htheta(x)$} for $\theta(x)$ such that we have explicit control on both their bias and variance. 
This idea builds on early work from \citet{donoho1994statistical} for quasi-minimax estimation of linear statistics over convex parameter spaces, and has recently proven useful for statistical inference in a number of settings ranging from semiparametrics \citep{hirshberg2017balancing,kallus2016generalized} and the high-dimensional linear model \citep*{athey2018approximate,javanmard2014confidence,zubizarreta2015stable} to regression discontinuity designs \citep{armstrong2018optimal,imbens2017optimized} and population recovery~\citep*{polyanskiy2017sample}. The empirical Bayes estimand $\theta(x)$ in \eqref{eq:EB} is of course not a linear statistic; however, we will use a local version of the method of \citet{donoho1994statistical} as a starting point for our analysis. Despite widespread use of empirical Bayes methods, we do not know of existing confidence intervals with the property \eqref{eq:CI}, i.e., asymptotic coverage of the empirical Bayes estimand $\theta(x)$. The closest method we are aware of is a proposal by \citet{efron2016empirical} for estimating the variance of empirical Bayes estimates \smash{$\htheta(x)$}, and then using these variance estimates for uncertainty quantification. Such intervals, however, do not account for bias and so could only achieve valid coverage via undersmoothing; and it is unclear how to achieve valid undersmoothing in practice noting the very slow rates of convergence in empirical Bayes problems. \citet{efron2016empirical} himself does not suggest his intervals be combined with undersmoothing, and rather uses them as pure uncertainty quantification tools. We also note work that seeks to estimate Bayesian uncertainty $\Var{h(\mu) \cond X = x}$ via empirical Bayes methods \citep{morris1983parametric, laird1987empirical}; this is, however, a different problem from ours. \subsection{Bias-Aware Calibration of Empirical Bayes Estimators} We start by recalling the following result from \citet{donoho1994statistical}.
Suppose we observe a random vector \smash{$Y \sim \nn\p{Kv, \, \sigma^2 I_{n \times n}}$} for some unknown $v \in \vv$ and known $K, \sigma^2$. Suppose further that we want to estimate \smash{$\theta = a^\top v$} with $a$ known. Then, whenever $\vv$ is convex, there exists a linear estimator, i.e., an estimator of the form \smash{$\htheta = \beta + \gamma^\top Y$} for some non-random vector $\gamma$ and constant $\beta$, that is within a factor 1.25 of being minimax among all estimators (including non-linear ones). Moreover, the minimax linear estimator can be derived via convex programming. The empirical Bayes problem is, of course, very different from the problem discussed above: in particular, our estimand is not linear in $G$ and our signal is not Gaussian. Nonetheless, we find that by applying the ideas that underlie the result of Donoho to a linearization of the empirical Bayes problem, we obtain a practical construction for confidence intervals with rigorous coverage guarantees that do not rely on any kind of undersmoothing. Our approach for inference about $\theta(x) = \EE{h(\mu_i) \cond X_i=x}$ (for $h$ measurable) starts with a reasonably accurate pilot estimator $\bar{\theta}(x)$ (this pilot could be obtained, for example, via the plug-in rules of \citet{efron2014two} or \citet{stephens2016false}) and then ``calibrates'' the pilot $\bar{\theta}(x)$ using a linear estimator derived via numerical optimization. To do so, we first write our estimand $\theta(x)$ as follows via Bayes' rule,\footnote{We use $\sigma=1$ in~\eqref{eq:EB} throughout the rest of the text to simplify exposition.} \begin{equation} \label{eq:ratio} \theta(x) = \frac{\int h(\mu) \varphi(x - \mu) \ dG(\mu)}{\int \varphi(x - \mu) \ dG(\mu)} = \frac{a(x)}{f(x)}, \end{equation} where $\varphi$ is the standard Normal density, $f(x) = (\varphi \star dG)(x)$ is the convolution of $\varphi$ and $dG$, i.e., the marginal density of $X$, and $a(x)$ denotes the numerator.
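As a quick numerical illustration of the ratio representation \eqref{eq:ratio} (our own sketch, not part of the paper's method), one can evaluate $\theta(x) = a(x)/f(x)$ by quadrature and check it against a case where the posterior is available in closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def theta_ratio(x, h, prior_pdf):
    """theta(x) = a(x) / f(x) as in eq. (ratio), with sigma = 1."""
    a, _ = quad(lambda mu: h(mu) * norm.pdf(x - mu) * prior_pdf(mu), -10, 10)
    f, _ = quad(lambda mu: norm.pdf(x - mu) * prior_pdf(mu), -10, 10)
    return a / f

# Sanity check with a N(0, 1) prior and h(mu) = 1{mu >= 0}: the posterior
# is N(x / 2, 1 / 2), so theta(x) = Phi(x / sqrt(2)) in closed form.
x = 1.3
num = theta_ratio(x, lambda mu: float(mu >= 0), norm.pdf)
closed_form = norm.cdf(x / np.sqrt(2))
print(num, closed_form)
```

Here `theta_ratio`, the Gaussian prior, and the integration limits are all illustrative; the paper never forms this plug-in ratio directly, precisely because the bias of estimated numerators and denominators is hard to control.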
Because our estimand is a ratio of two functionals, direct estimation of $\theta(x)$ may be difficult. However, by a first-order approximation around a pilot estimator $\bar{\theta}(x) = \bar{a}(x) / \bar{f}(x)$, we see that (we will make this argument rigorous in Section \ref{sec:calibrate}) \begin{equation} \label{eq:calib} \theta(x) \approx \bar{\theta}(x) + \frac{a(x) - \bar{\theta}(x) f(x)}{\bar{f}(x)}, \end{equation} where the right-hand side above only depends on $G$ through the linear functional \begin{equation} \label{eq:delta} \Delta_G(x) = \frac{1}{\bar{f}(x)}\int \p{h(\mu) - \bar{\theta}(x)} \varphi(x - \mu) \ dG(\mu). \end{equation} Our core proposal is to estimate $\Delta_G(x)$ with a linear estimator, i.e., one of the form\footnote{Estimators of the form \eqref{eq:Q} and linear estimators in the Gaussian problem considered by \citet{donoho1994statistical} are closely related via the white noise limit for density estimation; see \citet{donoho1989hardest} for details. We review basic results from \citet{donoho1994statistical} in the context of estimators of the form \eqref{eq:Q} in the proof of Proposition \ref{prop:minimax_estimator}.} \begin{equation} \label{eq:Q} \begin{split} \hDelta(x) = Q_0 + \frac{1}{m} \sum_{i = 1}^m Q(X_i), \end{split} \end{equation} where \smash{$Q_0$} and \smash{$Q(\cdot)$} are chosen to optimize a worst-case bias-variance tradeoff depending on a class of effect size distributions $\gcal$. In order to apply our method, we need the class $\gcal$ to be convex but, beyond that, we have considerable flexibility. For example, $\gcal$ could be a class of Lipschitz functions, or the class of symmetric unimodal densities around 0. Unlike in some simpler problems, e.g., the ones considered in \citet{armstrong2018optimal}, \citet{imbens2017optimized} or \citet{kallus2016generalized}, deriving the minimax choice of \smash{$Q(\cdot)$} for use in \eqref{eq:Q} may not be a tractable optimization problem, even after linearization.
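To see why \eqref{eq:calib} is accurate, note that substituting the exact $a(x)$ and $f(x)$ into its right-hand side leaves an error of $(\theta(x) - \bar{\theta}(x))(f(x) - \bar{f}(x))/\bar{f}(x)$, a product of two small discrepancies. The following toy computation (our own illustration, with a deliberately misspecified pilot prior) makes this second-order behavior visible:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def a_and_f(x, prior_pdf, h):
    """Numerator a(x) and marginal f(x) from eq. (ratio), sigma = 1."""
    a, _ = quad(lambda mu: h(mu) * norm.pdf(x - mu) * prior_pdf(mu), -12, 12)
    f, _ = quad(lambda mu: norm.pdf(x - mu) * prior_pdf(mu), -12, 12)
    return a, f

h = lambda mu: float(mu >= 0)    # theta(x) = P(mu_i >= 0 | X_i = x)
x = 1.0
a, f = a_and_f(x, norm.pdf, h)   # true G = N(0, 1)
a_bar, f_bar = a_and_f(x, lambda m: norm.pdf(m, scale=1.25), h)  # pilot prior
theta, theta_bar = a / f, a_bar / f_bar
# eq. (calib) with the exact Delta_G(x) plugged in:
calibrated = theta_bar + (a - theta_bar * f) / f_bar
print(abs(theta_bar - theta), abs(calibrated - theta))  # second error is smaller
```

The calibrated value improves on the pilot because the pilot's error is multiplied by the (small) relative error of $\bar{f}$; in the actual procedure $\Delta_G(x)$ must of course be estimated from data, which is what \eqref{eq:Q} does.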
However, we show that it is possible to derive a quasi-minimax \smash{$Q(\cdot)$} function via convex optimization provided that, in addition to our pilot $\bar{\theta}(x)$, we also have access to \smash{$\bar{f}(\cdot)$}, a pilot estimate of the marginal $X$-density $f(\cdot)$ such that \smash{$\Norm{\bar{f}(\cdot) - f(\cdot)}_\infty \leq c_m$} with $c_m \to 0$ in probability; details are provided in Section \ref{sec:lin_func}. Then, using this choice of \smash{$Q(\cdot)$} for \eqref{eq:Q}, we report point estimates \begin{equation} \htheta(x) = \bar{\theta}(x) + \hDelta(x). \end{equation} For confidence intervals, we first estimate the variance and worst-case bias\footnote{The worst case bias is computed over a subset of $\mathcal{G}$, namely the localization $\mathcal{G}_m$, which is defined in~\eqref{eq:localization}.} of our calibration step as \begin{align} &\hV = \frac{1}{m(m - 1)}\sqb{\sum_{i = 1}^m Q^2(X_i) - \p{\sum_{i=1}^m Q(X_i)}^2\bigg/m}, \label{eq:sample_var_est}\\ &\hB^2 = \sup_{G \in \gcal_m}\cb{\p{\EE[\varphi \star dG]{\hDelta(x)} - \Delta_G(x)}^2}, \label{eq:bias_est} \end{align} and then use these quantities to build bias-adjusted confidence intervals $\ii_\alpha$ for $\theta(x)$ \citep[e.g.,][]{armstrong2018optimal,imbens2004confidence, imbens2017optimized} \begin{equation} \label{eq:im_iw_ci} \ii_\alpha = \htheta(x) \pm \hat{t}_\alpha(\hB, \hV), \ \ \hat{t}_\alpha(B, V)= \inf\cb{t : \PP{\abs{b + V^{1/2} Z} > t} < \alpha \text{ for all } \abs{b} \leq B}, \end{equation} where $Z \sim \nn\p{0, \, 1}$ is a standard Gaussian random variable. Section \ref{sec:calibrate} has formal results establishing asymptotic coverage properties for these intervals.
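Since noncoverage of the interval in \eqref{eq:im_iw_ci} is worst when the bias attains $\abs{b} = B$, the critical value $\hat{t}_\alpha(B, V)$ can be computed by one-dimensional root finding. A possible sketch (the function name is ours):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bias_aware_halfwidth(B, V, alpha=0.10):
    """t_alpha(B, V) from eq. (im_iw_ci): the smallest t such that
    P(|b + V^{1/2} Z| > t) <= alpha for all |b| <= B (worst case at b = B)."""
    sd = np.sqrt(V)

    def noncoverage(t):
        # P(|B + sd * Z| > t) - alpha; decreasing in t
        return norm.cdf((-t - B) / sd) + norm.sf((t - B) / sd) - alpha

    return brentq(noncoverage, 0.0, B + 10.0 * sd)

# With B = 0 this reduces to the usual z-interval half-width,
# and a positive B widens the interval by strictly less than B:
print(bias_aware_halfwidth(0.0, 1.0))
print(bias_aware_halfwidth(0.5, 1.0))
```

This is the same construction used for bias-aware intervals in the regression discontinuity literature cited above; it is sharper than the naive half-width $B + z_{1 - \alpha/2}V^{1/2}$ because the bias cannot push the estimate in both directions at once.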
\subsection{Example: Local False Sign Rate Estimation} \label{sec:lfsr_intro} The local false sign rate measures the posterior probability that the sign of an observed signal $X_i$ disagrees with the sign of the true effect $\mu_i$:\footnote{More generally, for an estimator $\hat{\mu}_i = \hat{\mu}_i(X_i)$ of $\mu_i$, the local false sign rate at $x$ may be defined as $\lfsr(x) = \PP{\hat{\mu}_i\mu_i \leq 0 \cond X_i = x}$.} \begin{equation} \label{eq:lfsr} \lfsr(x) = \PP{X_i\mu_i \leq 0 \cond X_i = x}. \end{equation} Local false sign rates provide a principled approach to multiple testing without assuming that the distribution $G$ of the effect sizes $\mu_i$ is spiked at 0, and in particular form an attractive alternative to the local false discovery rate, $\text{lfdr}(x) = \PP{\mu_i = 0 \cond X_i = x}$, without requiring a sharp null hypothesis \citep*{stephens2016false, zhu2018heavy}. Inferential emphasis is thus placed on whether we can reliably detect the direction of an effect; such inference has been considered (in the context of both model~\eqref{eq:EB} and other models) by several authors, including \citet{barber2016knockoff, benjamini2005false, gelman2000type, hung2018, owen2016confidence, weinstein2014selective} and \citet{yu2019adaptive}. We focus on the local false sign rate as it is of obvious scientific interest, yet behaves as a ``generic'' empirical Bayes problem that can directly be used to understand \eqref{eq:EB} for other choices of $h(\cdot)$. In contrast, the problem of posterior mean estimation $\theta(x) = \EE{\mu_i \cond X_i = x}$ exhibits a special ``diagonal'' structure \citep[e.g.,][]{efron2011tweedie} that allows for unexpectedly good estimation properties. In the case of posterior mean estimation, our approach will in fact be able to benefit from this structure to get short confidence intervals; however, we emphasize that we do not need such special properties for valid inference. 
\begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{pl_bimodal_lfsr.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{pl_unimodal_lfsr.pdf} \end{subfigure} \caption{\textbf{Inference for Local False Sign Rates:} \textbf{a)} Probability density function of the effect size distribution $G_{\text{Bimodal}}$ defined in~\eqref{eq:prior_gs}. \textbf{b)} The dotted curve shows the true target $\theta(x)=\PP{\mu_i \geq 0 \cond X_i=x}$, while the shaded areas show the expected confidence bands for the MCEB procedure, as well as the exponential family plug-in approach. \textbf{c)} The coverage (as a function of $x$) of the bands shown in panel b) where the dashed horizontal line corresponds to the nominal $90\%$ coverage. \textbf{d,e,f)} Analogous results to panels a,b,c) however with the effect size distribution $G_{\text{Unimodal}}$ defined in~\eqref{eq:prior_gs}.} \label{fig:lfsr_simulation} \end{figure} As discussed above, the standard approach to local false sign rate estimation relies on plug-in estimation of $G$ \citep{efron2014two,stephens2016false}. Here, we compare our Minimax Calibrated Empirical Bayes estimator (MCEB) to this plug-in approach in two simple simulation examples: We draw $10,000$ observations from the distribution~\eqref{eq:EB} with the effect size distributions $G$ defined as follows (and $\sigma=1$), \begin{equation} \label{eq:prior_gs} \begin{aligned} G_{\text{Bimodal}} &= \frac{1}{2}\nn\p{-1.5,0.2^2} + \frac{1}{2}\nn\p{1.5,0.2^2}, \\ G_{\text{Unimodal}} &= \frac{7}{10}\nn\p{-0.2,0.2^2} + \frac{3}{10}\nn\p{0,0.9^2}, \end{aligned} \end{equation} and seek to provide $90\%$ pointwise confidence intervals for the $\lfsr(x)$ for a collection of different values of $x$. 
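The two simulation designs in \eqref{eq:prior_gs} are straightforward to reproduce; a minimal sketch of the data-generating process (function names are ours) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def draw_bimodal(rng, n):
    """mu_i ~ G_Bimodal from eq. (prior_gs); X_i | mu_i ~ N(mu_i, 1)."""
    mu = rng.choice([-1.5, 1.5], size=n) + 0.2 * rng.standard_normal(n)
    return mu, mu + rng.standard_normal(n)

def draw_unimodal(rng, n):
    """mu_i ~ G_Unimodal from eq. (prior_gs); X_i | mu_i ~ N(mu_i, 1)."""
    narrow = rng.random(n) < 0.7
    mu = np.where(narrow, -0.2 + 0.2 * rng.standard_normal(n),
                  0.9 * rng.standard_normal(n))
    return mu, mu + rng.standard_normal(n)

mu_b, x_b = draw_bimodal(rng, n)
mu_u, x_u = draw_unimodal(rng, n)
# Monte Carlo analogue of theta(x) = P(mu_i >= 0 | X_i = x) near x = 1:
sel = np.abs(x_b - 1.0) < 0.1
print((mu_b[sel] >= 0).mean())
```

In the bimodal design, observations near $x = 1$ overwhelmingly come from the positive mode, so the Monte Carlo estimate of $\theta(1)$ is close to one; in the unimodal design, many true effects sit just below zero, which is what makes sign inference delicate there.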
As discussed above, our MCEB approach requires the practitioner to specify a convex class $\gcal$ that contains the effect size distribution $G$; here, we use the class $\gcal:= \mathcal{G}\mathcal{N}(0.2^2, [-3,3])$, where\footnote{How to best choose $\gcal$ is a difficult question that needs to rely on subject matter expertise. In applications, it may be prudent to run our approach for different values of $\gcal$, and examine sensitivity of the resulting confidence intervals to the smoothness of $\gcal$.} \begin{equation} \mathcal{G}\mathcal{N}(\tau^2, \mathcal{K}) := \{ \mathcal{N}(0, \tau^2) \star \pi \mid \pi \text{ distribution, } \operatorname{support}(\pi) \subset \mathcal{K} \subset \mathbb R \}. \label{eq:normal_mixing_class} \end{equation} Further implementation details for the MCEB approach are discussed in Section~\ref{subsec:instantiation}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{gmodel_varying_dof} \caption{\textbf{Coverage versus expected width of confidence bands:} Here the simulation setting is the same as that of Figure~\ref{fig:lfsr_simulation}, panels d,e,f) with the results also averaged over $x$. Furthermore, we apply the exponential family plug-in estimator for a range of degrees of freedom (from $2$ to $15$ shown by the number as well as progressively darker blue color), while in Figure~\ref{fig:lfsr_simulation} only the estimator with $5$ degrees of freedom is considered.} \label{fig:gmodel_varying_dof} \end{figure} Meanwhile, for the plug-in baseline we estimate \smash{$\hat{G}$} by maximum likelihood over a flexible exponential family where, following~\citet{efron2016empirical}, we use a natural spline with 5 degrees of freedom as our sufficient statistic; we then obtain \smash{$\htheta(x)$} by applying Bayes rule with effect size distribution \smash{$\hat{G}$}.
As is standard in the literature, this baseline constructs confidence intervals for $\theta(x)$ using the delta method, which captures the variance of \smash{$\htheta(x)$} but not its bias. Such confidence intervals are only guaranteed to cover $\theta(x)$ in the presence of undersmoothing. See Appendix~\ref{sec:exp_family} for implementation details of this plug-in baseline. Figure~\ref{fig:lfsr_simulation} summarizes the results of the simulations. For ease of visualization, we report results on the (substantively equivalent) quantity $\theta(x)=\PP{\mu_i \geq 0 \cond X_i=x}$ instead of the local false sign rate so that the resulting curve is monotonic in $x$. The take-home message is that the plug-in estimator leads to much narrower bands than the MCEB bands; however, the plug-in bands do not in general achieve good coverage. In the bimodal example ($G_{\text{Bimodal}}$), the plug-in method appears to have gotten ``lucky'' in that bias is vanishingly small relative to variance, and both methods provide good coverage. In contrast, for the unimodal example ($G_{\text{Unimodal}}$), there appears to be non-negligible bias and the plug-in bands get close to $0\%$ coverage of $\theta(x)$. At least in this example, it appears that having many true effects $\mu_i$ close to 0 made the local false sign rate estimation problem more delicate, thus highlighting the vulnerability of the plug-in approach that does not account for bias. One might at this point wonder whether one can reduce the bias of the plug-in approach and achieve nominal coverage by increasing the degrees of freedom of the spline; we explore this in Figure~\ref{fig:gmodel_varying_dof} for the above simulation with the effect size distribution $G_{\text{Unimodal}}$. In general, coverage indeed improves as the degrees of freedom increase; however, with many degrees of freedom, the variance can be so large that the resulting confidence intervals are longer than the proposed MCEB intervals.
More importantly, it is not clear a priori, i.e., without knowing the ground truth, how to properly undersmooth the plug-in estimator and choose a number of degrees of freedom that provides good coverage. This example highlights the fact that if we want confidence intervals that cover the true local false sign rate $\theta(x)$, then explicitly accounting for bias is important. We also note that similar phenomena hold if we compare MCEB to the plug-in approach for estimating the posterior mean $\EE{\mu_i \cond X_i=x}$; see Figure~\ref{fig:posterior_mean_simulation} in Section \ref{sec:numerical}. Again, the plug-in approach provides shorter bands, but at the cost of poor coverage in the second unimodal simulation design. \subsection{Related Work} As discussed briefly above, the empirical Bayes principle has spurred considerable interest over several decades. One of the most successful applications of this idea involves compound estimation of a high-dimensional Gaussian mean: We observe $X \sim \nn\p{\mu, \, I}$, and want to recover $\mu$ under squared error loss. If we assume that the individual $\mu_i$ are drawn from an effect size distribution $G$, then empirical Bayes estimation provides a principled shrinkage rule \citep{efron1973stein,efron2011tweedie}. Moreover, even when $\mu$ is assumed to be fixed, empirical Bayes computations provide an excellent method for sparsity-adaptive estimation \citep{brown2009nonparametric,jiang2009general,johnstone2004needles}. The more general empirical Bayes problem \eqref{eq:EB} has also attracted interest in applications \citep{efron2012large,efron2001empirical,stephens2016false}; however, the accompanying formal results are less comprehensive. Several authors have considered rate-optimal estimation of linear functionals of the effect size distribution \citep{butucea2009adaptive,pensky2017minimax}; their setup covers, for example, the numerator $a(x)$ in \eqref{eq:ratio}.
The main message of these papers, however, is rather pessimistic: For example, \citet{pensky2017minimax} shows that for many linear functionals, the minimax rate for estimation in mean squared error over certain Sobolev classes $\gcal$ is logarithmic (to some negative power) in the sample size. In this paper, we study a closely related problem but take a different point of view. Even though minimax rates for point estimates $\htheta(x)$ may be extremely slow, we seek confidence intervals for $\theta(x)$ that still achieve accurate coverage in reasonable sample sizes and explicitly account for bias. The results of \citet{butucea2009adaptive} and \citet{pensky2017minimax} imply that the width of our confidence intervals must go to zero very slowly in general; but this does not mean that our intervals cannot be useful in finite samples (and, in fact, our experiments in Figure~\ref{fig:lfsr_simulation} and applications to real data suggest that they can be). In the spirit of~\citet{koenker2014convex}, we utilize the power of convex optimization for empirical Bayes inference, and our methodological approach builds heavily on the literature on minimax linear estimation of linear functionals in Gaussian problems. \citet{donoho1994statistical} and related papers \citep{armstrong2018optimal,cai2003note,donoho1991geometrizing, ibragimov1985nonparametric,johnstone2011gaussian,juditsky2009nonparametric} show that there exist linear estimators that achieve quasi-minimax performance and can be efficiently derived via convex programming. \section{Bias-Aware Inference for Linear Functionals} \label{sec:lin_func} Our goal is to provide valid inference in model \eqref{eq:EB}, based on independent samples $X_i$ generated hierarchically as $\mu_i \sim G$ and $X_i \cond \mu_i \sim \nn\p{\mu_i, \, \sigma^2}$.
As a first step towards our approach to empirical Bayes inference, we need a method for bias-aware inference about linear functionals $L = L(G)$ of the unknown effect size distribution $G$,\footnote{Our results also apply to linear functionals that cannot be represented as in~\eqref{eq:lin_functional}, for example, point evaluation of the Lebesgue density of $G$, i.e., $L(G) = G'(0)=g(0)$, when it exists.} \begin{equation} \label{eq:lin_functional} L(G) = \int_{\RR} \psi(\mu) dG(\mu), \end{equation} where $\psi(\cdot)$ is some function of our choice. Despite its simple appearance, even this problem is not trivial, and to our knowledge no practical method for building confidence intervals for such functionals is available in the existing literature. We will then build on the results developed here to study methods for bias-aware empirical Bayes inference in Section \ref{sec:calibrate}. We build on a line of work that has attained minimax rate-optimal estimation of linear functionals of $G$ using estimators written in terms of the empirical characteristic function \citep{butucea2009adaptive, matias2004minimax, pensky2017minimax}. More specifically, writing $\psi^*(t) = \int \exp(itx)\psi(x)dx$ for the Fourier transform of $\psi$ and $\varphi^*(t) = \exp(- t^2/2)$ for the characteristic function of the standard Gaussian distribution, these authors consider estimators of the form \begin{equation} \label{eq:comte_butucea_estimator} \hat{L}_{\text{BC},h_m} = \frac{1}{2 \pi m}\sum_{i=1}^m \int_{-1/h_m}^{1/h_m} \exp(it X_i) \frac{\psi^*(-t)}{\varphi^*(t)}dt. \end{equation} With proper tuning of the bandwidth parameter $h_m >0$, which governs a bias-variance trade-off, the above estimators achieve minimax rate optimality over certain classes of effect size distributions $G \in \mathcal{G}$. However, despite asymptotic rate-optimality guarantees, such Fourier approaches have not been found to perform particularly well in finite samples \citep{efron2014two}.
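The simplest instance of \eqref{eq:comte_butucea_estimator} conveys the key idea: for $\psi(\mu) = \cos(\omega \mu)$, the Fourier transform $\psi^*$ concentrates at the frequencies $\pm \omega$, and the estimator reduces to dividing the real part of the empirical characteristic function by $\varphi^*(\omega)$. The deconvolution factor $1/\varphi^*(\omega) = e^{\omega^2/2}$ shows how noise is amplified exponentially in frequency, which is the source of the slow rates discussed above. A toy sketch of this special case (our own example, not from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)
m, omega = 200_000, 1.0
mu = rng.standard_normal(m)          # G = N(0, 1)
X = mu + rng.standard_normal(m)      # Gaussian convolution model, sigma = 1

# For psi(mu) = cos(omega * mu), eq. (comte_butucea_estimator) collapses to
# correcting the empirical characteristic function by 1 / phi*(omega):
L_hat = np.mean(np.cos(omega * X)) * np.exp(omega**2 / 2)
L_true = np.exp(-omega**2 / 2)       # E[cos(omega * mu)] for mu ~ N(0, 1)
print(L_hat, L_true)
```

For moderate $\omega$ this works well, but the variance of the estimate grows like $e^{\omega^2}$, so recovering fine-grained (high-frequency) features of $G$ requires exponentially many samples.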
Furthermore, such methods are usually derived for specific classes $\mathcal{G}$, and are hard to tailor to classes $\mathcal{G}$ that might be more suitable for the application at hand. Finally, confidence intervals based on \eqref{eq:comte_butucea_estimator} have not been considered in the literature. Here, we also focus on estimators for \eqref{eq:lin_functional} that, like \eqref{eq:comte_butucea_estimator}, are affine in the empirical $X$-distribution $\hat{F}_m$, i.e., they can be written as below in terms of some function $Q$ and offset $Q_0$: \begin{equation} \label{eq:linear_estimator} \hat{L} = Q_0 + \frac{1}{m} \sum_{i=1}^m Q(X_i) = Q_0 + \int Q(u)d\hat{F}_m(u). \end{equation} But, instead of limiting ourselves to estimators that can be written explicitly in terms of the empirical characteristic function, we can attempt to optimize the choice of $Q(\cdot)$ in~\eqref{eq:linear_estimator} over a pre-specified convex function class $\mathcal{G}$. We follow a recent trend in using convex optimization tools to derive estimators that are carefully tailored to the problem at hand \citep{armstrong2018optimal, hirshberg2017balancing, imbens2017optimized, kallus2016generalized}. At this point, we pause to note that, unlike in \citet{donoho1994statistical} and the series of papers discussed above, finding the minimax choice of $Q(\cdot)$ for \eqref{eq:linear_estimator} is not a parametric convex problem. In order to recover the minimax $Q(\cdot)$ we would need to solve the optimization problem \begin{equation} \label{eq:minimax_problem} \begin{split} &\argmin_{Q_0,Q} \cb{\max \cb{\MSE((Q_0,Q), G) : G \in \mathcal{G}}}, \, \where \\ &\MSE((Q_0,Q), G) = \p{\EE[G]{Q_0 + Q(X_i)} - L(G)}^2 + \frac{1}{m} \Var[G]{Q(X_i)}. \end{split} \end{equation} This problem, however, is intractable: The difficult term here is \smash{$\Var[G]{Q(X_i)}$}, which depends on the product of $\varphi \star dG(\cdot)$ and $Q^2(\cdot)$.
Thankfully, we can derive a good choice of $Q$ by solving a sharp approximation to \eqref{eq:minimax_problem}. First, to avoid regularity issues at infinity and so that our inference is not unduly sensitive to outliers, we let $M > 0$ be a (large) constant, and only optimize $Q(\cdot)$ over functions that are constant outside the interval $[-M, \, M]$. Then, suppose we have access to a pilot estimate $\bar{f}_m(\cdot)$ of the marginal density $f(\cdot)$ of $X$, along with a guarantee that, for some sequence $c_m \rightarrow 0$ \begin{equation} \label{eq:cm_ball} \begin{split} &\PP{\Norm{f(\cdot)-\bar{f}_m(\cdot)}_{\infty,M} \leq c_m} \rightarrow 1, \, \where \\ &\Norm{h}_{\infty,M} := \max\cb{\sup_{x \in [-M,M]} \abs{h(x)}, \abs{\int_{-\infty}^{-M} h(x)dx}, \abs{\int_{M}^{\infty} h(x)dx }}. \end{split} \end{equation} This is not a stringent assumption, as it is well known that $f(\cdot)$ can be accurately estimated in the Gaussian convolution model \citep{kim2014minimax}. As discussed in Section~\ref{subsec:instantiation} and Appendix~\ref{sec:marginal_nbhood}, we can obtain practical estimates of $f(\cdot)$ using the de la Vall\'ee-Poussin kernel density estimator~\citep{matias2004minimax} and choose $c_m$ with the Poisson bootstrap of~\citet{deheuvels2008asymptotic}. In our formal results, we allow for $c_m$ and $\bar{f}_m$ to be random. Given such a pilot, we can approximate the problematic variance term as\footnote{Here, upper bounding $\Var[G]{Q(X_i)}$ by $\EE[G]{Q^2(X_i)}$ does not cost us anything asymptotically. For our use cases, $\abs{L(G)} \leq c$ for all $G \in \mathcal{G}$, so that we may assume that $\EE[G]{Q}=O(1)$. On the other hand, for the problems at hand $\MSE((Q_0,Q),G) \gg 1/m$; and, under weak assumptions, this implies that $\Var[G]{Q} \to \infty \text{ as } m\to \infty$.
Asymptotically, we thus expect that ${\Var[G]{Q}}\,/\,{\EE[G]{Q^2}} \to 1 \text{ as } m \to \infty$; see the proof of Theorem~\ref{theo:lin_functional_clt} for a rigorous statement.}\textsuperscript{,}\footnote{We view this approximation as an operationalization of the white noise limit for density estimation in~\citet{donoho1989hardest}. Timothy Armstrong pointed out to us that an alternative construction could proceed through the white noise limit with \smash{$\EEInline{dY(t)} =\sqrt{{f_G(t)}}$} of~\citet{nussbaum1996asymptotic}.} \begin{equation} \Var[G]{Q(X_i)} \leq \EE[G]{Q^2(X_i)} \approx \int_{\RR} Q^2(x)\bar{f}_m(x)dx \end{equation} for all $G \in \gcal_m$, where \begin{equation} \label{eq:localization} \gcal_m = \cb{\widetilde{G} \in \gcal : \Norm{\varphi \star d\widetilde{G} - \bar{f}_m}_{\infty,M} \leq c_m} \end{equation} is the set of all effect size distributions $G$ that yield marginal $X$-densities satisfying $||f(\cdot) - \bar{f}_m(\cdot)||_{\infty,M} \leq c_m$. We then propose using the following optimization problem as a surrogate for \eqref{eq:minimax_problem}, \begin{equation} \label{eq:minimax_problem_tractable1} \begin{split} &\argmin_{Q_0,Q} \bigg\{ \max \cb{ \p{Q_0 + \EE[G]{Q(X_i)} - L(G)}^2 : G \in \gcal_m}: \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{1}{m}\int Q^2(x) \bar{f}_m(x) dx \leq \Gamma_m, \, \; Q \text{ is constant on } (-\infty, \, -M] \, \mathlarger{\cup}\, [M, \,+\infty) \bigg\}, \end{split} \end{equation} where $\Gamma_m$ is a tuning parameter used to optimize a bias-variance trade-off. As verified below, the induced $Q(\cdot)$ weighting function allows for rigorous inference about $L(G)$ and the above optimization problem allows for practical solvers.
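For intuition, membership of a candidate prior in the localization $\gcal_m$ of \eqref{eq:localization} can be checked numerically once priors are discretized (as we do in Section~\ref{subsec:stein_heuristic}). The sketch below (our own illustration; helper names are hypothetical) computes the $\Norm{\cdot}_{\infty,M}$ distance of \eqref{eq:cm_ball} between candidate marginals and a pilot $\bar{f}_m$:

```python
import numpy as np
from scipy.stats import norm

M = 3.0
grid = np.linspace(-M, M, 601)   # evaluation grid on [-M, M]
mus = np.linspace(-2, 2, 41)     # support grid for discrete priors

def marginal_density(x, mus, weights):
    """phi * dG on a grid, for a discrete prior G = sum_j w_j * delta_{mu_j}."""
    return norm.pdf(x[:, None] - mus[None, :]) @ weights

def sup_norm_M(diff_grid, tail_lo, tail_hi):
    """The ||.||_{inf, M} norm of eq. (cm_ball): sup-norm on [-M, M] plus
    the absolute integrated differences over the two tails."""
    return max(np.max(np.abs(diff_grid)), abs(tail_lo), abs(tail_hi))

w_bar = np.exp(-mus**2 / 2)
w_bar /= w_bar.sum()                        # discretized N(0, 1) prior
f_bar = marginal_density(grid, mus, w_bar)  # pilot density (= truth here)

dists = []
for w in (w_bar, np.roll(w_bar, 5)):        # second prior: mass shifted by 0.5
    diff_w = w - w_bar
    tail_lo = norm.cdf(-M - mus) @ diff_w   # integral of f_G - f_bar on (-inf, -M)
    tail_hi = norm.sf(M - mus) @ diff_w     # integral of f_G - f_bar on (M, inf)
    f_cand = marginal_density(grid, mus, w)
    dists.append(sup_norm_M(f_cand - f_bar, tail_lo, tail_hi))
print(dists)
```

The first distance is zero while the shifted prior is far from the pilot, so a small $c_m$ excludes it from $\gcal_m$; this is how the localization shrinks the set of priors over which the worst-case bias in \eqref{eq:minimax_problem_tractable1} is taken.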
In order to avoid own-observation bias effects, we derive our pilot estimators by data-splitting: We use one half of the data to form the estimate $\bar{f}_m$ and specify $c_m, \Gamma_m$, and then use the other half to evaluate the linear estimator $\hat{L}:=\hat{L}_{\Gamma_m}:=\hat{L}_{\Gamma_m,m}$ from~\eqref{eq:linear_estimator} with $Q_0,Q(\cdot)$ derived from~\eqref{eq:minimax_problem_tractable1} and to construct confidence intervals. Algorithm~\ref{alg:linear_ci} summarizes the procedure. \begin{algorithm}[t] \caption{Confidence intervals for linear functionals $L(G)$ \label{alg:linear_ci}} \Input{Samples $\tilde{X}_1,\dotsc, \tilde{X}_m, X_1,\dotsc,X_m \simiid f_{G}=\varphi \star dG$, a nominal level $\alpha \in (0,1)$} Use the observations $\tilde{X}_1,\dotsc, \tilde{X}_m$ to form $\bar{f}_m(\cdot)$, an estimate of the marginal $X$-density and specify $\Gamma_m$, $c_m$ (which can depend on the data in the first fold).\; Solve the minimax problem~\eqref{eq:minimax_problem_tractable1} to get $Q_0, Q$ and the worst case bias $\hB$ as in~\eqref{eq:bias_est} (equivalently~\eqref{eq:worst_case_bias_modulus_formula}).\; Use the observations $X_1,\dotsc,X_m$ to form the estimate $\hat{L}_{\Gamma_m}$ of $L(G)$ as in~\eqref{eq:linear_estimator} and its estimated variance $\hV$ as in~\eqref{eq:sample_var_est}.\; Form bias-aware confidence intervals as in~\eqref{eq:im_iw_ci}.\; \end{algorithm} We state our main result about inference for linear functionals below. 
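Before turning to the formal result, note that steps 3 and 4 of Algorithm~\ref{alg:linear_ci} are simple to express once $Q_0$ and $Q(\cdot)$ are in hand; a sketch (with a hypothetical weighting function standing in for the solution of \eqref{eq:minimax_problem_tractable1}):

```python
import numpy as np

def linear_estimate(X, Q, Q0):
    """Step 3 of Algorithm 1: the linear estimate (eq. linear_estimator)
    and its variance estimate (eq. sample_var_est) on the second fold."""
    m = len(X)
    q = Q(X)
    L_hat = Q0 + q.mean()
    V_hat = (np.sum(q**2) - np.sum(q)**2 / m) / (m * (m - 1))
    return L_hat, V_hat

# Toy check: Q(x) = x^2 - 1 is unbiased for the second moment of G when
# sigma = 1; here G = N(0, 1), so L(G) = 1 and X_i ~ N(0, 2).
rng = np.random.default_rng(2)
X = np.sqrt(2.0) * rng.standard_normal(500)
L_hat, V_hat = linear_estimate(X, lambda x: x**2 - 1.0, 0.0)
# V_hat coincides with the sample variance of Q(X_i) divided by m:
print(np.isclose(V_hat, np.var(X**2 - 1.0, ddof=1) / len(X)))  # True
```

The confidence interval of step 4 then combines this $\hV$ with the worst-case bias $\hB$ from the optimization step via \eqref{eq:im_iw_ci}.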
As is common in the literature on linear estimation~\citep{armstrong2018optimal,cai2004minimax,donoho1994statistical,low1995bias}, the modulus of continuity defined as follows plays a key role in our analysis: \begin{equation} \label{eq:continuous_modulus_problem} \begin{split} &\omega_m(\delta) := \sup\bigg\{ L(G_1) - L(G_{-1})\;\mid\; G_1, G_{-1} \in \mathcal{G}_m, \frac{ \p{\int_{-\infty}^{-M}\p{f_{G_1}(x) - f_{G_{-1}}(x)}dx}^2} {\int_{-\infty}^{-M} \bar{f}_m(x)\, dx} \\ &\ \ \ \ \ \ + \int_{-M}^M \frac{ \p{f_{G_1}(x) - f_{G_{-1}}(x)}^2}{\bar{f}_m(x)}\ dx + \frac{\p{\int_M^{\infty} \p{f_{G_1}(x) - f_{G_{-1}}(x)}dx}^2} {\int_M^{\infty} \bar{f}_m(x)\, dx} \; \leq \; \delta^2\bigg\}. \end{split} \end{equation} Algorithm~\ref{alg:linear_ci} and the result below leave open the choice of some parameters such as $c_m$ and $\Gamma_m$, which furthermore may be random and depend on the first fold of the data. We will further elaborate on these choices in Remarks~\ref{rema:delta} and \ref{rema:cm}, as well as Section~\ref{subsec:instantiation}. \begin{theo} \label{theo:lin_functional_clt} Consider inference for a linear functional $L(G) = \int \psi(\mu)dG(\mu)$ via Algorithm~\ref{alg:linear_ci}. Further assume that $\mathcal{G}$ is convex, and that: \begin{itemize} \item[--] The true effect size distribution $G$ lies in $\gcal$, i.e., $G \in \gcal$. \item[--] The linear functional satisfies \smash{$\sup_{G \in \mathcal{G}} \abs{L(G)} < \infty$}. \item[--] \eqref{eq:cm_ball} holds, i.e., \smash{$\PP[G]{\Norm{f(\cdot)-\bar{f}_m(\cdot)}_{\infty,M} \leq c_m} \rightarrow 1$} as $m \to \infty$. \item[--] The localization radius $c_m$ as in~\eqref{eq:cm_ball} satisfies \smash{$c_m \overset{\mathbb P_{G}}{\to} 0$} as $m\to \infty$.
\item[--] The tuning parameter $\Gamma_m$ in \eqref{eq:minimax_problem_tractable1} can be written as $\Gamma_m = \Gamma_m(\delta_m) := m^{-1} \omega'_m(\delta_m)^2$, where $\omega_m(\cdot)$ is as defined in \eqref{eq:continuous_modulus_problem} and $\delta_m > 0$ satisfies ${c_m^2}\,/\,(m \delta_m^2) \overset{\mathbb P_{G}}{\to} 0$. \item[--] $\PP[G]{B_m} \to 1 \text{ as } m \to \infty$, where $B_m$ is the event that there exist $G_1^{\delta_m},G_{-1}^{\delta_m} \in \mathcal{G}_m$ that solve the modulus problem~\eqref{eq:continuous_modulus_problem} at $\delta_m>0$, i.e. are such that $L(G_1^{\delta_m}) - L(G_{-1}^{\delta_m}) = \omega_m(\delta_m)$. \end{itemize} Then, letting $\tboldX = (\tilde{X}_i)_{i \geq 1}$, the resulting estimator $\hat{L}$ has the following properties, where $\hV$, $\hat{B}$ are as defined in Algorithm \ref{alg:linear_ci} and $\Bias[G]{\hat{L},L} = \EEInline[G]{\hat{L} \mid \tboldX} - L(G)$ is the conditional bias (omitting explicit dependence on $\tboldX$) \begin{align} &\left(\hat{L} - L(G) - \Bias[G]{\hat{L},L}\right)\,\Big/\,\sqrt{\Var[G]{\hat{L} \mid \tboldX}} \xrightarrow[]{d} \mathcal{N}(0,1), \\ \label{eq:var_est} &\hV/\Var[G]{\hat{L} \mid \tboldX} \xrightarrow[]{\;\;\mathbb P_{G}\;\;} 1 \;\text{ and }\;\int Q^2(x)\bar{f}_m(x)dx/\p{m\Var[G]{\hat{L} \mid \tboldX}} \xrightarrow[]{\;\;\mathbb P_{G}\;\;} 1, \\ &\PP[G]{\lvert\Bias[G]{\hat{L},L}\rvert \leq \hat{B}} \to 1 \text{ as }m\to \infty. \end{align} \end{theo} We provide the proof in Appendix~\ref{subsec:pf_theo_lin_clt}. By \eqref{eq:var_est} we see that both the sample variance based on the second sample, and the variance proxy based on the first sample correctly capture the conditional variance of our proposed estimator. This result also immediately enables a statement about confidence intervals. 
\begin{coro} \label{coro:valid_inf} Under the assumptions of Theorem~\ref{theo:lin_functional_clt}, the confidence intervals constructed through Algorithm~\ref{alg:linear_ci} provide asymptotically correct coverage of the target $L(G)$, i.e., \begin{equation} \liminf_{m \to \infty} \PP[G]{ L(G) \in \ii_\alpha} \geq 1-\alpha. \end{equation} \end{coro} \begin{rema} \label{rema:uniform_coverage} Although the results stated above only hold pointwise, we can also obtain uniform statements in the sense of, e.g., \citet{robins2006adaptive} by adding slightly more constraints on the class $\gcal$. Specifically, for some $\eta >0$, consider \begin{equation} \mathcal{G}^{\eta} = \cb{ G \in \mathcal{G}: \min\cb{ \int_{-\infty}^{-M} f_{G}(u)du,\; \int_{M}^{\infty} f_{G}(u)du,\;\inf_{u \in [-M,M]} f_{G}(u)} > \eta}, \end{equation} i.e., qualitatively, effect size distributions for which the induced marginal density $f_{G}(\cdot)$ cannot vanish anywhere. The proof of Theorem \ref{theo:lin_functional_clt} and Corollary \ref{coro:valid_inf} implies that the above statements then apply uniformly over $\mathcal{G}^{\eta}$, and in particular \begin{equation} \liminf_{m \to \infty} \inf\cb{\PP[G]{ L(G) \in \ii_\alpha} : G \in \gcal^{\eta}} \geq 1-\alpha, \end{equation} provided the following conditions hold: We have $\sup_{G \in \mathcal{G}^{\eta}} \PP[G]{G \notin \mathcal{G}_m} \to 0 \text{ as } m \to \infty$, and there exist deterministic sequences $c_m^{\text{det}}$, $\delta_m^{\text{det}}$ such that $c_m \leq c_m^{\text{det}}$, $\delta_m \geq \delta_m^{\text{det}}$ with probability 1 and $c_m^{\text{det}} \to 0$, $c_m^{\text{det}} = o\p{m^{1/2} \delta_m^{\text{det}}}$. \end{rema} \begin{rema} The results and the proposed estimator appear to require the solution of the infinite-dimensional convex problem~\eqref{eq:continuous_modulus_problem}, which is not necessarily tractable a priori; for example, a representer theorem might not exist.
Nevertheless, the results also hold verbatim under an arbitrary discretization of the interval $[-M,M]$, as elaborated in Section~\ref{subsec:stein_heuristic}, as long as we replace the modulus $\omega_m(\delta)$ of equation~\eqref{eq:continuous_modulus_problem} by an appropriately discretized version thereof, defined below in~\eqref{eq:discrete_modulus_problem}. Discretization of $\mathcal{G}$ is discussed in Section~\ref{subsec:prior_misspefication}. \end{rema} \begin{rema} \label{rema:assumptions} The statistical assumption driving Theorem~\ref{theo:lin_functional_clt} is that model~\eqref{eq:EB} holds with $G \in \gcal$. Using a good choice of prior class $\gcal$ is thus critical, and we discuss this choice further in Sections~\ref{subsec:prior_misspefication}, \ref{subsec:instantiation} and~\ref{sec:discussion}. In contrast, all other assumptions are under control of the analyst and may be verified before any data analysis is conducted; see Corollary~\ref{coro:instantiated_estimator} for an example of such verification. \end{rema} \begin{rema} \label{rema:delta} One point of flexibility left open in Theorem \ref{theo:lin_functional_clt} is how to choose the tuning parameter $\delta_m$ (or equivalently $\Gamma_m = \Gamma_m(\delta_m)$ from~\eqref{eq:minimax_problem_tractable1}).
In practice we use a $\delta_m \geq \delta_m^{\text{min}}$ that optimizes a criterion of interest, such as the worst-case mean squared error \begin{equation} \label{eq:delta_MSE} \delta_{m}^{\MSE} \in \argmin_{\delta \geq \delta_m^{\text{min}}}\max_{G \in \mathcal{G}_m} \cb{ \Bias[G]{\hat{L}_{\Gamma_m(\delta)}, L}^2 + \Gamma_m(\delta)}. \end{equation} An alternative, aligned with the goal of inference, is to choose a $\delta_m$ that minimizes the length of the confidence intervals from~\eqref{eq:CI}, \begin{equation} \label{eq:delta_CI} \delta_{m}^{\text{CI}} \in \argmin_{\delta \geq \delta_m^{\text{min}}}\max_{G \in \mathcal{G}_m} \cb{\hat{t}_{\alpha}\p{\big\lvert\Bias[G]{\hat{L}_{\Gamma_m(\delta)}, L}\big\rvert,\; \Gamma_m(\delta) }}. \end{equation} \end{rema} \begin{rema} \label{rema:cm} A final choice that needs to be made in instantiating our estimator is the sequence $c_m = o_{\mathbb P}(1)$. While valid deterministic sequences are available, we obtained better empirical performance using a data-driven choice of $c_m$ based on a Poisson bootstrap proposed by \citet{deheuvels2008asymptotic}. We emphasize that our result remains valid with random sequences $c_m$; the conditions required for the method of \citet{deheuvels2008asymptotic} are provided in Propositions~\ref{prop:deheuvels_fact} and \ref{prop:deheuvels_bootstrap} in Appendix~\ref{sec:marginal_nbhood}. \end{rema} \subsection{Tractable Optimization with Stein's heuristic} \label{subsec:stein_heuristic} In our formal results from Theorem~\ref{theo:lin_functional_clt}, we have assumed that we can solve optimization problem~\eqref{eq:minimax_problem_tractable1}. However, at first sight, it is not obvious how to achieve this, since the problem is not concave in $G$; hence, standard min-max results for convex-concave problems are not applicable. Nevertheless, \citet{donoho1994statistical} provides a solution to this optimization problem by formalizing a powerful heuristic that goes back to Charles Stein.
The key steps are as follows: \begin{enumerate} \item We search for the hardest 1-dimensional subfamily, i.e., we find $G_1,G_{-1} \in \gcal_m$, such that solving problem~\eqref{eq:minimax_problem_tractable1} over $\text{ConvexHull}(G_1, G_{-1})$ (instead of over all of $\gcal_m$) is as hard as possible. The precise definition of ``hardest'' is given below in the modulus problem~\eqref{eq:discrete_modulus_problem}. \item We find the minimax optimal estimator of problem~\eqref{eq:minimax_problem_tractable1} over the hardest 1-dimensional subfamily. \item We then find that this solution is in fact optimal over all of $\gcal_m$. \end{enumerate} To make things more concrete and practical, we will proceed to discretize the optimization problem; our theoretical guarantees hold with or without discretization. Fixing $M > 0$, we consider a fine grid: \begin{equation} \label{eq:fine_grid} -M = t_{1,m} < t_{2,m} < \dotsc < t_{K_m-1,m} = M. \end{equation} Also let us define $t_{0,m}=-\infty$, $t_{K_m,m} =+\infty$ and $I_{k,m} = [t_{k-1,m}, t_{k,m})$ for $k \in \{1,\dotsc, K_m\}$ and $K_m \in \NN$, with our results valid for both $K_m$ fixed and $K_m \to \infty$ as $m \to \infty$. Next we restrict ourselves to optimization over $Q(\cdot)$ that are piecewise constant within each interval $I_{k,m}$.\footnote{ There is also a statistical interpretation of this discretization: We pass our observations $X_i$ through a further channel $\dd_m$ that discretizes them into the above partition, i.e. \smash{$X_i \mapsto \dd_m(X_i) := \sum_{k=1}^{K_m} k\ind(\cb{X_i \in I_{k,m}}) \in \{1,\dotsc,K_m\}$}. For the theory for our estimators, we allow an arbitrary partition and only fix the intervals $(-\infty,-M)$ and $(M,+\infty)$. In practice, the partition should be made as fine as possible, subject to computational constraints, and should also become finer as $m$ increases. 
While we do not pursue this further here, existing theory for discretization in statistical inverse problems~\citep{johnstone1991discretization} suggests that even coarse binning suffices to maintain the minimax risk.} Note that we also need to deal with discretization of $\gcal$; we address this issue in Section~\ref{subsec:prior_misspefication}. Next, to proceed with the first step of Stein's heuristic, for a fixed $\delta$ we consider the modulus of continuity problem: \begin{equation} \label{eq:discrete_modulus_problem} \omega_m(\delta) = \sup\cb{ L(G_1) - L(G_{-1})\;\mid\; G_1, G_{-1} \in \mathcal{G}_m,\; \sum_{k=1}^{K_m} \frac{(\nu_{G_1}(k) - \nu_{G_{-1}}(k))^2}{\bar{\nu}(k)} \; \leq \; \delta^2} \end{equation} Here we defined the marginal probability mass functions \smash{$\nu_{G}(k) = \int_{I_{k,m}} f_{G}(x) dx, G \in \mathcal{G}$} and \smash{$\bar{\nu}(k) = \int_{I_{k,m}} \bar{f}_m(x)dx$} (omitting dependence on $m$). We observe that the modulus of continuity $\omega_m(\delta)$ is non-decreasing, concave in $\delta >0$ and bounded from above if \smash{$\sup_{G \in \mathcal{G}_m} \abs{L(G)} < \infty$} (cf. Proposition~\ref{prop:modulus_properties} in Appendix~\ref{sec:modulus_properties}). In particular, the superdifferential $\partial \omega_m(\delta)$ is non-empty and there exists an $\omega_m'(\delta) \in \partial \omega_m(\delta)$. Now, if $G_1^{\delta}$, $G_{-1}^{\delta}$ are solutions of the modulus problem, then they define a hardest subfamily. 
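To make these quantities concrete, the following sketch (in Python, with illustrative names that are not part of our implementation) evaluates the discretized marginal pmf $\nu_{G}$ and the weighted distance appearing in the constraint of~\eqref{eq:discrete_modulus_problem} for point-mass priors: under the standard normal noise model, $G = \delta_{\mu}$ induces $\nu_{G}(k) = \Phi(t_{k,m}-\mu) - \Phi(t_{k-1,m}-\mu)$ in closed form.

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nu_point_mass(mu, grid):
    # Discretized marginal pmf nu_G(k) = int_{I_k} phi(x - mu) dx for G = delta_mu,
    # over the partition (-inf, t_1), [t_1, t_2), ..., [t_{K-1}, +inf).
    edges = [-math.inf] + grid + [math.inf]
    return [Phi(edges[k + 1] - mu) - Phi(edges[k] - mu) for k in range(len(edges) - 1)]

def modulus_distance(nu1, nu2, nu_bar):
    # Weighted chi-square-type distance from the modulus constraint:
    # sqrt( sum_k (nu1(k) - nu2(k))^2 / nu_bar(k) ).
    return math.sqrt(sum((a - b) ** 2 / c for a, b, c in zip(nu1, nu2, nu_bar)))

M, K = 3.0, 61
grid = [-M + 2 * M * j / (K - 1) for j in range(K)]  # fine grid on [-M, M]
nu_plus = nu_point_mass(0.5, grid)
nu_minus = nu_point_mass(-0.5, grid)
nu_bar = [(a + b) / 2 for a, b in zip(nu_plus, nu_minus)]  # reference pmf

# Both pmfs sum to one; closer priors induce closer discretized marginals.
print(sum(nu_plus), modulus_distance(nu_plus, nu_minus, nu_bar))
```

In the modulus problem, pairs of priors whose induced pmfs are this close (relative to $\delta$) while their functional values differ as much as possible constitute the hardest subfamily.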
Next, consider the estimator $\hat{L}_{\delta} = Q_0^{\delta} + \frac{1}{m}\sum_{i=1}^m Q^{\delta}(X_i)$, now parametrized by $\delta$ instead of $\Gamma_m=\Gamma_m(\delta)$, where: \begin{equation} \label{eq:optimal_delta_Q} \begin{aligned} &Q^{\delta}(x) = \sum_{k=1}^{K_m} \ind_{\cb{x \in I_{k,m}}}\frac{\omega'_m(\delta)}{\delta}\frac{ \nu_{G_1^{\delta}}(k) - \nu_{G_{-1}^{\delta}}(k)}{\bar{\nu}(k)}\\ &Q^{\delta}_0 = \frac{L(G_1^{\delta}) + L(G_{-1}^{\delta})}{2} - \frac{\omega'_m(\delta)}{\delta}\sum_{k=1}^{K_m} \frac{\p{\nu_{G_1^{\delta}}(k) - \nu_{G_{-1}^{\delta}}(k)}\p{\nu_{G_1^{\delta}}(k) + \nu_{G_{-1}^{\delta}}(k)}}{2\bar{\nu}(k)} \end{aligned} \end{equation} Then this estimator solves the minimax problem~\eqref{eq:minimax_problem_tractable1} over \smash{$\text{ConvexHull}(G_{1}^{\delta},G_{-1}^{\delta})$} for $\Gamma_m = \frac{1}{m}\omega_m'(\delta)^2$ among all estimators that are piecewise constant on the intervals $I_{k,m}$. In fact, it solves this minimax problem over all of $\gcal_m$, as can be verified by the proposition below. We note that the analogous statement also holds without discretization, i.e., with the modulus of continuity defined in~\eqref{eq:continuous_modulus_problem}. \begin{prop}[Properties of $\hat{L}_{\delta}$] \label{prop:minimax_estimator} Assume $\mathcal{G}_m$ is convex, $\sup_{G \in \mathcal{G}_m} \abs{L(G)} < \infty$ and that there exist $G_1^{\delta},\;G_{-1}^{\delta} \in \mathcal{G}_m$ that solve the modulus problem at $\delta>0$, i.e., are such that \smash{$\sum_{k=1}^{K_m} {(\nu_{G_1^{\delta}}(k) - \nu_{G_{-1}^{\delta}}(k))^2}\,/\,{\bar{\nu}(k)} = \delta^2$} and \smash{$L(G_1^{\delta}) - L(G_{-1}^{\delta}) = \omega_m(\delta)$}.
Then: \begin{enumerate}[label=(\alph*)] \item The estimator $\hat{L}_{\delta}$ defined by~\eqref{eq:optimal_delta_Q} achieves its worst case positive bias over $\mathcal{G}_m$ for estimating $L(G)$ at $G_{-1}^{\delta}$ and negative bias at $G_1^{\delta}$, i.e., \begin{equation} \sup_{G \in \mathcal{G}_m}\Bias[G]{\hat{L}_{\delta},L} = \Bias[G_{-1}^{\delta}]{\hat{L}_{\delta},L} = -\Bias[G_{1}^{\delta}]{\hat{L}_{\delta},L} = - \inf_{G \in \mathcal{G}_m}\Bias[G]{\hat{L}_{\delta},L}. \end{equation} \item If we let $\Gamma_m = \frac{1}{m}\int Q^{\delta}(x)^2 \bar{f}_m(x)dx$, then for any other estimator $\tilde{L}$ of $L(G)$ of the form $ \tilde{L} = \tQ_0 + \frac{1}{m}\sum_{i=1}^m \tQ(X_i)$ with $\tQ(\cdot)$ piecewise constant on $I_{k,m}$ and for which $\frac{1}{m}\int \tQ^2(x)\bar{f}_m(x)dx \leq \Gamma_m$, it holds that: \begin{equation} \sup_{G \in \mathcal{G}_m} \Bias[G]{\tilde{L},L}^2 \geq \sup_{G \in \mathcal{G}_m}\Bias[G]{\hat{L}_{\delta},L}^2. \end{equation} \item For both $\Gamma_m$ and the worst case bias, we have explicit expressions in terms of the modulus $\omega_m(\delta)$ and $\omega_m'(\delta)$: \begin{align} &\sup_{G \in \mathcal{G}_m}\Bias[G]{\hat{L}_{\delta},L} = \frac{1}{2}\sqb{ \omega_m(\delta) - \delta \omega_m'(\delta)}, \label{eq:worst_case_bias_modulus_formula} \\ &\Gamma_m = \frac{1}{m}\int Q^{\delta}(x)^2 \bar{f}_m(x)dx = \frac{1}{m}\omega_m'(\delta)^2. \end{align} \end{enumerate} \end{prop} \subsection{Effect size distributions: misspecification and discretization} \label{subsec:prior_misspefication} As already mentioned in Remark~\ref{rema:assumptions}, the major statistical assumption behind Theorem~\ref{theo:lin_functional_clt} is that $G \in \gcal$. In this section we seek to explore the statistical consequences of misspecification, i.e., what happens if $G \notin \gcal$, but instead $G \in \widetilde{\gcal}$ for $\widetilde{\gcal} \neq \gcal$.
These results will enable the development of statistically justified discretization schemes of the class of effect size distributions; say if we believe the true class is $\widetilde{\gcal}$, but instead run Algorithm~\ref{alg:linear_ci} with $\gcal$ that has a computable, finite-dimensional representation (where $\gcal$ may change with $m$ allowing for increasingly fine discretization). Let $\gcal_m, \widetilde{\gcal}_m$ the localizations of $\gcal$, resp. $\widetilde{\gcal}$ as in~\eqref{eq:localization}. First note that strong misspecification will be detected by the localization, since then \smash{$\gcal_m = \emptyset$} for large enough $m$. On the other hand, by construction, $\gcal_m \neq \emptyset$ implies that our variance calculations will be first-order correct in misspecified models. It then remains to study the potential additional bias incurred by misspecification. We define the excess bias of a linear estimator $\hat{L}$ (cf.~\eqref{eq:linear_estimator}) of $L(G)$ as: \begin{equation} \operatorname{ExcessBias}(\hat{L}, L;\; \gcal,\widetilde{\gcal}) :=\adjustlimits\sup_{\tilde{G} \in \widetilde{\gcal}}\inf_{ G \in \gcal} \abs{\Bias[G]{\hat{L},L} - \Bias[\tilde{G}]{\hat{L},L} } \end{equation} Then note for $\widetilde{G} \in \widetilde{\gcal}$ and the estimator $\hat{L}_{\delta_m}$~\eqref{eq:optimal_delta_Q} of $L(G)$: \begin{equation} \Bias[\widetilde{G}]{\hat{L}_{\delta_m}, L} \leq \underbrace{\sup_{G \in \gcal_m}\cb{\abs{\Bias[G]{\hat{L}_{\delta_m},L}}}}_{=\hB} + \operatorname{ExcessBias}(\hat{L}_{\delta_m}, L;\; \gcal_m, \widetilde{\gcal}_m) \end{equation} In particular, if we were to use the intervals from~\eqref{eq:im_iw_ci} with $\hat{B} + \operatorname{ExcessBias}(\hat{L}_{\delta_m}, L;\; \gcal_m, \widetilde{\gcal}_m)$ instead of $\hat{B}$, then inference would be valid. For practical applications it is therefore prudent to conduct sensitivity analyses. Furthermore, when discretizing, it suffices to make the excess bias negligible. 
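To see why coverings of the effect size class can make the discretization error negligible, note that rounding every atom of a compactly supported distribution to a grid with spacing $K/p$ defines a coupling that moves no mass by more than $K/(2p)$, and hence bounds the Wasserstein distance between $G$ and its discretization. A minimal numerical sketch of this rounding argument (illustrative names, not part of our implementation):

```python
import math, random

def round_to_grid(mu, K, p):
    # Nearest point of the grid {0, ±K/p, ±2K/p, ..., ±K} on [-K, K].
    step = K / p
    return max(-K, min(K, round(mu / step) * step))

K, p = 3.0, 30
random.seed(0)
# Atoms of a discrete effect size distribution G supported on [-K, K].
atoms = [random.uniform(-K, K) for _ in range(1000)]
# The coupling (mu, round_to_grid(mu)) moves each atom by at most K/(2p),
# so W1(G, G_discretized) <= K/(2p).
w1_bound = max(abs(mu - round_to_grid(mu, K, p)) for mu in atoms)
print(w1_bound, K / (2 * p))
```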
The latter may be controlled by coverings of distributions in appropriate metrics. \begin{prop} \label{prop:excessbias_covering} Assume $L(G) = \int \psi(\mu)dG(\mu)$ as in~\eqref{eq:lin_functional} and that $Q(\cdot)$ from~\eqref{eq:linear_estimator} is bounded. \begin{enumerate}[label=(\alph*)] \item If $\abs{\psi(\mu)} \leq C_{\psi}$, then $$\operatorname{ExcessBias}(\hat{L}, L;\; \gcal, \widetilde{\gcal}) = O\bigg(\adjustlimits\sup_{\tilde{G} \in \widetilde{\gcal}}\inf_{ G \in \gcal} \{\TV(G,\tilde{G})\}\bigg),$$ with \smash{$\TV(G, \tilde{G}) = \sup_{A}\lvert G(A)-\tilde{G}(A)\rvert$} the total variation distance between $G$ and $\tilde{G}$. \item If $\psi(\mu)$ is $C_{\psi}$-Lipschitz continuous, then $$\operatorname{ExcessBias}(\hat{L}, L;\; \gcal, \widetilde{\gcal}) = O\bigg(\adjustlimits\sup_{\tilde{G} \in \widetilde{\gcal}}\inf_{ G \in \gcal}\{W_1(G,\tilde{G})\}\bigg),$$ with \smash{$W_1(G, \tilde{G}) = \inf\{ \EEInline{\abs{\mu - \tilde{\mu}}}:\; (\mu, \tilde{\mu}) \text{ random variable s.t. } \mu\sim G, \tilde{\mu} \sim \tilde{G}\}$} the Wasserstein distance between $G$ and $\tilde{G}$ (cf. \citet{panaretos2019statistical} and references therein). \end{enumerate} \end{prop} \noindent We now apply Proposition~\ref{prop:excessbias_covering} to justify concrete discretization schemes of two convex classes of effect size distributions. \begin{exam}[Discretization of Gaussian mixtures] \label{exam:disc_gaussian_mix} Consider the class of distributions $\mathcal{G}\mathcal{N}\p{\tau^2, \mathcal{K}}$ from~\eqref{eq:normal_mixing_class} in the special case where $\mathcal{K} = [-K,K]$ is a bounded interval. 
We first discretize $\mathcal{K}=[-K,K]$ as a finite grid \begin{equation} \label{eq:finite_grid} \mathcal{K}(p, K) = \cb{ 0, \pm \frac{K}{p}, \pm \frac{2K}{p}, \dotsc, \pm K},\;\;\; p \in \mathbb N. \end{equation} Then we use $\mathcal{G}\mathcal{N}\p{\tau^2, \mathcal{K}(p, K)}$ as a discretization of $\mathcal{G}\mathcal{N}\p{\tau^2, [-K,K]}$ that is amenable to efficient computation: to represent $\mathcal{G}\mathcal{N}\p{\tau^2, \mathcal{K}(p, K)}$ it suffices to represent all discrete distributions supported on the grid $\mathcal{K}(p, K)$, which in turn may be represented as the $(2p+1)$-dimensional probability simplex, cf. Appendix~\ref{sec:gaussian_mix_discr} for more computational details. A direct calculation reveals that $\mathcal{G}\mathcal{N}\p{\tau^2, \mathcal{K}(p, K)}$ provides an $O(K/p)$ covering of $\mathcal{G}\mathcal{N}\p{\tau^2, [-K,K]}$ in both total variation and Wasserstein distance. Inference based on the discretized class will thus be valid for linear functionals satisfying a) or b) of Proposition~\ref{prop:excessbias_covering} as long as $p$ is large enough. \end{exam} \begin{exam}[Discretization of compactly supported distributions] As a second example, consider $\mathcal{P}([-K,K])$, the class of all distributions supported on $[-K,K]$. This class may be discretized, similarly to Example~\ref{exam:disc_gaussian_mix}, by instead considering $\mathcal{P}(\mathcal{K}(p,K))$, the class of all distributions supported on the grid $\mathcal{K}(p, K)$. This discretization provides an $O(K/p)$ covering of $\mathcal{P}([-K,K])$ in Wasserstein distance, but not in total variation distance. For this discretization scheme, Proposition~\ref{prop:excessbias_covering} only justifies inference for functionals satisfying b) of its statement. The implications for inference on empirical Bayes quantities (cf.
Section~\ref{sec:calibrate}) are that we may use the above discretization to conduct inference for the posterior mean, but not necessarily for the local false sign rate. \end{exam} For other important classes of effect size distributions, more elaborate discretization schemes are required. The examples we will consider are the Sobolev classes that place tail conditions on the characteristic function $g^*(t) = \int \exp(it \mu) dG(\mu)$; these classes have played a prominent role in the nonparametric deconvolution literature~\citep{butucea2009adaptive, pensky2017minimax}: \begin{equation} \label{eq:sobolev_constr} \mathcal{S}(b, C_{\text{sob}}) = \cb{ G \text{ a distribution},\;g=G' \in \LR{1}\cap\LR{2}: \int_{-\infty}^{\infty} |g^*(t)|^2(t^2+1)^b dt \leq 2\pi C_{\text{sob}}}. \end{equation} Our strategy for discretizing $\mathcal{S}(b, C_{\text{sob}})$ consists of truncating the expansion of $g=G'$ in the basis of Hermite functions. Concretely, let \smash{$ H_j(x) = (-1)^j \exp(x^2) \frac{d^j}{dx^j}\exp(-x^2)$} be the $j$-th Hermite polynomial and \smash{$h_j(x) = c_j H_j(x)\exp(-x^2/2)$} with \smash{$c_j = (2^j j!\sqrt{\pi})^{-1/2}$} the $j$-th Hermite function. The Hermite functions $h_j, j \geq 0$ form an orthonormal basis of $\LR{2}$ and so we may write $g(x) = \sum_{j=0}^{\infty} \alpha_j h_j(x)$ for $g = G'$ with $G \in \mathcal{S}(b, C_{\text{sob}})$. Define $\Pi_{p}$ as the projection of $g$ onto the first $p+1$ Hermite functions, i.e., \smash{$\Pi_{p}(g) = \sum_{j=0}^{p} \alpha_j h_j(x)$}. We discretize $\mathcal{S}(b, C_{\text{sob}})$ as \smash{$\Pi_{p}(\mathcal{S}(b, C_{\text{sob}}))$}\footnote{We abuse notation here as the discretization operates on the densities $g$ and not the distributions $G$.}. The following Proposition justifies the validity of the discretization for inference (with $p$ large enough).
\begin{prop} \label{prop:sobolev_discretization} Assume that, for $G\in\mathcal{S}(b, C_{\text{sob}})$, the linear functional $L(G)$ can be written as $L(G) = \frac{1}{2\pi}\int \psi^*(-t)g^*(t)dt$ for some $\psi^*$ with $\abs{\psi^*(t)} \leq C_{\psi}$\footnote{We formally interpret $\psi^*$ as the Fourier transform of $\psi$, where $L(G) = \int \psi(\mu)dG(\mu)$ as in~\eqref{eq:lin_functional}. This interpretation is valid as soon as $\psi, \psi g$ and $\psi^* g^* \in \LR{1}$. However, the result is applicable more generally. For example, the choice $\psi^*(t)=1$ corresponds to point evaluation of the density at $0$, i.e., $L(G) = g(0)$, cf.~\citet{butucea2009adaptive}.}. Furthermore, let $\mathcal{S}(b, C_{\text{sob}}, C_{\text{herm}})$ be the class of distributions $G \in \mathcal{S}(b, C_{\text{sob}})$ whose density also satisfies $\int \mu^2 g(\mu)^2 d\mu \leq C_{\text{herm}}$\footnote{This constraint is related to the Hermite-Sobolev spaces studied by \citet{bongioanni2006sobolev} and \citet{belomestny2019sobolev}. Recent applications of the Hermite-Sobolev spaces to deconvolution problems include~\citet*{comte2018laguerre, sacko2019, kato2018inference}.}. Finally, assume that $b \geq 1$ and that $Q(\cdot)$ from~\eqref{eq:linear_estimator} is bounded. Then: $$ \operatorname{ExcessBias}(\hat{L},L;\; \Pi_{p}(\mathcal{S}(b, C_{\text{sob}}, C_{\text{herm}})), \mathcal{S}(b, C_{\text{sob}}, C_{\text{herm}})) \to 0 \text{ as } p \to \infty.$$ \end{prop} \begin{rema} Sections~\ref{subsec:stein_heuristic} and~\ref{subsec:prior_misspefication} have addressed discretization from a theoretical perspective. In Appendix~\ref{sec:computation} we provide computational details for the practical implementation of our estimators, which, e.g., involves repeatedly solving the modulus problem and computing its superdifferential.
\end{rema} \section{Confidence Intervals for Empirical Bayes Analysis} \label{sec:calibrate} We now return to our main subject, namely estimation of posterior expectations of the form \begin{equation} \label{eq:posterior_expectation} \theta_{G}(x) = \EE[G]{h(\mu) \mid X = x} = \displaystyle\frac{\int h(\mu) \varphi(x-\mu)dG(\mu)}{\int \varphi(x-\mu)dG(\mu)}. \end{equation} These are nonlinear functionals of $G$; however, as discussed in the introduction, our core strategy is to apply affine minimax estimation techniques to a linearization of $\theta_{G}(x)$. The idea of combining linearization with minimax linear estimation has been discussed in other contexts by \citet{armstrong2018sensitivity} and \citet{hirshberg2018debiased}. Let us write $\theta(x)=\theta_{G}(x) = A_{G}/F_{G}$ where $A_{G}$ and $F_{G}$ correspond to the numerator, resp. denominator in~\eqref{eq:posterior_expectation}, both of which are linear functionals of $G$. Assume that we have access to pilot estimates $\bar{A}=\bar{A}_m$ and $\bar{F}=\bar{F}_m$ (and thus also $\bar{\theta}(x)=\bar{A}/\bar{F}$) based on the first fold (i.e., based on $(\tilde{X}_1,\dotsc,\tilde{X}_m)$). Our goal is then to use the machinery from Section~\ref{sec:lin_func} for estimation of linear functionals by linearizing $A_{G}/F_{G}$ around $\bar{A}/\bar{F}$.
In particular, by Taylor's theorem, there exists some $\tilde{F}$ between $F_{G}$ and $\bar{F}$ such that \begin{equation} \label{eq:taylor_expansion} \begin{aligned} \frac{A_{G}}{F_{G}} &= \displaystyle\frac{A_{G}}{\bar{F}} + A_{G}\left(\frac{1}{F_{G}}-\frac{1}{\bar{F}} \right)\\ &= \frac{A_{G}}{\bar{F}} - \frac{A_{G}}{\bar{F}^2}\left(F_{G}-\bar{F}\right) + \frac{A_{G}}{\tilde{F}^3}(F_{G}-\bar{F})^2 \\ &= \frac{A_{G}}{\bar{F}} - \frac{\bar{A}}{\bar{F}^2}\left(F_{G}-\bar{F}\right) - \frac{(A_{G}-\bar{A})(F_{G}-\bar{F})}{\bar{F}^2} + \frac{A_{G}}{\tilde{F}^3}(F_{G}-\bar{F})^2 \\ &= \bar{\theta}(x) + \underbrace{\frac{1}{\bar{F}}(A_{G} - \bar{\theta}(x)F_{G})}_{=:\Delta_{G}(x)} \underbrace{- \frac{(A_{G}-\bar{A})(F_{G}-\bar{F})}{\bar{F}^2} + \frac{A_{G}}{\tilde{F}^3}(F_{G}-\bar{F})^2}_{=:\varepsilon_m}. \end{aligned} \end{equation} In other words, $\theta(x) \approx \bar{\theta}(x) + \Delta_{G}(x)$; observe that $\Delta_{G}(x)$ is a linear functional of $G$: \begin{equation} \Delta_{G}(x) = \frac{1}{\bar{F}}\int(h(\mu)-\bar{\theta}(x))\varphi(x-\mu)dG(\mu). \end{equation} Therefore we can use our results from Section~\ref{sec:lin_func} to derive a confidence interval for $\Delta_{G}(x)$, namely $\hat{\Delta}(x) \pm \hat{t}_{\alpha}(\hB,\hV)$. Then we may estimate $\theta(x)$ by $\htheta(x) = \bar{\theta}(x) + \hat{\Delta}(x)$ and for $\eta >0$ we propose the interval \begin{equation} \ii_{\alpha,\eta} = [\htheta(x) - (1+\eta)\hat{t}_{\alpha}(\hB,\hV),\;\; \htheta(x) + (1+\eta)\hat{t}_{\alpha}(\hB,\hV)]. \end{equation} This interval indeed has the correct coverage asymptotically, as the next theorem shows. \begin{theo} \label{theo:posterior_exp_inference} Assume that: \begin{enumerate} \item The Assumptions of Theorem~\ref{theo:lin_functional_clt} hold\footnote{Note that now as $m$ varies the linear functional also changes.}.
\item The pilot estimators $\bar{A} = \bar{A}_m$ and $\bar{F}=\bar{F}_m$ are $\LR{2}$-consistent and such that: \begin{align*} & \limsup_{m \to \infty} \frac{m}{\sqrt{\log(m)}} \EE[G]{(F_{G}-\bar{F}_m)^2} < \infty, & (A) \\ & \limsup_{m \to \infty} \sqrt{\log(m)} \EE[G]{(A_{G}-\bar{A}_m)^2} = 0. & (B) \end{align*} \item There exists $C > 0$ such that the modulus $\omega_m(\cdot)$ for estimating $\Delta_{G}(x) = (A_{G} - \bar{\theta} F_{G})/\bar{F}$ over $\mathcal{G}_m$ satisfies \begin{equation} \PP[G]{\omega_m^2\p{\frac{2}{\sqrt{m}}} \geq \frac{C}{m}} \to 1 \text{ as } m \to \infty. \end{equation} \end{enumerate} Then for any $\eta >0$, $\alpha \in (0,1)$ it holds that: \begin{equation} \liminf_{m \to \infty} \PP[G]{\theta_{G}(x) \in \ii_{\alpha, \eta}} \geq 1-\alpha. \end{equation} \end{theo} The assumptions of Theorem~\ref{theo:posterior_exp_inference} are easy to verify and mild. Assumption 2B requires that $\bar{A}_m$ converge in mean squared error slightly faster than $1/\sqrt{\log(m)}$. The rate requirement in Assumption 2A for $\bar{F}_m$ is also mild and is achieved under no assumptions on $G \in \mathcal{G}$ by the De La Vall\'ee-Poussin kernel density estimator (see Proposition~\ref{prop:sinc_mse}), as well as by the Butucea-Comte estimator; see Appendix~\ref{subsec:cb_marginal}. Finally, Assumption 3 essentially only requires that the estimation problem for $\Delta_{G}(x)$ over $\mathcal{G}_m$ is at least as hard as a parametric problem. In particular, the following high-level condition suffices to verify Assumption 3: There exists a $C>0$ such that \begin{equation} \PP[G]{\inf_{\hat{T} \text{ affine}} \sup_{\widetilde{G} \in \mathcal{G}_m} \cb{m \EE[\widetilde{G}]{(\hat{T}- \Delta_{\widetilde{G}}(x))^2}} > C} \to 1 \text{ as } m \to \infty, \end{equation} i.e., the affine minimax risk over all affine estimators of $\Delta_{G}(x)$ based on $m$ new observations cannot be too small.
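The role of the second-order remainder $\varepsilon_m$ in~\eqref{eq:taylor_expansion} can also be checked numerically: perturbing the pilot estimates by $\epsilon$ leaves a linearization error of order $\epsilon^2$, so halving $\epsilon$ should roughly quarter the error. A toy sketch with made-up numbers (purely illustrative):

```python
# Numerical check that the linearization theta(x) ~ theta_bar(x) + Delta_G(x)
# has error quadratic in the pilot errors (A_bar - A_G, F_bar - F_G).
# All numbers below are made up purely for illustration.
A_G, F_G = 0.30, 0.50          # "true" numerator and denominator
errors = []
for eps in [0.1, 0.05, 0.025, 0.0125]:
    A_bar, F_bar = A_G + eps, F_G + eps   # pilot estimates off by eps
    theta = A_G / F_G
    theta_bar = A_bar / F_bar
    Delta = (A_G - theta_bar * F_G) / F_bar   # linear correction term
    errors.append(abs(theta - (theta_bar + Delta)))
# Halving eps should roughly quarter the error (second-order remainder).
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(errors, ratios)
```

The error ratios approach $4$ as $\epsilon$ halves, consistent with a quadratic remainder.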
\subsection{An instantiation of MCEB} \label{subsec:instantiation} As our methodology is fairly general, it leaves some design choices to the analyst who wants to use it. These include: \begin{enumerate} \item The class of potential effect size distributions $\mathcal{G}$. \item The estimator of the marginal density $\bar{f}_m$ and localization radius $c_m$. \item The choice of $\delta_m$. \item The pilot estimators for $A_{G}$ and $F_{G}$, i.e., the numerator and denominator of the actual target of estimation $\theta(x) = \theta_{G}(x) = A_{G}/F_{G}$. \end{enumerate} Here we propose concrete choices that allow for valid asymptotic inference as shown in Corollary \ref{coro:instantiated_estimator} below. For the {\bf distribution class} $\gcal$, we consider the family $\mathcal{G}\mathcal{N}(\tau^2, [-K,K])$ from~\eqref{eq:normal_mixing_class}, which is further parametrized by $\tau^2, \, K \in (0,\infty)$. This class of distributions has also been considered by~\citet{cordy1997deconvolution} and \citet{magder1996smooth}. Note that $\mathcal{G}\mathcal{N}(\tau^2,[-K,K]) \subset \mathcal{G}\mathcal{N}(\tau'^2,[-K',K'])$ for $\tau' \leq \tau$ and $K' \geq K$. Second, we obtain the \textbf{marginal density estimate} $\bar{f}_m$ using the De La Vall\'ee-Poussin kernel density estimator and choose $c_m$ with the Poisson bootstrap of~\citet{deheuvels2008asymptotic}. See Appendix~\ref{sec:marginal_nbhood} for concrete details. Third, as discussed in Remark \ref{rema:delta}, we choose the {\bf tuning parameter} $\delta_m$ to optimize the mean squared error (resp. confidence interval width) as in~\eqref{eq:delta_MSE} (resp.~\eqref{eq:delta_CI}), with $\p{\delta_m^{\text{min}}}^2 = c_m^2 \log(m)/m$. Finally, we obtain the {\bf pilot estimators} for $A_{G}$ and $F_{G}$ with the Butucea-Comte estimator~\eqref{eq:comte_butucea_estimator} with a deterministic bandwidth choice $h_m$ that leads to the asymptotically optimal rate. Appendix~\ref{sec:fourier} provides the details. 
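For concreteness, one common closed form for the de la Vall\'ee-Poussin kernel is $V(x) = (\cos x - \cos 2x)/(\pi x^2)$, whose Fourier transform equals $1$ on $[-1,1]$ and decays linearly to $0$ for $1 \leq \abs{t} \leq 2$; the pilot density estimate is then the usual kernel density estimator $\frac{1}{mh}\sum_i V((x - \tilde{X}_i)/h)$. The sketch below is illustrative only and is not our actual implementation (see Appendix~\ref{sec:marginal_nbhood} for the estimator we use):

```python
import math

def vdp_kernel(x):
    # de la Vallee-Poussin kernel; the removable singularity at 0 has value 3/(2*pi).
    if abs(x) < 1e-8:
        return 1.5 / math.pi
    return (math.cos(x) - math.cos(2.0 * x)) / (math.pi * x * x)

def vdp_kde(x, data, h):
    # Kernel density estimate at x with bandwidth h.
    return sum(vdp_kernel((x - xi) / h) for xi in data) / (len(data) * h)

# The kernel integrates to one: check with a simple Riemann sum.
step, R = 0.01, 400.0
total = step * sum(vdp_kernel(-R + step * j) for j in range(int(2 * R / step) + 1))
print(total)
```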
\begin{coro} \label{coro:instantiated_estimator} The estimation scheme and setting outlined above satisfy all assumptions needed to apply Theorem~\ref{theo:posterior_exp_inference}, and we may conduct inference for: \begin{enumerate}[label=(\alph*)] \item The posterior expectation $\theta(x) = \EE[G]{\mu_i \mid X_i=x}$. \item The local false sign rate $\theta(x) = \PP[G]{\mu_i \geq 0 \mid X_i=x}$. \end{enumerate} \end{coro} In our simulation results we use exactly the methodology described above, with further computational details provided in Appendix~\ref{sec:computation}. \section{Numerical study of proposed estimators} \label{sec:numerical} In this section we study numerically the behavior of our proposed estimation scheme. First, in Section \ref{sec:lin_example}, we examine the linear estimators developed in Section \ref{sec:lin_func} in the context of three statistical tasks: pointwise marginal density estimation for $f_{G}(\cdot)$, tail probability estimation, and effect size density estimation for $G(\cdot)$. We compare our approach to the empirical characteristic function approach of \citet{butucea2009adaptive}, and a variant of our estimator that is not ``localized'' as in \eqref{eq:localization} and \eqref{eq:minimax_problem_tractable1}. Next, in Section \ref{sec:eb_example}, we present a simulation study for empirical Bayes inference that extends the example from the introduction. \subsection{Linear estimation} \label{sec:lin_example} Theorem~\ref{theo:lin_functional_clt} enables the use of our proposed methodology for inference of linear functionals $L(G)$ of the effect size distribution $G$.
In this subsection we want to illustrate the form of our proposed linear estimators from~\eqref{eq:minimax_problem_tractable1}, shed light on Stein's worst-case subfamily heuristic from Section~\ref{subsec:stein_heuristic} and also understand the impact of optimizing over effect size distributions that are consistent with the $X$-marginal localization band~\eqref{eq:cm_ball}. In our examples we vary the linear functional, the true effect size distribution $G$, the class $\gcal$ and the sample size $m$. In each case we solve problem~\eqref{eq:minimax_problem_tractable1} with $\Gamma_m$ chosen to get the best $\MSE$ and with an oracle choice $\bar{f}_m(\cdot) = f_{G}(\cdot)$. Furthermore, we use the localized $\mathcal{G}_m$ with choice of $\Norm{\cdot}_{\infty,6}$-radius equal to $c_m = 0.02$. \textbf{Marginal $X$-density estimation:} First we consider estimating the marginal $X$-density at $0$, i.e. $L(G) = f_{G}(0) = \int \varphi(\mu)dG(\mu)$. This target, as discussed earlier, is easy to estimate (Appendix~\ref{sec:marginal_nbhood}), and indeed pilot estimators of $f_{G}(x)$ (for different $x$) are a key ingredient for our proposal, both for the construction of the localization band~\eqref{eq:cm_ball} and the linearization step in~\eqref{eq:taylor_expansion} for empirical Bayes estimation. Nevertheless, it illustrates the key ideas pertaining to Stein's heuristic. We take $G=\frac{1}{2}\sqb{\nn(-2,0.2^2) + \nn(2,0.2^2)}$ as the true effect size distribution, $\mathcal{G} = \mathcal{G}\mathcal{N}(0.2^2, [-3,3])$ from~\eqref{eq:normal_mixing_class} as the convex class and use sample size $m=10000$.
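For this bimodal $G$, the target has a closed form that is convenient for checking implementations: each mixture component $\nn(\pm 2, 0.2^2)$, convolved with the $\nn(0,1)$ noise, yields $\nn(\pm 2, 1.04)$, so that $f_{G}(0) = \varphi_{1.04}(2) \approx 0.0572$ (writing $\varphi_{\sigma^2}$ for the $\nn(0,\sigma^2)$ density). The following illustrative sketch confirms this numerically:

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Marginal density of X at 0 for G = (1/2)[N(-2, 0.2^2) + N(2, 0.2^2)]
# under X | mu ~ N(mu, 1): each mixture component convolves to variance 1.04.
f0_closed_form = 0.5 * (normal_pdf(0, -2, 1.04) + normal_pdf(0, 2, 1.04))

# Cross-check by numerically integrating f_G(0) = int phi(0 - mu) dG(mu).
def g_density(mu):
    return 0.5 * (normal_pdf(mu, -2, 0.04) + normal_pdf(mu, 2, 0.04))

step = 0.001
f0_numeric = step * sum(
    normal_pdf(0, -4 + step * j, 1.0) * g_density(-4 + step * j)
    for j in range(int(8 / step) + 1)
)
print(f0_closed_form, f0_numeric)
```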
In Figure~\ref{fig:marginal_density_affine} we compare three affine estimators (Butucea-Comte~\eqref{eq:comte_butucea_estimator} with bandwidth $h_m = \log(m)^{-1/2}$ and the solutions to problem~\eqref{eq:minimax_problem_tractable1} over all of $\mathcal{G}$ (``Minimax''), respectively the localized $\mathcal{G}_m$ with $c_m=0.02$ (``Minimax-$\infty$'')) and also show the densities defining the hardest 1-dimensional subproblems over $\mathcal{G}$ and $\mathcal{G}_m$, as per Stein's heuristic. In Table~\ref{tab:marginal_density_affine} we compare these estimators in terms of their standard error (under the true $G$) as well as the worst case bias over $\mathcal{G}$ and $\mathcal{G}_m$. As the problem is easy, we do not observe large differences between these approaches; note, however, that the localized Minimax-$\infty$ estimator has a much larger worst case bias over $\gcal$, as it is only tuned towards controlling bias over $\gcal_m$. It is able to trade this off against lower variance. \begin{figure} \centering \includegraphics[width=\textwidth]{marginal_density_affine} \caption{\textbf{Estimation of $L(G):=f_{G}(0)$:} \textbf{a)} $Q$-weighting functions for the Butucea-Comte estimator~\eqref{eq:comte_butucea_estimator}, as well as the solutions to problem~\eqref{eq:minimax_problem_tractable1} over $\mathcal{G} = \mathcal{G}\mathcal{N}(0.2^2, [-3,3])$ (``Minimax''), respectively $\mathcal{G}_m$ (``Minimax-$\infty$'') with $m=10000$. \textbf{b)} The two densities $g_1 = G_1',g_{-1}=G_{-1}'$ defining the hardest subproblem over $\gcal$ and \textbf{c)} their induced marginal $X$-densities $f_{G_1},f_{G_{-1}}$. Note that their difference is magnified at our target of inference, namely $f_{G}(0)$. \textbf{d)} Hardest densities $g_1,g_{-1}$ over $\mathcal{G}_m$ and \textbf{e)} induced marginal densities. $\mathcal{G}_m$ is constrained so that it includes only $G \in \gcal$ such that $f_{G}$ lies inside the yellow ribbon of panel \textbf{e)}.
Differences are still magnified at $f_{G}(0)$; however, the additional localization makes them less pronounced.} \label{fig:marginal_density_affine} \end{figure} \begin{table}[] \centering \input{tables/marginal_density_affine.tex} \caption{For the same three estimators depicted in Figure~\ref{fig:marginal_density_affine}, we show the standard errors (under the true $G$), as well as the worst case bias over $\gcal$ and the localized $\gcal_m$.} \label{tab:marginal_density_affine} \end{table} \textbf{Tail probability estimation:} As a second didactic example, we consider estimation of tail probabilities of the form $L(G) = \PP[G]{\mu \geq 0} = \int \ind(\mu \geq 0) dG(\mu)$. For these, the Butucea-Comte estimator~\eqref{eq:comte_butucea_estimator} is not applicable, since $\mu \mapsto \ind(\mu \geq 0)$ is not in $\LR{1}$. Instead, modifications such as the ones proposed by~\citet*{dattner2011deconvolution,pensky2017minimax} are necessary, while our approach works out of the box. The functional itself is interesting, since it corresponds to deconvolution of a distribution function, which is a notoriously difficult task~\citep*{fan1991optimal, dattner2011deconvolution, johannes2009deconvolution}. $G$ and $\mathcal{G}$ are chosen as for the marginal density result, while $m=200$. Figure~\ref{fig:prior_cdf_affine} and Table~\ref{tab:prior_cdf_affine} show the results. Note that the localized Minimax-$\infty$ estimator has a much lower standard error and worst case bias over $\gcal_m$ than the unconstrained estimator at the price of larger bias over $\gcal$. Hence, we observe that beyond enabling the proof of Theorem~\ref{theo:lin_functional_clt}, the localization constraint indeed leads to more well-behaved estimators: our estimator is a non-linear estimator of the linear functional that locally leverages minimax linear estimation techniques; cf.~\citet{cai2004minimax} for a related approach in a different context.
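The intrinsic difficulty of this functional can also be quantified directly: moving a point mass of the prior from $-\varepsilon$ to $+\varepsilon$ changes the target $\int \ind(\mu \geq 0)dG(\mu)$ by a full unit, while the induced marginal density changes by at most $2\varepsilon \sup_x \abs{\varphi'(x)} = 2\varepsilon\varphi(1)$. A small illustrative computation (not part of our estimator):

```python
import math

def phi(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

eps = 0.05
# Target L(G) = P_G(mu >= 0): jumps by 1 when a point mass crosses zero...
L_gap = 1.0 - 0.0
# ...but the induced marginal X-densities phi(x -/+ eps) barely move.
xs = [-6 + 0.001 * j for j in range(12001)]
marginal_gap = max(abs(phi(x - eps) - phi(x + eps)) for x in xs)
print(L_gap, marginal_gap)
```

Here the functional moves by $1$ while the marginal moves by roughly $0.024$, which is why the localization band can restrict the worst-case pair only so much.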
\begin{figure} \centering \includegraphics[width=\textwidth]{prior_cdf_affine} \caption{\textbf{Estimation of $L(G):=\PP[G]{\mu \geq 0}$:} \textbf{a)} $Q$-weighting functions for the solutions to problem~\eqref{eq:minimax_problem_tractable1} over $\mathcal{G} = \mathcal{G}\mathcal{N}(0.2^2, [-3,3])$ (``Minimax''), respectively $\mathcal{G}_m$ (``Minimax-$\infty$'') with $m=200$. \textbf{b)} The two densities $g_1=G_1',g_{-1}=G_{-1}'$ defining the hardest subproblem over $\gcal$ and \textbf{c)} their induced marginal $X$-densities $f_{G_1},f_{G_{-1}}$. We may immediately see why the problem is so hard: the total mass that the two densities put to the left or right of $0$ is very different, yet to the eye the induced marginal $X$-densities appear essentially indistinguishable. \textbf{d)} Hardest densities $g_1,g_{-1}$ over $\mathcal{G}_m$ and \textbf{e)} induced marginal densities. $\mathcal{G}_m$ is constrained so that it includes only $G \in \gcal$ such that $f_{G}$ lies inside the yellow ribbon of panel \textbf{e)}. The worst case densities again differ by having peaks just to the left/right of $0$, but are more restricted in their ability to do so by the localization constraint. As a result, in panel \textbf{a)} the Minimax-$\infty$ estimator has less variance.} \label{fig:prior_cdf_affine} \end{figure} \begin{table}[] \centering \input{tables/prior_cdf_affine.tex} \caption{For the same estimators depicted in Figure~\ref{fig:prior_cdf_affine}, we show the standard errors (under the true $G$), as well as the worst case bias over $\gcal$ and the localized $\gcal_m$.} \label{tab:prior_cdf_affine} \end{table} \textbf{Effect size density estimation:} For our third example, we consider estimation of $L(G) = G'(0)= g(0)$ over the Sobolev class $\mathcal{S}(2, 0.5)$ from~\eqref{eq:sobolev_constr}. We take $G=\nn(0,4)$ as the ground truth effect size distribution, use a localization band with width $c_m = 0.02$, and let the sample size be $m=200$.
The setting here is directly comparable to the classical deconvolution literature, such as~\citet{zhang1990fourier, fan1991optimal, meister2009deconvolution, butucea2009adaptive}; cf. Figure~\ref{fig:prior_density_affine} and Table~\ref{tab:prior_density_affine}, where we also compare to the Butucea-Comte estimator~\eqref{eq:comte_butucea_estimator} with bandwidth $h_m = (\log(m)/2)^{-1/2}$. \begin{figure} \centering \includegraphics[width=\textwidth]{prior_density_affine} \caption{\textbf{Estimation of $L(G):=G'(0)=g(0)$:} \textbf{a)} $Q$-weighting functions for the Butucea-Comte estimator~\eqref{eq:comte_butucea_estimator} and the solutions to problem~\eqref{eq:minimax_problem_tractable1} over the Sobolev class $\mathcal{G} = \mathcal{S}(2, 0.5)$ (``Minimax''), respectively $\mathcal{G}_m$ (``Minimax-$\infty$'') with $m=200$. \textbf{b)} The two densities $g_1=G_1',g_{-1}=G_{-1}'$ defining the hardest subproblem over $\gcal$ and \textbf{c)} their induced marginal $X$-densities $f_{G_1},f_{G_{-1}}$. \textbf{d)} Hardest densities $g_1,g_{-1}$ over $\mathcal{G}_m$ and \textbf{e)} induced marginal densities. $\mathcal{G}_m$ is constrained so that it includes only $G \in \gcal$ such that $f_{G}$ lies inside the yellow ribbon of panel \textbf{e)}.} \label{fig:prior_density_affine} \end{figure} \begin{table}[] \centering \input{tables/prior_density_affine.tex} \caption{For the same estimators depicted in Figure~\ref{fig:prior_density_affine}, we show the standard errors (under the true $G$), as well as the worst case bias over $\gcal$ and the localized $\gcal_m$.} \label{tab:prior_density_affine} \end{table} \subsection{Empirical Bayes estimation} \label{sec:eb_example} In this section, we demonstrate the practical performance of our empirical Bayes confidence intervals, constructed as described in Section~\ref{subsec:instantiation} with $\delta_m$ chosen as in~\eqref{eq:delta_CI} to minimize the confidence interval width.
We empirically verify the conclusions of Theorem~\ref{theo:posterior_exp_inference} and Corollary~\ref{coro:instantiated_estimator} that the MCEB intervals provide frequentist coverage of the empirical Bayes estimands, and also show that their width is such that meaningful conclusions are possible. We also compare to the plug-in approach of~\citet{efron2016empirical} (Appendix~\ref{sec:exp_family}). As already described in the introduction (Figure~\ref{fig:lfsr_simulation}), in our simulation we consider the class $\mathcal{G}\mathcal{N}(0.2^2,[-3,3])$ from~\eqref{eq:normal_mixing_class}. In Figure~\ref{fig:lfsr_simulation} we use the effect size distributions $G_{\text{Unimodal}}, G_{\text{Bimodal}}$ defined in~\eqref{eq:prior_gs} as ground truth and conduct inference for the \textbf{local false sign rate} based on $10000$ observations. The plug-in approach is implemented with $U[-3.6,3.6]$ as the exponential family base measure and a natural spline with 5 degrees of freedom as the sufficient statistic. The reported results are averaged over 500 Monte Carlo replications. We next consider the ideal scenario for the plug-in approach. We choose as $G$ (shown in Figure~\ref{fig:lfsr_simulation_logspine}a) an exponential family distribution with base measure $U[-4,6]$ and a natural spline with 5 degrees of freedom as the sufficient statistic. The natural parameters are chosen so as to match the Two tower distribution considered by \citet{efron2019bayes} (see Appendix~\ref{sec:exp_family}); furthermore, we implement the plug-in approach with the correct choice of base measure and sufficient statistic. We implement the MCEB approach with $\gcal = \mathcal{G}\mathcal{N}(0.2^2,[-6,6])$. The results are shown in Figure~\ref{fig:lfsr_simulation_logspine}. We point out that in this case the plug-in approach performs extremely well; the problem is now parametric and the bias is negligible.
The MCEB approach leads to larger confidence intervals; however, they are still informative. It is also worth noting that in this case $G \notin \mathcal{G}$, but inference is valid nevertheless; cf. Section~\ref{subsec:prior_misspefication}. \begin{figure} \centering \includegraphics[width=\textwidth]{pl_twotower_lfsr.pdf} \caption{\textbf{Inference for Local False Sign Rates:} \textbf{a)} Probability density function of the effect size distribution. \textbf{b)} The dotted curve shows the true target $\theta(x)=\PP{\mu_i \geq 0 \cond X_i=x}$, while the shaded areas show the expected confidence bands for the MCEB procedure, as well as the exponential family plug-in approach. \textbf{c)} The coverage (as a function of $x$) of the bands shown in panel b), where the dashed horizontal line corresponds to the nominal $90\%$ coverage.} \label{fig:lfsr_simulation_logspine} \end{figure} We next repeat the three scenarios above, but conduct inference for the \textbf{posterior mean} instead. Results are shown in Figure~\ref{fig:posterior_mean_simulation}. In this case, inference for MCEB is again valid, and interval widths are much shorter than in the case of the local false sign rate. On the other hand, we note that the plug-in approach can fail even in this simpler setting (panel f). \begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{pl_bimodal_postmean.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{pl_unimodal_postmean.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{pl_twotower_postmean.pdf} \end{subfigure} \caption{\textbf{Inference for Posterior Mean:} \textbf{a)} Probability density function of the effect size distribution $G_{\text{Bimodal}}$ defined in~\eqref{eq:prior_gs}.
\textbf{b)} The dotted curve shows the true target $\theta(x)=\EE{\mu_i \cond X_i=x}$, while the shaded areas show the expected confidence bands for the MCEB procedure, as well as the exponential family plug-in approach. \textbf{c)} The coverage (as a function of $x$) of the bands shown in panel b), where the dashed horizontal line corresponds to the nominal $90\%$ coverage. \textbf{d,e,f)} Analogous results to panels a,b,c), however with the effect size distribution $G_{\text{Unimodal}}$ defined in~\eqref{eq:prior_gs}. \textbf{g,h,i)} Results for the logspline density. } \label{fig:posterior_mean_simulation} \end{figure} Furthermore, in Figure~\ref{fig:rmses} we summarize the performance of MCEB across all scenarios considered. We show the expected half interval width, the root mean squared error (RMSE), as well as the worst case root mean squared error (that is, the RMSE with the true bias replaced by the worst case bias $\sup_{G \in \gcal_m}\Bias[G]{\hat{L},L}$ used by MCEB), averaged over Monte Carlo replicates. \begin{figure} \centering \includegraphics[width=\textwidth]{rmse_ciwidth_plots} \caption{\textbf{Summary of simulation results for MCEB:} Each panel corresponds to one empirical Bayes estimand $\theta(x)$ and one ground truth effect size distribution $G$. We show the expected half confidence interval width, the root mean squared error (RMSE) and the worst case RMSE as a function of $x$.} \label{fig:rmses} \end{figure} One caveat of the proposed methodology is that it becomes unstable for $x$ at which the marginal density $f(x)$ is hard to estimate (i.e., the denominator in the Taylor expansion~\eqref{eq:taylor_expansion} is too small). For example, in panels c and f of Figure~\ref{fig:lfsr_simulation}, at $x=3.0$ we cover with probability less than the nominal $90\%$. This caveat becomes worse as we move further into the tails of the marginal distribution.
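The interval half-widths reported throughout this section combine a standard error with the worst case bias over $\gcal_m$. As a minimal illustration of how such bias-aware intervals can be calibrated, the sketch below computes the smallest half-width guaranteeing nominal coverage for a Gaussian estimator with a known bias bound, using a folded-normal critical value in the spirit of \citet{imbens2004confidence} and \citet{armstrong2018optimal}; the exact $\delta_m$-optimized construction of Section~\ref{subsec:instantiation} may differ in its details (the function name is ours):

```python
from scipy import stats
from scipy.optimize import brentq

def bias_aware_halfwidth(se, max_bias, alpha=0.10):
    """Smallest c * se such that the interval estimator +/- c * se covers with
    probability >= 1 - alpha, whenever the estimator is approximately
    N(target + bias, se^2) with |bias| <= max_bias.
    """
    t = max_bias / se
    # coverage at critical value c, evaluated at the worst-case bias
    coverage = lambda c: stats.norm.cdf(c - t) - stats.norm.cdf(-c - t)
    c = brentq(lambda c: coverage(c) - (1 - alpha), 1e-8, t + 10.0)
    return c * se
```

With `max_bias = 0` this recovers the usual normal critical value; as the bias bound grows, the half-width increases, but by strictly less than `max_bias` plus the unadjusted margin, which is why bias-aware intervals beat the naive "add the worst case bias to both ends" construction.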
\section{Empirical Applications} For our empirical applications, we use the implementation from Section~\ref{subsec:instantiation} with $\gcal = \mathcal{G}\mathcal{N}(0.2^2,[-3,3])$. However, for the pilot $\bar{\theta}(x)$, we replace the Butucea-Comte estimator by the exponential family $G$-model estimator~\citep{efron2016empirical,narasimhan2016g}, as we found this to improve empirical performance by enforcing monotonicity of the pilot estimator $x \mapsto \bar{\theta}(x)$. \subsection{Identifying Genes Associated with Prostate Cancer} \begin{figure} \centering \includegraphics[width=\textwidth]{dataset_plots} \caption{\textbf{Empirical applications:} \textbf{a)} Histogram of the first fold of the Prostate dataset~\citep{efron2012large,singh2002gene}, as well as marginal bands \smash{$\Norm{f_{G} - \bar{f}_m}_{\infty} \leq c_m$} used by MCEB. \textbf{b)} Inference for the posterior mean $\EE{\mu_i \cond X_i=x}$ in the Prostate dataset. We show the $90\%$ bands (in gray) returned by the proposed MCEB method, as well as the pilot and calibrated point estimators. \textbf{c)} Similar to b), however now we conduct inference for the local false sign rate $\PP{\mu_i \geq 0 \cond X_i=x}$. \textbf{d,e,f)} Analogous results to panels a,b,c) as applied to the Neighborhood dataset~\citep{chetty2018impactsII}.} \label{fig:empirical_applications} \end{figure} Our first dataset is the ``Prostate'' dataset~\citep{efron2012large,singh2002gene}, by now a classic dataset used to illustrate empirical Bayes principles. The dataset consists of microarray expression-level measurements for $m = 6033$ genes of $52$ healthy men and $50$ men with prostate cancer. For each gene, a t-statistic $T_i$ is calculated (based on a two-sample equal-variance t-test) and finally $z$-scores ($X_i$'s in our notation) are calculated as $X_i = \Phi^{-1}(F_{100}(T_i))$, where $\Phi$ is the standard Normal CDF and $F_{100}$ is the CDF of the t-distribution with $100$ degrees of freedom.
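The quantile transformation just described maps each t-statistic to a $z$-score whose null distribution is exactly standard Normal; a direct transcription (the function name is ours, and the 100 degrees of freedom follow the original analysis):

```python
from scipy import stats

def t_to_z(t, df=100):
    """z-score via the quantile transformation Phi^{-1}(F_df(t))."""
    return stats.norm.ppf(stats.t.cdf(t, df))
```

Because the t-distribution has slightly heavier tails than the Normal, large t-statistics are pulled mildly toward zero by this map, while the transformation is exact under the null.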
We consider inference for both the posterior mean $\EE{\mu_i \mid X_i=x}$ and the local false sign rate $\PP{\mu_i \geq 0 \mid X_i=x}$ and report pointwise $90\%$ confidence intervals. Both quantities are of considerable scientific interest. To motivate the first one, suppose we observe $X_i \gg 0$. It is well known that, due to selection bias, the estimate $\hat{\mu}_i = X_i$ is likely to be biased away from zero; however, as discussed in \citet{efron2011tweedie}, the oracle Bayesian posterior means $\EE{\mu_i \mid X_i}$ act as estimates of $\mu_i$ that are immune to selection bias. The standard empirical Bayes approach provides point estimates of these oracle quantities by sharing information across genes, but the empirical Bayes estimation error may be rather opaque and so it is not clear to what extent the estimates $\hEE{\mu_i \mid X_i}$ eliminate selection bias. In contrast, by using our confidence intervals, we can conservatively estimate $\EE{\mu_i \mid X_i}$ by the lower end of the corresponding confidence bands. Similarly, confidence bands for the local false sign rate could be used to assess the reliability of a given $z$-score: Are we confident that we got the correct direction of the effect? Results of the analysis are shown in the top three panels of Figure~\ref{fig:empirical_applications}. As expected, the confidence bands are considerably wider for the local false sign rate than for the posterior mean. However, in both cases, we find that our bias-aware confidence intervals are short enough to allow for qualitatively meaningful conclusions, e.g., we can be quite confident that the probability that $\mu_i \geq 0$ when $X_i = -3$ is less than 20\%. \subsection{The Impact of Neighborhoods on Socioeconomic Mobility} Our second application is motivated by an example of \citet{abadie2018choosing} based on a dataset of \citet{chetty2018impactsII}.
\citet{chetty2018impactsII} consider $m=595$ commuting zones (``Neighborhoods'') and provide estimates of the causal effect of spending 1 year there as a child on income rank at age 26 (conditional on parent income rank being in the 25th percentile). In particular, for each zone the authors report an effect size estimate $\htau_i$, as well as an estimate of the standard error $\hat{\sigma}_i$. Here we specify $X_i = \htau_i / \hat{\sigma}_i$ for each $i$, such that $ X_i \approx \nn\p{\mu_i, \, 1}$, where $\mu_i = \tau_i / \sigma_i$, and $\tau_i$ and $\sigma_i$ correspond to the true effect size and standard error. Then, as above, we use our approach to make inferences about the distribution of $\mu_i$ conditionally on observing $X_i = x$. Results are shown in the bottom panels of Figure~\ref{fig:empirical_applications}. Interestingly, the effect size distribution for the neighborhood data appears to be shifted to positive values and not to be concentrated around zero, while for the prostate data it appears to be more symmetric and concentrated around zero. As~\citet{abadie2018choosing} observe, there is no a priori reason to expect sparsity around zero for the neighborhood data, as zero has no special role beyond that of normalization, which sets the average effect to zero---in contrast, for gene differential expression studies, one generally expects most effects to be null, or close to null~\citep*{efron2001empirical}. The upshot is that we can now make stronger claims about the local false sign rate than in the genetics example when $X_i$ is positive: For the prostate data we infer with confidence that the local false sign rate is less than or equal to $0.2$ for $x \geq 2.8$, whereas for the neighborhood data we can already make this claim for $x \geq 2.0$.
\section{Discussion} \label{sec:discussion} We have presented a general approach to building confidence intervals that explicitly account for estimation bias, both for empirical Bayes estimands defined in the hierarchical Gaussian model~\eqref{eq:EB} and for linear functionals \eqref{eq:lin_functional} in the associated deconvolution model. Here, we focused on a handful of empirical Bayes estimands in the Gaussian model; our approach, however, allows for several extensions. First, while we have considered inference for empirical Bayes estimands $\EE[G]{h(\mu_i) \mid X_i=x}$, our methodology is also applicable to tail (rather than local) empirical Bayes quantities, such as the tail (marginal) false sign rate $\PP[G]{\mu_i \leq 0 \mid X_i \geq x}$ as considered in, e.g., \citet{yu2019adaptive}. Furthermore, model~\eqref{eq:EB} can be substantially extended as follows~\citep{efron2016empirical}: \begin{equation} \label{eq:general_EB} \theta_i \sim G, \ \ X_i \sim p_i(\cdot \cond \theta_i). \end{equation} This entails two extensions: first, we allow the likelihood $p_i(\cdot \cond \theta_i)$ to vary across $i$. For example, we could consider the Gaussian location model with per-observation noise standard deviation $\sigma_i$, so that $\theta_i = \mu_i$ and $p_i(\cdot \cond \theta_i) = \nn\p{\theta_i, \sigma_i^2}$, as considered, e.g., by \citet*{weinstein2018group}. One way to deal with this would be to solve problem~\eqref{eq:minimax_problem_tractable1} separately for each $i$ to get $Q_i(\cdot)$ and use $\sum_{i=1}^m Q_i(X_i)/m$ as the final estimator. This is computationally feasible only if $\cb{\sigma_i \cond i \in \cb{1,\dotsc,m}}$ does not take too many values or can be appropriately binned. Another approach would be to treat the $\sigma_i$ as i.i.d. and random, and to computationally derive bivariate functions $Q(X_i, \sigma_i)$. The second extension entails the use of likelihoods other than Gaussian.
For example, we could consider the Poisson compound decision problem~\citep*{robbins1956empirical,brown2013poisson} in which $p_i(\cdot \cond \theta_i) = \Poisson{\lambda_i}$, where $\lambda_i = \theta_i$ or $\lambda_i = \exp(\theta_i)$. As another example, we could consider truncated Gaussian likelihoods, to account for selection bias~\citep{hung2018}. \citet{koenker2017rebayes} discuss further likelihoods that come up in empirical Bayes applications. Extensions of our methodology to other likelihoods amount to per-case constructions of the marginal distribution localization. An important consideration for the practical adoption of the MCEB intervals is the sensitivity to the non-parametric specification of the class of effect size distributions $\gcal$. The latter enables the evaluation of the potential worst-case bias and thus valid inference. Depending on the target of inference, the width of the confidence intervals (and the point estimates) may vary substantially (e.g., for the local false sign rate) or remain relatively stable (e.g., for the posterior mean). For the Normal mixture class $\mathcal{G}\mathcal{N}(\tau^2,[-K,K])$ in~\eqref{eq:normal_mixing_class}, one can empirically check this dependence by rerunning the analysis for different values of $\tau$. What range of $\tau$'s should one scan through for the sensitivity analysis? An upper bound can be derived from the fact that marginally the variance of the $X_i$'s must be at least $1+\tau^2$. However, it is not possible to identify a lower bound based on the data~\citep{donoho1988one}, since the sets are nested as $\mathcal{G}\mathcal{N}(\tau^2,[-K,K]) \subset \mathcal{G}\mathcal{N}(\tau'^2,[-K,K])$ for $\tau' < \tau$. And indeed, as $\tau \to 0$, the width of bands for the local false sign rate will often tend to $1$. Thus, the only way to obtain practically meaningful results is by the analyst choosing a range of $\tau$'s that appear to be plausible. 
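The variance bound just discussed can be turned into a simple data-driven upper limit for the sensitivity scan. A minimal sketch, under the model $X_i = \mu_i + Z_i$ with $Z_i \sim \nn(0,1)$, in which $\operatorname{Var}(X_i) \geq 1 + \tau^2$ for any $G \in \mathcal{G}\mathcal{N}(\tau^2,[-K,K])$ (the function name is ours):

```python
import numpy as np

def tau_upper_bound(X):
    """Largest tau consistent with the marginal moment bound Var(X) >= 1 + tau^2.

    Clips at zero when the sample variance falls below the noise variance.
    """
    excess = np.var(X, ddof=1) - 1.0
    return np.sqrt(max(excess, 0.0))
```

As emphasized above, no analogous data-driven lower bound exists, so the bottom of the scanned range must come from the analyst's subject-matter judgment.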
Finally, especially when estimating the local false sign rate and related quantities, it is important to recall that the minimax estimation error for $\theta(x)$ decays extremely slowly (often poly-logarithmically) with sample size \citep{butucea2009adaptive,pensky2017minimax}. Unlike in classical estimation problems, we cannot expect to make our confidence intervals meaningfully shorter by, say, collecting 100 times more data than we have now. Thus, from a practical point of view, it may be helpful to interpret our confidence intervals as partial identification intervals of the type proposed in \citet{imbens2004confidence}, and to accept a certain amount of estimation uncertainty that cannot be eliminated with any reasonable sample sizes. From this perspective, the amount of smoothness we are willing to assume about the class of effect size distributions $\gcal$ determines the accuracy with which we can ever hope to learn $\theta(x)$, and the sensitivity analysis discussed above is closely aligned with recommendations for applications with partially identified parameters \citep{armstrong2018optimal,imbens2017optimized,rosenbaum2002observational}. \subsection*{Software} We provide reproducible code for all numerical results in the following GitHub repository: \url{https://github.com/nignatiadis/MCEBPaper}. A software package implementing the method is available at \url{https://github.com/nignatiadis/MinimaxCalibratedEBayes.jl}. The package has been implemented in the Julia programming language~\citep*{bezanson2017julia} and depends, among others, on the packages JuMP.jl~\citep{DunningHuchetteLubin2017}, ApproxFun.jl~\citep{Olver2014approxFun}, Distributions.jl~\citep{besanccon2019distributions} and EBayes.jl~\citep{ignatiadis2019covariate}. \subsection*{Acknowledgment} This paper was first presented on May 24th, 2018 at a workshop in honor of Bradley Efron's 80th birthday.
We are grateful to Timothy Armstrong, Bradley Efron, Guido Imbens, Panagiotis Lolas, Michail Savvas, Paris Syminelakis, Han Wu and seminar participants at several venues for helpful feedback and discussions, and acknowledge support from a Ric Weiland Graduate Fellowship, a gift from Google, and National Science Foundation grant DMS-1916163. \bibliographystyle{plainnat}
\section{Introduction} Galaxy morphology in the local universe provides significant information about the formation and evolution of galaxies. Massive galaxies in the nearby universe are well described by the Hubble sequence, which correlates with the dominance of the galaxy's central bulge, surface brightness and colors. Hubble types are also broadly correlated with physical parameters, such as the star formation rate and dynamical properties \citep{rob94, bla03}. In the classical picture, late-type spiral galaxies are active star--forming structures with flattened, gas-rich, rotationally supported exponential disks, while early-type galaxies are more luminous, massive and quiescent systems, supported by stellar velocity dispersion, with smooth elliptical isophotes and a so--called ``de Vaucouleurs'' (or similar) light profile. In a color-magnitude (or mass) diagram of the local universe, the early- and late-type galaxies occupy two distinct regions, known as the red sequence and blue cloud, respectively (Blanton et al. 2003, Baldry et al. 2004). The red sequence consists of mostly non-star-forming galaxies with a bulge-dominated structure and colors indicative of an old stellar population. In contrast, the galaxies lying in the blue cloud have different star formation properties, including blue star-forming stellar populations and mostly disk-like structures. The key question of how and over what time--scale the Hubble Sequence has formed remains unanswered. At a basic level, exploring the origin of the Hubble sequence can simply be done by investigating whether high-redshift galaxies have distributions of morphological types (early- and late-type) and star-forming properties that resemble those in the nearby universe. Several surveys that have used the Hubble Space Telescope (HST) have observed that the properties of galaxies at $z\sim1$ are broadly consistent with those in the local universe (Bell et al. 2004: GEMS, Papovich et al. 2005: HDFN, Cassata et al.
2007: COSMOS). However, the morphological analysis is still controversial at the peak epoch of star-formation activity ($z\sim2-3$). Until relatively recently, most studies of galaxy morphologies at $z>2$ have been performed at rest--frame ultraviolet (UV) wavelengths using optical imagers (such as HST/WFPC2 and HST/ACS). These works found that irregular or peculiar structures appear more common, and traditional Hubble types do not appear to be present at these epochs \citep{gia96a,gia96b, ste96, low97, lot04, pap05, lot06, rav06, law07, con08}. This is generally explained by the fact that UV radiation predominantly traces emission from star-forming regions \citep{dic00}, which tend to be more clumped and irregularly distributed than older stellar populations, and also by the fact that quenched galaxies were missing from the optical images. The rest-frame optical regime is a better probe of the overall stellar distribution in galaxies, and early near--infrared (NIR) observations with {\it HST}/NICMOS of star--forming galaxies at $z>2$ from UV-selected samples found that their morphology remains generally compact and disturbed also at rest-frame optical wavelengths, bearing no obvious morphological similarities to lower-redshift galaxies \citep{pap05, con08}. Interestingly, however, \cite{kri09} showed that 19 spectroscopically confirmed massive galaxies ($>10^{10.5}M_{\odot}$) at $z\sim 2.3$ are clearly separated into two classes: a blue cloud of large star-forming galaxies and a red sequence of compact quiescent galaxies. Unlike late-type galaxies, early-type galaxies (ETGs) have been used to investigate the cosmic history of massive galaxies in many studies (e.g., Renzini 2006, and references therein) due to their simple elliptical morphologies and passively evolving stellar populations.
At $z<1$, there is general consensus that the majority of massive ETGs ($M > 10^{11}M_{\odot}$) were already in place at $z\sim 0.8$, with a number density comparable to that of local galaxies \citep{cim02, im02}. A number of studies have reported the emergence of massive and compact galaxies by $z\sim 2-3$, which are already quenched ETGs \citep{cim04, dad05, tru06, tru07, van08, cas08, val10, cas10}. The number density of these galaxies rapidly increases, by a factor of five, from $z\sim2$ to $z\sim1$, and they are up to 5 times more compact in size than local ones with similar mass \citep{cas11, cas13}. Recent works have suggested, however, that a large fraction of, and possibly even all, massive, quiescent galaxies at $z\sim2$ are disk-dominated \citep{vdw11}. While the observation of such disks at $z>2$ is based on morphological analysis alone, typically the distribution of apparent elongations, with no dynamical measures at present, the existence of a sizeable fraction of compact disks at $z>2$ among massive, passive galaxies \citep{van11,wan12,bru12} suggests that the physical mechanism responsible for quenching star-formation may be distinct from the process responsible for morphological transformation. Recently, studies of the morphologies of $z\sim2$ galaxies have advanced using the high-resolution NIR Wide-Field Camera 3 (WFC3). \cite{szo11} found a variety of galaxy morphologies, ranging from large, blue, disk-like galaxies to compact, red, early-type galaxies at $z\sim2$ in a sample of 16 massive galaxies in the Hubble Ultra Deep Field (HUDF).
\cite{cam11} also studied the rest-frame UV and optical morphologies of galaxies at $1.5 <z<3.5$ selected by the YHVz color-color technique in the HUDF and the Early Release Science (ERS) field, and confirmed previous studies by showing in particular the presence of regular disk galaxies, which had been missed in previous studies, either because they were not detected at the available sensitivity or because their appearance is irregular at rest-frame UV wavelengths. The results from these studies are generally interpreted as possible evidence at $z\sim 2$, at least among the brightest galaxies at that epoch, of the general correlations between spectral types and morphology that today define the Hubble Sequence. The most important limitations of these works are that they are based on very small samples, which are not statistically significant (fewer than 20 galaxies) and not homogeneously selected, and their morphological analysis was restricted to only $S\acute{e}rsic$ profile fitting or visual classifications. Significant improvements are now possible using larger samples of panchromatic images from the CANDELS (Cosmic Assembly Near-infrared Extragalactic Legacy Survey) observations. \cite{wuy11} investigated how the structure of galaxies ($S\acute{e}rsic$ index and size) depends on galaxy position in the SFR--stellar mass diagrams since $z\sim2.5$, specifically showing strong trends of specific star formation rate (SSFR) with $S\acute{e}rsic$ index using large data sets combined in four different fields: COSMOS, UDS, and GOODS-South and -North. Although the $S\acute{e}rsic$ index, measured by fitting a single $S\acute{e}rsic$ profile to a galaxy, is the most common approach to analyzing galaxy morphology, it is also useful to study morphologies with non-parametric measures such as Gini ($G$) \citep{abr03}, $M_{20}$ (Lotz et al. 2004), multiplicity ($\Psi$) (Law et al. 2007) and CAS (Abraham et al. 1996, Conselice et al.
2003), since not all galaxies are described by smooth and symmetric profiles. In a recent CANDELS paper, \cite{wan12} used Gini ($G$), $M_{20}$ and visual classifications to identify a correlation between morphologies and star-formation status at $z\sim2$, showing two distinct populations: bulge-dominated quiescent galaxies, and disky or irregular star-forming galaxies, though they only used massive galaxies ($M>10^{11}M_{\odot}$). Recent panchromatic surveys such as CANDELS hold the promise of significant progress in investigating galaxy structures at high redshift because they combine sensitive {\it HST} morphology at rest-frame UV and optical wavelengths with the depth and accuracy of space--borne photometry. The CANDELS project also adds coverage of a substantial amount of sky, which results in samples whose size and dynamic range in mass are about one order of magnitude, or more, larger than in previous works. In this study, we extend previous results using a statistically significant sample (1,671 galaxies) down to a lower mass limit ($M > 10^{9}$ M$_{\odot}$ at $z\sim 1.4$ and $10^{10}$ M$_{\odot}$ at $z\sim 2.5$, specifically for passive galaxies) and using various morphological parameters (non-parametric diagnostics such as $G$, $M_{20}$, $\Psi$, Concentration (C) and Asymmetry (A), and the parametric $S\acute{e}rsic$ light profiles, characterized by the $S\acute{e}rsic$ index (n) and half-light radius $R_{e}$), as well as visual inspection. The combination of high--angular resolution and sensitivity afforded by the {\it HST}/ACS and WFC3 images with the relatively large size of the sample allows us to probe in a statistical fashion the correlations between galaxy structure and star-formation activity at $z\sim2$, i.e., the epoch when the cosmic star--formation activity reached its peak, and to compare them to their counterparts in the local universe. The structure of this paper is as follows.
The optical and infrared data and selection of our galaxy sample are introduced in Section 2. The rest-frame color and mass distributions are described in Section 3. We present the analysis of galaxy morphologies in the rest-frame optical using the distributions of the non-parametric diagnostics, as well as the $S\acute{e}rsic$ index and half-light radius, in Section 4. Comparison with galaxies from the local universe is shown in Section 5, and the results of a comparison of rest-frame UV morphologies with rest-frame optical are presented in Section 6. Finally, we discuss our results and compare them to other studies in Section 7, and summarize our results in Section 8. \section{Data and Sample Selection} All the data used in this work come from the observations acquired during the GOODS (Giavalisco et al.\ 2004) and CANDELS (Grogin et al.\ 2012; Koekemoer et al.\ 2012) projects in the GOODS--South field, including both space--borne (Chandra, Hubble and Spitzer) as well as ground--based (VLT) data. The CANDELS {\it HST} observations, including the details of the data acquisition, reduction and calibration, source identification and photometry extraction, are thoroughly described elsewhere (see Grogin et al.\ 2012; Koekemoer et al.\ 2012; Guo et al.\ 2013); here we briefly review key features of the WFC3 images that are relevant to this work. The {\it HST} component of CANDELS consists of a Multi-Cycle Treasury program to image five distinct fields (GOODS-North and -South, EGS, UDS and COSMOS) using both WFC3 and ACS. The whole project is organized as a two--tier Deep+Wide survey. The CANDELS/Deep survey covers about 125 square arc minutes to $\sim 10$-orbit depth within GOODS-North and -South \citep{gia04} at F105W(Y), F125W(J) and F160W(H), while the Wide survey covers a total of $\sim 800$ square arc minutes to $\sim 2$-orbit depth within all five CANDELS fields.
In this study, we use the 4-epoch (about 2 orbits) CANDELS F160W (H-band) imaging that covers about 115 square arc minutes ($\sim 2/3$ of the whole GOODS-S) including the GOODS-S deep region, plus the ERS \citep{win11}. This survey has a $5\sigma$ limiting depth of $H_{AB} = 27.7$, and a drizzled pixel scale of $0.06"$. A number of photometric catalogs exist based on CANDELS data, and here we use one where sources have been detected using the package SExtractor \citep{ber96} in the WFC3 H--band images, and multi--wavelength photometry has been obtained using a software package with an object template-fitting method (TFIT, Laidler et al. 2007). This catalog includes photometry from the {\it HST}/ACS and WFC3 images in the BVizYJH bands; from VLT/VIMOS U and VLT/ISAAC Ks images; and from the Spitzer/IRAC images at 3.6, 4.5, 5.8 and 8.0 $\mu$m (Guo et al.\ 2013). We identify galaxies at $1.4< z\le 2.5$ with a broad range of star--formation properties, from passive to star forming, and with different levels of dust obscuration using photometric redshifts and SSFR estimated by fitting the CANDELS broad-band rest--frame UV/optical/NIR spectral energy distribution (SED) to spectral population synthesis models (hereafter, SED sample). Additionally, for comparison, we also select samples of galaxies using the BzK technique, a color selection based on the (B-z) vs. (z-K) color-color diagram, widely used to identify galaxies at $z\sim2$ relatively independently of their star--formation activity and dust obscuration properties \citep{dad04,dad07}. While characterized by some contamination by AGNs and low--redshift interlopers, as well as incompleteness to very young and dust--free star--forming galaxies (see \cite{dad04}), the BzK selection is overall quite effective and particularly economic in that it only requires the acquisition of three photometric bands.
In contrast, the SED selection is observationally much more expensive because it requires a large number of photometric bands to yield robust photometric redshifts as well as robust measures of the stellar population properties, i.e. stellar mass, star--formation rate, dust obscuration and age of the dominant stellar population. For the same reason, however, it is less prone to the effects of photometric scatter and characterized by a higher degree of completeness than the BzK criterion. In view of the fact that in CANDELS the two GOODS fields have deeper and fully panchromatic images relative to the other fields of the survey, here we use the SED sample as our primary data set for our study, and compare it with the BzK sample to test if they yield similar conclusions about the general morphological properties of the galaxy mix at $z\sim 2$. Such a comparison, which at this level of sensitivity can only be made in the GOODS fields, is particularly useful for those other fields where data for selecting galaxies by means of SED fitting are not available or do not have sufficient wavelength coverage and/or sensitivity for accurate results. In our particular case, since the BzK sample is based on the ground--based K-band images, which are significantly shallower ($5\sigma$ limiting magnitude of Ks=24.4) than the WFC3 images, the depth, and hence the size, of the sample is smaller than the SED one. However, since the efficiency and simplicity of the BzK selection criteria offer a distinct advantage in other fields of the CANDELS survey, where the rich complement of photometry of the GOODS--South field is not available, the knowledge of the relative performance and possible limitations of both selection criteria will be very useful. \begin{figure*} \begin{center} \epsscale{0.9} \plotone{fig1.eps} \caption{\small (z-K) vs. (B-z) diagram for the z-band selected sources in the GOODS-South field of the HST/ACS [black points] with $S/N_{k} > 7$ and $S/N_{z} > 10$.
Sources above the dotted line are classified as the star-forming galaxies [sBzKs] and sources between the dotted and dashed lines are the passively evolving galaxies [pBzKs]. 1,043 BzK galaxies are detected in the WFC3/F160W (H-band) observations, using epoch 4 of CANDELS. The blue and red circles identify these 981 sBzKs and 62 pBzKs, respectively [BzK sample]. } \label{fig:bzk} \end{center} \end{figure*} \subsection{The Galaxy Mix at $z\sim 2$: Photometric Redshift and SED--fitting selection} Measures of the stellar mass, star--formation rate, dust obscuration and age of the dominant stellar population have been obtained by Guo et al.\ (2013) using the TFIT panchromatic catalog of the GOODS--South field (see also Guo et al.\ 2011, 2012, where key results and features of the SED fitting have been discussed). Prior to carrying out the SED fitting, photometric redshifts have been measured for all galaxies from the 13--band UBVizYJHKsI$_1$I$_2$I$_3$I$_4$ photometry using the PEGASE 2.0 spectral library templates \citep{fio97}, as well as the available sample of 152 spectroscopic redshifts (about 4\% of our final sample) as a training set. In the redshift range considered here the CANDELS photometric redshifts are of good quality, as verified by comparing them against available spectroscopic ones. This comparison yields a mean and scatter ($1\sigma$ deviation after $3\sigma$ clipping) in our photometric redshift measurement of 0.0005 and 0.03, respectively. The properties of the dominant stellar populations are subsequently derived by fitting the observed SED to the spectral population synthesis models by fixing the redshift to the photometrically derived value and using the redshift probability function, $P(z)$, to calculate the errors from a Monte Carlo bootstrap. The multi--band photometry is fit to the updated version of the \cite{bru03} spectral population synthesis library with a Salpeter initial mass function.
We use either a constant star formation history or an exponentially declining one ($e^{-t/\tau}$), depending on which function results in a smaller $\chi^{2}$ with the data. The Calzetti law \citep{cal00} is used for the dust obscuration model together with the \cite{mad95} prescription for the opacity of the intergalactic medium (IGM) (see Guo et al.\ 2013 for a full description of the procedure). In the redshift range $1.4 < z\le 2.5$, arbitrarily (but inconsequentially) chosen to reproduce that of the BzK selection criteria (see \cite{dad07}), the photo--z plus SED fitting procedures yield 3,542 galaxies with signal to noise ratio in the H-band $(S/N)_{H}>10$ (hereafter, SED sample). Star--forming and passive galaxies are defined based on the measure of the SSFR, namely the ratio of the star--formation rate to the stellar mass. Specifically, we define passive galaxies as those with $SSFR<0.01~Gyr^{-1}$, and using this criterion we find 105 passive galaxies and 3,437 star--forming ones out of the 3,542 comprising the SED sample. Thus, with our cut on the SSFR, 3\% of all galaxies at $z\sim 2$ are classified as passive. \subsection{The Galaxy Mix at $z\sim 2$: The BzK Selection} We have constructed the BzK sample by adopting the BzK color--color selection by \cite{dad04}, where galaxies of various ``spectral types'' are identified by their position in the $(B-z)$ versus $(z-K)$ color-color diagram. The BzK selection is widely used to investigate the evolution of galaxies at $z\sim2$ \citep{dad05, red05, dad07,lin12, fan12, yum12}. To the extent that the average obscuration properties of the star--forming galaxies are well described by the Calzetti (2000) obscuration law, this rest UV/Optical color selection is sensitive to galaxies at $1.4<z\le2.5$ with a significantly broader range of dust obscuration than the UV selection alone (e.g. Reddy et al. 2009, 2010; also Guo et al. 2011 for a discussion).
It is also sensitive to passively evolving galaxies in a similar redshift range, which the UV selection misses altogether. As in any selection of distant galaxies that is based on colors, however, the details of the redshift distribution of the selected galaxies depend on the relative sensitivity of the images and the shape of the adopted bandpasses. \begin{figure} \begin{center} \epsscale{1.0} \plotone{fig2.eps} \caption{\small Comparison of the photometric redshift distribution of the BzK sample to that of the SED sample (Top) and the number counts of the SED sample, of the two BzK samples, and also of the whole H--band selected TFIT sample for comparison (Bottom). The thick solid line represents the H-band selected TFIT sample (i.e. the SED sample without redshift limitation), and the dotted line represents the ``Pure BzK''. The redshift window, $ 1.4 < z \le 2.5$, is indicated by vertical dashed lines in the top panel. In the bottom panel, the dotted and dash-dotted lines are for the BzK sample (BzKs at $z\sim2$) and the SED sample, respectively. } \label{fig:numcount} \end{center} \end{figure} The original BzK criteria were implemented using a sample where source detections were carried out in the K-band images, since these had sufficient sensitivity and were such that every galaxy detected at least in the z-band was detected in the K one with higher S/N. Since this is not the case with our data, where the {\it HST}/ACS z-band image is much deeper than the ground--based VLT/ISAAC Ks band image even for the reddest SED considered here, we construct our BzK samples from the ACS z-band selected source catalog \citep{gia04}, where we further require $(S/N)_{K} > 7.0$ in the K-band and $(S/N)_{z}>10.0$ in the z-band to ensure robust color measurements.
We then use the selection criteria introduced by \cite{dad04} as shown in Figure~\ref{fig:bzk}: \begin{eqnarray} BzK \equiv (z-K)-(B-z) \ge -0.2 \qquad \nonumber \\ \textrm{for star-forming galaxies (sBzKs) and } \nonumber \\ BzK < -0.2 ~~\cap ~~(z-K) > 2.5 \qquad \nonumber \\ \textrm{for passively evolving galaxies (pBzKs)}\nonumber \end{eqnarray} Out of a total of 1,043 BzK galaxies, we find 981 sBzKs (blue circles in Figure~\ref{fig:bzk}) and 62 pBzKs (red circles), namely 6\% of the sample is made of passive galaxies. This fraction is twice as large as the one of the SED sample, and the reason is that the BzK selection defines galaxies as passive solely based on their colors, while in the SED sample galaxies are defined as passive based on the SSFR. If the threshold were defined as SSFR$<0.16$ Gyr$^{-1}$, then the SED sample and BzK would both have the same 6\% fraction of passive galaxies. Finally, it is important to keep in mind that all BzK galaxies are also detected in the WFC3/F160W CANDELS images, which we have used to analyze their rest--frame optical morphology. Compared to the SED selection, the BzK selection is relatively easy and economical to implement, requiring only three photometric bands, and it is largely independent of dust reddening. The combination of photometric scatter (which depends on the sensitivity of the data) and the intrinsic variance of galaxies' SEDs, however, results in some degree of contamination by interlopers (i.e. galaxies that are not in the targeted redshift range) as well as of incompleteness, namely loss of galaxies that are scattered away from the selection windows. For the same reasons, the separation between passive galaxies and star--forming ones suffers from scatter, in the sense that dust--reddened star--forming galaxies might be observed in the selection window of passive ones and vice versa (see, e.g. Daddi et al. 2004, 2005, 2007).
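In practice, these color cuts amount to a simple per-object test. A minimal Python sketch follows (the helper and its array names are illustrative, not part of our pipeline):

```python
import numpy as np

def classify_bzk(B, z, K):
    """Apply the Daddi et al. (2004) BzK color cuts.

    B, z, K: arrays of magnitudes in the three bands.
    Returns an array of labels: 'sBzK', 'pBzK', or 'other'.
    """
    bzk = (z - K) - (B - z)                              # the BzK color index
    labels = np.full(bzk.shape, 'other', dtype=object)
    labels[bzk >= -0.2] = 'sBzK'                         # star-forming window
    labels[(bzk < -0.2) & ((z - K) > 2.5)] = 'pBzK'      # passive window
    return labels

# Illustrative inputs: one blue star-forming galaxy, one red passive candidate
B = np.array([25.0, 28.0])
z = np.array([24.0, 24.0])
K = np.array([23.0, 21.0])
print(classify_bzk(B, z, K))   # ['sBzK' 'pBzK']
```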
\begin{figure*} \begin{center} \epsscale{1.0} \plotone{fig3.eps} \caption{\small The rest-frame U-V color versus stellar mass for galaxies of both BzK (left) and SED (right) samples at $z\sim2$. The crosses and circles represent the star-forming (sBzKs) and passive (pBzKs), respectively. The rest-frame colors of the passive (pBzK) galaxies span a much smaller range than star-forming (sBzK) ones, with the two samples having distinct color distributions. The color-coding reflects the specific star formation rate (SSFR) defined as star formation rate divided by stellar mass (right panel). Most massive pBzKs have $SSFR < 0.01~ Gyr^{-1} $ which means they are rarely star-forming and intrinsically red in contrast to blue sBzKs with higher SSFR and lower mass. Diamond symbols represent the 70 X-ray detected galaxies (50 for the BzK sample).} \label{fig:CM} \end{center} \end{figure*} To diminish the contamination by interlopers from outside the redshift range considered here, and its effect on our morphological analysis, we use the photometric redshifts to restrict our sample to galaxies with $1.4< z\le 2.5$. This further cut serves two purposes. First, it filters away the high--z tail of our BzK sample, which very likely results from the combination of the relatively shallow depth of the ACS B--band images, and the fact that the sample is z--band selected. The cut also serves the purpose of creating our reference cosmic epoch to assess the state of galaxy morphology evolution. This leaves a final BzK sample of 736 galaxies down to $H\sim25$, of which 46 are classified as passively evolving (pBzK), i.e. 6.3\% of the sample, and 690 are star--forming (sBzK) galaxies. We explicitly note that using the photometric redshifts to eliminate likely interlopers has left the passive fraction essentially unchanged. In the following we will refer to this photo--z filtered sample simply as the ``BzK sample'', while the original sample will be called the ``pure BzK'' one.
The top panel of Figure~\ref{fig:numcount} compares the photometric redshift distribution of the BzK sample to that of the whole H--band selected TFIT sample (i.e. the SED sample not restricted by redshift), while the bottom one shows the number counts of the SED sample, of the two BzK samples, and also of the whole H--band selected TFIT sample for comparison. It can be seen from the photometric redshift histograms that the redshift distribution of BzKs is tapered at both ends of the corresponding selection window as a result of the color cuts built into the selection criteria. Clearly, this feature is not present in the SED sample. It can also be seen that the number counts of the SED and BzK samples are very similar in shape, especially at $H < 25$, the former being slightly larger than the latter simply due to the higher completeness and the fact that the redshift distribution is not set by color cuts. Since the magnitude (mass) distributions are not similar, especially at the faint end, we cut the BzK and SED samples at $M>10^{9}M_{\odot}$ ($H\lesssim 26$, above the 90\% completeness limit of the CANDELS $H$ band in the GOODS-S Deep field) to study and compare the morphologies directly. As we shall see later, the morphological properties of the SED and BzK samples are essentially identical, suggesting they are both representative of the mix of bright galaxies at $z\sim 2$. \section{Color-mass diagram at $z\sim2$} In Figure~\ref{fig:CM}, we report the distribution of rest-frame U--V (3730\AA -- 6030\AA) color versus stellar mass for the BzK (left) and SED (right) samples. The blue crosses and red circles represent the star-forming (sBzK) and passive (pBzK) galaxies, respectively. The rest-frame colors are measured using the best-fitting SED template, which is scaled to pass through the closest observed photometric point for each rest-frame wavelength we consider to derive the k-correction and subsequently the rest-frame magnitude.
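A simplified sketch of this template-scaling step is given below, assuming a top-hat stand-in for the rest-frame bandpass and a flat demonstration template; all function and array names here are illustrative, not the actual fitting code:

```python
import numpy as np

def restframe_mag(wave_rf, flux_tpl, obs_wave, obs_flux, band_lo, band_hi, zphot):
    """Rest-frame AB magnitude from a best-fit template (schematic).

    wave_rf, flux_tpl : rest-frame template wavelength grid and f_nu shape
    obs_wave, obs_flux: observed-frame effective wavelengths and f_nu fluxes
                        (cgs units, erg s^-1 cm^-2 Hz^-1)
    band_lo, band_hi  : rest-frame bandpass edges (top-hat approximation)
    zphot             : photometric redshift
    """
    band_cen = 0.5 * (band_lo + band_hi)
    # Scale the template through the observed point closest in wavelength
    # to the redshifted band center (the k-correction step in the text).
    i = np.argmin(np.abs(obs_wave - band_cen * (1 + zphot)))
    j = np.argmin(np.abs(wave_rf * (1 + zphot) - obs_wave[i]))
    scale = obs_flux[i] / flux_tpl[j]
    in_band = (wave_rf >= band_lo) & (wave_rf <= band_hi)
    fnu = np.mean(scale * flux_tpl[in_band])   # mean f_nu in the rest band
    return -2.5 * np.log10(fnu) - 48.6         # AB magnitude zero point

# Flat-spectrum sanity check: a source with f_nu = 3631 Jy (3.631e-20 cgs)
# in every band should have m_AB ~ 0 in any rest-frame band.
wave_rf = np.linspace(1000.0, 10000.0, 2000)
flux_tpl = np.ones_like(wave_rf)
m_U = restframe_mag(wave_rf, flux_tpl, np.array([11190.0]),
                    np.array([3.631e-20]), 3550.0, 3910.0, 2.0)
# m_U is ~0 for this flat 3631 Jy spectrum
```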
Figure~\ref{fig:CM} shows that there is a distinct difference in the color-mass diagram between star-forming and passive galaxies. In both our samples, the colors of passive galaxies (pBzKs) span a much smaller range than those of star-forming ones (sBzKs). An important question to answer, therefore, is whether or not the limited excursion of the intrinsic colors of the pBzKs is simply the result of their selection and not due to the characteristics of their SED. After all, these galaxies are selected specifically to be red, namely to have the observed colors expected from passively evolving galaxies (or galaxies with a relatively small specific star formation rate) observed at $1.4 < z \le 2.5$. To test this possibility, we have compared the scatter of the observed colors and of the intrinsic colors of our pBzK sample in bins of both apparent and absolute magnitude, which is shown in Figure~\ref{fig:p_sigma}. As can be seen, the scatter {\it always} increases when going from the intrinsic colors to the observed ones, as one would expect in a sample with a relatively large dispersion of redshift. Moreover, the pBzKs occupy a significantly smaller range of stellar mass, and, at the same time, the two types occupy a disjoint range of SSFR (color-coding of points in Figure~\ref{fig:CM}). Taken together, this is evidence that while pBzKs are selected to be red, thus covering a restricted range of both the {\it observed} B-z and z-K colors, their {\it intrinsic} colors are all very similar, since they span a range significantly smaller than the observed ones, denoting a similarity of physical properties. This conclusion is further reinforced by their small range of mass, since the color selection does not in principle set any restrictions on the stellar mass.
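The scatter comparison described above reduces to per-bin standard deviations. Schematically (with synthetic inputs standing in for our photometry; names are illustrative):

```python
import numpy as np

def color_scatter_in_bins(mag, color, edges):
    """Standard deviation of a color in magnitude bins.

    mag   : binning magnitude (apparent H or absolute log L_H)
    color : observed (B-z, z-K) or rest-frame (U-V) color
    edges : bin edges in magnitude
    """
    idx = np.digitize(mag, edges)
    return np.array([color[idx == k].std() if np.any(idx == k) else np.nan
                     for k in range(1, len(edges))])

# Synthetic check: redshift/photometric spread inflates the observed scatter
rng = np.random.default_rng(1)
mag = rng.uniform(20.0, 24.0, 500)
intrinsic = rng.normal(1.8, 0.05, 500)             # tight rest-frame U-V
observed = intrinsic + rng.normal(0.0, 0.2, 500)   # extra observed-color scatter
edges = np.array([20.0, 22.0, 24.0])
s_int = color_scatter_in_bins(mag, intrinsic, edges)
s_obs = color_scatter_in_bins(mag, observed, edges)
print(np.all(s_obs > s_int))   # expected: True
```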
\begin{figure} \begin{center} \epsscale{1.0} \plotone{fig4.eps} \caption{\small The observed colors (B-z and z-K) and rest-frame color (U-V) versus apparent ($H_{AB}$) and absolute magnitudes ($log(L_{H})$) for pBzK galaxies. The $\sigma$ values and error bars represent the standard deviation of the colors.} \label{fig:p_sigma} \end{center} \end{figure} We note that although our star-forming and passive samples separate well in Figure~\ref{fig:CM}, a small fraction of massive star-forming galaxies overlap with the passive ones. The majority of those massive star-forming galaxies are red because their UV colors are reddened by dust, and similar trends are observed in the red sequence at $z\sim 0$ \citep{bal06}. In the BzK sample, 11 red sBzKs with $SSFR<0.01~Gyr^{-1}$ are all massive ($M > 10^{10}M_{\odot}$) and generally red (rest-frame $U-V>1.3$ except two galaxies with $U-V>1.0$). They are visually characterized by compact structures, with the exception of one having a large size ($R_{e}=3.5$ kpc) and a light profile characteristic of an exponential disk ($n=1.34$). These could be passive galaxies that are not classified as such by the BzK criterion either because of scatter in the photometric measures or because of the galaxies' intrinsic SED variations (see Section 2.2). We find 70 (50 for the BzK sample) X-ray detected galaxies among our sample, marked as diamond symbols in Figure~\ref{fig:CM}. Most of them (86\%: SED sample, 90\%: BzK sample) are star-forming galaxies (sBzKs), and those X-ray detected galaxies are generally massive and compact. We do not exclude them from further study since they also follow a similar trend in the color-mass diagram, and have similar morphologies as the non X-ray detected galaxies.
\section{Morphological Classification Using Non-parametric Approaches} In order to investigate further the morphologies of galaxies within $1.4 < z \le 2.5$, we turn next to several non-parametric morphological diagnostics such as the Gini parameter ($G$), the second-order moment of the brightest 20\% of the galaxy pixels ($M_{20}$) and the multiplicity parameter ($\Psi$). Many studies have used these parameters to quantify galaxy morphology \citep{lot04, abr07, law07, ove10, law12, wan12} locally and at high redshift, generally showing that they are an effective and automated way to measure galaxy morphologies for large samples. These parameters quantify the spatial distribution of galaxy flux among the pixels, without assuming a particular analytic function for the galaxy's light distribution. Thus they may be a better characterization of the morphology of irregulars, as well as standard Hubble-type galaxies (Lotz et al. 2004, hereafter LPM04). Before measuring these parameters, we need to identify the pixels belonging to each galaxy. For each galaxy, we calculate the ``elliptical Petrosian radius'', $a_{p}$, which is defined like the Petrosian radius \citep{pet76} but uses ellipses instead of circles (LPM04). We use the segmentation map generated by SExtractor when making the H-band detections \citep{guo13}, and use the ellipticities and position of peak flux determined by SExtractor for each galaxy. We then set the semi-major axis $a_{p}$ to the value where the ratio of the surface brightness at $a_{p}$ to the mean surface brightness within $a_{p}$ is equal to 0.2. The surface brightness at each elliptical aperture, $a$, is measured as the mean surface brightness within an elliptical annulus from $0.8a$ to $1.2a$. There are 10 galaxies in the SED sample and one sBzK whose images comprise fewer than 28 pixels (corresponding to a circle with a radius of 3 pixels), which we have excluded from further analysis.
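The aperture iteration described above can be sketched as follows (a synthetic circular source for brevity; in our measurements the apertures use the SExtractor ellipticities and centers, and the function below is purely illustrative):

```python
import numpy as np

def petrosian_semimajor(img, x0, y0, q=1.0, eta=0.2, a_max=50.0):
    """Elliptical 'Petrosian' semi-major axis a_p (LPM04-style, schematic).

    Finds the smallest a where the mean surface brightness in the annulus
    [0.8a, 1.2a] drops to eta times the mean surface brightness within a.
    q is the axis ratio b/a (q=1 gives circular apertures).
    """
    y, x = np.indices(img.shape)
    r = np.hypot(x - x0, (y - y0) / q)        # elliptical radius (major axis along x)
    for a in np.arange(2.0, a_max, 0.5):
        annulus = img[(r >= 0.8 * a) & (r < 1.2 * a)].mean()
        inside = img[r < a].mean()
        if annulus / inside <= eta:
            return a
    return np.nan                             # no crossing found within a_max

# Synthetic circular Gaussian source (sigma = 5 pixels)
y, x = np.indices((101, 101))
img = np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / (2 * 5.0 ** 2))
a_p = petrosian_semimajor(img, 50, 50)
print(a_p)
```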
Note again that we use galaxies with $M > 10^{9}M_{\odot}$ from both samples for our morphology analysis, which leaves us with 46 pBzKs and 669 sBzKs, and 104 passive and 1,567 star-forming galaxies of the SED sample. Using the SED and BzK samples with stellar mass $>10^{9}M_{\odot}$, we first compute the $G$ parameter defined in LPM04, which measures the cumulative distribution function of a galaxy's pixel values (light). A $G$ of 1 would mean all light is in one pixel, while a $G$ of 0 would mean every pixel has an equal share. Hence, $G$ is used to distinguish galaxies whose flux is concentrated within a small region from those whose flux is uniformly diffuse. We also compute the $M_{20}$ parameter, which traces the spatial distribution of any bright nuclei, bars, spiral arms, and off--center star clusters. Typically, galaxies with high values ($M_{20} \ge -1.1$) are extended objects with double or multiple nuclei, whereas galaxies with low values ($M_{20} \le -1.6$) are relatively smooth with a bright nucleus (see LPM04 for a detailed explanation of $M_{20}$). The third non-parametric coefficient is the multiplicity $\Psi$ \citep{law07}, designed to discriminate between sources based on how ``multiple'' the source appears. Galaxies with lower $\Psi$ are compact galaxies with generally one nucleus, while irregular galaxies with multiple clumps have higher $\Psi$ \citep{law07} (the definitions of each diagnostic are presented in the above references). \begin{figure*} \begin{center} \epsscale{0.8} \plotone{fig5.eps} \caption{\small Relative distributions of $G$ (top), $M_{20}$ (middle) and $\Psi$ (bottom) for BzK (left) and SED samples (right) having $M > 10^{9} M_{\odot}$. Red and blue histograms represent the pBzK (passive) and sBzK (star-forming) galaxies, respectively. Overall, both samples show similar morphological distributions in the three parameters: passive (pBzK) galaxies have higher $G$ and lower $M_{20}$ and $\Psi$ in contrast to star-forming (sBzK) galaxies.
This is consistent with the passive galaxies being compact and relatively smooth, while the star-forming ones are more extended and have more fine-scale structures.} \label{fig:nonpar} \end{center} \end{figure*} \subsection{Rest-frame Optical Morphology} Figure~\ref{fig:nonpar} shows the relative distribution of the $G$, $M_{20}$ and $\Psi$ for star--forming and passive galaxies of the BzK and SED samples (blue and red histograms, respectively). The $G$ values are mostly in the range $0.3-0.7$ with a mean of 0.43 for star-forming galaxies (0.48 for sBzKs) and 0.53 for passive ones (0.58 for pBzKs). Passive galaxies are shifted to higher $G$ than star-forming ones. The majority of pBzKs (90\%) and about 70\% of passive galaxies have $G>0.5$. The mean values of the $M_{20}$ for star-forming (sBzK) and passive (pBzK) galaxies are -1.47 (-1.54) and -1.68 (-1.73), respectively. The middle panel of Figure~\ref{fig:nonpar} shows that the passive galaxies (pBzKs) have lower values and show a peak at $\sim -1.7$ while the star-forming ones (sBzKs) span a wide range of $M_{20}$ values that is slightly skewed to higher $M_{20}$. Lastly, the $\Psi$ values of star-forming galaxies (sBzKs) have a range of values up to $\sim 5$, but most of the passive galaxies (SED: 90\%, BzK: 94\%) have $\Psi < 1.0$. \cite{law12} find that spectroscopically confirmed star-forming galaxies at z=1.5--3.6 have $\Psi < 1$ for isolated regular galaxies, $1 < \Psi < 2$ for sources that show some morphological irregularities, and larger values for sources having multiple clumps that are separated. Therefore, all passive galaxies (pBzKs) tend to be dominated by one main clump while star-forming ones (sBzKs) can have two or more significant components in addition to a main nucleus. There is some degree of correlation between the $G$ and $M_{20}$, $\Psi$ measurements (see Figure~\ref{fig:dist}).
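For reference, the $G$ and $M_{20}$ statistics used above can be computed from a galaxy's pixel list as in the following sketch (it follows the LPM04 definitions, but approximates the $M_{20}$ center with the flux-weighted centroid, whereas LPM04 minimize the total moment over center position; segmentation and background handling are omitted):

```python
import numpy as np

def gini(flux):
    """Gini coefficient of the pixel flux distribution (LPM04 form)."""
    f = np.sort(np.abs(flux))
    n = f.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * f).sum() / (f.mean() * n * (n - 1))

def m20(flux, x, y):
    """M20: second-order moment of the brightest 20% of the total flux.

    Simplification: uses the flux-weighted centroid as the center.
    """
    xc, yc = np.average(x, weights=flux), np.average(y, weights=flux)
    mu = flux * ((x - xc) ** 2 + (y - yc) ** 2)   # per-pixel second moment
    order = np.argsort(flux)[::-1]                # brightest pixels first
    csum = np.cumsum(flux[order])
    k = np.searchsorted(csum, 0.2 * flux.sum()) + 1
    return np.log10(mu[order][:k].sum() / mu.sum())

# A single concentrated clump: high G and a low (very negative) M20
y, x = np.indices((41, 41))
img = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 2.0 ** 2))
g, m = gini(img.ravel()), m20(img.ravel(), x.ravel(), y.ravel())
print(round(g, 2), round(m, 2))
```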
The passive galaxies (pBzKs) reside in a narrow region with higher $G$ and lower $M_{20}$ and $\Psi$, indicating that they consist of one bright central source. In contrast with passive galaxies (pBzKs), star-forming ones (sBzKs) with lower $G$ have higher $M_{20}$ and $\Psi$ because galaxies with diffuse morphology tend to have a spread-out flux distribution. Figures~\ref{fig:nonpar} and \ref{fig:dist} also show that there is an overlap in the distributions of morphological parameters of star-forming galaxies and passive ones. For example, there are star-forming galaxies with high $G$ and low $\Psi$ or $M_{20}$ that are located in the same region where the bulk of passive galaxies are observed, and vice versa. \begin{figure*} \begin{center} \epsscale{0.8} \plotone{fig6.eps} \caption{\small Distribution of $G$ vs. $M_{20}$ (top) and $\Psi$ (bottom) for BzK (left) and SED (right) samples having $M > 10^{9} M_{\odot}$. In both samples, there are clear morphological differences between passive galaxies and star-forming ones, also seen in Figure~\ref{fig:nonpar}. However, in $G-M_{20}$ and $G-\Psi$ spaces, some star-forming galaxies show a similar morphological trend as passive ones and several passive galaxies in the SED sample have lower $G$ and higher $M_{20}$ and $\Psi$ like star-forming ones. } \label{fig:dist} \end{center} \end{figure*} To illustrate and further explore these galaxies in the overlapped region, we have chosen star-forming galaxies with $G> 0.6$ and passive galaxies with $G<0.5$ for visual inspection and classification. We indeed found that the star--forming galaxies can generally be classified as blue spheroids and the passive ones as red disks. In agreement with \cite{law07}, the 35 star-forming galaxies with high $G$ visually appear as compact structures in Figure~\ref{fig:sfg}.
Note that all these images have a S/N ratio per pixel ($S/N_{pp}$) greater than 2.5, the threshold used in LPM04 for reliable measurements, and most of them (85\%) are relatively bluer than normal passive galaxies. About 40\% of star-forming galaxies in the sample of \cite{law12} were visually classified as such compact structures as well. Among the passive galaxies in the SED sample, 10 have $S/N_{pp}<2.5$, and all have $G<0.5$. This indicates that we cannot measure reliable morphologies for these galaxies due to their low $S/N_{pp}$ ratio. In Figure~\ref{fig:pg}, we show the 25 passive galaxies with $G<0.5$ and $S/N_{pp} >2.5$. Most of them have smooth structures, and some are elongated or have secondary structures. They are intrinsically red with rest-frame $(U-V) > 1.5$, and 16 galaxies are massive ($M>10^{10}M_{\odot}$). Red (passive) disks at high redshift have also been recently studied by other groups. For example, \cite{wan12} found that 30\% of quiescent galaxies in their sample with $M>10^{11} M_{\odot}$ at $1.5\le z \le 2.5$ can be morphologically classified as disks. This is generally consistent with the findings presented here, although we note that due to their low $S/N_{pp}$, some of our ``passive disks'' might actually be morphological mis-classifications or even be dust--obscured star--forming galaxies. Finally, we observe that, overall, the BzK and the SED samples have essentially identical distributions of morphological parameters, although the SED sample includes galaxies with lower $G$, namely those with $S/N_{pp} < 2.5$, which is the result of their lower surface brightness. \begin{figure} \begin{center} \epsscale{0.9} \plotone{fig7.eps} \caption{ \small Postage stamps of 35 star-forming galaxies (in the SED sample) with $G > 0.6$. Each postage stamp is $3.6 \times 3.6~arcsec^2$ and all images have been linearly scaled. The number in each stamp indicates the order of H-band magnitude, i.e. the number 1 galaxy is the brightest one.
As one can see, all star-forming galaxies with high $G$ show spheroid-like structures with a bright clump.} \label{fig:sfg} \end{center} \end{figure} \begin{figure} \begin{center} \epsscale{0.8} \plotone{fig8.eps} \caption{\small Postage stamps of 25 passive galaxies (in the SED sample) with $G < 0.5$ (image stamp size, image scaling and magnitude order follow the same conventions as Figure~\ref{fig:sfg}). Among the 35 passive galaxies with $G<0.5$, 10 galaxies are excluded due to their low signal-to-noise ratio per pixel, $S/N_{pp} < 2.5$. These galaxies are all red and show extended structures, as examples of red (passive) disks.} \label{fig:pg} \end{center} \end{figure} In addition to $G$, $M_{20}$ and $\Psi$, we measure the Concentration ($C$) and Asymmetry ($A$) of our samples. The concentration index $C$ \citep{abr96,con03} measures the concentration of flux. Typical values of $C$ range from $\sim 1$ for the least compact to $\sim5$ for the most compact galaxies. The asymmetry $A$ \citep{sch95, abr96, con00} measures the $180$-degree rotational asymmetry of the galaxy's light distribution, so the most symmetric galaxies have $A=0$. We present the distribution of $C$ and $A$ for the BzK (left) and SED (right) samples in Figure~\ref{fig:cas}. As expected, the passive galaxies (pBzKs) are more similar to ellipticals in their $C$ and $A$ values, while the star-forming ones (sBzKs) are more spiral and merger-like. In the $C-A$ plane, passive galaxies (pBzKs) are different from star-forming ones (sBzKs), but the difference is not as significant as in $G-M_{20}$ and $\Psi$. As in the previous figures, the distributions of $C$ and $A$ for the two samples are similar. \begin{figure} \begin{center} \epsscale{1.1} \plotone{fig9.eps} \caption{\small The plot of Asymmetry [A] vs. Concentration [C] for BzK (left) and SED (right) samples having $M > 10^{9} M_{\odot}$ and the histograms of each parameter.
The passive galaxies (pBzKs) are more spheroid-like in their C and A values, while star-forming galaxies (sBzKs) are more spiral and merger-like. Passive galaxies (pBzKs) are different in the C-A plane from the star-forming ones (sBzKs), although the difference is not large compared to $G-M_{20}$ and $G-\Psi$.} \label{fig:cas} \end{center} \end{figure} In summary, the distributions of the non--parametric morphological diagnostics that we have considered here for both the BzK and SED samples are essentially the same in each spectral type class. Star--forming and passive galaxies clearly show different distributions of non-parametric morphological measures, and they are separated well in $G$--$M_{20}$ and $\Psi$ spaces. Passive galaxies (pBzKs) are mostly compact, spheroidal structures, and the majority of star-forming ones (sBzKs) are somewhat extended or have multiple clumps, similar to disks or irregular galaxies in the local universe. These results agree with those of \cite{wan12}, who also studied the morphologies of massive galaxies ($M > 10^{11}M_{\odot}$) at $z\sim2$ with $G$, $M_{20}$ and visual classifications. They found that the quiescent galaxies are bulge dominated and star-forming galaxies have disks or irregular morphologies, visually as well as in the $G$ and $M_{20}$ analysis. We extend their study with a larger sample down to a lower mass limit, and obtain almost the same conclusion about galaxy morphologies at $z\sim 2$. \begin{figure} \begin{center} \epsscale{1.0} \plotone{fig10.eps} \caption {\small The distribution of morphological parameters as a function of $S\acute{e}rsic$ index ($n$) for the SED sample with $M> 10^{9}M_{\odot}$ in two different mass bins divided by a mass threshold, $M_{th}=10^{10}M_{\odot}$. The color-coding represents the rest-frame (U-V) color of galaxies, and the dotted vertical line is for $n=2.5$.
About 96\% of passive galaxies (circles) are more massive than $M_{th}$, while about 78\% of star-forming ones (crosses) have mass $M < M_{th}$. Star-forming galaxies are dominated by low $n$, especially at $M<M_{th}$, while passive ones mostly have higher $n$. Redder galaxies (mostly passive galaxies) tend to have higher $G$ and lower $M_{20}$ and $\Psi$ in the massive systems.} \label{fig:sersic} \end{center} \end{figure} \begin{figure} \begin{center} \epsscale{1.0} \plotone{fig11.eps} \caption {\small The distribution of morphological parameters as a function of $S\acute{e}rsic$ index ($n$) for the BzK sample with $M> 10^{9}M_{\odot}$ in two different mass bins divided by a mass threshold, $M_{th}= 10^{10}M_{\odot}$. The color-coding represents the rest-frame (U-V) color of the BzK sample, and the dotted vertical line is for $n=2.5$. All pBzKs (circles) are more massive than $M_{th}$, while about 70\% of sBzKs (crosses) have mass $M < M_{th}$. sBzKs are dominated by low $n$, while pBzKs mostly have higher $n$. The morphologies and rest-frame colors are well separated, especially in the massive systems. } \label{fig:sersicb} \end{center} \end{figure} In Appendix A, we investigate the robustness of the non-parametric measures ($G$, $M_{20}$ and $\Psi$), mainly used in this study for morphological analysis, using GOODS-S and the Hubble Ultra Deep Field (UDF) images in the H-band. The UDF overlaps part of the GOODS-S imaging, but goes much deeper (5 $\sigma$ depth of 28.8), and thus offers an opportunity to test the dependence of the parameters on the signal-to-noise per pixel ($S/N_{pp}$). We show that any difference between the two fields, which have different exposure times, is relatively small for all three parameters, with the scatter in the measured properties increasing as the S/N decreases. We find that most ($> 90\%$) of the BzK galaxies have $S/N_{pp} > 2.5$, and $\sim70\%$ of the SED sample have $S/N_{pp} > 2.5$.
We note that we do not exclude galaxies with $S/N_{pp} < 2.5$, since including them barely changes our results in this study. \subsection{$G$, $M_{20}$ and $\Psi$ vs. $S\acute{e}rsic$ Index and $R_{e}$} \begin{figure} \epsscale{1.0} \plotone{fig12.eps} \caption{\small The distribution of morphological parameters as a function of half-light radius ($R_{e}$) for the SED sample with $M> 10^{9}M_{\odot}$ in two different mass bins divided by a mass threshold, $M_{th}= 10^{10}M_{\odot}$. Open symbols show the galaxies with $n < 2.5$ and filled symbols show galaxies with $n > 2.5$. Star-forming and passive galaxies are shown in blue and red, respectively. Overall, star-forming galaxies tend to have larger effective radii than passive ones, even in the case of massive systems, $M > M_{th}$, and half of the passive galaxies show very compact morphologies, with $R_{e} < 1$ kpc. Galaxies with smaller $R_{e}$ tend to have a compact structure with high $G$ and low $\Psi$. } \label{fig:re} \end{figure} \begin{figure} \epsscale{1.0} \plotone{fig13.eps} \caption{\small The distribution of morphological parameters as a function of half-light radius ($R_{e}$) for the BzK sample with $M> 10^{9}M_{\odot}$ in two different mass bins divided by a mass threshold, $M_{th}=10^{10}M_{\odot}$. Open symbols show the galaxies with $n < 2.5$ and filled symbols show galaxies with $n > 2.5$. sBzKs and pBzKs are shown in blue and red, respectively. The statistics of $R_{e}$ and the overall morphological distributions are the same as for the SED sample of Figure~\ref{fig:re}.} \label{fig:reb} \end{figure} $S\acute{e}rsic$ index and half-light radius ($R_{e}$) have been successfully used to characterize galaxy morphology in many previous works, both at low and high redshift (low-z: Blanton et al. 2003, Driver et al. 2006; mid-z: Cheung et al. 2012; high-z: Ravindranath et al. 2006, Bell et al. 2012).
Recently, \cite{bel12} and \cite{wuy11} showed that the $S\acute{e}rsic$ index correlates well with quiescence in galaxies at $z \lesssim 2$. Therefore, we investigate how galaxy morphologies measured with $G$, $M_{20}$ and $\Psi$ correlate with $S\acute{e}rsic$ index ($n$) and $R_{e}$. We use the $S\acute{e}rsic$ index and $R_{e}$ \citep{vdw12} obtained by fitting a $S\acute{e}rsic$ profile to the galaxy image using GALFIT \citep{pen02}. Passive galaxies in both samples have $\langle n\rangle\sim 3.0$, and over 96\% of them have $n>1.0$, with 50\% having $n>2.5$. In contrast, star--forming galaxies have $\langle n\rangle\sim 1.5$, with 85\% of them having $n<2.5$. This suggests that the majority of star-forming (sBzK) galaxies have disk-like (exponential light profile) or irregular structure with a light profile shallower than an exponential one. In contrast, all passive galaxies (pBzKs) have a dominant bulge, including some bulge+disk structures. A similar analysis of morphologies at $z<2.5$ using the $S\acute{e}rsic$ index in the SFR-mass diagram was carried out by \cite{wuy11}, who found that the main sequence (MS) consists of star-forming galaxies with near-exponential profiles, and that passive galaxies below the MS have higher $S\acute{e}rsic$ indices, close to a de Vaucouleurs profile ($n=4$). \cite{szo11} also reported similar results with 16 massive galaxies at $z\sim2$, and found that star-forming galaxies have diskier (low $n$) profiles than passive galaxies. We present the distribution of $G$, $M_{20}$ and $\Psi$ as a function of $S\acute{e}rsic$ index in two different mass bins divided by a threshold mass, $M_{th}=1\times10^{10}M_{\odot}$, in Figure~\ref{fig:sersic} (SED sample) and ~\ref{fig:sersicb} (BzK sample). In both samples, we find significant correlations between the $S\acute{e}rsic$ index and $G$, $M_{20}$ and $\Psi$, with galaxies with high $n$ having high $G$ and low $M_{20}$, $\Psi$, and vice versa.
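For concreteness, the profile being fitted can be evaluated directly. The short Python sketch below is our own illustration (not the GALFIT implementation used for the measurements) and shows why a high-$n$ profile is more centrally concentrated than an exponential disk with the same half-light radius:

```python
import numpy as np

def sersic_profile(r, n, r_e, I_e=1.0):
    """Sersic surface-brightness profile
        I(r) = I_e * exp( -b_n [ (r/r_e)^(1/n) - 1 ] ),
    where b_n is chosen so that r_e encloses half of the total light;
    here we use the Ciotti & Bertin (1999) asymptotic approximation
    b_n ~ 2n - 1/3 + 4/(405 n). n = 1 is an exponential disk,
    n = 4 a de Vaucouleurs spheroid."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((np.asarray(r, dtype=float) / r_e) ** (1.0 / n) - 1.0))

# A high-n profile is much more centrally concentrated than an
# exponential disk with the same half-light radius:
center_n4 = sersic_profile(0.1, 4.0, 1.0)
center_n1 = sersic_profile(0.1, 1.0, 1.0)
```

With $r_{e}=1$ and matched surface brightness at $r_{e}$, the $n=4$ profile is roughly six times brighter than the $n=1$ profile at $r=0.1\,r_{e}$, which is the sense of the concentration trend discussed above.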
As we have already noted in Figure~\ref{fig:CM}, most of the passive galaxies (pBzKs) have masses greater than $1\times10^{10}M_{\odot}$, while the majority of star-forming galaxies (78\% for the SED sample and 70\% for sBzKs) have $M < 1\times10^{10}M_{\odot}$. In massive systems ($M \ge 1\times10^{10}M_{\odot}$), the two populations show a well-separated bimodal distribution in their morphologies and colors (see Figures 10 and 11). Red passive galaxies (pBzKs) show spheroidal-like structures with high $n$, $G$ and low $M_{20}$, $\Psi$, while blue star-forming ones (sBzKs) exhibit a larger variety of morphologies, but mainly have low $n$, $G$ and high $M_{20}$, $\Psi$. There are some star-forming galaxies (sBzKs) with high $S\acute{e}rsic$ index ($n > 2.5$, vertical dotted line in Figures~\ref{fig:sersic} and \ref{fig:sersicb}). They mostly follow the same trend in morphologies, with higher $G$ and lower $M_{20}$, $\Psi$, indicating the presence of a bright center (see the blue spheroids in Figure~\ref{fig:sfg}). \cite{bel12} showed examples of such systems in their sample, and found that those appear to be spheroidal-like structures, but in many cases also have significant asymmetries, or faint secondary sources and tidal tails. A loose relation between the non-parametric measures and the $S\acute{e}rsic$ index is observed for massive galaxies in the right panels of Figures~\ref{fig:sersic} and \ref{fig:sersicb}, but not at all for low-mass galaxies. This means that the commonly used $S\acute{e}rsic$ index alone is not sufficient to characterize the morphology of those galaxies. It is therefore important to use the non-parametric diagnostics in addition to the $S\acute{e}rsic$ profile to quantify the morphology of galaxies towards the low end of the mass distribution. In Figures~\ref{fig:re} (SED sample) and ~\ref{fig:reb} (BzK sample), we plot the non-parametric measures as a function of $R_{e}$ in both low- and high-mass systems to see how the size varies with spectral type and stellar mass.
We find that $R_{e}$ also correlates well with all the non-parametric measures in general, as galaxies with low $G$ and high $M_{20}$, $\Psi$ have smaller sizes and relatively low $S\acute{e}rsic$ index ($n<2.5$; open symbols) in both samples. Overall, star-forming galaxies (sBzKs) tend to have larger half-light radii than passive ones (pBzKs), even in the case of massive systems ($M \geq 1\times 10^{10}M_{\odot}$), and about half of the passive (pBzK) galaxies show very compact morphologies, with $R_{e} < 1$~kpc. This is consistent with previous results, which find that passive galaxies are more compact than star--forming galaxies at $z\sim 2$ \citep{tof09, wuy11, cas11}, and the same general trend is observed at $z\sim 0$ among massive galaxies \citep{wil10}. \section{Comparison with the Local Universe} \begin{figure*} \epsscale{1.0} \plotone{fig14.eps} \caption{\small $G$ vs. $M_{20}$ and $\Psi$ for the galaxies at $z\sim2.3$ and local galaxies from LPM04. We compare the morphologies of the BzK [from left: 1st panel] and SED sample [2nd panel] in the WFC3 H-band with degraded B/g-band images of local galaxies. To reduce the effect of any morphological k-correction, we compare only the galaxies with redshifts $ 2.0 < z \le 2.5$. Local galaxies are selected to lie in the same $M_{B}-M*$ range ($0.0 \le (M_{B}-M*) \le 4.0$) as the $z\sim 2.3$ sample, assuming $M*=-20.1$ locally and $M*=-22.9$ at $z\ge 2$. The 3rd panel shows the observed morphologies of normal local galaxy types, color-coded as follows (violet: E/S0, magenta: Sa-Sbc, green: Sc-Sd, light blue: dI). The comparison between passive and star-forming galaxies in each sample (red dots and blue crosses, respectively) at $2.0 < z \le 2.5$ and the morphologies of redshifted local galaxies at the WFC3 H-band image resolution is shown in the 1st and 2nd panels.
Overall, galaxies at $z\sim2.3$ tend to have a distribution in $G-M_{20}$ space similar to that of the redshifted local galaxies, even though there are many galaxies with higher $M_{20}$ for their $G$ than for any of the local ones. } \label{fig:local} \end{figure*} The strong correlation between galaxy color (and SFR) and morphology shown in the previous sections is reminiscent of the Hubble sequence at $z=0$. However, to understand whether the Hubble sequence is actually in place at $z\sim2$, it is important to examine how galaxy morphologies at $z\sim2$ differ from those of local galaxies. In general, comparing morphological parameters of local galaxies, which are observed at relatively high resolution, to their counterparts in high-z samples, whose resolution is lower, is not straightforward, because most morphological diagnostics do depend on the resolution. A fortunate case, however, is that of the comparison between local galaxies at redshift $0.05<z<0.1$ from the SDSS survey and galaxies at $z\sim 1$ observed with {\it HST}, since the difference in angular diameter distance at these two redshifts nearly perfectly compensates for the difference in the angular resolution of the two instrumental configurations \citep{nai10}. In this regard, we should also keep in mind that in our adopted cosmology, the fractional variation of the angular diameter distance in the redshift range $1.4<z<2.5$ is only $\approx 5$\%. Furthermore, even though $G$ and $M_{20}$ are robust to the resolution of the observations, particularly if the data are deep enough to allow the Petrosian radius to be used to measure the parameters \citep{abr07}, we should nonetheless be careful when directly comparing $G$ and $M_{20}$ from observations with different resolutions (LPM04; Lisker et al. 2008).
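The quoted $\approx 5$\% figure is straightforward to verify numerically. The sketch below assumes a generic flat $\Lambda$CDM cosmology ($H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}=0.3$; illustrative values, not necessarily the exact parameters adopted here):

```python
import numpy as np

def angular_diameter_distance(z, H0=70.0, Om0=0.3):
    """Angular diameter distance D_A(z) in Mpc for a flat LambdaCDM
    cosmology, via a simple numerical comoving-distance integral.
    H0 and Om0 are generic illustrative values, not necessarily the
    exact parameters adopted in the text."""
    c = 299792.458  # speed of light, km/s
    zz = np.linspace(0.0, z, 10001)
    Ez = np.sqrt(Om0 * (1.0 + zz) ** 3 + (1.0 - Om0))
    D_c = (c / H0) * np.trapz(1.0 / Ez, zz)  # comoving distance, Mpc
    return D_c / (1.0 + z)

# Fractional variation of D_A over the redshift range of the sample:
d = [angular_diameter_distance(z) for z in (1.4, 1.6, 2.0, 2.5)]
variation = (max(d) - min(d)) / max(d)
```

For these parameters, $D_{A}$ peaks near $z\sim1.6$ and the full variation over $1.4<z<2.5$ comes out at about 5\%, consistent with the statement above.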
Therefore, in this study, we compare the $z\sim2$ galaxy morphologies in the SED and BzK samples to those of the local galaxy sample of LPM04, after we simulate how the latter would appear in the CANDELS images if they were observed at redshift $z\sim 2$. To this end, we have used the B--band and g--band images of the local galaxies \citep{fre96, aba03}, which at this redshift correspond to the H band (for details about the local galaxy observations, see LPM04). We have restricted the comparative analysis with the high--redshift galaxies to only those at $2.0 < z \le 2.5$ to minimize the possible effects of the morphological K-correction. Furthermore, we have only considered galaxies within a magnitude range of $0\le(M_{B}-M*)\le4$ (LPM04), where we take $M*=-20.1$ \citep{bla03b} for local galaxies, and $M*=-22.9$ for galaxies at $z \ge 2$ \citep{sha01}, assuming that the local galaxies were brighter in the past but did not evolve morphologically (LPM04). In this simulation, we first modify the angular sizes and surface brightness of the local galaxy images to account for distance and cosmological effects. The images are rebinned to the pixel scale of the galaxies observed at $z=2.3$ (the WFC3 pixel size is 0.06") and the flux in each pixel is rescaled so that the total magnitude of the galaxy corresponds to some preassigned value, for example that of an M* galaxy at $z=2.3$. The modified images are then convolved with the WFC3 PSF and, lastly, we add Poisson noise appropriate to the WFC3/NIR observations using the IRAF task MKNOISE. In Figure~\ref{fig:local}, we present the $G$, $M_{20}$ and $\Psi$ measured from the redshifted local galaxy images. The measures for the redshifted galaxies (from the left, 1st and 2nd panels) are quite different from those of the original local galaxy images (3rd panel). This is in agreement with LPM04, who conclude that $z\sim2$ Lyman Break Galaxies (LBGs) do not have morphologies identical to local galaxies.
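The four steps of the simulation (rebin, rescale to a preassigned magnitude, convolve with the PSF, add Poisson noise) can be sketched as follows. This is a simplified stand-in for the actual IRAF/MKNOISE-based procedure; the function and argument names are our own:

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def redshift_image(img, scale_factor, target_mag, zeropoint, psf, rng=None):
    """Sketch of the redshifting simulation described in the text.
    scale_factor : ratio of apparent angular sizes (output/input pixels),
    target_mag   : preassigned total magnitude at the target redshift,
    zeropoint    : photometric zeropoint of the target image,
    psf          : PSF kernel of the target instrument (e.g. WFC3 H band).
    All argument names are ours; the real pipeline used IRAF/MKNOISE."""
    # 1) rebin to the angular size the galaxy would subtend at z ~ 2.3
    small = np.clip(zoom(img, scale_factor, order=1), 0.0, None)
    # 2) rescale the flux so the total magnitude matches the preassigned value
    total_counts = 10.0 ** (-0.4 * (target_mag - zeropoint))
    small *= total_counts / small.sum()
    # 3) convolve with the target PSF
    blurred = fftconvolve(small, psf / psf.sum(), mode="same")
    # 4) add Poisson noise appropriate to the target observation
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(np.clip(blurred, 0.0, None)).astype(float)
```

Further details of the real procedure, such as the exact surface-brightness dimming bookkeeping and the MKNOISE noise model, are omitted here.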
Overall, the distributions of galaxies at $z\sim 2.3$ in both samples and that of the redshifted local galaxies are similar in $G$-$M_{20}$ space, as shown in the 1st and 2nd panels of Figure~\ref{fig:local}, but the high--redshift star--forming galaxies have a broader distribution of $M_{20}$ for a given value of $G$ than the redshifted local late types. This trend is reflected in the $G$-$\Psi$ plane (in the bottom panels), which shows lower $\Psi$ values for the redshifted local galaxies. Large and luminous star--forming disks are mostly responsible for this excess of galaxies with higher $M_{20}$ and broader (slightly higher) distribution of $\Psi$, another manifestation of the fact that disks at $z\sim 2$ are not simply scaled--up versions of the local ones in terms of star--formation rates, but are intrinsically different (e.g. Papovich et al.\ 2005, Law et al.\ 2007). Comparatively, the $z\sim 2$ passive galaxies have $G$ and $M_{20}$ values that are much more similar to those observed for the present--day E/S0 galaxies. Overall, the qualitative similarity of values, shapes and trends of the distributions of morphological parameters at low and high redshift suggests that the Hubble sequence is essentially in place at $z\sim 2$. Our comparative study also shows that while the morphology of the oldest systems at any epoch, i.e. the passively evolving galaxies, in general changed relatively little from $z\sim 2$ to the present, at least as traced by our diagnostics, disk galaxies underwent strong structural evolution over the same cosmic period. A noticeable exception is the evolution of the size of massive ellipticals: at $z\sim 2$ they were dominated by very compact galaxies, with stellar densities up to two orders of magnitude higher than those of today's counterparts of similar mass, while at present such systems have essentially disappeared (see \cite{cas11, cas13}). Also, it is interesting to note that \cite{hua13} find that at even higher redshifts, i.e.
$4<z<5$, the size distribution of star--forming galaxies is significantly larger than that predicted from the spin parameter distribution observed in cosmological N--body simulations, a marked difference from local disks, which follow the simulation predictions very well. \section{ Comparison With Rest-Frame UV} \begin{figure} \centering \includegraphics[width=9cm,height=16cm]{fig15.eps} \caption{\small Comparison of the morphological parameters for galaxies at $z\sim2$ between the rest-frame UV (z-band) and optical (H-band) [BzK: left, SED: right]. The top, middle and bottom panels show $M_{20}$, $G$ and $\Psi$, respectively. Blue crosses and red circles represent the sBzKs (star-forming) and pBzKs (passive), respectively, and the dotted black line in each panel shows a linear correlation. The inset in each panel shows the distribution of the fractional differences ($f$) of the parameters in the two rest-frame bands, defined as $f(M_{20})=[M_{20}(z)-M_{20}(H)]/M_{20}(z)$. Negative $f(M_{20})$ and positive $f(G)$, $f(\Psi)$ mean that the parameters are larger in the z-band in each plot.} \label{fig:zh} \end{figure} It is important to compare the rest--frame UV and optical morphologies, since the former traces the spatial distribution of star formation, and thus contains information on how, and where, galaxies grow in mass and evolve morphologically, while the latter traces the structure of their stellar components. With the CANDELS images we can study the relation between rest-frame optical (5300~\AA) and UV (2800~\AA) morphology with a large sample. Since non-parametric measures can vary systematically with the PSF and pixel size of the images as the resolution decreases (Lotz et al. 2004; Law et al. 2012), we made a version of the ACS z--band image which is PSF--matched to the WFC3 H--band one (by convolution with an ad--hoc kernel) and which we have re-binned to the same pixel scale of 0.06"/pixel.
To make a meaningful comparison, we use the same segmentation map and the ``elliptical Petrosian radius'' estimated from the H-band image and apply them to the PSF-matched z-band image to measure $G$, $M_{20}$ and $\Psi$ in the rest-frame UV. In Figure~\ref{fig:zh} we compare the $G$, $M_{20}$ and $\Psi$ of the BzK and SED samples in the UV and optical rest-frames (passive (pBzK) and star-forming (sBzK) galaxies in red and blue, respectively). We find that, in general, all three parameters differ between the H- and z-bands. First, the H-band derived $G$ is higher for both passive (pBzKs) and star-forming (sBzKs) galaxies at $z\sim2$. The values of $M_{20}$ at the two wavelengths are well correlated for both populations, but the z-band measurements generally have slightly higher values. In particular, we find that the scatter increases as $M_{20}$ increases. This means that galaxies are clumpier in the z-band, and the difference in $M_{20}$ between the two bands is larger in the case of multi-clump structures (i.e. higher $M_{20}$). The $\Psi$ values from the H-band and z-band for star-forming (sBzK) galaxies are well correlated, while most of the passive ones (pBzKs) have higher $\Psi$ values in the z-band than in the H--band. We compute the fractional differences of the three parameters between optical and UV, defined as $f(M_{20})=[M_{20}(z)-M_{20}(H)]/M_{20}(z)$ and shown in the insets of Figure~\ref{fig:zh}, to check for offsets from the linear correlation. Negative $f(M_{20})$ and positive $f(G)$ and $f(\Psi)$ imply that the parameters in the z-band are higher than those in the H-band. Overall, in the rest-frame UV, galaxies appear to have higher $\Psi$ and $M_{20}$ and lower $G$ values than in the rest-frame optical, since observations in the rest-frame UV show more fragmented structures than in the rest-frame optical, especially for star-forming galaxies.
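Because $M_{20}$ is negative by construction, the sign convention of these fractional differences is easy to misread; a minimal sketch (the function name is ours) makes it explicit:

```python
import numpy as np

def frac_diff(p_z, p_h):
    """Fractional difference between the z-band (rest-frame UV) and H-band
    (rest-frame optical) values of a parameter, f = (p_z - p_h) / p_z,
    following the definition f(M20) = [M20(z) - M20(H)] / M20(z)."""
    p_z = np.asarray(p_z, dtype=float)
    p_h = np.asarray(p_h, dtype=float)
    return (p_z - p_h) / p_z

# M20 is negative, so a clumpier (less negative) z-band value yields a
# negative f(M20), i.e. "bigger in the z-band":
f_m20 = frac_diff(-1.2, -1.5)   # -> -0.25
# G is positive, so a larger z-band G yields a positive f(G):
f_g = frac_diff(0.55, 0.50)
```

A clumpier, i.e. less negative, z-band $M_{20}$ therefore yields a negative $f(M_{20})$, matching the convention used in the figure insets.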
We additionally find that the passive galaxies (pBzKs) in the rest-frame optical tend to have higher $G$ and lower $\Psi$ and $M_{20}$ than in the rest-frame UV, because the rest--frame optical light from old stellar populations is typically more concentrated than that from younger stellar populations, consistent with the result of \cite{guo11} that passive galaxies at $z\sim2$ have redder colors in their inner regions. A similar trend was noted by \cite{wuy12}, who found, using star-forming galaxies at $0.5 < z < 2.5$, that the median galaxy size and $M_{20}$ decrease (i.e. galaxies appear less clumpy), while $G$ and $C$ increase, going from the rest-frame 2800~\AA\ and U band to the optical. \begin{figure} \centering \includegraphics[width=9.5cm,height=13cm]{fig16.eps} \caption{\small Postage stamps of 16 galaxies which are selected in both the BzK and SED samples, including 8 pBzKs (1st \& 2nd columns) and 8 sBzKs (3rd \& 4th columns), in the rest--frame optical (WFC3 H-band: 1st \& 3rd) and UV (ACS z-band: 2nd \& 4th). The galaxy images are ordered by decreasing magnitude from top to bottom (the eighth galaxy is the brightest one in each column). Each postage stamp is $3.6 \times 3.6~ arcsec^2$ and the labels indicate the redshifts ($z$) and morphological parameters. Star-forming galaxies show a variety of morphologies, while all passive galaxies show bulge--like structures. The z- and H-band morphologies of the passive galaxies are almost identical, but this is not the case for the star-forming ones. } \label{fig:mont} \end{figure} Sample images of galaxies in both the rest-frame optical and UV are shown in Figure~\ref{fig:mont}, to visually present the morphological differences between passive and star-forming galaxies and to see how the non-parametric measures correlate with visual classifications. We have selected 16 galaxies (eight pBzKs and eight sBzKs) included in both the BzK and SED samples. These images have relatively smooth and regular morphologies in both bands.
The eight images for each spectral type are sorted by their H-band magnitude, with the brightest one located at the bottom and brightness decreasing upward, and the non-parametric statistics and redshift of each galaxy labeled. This figure illustrates the good correspondence between the measured parameters and the visual morphologies of the galaxies. Most sBzKs are extended, and exhibit a broad range of morphologies, from isolated systems with a central bulge and galaxies with bulge and disk components, to irregular systems with multiple clumps. The pBzKs have morphologies similar to those of present-day spheroids, bulges and ellipticals. Visually, the sBzKs appear clumpier in the rest-frame UV compared to the optical, showing a dependence on wavelength with the exception of a few isolated cases, while pBzKs look very similar at both wavelengths. Overall, visual inspection shows that the morphological types in both bands are generally similar, in agreement with \cite{cas10}, who found that passive galaxies ($SSFR <0.01~Gyr^{-1}$) have a ``weak morphological K-correction'', with sizes being smaller in the rest-frame optical than in the UV. However, the comparison of the non-parametric measures shows that galaxies in the rest-frame UV are somewhat clumpier than in the rest-frame optical for both galaxy populations. For star-forming galaxies at $z > 1.5$, \cite{bon11} and \cite{law12} measured the internal color dispersion (ICD), and found that the morphological differences between the rest-frame UV and optical are typically small. However, the finding that the majority of ICDs for star-forming galaxies are larger than those for passive galaxies \citep{bon11} is not consistent with our finding of a relatively large offset for pBzKs in Figure~\ref{fig:zh}. Since most of our pBzKs ($\sim 80\%$) are massive ($M > 3 \times 10^{10}M_{\odot}$), one possibility is that high-mass galaxies tend to exhibit greater morphological differences, with large ICD \citep{law12}.
Furthermore, pBzKs are typically brighter and rather compact at rest-frame optical wavelengths, which results in higher $G$ and lower $M_{20}$ and $\Psi$ than in the rest-frame UV. On the other hand, \cite{szo11} found a strong dependence of morphology on wavelength in a visual study of 16 massive galaxies at $z\sim2$. \section{Discussion} In this section we briefly compare our measures of the morphologies of the mix of galaxy populations in the redshift range $1.4 < z \le 2.5$ to the predictions of theoretical models. In particular, we discuss the evidence that the bimodal distribution of galaxies in the color-mass (or luminosity) diagram, namely the ``red sequence'' and ``blue cloud'', has already started to appear at $z\sim2$, and compare it with existing measures at lower redshift. The reliability of our non-parametric measures, $G$, $M_{20}$ and $\Psi$, and their performance in quantifying the morphology of galaxies at $z\sim2$, especially for the less massive ones, are also discussed. Lastly, we discuss the comparison between the BzK and SED selected samples. \subsection{ Comparison to the predictions of theoretical models} Our analysis of the various morphological indicators, both parametric and non-parametric, as well as a simple visual inspection (e.g. Figure~\ref{fig:mont}), has shown that star-forming galaxies (sBzKs) exhibit a broad variety of morphological structures, ranging from galaxies with a predominant disk morphology and varying degrees of bulge--to--disk ratio, to irregular (clumpy) structures, to very compact and relatively regular galaxies. Generally, the mix of star--forming galaxies at $z\sim 2$ looks rather different from its counterpart in the local universe, showing a much larger fraction of irregular and disturbed morphologies, especially among massive and luminous galaxies, although bright galaxies that closely resemble local spirals are also observed.
We do observe luminous, clumpy galaxies whose light profile is consistent with a disk (of course, we do not have dynamical information on these galaxies) and whose overall morphology is in broad qualitative agreement with the theoretical predictions of violent disk instability (VDI, Dekel et al. 2009a), as seen in high--resolution hydrodynamic cosmological simulations \citep{cev10}. These simulations show that galaxy disks are built up by the accretion of continuous, intense, cold streams of gas that dissipate angular momentum in a thick, Toomre--unstable disk, where star--forming clumps form. Subsequently, clumps migrate toward the center, giving rise to bulges and pseudo--bulges. Observations of the star--formation rate, stellar mass and age of the clumps, as well as their average radial dependence relative to the center of the galaxies, are also broadly consistent with this scenario (e.g. Guo et al. 2011). On the other hand, passive galaxies (pBzKs) are mostly spheroidal-like, comparatively more regular and compact structures, a fact that has been consistently observed in previous works \citep{dad05,fra08,van08, cas10, cas11}. It is important to keep in mind, however, that there is scant spectroscopic information on the dynamical properties of these galaxies, namely whether they are primarily supported by velocity dispersion or by rotation. While the modicum of spectroscopic observations currently available \citep{van11, ono11} is certainly consistent with the high--redshift passive galaxies being spheroids, a significant or even dominant contribution by rotation cannot be ruled out given the limited angular resolution of the existing spectra, and some have indeed proposed that a significant fraction, or maybe even most \citep{vdw11, bru12}, of these galaxies are actually compact rotating disks.
From the theoretical point of view, there are three distinct scenarios for the formation of the compact spheroids, namely major mergers, multiple minor mergers, or the migration of clumps driven by violent disk instabilities \citep{dek09b, gen11} to the disk center, building a massive bulge. Cosmological hydrodynamical simulations indicate that these processes, operating alone or in combination, can form compact spheroids. In most cases, the inner parts of the compact spheroids formed by VDI are rotating, while the outer parts are non-rotating, formed mostly by minor mergers. While the observed morphological properties of galaxies certainly include cases that are broadly consistent with these scenarios, it is clear that to make progress, comparisons between the dynamical properties of the galaxies and the predictions of the simulations are necessary. These, however, require spectroscopic observations with a sensitivity and spatial resolution that are not currently available. In general, very compact and massive galaxies are thought to be the result of a highly dissipative process, either a major wet merger (e.g. Wuyts et~al. 2010) or direct accretion of cold gas. Accretion of cold gas from the inter--galactic medium \citep{bir03, ker05, dek06} can lead to the formation of compact, massive galaxies, either via VDI in a compact disk \citep{dek09b} or via direct accretion of gas traveling rapidly to the galaxy center and forming stars in--situ \citep{joh12}. Quenching of the star formation subsequently takes place later, when the supply of gas is halted. The simulations suggest that cold accretion is naturally interrupted in dark matter halos more massive than $\approx 10^{12}$ M$_{\odot}$, once the shocked halo gas becomes too hot to allow the cold flows to penetrate the halo before they themselves get shock-heated \citep{dek06}, leading to the formation of a massive, compact, passive galaxy.
Additional feedback mechanisms from star--formation itself \citep{dia12} and AGN \citep{spr05} can also contribute to suppressing the accretion of cold gas. As mentioned earlier, \cite{bru12} have studied the morphologies of massive galaxies at $1<z<3$ in the CANDELS-UDF field using $S\acute{e}rsic$ and bulge+disc models, finding that at $z>2$ massive galaxies are dominated by disk-like structures and that 25-40\% of quiescent galaxies have disk-dominated morphologies. Following their classification of disks, namely $n<2.5$ (even though they also use the bulge-to-total H-band flux ratio), we find that about 50\% of our passive galaxies (with $SSFR < 0.01~Gyr^{-1}$) have exponential light profiles with $n < 2.5$, i.e. consistent with exponential disks. These roughly classified passive disks mostly have $G > 0.5$,~$M_{20} < -1.45,~\Psi < 1.0$, suggesting that they generally are not clumpy structures, and their morphology is characterized by a central bright nucleus surrounded by low surface brightness features. The presence of passive disks seems inconsistent with models where galaxy morphology transforms from a disk structure into a bulge followed by quenching of star formation as the galaxy evolves. The existence of passive disks is, however, predicted by hydrodynamic simulations \citep{ker05, dek08}, which show that these structures form when cold gas inflows are halted, thus quenching star formation without a transformation of morphology. For example, \cite{wil13} argue that the morphological properties and volume density of massive, compact passive galaxies at $z\sim 2$ and those of compact star--forming galaxies at $z>3$ are generally consistent with such a scenario. \subsection{Bimodal color distribution at $z<2.5$} In this study, to the extent that passive and star--forming galaxies can effectively be identified by means of broad--band colors, e.g.
either the BzK selection criteria or via SED fitting, we have shown that passive (pBzK) and star-forming (sBzK) galaxies occupy regions of the rest-frame U-V versus stellar mass diagram (e.g. Figure~\ref{fig:CM}) that are essentially the same as the ``red sequence'' and ``blue cloud'' observed in the local universe (e.g. Blanton et al.\ 2003, Bell et al.\ 2004). Passive galaxies (pBzKs) are intrinsically red and massive, whereas star-forming galaxies (sBzKs) are generally bluer and have lower masses than passive (pBzK) ones (with the exception of about 7\% red, massive sBzKs). The majority of the exceptions are massive, dusty star-forming galaxies, which are substantially redder than low-mass star-forming galaxies (sBzKs) at $z\sim2$. Thus, they can contaminate the red sequence sample by a significant fraction if selected based only on a single rest-frame (U-V) color \citep{bra09}, since most of the UV emission from high-redshift star formation is at least somewhat obscured by dust. An intriguing property of this color-mass diagram is the lack of passive galaxies with mass $M<10^{10}$ M$_{\odot}$. To a minor extent, this is the result of incompleteness, since passive galaxies with lower mass, and hence luminosity, become harder to detect. From simulations using model galaxies in the GOODS-S Deep field mosaics, we confirm that early-type galaxies with a ``de Vaucouleurs'' light profile ($S\acute{e}rsic$ index $n=4$) are 90\% complete at $H<26$. Clearly, however, such a small incompleteness alone cannot explain the lack of low--mass passive galaxies at $z\sim 2$, and in fact such low--mass galaxies are actually detected by the SED selection, as shown in the right panel of Figure~\ref{fig:CM}, which illustrates how massive red galaxies are rarely star-forming, while more actively star-forming galaxies are bluer and have lower masses.
Furthermore, obscured star--forming galaxies in the same redshift range and with similar rest--frame (U-V) colors are detected in significantly larger numbers even at lower masses, as the right panel of the figure shows. This fact strongly suggests that the quenching of star-formation at this epoch is tightly correlated with the mass of the galaxies, with the most massive ones being significantly more likely to cease their star formation activity. At mass $M< 10^{10}$ M$_{\odot}$, galaxies appear much less likely to stop forming stars, a fact that has been observed by other groups. For example, Kauffmann et al.\ (2003) and Bell et al.\ (2007) observed that at $z<1$ the stellar mass value of about $3\times10^{10}M_{\odot}$ appears to be the transition mass point between galaxies that belong to the blue cloud (younger stellar populations) and those of the red sequence. There is evidence that this transition mass between quenched and star-forming galaxies has evolved significantly over cosmic time \citep{bun07}, further supporting the downsizing scenario whereby more massive galaxies appear to quench first, and subsequently lower mass galaxies quench later. At high redshift, quenching appears to depend on galaxy stellar mass, perhaps through some internal process that is tied to the total mass of the galaxy, of which the stellar one is a good proxy in passive systems, whereas later, environmental processes can contribute to galaxy quenching and can affect lower-mass galaxies (e.g. Peng et al.\ 2010, Peng et al.\ 2012). This process effectively builds up the lower-mass end of the red sequence over time by quenching lower-mass star-forming galaxies later when environmental processes become more influential. Our observed deficiency of lower-mass passive galaxies at $z\sim2$ is consistent with this scenario and the implied mechanisms by which galaxies quench their star-formation.
Results from other deep extragalactic surveys have provided constraints on the buildup of stellar mass locked up in the red sequence, by studying how the bimodality of galaxy properties has changed over cosmic time. For example, using the COMBO-17 and DEEP2 surveys, \cite{bel04, fab07} have studied the evolution in the rest-frame color bimodality of galaxy samples out to $z\sim1$, finding evidence that the buildup of the red sequence must be accounted for by a combination of merging of galaxies already on the red sequence and migration of star-forming galaxies that have quenched. Recently, Brammer et al.\ (2011) extended the study of rest-frame color bimodality in galaxies to higher redshift, showing that star-forming and passive galaxies are still robustly separated in color over the redshift range $0.4 < z < 2.2$, and coming to a similar conclusion that the growth of the red sequence must come from both merging and migration, particularly for galaxies above the apparent quenching threshold, $M > 3\times10^{10} M_{\odot}$. In this context, a possibility is that the morphological bimodality we have observed in this study may imply that some degree of morphological transformation must accompany the quenching of star-forming galaxies at $z < 1.4$, if they are to match the properties of the red sequence after quenching. Regardless of the dominant mechanisms building the red sequence, and whether or not the dominant mechanisms evolve with redshift, our result suggests that the process was already underway at redshift 2. In other words, at this epoch, the formation of the Hubble sequence is already underway. Further detailed study of the evolution of morphological properties of galaxies as a function of mass and star-formation properties will be required to identify the specific mechanisms contributing to the growth of the red sequence.
\subsection{ The reliability of non-parametric measures} We mainly use $G$, $M_{20}$ and $\Psi$ to study the morphologies of galaxies at $z\sim2$. Those non-parametric measures are widely used to study galaxy morphologies at high redshift, especially for large samples \citep{law07, con08, law12, wan12}. We investigate the robustness of the $G$, $M_{20}$ and $\Psi$ parameters in relation to the SNR in Appendix A and show that any differences in the estimated parameters for the same galaxies observed in the GOODS-S and UDF images, whose only difference is the vastly different total exposure time, are relatively small. Most of the galaxies in our samples (above $90\%$ of BzK and $70\%$ of SED selected galaxies) have reliable morphological measurements with $S/N_{pp} > 2.5$ at $z\sim2$. Moreover, the good correspondence between those parameters and visual inspection (in Figure~\ref{fig:mont}), as well as model-dependent parameters, indicates that our $G$, $M_{20}$ and $\Psi$ measures are not biased by low signal-to-noise. We note that $S\acute{e}rsic$ index alone is generally not sufficient to quantify the morphology of low mass galaxies since we find no correlation between $S\acute{e}rsic$ index and non-parametric measures in the lower mass systems ($M < 10^{10}M_{\odot}$). Also non-parametric measures more effectively characterize the morphology of irregular galaxies (LPM04). Therefore, it is crucial to use non-parametric diagnostics instead of, or at least in parallel with, the commonly used $S\acute{e}rsic$ profile to study the morphology of lower mass galaxies and to explore the origin of the Hubble sequence at $z\sim2$, an epoch when many galaxies appear irregular. \subsection{ The good performance of the BzK-selected sample} Both the BzK and SED samples show very similar morphological distributions in all the analyses done here. The comparison of the two samples confirms that they are similarly representative of the mix of bright galaxies at $z\sim2$.
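To make the first of these diagnostics concrete, the Gini coefficient of the light distribution can be computed from the sorted pixel fluxes following the standard LPM04 prescription. The sketch below uses a hypothetical flat list of pixel fluxes and ignores the segmentation map and sky-noise corrections that a real measurement requires:

```python
def gini(pixels):
    """Gini coefficient of a pixel flux distribution (LPM04 prescription).

    G = 0 for perfectly uniform light; G -> 1 when all flux is in one pixel.
    `pixels` is a hypothetical flat list of fluxes belonging to the galaxy;
    real measurements also need a segmentation map and noise corrections.
    """
    x = sorted(abs(v) for v in pixels)   # sort |flux| in increasing order
    n = len(x)
    xbar = sum(x) / n                    # mean of the absolute fluxes
    return sum((2 * i - n - 1) * xi
               for i, xi in enumerate(x, start=1)) / (xbar * n * (n - 1))

# uniform light -> G = 0; a single bright pixel among empty ones -> G = 1
print(gini([1.0] * 100), gini([0.0] * 99 + [5.0]))
```

By construction the statistic is independent of the total flux and of the spatial arrangement of the pixels, which is why it complements $M_{20}$ (which is sensitive to where the brightest pixels sit).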
We additionally compare the distribution of morphological parameters for the 136 spectroscopically confirmed BzK galaxies (8 pBzKs and 128 sBzKs, specz sample) to the parent distribution (BzK sample). As expected, the relative distributions of $G$, $M_{20}$ and $\Psi$ are similar to our parent sample, and the median values of each parameter are almost the same, with the exception of $G$ in the case of sBzKs. The average $G$ value of sBzKs in the specz sample is slightly higher, since over $70\%$ of the specz sample are bright and massive ($M > 1\times 10^{10}M_{\odot}$) galaxies, which tend to have higher $G$. Thus, in conclusion, the very similar results derived from both samples prove the effectiveness of the BzK selection criteria in sampling the full diversity of the mix of massive galaxies at $z\sim 2$, at least as far as the morphological properties of relatively massive galaxies are considered. The BzK selection will be particularly effective, for example, in the other three CANDELS fields where the broad--band photometry is neither as deep nor as broad in wavelength as in the two GOODS fields. \section{Summary} In this paper we have explored general trends between galaxy morphology and broad--band spectral types at $z\sim2$ using the {\it HST}/WFC3 H--band images taken in the GOODS--South field as part of the CANDELS survey, in combination with the existing GOODS ACS z-band images, as well as sensitivity--matched images at other wavelengths that are part of the GOODS data products \citep{gia04}. Combining the deep and high-resolution NIR data with optical data, we are able to study the dependence of morphologies on wavelengths and expand the scope of previous studies of galaxy morphologies at the same redshift \citep{kri09, cam11, szo11, wan12} with a significantly larger sample size and lower mass limit ($>10^{9}M_{\odot}$).
The galaxies of our primary sample are selected in the redshift range $1.4<z<2.5$ to cover a broad range of spectral types (star--formation properties and dust obscuration) using SED fitting to spectral population synthesis models (SED sample). For comparative reasons we also selected galaxies using the BzK criterion, which culled galaxies in the same redshift range and with essentially identical spectral properties, modulo a large contamination from interlopers and AGN. Analyses of the two samples show consistent results suggesting that the BzK and SED selection criteria are equivalent in sampling the mix of spectral types at $z\sim 2$. We investigate the rest-frame optical morphologies using five non-parametric approaches, mainly $G$, $M_{20}$ and $\Psi$ in addition to $C$ and $A$, and two model-dependent parameters obtained by fitting $S\acute{e}rsic$ profiles, namely $n$ and $R_{e}$. The major findings of this study are presented below. \begin{enumerate} \item In the rest-frame (U-V) color and mass diagram, our sample clearly separates into red, massive, passive galaxies with low SSFR and blue, less massive, star-forming ones with high SSFR, occupying the same regions in the color-mass diagram as the galaxies observed in the local universe. \item We find that galaxies with different spectral types are distinctly classified morphologically as two populations, especially for massive systems ($>10^{10}M_{\odot}$): 1) star-forming galaxies are heterogeneous, with mixed features including bulges, disks, and irregular (or clumpy) structures, with relatively low $G$, $n$ and high $M_{20}$, $\Psi$; 2) passive galaxies are spheroidal-like compact structures with higher $G$, $n$ and lower $M_{20}$ and $\Psi$. Generally, the sizes of star-forming galaxies are larger than passive ones, even in massive systems, but some have very compact morphologies, with $R_{e} < 1$ kpc.
We confirm using a variety of measures that star formation activity is correlated with morphology at $z\sim2$, with the passive galaxies looking similar to local passive ones although smaller, while star-forming galaxies show considerably more morphological diversity than massive star-forming galaxies on the Hubble sequence today. \item We show that the morphological analysis only using the $S\acute{e}rsic$ index is not sufficient to characterize differences in morphologies especially for lower mass galaxies ($M<10^{10}M_{\odot}$). Therefore we conclude that it is important to use non-parametric measures to investigate the morphologies of high redshift galaxies in a broad range of stellar masses. In this study, the combination of large samples with a suite of morphological diagnostics, both parametric and non--parametric ones, as well as visual inspections, gives us a significantly improved description of the state of galaxy morphologies at $z\sim2$ and its correlations with the spectral type, i.e. mostly the star--formation activity, expanding the significance and the scope of previous studies which were based on much smaller samples and only massive galaxies at the same epoch \citep{kri09,cam11, szo11, wan12}. \item Generally, $z\sim2$ galaxies show a similar trend in morphologies with those measured from the redshifted images of local galaxies, even though many of the star-forming galaxies have $M_{20}$ values higher than seen for galaxies in the local sample. The passive galaxies at $z\sim2$ have $G$ and $M_{20}$ values that are much more similar to those observed for the present--day E/S0 galaxies. \item Comparison of visual images between the rest-frame optical and near UV shows that the morphological k-correction is generally weak; however, the comparison with non-parametric measures indicates that galaxies observed in the rest-frame UV are slightly clumpier, with lower $G$ and higher $M_{20}$ and $\Psi$, than rest-frame optical.
\end{enumerate} Taken all together, our results show that the correlations between morphology as traced by a suite of common diagnostics, and broad--band UV/Optical spectral types of the mix of relatively massive galaxies (i.e. $M>10^{9}$ M$_{\odot}$) at $z\sim 2$ are quantitatively and qualitatively similar to those observed for their counterparts in the local universe. We interpret these results as evidence that the backbone of the Hubble Sequence observed today was already in place at $z\sim 2$. \acknowledgements This work is based on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
\section{Introduction} Wigner introduced his famous quasiprobability density function on phase space in order to consider semiclassical approximations to the quantum evolution of the density matrix \cite{Wigner32}. For a system with one degree of freedom and classical Hamiltonian \bea H=p^2/2m +V(q)\,, \label{hamiltonian1} \eea he found for the evolution of the Wigner function $W(q,\,p,\,t)$, \bea W(q,\,p,\,t)_t =V'(q)\,W(q,\,p,\,t)_p -\frac{p}{m}\,W(q,\,p,\,t)_q \mea\mea -\frac{\hbar^2}{24}\, V'''(q)\,W(q,\,p,\,t)_{ppp} +{\rm O}(\hbar^4)\,. \label{wigner_evolution1} \eea Here we have introduced the notation $V'(q)$ for $dV(q)/dq$ and $W_{p}$ for $\partial W/\partial p$, {\em etc.} The first two terms on the RHS in (\ref{wigner_evolution1}) define the classical Liouvillean evolution, while the term of order $\hbar^2$ defines the first semiclassical approximation to the full quantum evolution of the Wigner function, and so on. Following the subsequent works of Groenewold \cite{Groenewold46} and Moyal \cite{Moyal49}, we now recognize the RHS of (\ref{wigner_evolution1}) as the expansion in ascending powers of $\hbar$ of the star, or Moyal, bracket of the Hamiltonian $H$ and the Wigner function $W$, \bea \{H,\,W\}_{\star} = \frac{2}{\hbar}\,H\,\sin\left(\hbar J/2\right)\,W\,,\qquad J= \frac{\partial^L}{\partial q} \frac{\partial^R}{\partial p} - \frac{\partial^L}{\partial p} \frac{\partial^R}{\partial q}\,, \mea\mea W_t=\{H,\,W\}_{\star}=H\,J\,W-\frac{\hbar^2}{24}\,H\,J^3\,W+{\rm O}(\hbar^4)\,.\qquad \label{star_bracket1} \eea Here the superscripts $R$ and $L$ in the Janus operator $J$ indicate the directions in which the differential operators act. The leading term $H\,J\,W$ in the last equation represents the Poisson bracket of $H$ and $W$, and corresponds to the first two terms on the RHS in (\ref{wigner_evolution1}); for the Hamiltonian (\ref{hamiltonian1}), the term $H\,J^3\,W$ reduces to $V'''(q)\,W_{ppp}$. All this is very well known.
It is a central ingredient of the so-called phase space formulation of quantum mechanics \cite{Dubin00,Zachos02}, where operators on Hilbert space are mapped into functions on phase space, and in particular the density operator is mapped into the Wigner function. Less well known is that, in a completely analogous way, classical mechanics can be reformulated in Hilbert space \cite{Groenewold46,Vercin00,Bracken03}, with the classical Liouville density mapped into a quasidensity operator \cite{Muga92,Muga93,Muga94} that we have called elsewhere \cite{Wood04} the Groenewold operator. The evolution of this operator in time is defined by what we have called \cite{Bracken03} the odot bracket, and the expansion of this bracket in ascending powers of $\hbar$ defines a series of semiquantum approximations to classical dynamics, starting with the quantum commutator. In this way, we explore the classical-quantum interface in a new way, approaching classical mechanics from quantum mechanics, which is now regarded as a first approximation. So we stand on its head the traditional approach, which approaches quantum mechanics from classical mechanics, regarded as a first approximation. \section{The Weyl-Wigner transform} In order to see how this works, we recall firstly \cite{Dubin00,Zachos02,Cassinelli03} that the phase space formulation of quantum mechanics is defined by the Weyl-Wigner transform ${\cal W}$, which maps operators ${\hat A}$ on Hilbert space into functions $A$ on phase space, \bea A={\cal W}({\hat A})\,,\qquad A=A(q,\,p)\,, \label{WW_transform1} \eea and in particular defines the Wigner function $W={\cal W}({\hat \rho}/2\pi\hbar)$, where ${\hat \rho}$ is the density matrix defining the state of the quantum system. 
Then \bea \langle {\hat A}\rangle(t)={\rm Tr}({\hat \rho}(t) {\hat A})=\int A(q,\,p)\,W(q,\,p,\,t)\,dq\,dp=\langle A\rangle (t)\,, \mea\mea \int W(q,\,p,\,t)\,dq\,dp=1\,,\qquad\qquad\qquad \label{averages1} \eea but $W$ is not in general everywhere nonnegative; it is a quasiprobability density function. In more detail, $A$ is defined by first regarding ${\hat A}$ as an integral operator with kernel $A_K(x,\,y)=\langle x|{\hat A}|y\rangle$ in the coordinate representation, and then setting \bea A(q,\,p)={\cal W}({\hat A})(q,\,p)=\int A_K(q-x/2,\,q+x/2)\,e^{ipx/\hbar}\,dx\,. \label{WW_transform2} \eea The transform of the operator product on Hilbert space then defines the noncommutative star product on phase space, \bea {\cal W}({\hat A}{\hat B})=A\star B\,, \label{star_product} \eea leading to the star bracket as the image of $(1/i\hbar \times)$ the commutator, \bea {\cal W}([{\hat A},\,{\hat B}]/i\hbar)=(A\star B-B\star A)/i\hbar=\{A,\,B\}_{\star}\,. \label{star_bracket} \eea It can now be seen that the quantum evolution of the density matrix \bea {\hat \rho}_t= \frac{1}{i\hbar}[{\hat H},\,{\hat \rho}]\,, \label{quantum_evolution} \eea is mapped by the Weyl-Wigner transform ${\cal W}$ into the evolution equation for the Wigner function \bea W_t=\{H,\,W\}_{\star} \label{wigner_evolution3} \eea as in (\ref{star_bracket1}), so leading to the sequence of semiclassical approximations as described in the Introduction. In (\ref{quantum_evolution}), ${\hat H}$ is the quantum Hamiltonian operator, so that $H={\cal W}({\hat H})$. In order to define semiquantum approximations to classical dynamics, we begin by considering the inverse Weyl-Wigner transform ${\cal W}^{-1}$, which maps functions $A$ on phase space into operators ${\hat A}$ on Hilbert space, so enabling a Hilbert space formulation of classical mechanics \cite{Vercin00,Bracken03}. 
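Before turning to the inverse transform, the forward transform (\ref{WW_transform2}) can be illustrated with a concrete state. The sketch below takes the harmonic-oscillator ground state with $\hbar=m=\omega=1$ (an illustrative choice, not tied to the systems studied later) and evaluates the defining integral numerically; the known closed form for this state is $W(q,p)=e^{-q^2-p^2}/\pi$.

```python
import cmath
import math

def psi(x):
    # harmonic-oscillator ground-state wave function, hbar = m = omega = 1
    return math.pi ** -0.25 * math.exp(-x * x / 2.0)

def wigner(q, p, half_width=12.0, steps=2400):
    # W(q,p) = (1/2*pi) * Integral psi(q - x/2) psi(q + x/2) exp(i p x) dx,
    # i.e. the transform of rho_hat/(2*pi*hbar) for this pure state,
    # evaluated by a midpoint rule on [-half_width, half_width]
    dx = 2.0 * half_width / steps
    total = 0j
    for k in range(steps):
        x = -half_width + (k + 0.5) * dx
        total += psi(q - x / 2.0) * psi(q + x / 2.0) * cmath.exp(1j * p * x) * dx
    return (total / (2.0 * math.pi)).real

# analytic result for this state: W(q,p) = exp(-q**2 - p**2) / pi
print(wigner(0.0, 0.0), 1.0 / math.pi)
```

For this (Gaussian) state the Wigner function happens to be everywhere positive; the nonlinear evolutions considered later destroy that property.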
We have \bea A_K(x,y)= {\cal W}^{-1}(A)_K(x,y)= \frac{1}{2\pi\hbar}\int A([x+y]/2,\,p)\,e^{ip(x-y)/\hbar}\,dp\,, \label{WW_inverse} \eea which defines the kernel of ${\hat A}$, and hence ${\hat A}$ itself, in terms of $A$. In particular the Groenewold operator ${\hat G}(t)$ is defined as ${\hat G}={\cal W}^{-1}(2\pi\hbar\rho)$. Then \bea \langle A\rangle (t)=\int A(q,\,p)\,\rho(q,\,p,\,t)\,dq\,dp={\rm Tr}({\hat A}{\hat G}(t))=\langle {\hat A}\rangle (t)\,, \mea\mea {\rm Tr}({\hat G}(t))=1\,,\qquad\qquad\quad \label{G_properties} \eea but ${\hat G}$ is not nonnegative definite in general; it is a quasidensity operator. It can be seen that the development so far is completely analogous to the development of the phase space formulation of quantum mechanics. We can use ${\cal W}^{-1}$ to map all of classical mechanics into a Hilbert space formulation \cite{Vercin00,Bracken03}. To complete the story, we need to say what happens to the classical evolution of the Liouville density \bea \rho_t=H\,J\,\rho \label{poisson} \eea under the action of ${\cal W}^{-1}$. Obviously the LHS maps into ${\hat G}_t$; the question is, what happens to the Poisson bracket on the RHS. \section{The odot product and odot bracket} To proceed, we note firstly by analogy with the definition of the star product that we can define a commutative odot product of operators on Hilbert space \bea {\hat A}\odot{\hat B}={\cal W}^{-1}(AB)\,. \label{odot_product} \eea This is an interesting product, quite distinct from the well known Jordan product of operators, which is also commutative. Unlike the Jordan product, however, this odot product is associative. Some of its other characteristic properties have been described elsewhere \cite{Vercin00,Bracken03,Dubin04}. Next we note that $A_q=\{A,\,p\}_{\star}$ so that \bea {\cal W}^{-1} (A_q)=\frac{1}{i\hbar}[{\hat A},\,{\hat p}]={\hat A}_q\,,\!\!\quad {\rm say}, \label{q_derivative1} \eea where ${\hat p}={\cal W}^{-1}(p)$.
Similarly, we define \bea {\cal W}^{-1} (A_p)=\frac{1}{i\hbar}[{\hat q},\,{\hat A}]={\hat A}_p\,, \mea\mea {\cal W}^{-1} (A_{qp}) =\frac{1}{(i\hbar)^2}[{\hat q},\,[{\hat A},\,{\hat p}]]={\hat A}_{qp}\,,\quad etc. \label{q_derivative2} \eea Then \bea {\cal W}^{-1} \left( A\,J\,B\right)= {\hat A}_q \odot {\hat B}_p-{\hat A}_p\odot {\hat B}_q = \frac{1}{i\hbar}[{\hat A},\,{\hat B}]_{\odot}\,,\quad \!\!{\rm say}\,, \label{odot_bracket} \eea which defines the odot bracket as the inverse image of $i\hbar\times$ the Poisson bracket. From the Poisson bracket it inherits antisymmetry and a Jacobi identity. Our next task is to find an expansion of the odot bracket in ascending powers of $\hbar$, analogous to the expansion (\ref{star_bracket1}) of the star bracket, in order to define a sequence of semiquantum approximations to classical dynamics. \section{Semiquantum mechanics} We set \bea M=\frac{2}{\hbar}\sin\left( \frac{\hbar J}{2}\right)\,, \label{moyal_operator} \eea so we can write for any two functions $A$ and $B$, \bea \{A,\,B\}_{\star}=A\,M\,B\,, \label{moyal_operator2} \eea and note therefore that \bea {\cal W}^{-1}(A\,M\,B)=\frac{1}{i\hbar}[{\hat A},\,{\hat B}]\,. \label{transform_moyal} \eea Next we write the Poisson bracket as \bea A\,J\,B=A\,\frac{\hbar J/2}{\sin(\hbar J/2)}\,M\,B \label{inverse_series1} \eea and, noting that \bea \frac{\theta}{\sin(\theta)}=1+\theta^2/6+7\theta^4/360+\dots \label{inverse_series2} \eea we obtain from (\ref{transform_moyal}) and (\ref{q_derivative2}) that \bea {\cal W}^{-1}(A\,J\,B)=\qquad\qquad\qquad\qquad\qquad\qquad\qquad \mea\mea \frac{1}{i\hbar}[{\hat A},\,{\hat B}] -\frac{i\hbar}{24}\left([{\hat A}_{qq},\,{\hat B}_{pp}] -2[{\hat A}_{qp},\,{\hat B}_{qp}] +[{\hat A}_{pp},\,{\hat B}_{qq}]\right)+{\rm O}(\hbar^3)\,. \label{poisson_expansion1} \eea Now we can answer the question raised at the end of Section 2.
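As an aside, the inverse-sine series (\ref{inverse_series2}) that generates these corrections is easy to spot-check numerically. The sketch below verifies the quoted coefficients $1/6$ and $7/360$; the tolerance is sized using the next coefficient, $31/15120$ at order $\theta^6$, which is an assumption of this check rather than a quantity used in the text.

```python
import math

# theta / sin(theta) = 1 + theta**2/6 + 7*theta**4/360 + O(theta**6)
for theta in (0.05, 0.1, 0.2):
    exact = theta / math.sin(theta)
    truncated = 1.0 + theta**2 / 6.0 + 7.0 * theta**4 / 360.0
    residual = exact - truncated
    # the residual should be of order theta**6 with coefficient ~ 31/15120
    print(theta, residual)
```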
Applying ${\cal W}^{-1}$ to both sides of (\ref{poisson}), we obtain \bea {\hat G}_t=\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \mea\mea \frac{1}{i\hbar}[{\hat H},\,{\hat G}] -\frac{i\hbar}{24}\left([{\hat H}_{qq},\,{\hat G}_{pp}] -2[{\hat H}_{qp},\,{\hat G}_{qp}] +[{\hat H}_{pp},\,{\hat G}_{qq}]\right)+{\rm O}(\hbar^3)\,. \label{G_expansion1} \eea Thus we see that the evolution of ${\hat G}$ in Hilbert space, which is equivalent to the classical evolution of the Liouville density $\rho$ in phase space, is given as a series in ascending powers of $\hbar$. Keeping successively more terms in the series, we define a sequence of approximations to the classical evolution. Note that the lowest order term is just the quantum evolution, which now appears as the lowest order approximation to classical dynamics. As we add more and more terms, we have the possibility of exploring the classical-quantum interface starting from the quantum side. This is completely complementary to what we normally do with semiclassical approximations to quantum mechanics. \section{Examples} J.G. Wood and I have explored semiquantum and semiclassical approximations for simple nonlinear systems with one degree of freedom \cite{Wood05}. Rather than Hamiltonians of the form (\ref{hamiltonian1}), we considered \begin{equation} H \,=\, E\sum_{k=0}^K b_k \left(H_0/E\right)^k\,, \label{hamiltonian_set} \end{equation} where $H_0$ is the simple harmonic oscillator Hamiltonian \bea H_0 \,=\, p^2/2m + m\omega^2q^2/2\,, \label{SHO_hamiltonian} \eea and $E$, $b_k$ are constants. These have the advantage that they are analytically tractable, but still show characteristic differences between the classical and quantum evolutions \cite{Milburn86}.
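For the quadratic Hamiltonian (\ref{SHO_hamiltonian}) the star-product series terminates after the $\hbar^2$ term, and one finds $H_0\star H_0=H_0^2-\hbar^2\omega^2/4$; this constant shift is why ${\cal W}^{-1}(H_0^k)$ differs from ${\hat H}_0^k$ for $k\ge 2$. A finite-difference sketch (with $\hbar=m=\omega=1$, an illustrative choice) checks the identity pointwise:

```python
HBAR = M = OMEGA = 1.0  # illustrative units

def H0(q, p):
    return p * p / (2.0 * M) + M * OMEGA**2 * q * q / 2.0

H = 1e-3  # finite-difference step; central differences are exact (up to
          # roundoff) for quadratic functions such as H0

def d_qq(f, q, p):
    return (f(q + H, p) - 2.0 * f(q, p) + f(q - H, p)) / H**2

def d_pp(f, q, p):
    return (f(q, p + H) - 2.0 * f(q, p) + f(q, p - H)) / H**2

def d_qp(f, q, p):
    return (f(q + H, p + H) - f(q + H, p - H)
            - f(q - H, p + H) + f(q - H, p - H)) / (4.0 * H**2)

def star(a, b, q, p):
    # A*B = AB + (i*hbar/2) {A,B} - (hbar^2/8) A J^2 B + ...; the series stops
    # at the hbar^2 term for quadratic A, B, and the imaginary Poisson-bracket
    # term is omitted here because it vanishes when a = b
    j2 = (d_qq(a, q, p) * d_pp(b, q, p)
          - 2.0 * d_qp(a, q, p) * d_qp(b, q, p)
          + d_pp(a, q, p) * d_qq(b, q, p))
    return a(q, p) * b(q, p) - HBAR**2 / 8.0 * j2

for q, p in [(0.3, -0.7), (1.2, 0.4)]:
    print(q, p, star(H0, H0, q, p) - (H0(q, p) ** 2 - HBAR**2 * OMEGA**2 / 4.0))
```

The printed residuals are at roundoff level, confirming that squaring $H_0$ pointwise and star-squaring it differ only by the constant $\hbar^2\omega^2/4$.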
In particular we considered \bea H_2=H_0^2/E\,,\qquad {\hat H}_2= \mu \hbar\omega \,({\hat N}^2 + {\hat N} + 1/2)\,, \mea\mea H_3=H_0^3/E^2\,,\quad {\hat H}_3= \mu ^2 \hbar\omega\, ({\hat N}^3 +3{\hat N}^2/2 +2{\hat N} + 3/4)\,, \mea\mea {\rm where}\quad{\hat H}_0=\hbar\omega\,({\hat N}+1/2)\,,\qquad\mu=\hbar\omega/E\,,\quad \label{hamiltonians} \eea with ${\hat H}={\cal W}^{-1}(H)$ in each case, and ${\hat N}$ the usual oscillator number operator. As an initial Liouville density on phase space, we took a Gaussian $\rho$ for which the initial Groenewold operator ${\hat G}={\cal W}^{-1}(2\pi\hbar\rho)$ equals a true density operator, namely the density operator for a pure coherent state. Differences between the classical and quantum evolutions of such an initial state, with the Hamiltonians $H_2$ and ${\hat H}_2$, respectively, are immediately apparent in the phase space plots Fig. \ref{whorl_fig} and Fig. \ref{quant_gaussian_fig}. Under the classical evolution, the density stays positive everywhere, but develops ``whorls,'' whereas under the quantum evolution, the density (Wigner function) becomes negative on some regions (shown in white) and is periodic \cite{Milburn86}. Conversely, under the classical dynamics, the Groenewold operator ${\hat G}$ develops negative eigenvalues \cite{Muga92,Muga93,Muga94,Habib02,Wood04}, whereas under the quantum evolution, such an initial pure-state density operator stays positive definite, with eigenvalues $0$ and $1$. Similar remarks apply with the Hamiltonians $H_3$ and ${\hat H}_3$. \begin{figure} \centerline{\psfig{figure=brack_fig1.eps,height=100mm,width=110mm}} \caption{Density plots showing the classical evolution of an initial Gaussian density centered at $q_0=0.5$, $p_0=0$ as generated by the Hamiltonian $H=H_0^2/E$. The parameters $m,\omega,E$ have been set equal to $1$, and the times of the plots are, from left to right and top to bottom, $t=\pi/4$, $t=\pi/2$, $t=3\pi/4$ and $t=\pi$.
\label{whorl_fig} } \end{figure} \begin{figure} \centerline{\psfig{figure=brack_fig2.eps,height=100mm,width=110mm}} \caption{Quantum evolution of an initial Gaussian Wigner function, with the same parameter values used in Figure \ref{whorl_fig}, and shown at the same times. Regions on which the pseudo-density becomes negative are shown in white in the first three plots. \label{quant_gaussian_fig}} \end{figure} There is no nontrivial semiquantum or semiclassical approximation ``in between" the classical and quantum dynamics for the Hamiltonians $H_2$ and ${\hat H}_2$. Each of the series (\ref{star_bracket1}) and (\ref{G_expansion1}) has just two terms in this case. For the series (\ref{star_bracket1}), the leading term defines the classical evolution, and adding the next term produces the full quantum dynamics. Similarly, for the series (\ref{G_expansion1}), the leading term defines the quantum evolution, and adding the next term produces the full classical dynamics. To see interesting differences between semiclassical and semiquantum approximations, we considered $H_3$ and ${\hat H}_3$, for which there are three terms in each of the series (\ref{star_bracket1}) and (\ref{G_expansion1}). Thus we can compare the classical evolution and the first semiclassical approximation to quantum dynamics, and also the quantum evolution and the first semiquantum approximation to classical dynamics. In Fig. \ref{cubic} we plot the expectation values of $q$, $p$ in the classical and semiclassical cases, calculated at each time using the Liouville density $\rho$ or the Wigner function $W$ as in (\ref{averages1}), and also the expectation values of ${\hat q}$ and ${\hat p}$ in the quantum and semiquantum cases, calculated at each time using the density operator ${\hat \rho}$ or the Groenewold operator ${\hat G}$ as in (\ref{G_properties}).
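The periodicity of the quantum evolution generated by ${\hat H}_2$ can be seen directly in a truncated Fock basis: its eigenvalues are $\mu\hbar\omega(N^2+N)$ plus a constant, and since $N^2+N$ is always even, all relative phases realign at $t=\pi/(\mu\omega)$, so an initial coherent state revives there up to a global phase. A minimal sketch, taking $\mu=\omega=\hbar=1$ and the illustrative amplitude $\alpha=0.5$ (the constant offset in the spectrum contributes only a global phase):

```python
import cmath
import math

MU = OMEGA = HBAR = 1.0   # illustrative units
ALPHA = 0.5               # coherent-state amplitude (illustrative)
NMAX = 40                 # Fock-space truncation

# coherent-state amplitudes c_n = exp(-|a|^2/2) a^n / sqrt(n!), real here
c = [math.exp(-abs(ALPHA) ** 2 / 2.0) * ALPHA**n / math.sqrt(math.factorial(n))
     for n in range(NMAX)]

def overlap_with_initial(t):
    # <psi(0)|psi(t)> under H2-type dynamics with phases exp(-i*(n^2+n+const)*t);
    # const = 0.5 used here, but it only affects an irrelevant global phase
    return sum(cn * cn * cmath.exp(-1j * MU * OMEGA * (n * n + n + 0.5) * t)
               for n, cn in enumerate(c))

# full revival (up to a global phase) at t = pi; a genuinely different state in between
print(abs(overlap_with_initial(math.pi)), abs(overlap_with_initial(math.pi / 2.0)))
```

At intermediate times the overlap modulus drops well below unity, which is the Fock-space counterpart of the negative-valued, spread-out Wigner functions in the figures.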
From our results it is already clear that semiquantum and semiclassical approximations provide different information about the interface between classical and quantum behaviours. \begin{figure} \centerline{\psfig{figure=brack_fig3.eps,height=100mm,width=110mm}} \caption{Comparison of first moments of $\alpha=(q+ip)/\sqrt{2}$ for classical, semiquantum, quantum and semiclassical evolutions generated by the Hamiltonian $H=H_0^3/E^2$. Points on the classical, quantum, semiclassical and semiquantum curves are labelled by +, x, o and $\square$, respectively. The evolution is over the time-interval $[0,\pi]$ and again $m=\omega=E=1$, with $\mu=1/2$, $\alpha_0=0.5$. \label{cubic}} \end{figure} These expectation values compare ``classical-like" properties of the different evolutions. We also considered ``quantum-like" properties of the different evolutions, in particular the largest and smallest eigenvalues of ${\cal W}^{-1}(\rho)$ and ${\cal W}^{-1}(W)$ in the classical and semiclassical cases, and the largest and smallest eigenvalues of ${\hat \rho}$ and ${\hat G}$ in the quantum and semiquantum cases. The results are shown in Fig. \ref{cubic_eigenvalues}. \begin{figure} \centerline{\psfig{figure=brack_fig4.eps,height=100mm,width=110mm}} \caption{Comparison of largest two and least two eigenvalues for, from left to right and top to bottom, classical, semiquantum and semiclassical evolutions generated by $H=H_0^3/E^2$, for the time interval $[0,\pi]$, and with $m=\omega=E=1$ and $\mu=1/2$, $\alpha_0=0.5$. The evolution of the first moment is reproduced from Fig. \ref{cubic} in the graph at bottom right for comparison. Each of the other graphs also features the quantum spectrum $\{0,1\}$ and in all graphs the values at the time-points $t=1,2,3$ are marked $\lozenge$ (classical), o (semiclassical), $\triangle$ (quantum) and $\square$ (semiquantum).
\label{cubic_eigenvalues}} \end{figure} \section{Conclusions} Semiquantum mechanics opens a new window on the interface between classical and quantum mechanics. Our investigations of examples as described above have not yet gone very far, but already we can say that semiquantum approximations show characteristic differences from semiclassical approximations for a given nonlinear system. More details are given in Ref. \cite{Wood05}. It is important to explore the nature of semiquantum approximations for other systems. For example, from the classical Hamiltonian $H$ as in (\ref{hamiltonian1}), we obtain \bea {\hat H}= {\hat p}^2/2m +V({\hat q}) \,, \label{hamiltonian2} \eea and substituting in (\ref{G_expansion1}), we get for the evolution of the Groenewold operator in such cases \bea {\hat G}_t= \frac{1}{i\hbar}[{\hat p}^2/2m +V({\hat q}),\,{\hat G}] -\frac{i\hbar}{24}[V''({\hat q}),\,{\hat G}_{pp}]+{\rm O}(\hbar^3)\,, \label{groenewold_evolution1} \eea which is to be compared with Wigner's famous formula (\ref{wigner_evolution1}) for the evolution of the Wigner function. The implications of (\ref{groenewold_evolution1}) for various important choices of $V$, in particular exactly solvable cases, should be examined. Of even more interest of course are systems with more degrees of freedom that show chaos at the classical level. For such systems it should be particularly interesting to consider differences in the spectral properties of ${\hat G}$ at different times in different approximations, in classically chaotic or integrable regimes, as well as the different behaviours of expectation values of important phase space variables. We hope to investigate some of these problems. \section*{Acknowledgments} Thanks to Dr. James Wood for much advice and assistance, and to a referee for bringing the paper by Vercin \cite{Vercin00} to our attention. This work was supported by Australian Research Council Grant DP0450778.
\section{Introduction} The bounded derived category of coherent sheaves $D^b(\text{coh}(X))$ of an algebraic variety $X$ reflects important properties of the variety $X$. The abelian category of coherent sheaves $(\text{coh}(X))$ can be identified with $X$ itself, but $D^b(\text{coh}(X))$ has more symmetries and the investigation of its structure may reveal something more fundamental. If $X$ is a smooth projective variety, then $D^b(\text{coh}(X))$ satisfies finiteness properties such as finite homological dimension, Hom-finiteness, and saturatedness (\cite{BvdB}). A birational map like blowing up is reflected to a semi-orthogonal decomposition (SOD) (\cite{BO}). Even singular varieties with quotient singularities can be considered in the same way by replacing them by smooth Deligne-Mumford stacks (\cite{DK}, \cite{SDG}), and we observe the parallelism between the minimal model program (MMP) and SOD. But recently we found that the category $D^b(\text{coh}(X))$ for a singular variety $X$ may have a nice SOD when the singularities are not so bad (\cite{NC}, \cite{Kuznetsov}, \cite{KKS}), at least in the case of surfaces. The paper \cite{KKS} shows that the structure of $D^b(\text{coh}(X))$ is quite interesting at least in the case of a rational surface with cyclic quotient singularities. In this paper we calculate the case of dimension $3$. We expect that there are still richer structures in dimension $3$. We mainly consider $3$-folds with an ordinary double point (ODP). There are two cases; $\mathbf{Q}$-factorial and non-$\mathbf{Q}$-factorial. We will see the difference in the following. We note that an ordinary double point is $\mathbf{Q}$-factorial if and only if it is (locally) factorial. We also note that an ordinary double point on a rational surface is never factorial, but there are rational $3$-folds with factorial and non-factorial ODP's.
\vskip 1pc We first recall some definitions and theorems on generators of categories in \S2 and the triangulated category of singularities $D_{\text{sg}}(X)$ of a variety $X$ in \S3. We calculate triangulated categories of singularities in the case of ODP in \S4. Then we prove the following in \S5: \begin{Thm}[= Theorem~\ref{main}] Let $X$ be a Gorenstein projective variety, let $L$ be a maximally Cohen-Macaulay coherent sheaf on $X$ which generates $D_{\text{sg}}(X)$, and let $F$ be a coherent sheaf which is constructed from $L$ by a flat non-commutative (NC) deformation over the endomorphism algebra $R = \text{End}(F)$ such that $\text{Hom}(F,F[p]) = 0$ for all $p \ne 0$. Then $D^b(\text{coh}(X))$ has an SOD into the triangulated subcategory generated by $F$, which is equivalent to $D^b(\text{mod-}R)$, and its right orthogonal complement which is saturated, i.e., the complement is similar to the case of a smooth projective variety. \end{Thm} We apply the theorem to the case where $X$ is a $3$-fold with only one ODP which is not $\mathbf{Q}$-factorial, and calculate the structure of $D^b(\text{coh}(X))$ in \S6: \begin{Thm}[= Theorem~\ref{ODP}] Let $X$ be a $3$-dimensional projective variety with only one ODP which is not $\mathbf{Q}$-factorial. Assume that there are reflexive sheaves $L_1,L_2$ of rank $1$ such that $\dim H^0(X, \mathcal{H}om(L_i,L_j)) = \delta_{ij}$ and that $H^p(X, \mathcal{H}om(L_i,L_j)) = 0$ for $p > 0$. Assume moreover that $L_1,L_2$ generate the triangulated category of singularities $D_{\text{sg}}(X)$. Then $L_1,L_2$ generate an admissible subcategory of $D^b(\text{coh}(X))$, which is equivalent to $D^b(\text{mod-}R)$ for a $4$-dimensional non-commutative algebra $R$, such that its right orthogonal complement is a saturated category. \end{Thm} In \S7, we calculate some examples. We give two examples where the assumptions of Theorem~\ref{ODP} are satisfied, and then we consider examples where the singularities are factorial ODP's. 
In the latter case, non-commutative (NC) deformations (\cite{NC}) of $L$ do not terminate and do not yield a suitable coherent sheaf $F$. Indeed, the versal deformation becomes a quasi-coherent sheaf corresponding to an infinite chain of coherent sheaves. In the appendix, we make a correction to an error in a cited paper \cite{NC} on NC deformations. We assume that the base field $k$ is algebraically closed of characteristic $0$ in this paper. \vskip 1pc I would like to dedicate this paper to Professor Miles Reid for the long-lasting friendship since he was a postdoc and I was a graduate student in Tokyo (he kindly corrected my English in my master's thesis at that time). The author would like to thank Professor Keiji Oguiso for help with Example~\ref{K3}, and Professor Alexander Kuznetsov for the correction in the Appendix. This work was partly done while the author stayed at Korea Advanced Institute of Science and Technology (KAIST) and National Taiwan University. The author would like to thank Professor Yongnam Lee, Professor Jungkai Chen, Department of Mathematical Sciences of KAIST and National Center for Theoretical Sciences of Taiwan for the hospitality and excellent working conditions. This work was partly supported by Grant-in-Aid for Scientific Research (A) 16H02141. This work was also partly supported by the National Science Foundation Grant DMS-1440140 while the author stayed at the Mathematical Sciences Research Institute in Berkeley during the Spring 2019 semester. \section{Generators} We collect some definitions and results concerning generators of categories and representability of functors. $T$ denotes a triangulated category in this section. A set of objects $E \subset T$ is said to be a {\em generator} if the right orthogonal complement defined by $E^{\perp} = \{A \in T \mid \text{Hom}_T(E,A[p]) = 0 \,\,\forall p\}$ is zero: $E^{\perp} = 0$. 
$E$ is said to be a {\em classical generator} if $T$ coincides with the smallest triangulated subcategory which contains $E$ and is closed under direct summands. $E$ is said to be a {\em strong generator} if there exists a number $n$ such that $T$ coincides with the full subcategory obtained from $E$ by taking finite direct sums, direct summands, shifts, and at most $n-1$ cones. $T$ is said to be {\em Karoubian} if every projector splits. The {\em idempotent completion} or the {\em Karoubian envelope} of a triangulated category is defined to be the category consisting of all kernels of all projectors. A triangulated full subcategory $B$ (resp. $C$) of $T$ is said to be {\em right (resp. left) admissible} if the natural functor $F: B \to T$ (resp. $F': C \to T$) has a right (resp. left) adjoint functor $G: T \to B$ (resp. $G': T \to C$). An expression \[ T = \langle C,B \rangle \] is said to be a {\em semi-orthogonal decomposition (SOD)} if $B,C$ are triangulated full subcategories such that $\text{Hom}_T(b,c) = 0$ for any $b \in B$ and $c \in C$ and such that, for any $a \in T$, there exists a distinguished triangle $b \to a \to c \to b[1]$ for some $b \in B$ and $c \in C$. In this case, $B$ (resp. $C$) is a right (resp. left) admissible subcategory. Conversely, if $B$ (resp. $C$) is a right (resp. left) admissible subcategory, then there is a semi-orthogonal decomposition $T = \langle C,B \rangle$ for $C = B^{\perp}$ (resp. $B = {}^{\perp}C$) (\cite{Bondal}). $T$ is said to be {\em Hom-finite} if $\sum_i \dim \text{Hom}(A,B[i]) < \infty$ for any objects $A,B \in T$. A cohomological functor $H: T^{\text{op}} \to (\text{Mod-}k)$ is said to be of {\em finite type} if $\sum_i \dim H(A[i]) < \infty$ for any object $A \in T$. A Hom-finite triangulated category $T$ is said to be {\em right saturated} if any cohomological functor $H$ of finite type is representable by some object $B \in T$. 
A right saturated full subcategory of a Hom-finite triangulated category is automatically right admissible and yields a semi-orthogonal decomposition (\cite{Bondal}). In a similar way, we define a homological functor of finite type and a left saturated category which is automatically left admissible. If $T$ is Hom-finite, has a strong generator, and is Karoubian, then $T$ is right saturated (\cite{BvdB} Theorem 1.3). Let $T$ be a triangulated category which has arbitrary coproducts (i.e., arbitrary direct sums). An object $A \in T$ is said to be {\em compact} if \[ \coprod_{\lambda} \text{Hom}(A,B_{\lambda}) \cong \text{Hom}(A, \coprod_{\lambda} B_{\lambda}) \] for all coproducts $\coprod_{\lambda} B_{\lambda}$. We denote by $T^c$ the full subcategory of $T$ consisting of all compact objects. If $X$ is a quasi-separated and quasi-compact scheme and $T = D(\text{Qcoh}(X))$, then $T^c = \text{Perf}(X)$, the triangulated subcategory of perfect complexes (\cite{BvdB} Theorem 3.1.1). $T$ is said to be {\em compactly generated} if $(T^c)^{\perp} = 0$. If $T$ is compactly generated, then $E \in T^c$ classically generates $T^c$ if and only if $E$ generates $T$ (\cite{RN}, \cite{BvdB} Theorem 2.1.2). \begin{Thm}[\cite{Neeman} Theorem 4.1 (Brown representability theorem)] Let $T$ be a compactly generated triangulated category and $T'$ be another triangulated category. Let $F: T \to T'$ be an exact functor which commutes with coproducts: \[ \coprod_{\lambda} F(A_{\lambda}) \cong F(\coprod_{\lambda} A_{\lambda}). \] Then there exists a right adjoint functor $G: T' \to T$: \[ \text{Hom}_{T'}(A,G(B)) \cong \text{Hom}_T(F(A),B). \] \end{Thm} \section{Orlov's triangulated category of singularities} We recall Orlov's theory of the triangulated category of singularities. Let $X$ be a separated noetherian scheme of finite Krull dimension whose category of coherent sheaves has enough locally free sheaves, e.g., a quasi-projective variety. 
Orlov defined a {\em triangulated category of singularities} $D_{\text{sg}}(X)$ as the quotient category of the bounded derived category of coherent sheaves $D^b(\text{coh}(X))$ by the category of perfect complexes $\text{Perf}(X)$: $D_{\text{sg}}(X) = D^b(\text{coh}(X))/\text{Perf}(X)$. The triangulated category of singularities behaves well when $X$ is Gorenstein: \begin{Prop}[\cite{Orlov1} Proposition 1.23]\label{Orlov-Prop1} Assume that $X$ is Gorenstein. Then any object in $D_{\text{sg}}(X)$ is isomorphic to the image of a coherent sheaf $F$ such that $\mathcal{H}om(F,\mathcal{O}_X[i]) = 0$ for all $i > 0$. \end{Prop} If $X$ is Gorenstein, then the natural morphism $F \to R\mathcal{H}om(R\mathcal{H}om(F,\mathcal{O}_X),\mathcal{O}_X)$ is an isomorphism. If $(R,M)$ is a Gorenstein complete local ring of dimension $d$, then the local duality theorem says that \[ \text{Ext}^i_R(F,R) \cong \text{Hom}_R(H_M^{d-i}(F),E) \] for any $R$-module $F$, where $E$ is an injective hull of $k = R/M$. Thus the condition $\mathcal{H}om(F,\mathcal{O}_X[i]) = 0$ for all $i > 0$ is equivalent to saying that $F$ is a maximally Cohen-Macaulay (MCM) sheaf. \begin{Thm}[\cite{Orlov1} Proposition 1.21]\label{Orlov-Prop2} Assume that $X$ is Gorenstein. Let $F$ be a coherent sheaf such that $\mathcal{H}om(F,\mathcal{O}_X[i]) = 0$ for all $i > 0$. Let $G \in D^b(\text{coh}(X))$, and let $N$ be an integer such that $\text{Hom}(P,G[i]) = 0$ for all locally free sheaves $P$ and all $i > N$, e.g., $N = 0$ if $G$ is a sheaf and $X$ is affine. Then \[ \text{Hom}_{D_{\text{sg}}}(F,G[N]) \cong \text{Hom}_{D^b(\text{coh}(X))}(F,G[N])/R \] where $R$ is the subspace of morphisms which factor through locally free sheaves $P$, of the form $F \to P \to G[N]$. \end{Thm} \begin{proof} \cite{Orlov1} Proposition 1.21 assumes that $G$ is a sheaf. But this assumption is not used in the proof. We note that $N$ depends on $G$. 
\end{proof} \begin{Thm}[Kn\"orrer periodicity \cite{Orlov1} Theorem 2.1]\label{Knorrer} Let $V$ be a separated regular noetherian scheme of finite Krull dimension (e.g., a smooth quasi-projective variety) and let $f: V \to \mathbf{A}^1$ be a flat morphism. Let $W = V \times \mathbf{A}^2$ and let $g = f + xy: W \to \mathbf{A}^1$ for coordinates $(x, y)$ on $\mathbf{A}^2$. Let $X$ (resp. $Y$) be the fiber of $f$ (resp. $g$) above $0$. Let $Z = \{f = x = 0\} \subset Y$, and let $i: Z \to Y$ and $q: Z \to X$ be natural morphisms. Then $\Phi = Ri_*q^*: D^b(\text{coh}(X)) \to D^b(\text{coh}(Y))$ induces an equivalence $\bar \Phi: D_{\text{sg}}(X) \to D_{\text{sg}}(Y)$. \end{Thm} \begin{Thm}[\cite{Orlov2} Theorem 2.10] Let $X$ and $X'$ be quasi-projective varieties. Assume that the formal completions $\hat X$ and $\hat X'$ along their singular loci are isomorphic. Then the idempotent completions of the triangulated categories of singularities $\overline{D_{\text{sg}}(X)}$ and $\overline{D_{\text{sg}}(X')}$ are equivalent. \end{Thm} \section{Triangulated categories of singularities for ordinary double points} We calculate triangulated categories of singularities for varieties with only ordinary double points. \begin{Expl}[\cite{Orlov1} Subsection 3.3] Let $X_n = \{x_0^2+\dots+x_n^2 = 0\} \subset \mathbf{A}^{n+1}$ be an ordinary double point of dimension $n$. Then $D_{\text{sg}}(X_n) \cong D_{\text{sg}}(X_{n+2})$ by the Kn\"orrer periodicity (\cite{Orlov1} Theorem 2.1). We consider $X_0$. Let $A = k[z]/z^2$. Then any object of $D_{\text{sg}}(X_0)$ is represented by a finite $A$-module. Therefore it is a direct sum of $V_1 = k = A/(z)$. A natural exact sequence $0 \to (z) \to A \to A/(z) \to 0$ yields an exact triangle $V_1 \to A \to V_1 \to V_1[1]$ in $D^b(\text{coh}(X_0))$, hence an isomorphism $V_1 \cong V_1[1]$ in $D_{\text{sg}}(X_0)$. Therefore we have $\text{Hom}_{D_{\text{sg}}(X_0)}(V_1,V_1) \cong k$ which is generated by the identity $\text{Id}$. 
It follows that $D_{\text{sg}}(X_0)$ is already idempotent complete. The translation functor takes $V_1 \mapsto V_1$ and $\text{Id} \mapsto \text{Id}$. The exact triangles are only the trivial ones. \end{Expl} \begin{Expl} We consider $X_1$. Let $B = k[z,w]/(zw)$. Then any object of $D_{\text{sg}}(X_1)$ is represented by a finite $B$-module $M$ such that $\text{Hom}(M,B[i]) = 0$ for all $i > 0$. Therefore it is a direct sum of copies of $M_z = B/(w)$ and $M_w = B/(z)$. A natural exact sequence $0 \to zB \to B \to B/(z) \to 0$ yields an exact triangle $M_z \to B \to M_w \to M_z[1]$ in $D^b(\text{coh}(X_1))$, hence an isomorphism $M_w \cong M_z[1]$ in $D_{\text{sg}}(X_1)$. We also have $M_z \cong M_w[1]$ in $D_{\text{sg}}(X_1)$. We have $\text{Hom}_{D^b(\text{coh}(X_1))}(M_z,M_z) \cong k[z]$. Since the multiplication map $z: M_z \to M_z$ is factored as $M_z \cong zB \subset B \to B/(w)$, we have $z \in R$ in the notation of Theorem~\ref{Orlov-Prop2}. Therefore we have $\text{Hom}_{D_{\text{sg}}(X_1)}(M_z,M_z) \cong k$ which is generated by the identity $\text{Id}_z$. Since $\text{Hom}_{D^b(\text{coh}(X_1))}(M_z,M_w) = 0$, we have $\text{Hom}_{D_{\text{sg}}(X_1)}(M_z,M_w) = 0$. It follows that $D_{\text{sg}}(X_1)$ is already idempotent complete. The translation functor takes $M_z \mapsto M_w$ and $\text{Id}_z \mapsto \text{Id}_w$. The exact triangles are only the trivial ones. \end{Expl} \begin{Expl} We consider another $1$-dimensional scheme $Y_1$ whose singularity is analytically isomorphic to that of $X_1$ but not algebraically. Let $C = k[z,w]/(z^2+z^3+w^2)$. The completion of $Y_1 = \text{Spec}(C)$ at the singularity is isomorphic to that of $X_1$, hence $\overline{D_{\text{sg}}(Y_1)} \cong \overline{D_{\text{sg}}(X_1)}$. But we will see that $D_{\text{sg}}(Y_1)$ is not equivalent to $D_{\text{sg}}(X_1)$ (\cite{Orlov2} Introduction). Let $C' \to C$ be the normalization. Then $C' \cong k[t]$ with $z = -t^2-1$ and $w = -t^3-t$. 
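The arithmetic behind this normalization can be verified mechanically. The following sympy sketch (an illustration added here, not part of the paper's argument) checks that the parametrization satisfies the defining equation of $Y_1$, that the relations $w - zt = 0$ and $wt = -z^2 - z$ used in this example hold, and that $(1 \pm \sqrt{-1}t)/2$ are idempotent modulo $t^2 + 1$:

```python
import sympy as sp

t = sp.symbols('t')
# parametrization of the normalization C' = k[t] of C = k[z,w]/(z^2+z^3+w^2)
z = -t**2 - 1
w = -t**3 - t

# the parametrization satisfies the defining equation of Y_1
assert sp.expand(z**2 + z**3 + w**2) == 0

# relations used to identify the kernel (w,z)C' with C'
assert sp.expand(w - z*t) == 0
assert sp.expand(w*t + z**2 + z) == 0

# (1 +/- sqrt(-1) t)/2 are idempotents in k[t]/(t^2+1) = Hom_{D_sg(Y_1)}(C',C')
for sign in (1, -1):
    e = (1 + sign*sp.I*t) / 2
    assert sp.rem(sp.expand(e**2 - e), t**2 + 1, t) == 0
```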
Any object of $D_{\text{sg}}(Y_1)$ is represented by a finite $C$-module $N$ such that $\text{Hom}(N,C[i]) = 0$ for all $i > 0$. Therefore it is a direct sum of copies of $C'$. There are $2$ points of $Y'_1=\text{Spec}(C') \cong \mathbf{A}^1$ above the singular point of $Y_1$; we have $(z,w)C' = P \cap Q$ for prime ideals $P = (t+\sqrt{-1})$ and $Q = (t-\sqrt{-1})$ of $C'$. There is a surjective homomorphism $C^{\oplus 2} \to C'$ given by $(a,b) \mapsto a-bt$. The kernel of this homomorphism is equal to $(w,z)C'$, which is isomorphic to $C'$ as a $C$-module. Indeed we have $w-zt=0$ and $wt=-z^2-z$, etc. Therefore we have an exact sequence \[ 0 \to (w,z)C' \to C^{\oplus 2} \to C' \to 0 \] yielding an exact triangle $C' \to C^{\oplus 2} \to C' \to C'[1]$ in $D^b(\text{coh}(Y_1))$, hence an isomorphism $C' \cong C'[1]$ in $D_{\text{sg}}(Y_1)$. We have $\text{Hom}_{D^b(\text{coh}(Y_1))}(C',C') \cong C' \cong k[t]$ as $C$-modules, where a $C$-module homomorphism on the left hand side is mapped to the image of $1$ under the homomorphism. We note that a $C$-homomorphism is determined by the image of $1$ because $C' \to C$ is birational. Since the multiplication map $z: C' \to C'$ is factored as $C' \cong zC' \subset C \to C'$, we have $z \in R$ in the notation of Theorem~\ref{Orlov-Prop2}. Therefore we have $\text{Hom}_{D_{\text{sg}}(Y_1)}(C',C') \cong k[t]/(t^2+1)$. The translation functor takes $C' \mapsto C'$. The exact triangles are only the trivial ones. There are idempotents $(1 \pm \sqrt{-1}t)/2 \in \text{Hom}_{D_{\text{sg}}(Y_1)}(C',C')$. But there are no corresponding sheaves on $Y_1$. Therefore $D_{\text{sg}}(Y_1)$ is not idempotent complete. The corresponding idempotent completion is equivalent to $D_{\text{sg}}(X_1)$. \end{Expl} \begin{Expl} We consider $X_2$. We rewrite the equation of $X_2$ to $xy + z^2 = 0$. There is an equivalence $\Phi_1: D_{\text{sg}}(X_0) \cong D_{\text{sg}}(X_2)$ (Theorem~\ref{Knorrer}) which is given as follows. Let $Z_1 = \text{Spec}(k[y,z]/z^2)$. 
Then there are natural morphisms $q_1: Z_1 \to X_0$ and $i_1: Z_1 \subset X_2$. The equivalence is given by $\Phi_1 = Ri_{1*}q_1^*$. Let $L = \{x = z = 0\}$ be a line on the surface $X_2$ through the origin. $L$ is a prime divisor which is not a Cartier divisor, but $2L$ is a Cartier divisor. Since $V_1 = A/(z)$, we have $\Phi_1(V_1) = Ri_{1*}(k[y,z]/(z)) = k[x,y,z]/(x,z) = \mathcal{O}_L$. We have an exact sequence \[ 0 \to \mathcal{O}_{X_2}(-L) \to \mathcal{O}_{X_2} \to \mathcal{O}_L \to 0. \] Thus $\mathcal{O}_L \cong \mathcal{O}_{X_2}(-L)[1]$ in $D_{\text{sg}}(X_2)$. Therefore $D_{\text{sg}}(X_2)$ is spanned by a reflexive sheaf $\mathcal{O}_{X_2}(-L)$ of rank $1$. We note that $\mathcal{O}_L$ is a torsion sheaf, but $\mathcal{O}_{X_2}(-L)$ is a Cohen-Macaulay sheaf. There is an exact sequence \[ 0 \to \mathcal{O}_{X_2}(-L) \to F \to \mathcal{O}_{X_2}(-L) \to 0 \] for a locally free sheaf $F$ of rank $2$ (\cite{NC} Example 5.5). Thus $\mathcal{O}_{X_2}(-L) \cong \mathcal{O}_{X_2}(-L)[1]$ in $D_{\text{sg}}(X_2)$. \end{Expl} \begin{Expl} We consider $X_3$. This is the case of a non-$\mathbf{Q}$-factorial $3$-fold. We rewrite the equation of $X_3$ to $xy + zw = 0$. There is an equivalence $\Phi_2: D_{\text{sg}}(X_1) \cong D_{\text{sg}}(X_3)$ (Theorem~\ref{Knorrer}) which is given as follows. Let $Z_2 = \text{Spec}(k[y,z,w]/zw)$. Then there are natural morphisms $q_2: Z_2 \to X_1$ and $i_2: Z_2 \subset X_3$. The equivalence is given by $\Phi_2 = Ri_{2*}q_2^*$. Let $L = \{x=w=0\}$ and $L' = \{x=z=0\}$ be planes on $X_3$ through the origin. They are prime divisors which are not $\mathbf{Q}$-Cartier divisors. Since $M_z = B/(w)$ and $M_w = B/(z)$, we have $\Phi_2(M_z) = Ri_{2*}(k[y,z,w]/(w)) = k[x,y,z,w]/(x,w) = \mathcal{O}_L$ and $\Phi_2(M_w) = Ri_{2*}(k[y,z,w]/(z)) = k[x,y,z,w]/(x,z) = \mathcal{O}_{L'}$. We have an exact sequence \[ 0 \to \mathcal{O}_{X_3}(-L) \to \mathcal{O}_{X_3} \to \mathcal{O}_L \to 0. 
\] Thus $\mathcal{O}_L \cong \mathcal{O}_{X_3}(-L)[1]$ in $D_{\text{sg}}(X_3)$. We also have $\mathcal{O}_{L'} \cong \mathcal{O}_{X_3}(-L')[1]$ in $D_{\text{sg}}(X_3)$. Therefore $D_{\text{sg}}(X_3)$ is spanned by reflexive sheaves $\mathcal{O}_{X_3}(-L)$ and $\mathcal{O}_{X_3}(-L')$ of rank $1$. We note that $\mathcal{O}_L$ and $\mathcal{O}_{L'}$ are torsion sheaves, but $\mathcal{O}_{X_3}(-L)$ and $\mathcal{O}_{X_3}(-L')$ are Cohen-Macaulay sheaves. There is an exact sequence \[ 0 \to \mathcal{O}_{X_3}(-L) \to F \to \mathcal{O}_{X_3}(-L') \to 0 \] for a locally free sheaf $F$ of rank $2$ (\cite{NC} Example 5.6). Thus $\mathcal{O}_{X_3}(-L') \cong \mathcal{O}_{X_3}(-L)[1]$ in $D_{\text{sg}}(X_3)$. We also have $\mathcal{O}_{X_3}(-L) \cong \mathcal{O}_{X_3}(-L')[1]$ in $D_{\text{sg}}(X_3)$. \end{Expl} \begin{Expl} We consider a $3$-dimensional scheme $Y_3$ which is analytically isomorphic to $X_3$ at the singular point but not algebraically. Let $Y_3$ be the $3$-fold defined by the equation $xy+z^2+z^3+w^2 = 0$. Then $Y_3$ has an ordinary double point which is $\mathbf{Q}$-factorial, hence locally factorial, because the fundamental group of the punctured neighborhood of the singularity is trivial. There is an equivalence $\Phi'_2: D_{\text{sg}}(Y_1) \cong D_{\text{sg}}(Y_3)$ (Theorem~\ref{Knorrer}) which is given as follows. Let $Z'_2 = \text{Spec}(k[y,z,w]/(z^2+z^3+w^2))$. Then there are natural morphisms $q'_2: Z'_2 \to Y_1$ and $i'_2: Z'_2 \subset Y_3$. The equivalence is given by $\Phi'_2 = Ri'_{2*}(q'_2)^*$. Let $D = \{x=z^2+z^3+w^2=0\}$ be a prime divisor on $Y_3$, which is a Cartier divisor. Let $D' \to D$ be the normalization. On $Y_1$, we have $C' = k[t]$ with $z = -t^2-1$ and $w = -t^3-t$. On $Y_3$, we have $\mathcal{O}_{D'} = k[y,t]$. Thus we have $\Phi'_2(C') = Ri'_{2*}(k[y,t]) = k[y,t] = \mathcal{O}_{D'}$. 
There are surjective homomorphisms $\mathcal{O}_{Y_3}^{\oplus 2} \to \mathcal{O}_D^{\oplus 2} \to \mathcal{O}_{D'}$, hence an exact sequence \[ 0 \to F \to \mathcal{O}_{Y_3}^{\oplus 2} \to \mathcal{O}_{D'} \to 0 \] defining a coherent sheaf $F$. Then $D_{\text{sg}}(Y_3)$ is spanned by $F$. The completion of $Y_3$ at the singular point is isomorphic to that of $X_3$, and the completion of $F$ corresponds to that of $\mathcal{O}_{X_3}(-L) \oplus \mathcal{O}_{X_3}(-L')$. \end{Expl} \section{Semi-orthogonal decomposition for a Gorenstein projective variety} The following is the main result of this section: \begin{Thm}\label{main} Let $X$ be a Gorenstein projective variety, let $L$ be a maximally Cohen-Macaulay sheaf on $X$, let $F$ be a coherent sheaf which is a perfect complex on $X$, and let $R = \text{End}_X(F)$ be the endomorphism ring. Assume the following conditions: \begin{enumerate} \item $\text{Hom}_{D^b(\text{coh}(X))}(F,F[p]) = 0$ for $p \ne 0$. \item $F$ is flat over $R$. \item $L$ generates the triangulated category of singularities $D_{\text{sg}}(X) = D^b(\text{coh}(X))/\text{Perf}(X)$ in the sense that $\text{Hom}(L, A) \ne 0$ for any $A \not\cong 0$ in $D_{\text{sg}}(X)$ (note that there is no shift of $A$). \item $L$ belongs to the triangulated subcategory $T$ of $D^b(\text{coh}(X))$ generated by $F$. \end{enumerate} Denote by $T^{\perp}$ the right orthogonal complement of $T$ in $D^b(\text{coh}(X))$, the full subcategory consisting of objects $A$ such that $\text{Hom}_{D^b(\text{coh}(X))}(F,A[p]) \cong 0$ for all $p$. Then the following hold. \begin{enumerate} \item There is an equivalence $T \cong D^b(\text{mod-}R)$, the bounded derived category of finitely generated right $R$-modules. \item There is a semi-orthogonal decomposition \[ D^b(\text{coh}(X)) = \langle T^{\perp},T \rangle. \] \item $T^{\perp} \subset \text{Perf}(X)$. \item $T^{\perp}$ is two-sided saturated. \end{enumerate} \end{Thm} \begin{proof} (1) and (2). 
We define functors between unbounded triangulated categories $\Phi: D(\text{Mod-}R) \to D(\text{Qcoh}(X))$ and $\Psi: D(\text{Qcoh}(X)) \to D(\text{Mod-}R)$ in the following, where $D(\text{Mod-}R)$ denotes the unbounded derived category of right $R$-modules which are not necessarily finitely generated. We set $\Phi(\bullet) = \bullet \otimes_R^L F$, where the lower $R$ stands for the tensor product over $R$ and the upper $L$ for the left derived functor, and $\Psi(\bullet) = R\text{Hom}_X(F,\bullet)$. That is, we define $\Phi(A) = P_* \otimes_R F$ as complexes for a $K$-projective resolution $P_* \to A$ in $D(\text{Mod-}R)$, and $\Psi(B) = \text{Hom}_X(F,I_*)$ as complexes for a $K$-injective resolution $B \to I_*$ in $D(\text{Qcoh}(X))$. Since $F$ is flat over $R$ and perfect on $X$, these functors induce functors $\Phi_0: D^b(\text{mod-}R) \to D^b(\text{coh}(X))$ and $\Psi_0: D^b(\text{coh}(X)) \to D^b(\text{mod-}R)$, where $D^b(\text{mod-}R)$ denotes the bounded derived category of right $R$-modules which are finitely generated. We have \[ \begin{split} &\text{Hom}_X(\Phi(A),B) \cong \text{Hom}_X(P_* \otimes_R F,I_*) \\ &\cong \text{Hom}_R(P_*, \text{Hom}_X(F,I_*)) \cong \text{Hom}_R(A, \Psi(B)). \end{split} \] Therefore $\Phi$ and $\Psi$ are adjoint functors. We have \[ \Psi\Phi(A) = R\text{Hom}_X(F,P_* \otimes_R F) \cong P_* \otimes_R R\text{Hom}(F,F) \cong P_* \cong A. \] Thus the adjunction morphism $A \to \Psi\Phi(A)$ is an isomorphism, and $\Phi$ is fully faithful. Let $T'$ be the image of $\Phi$, i.e., the triangulated subcategory of $D(\text{Qcoh}(X))$ generated by $F$. Then we conclude that there is a semi-orthogonal decomposition $D(\text{Qcoh}(X)) = \langle (T')^{\perp},T' \rangle$. By restriction, we deduce that $\Phi_0$ is fully faithful and $\Psi_0$ is its right adjoint. Therefore we have (1) and (2). We need the unbounded version of the statement for the proof of assertion (4) below. \vskip 1pc (3) Let $G \in D^b(\text{coh}(X))$. 
If $G \not\in \text{Perf}(X)$, then $\bar G[N] \not\cong 0 \in D_{\text{sg}}(X)$, where $N$ is the number in Theorem~\ref{Orlov-Prop2} and $\bar G$ denotes the object $G$ in $D_{\text{sg}}(X)$. Then $\text{Hom}_{D_{\text{sg}}(X)}(\bar L,\bar G[N]) \ne 0$, hence $\text{Hom}_{D^b(\text{coh}(X))}(L,G[N]) \ne 0$ by Theorem~\ref{Orlov-Prop2}. Thus $G \not\in T^{\perp}$. Therefore $T^{\perp} \subset \text{Perf}(X)$. \vskip 1pc (4) We modify the proof of \cite{BvdB} Theorem A.1. We note that $T^{\perp}$ is Hom-finite because it is contained in $\text{Perf}(X)$. It also has a Serre functor given by $\otimes \omega_X[\dim X]$. Hence it is sufficient to prove that $T^{\perp}$ is right saturated. $(T')^{\perp}$ has arbitrary coproducts, and it is compactly generated by $T^{\perp} = (T')^{\perp} \cap \text{Perf}(X)$. Indeed, for any object $0 \not\cong B \in (T')^{\perp}$, there is $A \in \text{Perf}(X)$ such that $\text{Hom}_X(A,B) \ne 0$. By (2), there is a left adjoint functor $\Xi: D(\text{Qcoh}(X)) \to (T')^{\perp}$ of the inclusion functor $\Theta: (T')^{\perp} \to D(\text{Qcoh}(X))$ which induces a left adjoint functor $\Xi_0: D^b(\text{coh}(X)) \to T^{\perp}$ of the inclusion functor $\Theta_0: T^{\perp} \to D^b(\text{coh}(X))$. Then $\text{Hom}_X(\Xi_0(A),B) \ne 0$ because $\text{Hom}_X(F,B) = 0$. We use \cite{CKN} Lemma 2.14. Let $H: (T^{\perp})^{\text{op}} \to (\text{mod-}k)$ be any cohomological functor of finite type. We define $G = DH: T^{\perp} \to (\text{mod-}k)$ using the duality functor $D: (\text{mod-}k)^{\text{op}} \to (\text{mod-}k)$ given by $D(V) = \text{Hom}(V,k)$. Let $G': (T')^{\perp} \to (\text{mod-}k)$ be the Kan extension of $G$ given by $G'(B) = \text{colim}_{C \to B}G(C)$, where the colimit is taken over all morphisms from all compact objects $C \in T^{\perp}$. $DG'$ is represented by an object $A \in (T')^{\perp}$ by the Brown representability theorem. Since $DDH = H$ on $T^{\perp}$, we deduce that $H$ is represented by $A$. 
We have to prove that $A \in T^{\perp} = (T')^{\perp} \cap D^b(\text{coh}(X))$. We take an embedding $p: X \to \mathbf{P}^n$, and let $H'=H\Xi_0p^*: D^b(\text{coh}(\mathbf{P}^n))^{\text{op}} \to (\text{mod-}k)$ be the induced cohomological functor. By Beilinson's theorem (\cite{Beilinson}), there is an equivalence $t': D(\text{Mod-}S) \cong D(\text{Qcoh}(\mathbf{P}^n))$ which induces an equivalence $t: D^b(\text{mod-}S) \cong D^b(\text{coh}(\mathbf{P}^n))$ for some finite dimensional associative ring $S$. Let $H'' = H't: D^b(\text{mod-}S)^{\text{op}} \to (\text{mod-}k)$ and $A' = (t')^{-1}p_*\Theta(A) \in D(\text{Mod-}S)$. Then we have $H''(B) = \text{Hom}_X(\Xi_0p^*t(B),A) = \text{Hom}(B, A')$ for any $B \in D^b(\text{mod-}S)$, i.e., $H''$ is represented by $A'$. Therefore our assertion is reduced to showing that $A' \in D^b(\text{mod-}S)$. We have $\sum_n \dim \text{Hom}_S(S[n],A') = \sum_n \dim H''(S[n]) < \infty$, hence $A' \in D^b(\text{mod-}S)$, and we are done. \end{proof} \begin{Rem} We will use the theorem in the case where $F$ is obtained as a versal non-commutative deformation of a simple collection $L$ as described in \cite{NC}. In this case $F$ is flat over the parameter algebra $R = \text{End}_X(F)$. We do not know how to generalize the theorem to the case where $F$ is not necessarily flat over its endomorphism ring. Indeed, we do not know how to prove that the functor $\Phi: D^-(\text{mod-}R) \to D^-(\text{coh}(X))$ defined by $\Phi(\bullet) = \bullet \otimes_R^L F$ is bounded, i.e., that $\Phi$ sends $D^b(\text{mod-}R)$ to $D^b(\text{coh}(X))$, though its right adjoint functor $\Psi: D^-(\text{coh}(X)) \to D^-(\text{mod-}R)$ defined by $\Psi(\bullet) = R\text{Hom}(F,\bullet)$ is bounded. \end{Rem} \section{Non-commutative deformation on a $3$-fold with a non-$\mathbf{Q}$-factorial ordinary double point} We will apply Theorem~\ref{main} to a $3$-fold with a non-$\mathbf{Q}$-factorial ordinary double point. 
The following theorem says that the conditions of Theorem~\ref{main} are satisfied under some assumptions: \begin{Thm}\label{ODP} Let $X$ be a projective variety of dimension $3$. Assume that there is only one singular point $P$ which is a non-$\mathbf{Q}$-factorial ordinary double point. Then there is a $\mathbf{Q}$-factorialization $f: Y \to X$, a projective birational morphism from a smooth projective variety whose exceptional locus $l$ is a smooth rational curve. Assume that there are divisors $D_1,D_2$ on $Y$ such that, for $L_i = f_*\mathcal{O}_Y(-D_i)$, the following conditions are satisfied: \begin{enumerate} \item $(D_1,l) = 1$ and $(D_2,l) = -1$. \item $(L_1,L_2)$ is a simple collection, i.e., $\dim \text{Hom}(L_i,L_j) = \delta_{ij}$. \item $H^p(X,f_*\mathcal{O}_Y(-D_i+D_j)) = 0$ for all $p > 0$ and all $i,j$. \end{enumerate} \noindent Then there are locally free sheaves $F_1,F_2$ of rank $2$ on $X$ given by non-trivial extensions \begin{equation}\label{F_i} \begin{split} 0 \to L_2 \to F_1 \to L_1 \to 0 \\ 0 \to L_1 \to F_2 \to L_2 \to 0 \end{split} \end{equation} such that, for $L = L_1 \oplus L_2$ and $F = F_1 \oplus F_2$, the following assertions hold: \begin{enumerate} \item $\text{Ext}^p(F,F) = 0$ for $p > 0$. \item $F$ is flat over $R := \text{End}(F) \cong \left( \begin{matrix} k & kt \\ kt & k \end{matrix} \right)$, where $t^2 = 0$. \item $L$ is a Cohen-Macaulay sheaf which generates the triangulated category of singularities $D_{\text{sg}}(X)$. \item $L$ belongs to the triangulated subcategory of $D^b(\text{coh}(X))$ generated by $F$. \end{enumerate} \end{Thm} \begin{proof} We consider $2$-pointed non-commutative (NC) deformations of a simple collection $(L_1,L_2)$ (\cite{NC}). There is a spectral sequence \[ H^p(X,\mathcal{E}xt^q(L_i,L_j)) \Rightarrow \text{Ext}^{p+q}(L_i,L_j) \] for any $i,j$. By condition (3), we deduce that \[ \text{Ext}^p(L_i,L_j) \cong H^0(X, \mathcal{E}xt^p(L_i,L_j)) \] for all $p$. 
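The $4$-dimensional algebra $R$ in assertion (2) can be modeled concretely as $2 \times 2$ matrices with diagonal entries in $k$ and off-diagonal entries in $kt$, subject to $t^2 = 0$ (basis $e_{11}$, $e_{22}$, $te_{12}$, $te_{21}$). The sympy sketch below (an illustration, not part of the proof) confirms that this model is closed under multiplication, that the diagonal parts multiply componentwise (so $R/\operatorname{rad}(R) \cong k \times k$), and that the radical squares to zero:

```python
import sympy as sp

a, b, c, d, A, B, C, D, t = sp.symbols('a b c d A B C D t')

def mod_t2(M):
    # reduce each entry modulo t^2, the defining relation of R
    return M.applyfunc(lambda f: sp.rem(sp.expand(f), t**2, t))

X = sp.Matrix([[a, b*t], [c*t, d]])   # a general element of R
Y = sp.Matrix([[A, B*t], [C*t, D]])   # another general element

P = mod_t2(X * Y)

# closure: the product again has off-diagonal entries divisible by t
assert sp.rem(P[0, 1], t, t) == 0 and sp.rem(P[1, 0], t, t) == 0
# diagonal parts multiply componentwise, so R/rad(R) = k x k
assert P[0, 0] == a*A and P[1, 1] == d*D
# the radical {a = d = 0} squares to zero
N = sp.Matrix([[0, b*t], [c*t, 0]])
assert mod_t2(N * N) == sp.zeros(2, 2)
```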
A neighborhood of the singular point $P \in X$ is analytically isomorphic to that of the vertex of the cone over $\mathbf{P}^1 \times \mathbf{P}^1$ in $\mathbf{P}^4$ considered in \cite{NC}~Example 5.6 (see also the last section). Since the sheaves $L_1$ and $L_2$ here correspond to the sheaves $\mathcal{O}_X(0,-1)$ and $\mathcal{O}_X(-1,0)$ there, the extension space $\mathcal{E}xt^1(L_1,L_2)$ is isomorphic to $\mathcal{E}xt^1(\mathcal{O}_X(0,-1),\mathcal{O}_X(-1,0))$ there. By the argument there, there exists a locally free sheaf $F_1$ in an analytic neighborhood of $P$ with the extension sequence as stated in the theorem, and similarly $F_2$. Since $\text{Ext}^1(L_i,L_j) \cong H^0(X, \mathcal{E}xt^1(L_i,L_j))$, we deduce that these extensions exist globally on $X$. We need to calculate $\mathcal{E}xt^p(L_i,L_j)$ for all $p$ at $P$. For this purpose we need the following calculation: \begin{Lem} Let $X'$ be the cone over $\mathbf{P}^1 \times \mathbf{P}^1$ in $\mathbf{P}^4$ as in \cite{NC}~Example 5.6 (we use the notation $X'$ in order to avoid confusion). Let $G_1,G_2$ be locally free sheaves of rank $2$ on $X'$ defined by non-trivial extensions: \begin{equation} \begin{split} 0 \to \mathcal{O}_{X'}(-1,0) \to G_1 \to \mathcal{O}_{X'}(0,-1) \to 0 \\ 0 \to \mathcal{O}_{X'}(0,-1) \to G_2 \to \mathcal{O}_{X'}(-1,0) \to 0. \end{split} \end{equation} Then the following hold: \[ \begin{split} &(1) \,\, H^p(X',\mathcal{O}_{X'}(-1,0)) = H^p(X',\mathcal{O}_{X'}(-2,0)) = H^p(X',\mathcal{O}_{X'}(-1,1)) = 0, \,\, \forall p. \\ &(2) \,\, H^p(X',\mathcal{O}_{X'}(0,-1)) = H^p(X',\mathcal{O}_{X'}(0,-2)) = H^p(X',\mathcal{O}_{X'}(1,-1)) = 0, \,\, \forall p. \\ &(3) \,\, \text{Ext}^p(G_1,\mathcal{O}_{X'}(-1,0)) = \text{Ext}^p(G_1,\mathcal{O}_{X'}(0,-1)) = 0, \,\, \forall p > 0. \\ &(4) \,\, \text{Ext}^p(G_2,\mathcal{O}_{X'}(-1,0)) = \text{Ext}^p(G_2,\mathcal{O}_{X'}(0,-1)) = 0, \,\, \forall p > 0. 
\\ &(5) \,\, \dim \text{Hom}(G_1,\mathcal{O}_{X'}(0,-1)) = \dim \text{Hom}(G_2,\mathcal{O}_{X'}(-1,0)) = 1. \\ &(6) \,\, \text{Hom}(G_1,\mathcal{O}_{X'}(-1,0)) = \text{Hom}(G_2,\mathcal{O}_{X'}(0,-1)) = 0. \\ &(7) \,\, \text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) = \text{Ext}^p(\mathcal{O}_{X'}(-1,0), \mathcal{O}_{X'}(-1,0)) = 0, \,\, p > 0, p \equiv 1 (\text{mod }2). \\ &(8) \,\, \text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) = \text{Ext}^p(\mathcal{O}_{X'}(-1,0), \mathcal{O}_{X'}(0,-1)) = 0, \,\, p > 0, p \equiv 0 (\text{mod }2). \\ &(9) \,\, \dim \text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) = \dim \text{Ext}^p(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) = 1, \,\, p > 0, p \equiv 0 (\text{mod }2). \\ &(10) \,\, \dim \text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) = \dim \text{Ext}^p(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) = 1, \,\, p > 0, p \equiv 1 (\text{mod }2). \end{split} \] \end{Lem} \begin{proof} (1) and (2). Let $D \cong \mathbf{P}^2$ be a prime divisor on $X'$ corresponding to $\mathcal{O}_{X'}(1,0)$ such that there is an exact sequence \[ 0 \to \mathcal{O}_{X'}(-1,0) \to \mathcal{O}_{X'} \to \mathcal{O}_D \to 0. \] Since $H^p(X',\mathcal{O}_{X'}) \cong H^p(D,\mathcal{O}_D)$ for all $p$, we have $H^p(X',\mathcal{O}_{X'}(-1,0)) = 0$ for all $p$. We have an exact sequence \[ 0 \to \mathcal{O}_{X'}(-2,0) \to \mathcal{O}_{X'}(-1,0) \to \mathcal{O}_D(-P) \to 0 \] where $\mathcal{O}_D(-P)$ is the ideal sheaf of $P$ on $D$: \[ 0 \to \mathcal{O}_D(-P) \to \mathcal{O}_D \to \mathcal{O}_P \to 0. \] Since $H^p(\mathcal{O}_D) \cong H^p(\mathcal{O}_P)$ for all $p$, we have $H^p(\mathcal{O}_D(-P)) = 0$ for all $p$. Then we deduce that $H^p(X',\mathcal{O}_{X'}(-2,0)) = 0$ for all $p$. 
Let $S \cong \mathbf{P}^1 \times \mathbf{P}^1$ be a prime divisor on $X'$ corresponding to $\mathcal{O}_{X'}(1,1)$ such that there is an exact sequence \[ 0 \to \mathcal{O}_{X'}(-1,-1) \to \mathcal{O}_{X'} \to \mathcal{O}_S \to 0. \] Then we have \[ 0 \to \mathcal{O}_{X'}(-2,0) \to \mathcal{O}_{X'}(-1,1) \to \mathcal{O}_S(-1,1) \to 0. \] Since $H^p(S,\mathcal{O}_S(-1,1)) = 0$ for all $p$, we have $H^p(X',\mathcal{O}_{X'}(-1,1)) = 0$ for all $p$. The second assertion follows by symmetry. \vskip 1pc (3) and (4). There are spectral sequences \[ \begin{split} &E_2^{p,q} = H^p(X',\mathcal{E}xt^q(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0))) \Rightarrow \text{Ext}^{p+q}(\mathcal{O}_{X'}(0,-1), \mathcal{O}_{X'}(-1,0)) \\ &E_2^{p,q} = H^p(X',\mathcal{E}xt^q(\mathcal{O}_{X'}(1,0),\mathcal{O}_{X'}(-1,0))) \Rightarrow \text{Ext}^{p+q}(\mathcal{O}_{X'}(1,0), \mathcal{O}_{X'}(-1,0)). \end{split} \] Then by (1), we obtain natural isomorphisms \[ \begin{split} &\text{Ext}^p(\mathcal{O}_{X'}(0,-1), \mathcal{O}_{X'}(-1,0)) \cong H^0(\mathcal{E}xt^p(\mathcal{O}_{X'}(0,-1), \mathcal{O}_{X'}(-1,0))) \\ &\cong \text{Ext}^p(\mathcal{O}_{X'}(1,0), \mathcal{O}_{X'}(-1,0)) \end{split} \] for all $p > 0$. We have a commutative diagram of exact sequences \[ \begin{CD} 0 @>>> \mathcal{O}_{X'}(-1,0) @>>> G_1 @>>> \mathcal{O}_{X'}(0,-1) @>>> 0 \\ @. @V=VV @VVV @VVV @. \\ 0 @>>> \mathcal{O}_{X'}(-1,0) @>>> \mathcal{O}_{X'}^2 @>>> \mathcal{O}_{X'}(1,0) @>>> 0 \end{CD} \] where the cokernels of the middle and right vertical arrows are isomorphic to $\mathcal{O}_S(1,0)$. By (1) we have isomorphisms \[ \text{Ext}^p(G_1,\mathcal{O}_{X'}(-1,0)) \cong \text{Ext}^{p+1}(\mathcal{O}_S(1,0), \mathcal{O}_{X'}(-1,0)) \cong 0 \] for all $p > 0$. Since $H^p(X',\mathcal{O}_{X'}) = 0$ for $p > 0$ and $H^p(X',\mathcal{O}_{X'}(-1,-1)) = 0$ for all $p$, we also have \[ \text{Ext}^p(G_1,\mathcal{O}_{X'}(0,-1)) \cong \text{Ext}^{p+1}(\mathcal{O}_S(1,0), \mathcal{O}_{X'}(0,-1)) \cong 0 \] for all $p > 0$. 
The second assertion follows by symmetry. \vskip 1pc (5)--(10). We have a long exact sequence \[ \begin{split} &0 \to \text{Hom}(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \to \text{Hom}(G_1,\mathcal{O}_{X'}(0,-1)) \to \text{Hom}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \\ &\to \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \to \text{Ext}^1(G_1,\mathcal{O}_{X'}(0,-1)) \to \text{Ext}^1(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \to \dots \end{split} \] Since $\text{Hom}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) = 0$ and $\text{Ext}^p(G_1,\mathcal{O}_{X'}(0,-1)) = 0$ for $p > 0$, we deduce \begin{equation}\label{coh1} \begin{split} &\dim \text{Hom}(G_1,\mathcal{O}_{X'}(0,-1)) = 1, \,\, \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) = 0 \\ &\text{Ext}^p(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \cong \text{Ext}^{p+1}(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \,\, (p > 0). \end{split} \end{equation} We have a long exact sequence \[ \begin{split} &0 \to \text{Hom}(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) \to \text{Hom}(G_1,\mathcal{O}_{X'}(-1,0)) \to \text{Hom}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) \\ &\to \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) \to \text{Ext}^1(G_1,\mathcal{O}_{X'}(-1,0)) \to \text{Ext}^1(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) \to \dots \end{split} \] By construction, the homomorphism \[ k \cong \text{Hom}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) \to \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) \] is injective. Hence we deduce \begin{equation}\label{coh2} \begin{split} &\text{Hom}(G_1,\mathcal{O}_{X'}(-1,0)) = 0, \,\, \dim \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) = 1 \\ &\text{Ext}^p(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) \cong \text{Ext}^{p+1}(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) \,\, (p > 0). 
\end{split} \end{equation} By symmetry we also obtain \begin{equation}\label{coh3} \begin{split} &\text{Hom}(G_2,\mathcal{O}_{X'}(0,-1)) = 0, \,\, \dim \text{Hom}(G_2,\mathcal{O}_{X'}(-1,0)) = 1 \\ &\text{Ext}^1(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) = 0, \,\, \dim \text{Ext}^1(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) = 1 \\ &\text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(-1,0)) \cong \text{Ext}^{p+1}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(-1,0)) \,\, (p > 0) \\ &\text{Ext}^p(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \cong \text{Ext}^{p+1}(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \,\, (p > 0). \end{split} \end{equation} Combining equations (\ref{coh1}), (\ref{coh2}) and (\ref{coh3}), we obtain \[ \begin{split} &0 = \text{Ext}^1(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \cong \text{Ext}^2(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \\ &\cong \text{Ext}^3(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \cong \text{Ext}^4(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) \cong \dots \\ &1 = \dim \text{Ext}^1(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) = \dim \text{Ext}^2(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) \\ &= \dim \text{Ext}^3(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) = \dim \text{Ext}^4(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) = \dots \end{split} \] hence the remaining assertions follow. \end{proof} We go back to our original situation: \begin{Cor} \[ \begin{split} &(1) \,\, \text{Ext}^p(L_1,L_1) = \text{Ext}^p(L_2,L_2) = 0, \,\, p > 0, p \equiv 1 (\text{mod }2). \\ &(2) \,\, \text{Ext}^p(L_1,L_2) = \text{Ext}^p(L_2,L_1) = 0, \,\, p > 0, p \equiv 0 (\text{mod }2). \\ &(3) \,\, \dim \text{Ext}^p(L_1,L_1) = \dim \text{Ext}^p(L_2,L_2) = 1, \,\, p > 0, p \equiv 0 (\text{mod }2). \\ &(4) \,\, \dim \text{Ext}^p(L_1,L_2) = \dim \text{Ext}^p(L_2,L_1) = 1, \,\, p > 0, p \equiv 1 (\text{mod }2). \\ &(5) \,\, \dim \text{Hom}(F_i,L_i) = 1 \,\, (\forall i), \,\, \text{Hom}(F_i,L_j) = 0 \,\, (i \ne j). 
\\ &(6) \,\, \text{Ext}^p(F_i,L_j) = 0, \,\, p > 0, \forall i, \forall j. \\ &(7) \,\, \text{Ext}^p(F_i,F_j) = 0, \,\, p > 0, \forall i, \forall j. \end{split} \] \end{Cor} \begin{proof} Since $P \in X$ is analytically isomorphic to the situation of the lemma, we have isomorphisms between the local extension sheaves $\mathcal{E}xt^p$ at the singular point. By the spectral sequence arguments, we obtain the global assertions (1) through (4). We have exact sequences \[ \begin{split} &0 \to \text{Hom}(L_1,L_1) \to \text{Hom}(F_1,L_1) \to \text{Hom}(L_2,L_1) \\ &\to \text{Ext}^1(L_1,L_1) \to \text{Ext}^1(F_1,L_1) \to \text{Ext}^1(L_2,L_1) \to \dots \\ &0 \to \text{Hom}(L_1,L_2) \to \text{Hom}(F_1,L_2) \to \text{Hom}(L_2,L_2) \\ &\to \text{Ext}^1(L_1,L_2) \to \text{Ext}^1(F_1,L_2) \to \text{Ext}^1(L_2,L_2) \to \dots \end{split} \] Since $\text{Hom}(L_2,L_1) = 0$, we have $\dim \text{Hom}(F_1,L_1) = 1$. The homomorphism $\text{Hom}(L_2,L_2) \to \text{Ext}^1(L_1,L_2)$ is non-trivial because the extension is non-trivial. Hence $\text{Hom}(F_1,L_2) = 0$. We have $\text{Ext}^1(L_1,L_1) = 0$. We have a commutative diagram \[ \begin{CD} \text{Ext}^p(\mathcal{O}_{X'}(-1,0),\mathcal{O}_{X'}(0,-1)) @>{\cong}>> H^0(\mathcal{E}xt^p(L_2,L_1)) @>{\cong}>> \text{Ext}^p(L_2,L_1) \\ @VVV @VVV @VVV \\ \text{Ext}^{p+1}(\mathcal{O}_{X'}(0,-1),\mathcal{O}_{X'}(0,-1)) @>{\cong}>> H^0(\mathcal{E}xt^{p+1}(L_1,L_1)) @>{\cong}>> \text{Ext}^{p+1}(L_1,L_1) \end{CD} \] for all $p > 0$. Therefore the homomorphisms $\text{Ext}^p(L_2,L_1) \to \text{Ext}^{p+1}(L_1,L_1)$ are bijective. Thus we obtain $\text{Ext}^p(F_1,L_1) = 0$ for all $p > 0$. In a similar way, we deduce that the homomorphisms $\text{Ext}^p(L_2,L_2) \to \text{Ext}^{p+1}(L_1,L_2)$ are bijective, and we obtain $\text{Ext}^p(F_1,L_2) = 0$ for all $p > 0$. The assertions for $F_2$ are obtained by symmetry. (7) follows from exact sequences \[ \text{Ext}^p(F_i,L_{j'}) \to \text{Ext}^p(F_i,F_j) \to \text{Ext}^p(F_i,L_j) \] for all $i,j$ and $j' \ne j$. 
\end{proof} \begin{Rem} The $2$-periodicity is a consequence of the equalities $\bar L_1 = \bar L_2[1] = \bar L_1[2] \in D_{\text{sg}}(X)$. \end{Rem} We go back to the proof of the theorem. (7) of the above corollary implies our assertion (1). The assertion (2) is a consequence of the fact that $F$ is an NC deformation of $L$. Then we obtain a functor $\Phi: D^b(\text{mod-}R) \to D^b(\text{coh}(X))$ defined by $\Phi(\bullet) = \bullet \otimes_R F$. We note that the functor $\Phi$ is bounded because $F$ is flat over $R$. The $L_i$ are images of the simple $R$-modules by $\Phi$. Since $D^b(\text{mod-}R)$ is generated by $R$, the image of $\Phi$ is generated by $F = \Phi(R)$, hence the assertion (4). For the assertion (3), we consider the following exact sequences of local cohomology groups: \[ H^{p-1}_P(L_i) \to H^p_P(L_j) \to H^p_P(F_i) \] for $i \ne j$. Since the $L_i$ are reflexive, we have $H^p_P(L_j) = 0$ for $p < 2$. Since $X$ is Cohen-Macaulay, we have $H^p_P(F_i) = 0$ for $p < 3$. Therefore we deduce that $H^p_P(L_i) = 0$ for $p < 3$, i.e., the $L_i$ are maximally Cohen-Macaulay sheaves. They generate $D_{\text{sg}}(X)$ by the condition (1). Thus we complete the proof of the theorem. \end{proof} \begin{Rem} There is an exact sequence \[ 0 \to \mathcal{O}_{\mathbf{P}^1 \times \mathbf{P}^1}(-m,0) \to \mathcal{O}_{\mathbf{P}^1 \times \mathbf{P}^1}^2 \to \mathcal{O}_{\mathbf{P}^1 \times \mathbf{P}^1}(m,0) \to 0 \] on $\mathbf{P}^1 \times \mathbf{P}^1$ for any positive integer $m$. But the corresponding sequence \[ 0 \to \mathcal{O}_{X_3}(-m,0) \to \mathcal{O}_{X_3}^2 \to \mathcal{O}_{X_3}(m,0) \to 0 \] on the cone $X_3$ over $\mathbf{P}^1 \times \mathbf{P}^1$ in $\mathbf{P}^4$ is not exact if $m \ge 2$. This follows from the fact that the fiber $\mathcal{O}_{X_3}(m,0) \otimes \mathcal{O}_P \cong \mathcal{O}_{X_3}(0,-m) \otimes \mathcal{O}_P$ at the singular point $P$ has $m+1$ generators. 
Indeed, if $xy+zw=0$ is the equation of $X_3$ at $P$, then $\mathcal{O}_{X_3}(0,-m) = (x^m,x^{m-1}z,\dots,z^m)$. Therefore the condition (1) of the theorem is necessary. \end{Rem} \begin{Rem} Our construction can be generalized to the higher dimensional $X_n$ with $n \ge 4$ by using spinor bundles. Let $n' = n-1$. On an $n'$-dimensional smooth quadric $Q$, there are locally free sheaves $\Sigma_Q$ (resp. $\Sigma^+_Q$ and $\Sigma^-_Q$) of rank $2^{m-1}$ for $n' = 2m-1$ (resp. rank $2^{m-1}$ for $n' = 2m$) called {\em spinor bundles}. There are semi-orthogonal decompositions (\cite{Kapranov}): \[ \begin{split} &D^b(\text{coh}(Q)) = \langle \Sigma_Q(-n'),\mathcal{O}_Q(-n' + 1), \dots, \mathcal{O}_Q(-1),\mathcal{O}_Q \rangle, \,\,\, n' \text{ odd} \\ &D^b(\text{coh}(Q)) = \langle \Sigma^+_Q(-n'),\Sigma^-_Q(-n'),\mathcal{O}_Q(-n' + 1), \dots, \mathcal{O}_Q(-1),\mathcal{O}_Q \rangle, \,\,\,n' \text{ even}. \end{split} \] By \cite{Ottaviani}, there are exact sequences \[ \begin{split} &0 \to \Sigma_Q(-1) \to \mathcal{O}_Q^{2^m} \to \Sigma_Q \to 0, \,\,\, n' = 2m-1 \\ &0 \to \Sigma^+_Q(-1) \to \mathcal{O}_Q^{2^m} \to \Sigma^-_Q \to 0, \,\,\, n' = 2m \\ &0 \to \Sigma^-_Q(-1) \to \mathcal{O}_Q^{2^m} \to \Sigma^+_Q \to 0, \,\,\, n' = 2m. \end{split} \] Correspondingly, there are Cohen-Macaulay sheaves $\Sigma_X$ (resp. $\Sigma^+_X$ and $\Sigma^-_X$) of rank $2^{m-1}$ for $n = 2m$ (resp. rank $2^{m-1}$ for $n = 2m+1$) on $X = X_n$, and there are exact sequences \[ \begin{split} &0 \to \Sigma_X(-1) \to \mathcal{O}_X^{2^m} \to \Sigma_X \to 0, \,\,\, n = 2m \\ &0 \to \Sigma^+_X(-1) \to \mathcal{O}_X^{2^m} \to \Sigma^-_X \to 0, \,\,\, n = 2m+1 \\ &0 \to \Sigma^-_X(-1) \to \mathcal{O}_X^{2^m} \to \Sigma^+_X \to 0, \,\,\, n = 2m+1. 
\end{split} \] In the same way as in the case of $n = 2,3$ (\cite{NC} Examples 5.5 and 5.6), if we pull back the sequences by injective homomorphisms $\Sigma_X(-1) \to \Sigma_X$ and $\Sigma_X^{\pm}(-1) \to \Sigma_X^{\pm}$, we obtain non-commutative deformations of $\Sigma_X(-1)$ and $\Sigma_X^{\pm}(-1)$ yielding locally free sheaves of rank $2^m$ which generate semi-orthogonal components of $D^b(\text{coh}(X))$. \end{Rem} \section{Examples} \subsection{Quadric cone} This is an example of a projective variety with a non-$\mathbf{Q}$-factorial ordinary double point considered in \cite{NC} Example 5.6. Let $X$ be a cone over $\mathbf{P}^1 \times \mathbf{P}^1$ in $\mathbf{P}^4$ defined by an equation $xy+zw = 0$. $X$ has one ordinary double point $P$, which is not $\mathbf{Q}$-factorial. Let $\mathcal{O}_X(a,b)$ be the reflexive sheaf of rank $1$ for integers $a,b$ corresponding to the invertible sheaf $\mathcal{O}_{\mathbf{P}^1 \times \mathbf{P}^1}(a,b)$ of bidegree $(a,b)$. $\mathcal{O}_X(a,b)$ is invertible if and only if $a = b$. For example, the hyperplane section bundle is $\mathcal{O}_X(1,1)$. Let $f: Y \to X$ be a small resolution whose exceptional locus $C$ is isomorphic to $\mathbf{P}^1$. Let $D_1$ (resp. $D_2$) be a general member of the linear system $\vert \mathcal{O}_X(0,1) \vert$ (resp. $\vert \mathcal{O}_X(1,0) \vert$), and let $D'_i = f_*^{-1}D_i$ be the strict transforms. Then we have $(D'_1,C) = 1$ and $(D'_2,C) = -1$ if $f$ is chosen suitably. There are non-trivial extensions \[ \begin{split} &0 \to \mathcal{O}_X(-1,0) \to G_1 \to \mathcal{O}_X(0,-1) \to 0 \\ &0 \to \mathcal{O}_X(0,-1) \to G_2 \to \mathcal{O}_X(-1,0) \to 0 \end{split} \] for some locally free sheaves $G_i$ of rank $2$. Let $G = G_1 \oplus G_2$. Then $G$ is a relative exceptional object over a non-commutative ring $R$ of dimension $4$ over $k$: \[ R = \text{End}(G) = \left( \begin{matrix} k & kt \\ kt & k \end{matrix} \right) \mod t^2. 
\] The multiplication of $R$ is defined according to the matrix rule. There is a semi-orthogonal decomposition \[ D^b(X) = \langle \mathcal{O}(-2,-2), \mathcal{O}(-1,-1), G, \mathcal{O} \rangle \cong \langle D^b(k), D^b(k), D^b(R), D^b(k) \rangle. \] \subsection{2 point blow up of $\mathbf{P}^3$} This is another example of a projective variety with a non-$\mathbf{Q}$-factorial ordinary double point. Let $g: Y \to \mathbf{P}^3$ be a blowing up at 2 general points $P_1,P_2$, with exceptional divisors $E_1,E_2$. Let $l$ be the strict transform of the line connecting $P_1,P_2$. Let $f: Y \to X$ be the contraction of $l$ to a point. Let $H$ be the strict transform of a general plane on $\mathbf{P}^3$ to $Y$. We have $K_Y = -4H + 2E_1 + 2E_2$, and $f$ is given by the anti-canonical linear system $\vert -K_Y \vert$ which is nef and big. By the contraction theorem (\cite{KMM}), if $(D,l) = 0$ for a Cartier divisor $D$ on $Y$, then $\mathcal{O}_Y(D) = f^*\mathcal{O}_X(f_*D)$ for a Cartier divisor $f_*D$ on $X$. By \cite{Beilinson} and \cite{BO}, $D^b(Y)$ is classically generated by a full exceptional collection \[ D^b(Y) = \langle \mathcal{O}_{E_1}(2E_1), \mathcal{O}_{E_2}(2E_2), \mathcal{O}_{E_1}(E_1), \mathcal{O}_{E_2}(E_2), \mathcal{O}(-3H), \mathcal{O}(-2H), \mathcal{O}(-H), \mathcal{O} \rangle. \] The following lemma shows that the conditions of Theorem~\ref{ODP} are satisfied: \begin{Lem} Let $D_1 = -H + E_1+E_2$, $D_2 = -E_1$, and $L_i = f_*\mathcal{O}_Y(-D_i)$ for $i=1,2$. Then the following hold. (1) $(D_1,l) = 1$, $(D_2,l) = -1$ and $R^pf_*\mathcal{O}_Y(D_i) = 0$ for $p > 0$ and all $i$. (2) $(L_1,L_2)$ is a simple collection. (3) $H^p(X, f_*\mathcal{O}_Y(D_i-D_j)) = 0$ for all $p > 0$ and all $i,j$. \end{Lem} \begin{proof} (1) is clear. (2) Since the $L_i$ are reflexive sheaves of rank $1$, we have $\dim \text{Hom}(L_i,L_i) = 1$. We have $\text{Hom}(L_1,L_2) = H^0(Y,D_1-D_2) = H^0(Y,-H+2E_1+E_2) = 0$. We have also $\text{Hom}(L_2,L_1) = H^0(Y,-D_1+D_2) = H^0(Y,H-2E_1-E_2) = 0$. 
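Here we use the following standard identification: since $f$ is a small contraction and the $L_i = f_*\mathcal{O}_Y(-D_i)$ are reflexive of rank $1$, we have \[ \text{Hom}(L_i,L_j) \cong H^0(X \setminus \{P\}, \mathcal{H}om(L_i,L_j)) \cong H^0(Y,\mathcal{O}_Y(D_i-D_j)). \]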
(3) If $i = j$, then $H^p(X,f_*\mathcal{O}_Y) = H^p(Y,\mathcal{O}_Y) = 0$ for $p > 0$. We consider the case $i \ne j$. There is a commutative diagram of exact sequences \[ \begin{CD} 0 @>>> \mathcal{O}_Y(-D_1+D_2) @>>> \mathcal{O}_Y(H) @>>> \mathcal{O}_{2E_1} \oplus \mathcal{O}_{E_2} @>>> 0 \\ @. @VVV @VVV @VVV @. \\ 0 @>>> \mathcal{O}_l(-D_1+D_2) @>>> \mathcal{O}_l(H) @>>> \mathcal{O}_{2Q_1} \oplus \mathcal{O}_{Q_2} @>>> 0 \end{CD} \] where $Q_i = E_i \cap l$. In the associated long exact sequences, we have $H^0(\mathcal{O}_Y(-D_1+D_2)) = H^0(\mathcal{O}_l(-D_1+D_2)) = 0$, $\dim H^0(Y,\mathcal{O}(H)) = 4$, $\dim H^0(l,\mathcal{O}_l(H)) = 2$, $\dim H^0(\mathcal{O}_{2E_1} \oplus \mathcal{O}_{E_2}) = 5$, $\dim H^0(\mathcal{O}_{2Q_1} \oplus \mathcal{O}_{Q_2}) = 3$, and $H^p(Y,\mathcal{O}(H)) = H^p(l,\mathcal{O}_l(H)) = 0$ for $p > 0$. It follows that $\dim H^1(Y,\mathcal{O}(-D_1+D_2)) = \dim H^1(\mathcal{O}_l(-D_1+D_2)) = 1$ and $H^p(Y,\mathcal{O}(-D_1+D_2)) = 0$ for $p \ne 1$. Moreover the homomorphisms $H^0(Y,\mathcal{O}(H)) \to H^0(l,\mathcal{O}_l(H))$ and $H^0(\mathcal{O}_{2E_1} \oplus \mathcal{O}_{E_2}) \to H^0(\mathcal{O}_{2Q_1} \oplus \mathcal{O}_{Q_2})$ are surjective. It follows that the homomorphism $H^1(Y,\mathcal{O}_Y(-D_1+D_2)) \to H^1(\mathcal{O}_l(-D_1+D_2))$ is also surjective. We have an exact sequence \[ \begin{split} &0 \to H^1(X,f_*\mathcal{O}_Y(-D_1+D_2)) \to H^1(Y,\mathcal{O}_Y(-D_1+D_2)) \to H^0(X,R^1f_*\mathcal{O}_Y(-D_1+D_2)) \\ &\to H^2(X,f_*\mathcal{O}_Y(-D_1+D_2)) \to H^2(Y,\mathcal{O}_Y(-D_1+D_2)). \end{split} \] Since the restriction homomorphism $H^1(Y,\mathcal{O}_Y(-D_1+D_2)) \to H^1(l,\mathcal{O}_l(-D_1+D_2))$ is surjective, we conclude that $H^p(X,f_*\mathcal{O}_Y(-D_1+D_2)) = 0$ for $p > 0$. There is an exact sequence \[ 0 \to \mathcal{O}(-H) \to \mathcal{O}(D_1-D_2) \to \mathcal{O}_{2E_1}(2E_1) \oplus \mathcal{O}_{E_2}(E_2) \to 0. \] We have \[ H^p(Y,\mathcal{O}(-H)) = H^p(\mathcal{O}_{2E_1}(2E_1) \oplus \mathcal{O}_{E_2}(E_2)) = 0 \] for all $p$. 
Hence $H^p(Y,\mathcal{O}_Y(D_1-D_2)) = 0$ for all $p$. Since $R^pf_*\mathcal{O}_Y(D_1-D_2) = 0$ for $p > 0$, we conclude that $H^p(X,f_*\mathcal{O}_Y(D_1-D_2)) = 0$ for $p > 0$. \end{proof} Let \[ \begin{split} 0 \to L_2 \to F_1 \to L_1 \to 0 \\ 0 \to L_1 \to F_2 \to L_2 \to 0 \end{split} \] be the universal extensions corresponding to $\text{Ext}^1(L_i,L_j)$ for $i \ne j$. Let $F = F_1 \oplus F_2$. We will calculate the right orthogonal complement $F^{\perp}$ in the rest of the example. We denote \[ \begin{split} &C'_1 = -3H+2E_1+E_2, \,\, C'_2 = -3H+E_1+2E_2 \\ &C'_3 = -2H+E_1+E_2, \,\, C'_4 = -H+E_1, \,\, C'_5 = 0. \end{split} \] Then $(C'_i,l) = 0$ for all $i$. Let $C_i = f_*C'_i$ be Cartier divisors on $X$ such that $C'_i = f^*C_i$. \begin{Lem} The right orthogonal complement $F^{\perp}$ in $D^b(\text{coh}(X))$ is generated by a strong exceptional collection consisting of the invertible sheaves $(\mathcal{O}_X(C_1),\dots,\mathcal{O}_X(C_5))$. \end{Lem} \begin{proof} (1) We prove that the sequence is a strong exceptional collection. Since $H^p(X,\mathcal{O}_X) = 0$ for $p > 0$, the $\mathcal{O}_X(C_i)$ are exceptional objects. We prove that $\text{Hom}(\mathcal{O}(C_i),\mathcal{O}(C_j)[p]) = 0$ for $i > j$ and for all $p$, and $\text{Hom}(\mathcal{O}(C_i),\mathcal{O}(C_j)[p]) = 0$ for all $i,j$ and $p > 0$. We note that $\text{Hom}_X(\mathcal{O}_X(C_i),\mathcal{O}_X(C_j)[p]) \cong \text{Hom}_Y(\mathcal{O}_Y(C'_i),\mathcal{O}_Y(C'_j)[p])$. We calculate \[ \begin{split} &H^p(Y,E_1-E_2) = H^p(Y,-H+E_1) = H^p(Y,-H+E_2) = H^p(Y,-2H+E_1+E_2) \\ &=H^p(Y,-2H+2E_2) = H^p(Y,-3H+2E_1+E_2) = H^p(Y,-3H+E_1+2E_2) = 0 \end{split} \] for all $p$. For example, we have an exact sequence \[ \dots \to H^p(Y,E_1-E_2) \to H^p(Y,E_1) \to H^p(E_2,\mathcal{O}_{E_2}) \to \dots. \] Since $H^p(Y,E_1) \cong H^p(Y,\mathcal{O}_Y)$, we have $H^p(Y,E_1-E_2)=0$ for all $p$. We have $H^p(Y,-3H+E_1+2E_2) = H^p(Y,-3H) = 0$ for all $p$, etc. 
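Similarly, one can use the projection formula: since $g_*\mathcal{O}_Y(E_1+E_2) \cong \mathcal{O}_{\mathbf{P}^3}$ and $R^pg_*\mathcal{O}_Y(E_1+E_2) = 0$ for $p > 0$, we have \[ H^p(Y,-2H+E_1+E_2) \cong H^p(\mathbf{P}^3,\mathcal{O}(-2H)) = 0 \] for all $p$.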
We also calculate \[ \begin{split} &H^p(Y,-E_1+E_2) = H^p(Y,H-E_1) = H^p(Y,H-E_2) = H^p(Y,2H-E_1-E_2) \\ &= H^p(Y,2H-2E_2) = H^p(Y,3H-2E_1-E_2) = H^p(Y,3H-E_1-2E_2) = 0 \end{split} \] for $p > 0$. For example, we have an exact sequence \[ \dots \to H^p(Y,H-E_1) \to H^p(Y,H) \to H^p(E_1,\mathcal{O}_{E_1}) \to \dots. \] Since $H^0(Y,H) \to H^0(E_1,\mathcal{O}_{E_1})$ is surjective and $H^p(Y,H)=0$ for $p > 0$, we have $H^p(Y,H-E_1) = 0$ for $p > 0$, etc. \vskip 1pc (2) We prove that the $\mathcal{O}(C_i)$ belong to $F^{\perp}$. By the Grothendieck duality, we have \[ \text{Hom}_X(L_i,\mathcal{O}(C_j)[p]) = \text{Hom}_X(Rf_*\mathcal{O}_Y(-D_i),\mathcal{O}_X(C_j)[p]) \cong \text{Hom}_Y(\mathcal{O}_Y(-D_i),\mathcal{O}_Y(C'_j)[p]) \] because $f^!\mathcal{O}_X(C_j) \cong \mathcal{O}_Y(C'_j)$. Therefore we will prove that $H^p(Y,C'_j-H+E_1+E_2) = H^p(Y,C'_j - E_1) = 0$ for all $j$ and $p$. Since $K_Y = -4H+2E_1+2E_2$, we have \[ H^p(Y,-4H+3E_1+2E_2) = H^{3-p}(Y,-E_1)^* = 0 \] for all $p$. We also have \[ H^p(Y,-3H+E_1+E_2) \cong H^p(\mathbf{P}^3,-3H) = 0 \] for all $p$. Therefore we have $\mathcal{O}(C_1) \in F^{\perp}$. We have \[ \begin{split} &H^p(Y,-4H+2E_1+3E_2) = H^{3-p}(Y,-E_2)^* = 0 \\ &H^p(Y,-3H+2E_2) \cong H^p(\mathbf{P}^3,-3H) = 0 \end{split} \] for all $p$, hence we have $\mathcal{O}(C_2) \in F^{\perp}$. We have \[ \begin{split} &H^p(Y,\mathcal{O}(-3H+2E_1+2E_2)) \cong H^p(\mathbf{P}^3,\mathcal{O}(-3H)) = 0 \\ &H^p(Y,\mathcal{O}(-2H+E_2)) \cong H^p(\mathbf{P}^3,\mathcal{O}(-2H)) = 0 \\ &H^p(Y,\mathcal{O}(-2H+2E_1+E_2)) \cong H^p(\mathbf{P}^3,\mathcal{O}(-2H)) = 0 \\ &H^p(Y,\mathcal{O}(-H)) = 0 \\ &H^p(Y,\mathcal{O}(-H+E_1+E_2)) \cong H^p(\mathbf{P}^3,\mathcal{O}(-H)) = 0 \\ &H^p(Y,\mathcal{O}(-E_1)) = 0 \end{split} \] for all $p$, hence we have $\mathcal{O}_X(C_i) \in F^{\perp}$ for $i=3,4,5$. \vskip 1pc (3) We prove that the $\mathcal{O}_X(C_i)$ and the $L_j$ generate $D^b(\text{coh}(X))$. Then it follows that the $\mathcal{O}_X(C_i)$ generate $F^{\perp}$. 
We denote by $C$ the triangulated subcategory of $D^b(\text{coh}(X))$ generated by the $\mathcal{O}_X(C_i)$ and the $L_j$. The linear system $\vert H - E_1 - E_2 \vert$ is a pencil, and its base locus is nothing but $l$. The image of the natural homomorphism $\mathcal{O}_Y^2 \to \mathcal{O}_Y(H - E_1 - E_2)$ is equal to $I_l(H - E_1 - E_2)$, where $I_l$ is the ideal sheaf of $l$. Hence we have an exact sequence \[ 0 \to \mathcal{O}_Y(-H+E_1+E_2) \to \mathcal{O}_Y^2 \to \mathcal{O}_Y(H - E_1 - E_2) \to \mathcal{O}_l(-1) \to 0. \] By pushing down to $X$, we obtain an exact sequence \[ 0 \to f_*\mathcal{O}_Y(-H+E_1+E_2) \to \mathcal{O}_X^2 \to L_1 \to 0. \] Therefore $C$ coincides with the triangulated subcategory generated by the $\mathcal{O}_X(C_j)$, the $L_j$ and $f_*\mathcal{O}_Y(-H+E_1+E_2)$. We have exact sequences \[ \begin{split} &0 \to \mathcal{O}_X \to f_*\mathcal{O}_Y(E_1) \to f_*\mathcal{O}_{E_1}(E_1) \to 0 \\ &0 \to f_*\mathcal{O}_Y(-H+E_2) \to f_*\mathcal{O}_Y(-H+E_1+E_2) \to f_*\mathcal{O}_{E_1}(E_1) \to 0. \end{split} \] Thus $f_*\mathcal{O}_Y(-H+E_2)$ can also be included in the set of generators of $C$. We note that $(-H+E_2,l) = 0$, hence $f_*\mathcal{O}_Y(-H+E_2)$ is an invertible sheaf on $X$. We need some lemmas: \begin{Lem}\label{classical} $D^b(\text{coh}(Y))$ is classically generated by the following full exceptional collection: \[ \begin{split} D^b(\text{coh}(Y)) = \langle &\mathcal{O}_Y(-3H+2E_1+E_2), \mathcal{O}_Y(-3H+E_1+2E_2), \mathcal{O}_Y(-2H+E_1+E_2), \\ &\mathcal{O}_Y(-H+E_1), \mathcal{O}_Y(-H+E_2), \mathcal{O}_Y(-H+E_1+E_2), \mathcal{O}_Y, \mathcal{O}_Y(H-E_1-E_2) \rangle. \end{split} \] \end{Lem} We note that the images by $Rf_*$ of these exceptional objects on $Y$ are exactly the objects considered above as the generators of $C$. \begin{proof} We first prove that these objects constitute an exceptional collection. Since they are all line bundles, they are exceptional objects. We check their semi-orthogonality. 
We have $H^p(Y,E_1-E_2) = H^p(Y,-H+E_i) = H^p(Y,-2H+E_1+E_2)=H^p(Y,-2H+2E_i)=0$ for all $p$ and all $i$, hence the first 5 are semi-orthogonal. We have $H^p(Y,-2H+2E_1+2E_2) = H^p(-H+E_1+E_2) = 0$ for all $p$, hence the latter 3 are also semi-orthogonal. We have $H^p(Y,-2H+E_i)=H^p(Y,-H)=H^p(Y,-E_i)=0$ and $H^p(Y,-3H+2E_i+E_j)=H^p(Y,-2H+2E_i+E_j)=0$ for all $p$ and $i \ne j$. By the Serre duality, $H^p(Y,-4H+3E_i+2E_j)$ is dual to $H^{3-p}(Y,-E_i)=0$ for $i \ne j$. Hence the first 5 and the latter 3 are semi-orthogonal, and these 8 objects make an exceptional collection. We prove that these objects classically generate $D^b(\text{coh}(Y))$. Let $T$ be the full triangulated subcategory of $D^b(\text{coh}(Y))$ classically generated by the above exceptional collection. By the exact sequences \[ \begin{split} &0 \to \mathcal{O}_Y(-H+E_i) \to \mathcal{O}_Y(-H+E_1+E_2) \to \mathcal{O}_{E_j}(E_j) \to 0 \\ &0 \to \mathcal{O}_Y(-H) \to \mathcal{O}_Y(-H+E_i) \to \mathcal{O}_{E_i}(E_i) \to 0 \\ &0 \to \mathcal{O}_Y(-2H) \to \mathcal{O}_Y(-2H+E_1+E_2) \to \mathcal{O}_{E_1}(E_1) \oplus \mathcal{O}_{E_2}(E_2) \to 0 \end{split} \] for $i \ne j$, we deduce that the objects $\mathcal{O}_{E_i}(E_i)$ for $i=1,2$, $\mathcal{O}_Y(-H)$ and $\mathcal{O}_Y(-2H)$ can be included in the set of classical generators of $T$. From the exact sequences \[ \begin{split} &0 \to \mathcal{O}(-H+E_1+E_2) \to \mathcal{O}^2 \to \mathcal{O}(H - E_1 - E_2) \to \mathcal{O}_l(-1) \to 0 \\ &0 \to \mathcal{O}(-3H+2E_1+2E_2) \to \mathcal{O}(-2H+E_1+E_2)^2 \to \mathcal{O}(-H) \to \mathcal{O}_l(-1)\to 0 \end{split} \] we deduce that $\mathcal{O}_Y(-3H+2E_1+2E_2)$ can also be included. From an exact sequence \[ 0 \to \mathcal{O}_Y(-3H+E_i+2E_j) \to \mathcal{O}_Y(-3H+2E_1+2E_2) \to \mathcal{O}_{E_i}(2E_i) \to 0 \] for $i \ne j$, we deduce that $\mathcal{O}_{E_i}(2E_i)$ can be included for $i = 1,2$. 
From \[ 0 \to \mathcal{O}_Y(-3H) \to \mathcal{O}_Y(-3H+2E_1+2E_2) \to \mathcal{O}_{2E_1}(2E_1) \oplus \mathcal{O}_{2E_2}(2E_2) \to 0 \] we deduce that $\mathcal{O}_Y(-3H)$ can be included. Therefore $T = D^b(\text{coh}(Y))$. \end{proof} The following lemma says that the direct image functor $Rf_*$ for a birational morphism $f$ is almost surjective for the bounded derived categories of coherent sheaves if $Rf_*$ preserves the structure sheaves: \begin{Lem} Let $f: Y \to X$ be a birational morphism of projective varieties. Assume that $Rf_*\mathcal{O}_Y \cong \mathcal{O}_X$. Then the Karoubian envelope of the image of the functor $Rf_*: D^b(\text{coh}(Y)) \to D^b(\text{coh}(X))$ coincides with $D^b(\text{coh}(X))$. \end{Lem} \begin{proof} By the assumption, we have $Rf_*Lf^* \cong \text{Id}$ on $D(\text{Qcoh}(X))$. Let $A \in D^b(\text{coh}(X))$ be any object. Then we have $Lf^*A \in D^-(\text{coh}(Y))$. We take a large integer $m$ and consider a natural distinguished triangle for truncations: \[ \begin{CD} \tau_{<-m}Lf^*A @>h>> Lf^*A @>>> \tau_{\ge -m}Lf^*A @>>> (\tau_{<-m}Lf^*A)[1]. \end{CD} \] We have a morphism $Rf_*(h): Rf_*(\tau_{<-m}Lf^*A) \to Rf_*Lf^*A \cong A$. Since $Rf_*$ is bounded and $A$ is bounded, $Rf_*(h) = 0$ for sufficiently large $m$. It follows that $A$ is a direct summand of $Rf_*(\tau_{\ge -m}Lf^*A)$ in $D(\text{Qcoh}(X))$ because $D(\text{Qcoh}(X))$ is Karoubian by \cite{BN}: \[ Rf_*(\tau_{\ge -m}Lf^*A) \cong Rf_*(\tau_{<-m}Lf^*A)[1] \oplus A. \] Since $Rf_*(\tau_{\ge -m}Lf^*A) \in D^b(\text{coh}(X))$, we conclude that $A$ belongs to the Karoubian completion of the image of $Rf_*$. \end{proof} Let $G$ be the set of exceptional objects which classically generates $D^b(\text{coh}(Y))$ in Lemma~\ref{classical}. We will prove that $Rf_*G$ generates $D^b(\text{coh}(X))$. Let $A \in D^b(\text{coh}(X))$ be any object. We need to prove that, if $\text{Hom}(Rf_*G,A[p]) = 0$ for all $p$, then $A \cong 0$. 
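Indeed, the full subcategory \[ \{ B \in D^b(\text{coh}(Y)) \mid \text{Hom}(Rf_*B,A[p]) = 0 \,\, \text{for all } p \} \] is a thick triangulated subcategory of $D^b(\text{coh}(Y))$, because $Rf_*$ sends distinguished triangles to distinguished triangles and commutes with shifts and direct summands.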
We know that $A$ is a direct summand of $Rf_*B$ for some object $B \in D^b(\text{coh}(Y))$. Since $G$ classically generates $D^b(\text{coh}(Y))$, we deduce that $\text{Hom}(Rf_*B,A[p]) = 0$ for all $B$ and all $p$. Then it follows that $A \cong 0$. \end{proof} \begin{Rem} It is interesting to calculate the derived categories of varieties which are obtained by blowing up $\mathbf{P}^3$ at more than $2$ points. In particular, if we blow up $6$ or more points, then the blown-up varieties have moduli. It would also be interesting to determine the semi-orthogonal complement of the trivial factors in this case. \end{Rem} \subsection{Locally factorial case} We consider an example of a $3$-fold with a $\mathbf{Q}$-factorial ordinary double point. We will see that similar arguments to the non-$\mathbf{Q}$-factorial case do not work because NC deformations do not terminate. We start with an example of a singular curve: \begin{Expl} Let $X$ be a nodal cubic curve defined by an equation $(x^2+y^2)z+y^3=0$ in $\mathbf{P}^2$. Let $P \in X$ be the singular point and let $\nu: X' \to X$ be the normalization. Then $X' \cong \mathbf{P}^1$ and $H^1(\mathcal{O}_X) = k$. We consider non-commutative deformations of a Cohen-Macaulay sheaf $L = \nu_*\mathcal{O}_{X'}$ which generates $D_{\text{sg}}(X)$. From an exact sequence \[ 0 \to \mathcal{O}_X \to L \to \mathcal{O}_P \to 0, \] we consider an associated long exact sequence to obtain \[ \begin{split} &\mathcal{H}om_{\mathcal{O}_X}(L,L) \cong \mathcal{H}om_{\mathcal{O}_X}(\mathcal{O}_X,L) \cong L \\ &\mathcal{E}xt^p_{\mathcal{O}_X}(\mathcal{O}_P,L) \cong \mathcal{E}xt^p_{\mathcal{O}_X}(L,L), \,\, p > 0. \end{split} \] Since $X$ is Gorenstein, we have \[ \mathcal{E}xt^p_{\mathcal{O}_X}(\mathcal{O}_P,\mathcal{O}_X) \cong \begin{cases} \mathcal{O}_P, \,\,&p = 1\\ 0, &p \ne 1. 
\end{cases} \] Therefore from another associated long exact sequence, we obtain \[ \mathcal{E}xt^p_{\mathcal{O}_X}(\mathcal{O}_P,L) \cong \mathcal{E}xt^p_{\mathcal{O}_X}(\mathcal{O}_P,\mathcal{O}_P) \] for all $p > 0$. In particular we have $\mathcal{E}xt^1_{\mathcal{O}_X}(L,L) \cong \mathcal{E}xt^1_{\mathcal{O}_X}(\mathcal{O}_P,\mathcal{O}_P) \cong k^2$. The versal NC deformation of $\mathcal{O}_P$ on $X$ is given by the completion of $X$ at $P$. Its parameter algebra is given by $k[[x,y]]/(xy)$. The versal NC deformation of $L$ is given by an infinite chain of smooth rational curves. It is the inverse limit of the sheaves $L_{i,j}$ for $i,j \to \infty$ defined in the following way. $L_{i,j}$ is the direct image sheaf of an invertible sheaf $L'_{i,j}$ on a chain of smooth rational curves of type $A_{i+j+1}$, where the degree of $L'_{i,j}$ on the $m$-th component is equal to $0$ for $m = i + 1$, and to $1$ otherwise. The parameter algebra for $L_{i,j}$ is given by $k[x,y]/(xy,x^{i+1},y^{j+1})$. In particular, NC deformations of $L$ do not stop after finitely many steps. \end{Expl} We consider a locally factorial surface: \begin{Expl}\label{K3} Let $\pi: Y_2 \to \mathbf{P}^2$ be a double cover whose ramification divisor is a generic curve of degree $6$ with one node at $Q \in \mathbf{P}^2$. Then $P = \pi^{-1}(Q) \in Y_2$ is the only singular point of $Y_2$. We claim that $P \in Y_2$ is a factorial ordinary double point. This construction and the following argument were communicated to us by Keiji Oguiso. Let $Y' \to Y_2$ be a minimal resolution with an exceptional divisor $E$. Then $Y'$ is a K3 surface, and the Neron-Severi lattice is given by $NS(Y') = \mathbf{Z}H \oplus \mathbf{Z}E$, where $H$ is the pull-back of a line on $\mathbf{P}^2$, due to the genericity of the ramification divisor. We have $(H^2) = 2$, $(E^2) = -2$, and $(H,E) = 0$, hence $P \in Y_2$ is factorial. 
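Indeed, since $Y'$ is a K3 surface, we have $\text{Pic}(Y') \cong NS(Y') = \mathbf{Z}H \oplus \mathbf{Z}E$, and \[ \text{Cl}(Y_2) \cong \text{Pic}(Y')/\mathbf{Z}E \cong \mathbf{Z}H. \] Since $H$ is the pull-back of a line, it is a Cartier divisor on $Y_2$; hence every Weil divisor on $Y_2$ is Cartier.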
We note that an ordinary double point on a rational surface, say $S$, is always non-factorial (though $2$-factorial). Indeed the Neron-Severi lattice of its resolution $S' \to S$ is always unimodular since $H^2(\mathcal{O}_{S'}) = 0$. Hence there is a curve on $S'$ whose intersection number with the exceptional curve is odd, and its image on $S$ is not a Cartier divisor. Let $l$ be a generic line in $\mathbf{P}^2$ through $Q$, and let $C = \pi^{-1}(l)$. Then $C$ is an irreducible curve of genus $1$ with a node. Let $\nu: C' \to C$ be the normalization, and let $L_{C'}$ be an invertible sheaf on $C'$ of degree $2$. Then there is a surjective homomorphism $\mathcal{O}_C^{\oplus 2} \to \nu_*L_{C'}$. Let $L$ be the kernel of the composite homomorphism $\mathcal{O}_{Y_2}^{\oplus 2} \to \mathcal{O}_C^{\oplus 2} \to \nu_*L_{C'}$: \[ 0 \to L \to \mathcal{O}_{Y_2}^{\oplus 2} \to \nu_*L_{C'} \to 0. \] $L$ is a reflexive sheaf of rank $2$ which is locally free except at $P$. We obtain $H^p(L) = 0$ for all $p$ from a long exact sequence. $C$ has two analytic branches at $P$, and $L$ is analytically isomorphic to a direct sum of reflexive sheaves of rank $1$ near $P$. We have exact sequences \[ \begin{split} &0 \to L^{\oplus 2} \to \mathcal{H}om(L,L) \to \mathcal{E}xt^1(\nu_*L_{C'},L) \to 0 \\ &0 \to \mathcal{H}om(\nu_*L_{C'},\nu_*L_{C'}) \to \mathcal{E}xt^1(\nu_*L_{C'},L) \to \mathcal{E}xt^1(\nu_*L_{C'},\mathcal{O}_{Y_2}^{\oplus 2}). \end{split} \] Since $Y_2$ is Gorenstein with $\omega_{Y_2} \cong \mathcal{O}_{Y_2}$, we have by the Grothendieck duality \[ \mathcal{E}xt^1(\nu_*L_{C'},\mathcal{O}_{Y_2}) \cong \nu_*L_{C'}^{-1}. \] Since $H^0(\nu_*L_{C'}^{-1}) = 0$, we have \[ \text{Hom}(L,L) \cong H^0(\mathcal{E}xt^1(\nu_*L_{C'},L)) \cong \text{Hom}(\nu_*L_{C'},\nu_*L_{C'}) \cong k. \] Thus $L$ is a simple sheaf on $Y_2$. The NC deformations of $L$ do not stop after finitely many steps. Indeed the two analytic components extend in an infinite chain as in the previous example. 
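In analogy with the previous example, one expects (though we do not verify it here) that the parameter algebras of the NC deformations after finitely many steps are of the form \[ k[x,y]/(xy,x^{i+1},y^{j+1}). \]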
\end{Expl} Now we consider a $3$-fold with a $\mathbf{Q}$-factorial, hence factorial, ordinary double point: \begin{Expl} Let $X_0$ be a cubic $3$-fold in $\mathbf{P}^4$ with coordinates $(x,y,z,w,t)$ defined by an equation \[ x^3+3xy^2+w^3-3t(xy+z^2+w^2) = 0. \] The singular locus of $X_0$ consists of two points $P = (0,0,0,0,1)$ and $P' = (0,-1,0,0,1)$, which are ordinary double points. Let $g: X \to X_0$ be the blowing up at $P'$ with the exceptional divisor $E \cong \mathbf{P}^1 \times \mathbf{P}^1$. The local equation of $X$ at $P$ can be written as \[ x(x^2+3y^2+y) + z^2 + w^2 + w^3 = 0. \] Hence $X$ is a projective variety with one $\mathbf{Q}$-factorial ODP. Let $D_0$ be a prime divisor on $X_0$ defined by $x = 0$. Then $D_0$ has an equation \[ w^3-3t(z^2+w^2) = 0 \] in $\mathbf{P}^3$ with coordinates $(y,z,w,t)$. $D_0$ is a cone over a nodal cubic curve. The vertex of the cone is at $Q = (0,1,0,0,0)$. The singular locus of $D_0$ is a line defined by $z = w = 0$. Let $\nu_0: D'_0 \to D_0$ be the normalization. $D'_0$ is the cone over a normal rational curve of degree $3$. $g$ induces a blowing up $g_D: D \to D_0$ at $P'$. The exceptional locus of $g_D$ consists of two lines $m_1, m_2$ on $\mathbf{P}^1 \times \mathbf{P}^1$. The singular locus of $D$ is a smooth rational curve which is the strict transform of the line $\{z = w = 0\}$. Let $\nu: D' \to D$ be the normalization. Let $l$ be a generic line on $D'$ through the vertex. Then there is a surjective homomorphism $\mathcal{O}_D^{\oplus 2} \to \nu_*\mathcal{O}_{D'}(l)$. Let $L$ be the kernel of the composition $\mathcal{O}_X^{\oplus 2} \to \mathcal{O}_D^{\oplus 2} \to \nu_*\mathcal{O}_{D'}(l)$. Thus we have an exact sequence \[ 0 \to L \to \mathcal{O}_X^{\oplus 2} \to \nu_*\mathcal{O}_{D'}(l) \to 0.
\] There is an exact sequence \[ \dots \to H^p_P(L) \to H^p_P(\mathcal{O}_X^{\oplus 2}) \to H^p_P(\nu_*\mathcal{O}_{D'}(l)) \to \dots \] Since $\mathcal{O}_X$ and $\nu_*\mathcal{O}_{D'}(l)$ have depth $3$ and $2$, respectively, $L$ is a maximal Cohen-Macaulay sheaf of rank $2$ on $X$ which is locally free except at $P$. $L$ generates $D_{\text{sg}}(X)$. We will prove that $L$ is a simple sheaf, i.e., $\text{End}(L) \cong k$. We have $\dim H^0(\mathcal{O}_X) = 1$, $H^p(\mathcal{O}_X) = 0$ for $p > 0$, $\dim H^0(\mathcal{O}_{D'}(l)) = 2$ and $H^p(\mathcal{O}_{D'}(l)) = 0$ for $p > 0$. Therefore we have $H^p(L) = 0$ for all $p$. We have an exact sequence \[ 0 \to L^{\oplus 2} \to \mathcal{H}om(L,L) \to \mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),L) \to 0 \] and $\mathcal{E}xt^p(L,L) \cong \mathcal{E}xt^{p+1}(\nu_*\mathcal{O}_{D'}(l),L)$ for $p > 0$. Since $X$ is Gorenstein, we have \[ \mathcal{E}xt^p(\nu_*\mathcal{O}_{D'}(l),\mathcal{O}_X) \cong \begin{cases} \nu_*\omega_{D'/X}(-l), \,\, &p = 1 \\ 0, \,\, &p \ne 1 \end{cases} \] by the Grothendieck duality. From a long exact sequence, we deduce \[ \begin{split} &0 \to \mathcal{H}om(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l)) \to \mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),L) \to \nu_*\omega_{D'/X}(-l)^{\oplus 2} \\ &\to \mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l)) \to \mathcal{E}xt^2(\nu_*\mathcal{O}_{D'}(l),L) \to 0 \end{split} \] and $\mathcal{E}xt^p(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l)) \cong \mathcal{E}xt^{p+1}(\nu_*\mathcal{O}_{D'}(l),L)$ for $p > 1$. Since $D \sim \mathcal{O}_X(1) - E$ on $X$, we calculate \[ \omega_{D'/X}(-l) \cong \mathcal{O}_{D'}(3l - (m_1+m_2) - (l-m_1) - (l-m_2) - l) \cong \mathcal{O}_{D'}. \] Thus $H^0(\nu_*\omega_{D'/X}(-l)^{\oplus 2}) \cong k^2$. $L$ is a locally free sheaf outside $P$, and $\nu_*\mathcal{O}_{D'}(l)$ is an invertible sheaf on the smooth locus of a Cartier divisor $D$.
Moreover $\nu_*\mathcal{O}_{D'}(l)$ is analytically isomorphic to the sum of invertible sheaves on two Cartier divisors along the double locus of $D$ except at $P$ and $Q$. Therefore $\mathcal{E}xt^2(\nu_*\mathcal{O}_{D'}(l),L)$ is supported at $\{P,Q\}$. On the other hand, $\mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l))$ is an invertible sheaf on the smooth locus of $D$ and has higher rank along the double locus of $D$. It follows that the homomorphism \[ H^0(\nu_*\omega_{D'/X}(-l)^{\oplus 2}) \to H^0(\mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l))) \] is injective. Therefore \[ H^0(\mathcal{E}xt^1(\nu_*\mathcal{O}_{D'}(l),L)) \cong H^0(\mathcal{H}om(\nu_*\mathcal{O}_{D'}(l),\nu_*\mathcal{O}_{D'}(l))) \cong k \] hence $L$ is a simple sheaf. We consider NC deformations of $L$. The successive extensions of $L$ become an infinite chain as in the previous examples, so that they do not stop after finitely many steps. Thus we do not obtain an SOD, unlike the case of a non-$\mathbf{Q}$-factorial ODP. We note that there is an extension $F$ of $L$ by $L$ which is locally free. But $F$ still has an extension by $L$, and the successive extensions by $L$ do not stop. \end{Expl} \section{Appendix: Correction to \cite{NC}} We correct an error in \cite{NC}. In Example 5.7 of \cite{NC}, it is claimed that the collection $(\mathcal{O}_X(-d), G, \mathcal{O}_X)$ yields a semi-orthogonal decomposition of $D^b(\text{coh}(X))$. But this is not the case, because the semi-orthogonality fails: $R\text{Hom}(G,\mathcal{O}_X(-d)) \ne 0$. The correct collection is $(G(-d), \mathcal{O}_X(-d), \mathcal{O}_X)$. Then we have $D^b(\text{coh}(X)) \cong \langle D^b(R), D^b(k), D^b(k) \rangle$. The only thing left to be proved is the semi-orthogonality, and indeed we have $R\text{Hom}(\mathcal{O}_X(-d),G(-d)) \cong R\Gamma(X,G) = 0$. More information can be found in \cite{Kuznetsov} and \cite{KKS}.
\section{Introduction} \label{s1} The abundance and nature of dark matter in the halo of our Galaxy is rapidly making the transition from theoretical hypothesis to observational science. This has been facilitated by the deep surveys that are now achievable with instruments such as the Hubble Space Telescope (HST) and by several gravitational microlensing searches that are currently in progress. The 2nd-year results from the MACHO microlensing experiment (Alcock et al. \cite{alc97}) towards the Large Magellanic Cloud (LMC), a direction which is sensitive to lenses residing in the dark halo, indicate that a substantial fraction of the halo ($20-100\%$) comprises objects with a typical mass in the range $0.1-1~\mbox{M}_{\sun}$.\footnote{This conclusion can be avoided if one instead attributes the observations to lenses residing in a very massive disc, though to explain the MACHO results one requires a local disc column density in excess of that typically inferred from kinematical observations (Kuijken \& Gilmore \cite{kui89}; Bahcall et al. \cite{bah92}).} These results appear to be broadly supported by the provisional findings from 4 years of MACHO observations (Axelrod \cite{axe97}) which have uncovered at least 14 LMC microlensing candidates. Similar mass scales have also been implicated by the EROS~I microlensing experiment (Renault et al. \cite{ren97}), though with a somewhat lower inferred halo fraction. These results are consistent with the lenses being in the form of low-mass hydrogen-burning stars or white-dwarf remnants. However, both of these candidates appear unattractive when other observational and theoretical results are taken into consideration. The number density, age and mass function of white dwarfs in the Galaxy is strongly constrained by number counts of high-velocity dwarfs in the Solar neighbourhood, and by their helium production (Carr et al. \cite{carr84}; Ryu et al. \cite{ryu90}; Adams \& Laughlin \cite{adam96}; Chabrier et al. 
\cite{chab96}; Graff et al. \cite{graf97}). In particular, Chabrier et al. (\cite{chab96}) find that a halo fraction compatible with MACHO results requires that white dwarfs be older than 18 Gyr, though more recently Graff et al. (\cite{graf97}) have argued for a lower limit closer to 15.5 Gyr based upon reasonable white-dwarf model assumptions and a halo fraction of 30\%. The situation for low-mass stars appears at least as pessimistic with recent HST results indicating that a smoothly distributed population of low-mass stars can contribute no more than a few percent to the halo dark matter density, regardless of stellar metallicity (Bahcall et al. \cite{bah94}; Graff \& Freese \cite{graf96}; Flynn et al. \cite{fly96}; Kerins \cite{ker97}). It has been suggested (Kerins \cite{ker97}, hereafter Paper~I) that if low-mass stars are clumped into globular-cluster configurations then HST limits can be considerably weakened, since this introduces large fluctuations in number counts and also may prevent a significant fraction of sources within the cores of clusters from being resolved. Motivation for the cluster scenario comes from the predictions of some baryonic dark matter formation theories, which are discussed in Paper~I. However, such clusters are required to have masses and radii consistent with existing dynamical constraints on clusters and other massive objects residing in the halo. In Paper~I it was shown that agreement between HST counts, dynamical limits and the central value for the halo fraction inferred by MACHO (40\% for the halo model assumed) is possible if clusters have a mass around $4 \times 10^4~\mbox{M}_{\sun}$ and radius of a few parsecs. However, HST, MACHO and dynamical limits are all dependent upon the unknown halo distribution function, so these results are valid only for the spherically-symmetric, cored isothermal halo model adopted in Paper~I. 
In this paper the model dependency of such conclusions is investigated using the same set of reference halo models employed in the MACHO collaboration's analysis of its results. One of these models is similar, though not identical, to the model investigated in Paper~I, whilst the other models are constructed from the self-consistent set of `power-law' halo models presented by Evans (\cite{evan94}). All models are normalised to be consistent with observational constraints on the Galactic rotation curve and the local column density. New data from the Hubble Deep Field (Flynn et al. \cite{fly96}) and Groth Strip (Gould et al. \cite{gou97}) are also incorporated, as well as two other new fields analysed by Gould et al., extending the analysis from 20 HST fields in Paper~I to 51 in this study. \section{Halo models} In Paper~I constraints on the halo fraction in clustered and unclustered low-mass stars are derived assuming the stars have zero metallicity and that the halo density $\rho$ varies with Galactocentric cylindrical coordinates ($R,z$) as \begin{equation} \rho = \rho_0 \left( \frac{R_{\rm c}^2 + R_0^2}{R_{\rm c}^2 + R^2 + z^2} \right) \label{e1} \end{equation} where, in Paper~I, the local density $\rho_0 = 0.01~\mbox{M}_{\sun}$~pc$^{-3}$, the Solar Galactocentric distance $R_0 = 8$~kpc, and the halo core radius $R_{\rm c} = 5$~kpc. The assumption of zero metallicity is maintained in the present analysis since one expects the halo to be perhaps the oldest of the Galactic components, and hence its constituents to have more or less primordial metallicity. The expected absolute magnitude in various photometric bands for such stars between the hydrogen-burning limit mass ($0.092~\mbox{M}_{\sun}$) and $0.2~\mbox{M}_{\sun}$ has been calculated by Saumon et al. (\cite{sau94}) and their results are employed here as in Paper~I.
The model dependency of the conclusions in Paper~I is assessed by re-calculating the constraints for a number of different, but plausible, halo models. For ease of comparison the models selected are 5 of the reference halo models used by the MACHO collaboration (Alcock et al. \cite{alc96}) in its analysis. (MACHO considers a total of 8 Galactic models, though only 5 of the halo models have distinct functional forms.) All halo models assume $R_0 = 8.5$~kpc and $R_{\rm c} = 5$~kpc. The 5 models are denoted by MACHO as models A--D and S (for `standard'), and this labelling is maintained here. The standard model S has the same functional form as the halo investigated in Paper~I (i.e. it is described by Eq.~\ref{e1}) but uses the slightly larger IAU value for $R_0$ above and assumes a lower local density $\rho_0 = 0.0079~\mbox{M}_{\sun}$~pc$^{-3}$. Models A--D are drawn from the self-consistent family of power-law models (Evans \cite{evan94}), having density profiles \begin{eqnarray} \rho & \!\! = \!\! & \frac{v_{\rm a}^2 R_{\rm c}^\beta}{4 \pi G q^2} \frac{R_{\rm c}^2(1 + 2q^2) + R^2(1 - \beta q^2) + z^2[2 - q^{-2}(1+\beta)]}{(R_{\rm c}^2 + R^2 + z^2 q^{-2})^{(\beta+4)/2}}, \nonumber \\ & & \label{e2} \end{eqnarray} where $v_{\rm a}$ is the velocity normalisation, $q$ describes the flattening of equipotentials, $\beta$ determines the power-law slope of the density profile at large radii, and $\pi$ and $G$ have their usual meanings. For a flat rotation curve at large radii $\beta = 0$, whereas for a rising curve $\beta < 0$ and for a falling one $\beta > 0$. \setcounter{table}{0} \begin{table} \caption{Parameter values for the 5 MACHO reference halo models A--D and S (Alcock et al. \cite{alc96}). $R_0 = 8.5$~kpc and $R_{\rm c} = 5$~kpc is assumed for all models. For models A--D the local density $\rho_0$ is derived from the parameters in columns 2--4.
The local rotation speed $v_0$ is computed from the combined halo and disc mass within $R_0$.} \label{t1} \begin{tabular}{cccccc} \hline\noalign{\smallskip} Model & $v_{\rm a}$/km~s$^{-1}$ & $q$ & $\beta$ & $\rho_0/\mbox{M}_{\sun}$~pc$^{-3}$ & $v_0$/km~s$^{-1}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} A & 200 & 1 & 0.0 & 0.0115 & 224 \\ B & 200 & 1 & -0.2 & 0.0145 & 233 \\ C & 180 & 1 & 0.2 & 0.0073 & 203 \\ D & 200 & 0.71 & 0.0 & 0.0190 & 224 \\ S & -- & -- & -- & 0.0079 & 192 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} The particular parameters for models A--D, along with those of model S are listed in Table~\ref{t1}. Model A is the closest analogy to model S within the power-law family of models, whilst model B has a rising rotation curve at large radii, model C a falling rotation curve, and model D a flattening equivalent to an E6 halo. When combined with the MACHO canonical Galactic disc (Alcock et al. \cite{alc96}), the models give values for the local Galactic rotation speed $v_0$ within 15\% of the IAU standard value of 220~km~s$^{-1}$ and have rotation curves that are consistent with observations. \section{HST observations and halo fraction constraints} Gould et al. (\cite{gou97}) have calculated the disc luminosity function for M-dwarf stars using data from several HST WFC2 fields. These include 22 fields originally analysed by Gould et al. (\cite{gou96}), along with the Hubble Deep Field, 28 overlapping fields comprising the Groth Strip, and 2 other new fields: a total of 53 WFC2 fields. In Paper~I, 20 of the original 22 fields are analysed, the other 2 fields being omitted due to statistical problems introduced by their close proximity to some of the other fields (namely that clusters appearing in these fields could also appear in the other fields and thus be double counted). In this study these 20 fields are combined with the new fields analysed by Gould et al. (\cite{gou97}), making the total number of fields 51. 
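As a consistency check on the halo models of Sect.~2, the local densities $\rho_0$ quoted in Table~\ref{t1} can be reproduced by evaluating Eq.~(\ref{e2}) at the Solar position. The short numerical sketch below is not part of the original analysis; it assumes the Evans (\cite{evan94}) normalisation $4\pi G q^2$ and the value $G = 4.301 \times 10^{-3}$~pc~(km~s$^{-1}$)$^2~\mbox{M}_{\sun}^{-1}$.

```python
from math import pi

# Consistency sketch (not from the paper): reproduce the local densities
# rho_0 of Table 1 by evaluating the Evans (1994) power-law halo density
# at the Solar position (R = R_0, z = 0).
G = 4.30091e-3            # gravitational constant in pc (km/s)^2 / M_sun
R_c, R_0 = 5000.0, 8500.0 # core radius and Solar distance in pc

def rho_power_law(R, z, v_a, q, beta):
    """Density of Eq. (2) in M_sun/pc^3 (v_a in km/s, lengths in pc)."""
    num = (R_c**2 * (1 + 2 * q**2) + R**2 * (1 - beta * q**2)
           + z**2 * (2 - (1 + beta) / q**2))
    den = (R_c**2 + R**2 + (z / q)**2) ** ((beta + 4) / 2)
    return v_a**2 * R_c**beta / (4 * pi * G * q**2) * num / den

models = {  # v_a [km/s], q, beta, and rho_0 as quoted in Table 1
    'A': (200.0, 1.00,  0.0, 0.0115),
    'B': (200.0, 1.00, -0.2, 0.0145),
    'C': (180.0, 1.00,  0.2, 0.0073),
    'D': (200.0, 0.71,  0.0, 0.0190),
}
for name, (v_a, q, beta, rho_tab) in models.items():
    rho = rho_power_law(R_0, 0.0, v_a, q, beta)
    print(f"model {name}: rho_0 = {rho:.4f} (Table 1: {rho_tab})")
```

All four values agree with Table~\ref{t1} to well under a percent, which is also why the $q^2$ normalisation (rather than $q$) must appear in Eq.~(\ref{e2}): with a single power of $q$ the flattened model D would give $0.0135$ instead of $0.0190~\mbox{M}_{\sun}$~pc$^{-3}$.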
The nearest-neighbour separation between these fields is sufficiently large that double counting is not expected to be a problem for clusters of interest. (The overlapping Groth Strip fields are treated as a single large field for the purpose of this study.) The limiting and saturation $I$-band magnitudes for the fields are listed in Table~1 of Gould et al. (\cite{gou97}). The Groth Strip is treated as a single field with an angular coverage of 25.98 WFC2 fields (this accounts for overlaps) and magnitude limits corresponding to the modal values listed in Gould et al. (\cite{gou97}). As in Paper~I, these limits are translated into star-mass dependent limiting distances by converting the line-of-sight extinction values listed in Burstein \& Heiles (\cite{bur84}) to $I$-band reddenings and using the photometric predictions of Saumon et al. (\cite{sau94}) for zero-metallicity low-mass stars. The predictions for the $V$ and $I$ bands are well fit by the colour--magnitude relation \begin{equation} I = -11.45 \, (V-I)^2 + 40.7 \, (V-I) - 24.5 \label{e2.5} \end{equation} for $1.27 \le V-I \le 1.57$ (corresponding to $0.2 \ge m/\mbox{M}_{\sun} \ge 0.092$). The analyses for the unclustered and clustered scenarios proceed as in Paper~I, except that the models listed in Table~\ref{t1} of this paper now replace the model used there. The calculations for the cluster scenario, which are described in detail in Paper~I, assume that the surface-brightness profiles of the clusters follow the King (\cite{king62}) surface-brightness law and take into account cluster resolvability, as well as line-of-sight overlap. \setcounter{table}{1} \begin{table} \caption{Constraints on unclustered zero-metallicity low-mass stars in the Galactic halo arising from the detection of 145 candidate stars within 51 HST WFC2 fields. 
The second and third columns give the expected number of detectable stars $N_{\rm exp}$ for a full halo ($f_{\rm h} = 1$) for stars with masses of $0.2~\mbox{M}_{\sun}$ and $0.092~\mbox{M}_{\sun}$ (the hydrogen-burning limit mass), respectively. The last two columns give the 95\% confidence upper limit on the maximum halo fraction $f_{\rm max}$.} \label{t2} \begin{tabular}{ccccc} \hline\noalign{\smallskip} & \multicolumn{2}{c}{$N_{\rm exp}$} & \multicolumn{2}{c}{$f_{\rm max}$} \\ \noalign{\smallskip}\cline{2-5}\noalign{\smallskip} Model & $0.2~\mbox{M}_{\sun}$ & $0.092~\mbox{M}_{\sun}$ & $0.2~\mbox{M}_{\sun}$ & $0.092~\mbox{M}_{\sun}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} A & 183\,000 & 24\,100 & $9 \times 10^{-4}$ & 0.007 \\ B & 248\,000 & 30\,800 & $6 \times 10^{-4}$ & 0.005 \\ C & 109\,000 & 15\,000 & 0.0015 & 0.011 \\ D & 162\,000 & 34\,300 & 0.0010 & 0.005 \\ S & 141\,000 & 17\,100 & 0.0012 & 0.010 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Table~\ref{t2} lists the results for the unclustered scenario. Within the 51 HST WFC2 fields analysed a total of 145 candidate stars with $1.2 \le V-I \le 1.7$ are found, implying a 95\% confidence level (CL) upper limit on the average number of 166 stars. This colour range spans the $V-I$ colour predictions of Saumon et al. (\cite{sau94}) for stars with masses in the interval $0.092-0.2~\mbox{M}_{\sun}$, where the lower value corresponds to the hydrogen-burning limit. Comparison with the expected number tabulated in Table~\ref{t2} clearly shows that, for all models, even the lowest mass unclustered stars fall well short of providing the halo dark matter density inferred by MACHO. The upper limit on their fractional contribution $f_{\rm max}$ is shown for 0.2-$\mbox{M}_{\sun}$ and 0.092-$\mbox{M}_{\sun}$ stars. For the lowest mass stars $f_{\rm max}$ ranges from 0.5\% for models B and D to 1.1\% for the lighter halo model C. 
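The $f_{\rm max}$ entries of Table~\ref{t2} follow from the one-sided 95\%~CL Poisson upper limit on the mean count given the 145 observed stars ($\simeq 166$), divided by the full-halo prediction $N_{\rm exp}$. The sketch below is our reconstruction of that arithmetic, not the original code:

```python
from math import exp, lgamma, log

# Reconstruction of the Table 2 bound: 145 candidate stars observed in the
# 51 fields gives a 95% CL one-sided Poisson upper limit of ~166 on the
# mean, and f_max = limit / N_exp for each model.

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu); terms built in log space."""
    return sum(exp(k * log(mu) - mu - lgamma(k + 1)) for k in range(n + 1))

def poisson_upper_limit(n, cl=0.95):
    """mu such that P(N <= n | mu) = 1 - cl, found by bisection."""
    lo, hi = float(n), 3.0 * n + 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid) > 1.0 - cl:
            lo = mid   # CDF too large: mu must increase
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_limit = poisson_upper_limit(145)   # ~166 stars at 95% CL
N_exp = {  # full-halo predictions from Table 2: (0.2 Msun, 0.092 Msun)
    'A': (183_000, 24_100), 'B': (248_000, 30_800), 'C': (109_000, 15_000),
    'D': (162_000, 34_300), 'S': (141_000, 17_100),
}
for name, (n02, n0092) in N_exp.items():
    print(f"model {name}: f_max = {n_limit / n02:.1e} (0.2 Msun), "
          f"{n_limit / n0092:.3f} (0.092 Msun)")
```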
One interesting feature of Table~\ref{t2} is that for the flattened halo model D the expected number counts are enhanced for 0.092-$\mbox{M}_{\sun}$ stars relative to the predictions for the spherically symmetric models, producing the highest predicted number-count for these stars. This contrasts with the results for the brighter 0.2-$\mbox{M}_{\sun}$ stars, with the heavy halo model B producing the highest number-count prediction. The enhancement for 0.092-$\mbox{M}_{\sun}$ stars in model D arises because the flattening preferentially increases the stellar surface density near the Galactic plane, and this is reflected in the counts of 0.092-$\mbox{M}_{\sun}$ stars which can be at most only a few kpc from the plane if they are to be detected. \begin{figure*} \picplace{17.0cm} \caption[]{Comparison of constraints on the halo fraction $f_{\rm h}$ from HST limits, MACHO observations and dynamical constraints for the 5 halo reference models (A--D, S), assuming halo stars have a mass of $0.092~\mbox{M}_{\sun}$ and all reside in clusters with mass $M$ and radius $R$. The lower plateau to the left of each plot corresponds to the 95\%~CL upper limit $f_{\rm max}$ for the unclustered scenario inferred from HST counts (see Table~\ref{t2}). The upper plateau on the right corresponds to the 95\%~CL lower limit halo fraction $f_{\rm M,low}$ inferred by MACHO 1st- and 2nd-year observations (Alcock et al. \cite{alc97}), with the central value for the MACHO halo fraction $f_{\rm M}$ indicated by the skirting surrounding the plots (see also Table~\ref{t3}). The curved surface joining the lower and upper flat regions corresponds to the 95\%~CL upper limit on the halo fraction in clusters from HST counts. Also projected onto the plane $f_{\rm h} = f_{\rm M,low}$ are the cluster dynamical constraints (dashed lines) for the local Solar neighbourhood. 
The intersection between these constraints and the MACHO lower-limit plateau indicates cluster parameters compatible with HST, MACHO and dynamical constraints.} \label{f1} \end{figure*} The constraints on the halo fraction $f_{\rm h}$ for the clustered scenario as a function of cluster mass $M$ and radius $R$ are shown in Fig.~\ref{f1} for the 5 models (A--D, S) assuming all stars reside in clusters and have the hydrogen-burning limit mass of $0.092~\mbox{M}_{\sun}$. Each plot is characterised by a lower plateau to the left, an upper plateau to the right and a curved rising surface joining the two. This curved surface between the two flat regions represents the 95\%~CL upper limit halo fraction in clusters inferred from the presence of only 145 candidate stars within the 51 HST WFC2 fields. The constraints are actually calculated on the basis of no stars being present within these fields, since for clusters there is little difference in the constraints assuming no stars are found or assuming a few hundred stars are found. The reason for this, as discussed in Paper~I, is that the clusters considered here contain between 1000 and $10^7$ members each, so the presence of just one cluster within any of the HST fields would typically result in thousands if not millions of candidates being detected. The lower plateau shows the 95\%~CL upper limit halo fraction for the {\em unclustered \/} scenario (corresponding to the $f_{\rm max}$ values listed in Table~\ref{t2}). Clusters with masses and radii within this region have internal densities which are lower than that of the halo background average and are thus unphysical, since they represent local under-densities rather than over-densities. Clearly constraints on clusters cannot be stronger than constraints on a smooth stellar distribution. The intersection of the lower plateau with the curved rising surface therefore denotes the boundary between unphysical and physical cluster parameters. 
\setcounter{figure}{0} \begin{figure} \picplace{8.5cm} \caption[]{{\em continued.}} \end{figure} The upper plateau to the right represents the 95\%~CL {\em lower limit\/} on the halo fraction $f_{\rm M,low}$ inferred from MACHO 1st- and 2nd-year microlensing results (Alcock et al. \cite{alc97}). It is calculated by taking the 95\%~CL lower limit on the measured microlensing optical depth for all 8 MACHO events ($\tau > 1.47 \times 10^{-7}$), subtracting the optical depth contribution expected from non-halo components [corresponding to $\tau_{\rm non-halo} \simeq 5 \times 10^{-8}$ (Alcock et al. \cite{alc96})], and normalising to the optical depth prediction $\tau_{\rm exp}$ for a full halo ($f_{\rm h} = 1$) for each model. The top of the skirting surrounding each plot is normalised to the {\em central\/} MACHO value for the halo fraction $f_{\rm M}$ for comparison, and is calculated in a similar manner to the lower limit ($f_{\rm M}$ and $f_{\rm M,low}$, together with $\tau_{\rm exp}$, are tabulated in Table~\ref{t3} for each model). Since this plateau lies below the extrapolation of the HST cluster-fraction constraint [which rises asymptotically over this region -- see Fig.~2 of Paper~I], it is consistent with both MACHO and HST limits. The dashed lines in the plots of Fig.~\ref{f1} represent the dynamical constraints derived for the local Solar neighbourhood. In fact some of the HST fields are somewhat closer in to the Galactic centre, where the dynamical constraints are stronger, but most are further away so the limits shown are stronger than applicable for most of the HST fields. The functional forms of the constraints are detailed in Paper~I and are dependent upon Galactic as well as cluster parameters [consult Lacey \& Ostriker (\cite{lac85}); Carr \& Lacey (\cite{carr87}); Moore (\cite{moo93}); Moore \& Silk (\cite{moo95}); Carr \& Sakellariadou (\cite{carr97}) for derivations, and see Carr (\cite{carr94}) for a detailed review of dynamical constraints].
Their variation from plot to plot is due to model variations in the local density and rotation speed (see Table~\ref{t1}). The dynamical constraints are projected onto the plane $f_{\rm h} = f_{\rm M,low}$ for direct comparison with the MACHO lower limits. The intersection of the MACHO lower-limit plateau with the dynamical limits therefore represents cluster parameters compatible with MACHO, dynamical limits, and the constraints from the 51 HST fields. For each model it is evident that the region compatible with all limits spans a significant range of masses and radii. For models C and D the maximum permitted cluster mass is around $5 \times 10^4~\mbox{M}_{\sun}$, whilst for models A, B and S one can have cluster masses in excess of $10^6~\mbox{M}_{\sun}$. Interestingly, whilst in the unclustered scenario the heavy halo model~B is the most strongly constrained in terms of allowed halo fraction $f_{\rm max}$, it nonetheless allows a relatively wide range of viable cluster masses in the clustered scenario. Conversely, the permitted cluster mass range for the light halo model~C is more restricted. This apparent paradox is due to the fact that the HST, dynamical and microlensing observations limit the halo density normalisation at different positions in the halo, so their intersection is sensitive to the halo density profile. In particular, the HST and dynamical limits essentially apply to the local Solar neighbourhood position ($R_0 = 8$~kpc) for clusters comprising relatively dim hydrogen-burning limit stars, whereas the microlensing observations towards the LMC constrain the density of lenses at somewhat larger distances (primarily between 10 and 30~kpc from the Galactic centre, where the product of lens number density and lensing cross-section is largest).
Hence, for a given microlensing constraint on the mass density of lenses at 10 to 30~kpc, the local dynamical and number-count constraints are weaker for haloes with rising rotation curves (such as model B) than for models with falling rotation curves (such as model C). The relatively large range in allowed cluster masses and radii for model~S is in apparent contrast to the results of Paper~I, in which the surviving parameter space is shown to be much smaller for the very similar model adopted there. There are two reasons for this apparent discrepancy: (1) in Fig.~\ref{f1} of this paper it is assumed that the clusters comprise hydrogen-burning limit stars, whereas in Fig.~3 of Paper~I the constraints are shown for the brighter 0.2-$\mbox{M}_{\sun}$ stars; (2) in this study consistency is being demanded only with the {\em lower\/} limit MACHO halo fraction $f_{\rm M,low}$, rather than with the central value $f_{\rm M}$ as in Paper~I. This latter difference is particularly important because it enlarges both the sizes of the dynamically-permitted region and the MACHO plateau, and hence enlarges their intersection. Since these differences serve to maximise the size of the surviving region, the constraints shown in this paper should be taken as {\em firm\/} limits on allowed cluster parameters. \section{Constraints on cluster membership} Figure~\ref{f1} assumes that all stars reside in clusters at the present day, an unrealistic assumption since one expects some fraction of the clusters to have evaporated away over time. As in Paper~I one can place limits on the fraction of stars $f_{\rm c}$ which must remain in clusters by using the strong limits $f_{\rm max}$ on the unclustered scenario (listed in Table~\ref{t2}). Assuming the lower limit on the cluster halo fraction to be given by the lower limit inferred by MACHO, $f_{\rm M,low}$, the present-day halo fraction in stars which have evaporated away from clusters is $f_{h,*} > (1-f_{\rm c})f_{\rm M,low}$.
Since HST observations demand $f_{h,*} \leq f_{\rm max}$ one has \begin{equation} f_{\rm c} > 1 - (f_{\rm max}/f_{\rm M,low}). \label{e3} \end{equation} The resulting values for $f_{\rm c}$ for 0.2-$\mbox{M}_{\sun}$ and 0.092-$\mbox{M}_{\sun}$ stars are given in Table~\ref{t3}. \setcounter{table}{2} \begin{table} \caption{Microlensing halo fractions and minimum clustering fractions for the reference halo models. Column~2 gives the expected optical depth for a full halo as calculated by Alcock et al. (\cite{alc96}). Column~3 gives the central value for the halo fraction using the 1st+2nd year optical depth estimate of $2.94 \times 10^{-7}$ measured by Alcock et al. (\cite{alc97}), and subtracting from it an optical depth of $5 \times 10^{-8}$ expected from non-halo populations. The 4th column gives the 95\%~CL lower limit on the halo fraction using the lower limit for the measured optical depth of $1.47 \times 10^{-7}$. The last two columns give the lower limit on the present-day clustering fraction using column~4, Eq.~\ref{e3} and Table~\ref{t2}.} \label{t3} \begin{tabular}{cccccc} \hline\noalign{\smallskip} & & & & \multicolumn{2}{c}{$f_{\rm c}$} \\ \noalign{\smallskip}\cline{5-6}\noalign{\smallskip} Model & $\tau_{\rm exp}/10^{-7}$ & $f_{\rm M}$ & $f_{\rm M,low}$ & $0.2~\mbox{M}_{\sun}$ & $0.092~\mbox{M}_{\sun}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} A & 5.6 & 0.43 & 0.17 & 0.995 & 0.96 \\ B & 8.1 & 0.30 & 0.12 & 0.995 & 0.96 \\ C & 3.0 & 0.81 & 0.32 & 0.995 & 0.97 \\ D & 6.0 & 0.41 & 0.16 & 0.994 & 0.97 \\ S & 4.7 & 0.52 & 0.21 & 0.994 & 0.95 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} From Table~\ref{t3} it is clear that all models require a very high fraction of all stars to reside in clusters at present. Even for hydrogen-burning limit stars the required clustering fraction must be at least 95\% at present. 
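The entries of Table~\ref{t3} are straightforward to reproduce from the quoted optical depths and Eq.~(\ref{e3}); the sketch below is a consistency check (our reconstruction, not the original code), with small differences from the tabulated values arising only from rounding of the intermediate quantities.

```python
# Reconstruction of Table 3 from the optical depths and Eq. (3).
# Optical depths are in units of 1e-7; the non-halo contribution of 0.5
# is subtracted before normalising to tau_exp for each model.
TAU_MEAS, TAU_LOW, TAU_NONHALO = 2.94, 1.47, 0.5
N_LIMIT = 166.0              # 95% CL upper limit on the HST star count

models = {  # tau_exp/1e-7 and N_exp(0.2 Msun), N_exp(0.092 Msun) (Table 2)
    'A': (5.6, 183_000, 24_100), 'B': (8.1, 248_000, 30_800),
    'C': (3.0, 109_000, 15_000), 'D': (6.0, 162_000, 34_300),
    'S': (4.7, 141_000, 17_100),
}
for name, (tau_exp, n02, n0092) in models.items():
    f_M     = (TAU_MEAS - TAU_NONHALO) / tau_exp  # central halo fraction
    f_M_low = (TAU_LOW  - TAU_NONHALO) / tau_exp  # 95% CL lower limit
    # Eq. (3): minimum present-day clustering fraction, f_c > 1 - f_max/f_M,low
    f_c_02   = 1.0 - (N_LIMIT / n02)   / f_M_low
    f_c_0092 = 1.0 - (N_LIMIT / n0092) / f_M_low
    print(f"{name}: f_M = {f_M:.2f}, f_M,low = {f_M_low:.2f}, "
          f"f_c = {f_c_02:.3f} (0.2 Msun) / {f_c_0092:.2f} (0.092 Msun)")
```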
Capriotti \& Hawley (\cite{cap96}) have undertaken a detailed analysis of cluster mass loss within an isothermal halo potential for a range of cluster masses, density profiles and Galactocentric distances. Their analysis takes account of evaporation, disruption and tidal processes. They find that clusters with masses between $10^5$ and $10^7~\mbox{M}_{\sun}$ generally survive largely intact to the present day but that less massive clusters survive only if they have high central density concentrations, nearly circular orbits and reside at large distances from the Galactic centre. At the Solar position Capriotti \& Hawley find that clusters with a half-mass to tidal radius ratio of 0.3 (comparable to the value for the clusters analysed here and in Paper~I) survive more than 95\% intact only if they have masses exceeding $10^6~\mbox{M}_{\sun}$. However, there are a number of reasons why these limits may be stronger than applicable to the low-mass star cluster scenario. Firstly, Capriotti \& Hawley assume that the clusters comprise $m = 0.8~\mbox{M}_{\sun}$ stars (i.e. between 4 and 9 times more massive than the stars considered in the present study). The evaporation timescale scales approximately as $m^{-1}$ for a fixed cluster mass, so for clusters comprising lower mass stars the evaporation timescale is correspondingly longer. Secondly, the local halo density assumed by Capriotti \& Hawley of $0.0138~\mbox{M}_{\sun}$~pc$^{-3}$ at a Galactocentric distance of 8.5~kpc is on the higher end of the values for the halo models analysed in this paper, and is considerably larger than the allowed MACHO lower limit, $f_{\rm M,low} \rho_0$, on the local density in lenses (by a factor of between 5 and 10 after normalising to a distance $R_0 = 8$~kpc). Hence disruption due to close encounters with other clusters is substantially less in the halo models investigated here than for the model analysed by Capriotti \& Hawley.
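The two numerical factors quoted above are simple arithmetic; the sketch below checks them using the values stated in the text (it is not taken from Capriotti \& Hawley, and the density ratios are compared before the small $R_0 = 8$~kpc renormalisation mentioned above).

```python
# Check of the two factors quoted in the text (a sketch, using the text's
# numbers only).
m_CH = 0.8                         # stellar mass assumed by Capriotti & Hawley
print(m_CH / 0.2, m_CH / 0.092)    # t_evap ~ 1/m: factors of 4 and ~8.7

rho_CH = 0.0138                    # their local halo density [Msun/pc^3]
# MACHO 95% CL lower-limit local lens density, f_M,low * rho_0, per model:
for name, f_low, rho0 in [('A', 0.17, 0.0115), ('B', 0.12, 0.0145),
                          ('C', 0.32, 0.0073), ('D', 0.16, 0.0190),
                          ('S', 0.21, 0.0079)]:
    # ratio of the Capriotti & Hawley density to the MACHO lower limit,
    # roughly the quoted factor of 5-10
    print(name, round(rho_CH / (f_low * rho0), 1))
```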
Lastly, a study by Oh \& Lin (\cite{oh92}) has shown that the cluster escape rates may be substantially smaller than commonly assumed due to angular momentum transfer arising from the action of the Galactic tidal torque on cluster stars with highly eccentric orbits (which in the absence of the torque would constitute the bulk of the escapees). The rates calculated by Oh \& Lin for isotropic cluster models are broadly consistent with the values used by Capriotti \& Hawley (\cite{cap96}) and other authors, but for the case of anisotropic stellar orbits the escape rates can be 1--2 orders of magnitude smaller, again implying correspondingly longer evaporation timescales. It therefore appears that, under certain conditions, one may be able to reconcile the high cluster fraction requirements derived in the present study with the findings of cluster dynamical studies, at least for clusters comprising stars close to the hydrogen-burning limit. In any case, the validity of the figures in Table~\ref{t3} depends upon just how smoothly distributed are the stars which have evaporated from clusters. If they still have not completely homogenised today, instead maintaining a somewhat lumpy distribution (reflecting their cluster origin), then the limits on $f_{\rm c}$ are too strong. For example, a cluster with a mass $3 \times 10^4~\mbox{M}_{\sun}$ and radius 3~pc represents an over-density of about $3 \times 10^4$ over the background average at the Solar neighbourhood [i.e. $\delta \rho/\rho \equiv (\rho - \overline{\rho})/\overline{\rho} = 3 \times 10^4$].
However, an under-density in the unclustered (or more precisely `post-clustered') stellar population of just a factor 10 ($\delta \rho/\rho = -0.9$) over volumes larger than $3\times 10^5$~pc$^3$, which is roughly the volume probed by 50 HST fields for hydrogen-burning limit stars (and is of order 10 times smaller than the halo volume per cluster), is all that is required to weaken the constraints on $f_{\rm max}$ by a factor 10. This would result in a much more comfortable lower limit on $f_{\rm c}$ of just 0.5 for 0.092-$\mbox{M}_{\sun}$ stars, rather than 0.95. If the under-density is a factor 5 lower than the background ($\delta \rho/\rho = -0.8$) one requires $f_{\rm c} > 0.75$ for the lowest mass stars and for an under-density factor of 2 ($\delta \rho/\rho = -0.5$) $f_{\rm c}$ must exceed 0.9. In order to rule out the cluster scenario definitively (say with 95\% confidence) one needs a survey that is both sufficiently wide and deep that it might be expected to contain at least 3 clusters on average, regardless of their mass and radius (though their mass and radius must be dynamically permitted). From Fig.~\ref{f1} it appears that the most difficult dynamically-allowed clusters for HST to exclude are those with a mass of around $3 \times 10^4~\mbox{M}_{\sun}$. If the halo fraction in low-mass stars is around 40\%, typical of the preferred value for the MACHO results, then the local number density of such clusters is around 130~kpc$^{-3}$ (adopting a local halo density of $0.01~\mbox{M}_{\sun}$~pc$^{-3}$; in reality of course the average density within the fields is dependent upon the halo model and the field locations). 
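The quoted number density follows from simple arithmetic. For a halo fraction $f = 0.4$ in clusters of mass $M_{\rm cl} = 3 \times 10^4~\mbox{M}_{\sun}$ and a local halo density $\rho_0 = 0.01~\mbox{M}_{\sun}$~pc$^{-3}$,
\begin{displaymath}
   n_{\rm cl} = \frac{f \rho_0}{M_{\rm cl}} \simeq \frac{0.4 \times 0.01~\mbox{M}_{\sun}~{\rm pc}^{-3}}{3 \times 10^4~\mbox{M}_{\sun}}
   \simeq 1.3 \times 10^{-7}~{\rm pc}^{-3} \simeq 130~{\rm kpc}^{-3}.
\end{displaymath}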
If the clusters comprise hydrogen-burning limit zero-metallicity stars ($V-I = 1.57$) then an HST-type survey will be sensitive to them out to about 3.6~kpc in the $I$ band [using the colour-magnitude relation of Eq.~(\ref{e2.5}), and assuming a limiting $I$-band sensitivity of 24~mag], so in order to expect to detect at least 3 such clusters, the survey must cover a solid angle of at least 4.5~deg$^2$, or the equivalent of 3\,700 HST fields! Therefore, only if HST fails to detect any clusters from 3\,700 fields would dynamically-allowed clusters be ruled out with 95\% confidence from explaining all of the observed microlensing events. For comparison, an all-sky $K$-band survey over Galactic latitudes $|b| > 10 \degr$ requires a limiting magnitude of about 17.5 in order to produce similar constraints. This should be compared to the expected $K$-band limit of about 14 for the ground-based DENIS and 2MASS surveys. An easier alternative is to instead obtain several fields as close to the Galactic centre as is feasible, where the dynamical constraints are much stronger than for the Solar neighbourhood position. For low-mass stars, this necessitates a telescope such as HST with the capability for obtaining very deep fields, since shallow surveys with wide angular coverage essentially only probe the local Solar neighbourhood. \section{Conclusion} Kerins (\cite{ker97}), referred to as Paper~I, has suggested that low-mass stars could provide the substantial dark matter fraction indicated by the combined 1st- and 2nd-year MACHO gravitational microlensing results. Whilst observations from the Hubble Space Telescope (HST) and other instruments have been interpreted as excluding such stars from having a significant halo density, Paper~I shows that their density could in fact be substantial if they are grouped into globular-cluster configurations. The motivation for such clusters comes from the baryonic dark matter formation scenarios which are discussed in Paper~I.
Paper~I calculates the constraints on such clusters (assuming they comprise low-mass stars of primordial metallicity) which arise from MACHO microlensing results, dynamical constraints on massive halo objects, and observations from 20 HST fields obtained by Gould et al. (\cite{gou96}). However, the results of Paper~I apply only to the spherically-symmetric cored isothermal halo model investigated there. In the present study, the number of HST fields utilised has been increased to 51, and now incorporates the Hubble Deep Field and Groth Strip fields (Gould et al. \cite{gou97}). The model dependency of the results in Paper~I has been tested by adopting 5 of the reference halo models employed in the MACHO collaboration's analysis of its microlensing results. One of the models is similar to the halo investigated in Paper~I whilst the other 4 are drawn from a self-consistent family of power-law halo models and comprise spherically-symmetric haloes with a rising rotation curve, a falling rotation curve and a flat curve, as well as a flattened (E6) halo model. The 51 HST fields contain just 145 candidates with $V-I$ colours between 1.2 and 1.7 (spanning the colour range predicted for zero-metallicity stars with masses between the hydrogen-burning limit and $0.2~\mbox{M}_{\sun}$) against the tens or hundreds of thousands predicted for the halo models. From this one concludes that the halo fraction in unclustered low-mass stars is at most $0.5-1.1\%$ with 95\% confidence, depending on the halo model, and in all cases falls well short of providing even the lower-limit halo fraction inferred by MACHO. However, in the cluster scenario there exists a wide range of cluster masses and radii which can allow a halo fraction consistent with the lower limit derived from MACHO microlensing results whilst remaining compatible with dynamical limits and HST observations. 
Consistency with the preferred microlensing halo fraction, rather than the lower limit, requires fine-tuning of the cluster parameters (as found in Paper~I), but is possible for all models investigated. The one potentially serious problem for the cluster scenario is that the strong constraints on unclustered stars imply that an overwhelming fraction of all stars, at least 95\%, must still reside in clusters at the present day. This is higher than expected from generic cluster evaporation considerations for much of the permitted cluster mass range, though it may still be consistent with clusters comprising stars with anisotropic orbits. In any case, these limits assume that stars which have already evaporated from clusters now form a perfectly smooth distribution which traces the halo density profile. If instead these stars still have a lumpy distribution, reflecting the fact that they previously resided in clusters, then the cluster fraction limits are too strong. Probably the only way to definitively exclude or confirm the cluster scenario is to obtain several deep fields as close to the Galactic centre as is practical, where the strong dynamical constraints severely restrict the range of feasible cluster parameters. \begin{acknowledgements} I am grateful to Andy Gould, John Bahcall and Chris Flynn for their permission to use the HST data in advance of publication. This research is supported by an EU Marie Curie TMR Fellowship. \end{acknowledgements}
\section{Analysis} Table \ref{tab:qual_examples} shows example interpretations by {\textsc{SelfExplain}}; we show some additional analysis of explanations from {\textsc{SelfExplain}}\footnote{additional analysis in the appendix due to space constraints} in this section. \\ \noindent \textbf{Does {\textsc{SelfExplain}}'s explanation help predict model behavior?} In this setup, humans are presented with an explanation and an input, and must correctly predict the model's output \citep{doshi2017towards,Lertvittayakumjorn2019HumangroundedEO,hase-bansal-2020-evaluating}. We randomly selected 16 samples spanning an equal number of true positives, true negatives, false positives and false negatives from the dev set. Three human judges were tasked with predicting the model decision with and without the presence of the model explanation. We observe that when users were presented with the explanation, their ability to predict the model decision improved by an average of 22\%, showing that with {\textsc{SelfExplain}}'s explanations, humans could better understand the model's behavior. \\ \paragraph{Performance Analysis:} In \GIL, we study the performance trade-off of varying the number of retrieved influential concepts $K$. From a performance perspective, there is only a marginal drop in moving from the base model to the {\textsc{SelfExplain}}~model with both \GIL~and \LIL~(shown in Table \ref{tab:gil_k_analysis}). From our experiments with human judges, we found that for sentence-level classification tasks $K=5$ is preferable for a balance of performance and ease of interpretability. \begin{table}[!ht] \centering \begin{tabular}{@{}ccc@{}} \toprule \GIL~top-$K$ & steps/sec & memory \\ \midrule base & 2.74 & 1$x$ \\ $K$=5* & 2.50 & 1.03$x$ \\ $K$=100 & 2.48 & 1.04$x$ \\ $K$=1000 & 2.20 & 1.07$x$ \\ \bottomrule \end{tabular} \caption{Effect of $K$ from \GIL. We use {\textsc{SelfExplain}}-XLNet~on SST-2 for this analysis. 
*$K$=1/5/10 did not show considerable differences among them} \label{tab:gil_k_analysis} \end{table} \paragraph{\LIL-\GIL-Linear layer agreement:} To understand whether our explanations lead to predicting the same label as the model's prediction, we analyze whether the final logit activations of the \GIL~and \LIL~layers agree with the linear layer activations. Towards this, we compute an agreement between the label distributions from the \GIL~and \LIL~layers and the distribution of the linear layer. Our \LIL-\emph{linear} F1 is 96.6\%, \GIL-\emph{linear} F1 is 100\% and \GIL-\LIL-\emph{linear} F1 agreement is 96.6\% for {\textsc{SelfExplain}}-XLNet~on the SST-2 dataset. We observe that the agreement rates between the \GIL, \LIL~and linear layers are very high, validating that {\textsc{SelfExplain}}'s layers agree on the same classification prediction, i.e. that \GIL~and \LIL~concepts lead to the same predictions. \paragraph{Are \LIL~concepts relevant?} For this analysis, we randomly selected 50 samples from the SST-2 dev set and removed the most salient phrase ranked by \LIL. Annotators were asked to predict the label without the most relevant local concept, and their accuracy dropped by 7\%. 
We also computed the {\textsc{SelfExplain}}-XLNet~classifier's accuracy on the same input and the accuracy dropped by ${\sim}14\%$.\footnote{Statistically significant by the Wilson interval test.} This suggests that \LIL~ captures relevant local concepts.\footnote{Samples from this experiment are shown in \S\ref{subsec:relevance_examples}.} \begin{table*}[ht] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lll@{}} \toprule Input & \begin{tabular}[c]{@{}l@{}} Top \LIL~interpretations \end{tabular} & \begin{tabular}[c]{@{}l@{}}Top \GIL~interpretations \end{tabular} \\ \midrule \begin{tabular}[c]{@{}l@{}}it 's a very charming \\ and often affecting journey\end{tabular} & \begin{tabular}[c]{@{}l@{}}often affecting, \\ very charming\end{tabular} & \begin{tabular}[c]{@{}l@{}}scenes of cinematic perfection that steal your heart away, \\ submerged, that extravagantly\end{tabular} \\ \midrule \begin{tabular}[c]{@{}l@{}}it ' s a charming and often \\ affecting journey of people\end{tabular} & \begin{tabular}[c]{@{}l@{}}of people, \\ charming and often affecting\end{tabular} & \begin{tabular}[c]{@{}l@{}}scenes of cinematic perfection that steal your heart away, \\ submerged, that extravagantly\end{tabular} \\ \bottomrule \end{tabular}% } \caption{A sample (from SST-2) where an input perturbation leads to different local concepts, but the global concepts remain stable.} \label{tab:similarity_example} \end{table*} \paragraph{Stability: do similar examples have similar explanations?} \citet{melis2018towards} argue that a crucial property that interpretable models need to address is \emph{stability}: the model should be robust enough that a minimal change in the input does not lead to drastic changes in the observed interpretations. We qualitatively analyze this by measuring the overlap of {\textsc{SelfExplain}}'s extracted concepts for similar examples. 
Table~\ref{tab:similarity_example} shows a representative example in which minor variations in the input lead to differently ranked local phrases, but their global influential concepts remain stable. \section{Appendix} \subsection{Qualitative Examples} \label{subsec:qual_examples} Table \ref{tab:qual_examples_appendix} shows some qualitative examples from our best performing SST-2 model. \begin{table*}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lll@{}} \toprule Input Sentence & Explanation from Input & Explanation from Training Data \\ \midrule \begin{tabular}[c]{@{}l@{}}offers much to enjoy ... \\ and a lot to mull over in terms of love ,\\ loyalty and the nature of staying friends .\end{tabular} & ['much to enjoy', 'to enjoy', 'to mull over'] & \begin{tabular}[c]{@{}l@{}}['feel like you ate a reeses \\\ without the peanut butter']\end{tabular} \\ \begin{tabular}[c]{@{}l@{}}puts a human face on a land most \\ westerners are unfamiliar with .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['put s a human face on a land most \\ westerners are unfamiliar with',\\ 'a human face']\end{tabular} & ['dazzle and delight us'] \\ nervous breakdowns are not entertaining . & ['n erv ous breakdown s', 'are not entertaining'] & ['mesmerizing portrait'] \\ too slow , too long and too little happens . & ['too long', 'too little happens', 'too little'] & \begin{tabular}[c]{@{}l@{}}['his reserved but existential poignancy', \\ 'very moving and revelatory footnote']\end{tabular} \\ very bad . 
& ['very bad'] & \begin{tabular}[c]{@{}l@{}}['held my interest precisely',\\ 'intriguing , observant', \\ 'held my interest']\end{tabular} \\ \begin{tabular}[c]{@{}l@{}}it haunts , horrifies , startles and fascinates ;\\ it is impossible to look away .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['to look away', 'look away',\\ 'it haun ts , horr ifies , start les and fasc inates']\end{tabular} & \begin{tabular}[c]{@{}l@{}}['feel like you ate a reeses \\ without the peanut butter']\end{tabular} \\ it treats women like idiots . & ['treats women like idiots', 'like idiots'] & \begin{tabular}[c]{@{}l@{}}[ 'neither amusing \\ nor dramatic enough \\ to sustain interest']\end{tabular} \\ \begin{tabular}[c]{@{}l@{}}the director knows how to apply textural gloss , \\ but his portrait of sex-as-war is strictly sitcom .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['the director', \\ 'his portrait of sex - as - war']\end{tabular} & \begin{tabular}[c]{@{}l@{}}[ 'absurd plot twists' ,\\ 'idiotic court maneuvers \\ and stupid characters']\end{tabular} \\ too much of the humor falls flat . & \begin{tabular}[c]{@{}l@{}}['too much of the humor', \\ 'too much', 'falls flat']\end{tabular} & ['infuriating'] \\ \begin{tabular}[c]{@{}l@{}}the jabs it employs are short , \\ carefully placed and dead-center .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['it employs',\\ 'carefully placed', 'the j abs it employs']\end{tabular} & ['with terrific flair'] \\ \begin{tabular}[c]{@{}l@{}}the words , ` frankly , my dear , \\ i do n't give a damn ,\\ have never been more appropriate .\end{tabular} & ["do n 't give a damn"] & ['spiteful idiots'] \\ \begin{tabular}[c]{@{}l@{}}one of the best films of the year with its \\ exploration of the obstacles \\ to happiness faced by five contemporary \\ individuals ... 
a psychological masterpiece .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['of the best films of the year', \\ 'of the year', 'the year']\end{tabular} & ['bang'] \\ \begin{tabular}[c]{@{}l@{}}my wife is an actress is an utterly \\ charming french comedy that feels so \\ american in sensibility and style it 's\\ virtually its own hollywood remake .\end{tabular} & \begin{tabular}[c]{@{}l@{}}['an utterly charming french comedy', \\ 'utterly charming', 'my wife']\end{tabular} & ['all surface psychodramatics'] \\ \bottomrule \end{tabular}% } \caption{Samples from {\textsc{SelfExplain}}'s interpreted output. } \label{tab:qual_examples_appendix} \end{table*} \subsection{Relevant Concept Removal} \label{subsec:relevance_examples} Table \ref{tab:lil_examples} shows us the samples where the model flipped the label after the most relevant local concept was removed. In this table, we show the original input, the perturbed input after removing the most relevant local concept, and the corresponding model predictions. 
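The probe behind these examples can be sketched in a few lines: mask the top-ranked local concept and check whether the prediction flips. In the sketch below, `rank_concepts` and `classify` are illustrative stand-ins, not {\textsc{SelfExplain}}'s actual \LIL~layer or XLNet classifier.

```python
# Hypothetical sketch of the concept-removal probe: remove the top-ranked
# local concept from each input and check whether the classifier's
# prediction flips. `rank_concepts` and `classify` are placeholder
# stand-ins for the LIL ranking and the trained classifier.

def mask_concept(text: str, concept: str, mask: str = "____") -> str:
    """Replace the first occurrence of a concept span with a mask."""
    return text.replace(concept, mask, 1)

def flip_rate(samples, rank_concepts, classify):
    """Fraction of samples whose prediction changes after masking the
    single most relevant local concept."""
    flips = 0
    for text in samples:
        top = rank_concepts(text)[0]  # most relevant local concept
        if classify(mask_concept(text, top)) != classify(text):
            flips += 1
    return flips / len(samples)

# Toy ranking/classification functions, for illustration only.
def toy_rank(text):
    for phrase in ("very bad", "charming"):
        if phrase in text:
            return [phrase]
    return [text.split()[0]]

def toy_classify(text):
    return "pos" if "charming" in text else "neg"

rate = flip_rate(["a charming journey", "very bad ."], toy_rank, toy_classify)
```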
\begin{table*}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}llll@{}} \toprule Original Input & Perturbed Input & \begin{tabular}[c]{@{}l@{}}Original \\ Prediction\end{tabular} & \begin{tabular}[c]{@{}l@{}}Perturbed\\ Prediction\end{tabular} \\ \midrule unflinchingly bleak and desperate & unflinch \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ & negative & positive \\ \begin{tabular}[c]{@{}l@{}}the acting , costumes , music , \\ cinematography and sound are all \\ astounding given the production 's\\ austere locales .\end{tabular} & \begin{tabular}[c]{@{}l@{}}\_\_\_\_\_\_\_\_ , costumes , music , cinematography \\ and sound are all astounding given the\\ production 's austere locales .\end{tabular} & positive & negative \\ \begin{tabular}[c]{@{}l@{}}we root for ( clara and paul ) , \\ even like them , \\ though perhaps it 's an emotion\\ closer to pity .\end{tabular} & \begin{tabular}[c]{@{}l@{}}we root for ( clara and paul ) ,\_\_\_\_\_\_\_\_\_\_\_ ,\\ though perhaps it 's an emotion closer to pity .\end{tabular} & positive & negative \\ \begin{tabular}[c]{@{}l@{}}the emotions are raw and will strike\\ a nerve with anyone who 's ever \\ had family trauma .\end{tabular} & \begin{tabular}[c]{@{}l@{}}\_\_\_\_\_\_\_\_\_\_ are raw and will strike a\\ nerve with anyone who 's ever had family trauma .\end{tabular} & positive & negative \\ holden caulfield did it better . & holden caulfield \_\_\_\_\_\_\_\_\_\_ . 
& negative & positive \\ \begin{tabular}[c]{@{}l@{}}it 's an offbeat treat that pokes fun at the \\ democratic exercise while also \\ examining its significance for those who take part .\end{tabular} & \begin{tabular}[c]{@{}l@{}}it 's an offbeat treat that pokes \\ fun at the democratic exercise\\ while also examining \_\_\_\_\_\_\_\_\_ for \\ those who take part .\end{tabular} & positive & negative \\ \begin{tabular}[c]{@{}l@{}}as surreal as a dream and as detailed as a \\ photograph , as visually dexterous \\ as it is at times imaginatively overwhelming .\end{tabular} & \begin{tabular}[c]{@{}l@{}}\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ and as detailed as a photograph ,\\ as visually dexterous as it is at times\\ imaginatively overwhelming .\end{tabular} & positive & negative \\ \begin{tabular}[c]{@{}l@{}}holm ... embodies the character with\\ an effortlessly regal charisma .\end{tabular} & \begin{tabular}[c]{@{}l@{}}holm ... embodies the\\ character with \_\_\_\_\_\_\_\_\_\_\_\_\end{tabular} & positive & negative \\ \begin{tabular}[c]{@{}l@{}}it 's hampered by a lifetime-channel \\ kind of plot and a lead actress who is out of her depth .\end{tabular} & \begin{tabular}[c]{@{}l@{}}it 's hampered by a \\ lifetime-channel kind of \\ plot and a lead actress\\ who is \_\_\_\_\_\_\_\_\_\_\_\_ .\end{tabular} & negative & negative \\ \bottomrule \end{tabular}% } \caption{Samples where the model predictions flipped after removing the most relevant local concept. } \label{tab:lil_examples} \end{table*} \section{Conclusion} In this paper, we propose {\textsc{SelfExplain}}, a novel self-explaining framework that enables explanations through higher-level concepts, improving from low-level word attributions. {\textsc{SelfExplain}}~ provides both local explanations (via relevance of each input concept) and global explanations (through influential concepts from the training data) in a single framework via two novel modules (\LIL~ and \GIL), and trainable end-to-end. 
Through human evaluation, we show that our interpreted model outputs are perceived as more trustworthy, understandable, and adequate for explaining model decisions compared to previous approaches to explainability. This opens an exciting research direction for building inherently interpretable models for text classification. Future work will extend the framework to other tasks and to longer contexts, beyond a single input sentence. We will also explore additional approaches to extract target local and global concepts, including abstract syntactic, semantic, and pragmatic linguistic features. Finally, we will study the right level of abstraction for generating explanations for each of these tasks in a human-friendly way. \section*{Acknowledgements} This material is based upon work funded by the DARPA CMO under Contract No.~HR001120C0124, and by the United States Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Defense Nuclear Nonproliferation Research and Development (DNN R\&D) Next-Generation AI research portfolio. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. 
\section{Dataset and Experiments} \begin{table}[!ht] \centering \begin{tabular}{@{}lrrrr@{}} \toprule \textbf{Dataset} & $\textbf{C}$ & $\textbf{L}$ & \textbf{Train} & \textbf{Test} \\ \midrule SST-2 & 2 & 19 & 68,222 & 1,821 \\ SST-5 & 5 & 18 & 10,754 & 1,101 \\ TREC-6 & 6 & 10 & 5,451 & 500 \\ TREC-50 & 50 & 10 & 5,451 & 499 \\ SUBJ & 2 & 23 & 8,000 & 1,000 \\ \bottomrule \end{tabular}% \caption{Dataset statistics, where $\mathbf{C}$ is the number of classes and $\mathbf{L}$ is the average sentence length.} \label{tab:data_stats} \end{table} \begin{table*}[!htbp] \centering \begin{tabular}{@{}cccccc@{}} \toprule Model & SST-2 & SST-5 & TREC-6 & TREC-50 & SUBJ\\ \midrule XLNet & 93.4 & 53.8 & \textbf{96.6} & 82.8 & 96.2 \\ {\textsc{SelfExplain}}-XLNet~($K$=5) & \textbf{94.6} & \textbf{55.2} & 96.4 & \textbf{83.0} & \textbf{96.4} \\ {\textsc{SelfExplain}}-XLNet~($K$=10) & 94.4 & 55.2 & 96.4 & 82.8 & 96.4 \\ \midrule RoBERTa & 94.8 & 53.5 & 97.0 & 89.0 & 96.2 \\ {\textsc{SelfExplain}}-RoBERTa~($K$=5) & \textbf{95.1} & \textbf{54.3} & \textbf{97.6} & \textbf{89.4} & \textbf{96.3} \\ {\textsc{SelfExplain}}-RoBERTa~($K$=10) & 95.1 & 54.1 & 97.6 & 89.2 & 96.3 \\ \bottomrule \end{tabular} \caption{Performance comparison of models with and without \GIL~ and \LIL~ layers. All experiments used the same encoder configurations. We use the development set for SST-2 results (the test set of SST-2 is part of the GLUE benchmark) and the test sets for SST-5, TREC-6, TREC-50 and SUBJ. $\alpha,\beta = 0.1$ for all the above settings.} \label{tab:performance-nums} \end{table*} \paragraph{Datasets:} \label{subsec:datasets} We evaluate our framework on five classification datasets: (i) SST-2\footnote{https://gluebenchmark.com/tasks}: the sentiment classification task of \citet{socher2013recursive}, where the task is to predict the sentiment of movie review sentences as a binary classification task. 
(ii) SST-5\footnote{https://nlp.stanford.edu/sentiment/index.html}: a fine-grained sentiment classification task that uses the same dataset as before, but modifies it into a finer-grained 5-class classification task. (iii) TREC-6\footnote{https://cogcomp.seas.upenn.edu/Data/QA/QC/}: a question classification task proposed by \citet{li2002learning}, where each question should be classified into one of 6 question types. (iv) TREC-50: a fine-grained version of the same TREC-6 question classification task with 50 classes. (v) SUBJ: a subjective/objective binary classification dataset \citep{pang2005seeing}. The dataset statistics are shown in Table \ref{tab:data_stats}. \paragraph{Experimental Settings:} For our {\textsc{SelfExplain}}~experiments, we consider two transformer encoder configurations as our base models: (1) the RoBERTa~encoder \citep{liu2019roberta} --- a robustly optimized version of BERT \cite{devlin-etal-2019-bert}; (2) the XLNet~encoder \cite{yang2019xlnet} --- a transformer model based on the Transformer-XL \cite{dai2019transformer} architecture. We incorporate {\textsc{SelfExplain}}~into RoBERTa~and XLNet, and use the above encoders without the \GIL~and \LIL~layers as the baselines. We generate parse trees \citep{Kitaev2018ConstituencyPW} to extract target concepts for the input and follow the same pre-processing steps as the original encoder configurations for the rest. We also maintain the hyperparameters and weights from the pre-training of the encoders. The architecture with the \GIL~and \LIL~modules is fine-tuned on the datasets described in \S \ref{subsec:datasets}. For the number of global influential concepts $K$, we consider two settings: $K=5, 10$. We also perform hyperparameter tuning on $\alpha, \beta = \{ 0.01, 0.1, 0.5, 1.0 \}$ and report results on the best model configuration. All models were trained on an NVIDIA V-100 GPU. 
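As a rough sketch of the concept-extraction step, phrase-level candidate concepts can be read off as the non-terminal spans of a bracketed constituency parse. The minimal reader below is a stand-in that only assumes the parse is available as an s-expression string (the actual parses come from \citet{Kitaev2018ConstituencyPW}); the `min_len` filter is an illustrative choice.

```python
# Sketch: extract phrase-level "concepts" (non-terminal spans) from a
# bracketed constituency parse string such as "(S (NP ...) (VP ...))".
# This reader is a simplified stand-in for an external parser's output.

def parse_sexpr(s):
    """Parse a bracketed tree into nested (label, children) tuples,
    where leaf children are plain word strings."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:
                children.append(tokens[i])
                i += 1
        return (label, children), i + 1

    tree, _ = helper(0)
    return tree

def leaves(node):
    """Collect the words under a node, left to right."""
    if isinstance(node, str):
        return [node]
    _, children = node
    return [w for c in children for w in leaves(c)]

def extract_concepts(tree, min_len=2):
    """Collect the word span under every non-terminal with >= min_len words."""
    concepts = []

    def walk(node):
        if isinstance(node, str):
            return
        _, children = node
        words = leaves(node)
        if len(words) >= min_len:
            concepts.append(" ".join(words))
        for c in children:
            walk(c)

    walk(tree)
    return concepts

tree = parse_sexpr("(S (NP (DT a) (JJ charming) (NN journey)) (VP (VBZ awaits)))")
spans = extract_concepts(tree)
```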
\paragraph{Classification Results:} \label{subsec:class_results} We first evaluate the utility of the classification models after incorporating the \GIL~and \LIL~layers in Table \ref{tab:performance-nums}. Across the different classification tasks, we observe that {\textsc{SelfExplain}}-RoBERTa~and {\textsc{SelfExplain}}-XLNet~consistently show competitive performance compared to the base models, except for a marginal drop on the TREC-6 dataset for {\textsc{SelfExplain}}-XLNet. We also observe that the hyperparameter $K$ did not make a noticeable difference. Additional ablation experiments in Table~\ref{tab:ablation} suggest that the gains through \GIL~and \LIL~are complementary and that both layers contribute to the performance gains. \begin{table}[ht] \centering \small \begin{tabular}{@{}lc@{}} \toprule Model & Accuracy \\ \midrule XLNet-Base & 93.4 \\ {\textsc{SelfExplain}}-XLNet~ + \LIL & 94.3 \\ {\textsc{SelfExplain}}-XLNet~ + \GIL & 94.0 \\ {\textsc{SelfExplain}}-XLNet~ + \GIL~ + \LIL & 94.6 \\ \midrule RoBERTa-Base & 94.8 \\ {\textsc{SelfExplain}}-RoBERTa~ + \LIL & 94.8 \\ {\textsc{SelfExplain}}-RoBERTa~ + \GIL & 94.8 \\ {\textsc{SelfExplain}}-RoBERTa~ + \GIL~ + \LIL & 95.1 \\ \bottomrule \end{tabular} \caption{Ablation: {\textsc{SelfExplain}}-XLNet~ and {\textsc{SelfExplain}}-RoBERTa~ base models on SST-2.} \label{tab:ablation} \end{table} \section{Explanation Evaluation} Explanations are notoriously difficult to evaluate quantitatively \citep{DoshiVelez2017AccountabilityOA}. A \emph{good} model explanation should be (i) relevant to the current input and predictions and (ii) understandable to humans \citep{deyoung-etal-2020-eraser,jacovi-goldberg-2020-towards,Wiegreffe2020MeasuringAB,jain-etal-2020-learning}. Towards this, we evaluate the explanations along the following criteria: \squishlist \item \textbf{Sufficiency} -- Do explanations sufficiently reflect the model predictions? 
\item \textbf{Plausibility} -- Do explanations appear plausible and understandable to humans? \item \textbf{Trustability} -- Do explanations improve human trust in model predictions? \squishend From {\textsc{SelfExplain}}, we extracted (i) \textit{most relevant local concepts}: the top-ranked phrases based on $\mathbf{r}(nt)_{1:J}$ from the \LIL~layer, and (ii) \textit{top influential global concepts}: the most influential concepts $q_{1:K}$ ranked by the output of the \GIL~layer, as the model explanations to be used for the evaluations. \subsection{Do {\textsc{SelfExplain}}~ explanations reflect predicted labels?} \emph{Sufficiency} aims to evaluate whether model explanations alone are highly indicative of the predicted label \citep{Jacovi2018UnderstandingCN,Yu2019RethinkingCR}. The ``Faithfulness-by-construction'' (FRESH) pipeline \citep{jain-etal-2020-learning} is an example of such a framework to evaluate the sufficiency of explanations: the sole explanations, without the remaining parts of the input, must be sufficient for predicting a label. In FRESH, a BERT \citep{devlin-etal-2019-bert} based classifier is trained to perform a task using only the extracted explanations without the rest of the input. An explanation that achieves high accuracy using this classifier is indicative of its ability to recover the original model prediction. We evaluate the explanations on the sentiment analysis task. Explanations from {\textsc{SelfExplain}}~are incorporated into the FRESH framework and we compare the predictive accuracy of the explanations against baseline explanation methods. Following \citet{jain-etal-2020-learning}, we use the same experimental setup and saliency-based baselines such as attention \citep{lei-etal-2016-rationalizing,bastings-etal-2019-interpretable} and gradient \citep{Li2016VisualizingAU} based explanation methods. From Table \ref{tab:quant_eval}\footnote{In these experiments, explanations are pruned at a maximum of 20\% of the input. 
For {\textsc{SelfExplain}}, we select up to top-$K$ concepts, thresholding at 20\% of the input}, we observe that {\textsc{SelfExplain}}~ explanations from \LIL~and \GIL~show high predictive performance compared to all the baseline methods. Additionally, \GIL~explanations outperform full-text (an explanation that uses all of the input sample) performance, which is often considered an upper bound for span-based explanation approaches. We hypothesize that this is because \GIL~explanation concepts from the training data are very relevant to help disambiguate the input text. In summary, outputs from {\textsc{SelfExplain}}~are more predictive of the label compared to prior explanation methods, indicating higher sufficiency of explanations. \begin{table}[!ht] \centering \normalsize \begin{tabular}{@{}llc@{}} \toprule Model & Explanation & Accuracy \\ \midrule Full input text & - & 0.90 \\ \citet{lei-etal-2016-rationalizing} & \begin{tabular}[c]{@{}l@{}}contiguous\\ top-$K$ tokens \end{tabular} & \begin{tabular}[c]{@{}l@{}}0.71\\ 0.74\end{tabular} \\ \citet{bastings-etal-2019-interpretable} & \begin{tabular}[c]{@{}l@{}}contiguous\\ top-$K$ tokens \end{tabular} & \begin{tabular}[c]{@{}l@{}}0.60\\ 0.59\end{tabular} \\ \citet{Li2016VisualizingAU} & \begin{tabular}[c]{@{}l@{}}contiguous\\ top-$K$ tokens \end{tabular} & \begin{tabular}[c]{@{}l@{}}0.70\\ 0.68\end{tabular} \\ \texttt{[CLS]} Attn & \begin{tabular}[c]{@{}l@{}}contiguous\\ top-$K$ tokens \end{tabular} & \begin{tabular}[c]{@{}l@{}}0.81\\ 0.81\end{tabular} \\ \midrule {\textsc{SelfExplain}}-\LIL & top-$K$ concepts & \textbf{0.84} \\ {\textsc{SelfExplain}}-\GIL & top-$K$ concepts & \textbf{0.93} \\ \bottomrule \end{tabular} \caption{Model predictive performances (prediction accuracy) on the SST dataset test set. \emph{Contiguous} refers to explanations that are spans of text and top-$K$ refers to model-ranked top-$K$ tokens. {\textsc{SelfExplain}}~ also uses at most top-$K$ (where $K$=2) concepts for both \LIL~and \GIL.
{\textsc{SelfExplain}}~ explanations from both \GIL~ and \LIL~ outperform all baselines.} \label{tab:quant_eval} \end{table} \subsection{Are {\textsc{SelfExplain}}~ explanations plausible and trustable for humans?} \begin{table*}[!ht] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{llll} \hline Sample & $P_C$ & Top relevant phrases from \LIL & Top influential concepts from \GIL \\ \hline \begin{tabular}[c]{@{}l@{}}the iditarod lasts for days -\\ this just felt like it did .\end{tabular} & neg & for days & \begin{tabular}[c]{@{}l@{}}exploitation piece,\\ heart attack\end{tabular} \\ \midrule \begin{tabular}[c]{@{}l@{}}corny, schmaltzy and predictable, but still \\ manages to be kind of heart warming, nonetheless.\end{tabular} & pos & corny, schmaltzy, of heart & \begin{tabular}[c]{@{}l@{}}successfully blended satire, \\ spell binding fun\end{tabular} \\ \midrule \begin{tabular}[c]{@{}l@{}}suffers from the lack of a \\ compelling or comprehensible narrative .\end{tabular} & neg & comprehensible, the lack of & \begin{tabular}[c]{@{}l@{}}empty theatres,\\ tumble weed\end{tabular} \\ \midrule \begin{tabular}[c]{@{}l@{}}the structure the film takes may find matt damon \\ and ben affleck once again looking for residuals\\ as this officially completes a \\ good will hunting trilogy that was never planned .\end{tabular} & pos & the structure of the film & \begin{tabular}[c]{@{}l@{}}bravo, \\ meaning and consolation\end{tabular} \\ \bottomrule \end{tabular}% } \caption{Sample output from the model and its corresponding local and global interpretable outputs on SST-2 ($P_C$ stands for predicted class) (some input text cut for brevity). More qualitative examples are in Appendix \S\ref{subsec:qual_examples}.} \label{tab:qual_examples} \end{table*} Human evaluation is commonly used to evaluate \textit{plausibility} and \textit{trustability}.
To this end, 14 human judges\footnote{Annotators are graduate students in computer science.} annotated 50 samples from the SST-2 validation set of sentiment excerpts \citep{socher2013recursive}. Each judge compared local and global explanations produced by the {\textsc{SelfExplain}}-XLNet~ model against two commonly used interpretability methods: (i) influence functions \citep{han2020explaining} for global interpretability and (ii) saliency detection \citep{Simonyan2014DeepIC} for local interpretability. We follow a setup discussed in \citet{han2020explaining}. Each judge was provided the evaluation criteria (detailed next) with a corresponding description. The models to be evaluated were anonymized, and humans were asked to rate them according to the evaluation criteria alone. Following \citet{ehsan2019automated}, we analyse the \emph{plausibility} of explanations, which aims to understand how users would perceive such explanations if they were generated by humans. We adopt two criteria proposed by \citet{ehsan2019automated}: \paragraph{Adequate justification}: Adequately justifying the prediction is considered to be an important criterion for acceptance of a model \citep{Davis1989PerceivedUP}. We evaluate the \emph{adequacy} of the explanation by asking human judges: ``Does the explanation adequately justify the model prediction?'' Participants deemed explanations that were irrelevant or incomplete as less adequately justifying the model prediction. Human judges were shown the following: (i) input, (ii) gold label, (iii) predicted label, and (iv) explanations from baselines and {\textsc{SelfExplain}}. The models were anonymized and shuffled. Figure \ref{fig:eval_interpret} (left) shows that {\textsc{SelfExplain}}~ achieves a gain of 32\% in perceived adequate justification, providing further evidence that humans perceived {\textsc{SelfExplain}}~explanations as more plausible compared to the baselines.
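The comparative judging protocol above reduces to a per-method preference share over anonymized methods. A minimal sketch with purely illustrative picks (not the study's actual data):

```python
from collections import Counter

# Illustrative tally of judges' picks for "which explanation better
# justifies the prediction"; method names and counts are made up.
picks = ["A", "B", "A", "A", "C", "A", "B", "A", "A", "A"]
shares = {m: 100.0 * n / len(picks) for m, n in Counter(picks).items()}
print(shares)  # → {'A': 70.0, 'B': 20.0, 'C': 10.0}
```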
\begin{figure}[!ht] {\includegraphics[width=\columnwidth]{imgs/comparative_eval.pdf}} \caption{\emph{Adequate justification} and \emph{understandability} of {\textsc{SelfExplain}}~against baselines. The vertical axis shows the percentage of samples evaluated by humans. Humans judge {\textsc{SelfExplain}}~ explanations to better justify the predictions and be more understandable. \label{fig:eval_interpret}} \end{figure} \begin{figure}[!h] {\includegraphics[width=\columnwidth]{imgs/mean_trust.pdf}} \caption{\emph{Mean trust score} of {\textsc{SelfExplain}}~against baselines. The vertical axis shows the mean trust rated on a 1--5 Likert scale. Humans judge that {\textsc{SelfExplain}}~ explanations improve trust in model predictions.} \label{fig:trust_interpret} \end{figure} \paragraph{Understandability:} An essential criterion for transparency in an AI system is the ability of a user to \emph{understand} model explanations \citep{DoshiVelez2017AccountabilityOA}. Our understandability metric evaluates whether a human judge can understand the explanations presented by the model, which would equip a non-expert to verify the model predictions. Human judges were presented (i) the input, (ii) gold label, (iii) sentiment label prediction, and (iv) explanations from different methods (baselines and {\textsc{SelfExplain}}), and were asked to select the explanation that they perceived to be more understandable. Figure \ref{fig:eval_interpret} (right) shows that {\textsc{SelfExplain}}~ achieves a 29\% improvement over the best-performing baseline in terms of understandability of the model explanation. \paragraph{Trustability:} In addition to plausibility, we also evaluate user \emph{trust} of the explanations \citep{singh2018hierarchical,Jin2020Towards}. To evaluate user trust, we follow the same experimental setup as \citet{singh2018hierarchical} and \citet{Jin2020Towards} to compute the \emph{mean trust score}.
For each data sample, subjects were shown explanations and the model prediction from the three interpretability methods and were asked to rate, on a Likert scale of 1--5, how much trust each of the model explanations instilled. Figure \ref{fig:trust_interpret} shows the mean trust score of {\textsc{SelfExplain}}~ in comparison to the baselines. We observe from the results that concept-based explanations are perceived as more trustworthy by humans. \section{Introduction} Neural network models are often opaque: they provide limited insight into interpretations of model decisions and are typically treated as ``black boxes'' \citep{lipton2018mythos}. There has been ample evidence that such models overfit to spurious artifacts \citep{gururangan-etal-2018-annotation,McCoy2019RightFT,kumar-etal-2019-topics} and amplify biases in data \citep{zhao2017men,sun-etal-2019-mitigating}. This underscores the need to understand model decision making. \begin{figure}[t] {\includegraphics[width=\columnwidth]{imgs/local_global.pdf}} \caption{A sample of interpretable concepts from {\textsc{SelfExplain}}~ for a binary sentiment analysis task. Compared to saliency-map style word attributions, {\textsc{SelfExplain}}~ can provide explanations via concepts in the input sample and the concepts in the training data.} \label{fig:local_global_example} \end{figure} Prior work in interpretability for neural text classification predominantly follows two approaches: (i) \emph{post-hoc explanation methods} that explain predictions for previously trained models based on model internals, and (ii) \emph{inherently interpretable models} whose interpretability is built-in and optimized jointly with the end task.
While post-hoc methods \citep{Simonyan2014DeepIC,koh2017understanding,ribeiro-etal-2016-trust} are often the only option for already-trained models, inherently interpretable models \citep{melis2018towards,Arik2020ProtoAttendAP} may provide greater transparency since explanation capability is embedded directly within the model \citep{kim2014bayesian,doshi2017towards,rudin2019stop}. In natural language applications, feature attribution based on attention scores \citep{Xu2015ShowAA} has been the predominant method for developing inherently interpretable neural classifiers. Such methods interpret model decisions \emph{locally} by explaining the classifier's decision as a function of relevance of features (words) in input samples. However, such interpretations were shown to be unreliable \citep{serrano-smith-2019-attention,pruthi2019learning} and unfaithful \citep{jain-wallace-2019-attention,wiegreffe-pinter-2019-attention}. Moreover, with natural language being structured and compositional, explaining the role of higher-level compositional concepts like phrasal structures (beyond individual word-level feature attributions) remains an open challenge. Another known limitation of such feature attribution based methods is that the explanations are limited to the input feature space and often require additional methods \citep[e.g.][]{han2020explaining} for providing global explanations, i.e., explaining model decisions as a function of influential training data. In this work, we propose {\textsc{SelfExplain}}---a self explaining model that incorporates both global and local interpretability layers into neural text classifiers. Compared to word-level feature attributions, we use high-level phrase-based concepts, producing a more holistic picture of a classifier's decisions. 
{\textsc{SelfExplain}}~incorporates: (i) \emph{Locally Interpretable Layer} (\LIL), a layer that quantifies, via activation differences, the relevance of each concept to the final label distribution of an input sample, and (ii) \emph{Globally Interpretable Layer} (\GIL), a layer that uses maximum inner product search (MIPS) to retrieve the most influential concepts from the training data for a given input sample. We show how \GIL~and \LIL~layers can be integrated into transformer-based classifiers, converting them into self-explaining architectures. The interpretability of the classifier is enforced through regularization \citep{melis2018towards}, and the entire model is end-to-end differentiable. To the best of our knowledge, {\textsc{SelfExplain}}~is the first self-explaining neural text classification approach to provide both global and local interpretability in a single model. Ultimately, this work makes a step towards combining the generalization power of neural networks with the benefits of interpretable statistical classifiers with hand-engineered features: our experiments on three text classification tasks spanning five datasets with pretrained transformer models show that incorporating \LIL~and \GIL~layers facilitates richer interpretability while maintaining end-task performance. The explanations from {\textsc{SelfExplain}}~sufficiently reflect model predictions and are perceived by human annotators as more understandable, as better justifying the model predictions, and as more trustworthy, compared to strong baseline interpretability methods. \section{{\textsc{SelfExplain}}} \label{sec:model} Let $\mathcal{M}$ be a neural $C$-class classification model that maps $\mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{X}$ are the inputs and $\mathcal{Y}$ are the outputs. {\textsc{SelfExplain}}~builds interpretability into $\mathcal{M}$ and provides a set of explanations $\mathcal{Z}$ via high-level ``concepts'' that explain the classifier's predictions.
We first define interpretable concepts in \Sref{sec:concepts}. We then describe how these concepts are incorporated into a concept-aware encoder in \Sref{sec:concept_aware_encoder}. In \Sref{sec:lil}, we define our Local Interpretability Layer (\LIL), which provides local explanations by assigning relevance scores to the constituent concepts of the input. In \Sref{sec:gil}, we define our Global Interpretability Layer (\GIL), which provides global explanations by retrieving influential concepts from the training data. Finally, in \Sref{subsec:training}, we describe the end-to-end training procedure and optimization objectives. \subsection{Defining human-interpretable concepts} \label{sec:concepts} Since natural language is highly compositional \citep{montague1970english}, it is essential that interpreting a text sequence goes beyond individual words. We define the set of basic units that are interpretable by humans as \emph{concepts}. In principle, concepts can be words, phrases, sentences, paragraphs or abstract entities. In this work, we focus on phrases as our concepts, specifically all non-terminals in a constituency parse tree. Given any sequence $\mathbf{x} = \{w_i\}_{1:T}$, we decompose the sequence into its component non-terminals $N(\mathbf{x}) = \{ nt_j \}_{1:J}$, where $J$ denotes the number of non-terminal phrases in $\mathbf{x}$. Given an input sample $\mathbf{x}$, $\mathcal{M}$ is trained to produce two types of explanations: (i) global explanations from the training data $\mathcal{X}_{train}$ and (ii) local explanations, which are phrases in $\mathbf{x}$. We show an example in Figure~\ref{fig:local_global_example}. Global explanations are achieved by identifying the most influential concepts $\mathcal{C}_G$ from the ``concept store'' $\mathbf{Q}$, which is constructed to contain all concepts from the training set $\mathcal{X}_{train}$ by extracting phrases under each non-terminal in a syntax tree for every data sample (detailed in \Sref{sec:gil}). 
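For concreteness, extracting all non-terminal phrases from a constituency parse can be sketched as follows; the tuple-based parse encoding and function names here are illustrative, not part of our implementation:

```python
# Minimal sketch: concepts are the phrases under every non-terminal of
# a constituency parse, represented as nested (label, children...) tuples.

def leaves(node):
    """Collect the words under a parse node."""
    if isinstance(node, str):          # a terminal (word)
        return [node]
    words = []
    for child in node[1:]:             # node = (label, child, child, ...)
        words += leaves(child)
    return words

def extract_concepts(node):
    """Return the phrase spanned by each non-terminal, root included."""
    if isinstance(node, str):
        return []
    concepts = [" ".join(leaves(node))]
    for child in node[1:]:
        concepts += extract_concepts(child)
    return concepts

parse = ("S",
         ("NP", ("DT", "the"), ("JJ", "good"), ("NN", "soup")),
         ("VP", ("VBD", "was"), ("JJ", "tasty")))
print(extract_concepts(parse))
# → ['the good soup was tasty', 'the good soup', 'the', 'good', 'soup',
#    'was tasty', 'was', 'tasty']
```

In practice the root phrase is handled separately (as the pooled sentence representation), while the remaining spans serve as candidate concepts.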
Local interpretability is achieved by decomposing the input sample $\mathbf{x}$ into its constituent phrases under each non-terminal in its syntax tree. Then each concept is assigned a score that quantifies its contribution to the sample's label distribution for a given task; $\mathcal{M}$ then outputs the most relevant local concepts $\mathcal{C}_L$. \begin{figure*}[!ht] \centering {\includegraphics[width=0.95\textwidth]{imgs/architecture.pdf}} \caption{Model Architecture: Our architecture comprises a base encoder that encodes the input and its constituent non-terminals. \GIL~ then uses MIPS to retrieve the most influential concepts that \emph{globally} explain the sample, while \LIL~ computes a relevance score for each $nt_j$ that quantifies its relevance to predict the label. The model interpretability is enforced through regularization. Examples of top \LIL~concepts (extracted from the input) are \{\emph{the good soup}, \emph{good}\}, and of top \GIL~concepts (from the training data) are \{\textit{great food, excellent taste}\}.} \label{fig:architecture} \end{figure*} \subsection{Concept-Aware Encoder $\mathbf{E}$} \label{sec:concept_aware_encoder} We obtain the encoded representation of our input sequence $\mathbf{x} = \{w_i\}_{1:T}$ from a pretrained transformer model \cite{vaswani2017attention, liu2019roberta, yang2019xlnet} by extracting the final layer output as $\{\mathbf{h}_i\}_{1:T}$. Additionally, we compute representations of concepts, $\{\mathbf{u}_j\}_{1:J}$. For each non-terminal $nt_j$ in $\mathbf{x}$, we represent it as the mean of its constituent word representations $\mathbf{u}_j = \dfrac{\sum_{w_i \in nt_j} \mathbf{h}_i}{len(nt_j)}$ where $len(nt_j)$ represents the number of words in the phrase $nt_j$.
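The mean-pooled concept representation $\mathbf{u}_j$ can be sketched numerically as follows (toy dimensions and fabricated encoder outputs):

```python
import numpy as np

# Sketch of u_j: the mean of the encoder's final-layer vectors h_i over
# the words inside the phrase nt_j. Encoder outputs here are fabricated.
def concept_representation(h, word_indices):
    """Mean-pool hidden states over the word positions of a phrase."""
    return h[word_indices].mean(axis=0)

T, D = 6, 4                                    # toy sequence length, hidden size
h = np.arange(T * D, dtype=float).reshape(T, D)
u_j = concept_representation(h, [1, 2])        # phrase nt_j = words 1..2
print(u_j)  # → [6. 7. 8. 9.]
```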
To represent the root node ($\mathbb{S}$) of the syntax tree, $nt_{\mathbb{S}}$, we use the pooled representation (the \texttt{[CLS]} token representation) of the pretrained transformer, denoted $\mathbf{u}_{\mathbb{S}}$.\footnote{We experimented with different pooling strategies (mean pooling, sum pooling and pooled \texttt{[CLS]} token representation) and all of them performed similarly. We chose to use the pooled \texttt{[CLS]} token for the final model as this is the most commonly used method for representing the entire input.} Following the traditional neural classifier setup, the output of the classification layer $l_Y$ is computed as follows: \begin{align} l_Y &= \texttt{softmax}(\mathbf{W}_y \times g(\mathbf{u}_{\mathbb{S}}) + \mathbf{b}_y) \nonumber \\ P_C &= \argmax(l_Y) \nonumber \end{align} where $g$ is a $relu$ activation layer, $\mathbf{W}_y \in \mathbb{R}^{D \times C}$, and $P_C$ denotes the index of the predicted class. \subsection{Local Interpretability Layer (\textbf{\LIL})} \label{sec:lil} For local interpretability, we compute a local relevance score for all input concepts $\{nt_j\}_{1:J}$ from the sample $\mathbf{x}$. Approaches that assign relative importance scores to input features through activation differences \citep{pmlr-v70-shrikumar17a,Montavon2017ExplainingNC} are widely adopted for interpretability in computer vision applications. Motivated by this, we adopt a similar approach for NLP applications, where we learn the attribution of each concept to the final label distribution via their activation differences. Each non-terminal $nt_j$ is assigned a score that quantifies its contribution to the label in comparison to the contribution of the root node $nt_{\mathbb{S}}$. The most contributing phrases $\mathcal{C}_L$ are used to locally explain the model decisions. Given the encoder $\mathbf{E}$, \LIL~ computes the contribution solely from $nt_j$ to the final prediction.
We first build a representation of the input without the contribution of the phrase $nt_j$ and use it to score the labels: \vspace{-7mm} \begin{align} t_j &= g(\mathbf{u}_j) - g(\mathbf{u}_{\mathbb{S}}) \nonumber \\ s_j &= \texttt{softmax}(\mathbf{W}_v \times t_j + \mathbf{b}_v) \nonumber \end{align} where $g$ is a $relu$ activation function, $t_j \in \mathbb{R}^D$, $s_j \in \mathbb{R}^C$, $\mathbf{W}_v \in \mathbb{R}^{D \times C} $. Here, $s_j$ signifies a label distribution without the contribution of $nt_j$. Using this, the relevance score of each $nt_j$ for the final prediction is given by the difference between the classifier score for the predicted label based on the entire input and the label score based on the input without $nt_j$: $\mathbf{r}_j = (l_{Y})_i\rvert_{i = P_C} - (s_j)_i\rvert_{i = P_C} \nonumber, $ where $\mathbf{r}_j$ is the relevance score of the concept $nt_j$. \subsection{Global Interpretability Layer (\textbf{\GIL})} \label{sec:gil} The Global Interpretability Layer \GIL~aims to interpret each data sample $\mathbf{x}$ by providing a set of $K$ concepts from the training data which most influenced the model's predictions. Such an approach is advantageous as we can now understand how important concepts from the training set influenced the model decision to predict the label of a new input, providing more granularity than methods that use entire samples from the training data for post-hoc interpretability \citep{koh2017understanding,han2020explaining}. We first build a \emph{concept store} $Q$ which holds all the concepts from the training data. Given the model $\mathcal{M}$, we represent each concept candidate from the training data, $q_k$, as a mean-pooled representation of its constituent words $q_k = \dfrac{\sum_{w \in q_k} e(w)}{len(q_k)} \in \mathbb{R}^D$, where $e$ represents the embedding layer of $\mathcal{M}$ and $len(q_k)$ represents the number of words in $q_k$.
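The \LIL~computation above (activation difference, label rescoring, and the relevance score $\mathbf{r}_j$) can be sketched numerically; the weights below are random stand-ins for trained parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

relu = lambda v: np.maximum(v, 0.0)

rng = np.random.default_rng(0)
D, C = 4, 2                        # toy hidden size and number of classes
W_y = rng.normal(size=(C, D)); b_y = np.zeros(C)
W_v = rng.normal(size=(C, D)); b_v = np.zeros(C)

u_root = rng.normal(size=D)        # u_S: pooled [CLS] representation
u_j = rng.normal(size=D)           # representation of the phrase nt_j

l_y = softmax(W_y @ relu(u_root) + b_y)   # label distribution l_Y
P_C = int(np.argmax(l_y))                 # predicted class index

t_j = relu(u_j) - relu(u_root)            # activation difference
s_j = softmax(W_v @ t_j + b_v)            # labels without nt_j's contribution

r_j = l_y[P_C] - s_j[P_C]                 # relevance score of concept nt_j
print(float(r_j))
```

Ranking the $\mathbf{r}_j$ over all phrases of the input then yields the most relevant local concepts $\mathcal{C}_L$.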
$Q$ is represented by a set of $\{q\}_{1:N_Q}$, where $N_Q$ is the number of concepts from the training data. As the model $\mathcal{M}$ is finetuned for a downstream task, the representations $q_k$ are constantly updated. Typically, we re-index all candidate representations $q_k$ after every fixed number of training steps. For any input $\mathbf{x}$, \GIL~produces a set of $K$ concepts $\{q\}_{1:K}$ from $Q$ that are most influential as defined by the cosine similarity function: $$d(\mathbf{x}, Q) = \dfrac{\mathbf{x} \cdot q}{\| \mathbf{x} \| \| q \|} \quad \forall q \in Q $$ Taking $\mathbf{u}_{\mathbb{S}}$ as input, \GIL~uses dense inner product search to retrieve the top-$K$ influential concepts $\mathcal{C}_G$ for the sample. Differentiable approaches through Maximum Inner Product Search (MIPS) have been shown to be effective in question-answering settings \citep{guu2020realm, Dhingra2020Differentiable} for leveraging retrieved knowledge for reasoning.\footnote{MIPS can often be efficiently scaled using approximate algorithms \citep{shrivastava2014asymmetric}.} Motivated by this, we repurpose this retrieval approach to identify the influential concepts from the training data and learn it end-to-end via backpropagation. Our inner product model for \GIL~is defined as follows: $$p (q | \mathbf{x}_i) = \dfrac{\exp \; d(\mathbf{u}_{\mathbb{S}}, q)}{\sum_{q'}\exp \; d(\mathbf{u}_{\mathbb{S}}, q')} \nonumber$$ \subsection{Training} \label{subsec:training} {\textsc{SelfExplain}}~is trained to maximize the conditional log-likelihood of predicting the class at all the final layers: linear (for label prediction), \LIL, and \GIL. Regularizing models with explanation-specific losses has been shown to improve inherently interpretable models \citep{melis2018towards} for local interpretability. We extend this idea to both the global and local interpretable outputs of our classifier model.
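For illustration, the \GIL~retrieval described above amounts to a cosine-similarity top-$K$ search over the concept store; a minimal sketch with a toy store (the example concepts and values are illustrative):

```python
import numpy as np

def top_k_concepts(u_root, Q, k=2):
    """Rank concept-store rows by cosine similarity to the input."""
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    un = u_root / np.linalg.norm(u_root)
    sims = Qn @ un                     # d(x, q) for every q in the store
    return np.argsort(-sims)[:k], sims

Q = np.array([[1.0, 0.0],    # e.g. "great food"
              [0.7, 0.7],    # e.g. "excellent taste"
              [0.0, 1.0]])   # e.g. "empty theatres"
u = np.array([1.0, 0.2])     # pooled input representation u_S
idx, sims = top_k_concepts(u, Q, k=2)
print(idx.tolist())  # → [0, 1]
```

At scale, the exhaustive matrix product is replaced by an (approximate) MIPS index, as noted above.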
For our training, we regularize the loss through the \GIL~and \LIL~layers by optimizing their output for the end task as well. For the \GIL~layer, we aggregate the scores over all the retrieved $q_{1:K}$ as a weighted sum, followed by an activation layer, a linear layer and a softmax to compute the log-likelihood loss as follows: \begin{align} l_{G} &= \texttt{softmax}( \mathbf{W}_u \times g(\sum_{k=1}^K \mathbf{w}_k \times q_k) + \mathbf{b}_u ) \nonumber \end{align} and $ \mathcal{L}_G = - \sum_{c=1}^{C} y_c \log (l_G)_c $ where the global interpretable concepts are denoted by $\mathcal{C}_G = q_{1:K}$, $\mathbf{W}_u \in \mathbb{R}^{D \times C}$, $\mathbf{w}_k \in \mathbb{R}$ and $g$ represents $relu$ activation, and $l_G$ represents the softmax output of the \GIL~layer. For the \LIL~layer, we compute a weighted aggregated representation over $s_j$ and compute the log-likelihood loss as follows: \begin{align} l_{L} &= \sum_{j, j \neq \mathbb{S}} \mathbf{w}_{sj} \times s_j,\text{ } \mathbf{w}_{sj} \in \mathbb{R} \nonumber \end{align} and $\mathcal{L}_L = - \sum_{c=1}^C y_c \log (l_L)_c $. To train the model, we optimize the following joint loss, $$\mathcal{L} = \alpha \times \mathcal{L}_G + \beta \times \mathcal{L}_L + \mathcal{L}_Y $$ where $\mathcal{L}_Y = - \sum_{c=1}^{C} y_c \log (l_Y)_c$. Here, $\alpha$ and $\beta$ are regularization hyper-parameters. All loss components use cross-entropy loss based on the task label $y_c$. \section{Related Work} \paragraph{Post-hoc Interpretation Methods:} Predominant methods for post-hoc interpretability in NLP are gradient-based \citep{Simonyan2014DeepIC,sundararajan2017axiomatic,smilkov2017smoothgrad}. Other post-hoc interpretability methods such as \citet{singh2018hierarchical} and \citet{Jin2020Towards} decompose relevant and irrelevant aspects from hidden states and obtain a relevance score.
While the methods above focus on local interpretability, works such as \citet{han2020explaining} aim to retrieve influential training samples for global interpretations. Global interpretability methods are useful not only to facilitate explainability, but also to detect and mitigate artifacts in data \citep{pezeshkpour2021combining,han2021influence-tuning}. \paragraph{Inherently Interpretable Models:} Heat maps based on attention \citep{bahdanau2014neural} are one of the commonly used interpretability tools for many downstream tasks such as machine translation \citep{luong-etal-2015-effective}, summarization \citep{rush-etal-2015-neural} and reading comprehension \citep{NIPS2015_5945}. Another recent line of work explores collecting \emph{rationales} \citep{lei-etal-2016-rationalizing} through expert annotations \citep{zaidan2008modeling}. Notable work in collecting external rationales includes Cos-E \citep{rajani2019salesforceexplain}, e-SNLI \citep{Camburu2018eSNLI} and, more recently, the ERASER benchmark \citep{deyoung-etal-2020-eraser}. Alternative lines of work in this class of models include \citet{card2019deep}, which interprets a given sample as a weighted sum of the training samples, while \citet{croce2019auditing} identifies influential training samples using a kernel-based transformation function. \citet{jiang-bansal-2019-self} produce interpretations of a given sample through modular architectures, where model decisions are explained through the outputs of intermediate modules. A class of inherently interpretable classifiers explains model predictions locally using human-understandable high-level \emph{concepts} such as prototypes \citep{melis2018towards,ChenEtAl2019} and interpretable classes \citep{koh2020concept,yeh2020completenessaware}. Such classifiers were recently proposed for computer vision applications but, despite their promise, have not yet been adopted in NLP.
{\textsc{SelfExplain}}~is similar in spirit to \citet{melis2018towards} but additionally provides explanations via training data concepts for neural text classification tasks.
\section{INTRODUCTION} Orthogonal frequency division multiplexing (OFDM) has been widely adopted in most digital video broadcasting standards, such as DVB-T \cite{DVBT}, DMB-T and ISDB-T. This success is due to its robustness to frequency-selective fading and to the simplicity of the equalization function at the receiver. Indeed, by implementing an inverse fast Fourier transform (IFFT) at the transmitter and an FFT at the receiver, OFDM splits the single channel into multiple, parallel intersymbol interference (ISI) free subchannels. Therefore, each subchannel, also called a subcarrier, can be easily equalized with a single coefficient. To equalize the signal, the receiver needs to estimate the channel frequency response for each subcarrier. In the DVB-T standard, some subcarriers are used as pilots, and interpolation filtering techniques are applied to obtain the channel response for any subcarrier. Nevertheless, these pilots reduce the spectral efficiency of the system. To limit this problem, we propose to add a two-dimensional (2D) linear precoding (LP) function before the OFDM modulation. The basic idea is to dedicate one of the precoding sequences to transmitting a so-called spread pilot \cite{Cariou-IEE07} that will be used for the channel estimation. The merits of this channel estimation technique are due not only to the saving of pilot resources, but also to the flexibility offered by the adjustable time and frequency spreading lengths. In addition, note that the precoding component can be exploited to reduce the peak-to-average power ratio (PAPR) of the OFDM system \cite{Nobilet}, or to perform frequency synchronisation. The contribution of this article is twofold. First, a general framework is proposed to describe the 2D precoding technique used for channel estimation.
Secondly, exploiting some properties of random matrix and free probability theories \cite{Tse-trans-InfoTheory,Debbah-trans-InfoTheory}, an analytical study of the proposed estimation method is presented. The article is organized as follows. In section 2, we present the principles of 2D LP OFDM and detail the channel estimation technique using the spread pilots. In section 3, we analyse the theoretical performance of this channel estimation by deriving the analytical expression of its mean square error (MSE). Then, simulation results in terms of MSE and bit error rate (BER) are presented and discussed in section 4. Concluding remarks are given in section 5. \section{SYSTEM DESCRIPTION} \subsection{2D LP OFDM} Fig. \ref{TX_RX} exhibits the proposed 2D LP OFDM system exploiting the spread pilot channel estimation technique. First of all, data bits are encoded, interleaved and converted to complex symbols $x_{m,s}\left[i\right]$. These data symbols are assumed to have zero mean and unit variance. They are interleaved before being precoded by a Walsh-Hadamard (WH) sequence $\textbf{c}_{i}$ of $L$ chips, with $0 \leq i \leq L-1$, where $L=2^{n}$ and $n \in \mathbb{N}$. The chips obtained are mapped over a subset of $L=L_{t} \times L_{f}$ subcarriers, with $L_{t}$ and $L_{f}$ the time and frequency spreading factors respectively. The first $L_{t}$ chips are allocated in the time direction. The next blocks of $L_{t}$ chips are allocated identically on adjacent subcarriers, as illustrated in Fig. \ref{ChipMapping}. Therefore, the 2D chip mapping follows a zigzag in time. Let us define a frame as a set of $L_{t}$ adjacent OFDM symbols, and a sub-band as a set of $L_{f}$ adjacent subcarriers. In order to distinguish the different subsets of subcarriers, we define $m$ and $s$ as the indexes referring to the frame and the sub-band respectively, with $0\leq s \leq S-1$.
Given these notations, each chip $y_{m,s}\left[n,q\right]$ represents the complex symbol transmitted on the $n$th subcarrier during the $q$th OFDM symbol of the subset of subcarriers $\left[m,s\right]$, with $0\leq n \leq L_{f}-1$ and $0\leq q\leq L_{t}-1$. Hence, the transmitted signal on a subset of subcarriers $\left[m,s\right]$ can be written as: \begin{equation} \textbf{Y}_{m,s} = \textbf{C} \textbf{P} \textbf{x}_{m,s} \end{equation} where \small $\textbf{x}_{m,s} = \left[ x_{m,s}\left[0\right] \dots x_{m,s}\left[i\right] \dots x_{m,s}\left[L-1\right] \right]^{T}$ is the \small $\left[L\times1\right]$ \normalsize complex symbol vector, \small $\textbf{P} = \text{diag} \left\{ \sqrt{P_{0}} \dots \sqrt{P_{i}} \dots \sqrt{P_{L-1}} \right\}$ \normalsize is a \small $\left[L\times L\right]$ \normalsize diagonal matrix where $P_{i}$ is the power assigned to symbol $x_{m,s}\left[i\right]$, and $\textbf{C} = \left[\textbf{c}_{0} \dots \textbf{c}_{i} \dots \textbf{c}_{L-1}\right]$ is the WH precoding matrix whose $i$th column corresponds to the $i$th precoding sequence \small $\textbf{c}_{i} = [ c_{i}\left[0,0\right] \dots c_{i}\left[n,q\right] \dots c_{i}[L_{f}-1,L_{t}-1] ]^{T}$. \normalsize We assume normalized precoding sequences, i.e. $c_{i}\left[n,q\right] = \pm\frac{1}{\sqrt{L}}$. Since the 2D chip mapping applied follows a zigzag in time, $c_{i}\left[n,q\right]$ is the $(n\times L_{t}+q)$th chip of the $i$th precoding sequence $\textbf{c}_{i}$. \begin{figure} [t] \begin{center} \includegraphics[width=1 \linewidth]{TX_RX2.ps} \caption{2D LP OFDM transmitter and receiver based on the spread pilot channel estimation technique} \label{TX_RX} \end{center} \end{figure} \subsection{Spread pilot channel estimation principles} Inspired by pilot-embedded techniques \cite{Chin-Globecom01}, channel estimation based on spread pilots consists of transmitting low-level pilot sequences concurrently with the data.
In order to reduce the cross-interference between pilots and data, the idea is to select a pilot sequence which is orthogonal to the data sequences. This is obtained by allocating one of the WH orthogonal sequences $\textbf{c}_{p}$ to the pilots on every subset of subcarriers. \begin{figure} [t] \begin{center} \includegraphics[width=0.7 \linewidth]{2Dter} \caption{2D chip mapping scheme} \label{ChipMapping} \end{center} \end{figure} Let $\textbf{H}_{m,s}$ be the \small $\left[L\times L\right]$ \normalsize diagonal matrix of the channel coefficients associated with a given subset of subcarriers $\left[m,s\right]$. After OFDM demodulation and 2D chip de-mapping, the received signal can be expressed as: \begin{equation} \textbf{Z}_{m,s} = \textbf{H}_{m,s} \textbf{Y}_{m,s} + \textbf{w}_{m,s} \end{equation} where $\textbf{w}_{m,s} = [ w_{m,s}\left[0,0 \right] \dots w_{m,s}\left[n,q\right] \dots w_{m,s} [ L_{f}-1,L_{t}-1 ] ]^{T}$ \normalsize is the additive white Gaussian noise (AWGN) vector having zero mean and variance $\sigma^{2}_{w} = E\left\{\left|w_{m,s}\left[n,q\right]\right|^{2}\right\}$. At the receiver, de-precoding is performed before equalization. Therefore, an average channel coefficient $\widehat{H}_{avg}\left[m,s\right]$ is estimated per subset of subcarriers.
It is obtained by de-precoding the received signal $\textbf{Z}_{m,s}$ with the pilot precoding sequence $\textbf{c}_{p}^{H}$ and then dividing by the pilot symbol $x_{m,s}^{\left(p\right)} = \sqrt{P_{p}} x_{m,s}\left[p\right]$ known by the receiver: \begin{align} \label{H_estim} \widehat{H}_{avg}\left[m,s\right] &= \frac{1}{x_{m,s}^{\left(p\right)}} \textbf{c}_{p}^{H} \textbf{Z}_{m,s} \nonumber \\ &= \frac{1}{x_{m,s}^{\left(p\right)}} \textbf{c}_{p}^{H} \left[\textbf{H}_{m,s} \textbf{C} \textbf{P} \textbf{x}_{m,s} + \textbf{w}_{m,s} \right] \end{align} \normalsize Let us define $\textbf{C}_{u} = [\textbf{c}_{0} \dots \textbf{c}_{i\neq p} \dots \textbf{c}_{L-1} ]$ the \small $\left[L\times\left(L-1\right)\right]$ \normalsize data precoding matrix, $\textbf{P}_{u} = \text{diag} \left\{ \sqrt{P_{0}} \dots \sqrt{P_{i\neq p}} \dots \sqrt{P_{L-1}} \right\}$ \normalsize the \small $\left[ \left(L-1\right) \times \left(L-1\right) \right]$ \normalsize diagonal matrix whose entries are the powers assigned to the data symbols, and \small $\textbf{x}_{m,s}^{\left(u\right)} = [ x_{m,s}[0] \dots $ \\ $x_{m,s}\left[i\neq p\right] \dots x_{m,s}\left[L-1\right] ]^{T}$ \normalsize the \small $[\left(L-1\right)\times1]$ \normalsize data symbol vector.
Given these notations, (\ref{H_estim}) can be rewritten as: \small \begin{align} \widehat{H}_{avg} & \left[m,s\right] \nonumber \\ &= \frac{1}{x_{m,s}^{\left(p\right)}} \left[ \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{c}_{p} x_{m,s}^{\left(p\right)} + \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{C}_{u} \textbf{P}_{u} \textbf{x}_{m,s}^{\left(u\right)} + \textbf{c}_{p}^{H} \textbf{w}_{m,s} \right] \nonumber \\ &= \frac{1}{L} tr\left\{ \textbf{H}_{m,s} \right\} + \frac{1}{x_{m,s}^{\left(p\right)}} \left[ \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{C}_{u} \textbf{P}_{u} \textbf{x}_{m,s}^{\left(u\right)} + \textbf{c}_{p}^{H} \textbf{w}_{m,s} \right] \nonumber \\ &= H_{avg}\left[m,s\right] + \textrm{SI}\left[m,s\right] + w' \end{align} \normalsize The first term $H_{avg}\left[m,s\right]$ is the average channel response globally experienced by the subset of subcarriers $\left[m,s\right]$. The second term represents the self-interference (SI). It results from the loss of orthogonality between the precoding sequences caused by the variance of the channel coefficients over the subset of subcarriers. In the sequel, we propose to analyse its variance. \section{THEORETICAL PERFORMANCE OF THE ESTIMATOR} In order to analyse the theoretical performance of the proposed estimator, we evaluate its MSE under the assumption of a wide-sense stationary uncorrelated scattering (WSSUS) channel. 
\begin{align} \label{MSE} \textrm{MSE}\left[m,s\right] &= E\left\{ \left| \widehat{H}_{avg}\left[m,s\right] - H_{avg}\left[m,s\right] \right|^{2} \right\} \nonumber \\ &= E\left\{ \left| \textrm{SI}\left[m,s\right] \right|^{2} \right\} + E\left\{ \left| w' \right|^{2} \right\} \end{align} First, let us compute the SI variance: \begin{equation} \label{VarSI} E\left\{ \left| \textrm{SI}\left[m,s\right] \right|^{2} \right\} = \frac{1}{P_{p}} E\left\{ \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{C}_{u} \textbf{P}_{u}' \textbf{C}_{u}^{H} \textbf{H}_{m,s}^{H} \textbf{c}_{p} \right\} \end{equation} where $\textbf{P}_{u}' = \textbf{P}_{u} \textbf{P}_{u}^{H} = \text{diag} \left\{ P_{0} \dots P_{i\neq p} \dots P_{L-1} \right\}$. In practice, (\ref{VarSI}) cannot be evaluated directly due to its complexity. Applying some properties of random matrix and free probability theories \cite{Tse-trans-InfoTheory} \cite{Debbah-trans-InfoTheory}, which are stated in the Appendix, a new SI variance formula can be derived: \begin{align} \label{VarSI2} &E\left\{ \left| \textrm{SI}\left[m,s\right] \right|^{2} \right\} = \frac{1}{P_{p}} E\left\{ \textbf{c}_{p}^{H} \textbf{H}_{m,s} \left( I-\textbf{c}_{p}\textbf{c}_{p}^{H} \right) \textbf{H}_{m,s}^{H} \textbf{c}_{p} \right\} \nonumber \\ &= \frac{1}{P_{p}} E\left\{ \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{H}_{m,s}^{H} \textbf{c}_{p} - \textbf{c}_{p}^{H} \textbf{H}_{m,s} \textbf{c}_{p} \textbf{c}_{p}^{H} \textbf{H}_{m,s}^{H} \textbf{c}_{p} \right\} \nonumber \\ &= \frac{1}{P_{p}} E\left\{ \underbrace{ \frac{1}{L} tr\left(\textbf{H}_{m,s} \textbf{H}_{m,s}^{H}\right) }_{A} - \frac{1}{L^{2}} \underbrace{ tr\left(\textbf{H}_{m,s}\right) tr\left(\textbf{H}_{m,s}^{H}\right) }_{B} \right\} \end{align} The expectation of $A$ is the average power of the channel coefficients on the subset of subcarriers $\left[m,s\right]$.
Assuming that the channel coefficients are normalized, its value is one: \begin{align} \label{A} E\left\{ \frac{1}{L} tr\left( \textbf{H}_{m,s} \textbf{H}_{m,s}^{H} \right) \right\} &= \frac{1}{L} \sum_{n=0}^{L_{f}-1} \sum_{q=0}^{L_{t}-1 \vphantom{L_{f}}} E\left\{ \left| H_{m,s}\left[n,q\right] \right|^{2} \right\} \nonumber \\ &= 1 \end{align} The expectation of $B$ is a function of the autocorrelation of the channel $R_{HH}\left(\Delta n,\Delta q\right)$ whose expression (\ref{Rhh_final}) is developed in the Appendix. Indeed, it can be written: \small \begin{equation} \label{B} \textrm{E} \left\{ tr\left(\textbf{H}_{m,s}\right) tr\left(\textbf{H}_{m,s}^{H}\right) \right\} = \sum^{L_{f}-1}_{n=0} \sum^{L_{t}-1 \vphantom{L_{f}}}_{q=0} \sum^{L_{f}-1}_{n'=0} \sum^{L_{t}-1 \vphantom{L_{f}}}_{q'=0} R_{HH}\left(\Delta n,\Delta q\right) \end{equation} \normalsize where $\Delta n=n-n'$ and $\Delta q=q-q'$. Note that the autocorrelation function of the channel does not depend on the subset of subcarriers since the channel is WSSUS.
By combining (\ref{A}) and (\ref{B}), the SI variance expression (\ref{VarSI2}) can be expressed as: \begin{align} \label{VarSI3} E & \left\{ \left| \textrm{SI} \right|^{2} \right\} = \nonumber \\ & \frac{1}{P_{p}} \left( 1 - \frac{1}{L^{2}} \sum^{L_{f}-1}_{n=0} \sum^{L_{t}-1 \vphantom{L_{f}-1}}_{q=0} \sum^{L_{f}-1}_{n'=0} \sum^{L_{t}-1 \vphantom{L_{f}-1}}_{q'=0} R_{HH}\left(\Delta n,\Delta q\right) \right) \end{align} Now, let us compute the noise variance: \begin{align} \label{VarNoise} E\left\{ \left| w' \right|^{2} \right\} &= \frac{1}{P_{p}} \textrm{E} \left\{ \textbf{c}_{p}^{H} \textbf{w}_{m,s} \textbf{w}_{m,s}^{H} \textbf{c}_{p} \right\} \nonumber \\ &= \frac{1}{P_{p}} \sigma_{w}^{2} \end{align} Finally, by combining the expressions of the SI variance (\ref{VarSI3}) and the noise variance (\ref{VarNoise}), the MSE (\ref{MSE}) becomes: \small \begin{align} \label{MSE2} \textrm{MSE} = \frac{1}{P_{p}} \left(1-\frac{1}{L^{2}} \sum^{L_{f}-1}_{n=0} \sum^{L_{t}-1 \vphantom{L_{f}-1}}_{q=0} \sum^{L_{f}-1}_{n'=0} \sum^{L_{t}-1 \vphantom{L_{f}-1}}_{q'=0} R_{HH}\left(\Delta n,\Delta q\right)+\sigma_{w}^{2} \right) \end{align} \normalsize The analytical expression of the MSE of our estimator depends on the pilot power, the autocorrelation function of the channel and the noise variance. The autocorrelation of the channel (\ref{Rhh_final}) is a function of both the coherence bandwidth and the coherence time. We can then expect that the proposed estimator will be all the more efficient as the channel coefficients are highly correlated within each subset of subcarriers. One can actually check that if the channel is flat over a subset of subcarriers, then the SI (\ref{VarSI3}) is zero. Therefore, it is important to optimize the time and frequency spreading lengths, $L_{t}$ and $L_{f}$, according to the transmission scenario.
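The last remark can be checked numerically. The following Python sketch (illustrative assumptions: $L=4$, unit powers, pilot index $p=0$, Sylvester-constructed WH sequences, no noise) verifies that a channel that is flat over the subset of subcarriers is estimated exactly, i.e. the SI vanishes, and that a constant autocorrelation makes the quadruple sum in (\ref{MSE2}) equal to $L^{2}$, leaving only the noise term $\sigma_{w}^{2}/P_{p}$.

```python
# Numerical check of the flat-channel remark (illustrative assumptions:
# L = 4, unit powers P_i = 1, pilot index p = 0, no noise, WH sequences
# built by the Sylvester construction).

def walsh_hadamard(L):
    """Columns are orthonormal +-1/sqrt(L) Walsh-Hadamard sequences."""
    H = [[1.0]]
    while len(H) < L:
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    s = L ** -0.5
    return [[v * s for v in r] for r in H]

L, p = 4, 0
C = walsh_hadamard(L)                   # c_i[k] = C[k][i]
x = [1.0, -1.0, 1.0, -1.0]              # symbols; x[p] is the known pilot
h = 0.7 - 0.3j                          # flat channel coefficient

Y = [sum(C[k][i] * x[i] for i in range(L)) for k in range(L)]   # Y = C x
Z = [h * y for y in Y]                                          # Z = H Y
h_hat = sum(C[k][p] * Z[k] for k in range(L)) / x[p]            # de-precoding
flat_channel_exact = abs(h_hat - h) < 1e-12     # True: the SI vanishes

def mse(L_f, L_t, P_p, sigma2_w, R_HH):
    """Closed-form MSE with a user-supplied autocorrelation R_HH."""
    Lc = L_f * L_t
    s4 = sum(R_HH(n - n2, q - q2)
             for n in range(L_f) for q in range(L_t)
             for n2 in range(L_f) for q2 in range(L_t))
    return (1.0 - s4 / Lc ** 2 + sigma2_w) / P_p

# Flat channel: R_HH == 1 everywhere, only the noise term remains.
mse_flat = mse(L_f=2, L_t=2, P_p=2.0, sigma2_w=0.1,
               R_HH=lambda dn, dq: 1.0)
print(flat_channel_exact, mse_flat)     # True 0.05
```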
\section{SIMULATION RESULTS} In this section, we analyse the performance of the proposed 2D LP OFDM system compared to the DVB-T standard under the COST207 Typical Urban 6-path (TU6) channel model depicted in Table \ref{TU6}, with different mobile speeds. We define the parameter $\beta$ as the product of the maximum Doppler frequency $f_{D}$ and the total OFDM symbol duration $T_{\text{OFDM}}$. Table \ref{SimParam} gives the simulation parameters and the useful bit rates of the DVB-T system and the proposed system. In the proposed system, only one spread pilot symbol out of $L\geq16$ is used, whereas the DVB-T system uses one pilot subcarrier out of twelve. Therefore, gains in terms of spectral efficiency and useful bit rate are obtained compared to the DVB-T system. The larger the spreading factor $L$, the higher these gains. Nevertheless, an increase of the spreading length produces a higher SI value. Consequently, a trade-off has to be made between the gain in terms of spectral efficiency and the performance of the channel estimation. Fig. \ref{eqm} depicts the estimator performance in terms of MSE for QPSK data symbols, different mobile speeds and different spreading factors. The curves represent the MSE obtained with the analytical expression (\ref{MSE2}), and the markers those obtained by simulation. We note that the MSE values measured by simulation are very close to those predicted by the MSE formula. This validates the analytical development made in Section 3. We also note that beyond a given ratio of the energy per bit to the noise spectral density ($\frac{Eb}{No}$), the MSE reaches a floor, which is easily interpreted as being due to the SI (\ref{MSE}). Fig. \ref{ber_speed10} and Fig. \ref{ber_speed60} give the BER measured at the output of the Viterbi decoder for mobile speeds of 20 km/h and 120 km/h respectively.
Note that the value of the pilot power $P_{p}$ has been optimized through simulation search in order to obtain the lowest BER for a given signal-to-noise ratio (SNR). The performance of the DVB-T system is given with perfect channel estimation, taking into account the power loss due to the amount of energy spent on the pilot subcarriers. It appears in Fig. \ref{ber_speed10}, for the low-speed scenario, that the system performance is similar to that of the DVB-T system with perfect channel estimation. This is because the power loss due to the pilots is lower with the proposed system. In Fig. \ref{ber_speed60}, for the high-speed scenario and QPSK, by choosing the spreading lengths offering the best performance, there is a loss of less than 1 dB at $\text{BER}=10^{-4}$ compared to the perfect channel estimation case. For 16QAM, the loss is less than 2.5 dB, which is quite satisfactory given that $\beta=0.018$, corresponding to a mobile speed of 120 km/h. \begin{table} \caption{Profile of TU6 channel} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \small & \small Tap1 & \small Tap2 & \small Tap3 & \small Tap4 & \small Tap5 & \small Tap6 & \small unit \\ \hline \small Delay &\small 0 &\small 0.2 &\small 0.5 &\small 1.6 &\small 2.3 &\small 5 & \small $\mu s$ \\ \hline \small Power &\small -3 &\small 0 &\small -5 &\small -6 &\small -8 &\small -10 & \small dB \\ \hline \end{tabular} \end{center} \label{TU6} \end{table} \begin{table} \caption{Simulation Parameters and Useful Bit Rates} \begin{center} \begin{tabular}{|l|l|} \hline \small Bandwidth &\small 8 MHz \\ \hline \small FFT size ($N_{\text{FFT}}$) &\small 2048 samples \\ \hline \small Guard Interval size &\small 512 samples (64 $\mu$s) \\ \hline \small OFDM symbol duration ($T_{\text{OFDM}})$ &\small 280 $\mu$s \\ \hline \small Rate of convolutional code &\small 1/2 using $\left(133,171\right)_{o}$ \\ \hline \small Constellations & \small QPSK and 16QAM\\ \hline \small Carrier frequency &\small 500 MHz \\
\hline \small Mobile Speeds &\small 20 km/h and 120 km/h \\ \hline \small Maximum Doppler frequencies ($f_{D}$) &\small 9.3 Hz and 55.6 Hz \\ \hline \small $\beta=f_{D} \times T_{\text{OFDM}}$ &\small 0.003 and 0.018 \\ \hline \hline \small Useful bit rates of DVB-T system &\small 4.98 Mbits/s for QPSK \\ \small &\small 9.95 Mbits/s for 16QAM \\ \hline \small Useful bit rates of 2D LP OFDM &\small 5.33 Mbits/s for $L=16$ \\ \small for QPSK &\small 5.51 Mbits/s for $L=32$ \\ &\small 5.60 Mbits/s for $L=64$ \\ \hline \small Useful bit rates of 2D LP OFDM &\small 10.66 Mbits/s for $L=16$ \\ \small for 16QAM &\small 11.02 Mbits/s for $L=32$ \\ &\small 11.20 Mbits/s for $L=64$ \\ \hline \end{tabular} \end{center} \label{SimParam} \end{table} \begin{figure} \begin{center} \includegraphics[width=1 \linewidth]{eqm_spawc4} \caption{MSE performance obtained with the analytical expression and by simulation ; QPSK ; Speeds: 20 km/h and 120 km/h ; $\beta$ = 0.003 and 0.018} \label{eqm} \end{center} \end{figure} \begin{figure} [t] \begin{center} \includegraphics[width=1 \linewidth]{spawc_speed10} \caption{Performance comparison between the DVB-T system with perfect channel estimation and the proposed 2D LP OFDM ; Speed: 20 km/h ; $\beta$ = 0.003 ; $L = 64$ ; $P_{p} = 7$} \label{ber_speed10} \end{center} \end{figure} \begin{figure} [t] \begin{center} \includegraphics[width=1 \linewidth]{spawc_speed60} \caption{Performance comparison between the DVB-T system with perfect channel estimation and the proposed 2D LP OFDM ; Speed: 120 km/h ; $\beta$ = 0.018 ; $L$ = 16, 32 and 64} \label{ber_speed60} \end{center} \end{figure} \section{CONCLUSION} In this paper, we propose a novel and very simple channel estimation technique for DVB-T. This technique, referred to as spread pilot channel estimation, reduces the overhead dedicated to channel estimation. An analytical expression of its MSE, which is a function of the autocorrelation of the channel, is given.
It highlights that the choice of the spreading factors has to be made according to the channel characteristics. More generally, this estimation approach provides good flexibility since it can be optimized for different mobility scenarios by choosing adequate time and frequency spreading factors. \\ \textit{This work was supported by the European project CELTIC B21C (``Broadcast for the 21st Century'').} \par \bigskip \begin{center} \textbf{APPENDIX} \end{center} \par \medskip In this section, a property from random matrix and free probability theories is stated, which is used for the computation of the SI variance (\ref{VarSI}). Furthermore, the computation of the autocorrelation function of the channel $R_{HH}$ is carried out. \par \bigskip \noindent\textbf{Random matrix and free probability theories property} \par \medskip Let $\textbf{C}$ be a Haar distributed unitary matrix \cite{Debbah-trans-InfoTheory} of size $\left[L\times L\right]$. $\textbf{C}=\left( \textbf{c}_{p},\textbf{C}_{u} \right)$ can be decomposed into a vector $\textbf{c}_{p}$ of size $\left[L\times 1\right]$ and a matrix $\textbf{C}_{u}$ of size $\left[L\times \left(L-1\right) \right]$. Given these assumptions, it is proven in \cite{Chauffray-trans-InfoTheory} that: \begin{equation} \label{RandomMatrix} \textbf{C}_{u} \textbf{P}_{u}' \textbf{C}_{u}^{H} \xrightarrow[]{L\rightarrow\infty} \alpha P_{u} \left( I-\textbf{c}_{p} \textbf{c}_{p}^{H} \right) \end{equation} where $\alpha=1$ is the system load and $P_{u}=1$ is the power of the interfering users.
\par \bigskip \noindent\textbf{Autocorrelation function of the channel} \par \medskip The autocorrelation function of the channel is written as: \begin{equation} \label{Rhh1} R_{HH}\left(\Delta n,\Delta q \right) = E\left\{ H_{m,s}\left[n,q\right] H_{m,s}^{*}\left[n-\Delta n,q-\Delta q \right] \right\} \end{equation} We can express the frequency channel coefficients $H_{m,s}\left[n,q\right]$ as a function of the channel impulse response (CIR): \begin{equation} \label{FFT_CIR} H_{m,s}\left[n,q\right] = \sum^{N_{\text{FFT}}-1}_{k=0} \gamma_{m,q}\left[k\right] e^{ -2j \pi \frac{\left(s L_{f}+n\right)}{N_{\text{FFT}}} k } \end{equation} where $\gamma_{m,q}\left[k\right]$ is the complex amplitude of the $k$th sample of the CIR during the $q$th OFDM symbol of the $m$th frame, and $N_{\text{FFT}}$ is the FFT size. Therefore, by injecting (\ref{FFT_CIR}) in (\ref{Rhh1}), the autocorrelation function of the channel can be rewritten as: \small \begin{equation} \label{Rhh2} \begin{split} R_{HH} & \left(\Delta n,\Delta q\right) = \\ & \frac{1}{N_{\text{FFT}}} \sum_{k=0}^{N_{\text{FFT}}-1} \sum_{k'=0}^{N_{\text{FFT}}-1} \textrm{E}\left\{ \gamma_{m,q}^{}\left[k\right] \gamma_{m,q-\Delta q}^{*}\left[k'\right] \right\} e^{-2j\pi \frac{\left(s L_{f}+n\right) k - \left(s L_{f}+n-\Delta n\right) k'}{N_{\text{FFT}}}} \end{split} \end{equation} \normalsize Since different taps of the CIR are uncorrelated, we obtain: \begin{equation} \label{Rhh3} \begin{split}R_{HH} & \left(\Delta n,\Delta q\right) = \\ & \frac{1}{N_{\text{FFT}}} \sum_{k=0}^{N_{\text{FFT}}-1} \textrm{E}\left\{ \gamma_{m,q}^{}\left[k\right] \gamma_{m,q-\Delta q}^{*}\left[k\right] \right\} e^{-2j\pi \frac{\Delta n}{N_{\text{FFT}}} k } \end{split} \end{equation} \normalsize According to Jakes' model \cite{Jakes}, the correlation of the $k$th sample of the CIR is: \begin{equation} \label{JakesEq} \textrm{E}\left\{ \gamma_{m,q}^{}\left[k\right] \gamma_{m,q-\Delta q}^{*}\left[k\right] \right\} = \rho_{k} J_{0}\left(2\pi f_{D} \Delta q T_{\text{OFDM}} \right) \end{equation}
where $\rho_{k}$ is the power of the $k$th sample of the CIR, $J_{0}\left(.\right)$ the zeroth-order Bessel function of the first kind, $f_{D}$ the maximum Doppler frequency and $T_{\text{OFDM}}$ the total OFDM symbol duration. Finally, the autocorrelation function of the channel (\ref{Rhh3}) can be expressed as: \begin{equation} \label{Rhh_final} \begin{split} R_{HH} & \left(\Delta n,\Delta q\right) = \\ & \frac{1}{N_{\text{FFT}}} \sum^{N_{\text{FFT}}-1}_{k=0} \rho_{k} e^{-2j \pi \frac{\Delta n}{N_{\text{FFT}}} k } J_{0}\left( 2\pi f_{D} \Delta q T_{\text{OFDM}} \right) \end{split} \end{equation}
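As an illustration, (\ref{Rhh_final}) can be evaluated numerically. In the Python sketch below, the TU6 delays of Table \ref{TU6} are rounded to sample indices at an 8 MHz sampling rate, the tap powers are normalized so that $R_{HH}\left(0,0\right)=1$ (absorbing the $1/N_{\text{FFT}}$ factor), and $J_{0}$ is computed from its power series; these numerical choices are illustrative assumptions.

```python
import cmath
import math

# Numerical sketch of the final autocorrelation expression.  The TU6
# delays are rounded to sample indices at an 8 MHz sampling rate, the
# tap powers are normalized so that R_HH(0, 0) = 1 (absorbing the
# 1/N_FFT factor), and J0 is evaluated by its power series.  All of
# these numerical choices are illustrative assumptions.

def J0(x):
    """Zeroth-order Bessel function of the first kind (power series)."""
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x / 2.0) ** 2 / m ** 2
        total += term
    return total

N_FFT, f_D, T_OFDM = 2048, 55.6, 280e-6          # simulation parameters
delays = [0, 2, 4, 13, 18, 40]                   # ~ TU6 delays in samples
powers_dB = [-3, 0, -5, -6, -8, -10]             # TU6 tap powers
rho = [10 ** (p / 10.0) for p in powers_dB]
total = sum(rho)
rho = [r / total for r in rho]                   # normalized power profile

def R_HH(dn, dq):
    """Channel autocorrelation at frequency lag dn and time lag dq."""
    freq = sum(r * cmath.exp(-2j * math.pi * dn * k / N_FFT)
               for r, k in zip(rho, delays))
    return freq * J0(2 * math.pi * f_D * dq * T_OFDM)

print(abs(R_HH(0, 0)))                    # unit power at zero lag
print(abs(R_HH(8, 0)), abs(R_HH(1, 0)))  # correlation decays with dn
```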
\section{Introduction} Neural machine translation (NMT), which uses a single large neural network to model the entire translation process, has recently been shown to outperform traditional statistical machine translation (SMT) such as phrase-based machine translation (PBMT) on several translation tasks \cite{koehn2003statistical,bahdanau2014neural,sennrich-haddow-birch:2016:WMT}. Compared to traditional SMT, NMT generally produces more fluent translations, but often sacrifices adequacy, such as translating source words into completely unrelated target words, over-translation or under-translation \cite{koehn2017six}. There are a number of methods that combine the two paradigms to address their respective weaknesses. For example, it is possible to incorporate neural features into traditional SMT models to disambiguate hypotheses \cite{neubig15wat,stahlberg-EtAl:2016:P16-2}. However, the search space of traditional SMT is usually limited by translation rule tables, reducing the ability of these models to generate hypotheses on the same level of fluency as NMT, even after reranking. There are also methods that incorporate knowledge from traditional SMT into NMT, such as lexical translation probabilities \cite{arthur-neubig-nakamura:2016:EMNLP2016,he2016improved}, phrase memory \cite{tang2016neural,zhang-EtAl:2017:Long2}, and $n$-gram posterior probabilities based on traditional SMT translation lattices \cite{stahlberg-EtAl:2017:EACLshort}. These improve the adequacy of NMT outputs, but do not impose hard alignment constraints like traditional SMT systems and therefore cannot effectively solve all over-translation or under-translation problems. 
In this paper, we propose a method that exploits an existing phrase-based translation model to compute the phrase-based decoding cost for a given NMT translation.% \footnote{In fact, our method can take in the output of \textit{any} up-stream system, but we experiment exclusively with using it to rerank NMT output.} That is, we force a phrase-based translation system to take in the source sentence and generate an NMT translation. Then we use the cost of this phrase-based forced decoding to rerank the NMT outputs. The phrase-based decoding cost will heavily punish completely unrelated translations, over-translations, and under-translations, as they will not be able to be found in the translation phrase table. One challenge in implementing this method is that the NMT output may not be in the search space of the phrase-based translation model, which is limited by the phrase-based translation rule table. To solve this problem, we propose a soft forced decoding algorithm, which is based on the standard phrase-based decoding algorithm and integrates new types of translation rules (deleting a source word or inserting a target word). The proposed forced decoding algorithm can always successfully find a decoding path and compute a phrase-based decoding cost for any NMT output. Another challenge is that we need a diverse NMT $n$-best list for reranking. Because beam search for NMT often lacks diversity in the beam -- candidates only have slight differences, with most of the words overlapping -- we use a random sampling method to obtain a more diverse $n$-best list. We test the proposed method on English-to-Chinese, English-to-Japanese, English-to-German and English-to-French translation tasks, obtaining large improvements over a strong NMT baseline that already incorporates discrete lexicon features. \begin{figure*}[t] \center \includegraphics[width=0.99\textwidth]{expand.pdf} \caption{An example of phrase-based decoding. 
} \label{f1} \end{figure*} \section{Attentional NMT} Our baseline NMT model is similar to the attentional model of \newcite{bahdanau2014neural}, which includes an encoder, a decoder and an attention (alignment) model. Given a source sentence $F = \left\{ {{f_1},...,{f_J}} \right\}$, the encoder learns an annotation ${h_j} = \left[ {{{\vec h}_j};{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\leftarrow$}} \over h} }_j}} \right]$ for $f_j$ using a bi-directional recurrent neural network. The decoder generates the target translation from left to right. The probability of generating next word $e_i$ is,\footnote{$g$, $f$ and $a$ in Equation~\ref{e1}, \ref{e2} and \ref{e4} are nonlinear, potentially multi-layered, functions.} \begin{equation} P_{NMT}\left( {{e_i}|e_1^{i - 1},F} \right) = softmax\left( {g\left( {{e_{i - 1}},{t_i},{s_i}} \right)} \right) \label{e1} \end{equation} where $t_i$ is a decoding state for time step $i$, computed by, \begin{equation} {t_i} = f\left( {{t_{i - 1}},{e_{i - 1}},{s_i}} \right) \label{e2} \end{equation} $s_i$ is a source representation for time $i$, calculated as, \begin{equation} {s_i} = \sum\limits_{j = 1}^J {{\alpha _{i,j}} \cdot {h_j}} \label{e3} \end{equation} where ${\alpha _{i,j}}$ scores how well the inputs around position $j$ and the output at position $i$ match, computed as, \begin{equation} {\alpha _{i,j}} = \frac{{\exp \left( {a\left( {{t_{i - 1}},{h_j}} \right)} \right)}}{{\sum\limits_{k = 1}^J {\exp \left( {a\left( {{t_{i - 1}},{h_k}} \right)} \right)} }} \label{e4} \end{equation} As we can see, NMT only learns an attention (alignment) distribution for each target word over all source words and does not provides exact mutually-exclusive word or phrase level alignments. As a result, it is known that attentional NMT systems make mistakes in over- or under-translation \cite{cohn-EtAl:2016:N16-1,mi-EtAl:2016:EMNLP2016}. 
\section{Phrase-based Forced Decoding for NMT} \subsection{Phrase-based SMT} In phrase-based SMT \cite{koehn2003statistical}, a phrase-based translation rule $r$ includes a source phrase, a target phrase and a translation score $S\left( r \right)$. Phrase-based translation rules can be extracted from the word-aligned training set and then used to translate new sentences. Word alignments for the training set can be obtained by IBM models \cite{brown1993mathematics}. Phrase-based decoding uses a list of translation rules to translate source phrases in the input sentence and generate target phrases from left to right. A basic concept in phrase-based decoding is the hypothesis. As shown in Figure~\ref{f1}, the hypothesis $H_1$ consists of two rules $r_1$ and $r_2$. The score of a hypothesis $S\left( H \right)$ can be calculated as the product of the scores of all applied rules.\footnote{In actual phrase-based decoding it is common to integrate reordering probabilities in the forced decoding score defined in Equation~\ref{p2}. However, because NMT generally produces more properly ordered sentences than traditional SMT, in this work we do not consider reordering probabilities in our forced decoding algorithm.} An existing hypothesis can be expanded into a new hypothesis by applying a new rule. As shown in Figure~\ref{f1}, $H_1$ can be expanded into $H_2$, $H_3$ and $H_4$. $H_2$ cannot be further expanded, because it covers all source words, while $H_3$ and $H_4$ can (and must) be further expanded. The decoder starts with an initial empty hypothesis $H_0$ and selects the hypothesis with the highest score from all completed hypotheses. During decoding, hypotheses are stored in stacks. For a source sentence with $J$ words, the decoder builds $J$ stacks. The hypotheses that cover $j$ source words are stored in stack $s_j$. The decoder expands hypotheses in ${s_1},{s_2},...,{s_J}$ in turn as shown in Algorithm~\ref{alg1}.
\begin{algorithm}[h] \caption{Standard phrase-based decoding.} \begin{algorithmic} \REQUIRE Source sentence $F$ with length $J$ \ENSURE Translation $E$ and decoding path $D$ \STATE initialize $H_0$ and ${s_1},{s_2},...,{s_J}$ \STATE \textsc{Expand}$\left( {{H_{0}}} \right)$ \FOR {$j=1$ to $J-1$} \FOR {each hypothesis $H_{jk}$ in $s_j$} \STATE \textsc{Expand}$\left( {{H_{jk}}} \right)$ \ENDFOR \ENDFOR \STATE select best hypothesis in $s_J$ \end{algorithmic} \label{alg1} \end{algorithm} Here, \textsc{Expand}$\left( {{H}} \right)$ expands $H$ to get new hypotheses and puts them into the corresponding stacks. For each stack, a beam of the best $n$ hypotheses is kept to speed up the decoding process. \subsection{Forced Decoding for NMT} As stated in the introduction, our goal is not to generate new hypotheses with phrase-based SMT, but instead to use the phrase-based model to calculate scores for NMT output. In order to do so, we can perform \textit{forced decoding}, which is very similar to the algorithm in the previous section but discards all partial hypotheses that do not match the NMT output. However, the NMT output is not limited by the phrase-based rule table, so there may be no decoding path that completely matches the NMT output when using only the phrase-based rules. To remedy this problem, inspired by previous work in forced decoding for training phrase-based SMT systems \cite{wuebker-mauser-ney:2010:ACL,wuebker-hwang-quirk:2012:WMT}, we propose a soft forced decoding algorithm that can always successfully find a decoding path for a source sentence $F$ and an NMT translation $E$. First, we introduce two new types of rules R$_1$ and R$_2$. \paragraph{R$_1$}A source word $f$ can be translated into a special word $\texttt{null}$. This corresponds to deleting $f$ during translation.
The score of deleting $f$ is calculated as, \begin{equation} s \left( {f \to \texttt{null}} \right) = \frac{{\text{unalign}\left( f \right)}}{{\left| \mathcal{T} \right|}} \label{r1} \end{equation} where ${\text{unalign}\left( f \right)}$ is how many times $f$ is unaligned in the word-aligned training set $\mathcal{T}$ and $\left| \mathcal{T} \right|$ is the number of sentence pairs in $\mathcal{T}$. \paragraph{R$_2$}A target word $e$ can be translated from a special word $\texttt{null}$, which corresponds to inserting $e$ during translation. The score of inserting $e$ is calculated as, \begin{equation} s \left( {\texttt{null} \to e} \right) = \frac{{\text{unalign}\left( e \right)}}{{\left| \mathcal{T} \right|}} \label{r2} \end{equation} where ${\text{unalign}\left( e \right)}$ is how many times $e$ is unaligned in $\mathcal{T}$. One motivation for Equations~\ref{r1} and~\ref{r2} is that function words usually have high frequencies, but do not have as clear a correspondence with a word in the other language as content words. As a result, in the training set function words are more often unaligned than content words. As an example, Table~\ref{times} and Table~\ref{timesfr} show how many times different words occur and how many times they are unaligned in the word-aligned training sets of the English-to-Chinese and English-to-French tasks in our experiments. As we can see, generally there are fewer unaligned words in the English-to-French task; however, function words are more likely to be unaligned in both tasks. Based on Equation~\ref{r1} and Equation~\ref{r2}, the scores of deleting or inserting ``of'' and ``a'' will be higher. \begin{table}[h] \center \begin{tabular}{l|rrrr} \hline Words&of&a&practice&water\\ \hline Occur&1.3M&1.0M&2.2K&29K\\ Unaligned&0.51M&0.41M&0.25K&3.5K\\ \hline \end{tabular} \caption{The number of times that words occur in the English-to-Chinese training corpus and the number of times that they are unaligned.
} \label{times} \end{table} \begin{table}[h] \center \begin{tabular}{l|rrrr} \hline Words&of&a&practice&water\\ \hline Occur&1.7M&0.83M&8.8K&7.4K\\ Unaligned&0.16M&0.12M&0.38K&0.19K\\ \hline \end{tabular} \caption{The number of times that words occur in the English-to-French training corpus and the number of times that they are unaligned. } \label{timesfr} \end{table} In our forced decoding, we choose to model the score of each translation rule that exists in the phrase table as the product of direct and inverse phrase translation probabilities. To make sure that the scale of the scores for R$_1$ and R$_2$ matches that of the other phrase rules (which are the product of two probabilities), we use the square of the score in Equation~\ref{r1}/\ref{r2} as the rule score for R$_1$/R$_2$. \begin{algorithm}[t] \caption{Forced phrase-based decoding.} \begin{algorithmic} \REQUIRE Source sentence $F$ with length $J$ and translation $E$ with length $I$ \ENSURE Decoding path $D$ \STATE initialize $H_0$ and ${s{'_1}},{s{'_2}},...,{s{'_I}}$ \STATE \textsc{Expand}$\left( {{H_{0}}} \right)$ \STATE expand $H_{0}$ with rule \texttt{null}$\to e_{1}$ \FOR {$i=1$ to $I-1$} \FOR {each hypothesis $H_{ik}$ in $s{'_i}$} \STATE \textsc{Expand}$\left( {{H_{ik}}} \right)$ \STATE expand $H_{ik}$ with rule \texttt{null}$\to e_{i+1}$ \ENDFOR \ENDFOR \FOR {each hypothesis $H_{Ik}$ in $s{'_I}$} \STATE update $S\left( {{H_{Ik}}} \right)$ for uncovered source words \ENDFOR \STATE select best hypothesis in $s{'_I}$ \end{algorithmic} \label{alg2} \end{algorithm} Algorithm~\ref{alg2} shows the forced decoding algorithm that integrates the new rules. Because the translation $E$ is given, the proposed forced decoding algorithm keeps $I$ stacks, where $I$ is the length of $E$. In other words, the number of stacks corresponds to the target sentence length during forced decoding, while it corresponds to the source sentence length during standard phrase-based decoding.
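To make the data structures concrete, the soft forced decoding procedure can be sketched in Python as follows. This is an illustrative toy, not the actual implementation: the phrase table, the insertion and deletion scores (playing the role of the squared scores of Equation~\ref{r1} and Equation~\ref{r2}) and the beam size are assumed values, and recombination keeps only the best score per source coverage.

```python
import itertools

# Toy sketch of soft forced decoding.  A hypothesis is a set of covered
# source positions; stack i holds hypotheses whose first i target words
# have been generated.  `table` maps (source phrase, target phrase)
# tuples to rule scores; `ins`/`dele` hold the squared-score analogues
# of the word insertion/deletion rules.  All numbers are assumptions.

def forced_decode(src, tgt, table, ins, dele, beam=10):
    J, I = len(src), len(tgt)
    stacks = [dict() for _ in range(I + 1)]      # coverage -> best score
    stacks[0][frozenset()] = 1.0
    for i in range(I):
        top = sorted(stacks[i].items(), key=lambda kv: -kv[1])[:beam]
        for covered, sc in top:
            # R2: insert the next target word from null
            s2 = sc * ins[tgt[i]] ** 2
            if s2 > stacks[i + 1].get(covered, 0.0):
                stacks[i + 1][covered] = s2
            # ordinary rules: uncovered source span -> prefix of tgt[i:]
            for a, b in itertools.combinations(range(J + 1), 2):
                if any(j in covered for j in range(a, b)):
                    continue
                for (s_ph, t_ph), r_sc in table.items():
                    if s_ph != tuple(src[a:b]):
                        continue
                    if tuple(tgt[i:i + len(t_ph)]) != t_ph:
                        continue
                    cov2 = covered | frozenset(range(a, b))
                    st = stacks[i + len(t_ph)]
                    if sc * r_sc > st.get(cov2, 0.0):
                        st[cov2] = sc * r_sc
    best = 0.0
    for covered, sc in stacks[I].items():
        for j in range(J):                       # R1: delete uncovered words
            if j not in covered:
                sc *= dele[src[j]] ** 2
        best = max(best, sc)
    return best

table = {(("the", "cat"), ("cat",)): 0.5, (("sleeps",), ("sleeps",)): 0.8}
ins = {"cat": 0.1, "sleeps": 0.1, "now": 0.2}
dele = {"the": 0.3, "cat": 0.05, "sleeps": 0.05}
score = forced_decode(["the", "cat", "sleeps"],
                      ["cat", "sleeps", "now"], table, ins, dele)
print(score)
```

On this toy example the best path translates ``the cat'' and ``sleeps'' with phrase rules and inserts ``now'' as an unaligned target word, so the procedure always returns a score even though ``now'' is absent from the phrase table.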
The stack $s{'_i}$ in Algorithm~\ref{alg2} contains all hypotheses in which the first $i$ target words have been generated. We expand hypotheses in ${s{'_1}},{s{'_2}},...,{s{'_I}}$ in turn. When expanding a hypothesis $H_{old}$ in $s{'_i}$, besides expanding it using the original rule table \textsc{Expand}$\left( {{H_{old}}} \right)$,\footnote{The newly introduced word insertion/deletion rules are not used when performing \textsc{Expand}$\left( {{H_{old}}} \right)$.} we also expand $H_{old}$ by inserting the next target word $e_{i+1}$ at the end of $H_{old}$ to get an additional hypothesis $H_{new}$ and put $H_{new}$ into $s{'_{i+1}}$. A final hypothesis in stack $s{'_I}$ may not cover all source words; we update its score by translating the uncovered words into $\texttt{null}$. Because different decoding paths can generate the same final translation, there can be multiple decoding paths that fit the NMT translation $E$. We use the score of the single decoding path with the highest decoding score as the forced decoding score for $E$. \section{Reranking NMT Outputs with Phrase-based Decoding Score} We rerank the $n$-best NMT outputs using the phrase-based forced decoding score according to Equation~\ref{rerank}.
\begin{equation}\small \log P\left( {E|F} \right) = {w_1} \cdot \log {P_n}\left( {E|F} \right) + {w_2} \cdot \log {S_d}\left( {E|F} \right) \label{rerank} \end{equation} where ${P_n}\left( {E|F} \right)$ is the original NMT translation probability as calculated by Equation~\ref{e1}; \begin{equation}\small {P_n}\left( {E|F} \right) = \prod\limits_{i = 1}^I {P_{NMT}\left( {{e_i}|e_1^{i - 1},F} \right)} \label{p1} \end{equation} ${S_d}\left( {E|F} \right)$ is the forced decoding score, which is the score of the decoding path $\hat D$ with the highest decoding score as described above; \begin{equation}\small {S_d}\left( {E|F} \right) = \prod\nolimits_{r \in \hat D} {S\left( r \right)} \label{p2} \end{equation} $w_1$ and $w_2$ are weights that can be tuned on the $n$-best list of the development set. The easiest way to get an $n$-best list for NMT is by using the $n$-best translations from beam search, which is the standard decoding algorithm for NMT. While beam search is likely to find the highest-scoring hypothesis, it often lacks diversity in the beam: candidates have only slight differences, with most of the words overlapping. In order to obtain a more diverse list of hypotheses for reranking, we additionally augment the 1-best hypothesis discovered by beam search with translations sampled from the NMT conditional probability distribution. The standard method for sampling hypotheses in NMT is ancestral sampling, where we randomly select a word from the vocabulary according to $P_{NMT}\left( {e_i|e_1^{i - 1},F} \right)$ \cite{shen-EtAl:2016:P16-1}. This produces a diverse list of hypotheses, but reduces the probability of selecting a high-scoring hypothesis, and the whole $n$-best list may not contain any candidate with better translation quality than the standard beam search output.
Instead, we take an alternative approach that proved empirically better in our experiments: at each time step $i$, we use sampling to randomly select the next word between $e'$ and $e''$ according to Equation~\ref{random}. Here, $e'$ and $e''$ are the two target words with the highest probability according to Equation~\ref{e1}. \begin{equation}\large \begin{array}{l} {P_{rdm}}\left( {e'} \right) = \frac{{P_{NMT}\left( {e'|e_1^{i - 1},F} \right)}}{{P_{NMT}\left( {e'|e_1^{i - 1},F} \right) + P_{NMT}\left( {e''|e_1^{i - 1},F} \right)}}\\ {P_{rdm}}\left( {e''} \right) = \frac{{P_{NMT}\left( {e''|e_1^{i - 1},F} \right)}}{{P_{NMT}\left( {e'|e_1^{i - 1},F} \right) + P_{NMT}\left( {e''|e_1^{i - 1},F} \right)}} \end{array} \label{random} \end{equation} The sampling process ends when $\left\langle {/s} \right\rangle $ is selected as the next word. We repeat the decoding process $1,000$ times to sample $1,000$ outputs for each source sentence. We additionally add the 1-best output of standard beam search, bringing the size of the list used for reranking to $1,001$.
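The sampling-plus-reranking pipeline described above can be sketched as follows. Here `step_probs`, which returns the NMT next-word distribution as a dictionary, is a hypothetical stand-in for the model, and the weights of `rerank` correspond to $w_1$ and $w_2$ in Equation~\ref{rerank}:

```python
import random

def sample_top2(step_probs, eos="</s>", max_len=100, rng=random):
    """Draw one output, choosing at each step between the two most
    probable next words, renormalized as in Eq. (random)."""
    prefix = []
    for _ in range(max_len):
        (w1, p1), (w2, p2) = sorted(step_probs(prefix).items(),
                                    key=lambda kv: -kv[1])[:2]
        word = w1 if rng.random() < p1 / (p1 + p2) else w2
        if word == eos:
            break
        prefix.append(word)
    return prefix

def rerank(nbest, w1, w2):
    """nbest: list of (translation, log_Pn, log_Sd) triples.
    Picks the candidate maximizing the combined score of Eq. (rerank)."""
    return max(nbest, key=lambda h: w1 * h[1] + w2 * h[2])[0]
```

In the full pipeline, `sample_top2` would be called $1,000$ times per source sentence, the beam search 1-best appended, and `rerank` applied to the resulting list of $1,001$ candidates.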
\section{Experiments} \subsection{Settings} \begin{table}[t]\small \begin{tabular}{l|ll|ll} \hline && & SOURCE & TARGET \\ \hline \multirow{5}{0.3in}{en-de}&TRAIN&\#Sents& \multicolumn{2}{c}{1.90M}\\ &&\#Words& 52.2M&49.7M \\ &&\#Vocab& 113K&376K \\ \cline{2-5} &DEV&\#Sents&\multicolumn{2}{c}{3,003} \\ &&\#Words&67.6K&63.0K\\ \cline{2-5} &TEST&\#Sents&\multicolumn{2}{c}{2,169} \\ &&\#Words&46.8K&44.0K\\ \hline \multirow{5}{0.3in}{en-fr}&TRAIN&\#Sents& \multicolumn{2}{c}{1.99M}\\ &&\#Words& 54.4M& 60.4M\\ &&\#Vocab& 114K&137K \\ \cline{2-5} &DEV&\#Sents&\multicolumn{2}{c}{3,003} \\ &&\#Words&71.1K&81.1K\\ \cline{2-5} &TEST&\#Sents&\multicolumn{2}{c}{1.5K} \\ &&\#Words&27.1K&29.8K\\ \hline \multirow{5}{0.3in}{en-zh}&TRAIN&\#Sents& \multicolumn{2}{c}{954K}\\ &&\#Words& 40.4M& 37.2M\\ &&\#Vocab& 504K&288K \\ \cline{2-5} &DEV&\#Sents&\multicolumn{2}{c}{2K} \\ &&\#Words&77.5K&75.4K\\ \cline{2-5} &TEST&\#Sents&\multicolumn{2}{c}{2K} \\ &&\#Words&58.1K&55.5K\\ \hline \multirow{5}{0.3in}{en-ja}&TRAIN&\#Sents& \multicolumn{2}{c}{3.14M}\\ &&\#Words& 104M& 118M\\ &&\#Vocab& 273K& 150K\\ \cline{2-5} &DEV&\#Sents&\multicolumn{2}{c}{2K} \\ &&\#Words&66.5K&74.6K\\ \cline{2-5} &TEST&\#Sents&\multicolumn{2}{c}{2K} \\ &&\#Words&70.6K&78.5K\\ \hline \end{tabular} \caption{Data sets.} \label{data} \end{table} \begin{table*}[t] \small \center \begin{tabular}{l|ll|ll|ll|ll} \hline &en-zh&&en-ja&&en-de&&en-fr&\\ &dev&test&dev&test&dev&test&dev&test\\ \hline PBMT&30.73&27.72&35.67&33.46&12.37&13.95&25.96&27.50\\ NMT &34.60&32.71&41.67&39.00&12.52&14.05&23.63&23.99\\ NMT+lex&36.06&34.80&44.47&41.09&13.36&15.60&24.00&24.91\\ \hline NMT+lex+rerank($P_n$) &34.38&33.23&38.92&34.18&12.34&13.59&23.13&23.61\\ NMT+lex+rerank($S_d$) &36.17&34.09&42.91&40.16&13.08&15.29&24.28&25.71\\ NMT+lex+rerank($P_n$+$S_d$)& 37.94&\bf 35.59&45.34&41.75&\bf 14.56&\bf 16.61&\bf 25.96&\bf27.12\\ \hline NMT+lex+rerank($P_n$+WP)&37.44&34.93& 45.81& 41.90&13.75&15.46&24.47&25.09\\ 
NMT+lex+rerank($S_d$+WP)&36.44&33.73&43.52&40.49&13.39&15.71&24.74&26.25\\ NMT+lex+rerank($P_n$+$S_d$+WP)&\bf38.69&\bf35.75&\bf46.92&\bf43.17&\bf14.61&\bf16.65&\bf25.98&\bf27.15\\ \hline \end{tabular} \caption{Translation results (BLEU). NMT+lex: \cite{arthur-neubig-nakamura:2016:EMNLP2016}; NMT+lex+rerank: we rerank the $n$-best outputs of NMT+lex using different features ($P_n$, $S_d$ and WP). } \label{results} \center \begin{tabular}{l|ll|ll|ll|ll} \hline &en-zh&&en-ja&&en-de&&en-fr&\\ &METEOR&chrF&METEOR&chrF&METEOR&chrF&METEOR&chrF\\ \hline PBMT&34.70&37.87&35.22&39.45&26.66&50.02&32.33&56.36\\ NMT &34.51&39.91&35.07&42.02&24.91&44.50&29.58&49.99\\ NMT+lex&35.56&42.22&36.48&44.34&25.49&45.67&30.10&50.89\\ \hline NMT+lex+rerank($P_n$) &34.56&40.80&32.63&38.57&23.57&40.35&29.15&48.64\\ NMT+lex+rerank($S_d$) &36.02&42.65&36.87&44.85&\bf 26.48&\bf 48.73&\bf 31.56&\bf 54.42\\ NMT+lex+rerank($P_n$+$S_d$)&36.40&43.73&37.22&45.69&\bf 26.26&47.27&\bf 31.62&53.99\\ \hline NMT+lex+rerank($P_n$+WP)&36.04&42.86&36.90&44.93&25.03&44.05&30.21&50.78\\ NMT+lex+rerank($S_d$+WP)&36.34&42.78&37.05&45.03& 26.16&47.82& 31.32&53.75\\ NMT+lex+rerank($P_n$+$S_d$+WP)&\bf 36.88&\bf 44.09&\bf 37.94&\bf 46.66&\bf 26.20&47.12&\bf 31.61&53.98\\ \hline \end{tabular} \caption{METEOR and chrF scores on the test sets for different system outputs in Table~\ref{results}.} \label{otherresults} \center \begin{tabular}{l|ll|ll|ll|ll} \hline &en-zh&&en-ja&&en-de&&en-fr&\\ &dev&test&dev&test&dev&test&dev&test\\ \hline PBMT &1.008&1.018&1.005&0.998&1.077&1.069&0.986&1.004\\ NMT &0.953&0.954&0.960&0.961&1.059&1.038&0.985&0.977\\ NMT+lex &0.936&0.966&0.955&0.963&1.054&1.019&1.030&0.977\\ \hline NMT+lex+rerank($P_n$) &0.875&0.898&0.814&0.775&0.874&0.854&0.904&0.900\\ NMT+lex+rerank($S_d$) &0.973&0.989&0.985&0.981&1.062&1.060&1.030&1.031\\ NMT+lex+rerank($P_n$+$S_d$) &0.949&0.965&0.945&0.936&1.000&0.992&0.999&0.992\\ \hline NMT+lex+rerank($P_n$+WP)&0.996&1.019&0.999&0.983&1.000&0.975&0.998&1.001\\ 
NMT+lex+rerank($S_d$+WP)&1.000&1.024&1.001&1.001&1.011&1.007&0.999&0.989\\ NMT+lex+rerank($P_n$+$S_d$+WP)&0.990&1.014&1.000&0.986&1.000&0.989&1.000&0.992\\ \hline \end{tabular} \caption{Ratio of translation length to reference length for different system outputs in Table~\ref{results}.} \label{ratio} \end{table*} We evaluated the proposed approach for English-to-Chinese (en-zh), English-to-Japanese (en-ja), English-to-German (en-de) and English-to-French (en-fr) translation tasks. For the en-zh and en-ja tasks, we used datasets provided for the patent machine translation task at NTCIR-9 \cite{goto2011overview}.\footnote{Note that NTCIR-9 only contained a Chinese-to-English translation task; we used English as the source language in our experiments. In NTCIR-9, the development and test sets were both provided for the zh-en task, while only the test set was provided for the en-ja task. We used the sentences from the NTCIR-8 en-ja and ja-en test sets as the development set in our experiments.} For the en-de and en-fr tasks, we used version 7 of the Europarl corpus as training data, the WMT 2014 test sets as our development sets and the WMT 2015 test sets as our test sets. The detailed statistics for the training, development and test sets are given in Table~\ref{data}. The word segmentation was done by BaseSeg \cite{zhao2006improved} for Chinese and Mecab\footnote{http://sourceforge.net/projects/mecab/files/} for Japanese. We built attentional NMT systems with Lamtram\footnote{https://github.com/neubig/lamtram}. The word embedding size and the hidden layer size are both 512. We used byte-pair encoding (BPE) \cite{sennrich-haddow-birch:2016:P16-12} and set the vocabulary size to 50K. We used the Adam algorithm for optimization.
\begin{table*}[t]\small \center \begin{tabular}{lp{12cm}} \hline Source&for \colorbox{light-gray}{hypophysectomized (hypop hy sec to mized)} rats , the drinking water additionally contains 5 \% glucose .\\ \hline Reference& \begin{CJK}{UTF8}{gbsn}对于(for) \colorbox{light-gray}{去(remove) 垂体(hypophysis)} 大(big) 鼠(rat) , 饮用水(drinking water) 中(in) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}\\ \hline PBMT& \begin{CJK}{UTF8}{gbsn}用于(for) 大(big) 鼠(rat) \colorbox{light-gray}{垂体(hypophysis) HySecto,(Hy Sec to ,)} 饮用水(drinking water) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}\\ \hline NMT& \begin{CJK}{UTF8}{gbsn}对于(for) \colorbox{light-gray}{过(pass) 盲肠(cecum)} 的(of) 大(big) 鼠(rat) , 饮用水(drinking water) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}\\ \hline NMT+lex& \multirow{3}{5in}{ \begin{CJK}{UTF8}{gbsn}对于(for) \colorbox{light-gray}{低(low) 酪(cheese) 蛋白(protein) 切除(remove)} 的(of) 大(big) 鼠(rat) , 饮用水(drinking water) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}}\\ NMT+lex+$P_n$&\\ NMT+lex+$P_n$+WP&\\ \hline NMT+lex+$S_d$& \multirow{3}{5in}{\begin{CJK}{UTF8}{gbsn}对于(for) \colorbox{light-gray}{垂体(hypophysis) 在(is) 切除(remove)} 大(big) 鼠(rat) 中(in) , 饮用水(drinking water) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}}\\ NMT+lex+$S_d$+WP&\\ &\\ \hline NMT+lex+$P_n$+$S_d$& \multirow{3}{5in}{ \begin{CJK}{UTF8}{gbsn}对于(for) \colorbox{light-gray}{垂体(hypophysis) 在(is) 切除(remove)} 的(of) 大(big) 鼠(rat) 中(in) , 饮用水(drinking water) 另外(also) 含有(contain) 5 % 葡萄糖(glucose) 。\end{CJK}}\\ NMT+lex+$P_n$+$S_d$+WP&\\ &\\ \hline \end{tabular} \caption{An example of improving inaccurate rare word translation by using $S_d$ for reranking.} \label{rare} \end{table*} To obtain a phrase-based translation rule table for our forced decoding algorithm, we used GIZA++ \cite{och2003systematic} and \textit{grow-diag-final-and} heuristic to obtain symmetric word alignments for the training set. Then we extracted the rule table using Moses \cite{koehn2007moses}. 
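The extracted rule table can then be scored for forced decoding as described earlier, i.e., as the product of the direct and inverse phrase translation probabilities. A minimal parsing sketch, assuming the conventional Moses phrase-table line format and score ordering (inverse phrase probability, inverse lexical weight, direct phrase probability, direct lexical weight); the exact layout may differ across Moses versions:

```python
def load_rule_scores(phrase_table_lines):
    """Parse Moses phrase-table lines 'src ||| tgt ||| s1 s2 s3 s4 ...'
    and score each rule as the product of the inverse (s1) and direct
    (s3) phrase translation probabilities."""
    rules = {}
    for line in phrase_table_lines:
        fields = [f.strip() for f in line.split("|||")]
        src, tgt, scores = fields[0], fields[1], fields[2].split()
        inv_prob, direct_prob = float(scores[0]), float(scores[2])
        rules[(src, tgt)] = inv_prob * direct_prob
    return rules
```

The squared R$_1$/R$_2$ scores of Equations~\ref{r1} and~\ref{r2} are then on the same scale as these products of two probabilities.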
\begin{table*}[t]\small \center \begin{tabular}{p{2.8cm}p{12.3cm}} \hline Source&such changes in reaction conditions include , but are not limited to , \colorbox{light-gray}{an increase in temperature or change in ph} .\\ \hline Reference&\begin{CJK}{UTF8}{gbsn}所(such) 述(said) 反应(reaction) 条件(condition) 的(of) 改变(change) 包括(include) 但(but) 不(not) 限于(limit) \colorbox{light-gray}{温度(temperature) 的(of) 增加(increase) 或(or) pH 值(value) 的(of) 改变(change)} 。\end{CJK}\\ \hline PBMT&\begin{CJK}{UTF8}{gbsn}中(in) 的(of) 这种(such) 变化(change) 的(of) 反应(reaction) 条件(condition) 包括(include) , 但(but) 不(not) 限于(limit) , \colorbox{light-gray}{增加(increase) 的(of) 温度(temperature) 或(or) pH 变化(change)} 。\end{CJK}\\ \hline NMT&\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) 但(but) 不(not) 限于(limit) \colorbox{light-gray}{pH 或(or) pH 的(of) 变化(change)} 。\end{CJK}\\ \hline NMT+lex&\multirow{3}{5in}{\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) , 但(but) 不(not) 限于(limit) , \colorbox{light-gray}{pH 的(of) 升高(increase) 或(or) pH 变化(change)} 。\end{CJK}}\\ NMT+lex+$P_n$&\\ &\\ \hline NMT+lex+$S_d$&\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) 但(but) 不(not) 限于(limit) , \colorbox{light-gray}{温度(temperature) 的(of) 升高(increase) 或(or) 改变(change) pH 值(value)} 。\end{CJK}\\ \hline NMT+lex+$P_n$+$S_d$&\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) , 但(but) 不(not) 限于(limit) , \colorbox{light-gray}{温度(temperature) 的(of) 升高(increase) 或(or) 改变(change) pH 值(value)} 。\end{CJK}\\ \hline NMT+lex+$P_n$+WP&\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) , 但(but) 不(not) 限于(limit) , \colorbox{light-gray}{pH 的(of) 升高(increase) 或(or) 改变(change) pH 值(value)} 。\end{CJK}\\ \hline NMT+lex+$S_d$+WP&\multirow{3}{5in}{\begin{CJK}{UTF8}{gbsn}这种(such) 反应(reaction) 条件(condition) 的(of) 变化(change) 包括(include) , 但(but) 不(not) 限于(limit) , 
\colorbox{light-gray}{温度(temperature) 的(of) 升高(increase) 或(or) 改变(change) pH 值(value)} 。\end{CJK}}\\ NMT+lex+$P_n$+$S_d$+WP&\\ &\\ \hline \end{tabular} \caption{An example of improving under-translation and over-translation by using $S_d$ for reranking.} \label{underover} \end{table*} \subsection{Results and Analysis} Table~\ref{results} shows results of the phrase-based SMT system\footnote{We used the default Moses settings for phrase-based SMT.}, the baseline NMT system, the lexicon integration method \cite{arthur-neubig-nakamura:2016:EMNLP2016} and the proposed reranking method. We tested three features for reranking: the NMT score $P_n$, the forced decoding score $S_d$ and a word penalty (WP) feature, which is the length of the translation. The best NMT system and the systems that have no significant difference from the best NMT system at the $p < 0.05$ level using bootstrap resampling \cite{koehn2004statistical} are shown in bold font. As we can see, integrating lexical translation probabilities improved the baseline NMT system and reranking with the three features all together achieved further improvements for all four language pairs. Even on English-to-Chinese and English-to-Japanese tasks, where the NMT system outperformed the phrase-based SMT system by 7-8 BLEU scores, using the forced decoding score for reranking NMT outputs can still achieve significant improvements. With or without the word penalty feature, using both $P_n$ and $S_d$ for reranking gave better results than only using $P_n$ or $S_d$ alone. We also show METEOR and chrF scores on the test sets in Table~\ref{otherresults}. Our reranking method improved both METEOR and chrF significantly. \paragraph{The Word Penalty Feature} The word penalty feature generally improved the reranking results, especially when only the NMT score $P_n$ was used for reranking. As we can see, using only $P_n$ for reranking decreased the translation quality compared to the standard beam search result of NMT. 
Because the search spaces of beam search and random sampling are quite different, the best beam search output does not necessarily have the highest NMT score among all the candidates. Therefore, the $P_n$ reranking results can have higher NMT scores, but lower BLEU scores, as shown in Table~\ref{results}. To explain why this happened, we show the ratio of translation length to reference length in Table~\ref{ratio}. As we can see, the $P_n$ reranking outputs are much shorter. This is because NMT generally prefers shorter translations, since Equation~\ref{p1} multiplies all target word probabilities together. So the word penalty feature can improve the $P_n$ reranking results considerably by preferring longer sentences. Because the forced decoding score $S_d$ as shown in Equation~\ref{p2} does not obviously prefer shorter or longer sentences, the word penalty feature became less helpful when $S_d$ was used for reranking. When both $P_n$ and $S_d$ were used for reranking, the word penalty feature only achieved a further significant improvement on the English-to-Japanese task.
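The length bias follows directly from Equation~\ref{p1}: every additional target word multiplies in another probability below one, so longer candidates accumulate lower $\log P_n$ regardless of adequacy. A toy calculation (with an assumed uniform per-word probability of 0.6, purely for illustration) makes this concrete and shows how a word penalty term can offset the bias:

```python
import math

p = 0.6                        # assumed uniform per-word probability
lp5, lp10 = 5 * math.log(p), 10 * math.log(p)
assert lp5 > lp10              # Eq. (p1) favors the shorter output

# A word penalty term w * length with w = -log(p) exactly cancels the
# bias for this per-word probability.
w = -math.log(p)
assert abs((lp5 + 5 * w) - (lp10 + 10 * w)) < 1e-12
```

In practice the per-word probabilities are not uniform, so the tuned word penalty weight only approximately compensates for the bias.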
\begin{table}[H]\small \center \begin{tabular}{p{6.2cm}r} \hline \bf T$_1$ (NMT+lex):&\\ {\begin{CJK}{UTF8}{gbsn}for $\to$对于(for)\end{CJK}} &{-3.04}\\ \colorbox{light-gray}{ $r_a$: \begin{CJK}{UTF8}{gbsn}hy $\to$低(low)\end{CJK}} &\colorbox{light-gray}{-12.19}\\ \colorbox{light-gray}{ $r_b$: \begin{CJK}{UTF8}{gbsn}\texttt{null}$\to$酪(cheese)\end{CJK}} & \colorbox{light-gray}{-21.99}\\ \colorbox{light-gray}{$r_c$: \begin{CJK}{UTF8}{gbsn}\texttt{null}$\to$蛋白(protein)\end{CJK}} & \colorbox{light-gray}{-13.83}\\ {\begin{CJK}{UTF8}{gbsn}to mized $\to$切除(remove)\end{CJK}} &{-6.22}\\ {\begin{CJK}{UTF8}{gbsn}\texttt{null}$\to$的(of)\end{CJK}} & {-1.53}\\ {\begin{CJK}{UTF8}{gbsn}rats $\to$大(big) 鼠(rat)\end{CJK}} &{-1.52}\\ \colorbox{light-gray}{\begin{CJK}{UTF8}{gbsn}, the drinking water $\to$, 饮用水(drinking water)\end{CJK}} &\colorbox{light-gray}{-1.38}\\ {\begin{CJK}{UTF8}{gbsn}additionally contains $\to$另外(also) 含有(contain)\end{CJK}} &{-3.68}\\ {\begin{CJK}{UTF8}{gbsn}5 \% $\to$5 %\end{CJK}} &{-0.51}\\ {\begin{CJK}{UTF8}{gbsn}glucose . 
$\to$葡萄糖(glucose) 。\end{CJK}} &{-0.60}\\ \colorbox{light-gray}{ $r_d$: hypop$\to$\texttt{null}} & \colorbox{light-gray}{-25.33}\\ {sec$\to$\texttt{null}} & {-20.66}\\ \hline \bf T$_2$ (NMT+lex+$P_n$+$S_d$):&\\ {\begin{CJK}{UTF8}{gbsn}for $\to$对于(for)\end{CJK}} &{-3.04}\\ \colorbox{light-gray}{\begin{CJK}{UTF8}{gbsn}hypop hy $\to$垂体(hypophysis)\end{CJK}} &\colorbox{light-gray}{-5.09}\\ \colorbox{light-gray}{\begin{CJK}{UTF8}{gbsn}the $\to$在(is)\end{CJK}} &\colorbox{light-gray}{-5.32}\\ {\begin{CJK}{UTF8}{gbsn}to mized $\to$切除(remove)\end{CJK}} &{-6.22}\\ {\begin{CJK}{UTF8}{gbsn}\texttt{null}$\to$的(of)\end{CJK}} & {-1.53}\\ {\begin{CJK}{UTF8}{gbsn}rats $\to$大(big) 鼠(rat)\end{CJK}} &{-1.52}\\ \colorbox{light-gray}{\begin{CJK}{UTF8}{gbsn}, $\to$中(in) ,\end{CJK}} &\colorbox{light-gray}{-4.11}\\ \colorbox{light-gray}{\begin{CJK}{UTF8}{gbsn}drinking water $\to$饮用水(drinking water)\end{CJK}} &\colorbox{light-gray}{-1.03}\\ {\begin{CJK}{UTF8}{gbsn}additionally contains $\to$另外(also) 含有(contain)\end{CJK}} &{-3.68}\\ {\begin{CJK}{UTF8}{gbsn}5 \% $\to$5 %\end{CJK}} &{-0.51}\\ {\begin{CJK}{UTF8}{gbsn}glucose . $\to$葡萄糖(glucose) 。\end{CJK}} &{-0.60}\\ {sec$\to$\texttt{null}}& {-20.66}\\ \hline \end{tabular} \caption{Forced decoding paths for T$_1$ and T$_2$: used rules and log scores. The translation rules with shade are used only for T$_1$ or T$_2$.} \label{rules} \end{table} Table~\ref{rare} gives translation examples of our reranking method from the English-to-Chinese task. The source English word ``hypophysectomized" is an unknown word which does not occur in the training set. By employing BPE, this word is split into ``hypop", ``hy", ``sec", ``to" and ``mized". The correct translation for ``hypophysectomized" is ``\begin{CJK}{UTF8}{gbsn}去(remove) 垂体(hypophysis)\end{CJK}" as shown in the reference sentence. The original attentional NMT translated it into incorrect translation ``\begin{CJK}{UTF8}{gbsn}过(pass) 盲肠(cecum)\end{CJK}". 
After integrating lexicons, the NMT system translated it into ``\begin{CJK}{UTF8}{gbsn}低(low) 酪(cheese) 蛋白(protein) 切除(remove)\end{CJK}". The last word ``\begin{CJK}{UTF8}{gbsn}切除(remove)\end{CJK}" is correct, but the rest of the translation is still wrong. Only by using the forced decoding score $S_d$ for reranking do we get the more accurate translation ``\begin{CJK}{UTF8}{gbsn}垂体(hypophysis) 在(is) 切除(remove)\end{CJK}". To further demonstrate how the reranking method works, Table~\ref{rules} shows translation rules and their log-scores contained in the forced decoding paths found for T$_1$, the NMT translation without reranking and T$_2$, the NMT translation using both $P_n$ and $S_d$ for reranking. As we can see, the four rules $r_a$, $r_b$, $r_c$ and $r_d$ used for T$_1$ have low scores. $r_a$ is an unlikely translation. In $r_b$, $r_c$ and $r_d$, ``\begin{CJK}{UTF8}{gbsn}酪(cheese)\end{CJK}", ``\begin{CJK}{UTF8}{gbsn}蛋白(protein)\end{CJK}" and ``hypop" are content words, which are unlikely to be deleted or inserted during translation. Table~\ref{rules} also shows that the translation of function words is very flexible. The score of inserting a function word ``\begin{CJK}{UTF8}{gbsn}的(of)\end{CJK}" is very high. The translation rule ``\begin{CJK}{UTF8}{gbsn}the $\to$在(is)\end{CJK}" used for T$_2$ is incorrect, but its score is relatively high, because function words are often incorrectly aligned in the training set. The reason why function words are more likely to be incorrectly aligned to each other is that they usually have high frequencies and do not have clear correspondences between different languages. In T$_1$, ``hypophysectomized (hypop hy sec to mized)" is incorrectly translated into ``\begin{CJK}{UTF8}{gbsn}低(low) 酪(cheese) 蛋白(protein) 切除(remove)\end{CJK}".
However, from Table~\ref{rules}, we can see that the forced decoding algorithm identifies this as an unlikely translation (hy$\to$\begin{CJK}{UTF8}{gbsn}低(low)\end{CJK}), over-translation (\texttt{null}$\to$\begin{CJK}{UTF8}{gbsn}酪(cheese)\end{CJK}, \texttt{null}$\to$\begin{CJK}{UTF8}{gbsn}蛋白(protein)\end{CJK}) and under-translation (hypop$\to$\texttt{null}, sec$\to$\texttt{null}), because there are no translation rules between ``hypop"/``sec" and ``\begin{CJK}{UTF8}{gbsn}酪(cheese)\end{CJK}"/``\begin{CJK}{UTF8}{gbsn}蛋白(protein)\end{CJK}". Because content words are unlikely to be deleted or inserted during translation, such rules have low forced decoding scores. So using the forced decoding score for reranking NMT outputs can naturally improve over-translation or under-translation, as shown in Table~\ref{underover}. As we can see, without using $S_d$ for reranking, NMT under-translated ``temperature" and over-translated ``ph" twice, which are assigned low scores by forced decoding. By using $S_d$ for reranking, the correct translation was selected. We performed a human evaluation on 100 sentences randomly selected from the English-to-Chinese test set to test the effectiveness of our forced decoding method. We compared the outputs of two systems: \begin{itemize} \item NMT+lex+rerank($P_n$+WP) \item NMT+lex+rerank($P_n$+$S_d$+WP) \end{itemize} For each source sentence, we compared the two system outputs. Table~\ref{human} shows the numbers of sentences for which our forced decoding feature helped to reduce completely unrelated translation, over-translation and under-translation. The last line of Table~\ref{human} means that for 73 source sentences, our forced decoding feature neither reduced nor caused more unrelated/over/under translation. That is, our forced decoding feature never caused more unrelated/over/under translation for the 100 sampled sentences, which shows that our method is very robust at reducing unrelated/over/under translation.
\begin{table}[!h]\small \center \begin{tabular}{ll|r} \hline \multirow{4}{0.3in}{Reduce}& both under- and over- translation&2\\ & under-translation&11\\ & over-translation& 10\\ & unrelated translation&4\\ \hline \multicolumn{2}{l|}{No difference} &73\\ \hline \end{tabular} \caption{Human evaluation results.} \label{human} \end{table} \paragraph{Reranking PBMT Outputs with NMT} We also did experiments that use the NMT score as an additional feature to rerank PBMT outputs (unique $1,000$-best list). The results are shown in Table~\ref{smt-rerank}. We also copy results of baseline PBMT and NMT from Table~\ref{results} for direct comparison. As we can see, using NMT to rerank PBMT outputs achieved improvements over the baseline PBMT system. However, when the baseline NMT system is significantly better than the baseline PBMT system (en-zh, en-ja), even using NMT to rerank PBMT outputs still achieved lower translation quality compared to the baseline NMT system. \begin{table}[!h]\small \center \begin{tabular}{l|lllll} \hline & &en-zh&en-ja&en-de&en-fr\\ \hline PBMT+rerank& &32.77&37.68&\bf 14.23&\bf 28.86\\ PBMT&dev &30.73&35.67&12.37&25.96\\ NMT& &\bf 34.60&\bf 41.67&12.52&23.63\\ \hline PBMT+rerank & &30.04&35.14&\bf 15.89&\bf 29.77\\ PBMT &test&27.72&33.46&13.95&27.50\\ NMT &&\bf 32.71&\bf 39.00&14.05&23.99\\ \hline \end{tabular} \caption{Results of using NMT for reranking PBMT outputs.} \label{smt-rerank} \end{table} \section{Related Work} \newcite{wuebker-mauser-ney:2010:ACL,wuebker-hwang-quirk:2012:WMT} applied forced decoding on the training set to improve the training process of phrase-based SMT and prune the phrase-based rule table. They also used word insertions and deletions for forced decoding, but they used a high penalty for all insertions and deletions. 
In contrast, our soft forced decoding algorithm for NMT outputs uses a small penalty for function words and a high penalty for content words, because function words are usually translated very flexibly and are more likely to be inserted or deleted than content words. For example, the under-translation of a content word can heavily hurt the adequacy of the translation. But function words may naturally disappear during translation (e.g. the English word ``the" disappears in Chinese). By assigning a high penalty to words that should not be deleted or inserted during translation, our soft forced decoding method aims to improve the adequacy of NMT, which is very different from previous forced decoding methods that are used to improve general SMT training \cite{yu-EtAl:2013:EMNLP,xiao2016loss}. A major difference between traditional SMT and NMT is that the alignment model in traditional SMT provides exact word- or phrase-level alignments between the source and target sentences, while the attention model in NMT only computes an alignment probability distribution for each target word over all source words. This is the main reason why NMT is more likely than traditional SMT to produce completely unrelated translations, over-translation or under-translation. To relieve NMT of these problems, existing methods modify the NMT neural network structure \cite{tu-EtAl:2016:P16-1,meng-EtAl:2016:COLING,alkhouli-EtAl:2016:WMT}, while we instead rerank NMT outputs by exploiting knowledge from traditional SMT. There are also existing methods that rerank NMT outputs by using target-bidirectional NMT models \cite{liu-EtAl:2016:N16-11,sennrich-haddow-birch:2016:WMT}. Their reranking method aims to overcome the issue of unbalanced accuracy in NMT outputs, while ours aims to solve the inadequacy problem of NMT.
\section{Conclusion} In this paper, we propose to exploit an existing phrase-based SMT model to compute the phrase-based decoding cost for NMT outputs and then use the phrase-based decoding cost to rerank the $n$-best NMT outputs, so we can combine the advantages of both PBMT and NMT. Because an NMT output may not be in the search space of standard phrase-based SMT, we propose a forced decoding algorithm, which can always successfully find a decoding path for any NMT output by deleting source words and inserting target words. Results show that using the forced decoding cost to rerank NMT outputs improved translation accuracy on four different language pairs.
\section{Introduction} The last few decades have witnessed the emergence of various algorithms that require the calculation of the eigendecomposition of matrices. A few known examples are Google's PageRank \cite{page1999pagerank}, PCA \cite{shlens2014tutorial}, Laplacian eigenmaps \cite{belkin2003laplacian}, LLE \cite{roweis2000nonlinear} and MDS \cite{buja2008data}. Since datasets nowadays may contain tens of millions of data points, an efficient calculation of the eigendecomposition becomes fundamental. In many scenarios, only part of the eigendecomposition, i.e., only the leading eigenvalues and eigenvectors, can or need to be calculated. While algorithms for eigendecomposition, such as the Lanczos algorithm and some variants of SVD, are designed especially for this task, they still require a hefty amount of calculations. A natural question that arises in such cases is how to update the eigendecomposition of a matrix given its partial eigendecomposition and some ``small" perturbation to it, without repeating the entire decomposition again. In this paper, we focus on rank-one updates of symmetric matrices. The classical approach for such an update is updating the eigenvalues using the roots of the secular equation, see e.g., \cite{bunch1978rank}. However, several other approaches for updating the eigenvalues and eigenvectors of a perturbed matrix have been suggested. The popular ones are quite general and include recalculating from scratch or restarting the power method \cite{langville2006updating} and perturbation methods \cite{stewart1990matrix}. Some methods that utilize the structure of a specific problem were suggested, with Google's page rank being the most popular application \cite[Chapter~10]{langville2011google}. Another important method is based on a geometric embedding of the available data \cite{brand2006fast}. This approach becomes computationally attractive when one updates a low-rank matrix. 
Many of the methods mentioned above are inapplicable or provide very poor guarantees in cases where we do not have access to the complete eigendecomposition. Some methods assume that the updated matrix is low-rank, which is not always the right model for real-world data. Finally, almost none of the existing approaches is equipped with error analysis. We provide a rank-one update algorithm that does not require the full eigendecomposition of the matrix and does not assume that it is low rank. We demonstrate that the structure of the problem enables us to use the unknown tail of the eigenvalues in order to improve the accuracy of the update. Additionally, the complexity of our algorithm is linear in the number of rows of the matrix. We also analyse the accuracy of our method, showing that it does not depend on the number of unknown eigenpairs, but only on their ``behavior". This observation is confirmed by both synthetic and real-world examples. The eigenvalues and eigenvectors of the graph Laplacian have been of special interest recently, as evidenced by its various applications in machine learning, dimensionality reduction \cite{belkin2003laplacian, coifman2006diffusion}, clustering \cite{ng2002spectral, von2007tutorial}, graph theory \cite{chung1997spectral} and image processing \cite{coifman2008graph}. The problem of out-of-sample extension of the graph Laplacian, which will be described in detail later, is essentially updating the eigendecomposition of the graph Laplacian after the insertion of a new vertex to the graph. This problem is often addressed by the Nystr{\"o}m method \cite{bengio2004out}. We propose a different approach to the extension, based on the observation that under mild assumptions, the insertion of a new vertex to a graph translates to an almost rank-one update of the corresponding graph Laplacian matrix. Then, we apply our rank-one update algorithm to estimate the extension with high accuracy.
The paper is organized as follows. In Section~\ref{sec:rank_one_update}, we derive our algorithm for the symmetric rank-one update based on partial eigendecomposition and analyse its error. In Section~\ref{sec:updating_problem}, we describe the application of the algorithm to the extension of the graph Laplacian. In Section~\ref{sec:numeric}, we illustrate numerically some of our theoretical results from Section~\ref{sec:rank_one_update} and Section~\ref{sec:updating_problem} for both synthetic and real data. We give some concluding remarks in Section~\ref{sec:conclusions}. \section{rank-one update with partial spectral information} \label{sec:rank_one_update} Computing the spectrum of a matrix following its rank-one update is a classical task in perturbation theory, e.g., \cite[Chapter 7]{bender2013advanced}. Given a rank-one perturbation to a matrix, the spectrum of the perturbed matrix is related to that of the original matrix via the secular equation, which involves the entire spectrum of the original matrix. However, if only the few leading eigenpairs of the original matrix are known, the classical approach requires further adaptation. Inspired by \cite{bunch1978rank}, we propose a solution to the ``partial knowledge" rank-one update problem, where we aim to estimate the leading eigenpairs of a matrix after a rank-one perturbation, having only part of the eigendecomposition of the original matrix. We describe in detail the derivation of our method and provide error bounds and complexity analysis. \subsection{Notation, classical setting, and problem formulation} \label{subsec:notation_rank_one_update} We denote by $X = [x_1x_2 \cdots x_n]$ a matrix expressed by its column vectors, and by $X^{(m)} = [x_1\cdots x_m]$ its truncated version consisting only of its first $m$ columns, $m<n$. 
Let $A$ be an $n \times n$ symmetric real matrix with real (not necessarily distinct) eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and their associated orthogonal eigenvectors $q_1,\ldots,q_n$. We denote this eigendecomposition by $A = Q\Lambda Q^T$, with $Q = [q_1 q_2 \cdots q_n]$ and $\Lambda = \diag (\lambda_1, \ldots,\lambda_n)$. We focus on the problem of (symmetric) rank-one update, where we wish to find the eigendecomposition of \begin{equation} \label{eq:prob} A + \rho vv^T, \quad \rho \in \mathbb{R}, \quad v \in \mathbb{R}^n, \quad \norm{v} = 1. \end{equation} We denote the updated eigenvalues by $t_1 \ge t_2 \ge \cdots \ge t_n$ and their associated orthogonal eigenvectors by $p_1,\ldots,p_n$ to form the decomposition $A + \rho vv^T = PTP^T$, with $P = [p_1 p_2 \cdots p_n]$ and $T = \diag (t_1, \ldots,t_n)$. Approximated objects (whether scalars or vectors) constructed in this section are denoted by a tilde. For example, an approximation for $x$ is denoted by $\widetilde{x}$. The relation between the decompositions before and after the rank-one update is well-studied, e.g., \cite{bunch1978rank, ding2007eigenvalues}. Without loss of generality, we further assume that $\lambda_1 > \lambda_2 > \cdots > \lambda_n$ and that for $z = Q^Tv$ we have $z_j \neq 0$ for all $1 \leq j \leq n$. The deflation process in~\cite{bunch1978rank} reduces any update~\eqref{eq:prob} to this form. Given the eigendecomposition $A = Q\Lambda Q^T$, the updated eigenvalues $t_1,\ldots,t_n$ of $A + \rho v v^T$ are given by the $n$ roots of the secular equation \begin{equation} \label{eqn:secular_equation} w(t) = 1 + \rho \sum_{i=1}^{n}\frac{z_i^2}{\lambda_i - t} , \quad z = Q^Tv. \end{equation} The corresponding eigenvector for the $k$-th root (eigenvalue) $t_k$ is given by the explicit formula \begin{equation} \label{eqn:EigenvaectorFormula} p_k = \frac{Q\Delta_k^{-1}z}{\norm{Q\Delta_k^{-1}z}} , \quad z = Q^Tv , \quad \Delta_k = \Lambda - t_k I .
\end{equation} An important assumption in the above is the knowledge of the full eigendecomposition of the matrix $A$. This is not always feasible in modern problems due to high computational and storage costs. Therefore, a natural question is what one can do when only part of the spectrum is known. Thus, we are interested in the following problem. Let $A$ be an ${n \times n}$ real symmetric matrix and let $1\le m < n$. Assume we have only the first $m$ leading eigenvalues $\lambda_1 > \lambda_2 > \cdots > \lambda_m$ of $A$ and their associated eigenvectors $q_1,\ldots,q_m$. Find an estimate of the first $m$ leading eigenpairs of $A + \rho vv^T$ with $\rho \in \mathbb{R}$ and $\norm{v}=1$. \subsection{Truncating the secular equation} \label{sec:trunc_secular} We start by considering the first part of the above problem --- the eigenvalues. The classical perturbation method solves for the eigenvalues of $A + \rho vv^T$ by finding the roots of the secular equation~\eqref{eqn:secular_equation}. We introduce two modifications of the secular equation, adapted to our new setting. Using the notation of~\eqref{eqn:secular_equation}, we have from the orthogonality of $Q$ that \begin{equation} \|z\|^2 = \|Q^Tv\|^2 = \|v\|^2 = 1. \end{equation} Therefore, $\sum_{j=m+1}^{n}{z_j^2} = 1 - \sum_{i=1}^{m}{z_i^2}$. Since the last $n-m$ eigenvalues of $A$ are unknown, we denote by $\mu < \lambda_m$ a fixed scalar, whose purpose is to approximate $\lambda_j$, $j=m+1,\ldots,n$. The choice of $\mu$ is discussed below. We then define the first order truncated secular equation by \begin{equation} \label{eq:TSE} w_{1}(t ; \mu) = 1 + \rho \left( \sum_{i=1}^{m} {\frac{z_i^2}{\lambda_i - t}} + \frac{1 - \sum_{i=1}^{m}{z_i^2}}{\mu - t} \right) , \end{equation} where $z=(Q^{(m)})^Tv$ is a vector of length $m$ (the first $m$ entries of $Q^Tv$), with the columns of the matrix $Q^{(m)}$ consisting of the (orthogonal) eigenvectors corresponding to the (known) leading eigenvalues of $A$.
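As a concrete illustration of \eqref{eq:TSE}, the following short sketch (in Python with NumPy and SciPy; all function and variable names are ours, not part of the text) computes the $m$ largest roots of the first order truncated secular equation by bracketed root finding, using the interlacing intervals established in Proposition~\ref{prop:TSE_roots} below.

```python
import numpy as np
from scipy.optimize import brentq

def truncated_secular_roots(lam, Q_m, v, rho, mu, eps=1e-9):
    """Approximate the m largest eigenvalues of A + rho*v*v^T (rho > 0)
    from the m known leading eigenpairs of A, via the first order
    truncated secular equation w_1(t; mu). Illustrative sketch.

    lam : the m known leading eigenvalues, in descending order
    Q_m : n x m matrix whose columns are the corresponding eigenvectors
    """
    lam = np.asarray(lam, dtype=float)
    z = Q_m.T @ v                      # first m entries of Q^T v
    tail = 1.0 - np.dot(z, z)          # total mass of the unknown z_j^2

    def w1(t):
        return 1.0 + rho * (np.sum(z**2 / (lam - t)) + tail / (mu - t))

    # w_1 is strictly increasing between consecutive poles, so there is
    # one root in (lam[0], lam[0] + rho] and one in each (lam[k], lam[k-1]).
    roots = [brentq(w1, lam[0] + eps, lam[0] + rho + eps)]
    for k in range(1, len(lam)):
        roots.append(brentq(w1, lam[k] + eps, lam[k - 1] - eps))
    return np.array(roots)             # descending order
```

When the unknown eigenvalues all coincide with $\mu$, $w_1(t;\mu)$ agrees with the exact secular equation, so the computed roots match the true updated eigenvalues; otherwise the error behaves as in Proposition~\ref{prop:TSE_roots}.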
As a first observation, we bound the error obtained from the new formula. Namely, we show that the deviation of the $m$ largest roots of the truncated secular equation \eqref{eq:TSE} from the roots $t_1,\ldots,t_m$ of~\eqref{eqn:secular_equation} is of the order of $\max_{m+1 \leq j \leq n} \abs{\lambda_j - \mu }$. \begin{proposition} \label{prop:TSE_roots} Let $\rho>0$. Then, there exist $m$ roots $\widetilde{t}_1,\ldots,\widetilde{t}_m$ of $w_{1}(t ; \mu)$ of \eqref{eq:TSE}, such that \begin{equation} \label{eq:error_first_order} \abs{ t_k-\widetilde{t_k}} \le C_k \max_{m+1 \leq j \leq n} \abs{\lambda_j - \mu} , \quad k=1,\ldots,m, \end{equation} where ${t_1},\ldots,{t_m}$ are the largest $m$ roots of \eqref{eqn:secular_equation}, and $C_k$ is a constant bounded by \[ (\lambda_{k} - \mu) ^{-1}(\lambda_m - \lambda_{m+1})^{-1} \max \{ (\lambda_{k} - \lambda_1)^2,(\lambda_{k-1} - \lambda_{n})^2 \}. \] \end{proposition} \begin{proof} We start by showing the existence of the roots of \eqref{eq:TSE}. Indeed, since $w_1(t ; \mu)$ is strictly increasing between consecutive poles and since $\lim_{t\to\lambda_i^{\pm}} w_1(t;\mu) = \mp\infty$ for $i=1,\ldots,m$, there exists a single root in any interval $(\lambda_j,\lambda_{j-1})$, $2 \le j \le m$. Additionally, since $\lim_{t \rightarrow \infty}w_1(t ; \mu) = 1 $ and by classical perturbation bounds for the eigenvalues of symmetric matrices~\cite[Corollary 8.1.6]{golub2012matrix}, there exists a single root in the interval $[\lambda_1, \lambda_1 + \rho]$. These are the $m$ roots of $w_1(t ; \mu)$ in the interval $[\lambda_m, \lambda_1 + \rho]$. For the bound in~\eqref{eq:error_first_order}, we expand \begin{equation} \label{eqn:summand_first_app} \frac{z_i^2}{\lambda_i - t} = \frac{z_i^2}{\mu-t} + \frac{z_i^2(\mu - \lambda_i)}{(\mu-t)(\lambda_i - t)} .
\end{equation} Therefore, by splitting the sum in \eqref{eqn:secular_equation} and using \eqref{eqn:summand_first_app} we have \begin{equation} \label{eq:se_split} w(t) = \underbrace{1 + \rho \sum_{i=1}^{m} {\frac{z_i^2}{\lambda_i - t}} + \rho \sum_{i=m+1}^{n} {\frac{z_i^2}{\mu -t}}}_{w_{1}(t ; \mu)} + \underbrace{\rho \sum_{i=m+1}^{n} \frac{z_i^2 (\mu - \lambda_i)}{(\mu - t)(\lambda_i-t)}}_{e(t;\mu)} . \end{equation} By Taylor expansion of \eqref{eqn:secular_equation}, \begin{equation} 0 = w(\widetilde{t}_k - (\widetilde{t}_k-t_k)) = w(\widetilde{t}_k) - (\widetilde{t}_k-t_k) \frac{d}{dt}w(\xi) = w_{1}(\widetilde{t}_k ; \mu) + e(\widetilde{t}_k ; \mu ) - (\widetilde{t}_k-t_k) \frac{d}{dt}w(\xi) , \end{equation} where $\widetilde{t}_k, \xi \in (\lambda_{k},\lambda_{k-1})$. By definition, $w_{1}(\widetilde{t}_k ; \mu)=0$. In addition, the derivative of the secular equation does not have a real root, meaning that \begin{equation} \label{eqn:bnd_roots_differ} (\widetilde{t}_k-t_k) = \frac{e(\widetilde{t}_k;\mu)}{ \frac{d}{dt}w(\xi)} . \end{equation} For the error term $e(\widetilde{t}_k;\mu)$ we have \begin{equation} \label{eq:min1} \abs{e(\widetilde{t}_k;\mu)} \leq \frac{\rho}{\abs{\mu - \widetilde{t}_k}\abs{\lambda_{m+1}-\widetilde{t}_k} }\sum_{i=m+1}^{n} z_i^2 \abs{\lambda_i - \mu} \leq \frac{\rho}{\abs{\mu - \widetilde{t}_k}\abs{\lambda_{m+1}-\widetilde{t}_k} } \max_{m+1 \leq i \leq n}\abs{\lambda_i - \mu} , \end{equation} where the last inequality follows since $\|z\|_2 = 1$. In addition, $\widetilde{t}_k\ge\lambda_{m}$ and $\mu < \lambda_{m}$ so the denominator in~\eqref{eq:min1} is bounded from below by $|\mu - \lambda_{k}||\lambda_m - \lambda_{m+1}|$. 
Back to \eqref{eqn:bnd_roots_differ}, the derivative of the secular equation is \begin{equation} \frac{d}{dt}w(t) = \rho \sum_{i=1}^n \frac{z_i^2}{(\lambda_i-t)^2} , \end{equation} and thus \begin{equation} \abs{ \frac{d}{dt}w(t)} \ge \rho \sum_{i=1}^n \frac{z_i^2}{(\lambda_i-t)^2} \ge \rho \min_{1\le i\le n} \left\lbrace \frac{1}{(\lambda_i-t)^2}\right\rbrace \sum_{i=1}^n z_i^2 = \rho \left\lbrace \max_{1\le i\le n}(\lambda_i-t)^2 \right\rbrace^{-1} . \end{equation} Therefore, \begin{equation} \abs{\frac{1}{ \frac{d}{dt}w(t)}} \le \frac{\max \{ (\lambda_{k} - \lambda_1)^2,(\lambda_{k-1} - \lambda_{n})^2 \} }{\rho } , \quad t\in ( \lambda_{k},\lambda_{k-1} ). \end{equation} \end{proof} \begin{remark} Proposition~\ref{prop:TSE_roots} describes the case of $\rho>0$. The case of $\rho<0$ is analogous, with one exception: the last root $\widetilde{t}_m$ is merely guaranteed to lie in the segment $[\mu,\lambda_{m}]$. Consequently, the constant $C_m$ cannot be bounded with the same arguments. \end{remark} We now address the issue of choosing $\mu$. A common assumption in many real-world applications is that the matrix $A$ is low-rank, and thus the unknown eigenvalues are zero, implying the choice $\mu = 0$. This is indeed the case for several important kernel matrices, as we will see in the next section. For matrices that are not low rank, the error term of Proposition~\ref{prop:TSE_roots} using $\mu = 0$ would be $O(|\lambda_{m+1}|)$ and we have no reason to believe that this will result in a good approximation. A better method for choosing $\mu$ would be to minimize the sum in the middle term of \eqref{eq:min1}. However, an analytic minimizer is not attainable in this case since both $\lambda_i$ and $z_i^2$, $i = m+1,\ldots,n$, are unknown. Shortly, we will devise an approximation of the secular equation for which an analytic minimizer can be calculated.
Nevertheless, assuming the trace of $A$ is available, an intuitive choice for $\mu$ that works well in practice and is also fast to compute is the mean of the unknown eigenvalues, which is accessible since \begin{equation} \label{eq:mu_mean} \mu_{mean} = \frac{\sum_{i=m+1}^n \lambda_i}{n-m} = \frac{\operatorname{tr}(A) - \sum_{i=1}^{m} \lambda_i}{n - m} . \end{equation} Following the proof of Proposition~\ref{prop:TSE_roots}, we are encouraged to try to improve the approximation to the eigenvalues of~\eqref{eq:prob} by using a higher order approximation for \eqref{eqn:summand_first_app}, namely, \begin{equation} \label{eqn:ImprovedExpansion} \frac{z_i^2}{\lambda_i - t} = \frac{z_i^2}{\mu-t} - \frac{z_i^2(\lambda_i - \mu)}{(\mu-t)^2} + \frac{z_i^2(\lambda_i - \mu)^2}{(\mu - t)^2(\lambda_i - t)} . \end{equation} Since $Aq_i = \lambda_i q_i$, we have \begin{equation} \sum_{i=m+1}^{n}z_i^2\lambda_i = \sum_{i=m+1}^{n}(q_i^Tv)(q_i^Tv)\lambda_i = \sum_{i=m+1}^{n}(v^T\lambda_iq_i)(q_i^Tv) = \sum_{i=m+1}^{n}(v^TAq_i)(q_i^Tv) , \end{equation} and thus \begin{equation} \label{eq:s} \sum_{i=m+1}^{n}z_i^2\lambda_i = v^TA \left( I - Q^{(m)} (Q^{(m)})^T \right) v \triangleq s , \end{equation} which is a known quantity. This analysis gives rise to the second order approximation of the secular equation \begin{equation} \label{eq:CTSE} w_{2}(t ; \mu) = 1 + \rho \left( \sum_{i=1}^{m} {\frac{z_i^2}{\lambda_i - t}} + \frac{1 - \sum_{i=1}^{m}{z_i^2}}{\mu-t} - \frac{s - \mu (1 - \sum_{i=1}^{m}{z_i^2})}{(\mu - t)^2} \right) . \end{equation} In this case, the roots of $w_{2}(t ; \mu)$ of \eqref{eq:CTSE} are at most $\max_{m+1 \leq i \leq n} (\lambda_i -\mu)^2 $ away from the roots of the original secular equation \eqref{eqn:secular_equation}. This is concluded in the next result, which is analogous to Proposition~\ref{prop:TSE_roots}. \begin{proposition} \label{prop:CTSE_roots} Let $\rho>0$. 
Then, there exist $m$ roots $\widetilde{t}_1,\ldots,\widetilde{t}_m$ of $w_{2}(t ; \mu)$ of \eqref{eq:CTSE}, such that \begin{equation} \label{eq:tse_second_order} \abs{ t_k-\widetilde{t}_k} \le C_k \max_{m+1 \leq j \leq n} (\lambda_j - \mu)^2 , \quad k=1,\ldots,m, \end{equation} where ${t_1},\ldots,{t_m}$ are the largest $m$ roots of \eqref{eqn:secular_equation}, and $C_k$ is a constant bounded by \[ (\lambda_{k} - \mu) ^{-2}(\lambda_m - \lambda_{m+1})^{-1} \max \{ (\lambda_{k} - \lambda_1)^2,(\lambda_{k-1} - \lambda_{n})^2 \} . \] \end{proposition} \begin{proof} This proof is similar to the proof of Proposition~\ref{prop:TSE_roots}. Here, we have $w(t) = w_{2}(t;\mu) + e(t;\mu)$ with \begin{equation} \label{eq:err2} e(t;\mu) = \rho \sum_{i=m+1}^{n} \frac{z_i^2 (\lambda_i - \mu)^2}{(\mu - t)^2(\lambda_i-t)} . \end{equation} Then, due to the additional factor of $\mu - t$ in the denominator of the error term~\eqref{eq:err2} compared to the error term of~\eqref{eq:se_split}, the bound for \eqref{eqn:bnd_roots_differ} has an additional factor of $(\lambda_k - \mu)^{-1}$ in the constant $C_k$. \end{proof} To conclude the above discussion on the two approximations of the secular equation, we present Figure~\ref{fig:se_and2truncatedSE}. In this figure, we construct a matrix of size $n=4$ with eigenvalues $0.1,0.2,0.3,0.4$ and random orthogonal eigenvectors. To form the truncated equations, we use $m=2$ and $\mu = \mu_{mean}$ of \eqref{eq:mu_mean}, which in this case satisfies $\mu=0.15$. The figure depicts the two truncated secular equations $w_1$ of \eqref{eq:TSE} and $w_2$ of \eqref{eq:CTSE}, for a rank-one update with $\rho>0$, alongside the original secular equation of \eqref{eqn:secular_equation}. The two roots that are approximated are on the white part of the figure.
We zoom in on a neighbourhood of the second root of the secular equation $t_2$, to observe that the root of the second order approximation is closer to $t_2$ than that of the first order approximation, as the theory suggests. The other two roots (that are not approximated) are on the grey part of the figure, where the asymptotic behaviour around $\mu$ of the two approximations is demonstrated. \begin{figure} \centering \includegraphics[width=.55\textwidth]{secular_draw_V4} \caption{The secular equation \eqref{eqn:secular_equation} and its two approximations: the first order $w_1$ of \eqref{eq:TSE} and the second order $w_2$ of \eqref{eq:CTSE}. The original matrix has four eigenvalues at $0.1,0.2,0.3$ and $0.4$ and a rank-one update with $\rho>0$. The approximations use $m=2$, and $\mu = \mu_{mean}=0.15$. In the lower right corner, we zoom in on a small neighbourhood of the second root.} \label{fig:se_and2truncatedSE} \end{figure} We address once again the choice of $\mu$. Setting $\mu = 0$ implies that the eigenvalues $\lambda_j$, $j=m+1,\ldots,n$ are assumed to be small. Then, according to Proposition~\ref{prop:CTSE_roots}, we get an improved error of $O(\lambda_{m+1}^2)$. Nevertheless, in this case of a second order approximation to the secular equation \eqref{eq:CTSE}, an improved method for choosing $\mu$ is possible by minimizing an upper bound on~\eqref{eq:err2}. For simplicity, we further assume that $\mu \leq \lambda_{m+1}$. Since the denominator in \eqref{eq:err2} depends on $t$ (specifically, on the yet-to-be-calculated approximate eigenvalues), we bound it by some constant that will apply to all eigenvalues simultaneously. One such bound is $e(t ; \mu) \leq \rho (\lambda_{m} - \lambda_{m+1})^{-3} \sum_{i=m+1}^{n} z_i^2 (\lambda_i - \mu)^2$, which holds for all $t \in (\lambda_{m}, \lambda_{1} + \rho)$. We would then like to minimize $\sum_{i=m+1}^{n}z_i^2(\lambda_i - \mu)^2$.
By standard methods we get the minimizer \begin{equation} \label{eq:mu_opt} \mu_\ast = \frac{\sum_{i=m+1}^nz_i^2\lambda_i}{ \sum_{i=m+1}^{n}z_i^2} = \frac{s}{1 - \sum_{i=1}^{m}z_i^2} , \end{equation} where $s$ is defined in~\eqref{eq:s}. The minimizer $\mu_\ast$ is essentially a weighted mean of the unknown eigenvalues (and thus obeys the assumption $\mu \leq \lambda_{m+1}$). Unlike $\mu_{mean}$, this variant does not require the knowledge of $\operatorname{tr}(A)$ but rather a few matrix-vector evaluations to calculate $s$ of~\eqref{eq:s}. Interestingly, note that when using $\mu = \mu_\ast$ we have \begin{equation} w_2(t ; \mu_\ast) = w_1(t ; \mu_\ast) , \end{equation} meaning that we have a second order approximation in both formulas. Next, we address the problem of estimating the eigenvectors. \subsection{Truncated formulas for the eigenvectors} In this section, we introduce two approximations to the eigenvectors formula~\eqref{eqn:EigenvaectorFormula}. These are analogous to the approximations to the secular equation from the previous section. The two approximations are designed to use only the $m$ leading eigenvalues and their eigenvectors, and differ in accuracy and time complexity. A naive way to truncate the eigenvectors formula \eqref{eqn:EigenvaectorFormula} is by calculating \begin{equation} \label{eq:naive_TEF} \widetilde{p_i} = Q^{(m)}(\Delta_i^{(m)})^{-1}(Q^{(m)})^Tv , \quad \Delta_i^{(m)} = \diag\left(\lambda_1 - t_i,\ldots,\lambda_m - t_i\right), \quad i=1,\ldots,m , \end{equation} followed by a normalization, where $t_i$ are the roots of the secular equation (the updated eigenvalues) in descending order. For now, we ignore the normalization and focus on the unnormalized vectors, namely (see \eqref{eqn:EigenvaectorFormula}), \begin{equation} \label{eqn:simplify_eigenvector_formula} \begin{split} {p_i} & = Q(\Lambda - t_i I)^{-1}Q^Tv \\ & = \begin{bmatrix} q_1 & \cdots
& q_n \end{bmatrix} \begin{bmatrix} \frac{\langle q_1, v \rangle}{\lambda_1 - t_i} \\ \vdots \\ \frac{\langle q_n, v \rangle}{\lambda_n - t_i} \\ \end{bmatrix} = \sum_{k=1}^{n} \frac{\langle q_k, v \rangle}{\lambda_k - t_i}q_k \\ & = \underbrace{ \sum_{k=1}^{m} \frac{\langle q_k, v \rangle}{\lambda_k - t_i}q_k}_{\text{known}} + \underbrace{ \sum_{k=m+1}^{n} \frac{\langle q_k, v \rangle}{\lambda_k - t_i}q_k}_{\text{unknown}} . \end{split} \end{equation} Note that the sum of unknown terms, without weights, is accessible as \begin{equation} \label{eqn:sum_unknown_proj} \sum_{k=m+1}^{n} \langle q_k, v \rangle q_k = \sum_{k=1}^{n} \langle q_k, v \rangle q_k - \sum_{k=1}^{m} \langle q_k, v \rangle q_k = v - \sum_{k=1}^{m}q_kq^T_kv = v - Q^{(m)} (Q^{(m)})^Tv . \end{equation} Again, we denote by $\mu$ a fixed parameter whose purpose is to approximate the unknown eigenvalues. Having the $m$ leading eigenvectors in $Q^{(m)}$, and recalling that $\Delta_i^{(m)} = \diag\left(\lambda_1-t_i,\ldots,\lambda_m-t_i \right)$, we define the first order truncated eigenvectors formula for $1\le i \le m$ as \begin{equation} \label{eqn:TEF} \widetilde{p_i} = Q^{(m)}(\Delta_i^{(m)})^{-1}(Q^{(m)})^Tv + \frac{1}{\mu - t_i}r , \quad r = v - Q^{(m)} (Q^{(m)})^Tv . \end{equation} The second order truncated eigenvectors formula, which is the eigenvectors analogue of \eqref{eq:CTSE}, is given by \begin{equation} \label{eq:CTEF} \widetilde{p_i} = Q^{(m)}(\Delta_i^{(m)})^{-1}(Q^{(m)})^Tv + \left(\frac{1}{\mu - t_i} + \frac{\mu}{(\mu - t_i)^2}\right)r - \frac{1}{(\mu - t_i)^2}Ar , \quad r = v - Q^{(m)} (Q^{(m)})^Tv . \end{equation} Note that $r$ and $Ar$ are constant vectors and can be computed once for all $1\le i \le m$. The error bounds of both formulas \eqref{eqn:TEF} and \eqref{eq:CTEF} are summarized in the following theorem.
\begin{theorem} \label{thm:err_bnd_trunc_vecs} Let $A = Q \Lambda Q^T $ be an ${n \times n}$ real symmetric matrix with $m$ known leading eigenvalues $\lambda_1 > \cdots > \lambda_m$ and known corresponding eigenvectors $q_1, \ldots, q_m$. The $m$ leading eigenvectors $p_1,\ldots,p_m$ of the rank-one update $A+\rho vv^T$, with $\norm{v} = 1$ and $\rho \in \mathbb{R}$, can be approximated by \eqref{eqn:TEF} or \eqref{eq:CTEF}, given their associated leading eigenvalues $t_1,\ldots,t_m$ and a fixed scalar $\mu$, such that the approximations satisfy: \begin{enumerate} \item For $\widetilde{p_i}$ of \eqref{eqn:TEF}, \begin{equation} \label{eqn:EVfirstbound} \norm{p_i - \widetilde{p_i}} \le C_i \max_{m+1 \leq j \leq n} | \lambda_{j} - \mu| , \quad C_i \le \abs{\mu -t_i}^{-1}\abs{\lambda_{m+1}-t_i}^{-1} . \end{equation} \item For $\widetilde{p_i}$ of \eqref{eq:CTEF}, \begin{equation} \label{eqn:EVsecondbound} \norm{p_i - \widetilde{p_i}} \le C_i \max_{m+1 \leq j \leq n}|\lambda_{j} - \mu|^2 , \quad C_i \le \abs{\mu -t_i}^{-2}\abs{\lambda_{m+1}-t_i}^{-1} . \end{equation} \end{enumerate} \end{theorem} \begin{proof} For~\eqref{eqn:EVfirstbound}, by \eqref{eqn:simplify_eigenvector_formula} and \eqref{eqn:TEF} we have \begin{equation*} e_i = \norm{p_i- \widetilde{p_i}} = \norm{\sum_{k=m+1}^{n} \frac{\langle q_k, v \rangle}{\lambda_k - t_i}q_k + \frac{1}{\mu - t_i}r }.
\end{equation*} Similarly to \eqref{eqn:summand_first_app}, $\frac{1}{\lambda_k - t_i} = \frac{1}{\mu-t_i} + \frac{(\mu - \lambda_k)}{(\mu-t_i)(\lambda_k - t_i)}$, and using \eqref{eqn:sum_unknown_proj} and the orthogonality of $q_i$, we have \begin{align} \label{eq:err_vec1} e_i^2 & = \norm{\sum_{k=m+1}^{n} \frac{(\mu - \lambda_k) \langle q_k, v \rangle}{(\mu - t_i)(\lambda_k - t_i)}q_k }^2 \nonumber = \sum_{k=m+1}^{n} \left( \frac{(\mu - \lambda_k) \langle q_k, v \rangle}{(\mu - t_i)(\lambda_k - t_i)} \right)^2\nonumber \\ & \leq \frac{1}{(\mu - t_i)^2(\lambda_{m+1} - t_i)^2} \sum_{k=m+1}^{n} (\mu - \lambda_k)^2 \langle q_k, v \rangle ^2 \nonumber \\ & = \frac{1}{(\mu - t_i)^2(\lambda_{m+1} - t_i)^2} \sum_{k=m+1}^{n} (\mu - \lambda_k)^2 z_k^2 . \end{align} Since $\|Q^Tv\| = 1$, taking the square root of~\eqref{eq:err_vec1} gives us the bound. For~\eqref{eqn:EVsecondbound}, we use~\eqref{eqn:ImprovedExpansion}, and recall that $Aq_k = \lambda_kq_k$ which in this case means \begin{equation*} \frac{\mu}{(\mu - t_i)^2}r - \frac{1}{(\mu - t_i)^2}Ar = \frac{1}{(\mu - t_i)^2} \sum_{k=m+1}^{n} \langle q_k, v \rangle (\mu - \lambda_k) q_k . \end{equation*} Thus, we have for the corrected formula \eqref{eq:CTEF} with exact eigenvalues, \begin{equation} \norm{p_i- \widetilde{p_i}}^2 = \norm{\sum_{k=m+1}^{n} \frac{(\mu - \lambda_k)^2 \langle q_k, v \rangle}{(\mu - t_i)^2(\lambda_k - t_i)}q_k }^2 = \sum_{k=m+1}^{n} \left( \frac{(\mu - \lambda_k)^2 \langle q_k, v \rangle}{(\mu - t_i)^2(\lambda_k - t_i)} \right)^2 , \end{equation} and the second claim follows as before. \end{proof} As with the eigenvalues, under the low rank assumption ($\mu = 0$), Theorem~\ref{thm:err_bnd_trunc_vecs} guarantees errors of $O(|\lambda_{m+1}|)$ and $O(\lambda_{m+1}^2)$ for \eqref{eqn:TEF} and \eqref{eq:CTEF}, respectively. 
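To make the two truncated eigenvector formulas concrete, here is a minimal NumPy sketch of \eqref{eqn:TEF} and \eqref{eq:CTEF} (function and variable names are ours; the second order variant additionally assumes that products $Ar$ can be formed):

```python
import numpy as np

def truncated_eigvecs(lam, Q_m, v, t, mu, A=None, order=1):
    """Approximate leading eigenvectors of A + rho*v*v^T from partial
    spectral information. Illustrative sketch, not the paper's code.

    lam   : m known leading eigenvalues of A, in descending order
    Q_m   : n x m matrix of the corresponding eigenvectors
    t     : approximations of the m leading updated eigenvalues
    order : 1 uses the first order formula; 2 uses the second order
            formula, which needs one extra matrix-vector product A @ r
    """
    lam = np.asarray(lam, dtype=float)
    z = Q_m.T @ v
    r = v - Q_m @ z                    # component of v outside span(Q_m)
    Ar = A @ r if order == 2 else None # computed once for all i
    P = np.empty((len(v), len(t)))
    for i, ti in enumerate(t):
        p = Q_m @ (z / (lam - ti))     # known part of the spectral sum
        if order == 1:
            p = p + r / (mu - ti)
        else:
            p = p + (1.0 / (mu - ti) + mu / (mu - ti)**2) * r \
                  - Ar / (mu - ti)**2
        P[:, i] = p / np.linalg.norm(p)  # normalize each approximation
    return P
```

Consistent with the theorem above, when the unknown eigenvalues cluster at $\mu$ both variants recover the updated eigenvectors essentially exactly (up to sign).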
The value of $\mu$ which minimizes the bound on the last term in \eqref{eq:err_vec1} is $\mu_\ast$ of \eqref{eq:mu_opt}, and is thus expected to provide a better approximation than $\mu = 0$. Experimental results have shown that the choice $\mu = \mu_{mean}$ of \eqref{eq:mu_mean} is competitive with $\mu_\ast$ while being slightly faster to compute. Note that the approximate formulas \eqref{eqn:TEF} and \eqref{eq:CTEF} will generally not produce an orthogonal set of vectors. In that case, a re-orthogonalization procedure may be used. This issue is discussed in Appendix~\ref{app:loss_orth}. \subsection{Algorithm summary} Given a parameter $\mu$, we have provided first and second order truncated approximations to the secular equation and corresponding formulas for the eigenvectors. As for $\mu$, we suggested three choices. If the matrix is low-rank, choose $\mu = 0$. Otherwise, choose either $\mu_\ast$ which minimizes the error term, or $\mu_{mean}$ which is faster to compute. We summarize the previous subsections in Algorithm~\ref{alg:trunc}, which computes the symmetric rank-one update with partial spectrum. \begin{algorithm}[ht] \caption{Rank-one update with partial spectrum} \label{alg:trunc} \begin{algorithmic}[1] \REQUIRE $m$ leading eigenpairs $\{(\lambda_i,q_i)\}_{i=1}^m$ of a symmetric matrix $A$, a vector $v \in \mathbb{R}^n$ with $\|v\| = 1$ and a scalar $\rho > 0$ \ENSURE An approximation $ \{(\widetilde{t_i},\widetilde{p_i})\}_{i=1}^m$ of the eigenpairs of $A + \rho vv^T$ \STATE Choose a parameter $\mu$ (e.g., $\mu=0$, \eqref{eq:mu_mean}, or \eqref{eq:mu_opt}).
\label{alg1:line1} \STATE Calculate the $m$ largest roots $\{\widetilde{t_i}\}_{i=1}^m$ of a truncated secular equation (either \eqref{eq:TSE} or \eqref{eq:CTSE}). \label{alg1:line2} \FORALL { $\{{q_i}\}_{i=1}^m$ } \STATE Find $\widetilde{p_i}$ by a truncated eigenvectors formula (either \eqref{eqn:TEF} or \eqref{eq:CTEF}). \label{alg1:line4} \ENDFOR \end{algorithmic} \end{algorithm} A complexity analysis of Algorithm~\ref{alg:trunc} is provided in Appendix~\ref{sec:analysis_1}. \section{Updating the graph Laplacian for out-of-sample extension} \label{sec:updating_problem} In this section, we introduce an application of the rank-one update scheme of Section~\ref{sec:rank_one_update} to the problem of out-of-sample extension of the graph Laplacian. We start by formulating the problem, and then justify the use of a rank-one update by proving that a single-point extension of the graph Laplacian is close to a rank-one perturbation. We conclude the section with a few algorithms, which are demonstrated numerically in Section~\ref{sec:numeric}. \subsection{Preliminaries and problem formulation} \label{sec:prel} We begin by introducing the notation and the model for the extension problem. Given a set of discrete points $\mathcal{X} = \{ x_i \}_{i=1}^n \subset \mathbb{R}^d $, we define a weighted graph whose vertices are the given points. An edge is added to the graph if its two vertices are ``similar''. The common ways of defining ``similar'' include: \begin{enumerate} \item $k$-nearest neighbours (kNN) -- Vertices $i$ and $j$ are connected iff $i$ is among the kNN of $j$ or vice versa. \item $\delta$-neighbourhood -- Vertices $i$ and $j$ are connected iff $\|x_i - x_j\| < \delta$ for some $\delta > 0$. \end{enumerate} Each edge in the graph is assigned a weight, usually determined by a kernel function. A kernel function $K$ is a symmetric function $K \colon \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R} $.
The weight on the edge between vertices $i$ and $j$ is set to $w_{ij} = K(x_i, x_j)$. A kernel is said to be radial if \begin{equation} \label{eqn:radialKer} K(x,y) = g \left(\norm{x-y} \right), \quad x,y \in \mathbb{R}^d , \end{equation} for a non-negative real function $g$. One common choice of a kernel is the heat kernel (also known as the Gaussian kernel), which induces the weights \begin{equation} \label{eqn:guass_ker} w_{ij} = \exp \big( -\frac{\|x_i - x_j\|^2}{\varepsilon} \big), \end{equation} for some fixed width parameter $\varepsilon > 0$. Given the weight matrix $W = \{w_{ij} \}$ and its corresponding (diagonal) degree matrix $D$ whose diagonal is $D_{ii} = \sum_{j=1}^{n}W_{ij}$, the graph Laplacian is typically defined as either $L=D^{-1}W$ (random walk graph Laplacian) or $L = D^{-\frac12}WD^{-\frac12}$ (symmetric normalized graph Laplacian) \cite{belkin2003laplacian}. Note that most authors define the symmetric normalized graph Laplacian as $L = I - D^{-\frac12}WD^{-\frac12}$. The latter definition of the graph Laplacian merely applies an affine transformation to the eigenvalues of the graph Laplacian and does not change the corresponding eigenvectors. Since we define our method to act on the largest eigenvalues, we prefer using $L = D^{-\frac12}WD^{-\frac12}$. Recall that $W$ and $L$ are $n \times n$ matrices where $n$ is the number of samples in $\mathcal{X}$. In the following, we consider only the case of the symmetric normalized graph Laplacian. Nevertheless, similar results can be obtained for the random walk graph Laplacian, as it satisfies a similarity relation with the symmetric graph Laplacian. Henceforth, unless otherwise stated, by referring to the ``graph Laplacian'' we mean the symmetric normalized graph Laplacian. \noindent We now formulate the out-of-sample extension of the graph Laplacian. Let $\mathcal{X} = \{ x_i \}_{i=1}^n \subset \mathbb{R}^d $ and let $x_0 \not\in \mathcal{X}$ be a new point in $\mathbb{R}^d $.
Denote by $L_0$ the graph Laplacian constructed from $\mathcal{X}$ using a given kernel. Assume the top $m$ eigenvalues and eigenvectors of $L_0$ are known ($m<n$). The out-of-sample extension problem is to find the top $m$ eigenpairs of $L_1$, the graph Laplacian constructed from $\mathcal{X} \cup \{x_0\}$. The out-of-sample extension problem is reduced to a symmetric rank-one update as follows. With a slight abuse of notation, we also denote by $L_0$ the original graph Laplacian to which we added $x_0$ as an isolated vertex. That is, $L_0$ is now an $(n+1) \times (n+1)$ matrix whose first row and column correspond to the point $x_0$: the first diagonal entry is $1$, and the remaining entries of the first row and column are $0$. Note that the dimensions of this augmented $L_0$ are identical to the dimensions of $L_1$. We will argue that the difference matrix $\Delta L = L_1 - L_0$ is very close to being rank-one (a claim that will be formulated and proven in the next section). Accordingly, taking $\rho = \lambda_1(\Delta L)$ (the leading eigenvalue of $\Delta L$) and its associated eigenvector $v = q_1(\Delta L)$, we estimate the leading eigenpairs of $L_1$ using the proxy \begin{equation} \label{eq:proxy} \widetilde{L}_1 = L_0 + \rho vv^T . \end{equation} An illustration of the out-of-sample extension problem is given in Figure~\ref{fig:newnode}. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{graph_disconnected.png} \caption{The graph corresponding to $L_0$, with $x_0$ (empty dot) disconnected.} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{graph_connected.png} \caption{The graph corresponding to $L_1$, with $x_0$ (empty dot) connected.} \end{subfigure} \caption{Adding a new sample point $x_0$ to the graph.
}\label{fig:newnode} \end{figure} \subsection{Updating the graph Laplacian is almost a rank-one perturbation} As described in Section~\ref{sec:prel}, the weights on the edges of the graph are determined by a kernel. For our subsequent claims, we will require that our kernel is radial with $g(0) > 0$, and that in some neighbourhood of $0$ its derivative is bounded, that is, $\abs{\frac{d}{dx}g} < M$ for some $M > 0$. These requirements are not too restrictive, as they are met by most commonly used kernels, such as the heat kernel. In the following analysis, we consider graphs constructed using $\delta$-neighbourhoods (see Section~\ref{sec:prel}). As we will see next, the analogue for kNN is straightforward. For $\delta$-neighbourhoods, we require the parameter $\delta$ to be ``small enough'', and more specifically, to satisfy $\delta < \frac{g(0)}{2M}$. This assumption is not too restrictive as the purpose of constructing similarity graphs is to model the local neighbourhood relationships between the data points~\cite{von2007tutorial}. We denote by $k$ the minimal number of neighbours of a vertex. In addition, we denote by $c_1 \ge 1$ a constant such that $c_1 \cdot k$ is the maximal number of neighbours of a vertex (we assume that $c_1$ is independent of $k$). Denote by $\sigma_i(X)$, $i=1,\ldots,n$, the singular values of a square symmetric matrix $X$ (in descending order). We now present the main theoretical result of this section. \begin{theorem} \label{thm:rankone} Under the assumptions and notation described above, let $L_0$ and $L_1$ be two graph Laplacians before and after the addition of a new vertex, respectively. Then, there exists a constant $\beta$, independent of $k$, such that \[ \sigma_1(L_1 - L_0) = 1 - \frac{\beta}{k} \qquad \text{ and } \qquad \sigma_i(L_1 - L_0) = \frac{\beta}{k}, \quad i \geq 2 .
\] \end{theorem} Theorem~\ref{thm:rankone} shows that for large enough $k$, $\sigma_1(L_1 - L_0) \approx 1$ and $\sigma_i(L_1 - L_0) \approx 0$, $ i \geq 2$. In other words, $\Delta L = L_1 - L_0$ is indeed close to being rank-one. \subsection{Proof of Theorem~\ref{thm:rankone}} The proof is divided into a few steps. First, we adapt a classical result from perturbation theory called Weyl's Theorem \cite{stewart1998perturbation} to our setting for an initial bound on the singular values of $\Delta L$. Then, we use our assumptions to derive, based on the specific structure of the graph Laplacian, the required constants and bounds to use in the main body of the proof. From classical perturbation theory we have the following result regarding the singular values of a matrix. \begin{theorem}[Weyl's Theorem] \label{thm:weyl} Let $S,E \in \mathbb{R}^{n \times n}$. Then, for all $1 \leq i \leq n$ we have \[ \abs{\sigma_i(S+E) - \sigma_i(S)} \leq \norm{E}_2. \] \end{theorem} As it turns out, for the special case where $S$ is diagonal, we can further improve the above estimate. \begin{theorem} \label{thm:extendweyl} Let $S\in \mathbb{R}^{n \times n}$ be a diagonal matrix whose diagonal entries are distinct, and let $E \in \mathbb{R}^{n \times n}$. Assume, without loss of generality, that the diagonal entries of $S$ are given in descending order of magnitude. Denote $E = \left( e_{ij}\right)$. Let $\eta > 0$ be such that $\|E\|_2 < \eta$. Then, for a small enough $\eta$, there exists $c_H = c_H(S, \eta) > 0$ independent of $E$ so that for all $1 \leq i \leq n$, \[ \abs{\sigma_i(S+E) - \sigma_i(S) - \operatorname{sign}(S_{ii})e_{ii}} \leq c_H \norm{E}^2_F . \] \end{theorem} The proof of Theorem~\ref{thm:extendweyl} is given in Appendix~\ref{sec:ProofOfWeyl2thm}. Recall that our aim is to bound the singular values of $\Delta L = L_1 - L_0$, that is of the difference matrix between the graph Laplacians before and after the insertion of a new vertex.
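Before carrying out the proof, the near-rank-one behaviour of $\Delta L$ can be observed numerically. The following minimal sketch (using a fully connected Gaussian-kernel graph; the sample size, kernel width, and all names are illustrative choices of ours) constructs the symmetric normalized graph Laplacians before and after inserting a vertex and inspects the singular values of their difference:

```python
import numpy as np

def sym_laplacian(X, eps):
    """Symmetric normalized graph Laplacian L = D^{-1/2} W D^{-1/2}
    for a fully connected Gaussian-kernel graph (illustrative)."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-d2 / eps)
    d = W.sum(axis=1)
    return W / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
X = rng.random((60, 2))          # existing samples
x0 = rng.random(2)               # new sample to insert
n = len(X)

# L1: Laplacian with x0 inserted as the first vertex
L1 = sym_laplacian(np.vstack([x0[None, :], X]), eps=0.5)

# L0: original Laplacian, augmented with x0 as an isolated vertex
L0 = np.zeros((n + 1, n + 1))
L0[0, 0] = 1.0
L0[1:, 1:] = sym_laplacian(X, eps=0.5)

# Delta L is close to rank one: sigma_1 = O(1), the rest are small
s = np.linalg.svd(L1 - L0, compute_uv=False)
```

In such runs, $\sigma_1$ is close to $1$ while $\sigma_2$ is an order of magnitude smaller, in line with the theorem.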
To apply Theorem \ref{thm:extendweyl}, we denote $S = \operatorname{diag}(\Delta L)$ and $E = \Delta L - S$. Assume that the indices of the vertices of the graph have been permuted such that the diagonal entries of $S$ are in descending order. Note that in our specific case, the diagonal entries of $E$ are in fact zero. Therefore, by Theorem~\ref{thm:extendweyl} there exists $c^\prime>0$ so that \begin{equation} \label{eqn:dls} \abs{\sigma_i(\Delta L) - \sigma_i(S)} \leq c^\prime \norm{E}^2_F. \end{equation} It is now clear that estimating $\norm{E}_F$ will provide us with the relation between the singular values of $\Delta L$ and $S$.
We start by examining $\Delta L$; its only nonzero elements are the ones affected by the introduction of the new vertex. There are at most $c_1k$ such rows, each consisting of at most $c_1k$ nonzero elements by assumption. Thus, the total number of elements changed in these rows is at most $c_1k \times c_1k = c_1^2k^2$. Due to symmetry, the same holds for the columns, and thus we have at most $c_1^2k^2 + c_1^2k^2 = 2c_1^2k^2$ changed entries. In other words, using the convention that $\operatorname{nnz}(X)$ is the number of nonzero elements of a matrix $X$, we have that \begin{equation} \label{eqn:boundnnz} \operatorname{nnz}(\Delta L) \leq (2c_1^2)k^2 . \end{equation} An element-wise estimate of the entries of the graph Laplacian, showing that they are of order $\frac{1}{k}$, is given next.
\begin{lemma} \label{lemma:bounding_elementwise} Let $L =(\ell_{i,j})$ be a graph Laplacian, calculated using $\delta$-neighbourhoods and a radial kernel $g$ with a bounded derivative $\abs{\frac{d}{dx}g} < M$ such that $g(0) > 2M\delta$. Then, \begin{equation} \frac{1}{c_1c} \cdot \frac{1}{k} < \ell_{ij} < \frac{c}{k} , \quad 1\le i,j \le n , \quad c = 1+ \frac{g(0)}{M\delta}. \end{equation} \end{lemma}
\begin{proof} Let $\alpha_{ij} = \norm{x_i - x_j}$.
Then, by Taylor's theorem with the Lagrange form of the remainder, for each entry $w_{ij}$ of the weight matrix $W$ there exists $\xi_{ij}$ such that \begin{equation} \label{eq:wij} w_{ij} = g(\|x_i - x_j\|) = g(0) + \frac{d}{dx}g(\xi_{ij})\alpha_{ij} . \end{equation} Since $\alpha_{ij} < \delta$ and $\abs{\frac{d}{dx}g} < M$, we have $g(0) - M\delta < w_{ij} < g(0) + M\delta$. On the other hand, $g(0) > 2M\delta$ implies $g(0) - M\delta > M\delta$, so we get the bounds \begin{equation} \label{eq:w_bounds} M\delta < w_{ij} < g(0) + M\delta. \end{equation} The $ij$-th entry of the graph Laplacian is \begin{equation} \label{eq:gl_entry} \ell_{ij} = \frac{w_{ij}}{\sqrt{\sum_p w_{pj}} \cdot \sqrt{\sum_p w_{ip}}} , \end{equation} where the two sums are taken over all the neighbours of the $i$-th and $j$-th vertices.
\noindent The number of neighbours of each vertex is at least $k$ so \begin{equation} \label{eq:sum_gl} \sum_p w_{pj} \ge k M \delta . \end{equation} Therefore, \begin{equation} \ell_{ij} < \frac{g(0) + M\delta}{\sqrt{kM\delta}\sqrt{kM\delta}} = \frac{g(0) + M\delta}{M\delta k} . \end{equation} Similarly, using the upper bound $c_1 k$ on the number of neighbours we get \begin{equation} \ell_{ij} > \frac{M\delta}{c_1(g(0) + M\delta)k} . \end{equation} \end{proof}
An immediate conclusion from Lemma~\ref{lemma:bounding_elementwise} is the following.
\begin{lemma} \label{lemma:entries} The entries of $\Delta L$ are of order $O(\frac1k)$ except for the first entry $(\Delta L )_{11}$ which is $-1 + O(\frac1k)$. \end{lemma}
\begin{proof} Denote by $l^0_{ij}$ and $l^1_{ij}$ the $(i,j)$ entry of $L_0$ and $L_1$, respectively $(1 \leq i,j \leq n + 1)$. By Lemma~\ref{lemma:bounding_elementwise}, for $(i,j) \neq (1,1)$ both $l^0_{ij}$ and $ l^1_{ij}$ are $O(\frac1k)$, and thus the entries of $\Delta L$, which are of the form $l^1_{ij} - l^0_{ij}$, are $O(\frac1k)$. In the case $i=j=1$, by construction $l^0_{11} = 1$ and thus $(\Delta L )_{11} = l^1_{11} - l^0_{11} = -1 + O(\frac1k)$.
\end{proof}
It follows that $\Delta L$ is dominated by its first entry, and so it is somewhat unsurprising that it is close to being rank-one. A sharper element-wise bound is given in the following lemma.
\begin{lemma} \label{lem:boundsize} The entries of $\Delta L$ that are not on the first row/column are smaller in magnitude than $\frac{c^2}{2k^2}$, $c = 1+ \frac{g(0)}{M\delta}$. \end{lemma}
The proof of Lemma~\ref{lem:boundsize} is given in Appendix~\ref{sec:ProofOfLemmaBoundSize}. According to~\eqref{eqn:boundnnz}, $\Delta L$ has at most $2c_1^2k^2$ nonzero elements. At most $2c_1k$ of those are on the first row and column, and their magnitude is at most $\frac{c}{k}$. The rest of the nonzero elements, in light of Lemma~\ref{lem:boundsize}, have magnitude of at most $\frac{c^2}{2k^2}$. The same bounds hold for the nonzero elements of $E = \Delta L - \operatorname{diag}(\Delta L)$: at most $2c_1k$ of them, of magnitude at most $\frac{c}{k}$, lie on the first row/column, while by \eqref{eqn:boundnnz} and Lemma~\ref{lem:boundsize} there are at most $2c_1^2k^2$ remaining nonzero elements, each of magnitude at most $\frac{c^2}{2k^2}$. Therefore, we can bound the Frobenius norm of $E$ as \begin{equation} \|E\|_F^2 \leq 2c_1k \cdot \Bigg( \frac{c}{k} \Bigg) ^2 + 2c_1^2k^2 \Bigg( \frac{c^2}{2k^2} \Bigg) ^2 = \frac{2c^2c_1}{k} + \frac{c^4c_1^2}{2k^2} < \frac{2c^4c_1^2}{k} + \frac{c^4c_1^2}{k} = \frac{3c^4c_1^2}{k} . \end{equation} Namely, \begin{equation} \|E\|_F \leq \sqrt{3}c^2c_1\frac{1}{\sqrt{k}} . \end{equation}
We finally prove our main theorem.
\begin{proof}[Proof of Theorem~\ref{thm:rankone}] Recall that $S = \operatorname{diag}(\Delta L)$.
Denoting $\widetilde{c} = 3c^4c_1^2c'$, by \eqref{eqn:dls}, \begin{equation} \label{eq:bound} |\sigma_i(\Delta L) - \sigma_i(S)| < c' \|E\|_F^2 < c' \cdot \Bigg( \frac{\sqrt{3}c^2c_1}{\sqrt{k}} \Bigg)^2 = \frac{3c^4c_1^2c'}{k} = \frac{\widetilde{c}}{k} , \quad 1 \leq i \leq n . \end{equation} The largest singular value of $S$ is the absolute value of its largest-magnitude entry, and by Lemma~\ref{lemma:entries} we have $\sigma_1 (S) = |(\Delta L)_{11}| $ for large enough $k$. By Lemma~\ref{lemma:bounding_elementwise} and~\eqref{eq:bound}, \begin{equation} \sigma_1 (\Delta L) < \sigma_1 (S) + \frac{\widetilde{c}}{k} < 1 - \frac{1}{c_1ck} + \frac{\widetilde{c}}{k} = 1 - \frac{1 - \widetilde{c}c_1c}{c_1ck} , \end{equation} and \begin{equation} \sigma_1 (\Delta L) > \sigma_1 (S) - \frac{\widetilde{c}}{k} > 1 - \frac{c}{k} - \frac{\widetilde{c}}{k} = 1 - \frac{c + \widetilde{c}}{k} . \end{equation} Namely, $\sigma_1(\Delta L)$ is of order $1 - \frac{1}{k}$. The other singular values of $S$ are the absolute values of its other diagonal entries, which are at most $\frac{c}{k}$ by Lemma~\ref{lemma:bounding_elementwise}. Thus, by~\eqref{eq:bound} we have \begin{equation} \sigma_i (\Delta L) < \sigma_i (S) + \frac{\widetilde{c}}{k} < \frac{c}{k} + \frac{\widetilde{c}}{k} = \frac{ \widetilde{c} + c}{k} , \quad i \geq 2 , \end{equation} which shows that $\sigma_i (\Delta L)$ is of order $\frac{1}{k}$ as required. \end{proof}
\subsection{Rank-one update and error analysis} \label{sec:alg_gl}
We next discuss the adjustments required for applying the rank-one update of Algorithm~\ref{alg:trunc} to the out-of-sample extension problem. We wish to find the best rank-one approximation of $\Delta L = L_1 - L_0$, which we denote by $(\Delta L)_1$. Such an approximation requires recovering the largest singular value of $\Delta L$, denoted by $\sigma_1(\Delta L)$, and its corresponding left and right singular vectors, denoted by $v_L$ and $v_R$, respectively. Denote by $(\rho, v)$ the top eigenpair of $\Delta L$, that is, the eigenpair whose eigenvalue is of the largest magnitude.
Since $\Delta L \neq 0$ is symmetric, we have $\sigma_1(\Delta L) = |\rho|$. If $\rho > 0$, then $\sigma_1(\Delta L) = \rho$ and $v_L = v_R = v$, while if $\rho < 0$, then $\rho = -\sigma_1(\Delta L)$ and $v_L = -v_R = v$. Thus, in both cases, the best rank-one approximation of $\Delta L$ is \begin{equation} \label{eqn:deltaL1} (\Delta L)_1 = \sigma_1(\Delta L)v_Lv_R^T = \rho vv^T. \end{equation}
The out-of-sample extension algorithm, together with a perturbation correction that will be introduced shortly, is described in Algorithm~\ref{alg:extension_algo_corr}. The approximated eigenpairs returned by the algorithm are affected by two types of error: the error induced by truncating the rank-one update equations, which was discussed in Section~\ref{sec:rank_one_update}, and the error induced by the rank-one approximation of $\Delta L$, which we examine now. To analyze the error of Algorithm~\ref{alg:extension_algo_corr} and to further improve our approximation of the updated eigenvalues and eigenvectors, we use two classical results from matrix perturbation theory. These results, Lemma~\ref{lemma:pert1} and Lemma~\ref{lemma:pert2}, are given without proofs, and the interested reader is referred to \cite[Chapter~4]{byron2012mathematics}. As before, we denote by $q_i(X)$ a normalized eigenvector that is associated with the $i$-th largest eigenvalue of $X$.
\begin{lemma} \label{lemma:pert1} Let $A, B \in \mathbb{R}^{n \times n}$ be symmetric matrices. The following holds for all $1 \le i \le n$, \begin{enumerate} \item $|\lambda_i(A + B) - \lambda_i(A)| = O( \norm{B} )$. \item $\norm{q_i(A + B) - q_i(A)} = O( \norm{B} )$. \end{enumerate} \end{lemma}
\noindent Using Lemma~\ref{lemma:pert1}, let \begin{equation} \label{eqn:coreection_explained} A + B = L_1 = L_0 + \Delta L, \quad A = L_0 + (\Delta L)_1, \end{equation} where $(\Delta L)_1$ is defined in~\eqref{eqn:deltaL1}.
Then, the rank-one update~\eqref{eq:proxy} induces an error of order $\norm{\Delta L - (\Delta L)_1} =\sigma_2(\Delta L)$, and by Theorem~\ref{thm:rankone} we conclude that this error is of order \[ \sigma_2(\Delta L) = O\left(\frac{1}{k}\right) . \] Similarly to Section~\ref{sec:rank_one_update}, we can obtain a higher order approximation using a further correction, based on the following result.
\begin{lemma} \label{lemma:pert2} Let $A, B \in \mathbb{R}^{n \times n}$ be symmetric matrices. The following holds for all $1 \le i \le n$, \begin{enumerate} \item $\Big| \lambda_i(A + B) - \big[ \lambda_i(A) + q_i^T(A)Bq_i(A) \big]\Big| = O(\norm{B}^2)$. \item $\norm{q_i(A + B) - \big[ q_i(A) + \sum_{j \neq i}{\frac{q_j(A)^T B q_i(A)}{\lambda_i(A) - \lambda_j(A)}q_j(A)}\big]} = O(\norm{B}^2)$. \end{enumerate} \end{lemma}
Lemma~\ref{lemma:pert2} gives rise to an improved error bound due to the extra correction term. Using \eqref{eqn:coreection_explained}, the rank-one update~\eqref{eq:proxy} followed by the correction term obtained from Lemma~\ref{lemma:pert2} induces an error of order $\norm{\Delta L - (\Delta L)_1}^2 =\sigma_2^2(\Delta L)$, and by Theorem~\ref{thm:rankone}, we get that \[ \sigma_2^2(\Delta L) = O\left(\frac{1}{k^2}\right) . \] The perturbation correction is embedded in our method as described in Algorithm~\ref{alg:extension_algo_corr}. The complexity of Algorithm~\ref{alg:extension_algo_corr} is discussed in detail in Appendix~\ref{sec:comp_lag}.
\begin{algorithm}[ht] \caption{Out-of-sample extension of the graph Laplacian} \label{alg:extension_algo_corr} \begin{algorithmic}[1] \REQUIRE The original graph Laplacian $L_0$ and its top $m$ eigenpairs $ \left\lbrace \left( \lambda_i, q_i \right) \right\rbrace_{i=1}^m$. \\ A new sample point $x_0$.
\ENSURE Approximate top eigenpairs $ \left\lbrace \left( \widehat{t_i},\widehat{p_i} \right) \right\rbrace_{i=1}^m$ \STATE $L_1 \gets $ the graph Laplacian of $\mathcal{X} \cup \{x_0\}$ \label{alg2:line1} \STATE $\Delta L \gets L_1 - L_0$ \STATE $ \rho \gets \lambda_1(\Delta L)$ \label{alg2:line3} \STATE $ v \gets q_1 ( \Delta L)$ \label{alg2:line4} \STATE Approximate $\left\lbrace \left( \widetilde{t_i},\widetilde{p_i} \right) \right\rbrace_{i=1}^m $ using Algorithm~\ref{alg:trunc} with input $\left( \left\lbrace \left( \lambda_i, q_i \right) \right\rbrace_{i=1}^m, \rho, v \right)$ \label{alg2:line5} \STATE $C \gets L_1 - (L_0 + \rho v v^T)$ \label{alg2:line6} \FORALL[perturbation correction]{ $i = 1 ... m $ } \label{alg2:line7} \STATE $ \widehat{t_i} \gets \widetilde{t_i} + \widetilde{p_i^T} C \widetilde{p_i}$ \STATE $ \widehat{p_i} \gets \widetilde{p_i} + \sum_{j\neq i} { \frac{\widetilde{p_j}^TC\widetilde{p_i}}{\widetilde{t_i} - \widetilde{t_j}} \widetilde{p_j} }$ \label{alg2:line9} \ENDFOR \label{alg2:line10} \end{algorithmic} \end{algorithm} \section{Numerical examples} \label{sec:numeric} In this section, we provide various numerical examples to demonstrate empirically the theory developed in the previous sections. We use both synthetic datasets as well as real-world datasets. We begin by providing several numerical examples for the rank-one update formulas of Section~\ref{sec:rank_one_update}. These examples demonstrate the high accuracy of the methods, as well as their runtime efficiency. We continue by providing numerical examples for Section~\ref{sec:updating_problem}, showing numerically that inserting a new vertex to the graph Laplacian is almost a rank-one update to its matrix. We proceed by applying our algorithm for updating the eigenvalues and eigenvectors of the graph Laplacian to real-world data and measure the accuracy of our approach compared to other methods. All experiments were performed on an Intel i7 desktop with $8$GB of RAM. 
All algorithms were implemented in MATLAB. The code to reproduce the examples is available at https://github.com/roymitz/rank-one-update. \subsection{Truncated formulas for rank-one update (Section~\ref{sec:rank_one_update})} We start with a synthetic example to demonstrate empirically the use of the truncated secular equation and eigenvectors formula for the rank-one update problem. We generate a random symmetric matrix $A \in \mathbb{R}^{n \times n}$ with $n = 1000$ and $m = 10$ known leading eigenvalues whose magnitude is $O(1)$, together with their corresponding eigenvectors. The rest of the eigenvalues are unknown to the algorithm and are drawn from a normal distribution with mean $\hat{\mu}$ and standard deviation of $\sigma = 0.0001$. For the update, we use a random perturbation vector $v$. The goal is to recover the $m$ top eigenpairs of $A + vv^T$ for various values of $\hat{\mu}$. As a rough estimate for the unknown eigenvalues, our parameter $\mu$ is chosen to be either $\mu = 0$ or $\mu = \mu_\ast$ \eqref{eq:mu_opt}. The results are shown in Table~\ref{tbl:synt_errors_mu} and Figure~\ref{fig:sec_err_decay}. For the approximate eigenvalues, we measure the absolute errors of the first order method \eqref{eq:TSE} and the second order method \eqref{eq:CTSE}, for the two different choices of $\mu$. Note that for $\mu = \mu_\ast$, the two estimations are identical and thus appear in the same column of the table. For the eigenvectors, the norm of the approximation error is presented, for the two different methods: first order of \eqref{eqn:TEF} and second order of \eqref{eq:CTEF} using the two different choices of $\mu$. According to Section~\ref{sec:rank_one_update}, for the above setting we expect the case $\mu = 0$ to yield errors of magnitude $O(\hat{\mu})$ for the first order approximations, and of magnitude $O(\hat{\mu}^2)$ for the second order approximations. For $\mu = \mu_\ast$ we expect errors independent of $\hat{\mu}$. 
This may be observed in Table~\ref{tbl:synt_errors_mu}, but is even clearer in Figure~\ref{fig:sec_err_decay}, where we have a line with zero slope for $\mu_\ast$ (the error is independent of $\hat{\mu}$), a line with slope equal to one for $\mu = 0$ and first order approximation (linear error decay), and a line with slope equal to two for $\mu = 0$ and second order approximation (quadratic error decay).
\begin{table} \begin{adjustbox}{max width=\textwidth} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Eigenvalues} & \multicolumn{4}{c|}{Eigenvectors} \\ \hline $\hat{\mu}$ & \begin{tabular}[c]{@{}c@{}}first order\\ $\mu = 0$\end{tabular} & \begin{tabular}[c]{@{}c@{}}second order\\ $\mu = 0$\end{tabular} & \begin{tabular}[c]{@{}c@{}}first order\\second order \\ $\mu = \mu_\ast$\end{tabular} & \begin{tabular}[c]{@{}c@{}}first order\\ $\mu = 0$\end{tabular} & \begin{tabular}[c]{@{}c@{}}second order\\ $\mu = 0$\end{tabular} & \begin{tabular}[c]{@{}c@{}}first order\\ $\mu = \mu_\ast$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}second order\\ $\mu = \mu_\ast$\end{tabular} \\ \hline 1e-00 & 8.79e-02 & 3.82e-02 & 9.22e-10 & 1.79e-01 & 1.70e-01 & 3.45e-05 & 5.25e-08 \\ \hline 1e-01 & 4.20e-03 & 4.24e-04 & 4.42e-10 & 1.26e-02 & 7.90e-03 & 9.68e-06 & 8.27e-09 \\ \hline 1e-02 & 3.08e-04 & 2.77e-06 & 2.72e-10 & 7.83e-04 & 9.72e-05 & 8.28e-06 & 9.61e-09 \\ \hline 1e-03 & 3.00e-05 & 2.68e-08 & 2.61e-10 & 7.66e-05 & 1.00e-06 & 8.20e-06 & 9.88e-09 \\ \hline 1e-04 & 3.12e-06 & 5.83e-10 & 2.95e-10 & 1.17e-05 & 2.21e-08 & 8.72e-06 & 1.12e-08 \\ \hline \end{tabular} \end{adjustbox} \caption{Absolute errors for the synthetic example with $n = 1000, m = 10$, with unknown eigenvalues distributed normally with mean $\hat{\mu}$ and standard deviation $\sigma = 0.0001$.} \label{tbl:synt_errors_mu} \end{table}
\begin{figure} \begin{center} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{err_synt_vals2.eps} \caption{Eigenvalues absolute error}
\end{subfigure} \quad \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{err_synt_vecs2.eps} \caption{Eigenvectors absolute error} \end{subfigure} \end{center} \caption{Plot of $\log_2$-absolute error as a function of $\log_2\hat{\mu}$ for the synthetic example with $n = 1000, m = 10$, with unknown eigenvalues normally distributed with mean $\hat{\mu}$ and standard deviation $\sigma = 0.0001$. We can notice the three main trends: the error when using $\mu_\ast$ is independent of $\hat{\mu}$, the error when using $\mu = 0$ and the first order approximation decays linearly, and the error when using $\mu = 0$ and the second order approximation decays quadratically.} \label{fig:sec_err_decay} \end{figure}
The next example demonstrates the mean running time of Algorithm~\ref{alg:trunc} over $10$ independent runs, compared to MATLAB's function \texttt{eigs(L1, m)} for calculating the leading $m$ eigenvalues and eigenvectors. The setting of the example is as follows. A symmetric sparse random matrix with $O(100 \cdot n)$ non-zero entries and a sparse random vector $v$ with $O(100)$ entries were generated. We then used two variants of our algorithm to update the eigenpairs: the first order approximation with $\mu = 0$ (fastest variant) and the second order approximation with $\mu = \mu_\ast$ (slowest variant). Table~\ref{tbl:runtime} demonstrates the dependence of the running time on $n$ and $m$. While MATLAB's algorithm is accurate and ours is only approximate, we can see that for relatively small values of $n$ our algorithm is more than an order of magnitude faster. Due to the linear dependence on $n$, we can expect this difference to be even more dramatic for larger values of $n$, as witnessed in Figure~\ref{fig:runtime_n}. Additionally, the runtime differences between the two variants of our algorithm are negligible.
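The accuracy gap between keeping the old eigenvalues and applying a correction term mirrors the classical perturbation bounds of Lemma~\ref{lemma:pert1} and Lemma~\ref{lemma:pert2}: the former carries an $O(\norm{B})$ error, while adding the term $q_i^T(A)Bq_i(A)$ reduces it to $O(\norm{B}^2)$. A minimal NumPy sketch on random symmetric matrices (sizes and perturbation scale are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric matrix
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # small symmetric perturbation
B *= 1e-3 / np.linalg.norm(B, 2)                     # ||B||_2 = 1e-3

lam, Q = np.linalg.eigh(A)                           # eigenpairs of A (ascending)
lam_true = np.linalg.eigvalsh(A + B)

zeroth = np.abs(lam_true - lam).max()                # keep the old eigenvalues
# first order correction: lambda_i(A) + q_i^T B q_i
first = np.abs(lam_true - (lam + np.einsum('ji,jk,ki->i', Q, B, Q))).max()
print(zeroth, first)    # the corrected estimate is far more accurate
```

Since $B$ is tiny, matching the $i$-th eigenvalue of $A$ with the $i$-th eigenvalue of $A+B$ (both in ascending order) is safe here.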
\begin{table} \centering
\begin{subtable}{.45\textwidth} \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio} \begin{tabular}{|c|l|l|l|} \hline $n$ & \multicolumn{1}{c|}{MATLAB} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}first order\\ $\mu = 0$\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}second order\\ $\mu = \mu_\ast$\end{tabular}} \\ \hline 2000 & 0.47 $\pm$ 0.05 & 0.75 $\pm$ 0.02 & 0.85 $\pm$ 0.02 \\ \hline 4000 & 1.71 $\pm$ 0.10 & 0.79 $\pm$ 0.02 & 0.92 $\pm$ 0.02 \\ \hline 8000 & 5.34 $\pm$ 0.05 & 0.76 $\pm$ 0.03 & 0.88 $\pm$ 0.02 \\ \hline 16000 & 17.1 $\pm$ 0.12 & 0.86 $\pm$ 0.03 & 0.97 $\pm$ 0.02 \\ \hline 32000 & 50.6 $\pm$ 0.57 & 0.97 $\pm$ 0.03 & 1.11 $\pm$ 0.02 \\ \hline 64000 & 154 $\pm$ 1.01 & 1.23 $\pm$ 0.01 & 1.36 $\pm$ 0.01 \\ \hline \end{tabular} \end{adjustbox} \caption{Timing in seconds for varying $n$ with a fixed number of eigenpairs, $m = 10$.} \end{subtable}
\hspace{1cm}
\begin{subtable}{.45\textwidth} \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio} \begin{tabular}{|c|l|l|l|} \hline $m$ & \multicolumn{1}{c|}{MATLAB} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}first order\\ $\mu = 0$\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}second order\\ $\mu = \mu_\ast$\end{tabular}} \\ \hline 50 & 12.4 $\pm$ 0.13 & 0.45 $\pm$ 0.01 & 0.51 $\pm$ 0.01 \\ \hline 100 & 26.5 $\pm$ 0.34 & 0.95 $\pm$ 0.01 & 1.08 $\pm$ 0.01 \\ \hline 200 & 60.5 $\pm$ 0.33 & 1.98 $\pm$ 0.04 & 2.21 $\pm$ 0.04 \\ \hline 400 & 155 $\pm$ 4.05 & 4.74 $\pm$ 0.20 & 5.08 $\pm$ 0.42 \\ \hline 600 & 345 $\pm$ 4.05 & 7.14 $\pm$ 0.20 & 8.25 $\pm$ 0.42 \\ \hline 800 & 542 $\pm$ 4.05 & 10.9 $\pm$ 0.20 & 11.7 $\pm$ 0.42 \\ \hline \end{tabular} \end{adjustbox} \caption{Timing in seconds for a varying number of eigenpairs $m$ with a fixed matrix size $n = 20,000$.} \end{subtable}
\caption{Performance measurements.} \label{tbl:runtime} \end{table}
\begin{figure} \centering
\includegraphics[width=0.5\textwidth]{time_vs_n_fig.eps} \caption{Plot of $\log_2$-runtime as a function of $\log_2n$, for matrices of size $n$. We can see that the runtime difference between our algorithms and MATLAB's \texttt{eigs} increases with $n$.} \label{fig:runtime_n} \end{figure}
\subsection{Updating the graph Laplacian (Section~\ref{sec:updating_problem})}
We provide several examples using three real-world datasets to demonstrate the update of the symmetric graph Laplacian. The datasets are described in Table~\ref{tbl:datasets}.
\begin{table} \begin{center} \begin{tabular} { | l | l | l | p{7cm} |} \hline Name & Samples & Attributes & Description \\ \hline MNIST & 60,000 & 784 & Grey scale images of handwritten digits between $0$ and $9$ \\ \hline Poker Hand & 25,000 & 10 & Each record is a hand consisting of five playing cards drawn from a standard deck of 52 cards\\ \hline Yeast & 1484 & 8 & Information about a set of yeast cells \\ \hline \end{tabular} \end{center} \caption{Real-world datasets.} \label{tbl:datasets} \end{table}
In the first example, we demonstrate that inserting a new vertex into the graph yields an almost rank-one update of the graph Laplacian, as suggested by Theorem~\ref{thm:rankone}. In this example, for each dataset, we first randomly select a subset of it, and construct the symmetric graph Laplacian $L_0$ of the selected subset, leaving the first vertex out. Then, we connect this vertex to the graph, which results in a new graph Laplacian $L_1$. Finally, we compute the first and second singular values of $\Delta L = L_1 - L_0$. We repeat this experiment 10 times, each time with a different random subset of the data. The mean magnitudes of the singular values are shown in Table~\ref{tbl:sigma1} for various datasets and values of $k$. Clearly, one can observe, as predicted by the theory, that the first singular value is very close to $1$, while the second singular value is close to $0$.
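The same experiment is easy to reproduce on synthetic data, using the entry-wise construction $\ell_{ij} = w_{ij}/\sqrt{\sum_p w_{pj}}\sqrt{\sum_p w_{ip}}$ of \eqref{eq:gl_entry}. The NumPy sketch below (Gaussian kernel, $\delta$-neighbourhoods, all parameters chosen arbitrarily for illustration) isolates one vertex to form $L_0$, connects it to form $L_1$, and computes the leading singular values of $\Delta L$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta, eps = 400, 0.25, 1.0

X = rng.uniform(size=(n + 1, 2))                      # n old points + 1 new point
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-D2 / eps) * (D2 < delta ** 2)             # delta-neighbourhood graph

def sym_laplacian(W):
    d = W.sum(1)
    return W / np.sqrt(np.outer(d, d))                # entries w_ij / sqrt(d_i d_j)

W0 = W.copy()                                         # before insertion:
W0[0, :] = W0[:, 0] = 0.0                             # vertex 0 is isolated,
W0[0, 0] = 1.0                                        # only a self-loop, so L0[0,0] = 1

s = np.linalg.svd(sym_laplacian(W) - sym_laplacian(W0), compute_uv=False)
print(s[0], s[1])      # sigma_1 close to 1, sigma_2 much smaller
```

With a few hundred points the typical neighbourhood size $k$ is a few dozen, and the observed $\sigma_1 \approx 1 - O(1/k)$ and $\sigma_2 = O(1/k)$ match the theorem.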
\begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{2}{l|}{$k=5$} & \multicolumn{2}{l|}{$k=10$} & \multicolumn{2}{l|}{$k=20$} \\ \hline dataset & $\sigma_1$ & $\sigma_2$ & $\sigma_1$ & $\sigma_2$ & $\sigma_1$ & $\sigma_2$ \\ \hline MNIST (5K samples) & 0.91 & 0.09 & 0.97 & 0.05 & 0.99 & 0.02 \\ \hline poker (10K samples) & 0.94 & 0.09 & 0.98 & 0.05 & 0.98 & 0.02 \\ \hline yeast (1.5K samples) & 0.94 & 0.11 & 0.97 & 0.05 & 0.99 & 0.03 \\ \hline \end{tabular} \caption{The two largest singular values of $\Delta L$ for real-world datasets and three different values of $k$. As theory predicts, there is over an order of magnitude difference between the first and second singular values, indicating that indeed $\Delta L$ is close to being rank-one.} \label{tbl:sigma1} \end{center} \end{table}
Next, we demonstrate empirically the dependence of the singular values on $k$. Specifically, Theorem~\ref{thm:rankone} implies that, up to constants, \[ \log(1 - \sigma_1(\Delta L)) = \log(\sigma_i(\Delta L)) = -\log k , \quad 2 \le i \le n. \] Thus, the log of the singular values is expected to be linear in $\log k$ with slope $-1$. This is demonstrated for the poker dataset in Figure~\ref{fig:mnist_sings}. Similar results were obtained for the other datasets as well.
\begin{figure}[t] \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{poker_s1.eps} \caption{$1 - \sigma_1(\Delta L) = O\big(\frac{1}{k}\big)$. The dependence of $\operatorname{log}_2(1 - \sigma_1)$ on $\operatorname{log}_2 k$ is linear with slope $(-1)$.} \end{subfigure} \qquad \quad \begin{subfigure}[b]{0.42\textwidth} \includegraphics[width=\textwidth]{poker_s234_fig.eps} \caption{$\sigma_i(\Delta L) = O\big(\frac{1}{k}\big)$ for $i=2,3,4$.
The dependence of $\operatorname{log}_2(\sigma_i)$ on $\operatorname{log}_2 k$ is linear with slope $(-1)$.} \end{subfigure} \caption{Demonstration of Theorem~\ref{thm:rankone} for the poker dataset with $n=10,000$.}\label{fig:mnist_sings} \end{figure}
In the second example, we perform out-of-sample extension using several methods and compare their accuracy. As a benchmark for the eigenvector extension, we use the Nystr{\"o}m method \cite{bengio2004out}, which is widely used for this task. Additionally, we use the naive approach of merely keeping the old eigenvalues and eigenvectors as approximations to the new ones. Regarding our methods, we use both the first order (\eqref{eq:TSE}, \eqref{eqn:TEF}) and second order (\eqref{eq:CTSE}, \eqref{eq:CTEF}) approximations described in Section~\ref{sec:rank_one_update}. For our methods, we also apply the perturbation correction described in Algorithm~\ref{alg:extension_algo_corr}. To compare the performance of the different algorithms, we measure the angles between the true eigenvectors and their approximations, and report the maximal angle out of the $m$ angles calculated. The results reported are the mean error over $10$ independent experiments, where in each experiment a vertex is picked at random for the out-of-sample extension. The full comparison between the described methods is given in Table~\ref{tbl:comparison}, where for each dataset we also mention the width parameter $\varepsilon$ of the Gaussian (see \eqref{eqn:guass_ker}) that we used for constructing the graph Laplacian. The second column of Table~\ref{tbl:comparison} shows the absolute error of the eigenvalues for each method, except for the Nystr{\"o}m method, which we use only to extend the eigenvectors. The third column shows the absolute error of the eigenvalues after performing the perturbation correction. The fifth and sixth columns present the error of the eigenvector estimates, before and after the perturbation correction, respectively.
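The angle metric used in this comparison can be computed directly; a minimal sketch, made sign-invariant since eigenvectors are only defined up to sign:

```python
import numpy as np

def angle_deg(u, v):
    """Angle (in degrees) between two vectors, ignoring sign,
    since eigenvectors are defined only up to a sign flip."""
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# (anti-)parallel vectors -> ~0 degrees; orthogonal vectors -> ~90 degrees
print(angle_deg(np.array([1.0, 0.0]), np.array([-2.0, 0.0])))
print(angle_deg(np.array([1.0, 0.0]), np.array([0.0, 3.0])))
```

The `np.clip` guards against round-off pushing the cosine slightly outside $[-1, 1]$.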
We can see that our methods outperform the other approaches. As expected, the second order approximation using $\mu_{*}$ presents the best performance and is marked in bold.
\begin{table} \centering
\begin{tabular}{|l|l|l|l|l|} \hline MNIST & Eigenvalues & \begin{tabular}[c]{@{}l@{}}Eigenvalues\\ (after correction)\end{tabular} & Eigenvectors & \begin{tabular}[c]{@{}l@{}}Eigenvectors\\ (after correction)\end{tabular} \\ \hline No update & 7.85e-05 & - & 2.83\degree & - \\ \hline Nystr{\"o}m & - & - & 1.56\degree & - \\ \hline First order ($\mu = 0$) & 5.09e-05 & 7.73e-06 & 1.00\degree & 0.84\degree \\ \hline \textbf{Second order ($\mu = \mu_\ast$)} & \textbf{5.06e-05} & \textbf{7.70e-06} & \textbf{1.00\degree} & \textbf{0.82\degree} \\ \hline \end{tabular}
\bigskip
\begin{tabular}{|l|l|l|l|l|} \hline Poker & Eigenvalues & \begin{tabular}[c]{@{}l@{}}Eigenvalues\\ (after correction)\end{tabular} & Eigenvectors & \begin{tabular}[c]{@{}l@{}}Eigenvectors\\ (after correction)\end{tabular} \\ \hline No update & 1.97e-05 & - & 3.04\degree & - \\ \hline Nystr{\"o}m & - & - & 2.71\degree & - \\ \hline First order ($\mu = 0$) & 1.44e-05 & 6.18e-06 & 1.89\degree & 0.95\degree \\ \hline \textbf{Second order ($\mu = \mu_\ast$)} & \textbf{1.43e-05} & \textbf{6.15e-06} & \textbf{1.88\degree} & \textbf{0.94\degree} \\ \hline \end{tabular}
\bigskip
\begin{tabular}{|l|l|l|l|l|} \hline Yeast & Eigenvalues & \begin{tabular}[c]{@{}l@{}}Eigenvalues\\ (after correction)\end{tabular} & Eigenvectors & \begin{tabular}[c]{@{}l@{}}Eigenvectors\\ (after correction)\end{tabular} \\ \hline No update & 1.58e-04 & - & 2.40\degree & - \\ \hline Nystr{\"o}m & - & - & 0.65\degree & - \\ \hline First order ($\mu = 0$) & 1.11e-04 & 1.78e-06 & 0.42\degree & 0.35\degree \\ \hline \textbf{Second order ($\mu = \mu_\ast$)} & \textbf{1.11e-04} & \textbf{1.78e-06} & \textbf{0.41\degree} & \textbf{0.33\degree} \\ \hline \end{tabular}
\caption{Error comparison for three datasets: MNIST $(n = 1000, m = 5, k =
10, \varepsilon = 100)$, poker $(n = 3000, m = 5, k = 100, \varepsilon = 100)$ and yeast $(n = 1400, m = 5, k = 100, \varepsilon = 100)$. Best performance is marked in bold.} \label{tbl:comparison} \end{table}
In the last example, we demonstrate a practical rather than a numerical advantage of our method. Starting with $1500$ random samples from the MNIST dataset, we split this set into a train set consisting of $1000$ samples and a test set consisting of the remaining $500$ samples. Each point in the train set is in $\mathbb{R}^{784}$. We then embed the train set samples in $\mathbb{R}^{10}$ using Laplacian eigenmaps~\cite{belkin2003laplacian} with parameters $k=10$ and $\varepsilon = 100$ for constructing the graph Laplacian. For each sample in the test set, we perform an out-of-sample extension using four different methods: recalculation of the new embedding (which is the optimal, expensive method), no update, where the test points are naively embedded at the origin, the Nystr{\"o}m method, and our method. We then train a 15-NN classifier on the embedded vectors of the train set. We use this classifier to label the given test sample and compare it to the true label. Table~\ref{tbl:ml_comparison} summarizes the accuracy of each extension method on the test set. One can see that our method performs considerably better than the other approaches, and its performance is very close to the best possible results obtained by recalculating the entire embedding.
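The same pipeline can be imitated end-to-end on synthetic data. The sketch below replaces MNIST with two labelled Gaussian clusters and uses the expensive recomputation baseline for the extension, followed by 1-NN classification in the embedding; the cluster geometry, kernel width, and embedding dimension are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# two labelled clusters (a toy stand-in for MNIST)
train = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
test = np.array([[0.1, -0.2], [5.2, 4.9]])
test_labels = np.array([0, 1])

def embed(X, m=2, eps=10.0):
    """Laplacian-eigenmaps-style embedding: top m eigenvectors
    of the symmetric graph Laplacian D^{-1/2} W D^{-1/2}."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / eps)
    d = W.sum(1)
    L = W / np.sqrt(np.outer(d, d))
    lam, Q = np.linalg.eigh(L)       # ascending eigenvalues
    return Q[:, -m:]                 # eigenvectors of the m largest eigenvalues

# out-of-sample extension by full recomputation (the expensive baseline)
correct = 0
for x0, y0 in zip(test, test_labels):
    Y = embed(np.vstack([train, x0]))
    dists = np.linalg.norm(Y[:-1] - Y[-1], axis=1)
    correct += labels[np.argmin(dists)] == y0    # 1-NN in the embedding
print(correct / len(test))
```

Note that eigenvector sign ambiguity is harmless here because the train and test points are embedded jointly in each recomputation.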
\begin{table} \centering \begin{tabular}{|l|l|} \hline Method & Accuracy \\ \hline Recalculation (Optimal) &68\% \\ \hline No update & 12\% \\ \hline Nystr{\"o}m & 58\% \\ \hline Our method & 67\% \\ \hline \end{tabular} \caption{Accuracy for out-of-sample extension of the MNIST dataset for several methods.} \label{tbl:ml_comparison} \end{table} \section{Conclusions} \label{sec:conclusions} In this paper, we proposed an approximation algorithm for the rank-one update of a symmetric matrix when only part of its spectrum is known. We provided error bounds for our algorithm and showed that they are independent of the number of unknown eigenvalues. As implied both by theory and numerical examples, our algorithm performs best when the unknown eigenvalues are clustered (i.e., close to each other). On the other hand, numerical evidence shows that when the unknown eigenvalues are not clustered, the results may deteriorate but are still no worse than neglecting the unknown eigenvalues (low rank approximation). As a possible application, we proposed the out-of-sample extension of the graph Laplacian matrix, and demonstrated that our method provides superior results. \section*{Acknowledgments} This research was supported by THE ISRAEL SCIENCE FOUNDATION grant No. 578/14, by Award Number R01GM090200 from the NIGMS, and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 723991 - CRYOMATH). NS was partially supported by the Moore Foundation Data-Driven Discovery Investigator Award. \bibliographystyle{plain}
\section{Introduction}\label{sec:introduction} The proliferation of multi-agent environments in emerging applications such as internet-of-things (IoT), networked sensing and autonomous systems, together with the necessity of training machine learning models using distributed systems in federated learning, leads to a growing need of developing decentralized algorithms for optimizing finite-sum problems. Specifically, the goal is to minimize the global objective function: \begin{equation} \underset{{\x\,\in\, \mathbb{R}^d}}{\text{minimize}} \quad f(\x) : = \frac{1}{N} \sum_{\bm{z}\in\cM} \ell(\x; \bz), \end{equation} where $\bx \in \R^d$ denotes the parameter of interest, $\ell(\x; \bz)$ denotes the sample loss of the sample $\bz$, $\cM$ denotes the entire dataset, and $N = |\cM|$ denotes the number of data samples in the entire dataset. Of particular interest to this paper is the nonconvex setting, where $\ell(\x; \bz)$ is nonconvex with respect to $\x$, due to its ubiquity across problems in machine learning and signal processing, including but not limited to nonlinear estimation, neural network training, and so on. In a prototypical decentralized environment, however, each agent only has access to a disjoint subset of the data samples, and aims to work collaboratively to optimize $f(\x)$, by only exchanging information with its neighbors over a predetermined network topology. Assuming the data are distributed equally among all agents,\footnote{It is straightforward to generalize to the unequal splitting case with a proper reweighting.} each agent thus possesses $m := N / n$ samples, and $f(\x)$ can be rewritten as \begin{equation*} f(\x) = \frac{1}{n}\sum_{i=1}^n f_i(\x) , \end{equation*} where \begin{equation*}f_i(\x): = \frac{1}{m} \sum_{\bz \in \cM_i}\ell(\x; \bz) \end{equation*} denotes the local objective function averaged over the local dataset $\cM_i$ at the $i$th agent ($1\leq i\leq n$) and $\cM = \cup_{i=1}^n \cM_i$. 
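The decomposition of $f$ into the local objectives $f_i$ is just a re-grouping of the sample average, which can be checked numerically; the quadratic sample loss in the sketch below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, m, d = 8, 25, 3
Z = rng.standard_normal((n_agents * m, d))       # N = n*m samples, split equally
x = rng.standard_normal(d)

loss = lambda x, z: 0.5 * np.sum((x - z) ** 2)   # sample loss ell(x; z)

# global objective: average over all N samples
f_global = np.mean([loss(x, z) for z in Z])
# local objectives: average over each agent's disjoint shard of m samples
f_local = [np.mean([loss(x, z) for z in Z[i * m:(i + 1) * m]])
           for i in range(n_agents)]
assert np.isclose(f_global, np.mean(f_local))    # f = (1/n) sum_i f_i
```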
The communication pattern of the agents is specified via an undirected graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the set of all agents, and two agents can exchange information if and only if there is an edge in $\mathcal{E}$ connecting them. Unlike the master/slave setting, the decentralized setting, sometimes also called the network setting, does not admit a parameter server to facilitate global information sharing, and it is therefore much more challenging to understand and delineate the impact of the network graph. Roughly speaking, in a typical decentralized algorithm, the agents alternate between (1) communication, which propagates local information and enforces consensus, and (2) computation, which updates individual parameter estimates and improves convergence using information received from the neighbors. The resource efficiency of a decentralized algorithm can often be measured in terms of its computation complexity and communication complexity. For example, communication can be extremely time-consuming and become the top priority when the bandwidth is limited. On the other hand, minimizing computation, especially at resource-constrained agents (e.g., power-hungry IoT or mobile devices), is also critical to ensure the overall efficiency. Achieving a desired level of resource efficiency for a decentralized algorithm often requires careful and delicate trade-offs between computation and communication, as these objectives are often conflicting in nature. \subsection{Our contributions} \label{sub:contributions} The central contribution of this paper lies in the development of a new resource-efficient algorithm for nonconvex finite-sum optimization problems in a decentralized environment, dubbed \underline{DE}centralized \underline{ST}ochastic \underline{RE}cur\underline{S}ive gradient method\underline{S} (DESTRESS).
DESTRESS provably finds first-order stationary points of the global objective function $f(\x)$ with the optimal incremental first-order (IFO) oracle complexity, i.e., the complexity of evaluating sample gradients, matching state-of-the-art centralized algorithms, but at a much lower communication complexity compared to existing decentralized algorithms over a wide range of parameter regimes. To achieve resource efficiency, DESTRESS leverages several key ideas in the algorithm design. To save local computation, DESTRESS harnesses the finite-sum structure of the empirical risk function by performing stochastic variance-reduced recursive gradient updates at each agent \cite{nguyen2019finite,fang2018spider,wang2019spiderboost,li2019ssrgd,li2021zerosarah,li2021page}, an approach that is shown to be optimal in terms of IFO complexity in the centralized setting. To save communication, DESTRESS employs gradient tracking \cite{zhu2010discrete} with a few mixing rounds per iteration, which helps accelerate the convergence through better information sharing \cite{li2020communication}; the extra mixing scheme can be implemented using Chebyshev acceleration \cite{arioli2014chebyshev} to further improve the communication efficiency. In a nutshell, to find an $\epsilon$-approximate first-order stationary point, i.e.,
$ \E \big\| \nabla f(\x^{\mathsf{output}}) \big\|^2_2\leq \epsilon$, where $\x^{\mathsf{output}}$ is the output of DESTRESS, and the expectation is taken with respect to the randomness of the algorithm, DESTRESS requires: \begin{itemize} \item $O \big(m + (m/n)^{1/2} L / \epsilon \big)$ per-agent IFO calls,\footnote{The big-$O$ notation is defined in \Cref{sub:paper_organization_and_notation}.} which is {\em network-independent}; and \item $O\Big( \frac{1}{(1 - \alpha)^{1/2}} \cdot \big( (mn)^{1/2} + L / \epsilon \big) \Big)$ rounds of communication, \end{itemize} where $L$ is the smoothness parameter of the sample loss, $\alpha \in [0,1) $ is the mixing rate of the network topology, $n$ is the number of agents, and $m=N/n$ is the local sample size. \begin{table}[tb] \centering \resizebox{\textwidth}{!}{\begin{tabular}{c||c|c|c} \toprule Algorithms & Setting & Per-agent IFO Complexity & Communication Rounds \\ \hline\hline SVRG & \multirow{2}{*}{centralized} & \multirow{2}{*}{$N + \frac{N^{2/3} L}{\epsilon}$} & \multirow{2}{*}{n/a} \\ \cite{allen2016variance,reddi2016stochastic} & & & \\ \hline SCSG/SVRG+ & \multirow{2}{*}{centralized} & \multirow{2}{*}{$N + \frac{N^{2/3} L}{\epsilon}$} & \multirow{2}{*}{n/a} \\ \cite{lei2019non,li2018simple} & & & \\ \hline SARAH/SPIDER/SpiderBoost & \multirow{2}{*}{centralized} & \multirow{2}{*}{$N + \frac{N^{1/2} L}{\epsilon}$} & \multirow{2}{*}{n/a} \\ \cite{nguyen2019finite,fang2018spider,wang2019spiderboost} & & & \\ \hline SSRGD/ZeroSARAH/PAGE & \multirow{2}{*}{centralized} & \multirow{2}{*}{$N + \frac{N^{1/2} L}{\epsilon}$} & \multirow{2}{*}{n/a} \\ \cite{li2019ssrgd,li2021zerosarah,li2021page} & & & \\ \hline D-GET & \multirow{2}{*}{decentralized} & \multirow{2}{*}{$m + \frac{1}{(1 - \alpha)^2} \cdot \frac{m^{1/2} L}{\epsilon}$} & \multirow{2}{*}{Same as IFO} \\ \cite{sun2020improving} & & & \\ \hline GT-SARAH & \multirow{2}{*}{decentralized} & \multirow{2}{*}{$m + \max \Big(
\frac{1}{(1 - \alpha)^2}, \big(\frac mn \big)^{1/2}, \frac{(m/n + 1)^{1/3}}{1 - \alpha} \Big) \cdot \frac{L}{\epsilon}$} & \multirow{2}{*}{ Same as IFO } \\ \cite{xin2020near} & & & \\ \hline \hline DESTRESS & \multirow{2}{*}{decentralized} & \multirow{2}{*}{ $m + \frac{(m/n)^{1/2} L}{\epsilon}$} & \multirow{2}{*}{$\frac{1}{(1 - \alpha)^{1/2}} \cdot \Big( (mn)^{1/2} + \frac {L}{\epsilon} \Big)$} \\ (this paper) & & & \\ \bottomrule \end{tabular} } \caption{The per-agent IFO complexities and communication complexities to find $\epsilon$-approximate first-order stationary points by stochastic variance-reduced algorithms for nonconvex finite-sum problems. The algorithms listed in the first four rows are designed for the centralized setting, and the remaining D-GET, GT-SARAH and our DESTRESS are designed for the decentralized setting. The big-$O$ notation is omitted for simplicity. Here, $n$ is the number of agents, $m=N/n$ is the local sample size, $L$ is the smoothness parameter of the sample loss, and $\alpha \in [0,1) $ is the mixing rate of the network topology. \label{table:1st} } \end{table} \paragraph{Comparisons with existing algorithms.} \Cref{table:1st} summarizes the convergence guarantees of representative stochastic variance-reduced algorithms for finding first-order stationary points across centralized and decentralized communication settings. \begin{itemize} \item In terms of the computation complexity, the overall IFO complexity of DESTRESS ---when summed over all agents---becomes $$ n \cdot O \big(m + (m/n)^{1/2} L / \epsilon \big) = O \big(mn + (mn)^{1/2} L/ \epsilon \big)= O \big( N + N^{1/2} L/ \epsilon \big) ,$$ matching the optimal IFO complexity of centralized algorithms (e.g., SPIDER~\cite{fang2018spider}, PAGE~\cite{li2021page}) and distributed master/slave algorithms (e.g., D-ZeroSARAH~\cite{li2021zerosarah}).
However, the state-of-the-art decentralized algorithm GT-SARAH~\cite{xin2020near} does not achieve this optimal IFO complexity in most situations (see Table~\ref{table:1st}). To the best of our knowledge, DESTRESS is the first algorithm to achieve the optimal IFO complexity in the decentralized setting regardless of the network topology. \item When it comes to the communication complexity, it is observed that the communication rounds of DESTRESS can be decomposed into the sum of an $\epsilon$-independent term and an $\epsilon$-dependent term, e.g., $$ \underbrace{ \frac{ 1}{(1 - \alpha)^{1/2}} \cdot (mn)^{1/2} }_{\epsilon-\mathsf{independent}} + \underbrace{ \frac{1}{(1 - \alpha)^{1/2}} \cdot \frac{L}{\epsilon}}_{\epsilon-\mathsf{dependent}} ; $$ similar decompositions also apply to competing decentralized algorithms. DESTRESS significantly improves the $\epsilon$-dependent term of D-GET and GT-SARAH by at least a factor of $\frac{1}{(1-\alpha)^{3/2}}$, and therefore saves more communication over poorly-connected networks. Furthermore, the $\epsilon$-independent term of DESTRESS is also smaller than that of D-GET/GT-SARAH as long as the local sample size is sufficiently large, i.e., $m = \Omega \big( \frac{n}{1-\alpha} \big) $, which holds in a wide variety of application scenarios. To gain further insight into the communication savings of DESTRESS, \Cref{table:1st_communication} further compares the communication complexities of decentralized algorithms for finding first-order stationary points under three common network settings.
\end{itemize} \begin{table}[tb] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c||c|c|c} \toprule & Erd\H{o}s-R\'enyi graph & 2-D grid graph & Path graph \\ \hline \hline $1 - \alpha$ & \multirow{2}{*}{$1$} & \multirow{2}{*}{$\frac{1}{n \log n}$} & \multirow{2}{*}{$\frac{1}{n^2}$} \\ (spectral gap) & & & \\ \hline D-GET & \multirow{2}{*}{$m + \frac{m^{1/2} L}{\epsilon}$} & \multirow{2}{*}{$m + \frac{m^{1/2} n^2 L}{\epsilon}$} & \multirow{2}{*}{$m + \frac{m^{1/2} n^4L}{\epsilon}$} \\ \cite{sun2020improving} & & & \\ \hline GT-SARAH & \multirow{2}{*}{$m + \max \Big\{1,~ \big(\frac{m}{n} \big)^{1/3},~ \big(\frac mn \big)^{1/2} \Big\} \cdot \frac{L}{\epsilon}$} & \multirow{2}{*}{$m + \max \Big\{n^2,~ m^{1/3}n^{2/3},~ \big(\frac mn \big)^{1/2} \Big\} \cdot \frac{L}{\epsilon}$} & \multirow{2}{*}{$m + \max \Big\{n^4,~ m^{1/3}n^{5/3},~ \big(\frac mn \big)^{1/2} \Big\} \cdot \frac{L}{\epsilon}$} \\ \cite{xin2020near} & & & \\ \hline DESTRESS & \multirow{2}{*}{$(mn)^{1/2} + \frac {L}{\epsilon}$} & \multirow{2}{*}{$m^{1/2}n + \frac {n^{1/2} L}{\epsilon} $} & \multirow{2}{*}{$(m n^3)^{1/2} + \frac {nL}{\epsilon} $} \\ (this paper) & & & \\ \hline \hline Improvement factors & \multirow{2}{*}{$\big(\frac mn \big)^{1/2}$ } & \multirow{2}{*}{$\frac{m^{1/2}}{n}$ } & \multirow{2}{*}{$\frac{m^{1/2}}{n^{3/2}}$ } \\ for $\epsilon$-independent term & & & \\ \hline Improvement factors & \multirow{2}{*}{$\max \Big\{1,~ \big(\frac{m}{n} \big)^{1/3},~\big(\frac mn \big)^{1/2} \Big\}$ } & \multirow{2}{*}{$\max \Big\{ n^{3/2},~ m^{1/3}n^{1/6},~ \frac{m^{1/2}}{n} \Big\}$} & \multirow{2}{*}{$\max \Big\{ n^3,~ m^{1/3}n^{2/3},~ \frac{m^{1/2}}{n^{3/2}}\Big\}$} \\ for $\epsilon$-dependent term & & & \\ \bottomrule \end{tabular} } \caption{Detailed comparisons of the communication complexities of D-GET, GT-SARAH and DESTRESS under three graph topologies, where the last two rows delineate the improvement factors of DESTRESS over existing algorithms.
The communication savings become significant especially when $m = \Omega \big( \frac{n}{1-\alpha} \big) $. The complexities are simplified by plugging in the bound on the spectral gap $1-\alpha$ from \cite[Proposition 5]{nedic2018network}. Here, $n$ is the number of agents, $m=N/n$ is the local sample size, $L$ is the smoothness parameter of the sample loss, and $\alpha \in [0,1) $ is the mixing rate of the network topology. The big-$O$ notations and logarithmic terms are omitted for simplicity. \label{table:1st_communication} } \end{table} In sum, DESTRESS harnesses the ideas of variance reduction, gradient tracking and extra mixing in a sophisticated manner to achieve a scalable decentralized algorithm for nonconvex empirical risk minimization that is competitive in both computation and communication compared with existing approaches. \subsection{Additional related works}\label{sub:related_works} Decentralized optimization and learning have been studied extensively, with contemporary emphasis on the capability to scale gracefully to large-scale problems --- both in terms of the size of the data and the size of the network. For conciseness, we focus our discussions on the most relevant literature and refer interested readers to recent overviews \cite{nokleby2020scaling,xin2020general,xin2020decentralized} for further references. \paragraph{Stochastic variance-reduced methods.} Many variants of stochastic variance-reduced gradient based methods have been proposed for finite-sum optimization for finding first-order stationary points, including but not limited to SVRG \cite{Johnson2013,allen2016variance,reddi2016stochastic}, SCSG \cite{lei2019non}, SVRG+ \cite{li2018simple}, SAGA \cite{defazio2014saga}, SARAH \cite{nguyen2017sarah,nguyen2019finite}, SPIDER \cite{fang2018spider}, SpiderBoost \cite{wang2019spiderboost}, SSRGD \cite{li2019ssrgd}, ZeroSARAH \cite{li2021zerosarah} and PAGE \cite{li2021page,li2021short}.
SVRG/SVRG+/SCSG/SAGA utilize stochastic variance-reduced gradients as a corrected estimator of the full gradient, but can only achieve a sub-optimal IFO complexity of $O(N + N^{2/3} L / \epsilon)$. Other algorithms such as SARAH, SPIDER, SpiderBoost, SSRGD and PAGE adopt stochastic recursive gradients to improve the IFO complexity to $O(N + N^{1/2} L / \epsilon)$, which is optimal, as indicated by the lower bound provided in \cite{fang2018spider,li2021page}. DESTRESS also utilizes stochastic recursive gradients to perform variance reduction, which results in the optimal IFO complexity for finding first-order stationary points. \paragraph{Decentralized stochastic nonconvex optimization.} There has been a flurry of recent activity in decentralized nonconvex optimization in both the master/slave setting and the network setting. In the master/slave setting, \cite{cen2019convergence} simplifies the approaches in \cite{lee2017distributed} for distributing stochastic variance-reduced algorithms without requiring the sampling of extra data. In particular, D-SARAH \cite{cen2019convergence} extends SARAH to the master/slave setting but with a slightly worse IFO complexity and a sample-independent communication complexity. D-ZeroSARAH \cite{li2021zerosarah} obtains the optimal IFO complexity in the master/slave setting. In the network setting, D-PSGD \cite{lian2017can} and SGP \cite{assran2019stochastic} extend stochastic gradient descent (SGD) to solve nonconvex decentralized expectation minimization problems with sub-optimal rates. However, due to the noisy stochastic gradients, D-PSGD can only use a diminishing step size to ensure convergence, and SGP uses a small step size on the order of $1/K$, where $K$ denotes the total number of iterations. $D^2$ \cite{tang2018d2} introduces a variance-reduced correction term to D-PSGD, which allows a constant step size and hence reaches a better convergence rate.
Gradient tracking \cite{zhu2010discrete,qu2018harnessing} provides a systematic approach to estimate the global gradient at each agent, which allows one to easily design decentralized optimization algorithms based on existing centralized algorithms. This idea is applied in \cite{zhang2020decentralized} to extend SGD to the decentralized setting, and in \cite{li2020communication} to extend quasi-Newton algorithms as well as stochastic variance-reduced algorithms, with performance guarantees for optimizing strongly convex functions. GT-SAGA \cite{xin2020fast} further uses SAGA-style updates and reaches a convergence rate that matches SAGA \cite{defazio2014saga,reddi2016fasta}. However, GT-SAGA requires storing a variable table, which leads to a high memory complexity. D-GET \cite{sun2020improving} and GT-SARAH \cite{xin2020near} adopt equivalent recursive local gradient estimators to enable the use of constant step sizes without extra memory usage. The IFO complexity of GT-SARAH is optimal only in the restrictive range $m\gtrsim \frac{n}{(1-\alpha)^6}$, while DESTRESS achieves the optimal IFO complexity over all parameter regimes. In addition to variance reduction techniques, performing multiple mixing steps between local updates can greatly improve the dependence of the convergence rate on the network, which is equivalent to communicating over a better-connected communication graph and in turn leads to faster convergence (and better overall efficiency) due to better information mixing. This technique has been applied in a number of recent works, including \cite{berahas2018balancing,pan2019d,berahas2020convergence,li2020communication,hashemi2020benefits,iakovidou2021s}, and its effectiveness has been verified in both theory and experiments. Our algorithm also adopts extra mixing steps, which leads to better IFO and communication complexities.
\subsection{Paper organization and notation} \label{sub:paper_organization_and_notation} \Cref{sec:preliminaries} introduces preliminary concepts and the algorithm development, \Cref{sec:algorithm} presents the theoretical performance guarantees for DESTRESS, \Cref{sec:numerical} provides numerical evidence to support the analysis, and \Cref{sec:conclusion} concludes the paper. Proofs and experiment settings are postponed to the appendices. Throughout this paper, we use boldface letters to represent matrices and vectors. We use $\| \cdot \|_{\mathsf{op}}$ for the matrix operator norm, $\otimes$ for the Kronecker product, $\bI_n$ for the $n$-dimensional identity matrix and $\bm1_n$ for the $n$-dimensional all-one vector. For two real functions $f(\cdot)$ and $g(\cdot)$ defined on $\R^+$, we say $f(x) = O \big( g(x) \big)$ or $f(x) \lesssim g(x)$ if there exists some universal constant $M > 0$ such that $f(x) \leq M g(x)$. The notation $f(x) =\Omega \big( g(x) \big)$ or $f(x)\gtrsim g(x)$ means $g(x) = O\big(f(x) \big)$. \section{Preliminaries and Proposed Algorithm}\label{sec:preliminaries} We start by describing a few useful preliminary concepts and definitions in \Cref{sec:preliminary}, then present the proposed algorithm in \Cref{sub:algorithm_development}. \subsection{Preliminaries}\label{sec:preliminary} \paragraph{Mixing.} Information mixing between agents is conducted by updating each agent's local information with a weighted sum of the information from its neighbors, a process characterized by a mixing (gossiping) matrix. An important quantity associated with this matrix is the mixing rate, defined in \Cref{definition:mixing_matrix}. \begin{definition}[Mixing matrix and mixing rate] \label{definition:mixing_matrix} The {\em mixing matrix} is a matrix $\bW = [w_{ij}] \in \R^{n \times n}$, such that $w_{ij} = 0$ if agents $i$ and $j$ are not connected according to the communication graph $\cG$. Furthermore, $\bW \bm1_n = \bm1_n$ and $\bW^\top \bm1_n = \bm1_n$.
The {\em mixing rate} of a mixing matrix $\bW$ is defined as \begin{align} \alpha := \big\|\bW - \tfrac1n \bm1_n\bm1_n^\top \big\|_{\mathsf{op}}. \label{eq:def_alpha0} \end{align} \end{definition} The mixing rate indicates how fast information is shared across the network. For example, for a fully-connected network, choosing $\bW = \tfrac{1}{n}\bm1_n\bm1_n^\top$ leads to $\alpha = 0$. For general networks and mixing matrices, \cite[Proposition 5]{nedic2018network} provides comprehensive bounds on $1-\alpha$---also known as the spectral gap---for various graphs. In practice, FDLA matrices \cite{Xiao2004} are more favorable because they can achieve a much smaller mixing rate, but they usually contain negative elements and are not symmetric. Different from other algorithms that require the mixing matrix to be doubly-stochastic, our analysis can handle arbitrary mixing matrices as long as their row and column sums equal one. \paragraph{Dynamic average consensus.} It is by now well understood that merely using a naive mixing of local information, e.g., the local gradients of neighboring agents, does not lead to fast convergence of decentralized extensions of centralized methods \cite{nedic2009distributed,shi2015extra}. This is due to the fact that the quantity of interest in solving decentralized optimization problems is often iteration-varying, which naive mixing is unable to track; consequently, an accumulation of errors leads to either slow convergence or poor accuracy. Fortunately, the general scheme of dynamic average consensus \cite{zhu2010discrete} proves to be extremely effective in this regard for tracking the dynamic average of local variables over the course of iterative algorithms, and has been applied to extend many centralized algorithms to decentralized settings, e.g. \cite{nedic2017achieving,qu2018harnessing,di2016next,li2020communication}.
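To illustrate dynamic average consensus, the following Python sketch tracks the average of time-varying local signals over a small ring network; the doubly stochastic mixing matrix and the drifting signals are purely illustrative:

```python
# Hypothetical 4-agent ring network with a doubly stochastic mixing matrix.
n = 4
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]

def mix(v):                  # one round of communication with neighbors
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

def r(t):                    # illustrative time-varying local signals r_i(t)
    return [float(i) + 0.1 * t for i in range(n)]

s = r(0)[:]                  # initialize the trackers with r_i(0)
for t in range(1, 50):
    # dynamic average consensus: s <- W * (s + r(t) - r(t-1))
    delta = [r(t)[i] - r(t - 1)[i] for i in range(n)]
    s = mix([s[i] + delta[i] for i in range(n)])

avg_r = sum(r(49)) / n
# The network-wide average of the trackers equals the dynamic average ...
assert abs(sum(s) / n - avg_r) < 1e-9
# ... and each local tracker has converged close to it.
assert all(abs(si - avg_r) < 1e-6 for si in s)
```

The preserved average is a consequence of $\bW^\top \bm1_n = \bm1_n$, while repeated mixing drives every local tracker toward that average.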
This idea, also known as ``gradient tracking'' in the literature, essentially adds a correction term to the naive information mixing, which we will employ in the communication stage of the proposed DESTRESS algorithm to track the dynamic average of local gradients. \paragraph{Stochastic recursive gradient methods.} Stochastic recursive gradient methods \cite{nguyen2019finite,fang2018spider,wang2019spiderboost,li2019ssrgd} achieve the optimal IFO complexity in the centralized setting for nonconvex finite-sum optimization, which makes it natural to adapt them to the decentralized setting with the hope of maintaining the appealing IFO complexity. Roughly speaking, these methods use a nested loop structure to iteratively refine the parameter, where 1) a global gradient evaluation is performed at each outer loop, and 2) a stochastic recursive gradient estimator is used to calculate the gradient and update the parameter at each inner loop. In the proposed DESTRESS algorithm, this nested loop structure lends itself to a natural decentralized scheme, as will be seen momentarily. \paragraph{Additional notation.} For convenience of presentation, define the stacked vector $\x \in \R^{nd}$ and its average over all agents $\bbx \in \R^d$ as \begin{align} \label{eq:dev_vectors} \x := \big[ \x_1^\top, \cdots, \x_n^\top \big]^{\top}, \quad \bbx = \frac1n \sum_{i=1}^n \x_i. \end{align} The vectors $\bs$, $\bbs$, $\bu$, $\bbu$, $\bv$ and $\bbv$ are defined in the same fashion. In addition, for a stacked vector $\x \in \R^{nd}$, we introduce the distributed gradient $\nabla F(\x) \in \R^{nd}$ as \begin{align} \label{eq:def_gradient} \nabla F(\x) := [\nabla f_1(\x_1)^\top, \cdots, \nabla f_n(\x_n)^\top]^\top .
\end{align} \subsection{The DESTRESS Algorithm} \label{sub:algorithm_development} Detailed in \Cref{alg:network_sarah}, we propose a novel decentralized stochastic optimization algorithm, dubbed DESTRESS, for finding first-order stationary points of nonconvex finite-sum problems. Motivated by stochastic recursive gradient methods in the centralized setting, DESTRESS has a nested loop structure: \begin{enumerate} \item The outer loop adopts dynamic average consensus to estimate and track the global gradient $\nabla F(\bx^{(t)})$ at each agent in \eqref{eq:gradient_tracking}, where $\bx^{(t)}$ is the stacked parameter estimate (cf.~\eqref{eq:dev_vectors}). This helps to ``reset'' the stochastic gradient to a less noisy starting gradient $\bv^{(t), 0}=\bs^{(t)}$ of the inner loop. A key property of \eqref{eq:gradient_tracking}---which is a direct consequence of dynamic average consensus---is that the average of $\bs^{(t)}$ equals the dynamic average of local gradients, i.e. $\bbs^{(t)} = \tfrac1n \sum_{i\in[n]} \bs_i^{(t)} = \tfrac1n \sum_{i\in[n]} \nabla f_i(\x_i^{(t)})$. \item The inner loop refines the parameter estimate $\bu^{(t), 0} = \x^{(t)}$ by performing stochastic recursive gradient updates in \eqref{eq:inner_loop}, where the stochastic recursive gradient $\bg^{(t), s}$ is updated in \eqref{eq:inner_loop_sg} via sampling mini-batches from the local dataset. \end{enumerate} To complete the last mile, inspired by \cite{li2020communication}, we allow DESTRESS to perform a few rounds of mixing or gossiping whenever communication takes place, to enable better information sharing and faster convergence.
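The benefit of extra mixing can be sketched numerically: each application of $\bW$ contracts the consensus error by a factor of at most $\alpha$, so $K$ rounds contract it by $\alpha^K$. A small Python illustration, where the ring-network weights and the initial values are hypothetical:

```python
# Illustrative ring-network mixing matrix W; for this particular W the
# mixing rate alpha (second-largest eigenvalue magnitude) equals 1/2.
n, alpha = 4, 0.5
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]

def mix(v):
    return [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]

def deviation(v):            # Euclidean distance from exact consensus
    mean = sum(v) / n
    return sum((vi - mean) ** 2 for vi in v) ** 0.5

x = [3.0, -1.0, 0.5, 2.0]
d0 = deviation(x)
for K in range(1, 6):
    x = mix(x)
    # K rounds of mixing shrink the consensus error to at most alpha^K * d0.
    assert deviation(x) <= alpha ** K * d0 + 1e-12
```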
Specifically, DESTRESS performs $K_{\mathsf{out}}$ and $K_{\mathsf{in}}$ mixing steps for the outer and inner loops respectively per iteration, which is equivalent to using $$ \bW_{\mathsf{out}} = \bW^{K_{\mathsf{out}}} \qquad \mbox{and} \qquad \bW_{\mathsf{in}} = \bW^{K_{\mathsf{in}}}$$ as mixing matrices, and correspondingly to a network with better connectivity; see \eqref{eq:gradient_tracking}, \eqref{eq:inner_loop_var} and \eqref{eq:inner_loop_grad}. Note that \Cref{alg:network_sarah} is written in matrix notation, where the mixing steps are described by $\bW_{\mathsf{in}} \otimes \bI_d$ or $\bW_{\mathsf{out}} \otimes \bI_{d}$ and applied to all agents simultaneously. The extra mixing steps can be implemented by Chebyshev acceleration \cite{arioli2014chebyshev} with improved communication efficiency. \begin{algorithm}[t] \caption{DESTRESS for decentralized nonconvex finite-sum optimization} \label{alg:network_sarah} \begin{algorithmic}[1] \STATE {\textbf{input:} initial parameter $\bbx^{(0)}$, step size $\eta$, number of outer loops $T$, number of inner loops $S$ and number of communication steps $K_{\mathsf{in}}$ and $K_{\mathsf{out}}$. } \STATE {\textbf{initialization:} set $\x_i^{(0)} = \bbx^{(0)}$ and $\bs_i^{(0)} = \nabla f(\bbx^{(0)})$ for all agents $1 \leq i \leq n$.
} \FOR {$t = 1, \ldots, T$} \STATE{Set the new parameter estimate $\x^{(t)} = \bu^{(t-1), S}$.} \STATE{Update the global gradient estimate by aggregated local information and gradient tracking:} \begin{align} \bs^{(t)} =& (\bW_{\mathsf{out}} \otimes \bI_d) \Big( \bs^{(t-1)} + \nabla F \big( \x^{(t)} \big) - \nabla F \big( \x^{(t-1)} \big) \Big) \label{eq:gradient_tracking} \end{align} \STATE{Set $\bu^{(t), 0} = \bx^{(t)}$ and $\bv^{(t), 0} = \bs^{(t)}$.} \FOR{$s=1,...,S$} \STATE Each agent $i$ samples a mini-batch $\cZ_i^{(t), s}$ of size $b$ from $\cM_i$ uniformly at random, and then performs the following updates: \begin{subequations} \label{eq:inner_loop} \begin{align} \bu^{(t), s} &= (\bW_{\mathsf{in}} \otimes \bI_d) (\bu^{(t), s-1} - \eta \bv^{(t), s-1}), \label{eq:inner_loop_var} \\ \bg_i^{(t),s} & = \frac1b \sum_{\bz_i \in \cZ_i^{(t), s}} \Big( \nabla \ell (\bu_i^{(t), s}; \bz_i) - \nabla \ell (\bu_i^{(t), s-1}; \bz_i) \Big) + \bv_i^{(t), s-1} , \label{eq:inner_loop_sg} \\ \bv^{(t), s} &= (\bW_{\mathsf{in}} \otimes \bI_d) \bg^{(t),s} . \label{eq:inner_loop_grad} \end{align} \end{subequations} \vspace{-15pt} \ENDFOR \ENDFOR \STATE {\textbf{output:} ~$\x^{\mathsf{output}} \sim \text{Uniform} (\{\bu_i^{(t), s-1} | i \in [n], t \in [T], s \in [S] \})$.} \end{algorithmic} \end{algorithm} Compared with existing decentralized algorithms based on stochastic variance-reduced methods such as D-GET \cite{sun2020improving} and GT-SARAH \cite{xin2020near}, DESTRESS utilizes different gradient estimators and communication protocols: first, DESTRESS produces a sequence of reference points $\x^{(t)}$---which converge to a first-order stationary point of the global objective---to ``restart'' the inner loops periodically using fresher information; second, the communication and computation in DESTRESS are paced differently due to the introduction of extra mixing, which allows more flexible trade-offs between different types of resources.
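To make the control flow of \Cref{alg:network_sarah} concrete, the following Python sketch simulates DESTRESS on a toy scalar problem. The mixing matrix, data, sample loss (a simple least-squares loss, hence convex) and all hyperparameters are illustrative stand-ins, chosen only to exercise the outer gradient-tracking step \eqref{eq:gradient_tracking} and the inner recursive updates \eqref{eq:inner_loop}:

```python
import random

random.seed(1)
n, m, b = 4, 20, 4           # agents, local samples, mini-batch size
T, S, eta = 30, 5, 0.2       # outer/inner loop counts, step size (illustrative)
K_in = K_out = 2             # mixing rounds per communication

# Hypothetical ring-network mixing matrix (doubly stochastic).
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]

def mix(vec, K):             # apply the mixing matrix K times
    for _ in range(K):
        vec = [sum(W[i][j] * vec[j] for j in range(n)) for i in range(n)]
    return vec

# Illustrative sample loss l(x; z) = (x - z)^2 / 2 with gradient x - z.
data = [[random.gauss(i, 1.0) for _ in range(m)] for i in range(n)]
def grad_local(i, x):        # full local gradient of f_i at x
    return sum(x - z for z in data[i]) / m

x0 = 0.0
x = [x0] * n
s = [sum(grad_local(i, x0) for i in range(n)) / n] * n  # global gradient at x0
u, v = x[:], s[:]

for t in range(T):
    x_prev, x = x, u[:]                        # x^(t) = u^(t-1,S)
    s = mix([s[i] + grad_local(i, x[i]) - grad_local(i, x_prev[i])
             for i in range(n)], K_out)        # outer gradient tracking
    u, v = x[:], s[:]
    for _ in range(S):
        u_prev = u
        u = mix([u[i] - eta * v[i] for i in range(n)], K_in)
        g = []
        for i in range(n):
            batch = random.sample(data[i], b)
            diff = sum((u[i] - z) - (u_prev[i] - z) for z in batch) / b
            g.append(diff + v[i])              # stochastic recursive gradient
        v = mix(g, K_in)

# For this quadratic toy problem, the minimizer of f is the overall data mean.
target = sum(sum(rows) for rows in data) / (n * m)
avg_u = sum(u) / n
assert abs(avg_u - target) < 1e-3
```

For this toy quadratic the recursive estimator is exact and the average iterate converges to the global minimizer; the sketch illustrates the mechanics of the updates, not the nonconvex guarantees.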
\section{Performance Guarantees} \label{sec:algorithm} This section presents the performance guarantees of DESTRESS for finding first-order stationary points of the global objective function $f(\cdot)$. \subsection{Assumptions} We first introduce \Cref{assumption:lipschitz_gradient} and \Cref{assumption:optimality_lower_bounded}, which are standard assumptions imposed on the loss function. \Cref{assumption:lipschitz_gradient} implies that all local objective functions $f_i(\cdot)$ and the global objective function $f(\cdot)$ also have Lipschitz gradients, and \Cref{assumption:optimality_lower_bounded} guarantees that the minimization problem is well-posed. \begin{assumption}[Lipschitz gradient] \label{assumption:lipschitz_gradient} The sample loss function $\ell(\x; \bz)$ has $L$-Lipschitz gradients for all $\bz \in \cM$ and $\x \in \R^d$, namely, $\big\| \nabla \ell (\x; \bz) - \nabla \ell (\x'; \bz) \big\|_2 \leq L \| \x - \x' \|_2$, $\forall \x, \x' \in \R^d$ and $\bz \in \cM$. \end{assumption} \begin{assumption}[Function boundedness] \label{assumption:optimality_lower_bounded} The global objective function $f(\cdot)$ is bounded below, i.e., $f^* = \inf_{\bx\in\mathbb{R}^d} f(\bx ) > -\infty$. \end{assumption} Due to the nonconvexity, first-order algorithms are in general only guaranteed to converge to first-order stationary points of the global loss function $f(\cdot)$, defined below in \Cref{definition:1st}. \begin{definition}[First-order stationary point] \label{definition:1st} A point $\x \in \R^d$ is called an $\epsilon$-approximate first-order stationary point of a differentiable function $f(\cdot)$ if \begin{align*} \| \nabla f(\x) \|_2^2 \leq \epsilon . \end{align*} \end{definition} \subsection{Main theorem} \Cref{theorem:network_sarah_non_convex}, whose proof is deferred to Appendix~\ref{sec:proof_of_theorem_ref}, shows that DESTRESS converges in expectation to an approximate first-order stationary point, under suitable parameter choices.
\begin{theorem}[First-order optimality] \label{theorem:network_sarah_non_convex} Assume \Cref{assumption:lipschitz_gradient,assumption:optimality_lower_bounded} hold. Let $K_{\mathsf{in}}$, $K_{\mathsf{out}}$, $S$ and $b$ be positive, and let the step size $\eta > 0$ satisfy \begin{equation}\label{eq:step_size_condition} \eta \leq \frac{1}{L}\cdot \min\Big\{ \frac{(1 - \alpha^{K_{\mathsf{in}}})(1 - \alpha^{K_{\mathsf{out}}})}{10 \big(1 + \alpha^{K_{\mathsf{in}}} \alpha^{K_{\mathsf{out}}} \sqrt{n b} \big) \big( \sqrt{S/(nb)} + 1 \big)},\, \frac{(1 - \alpha^{K_{\mathsf{in}}})^3}{10 \alpha^{K_{\mathsf{in}}}},\, \frac{(1 - \alpha^{K_{\mathsf{in}}})^{3/2} (1 - \alpha^{K_{\mathsf{out}}})}{4 \sqrt{6} \alpha^{K_{\mathsf{in}}} \alpha^{K_{\mathsf{out}}}} \Big\}. \end{equation} The output produced by \Cref{alg:network_sarah} satisfies \begin{align} & \E \big\| \nabla f(\x^{\mathsf{output}}) \big\|^2_2 \leq \frac{4}{\eta TS} \Big( \E[ f(\bbx^{(0)})] - f^* \Big) . \label{eq:theorem_1} \end{align} \end{theorem} If the communication network is fully-connected, we can design the mixing matrix such that its mixing rate $\alpha=0$, in which case \Cref{theorem:network_sarah_non_convex} reduces to \cite[Theorem 1]{nguyen2019finite}, its counterpart in the centralized setting. For general decentralized settings with arbitrary mixing schedules, \Cref{theorem:network_sarah_non_convex} provides a comprehensive characterization of the convergence rate, where an $\epsilon$-approximate first-order stationary point can be found in expectation in a total of $$ TS = O\left(\frac{\E[ f(\bbx^{(0)})] - f^*}{\eta \epsilon} \right)$$ iterations, where $T$ is the number of outer iterations and $S$ is the number of inner iterations. Clearly, a larger step size $\eta$, as allowed by \eqref{eq:step_size_condition}, suggests a smaller iteration complexity.
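As a quick numerical sanity check, the right-hand side of \eqref{eq:step_size_condition} can be evaluated directly; a Python sketch in which all parameter values are illustrative rather than prescribed:

```python
import math

# Illustrative values only; L, alpha, n, b, S, K_in, K_out are problem- and
# network-dependent quantities appearing in the theorem.
L, alpha = 1.0, 0.5
n, b, S = 20, 4, 30
K_in, K_out = 2, 3

a_in, a_out = alpha ** K_in, alpha ** K_out   # effective mixing rates

term1 = ((1 - a_in) * (1 - a_out)
         / (10 * (1 + a_in * a_out * math.sqrt(n * b))
            * (math.sqrt(S / (n * b)) + 1)))
term2 = (1 - a_in) ** 3 / (10 * a_in)
term3 = ((1 - a_in) ** 1.5 * (1 - a_out)
         / (4 * math.sqrt(6) * a_in * a_out))

eta_max = min(term1, term2, term3) / L
assert 0 < eta_max <= 1 / L
# For these (well-connected) values, the first term is the binding one.
assert eta_max == term1 / L
```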
Examining the conditions in \eqref{eq:step_size_condition} more closely, the step size bound depends on three terms: the first term is similar to the requirement in the centralized setting with additional network-dependent corrections, while the remaining two terms depend only on the network connectivity. For well-connected networks where $1 - \alpha = \Omega(1)$, the first term will dominate---indicating that the iteration complexity is close to that in the centralized setting. For poorly-connected networks, however, the mixing parameters need to be designed carefully to ensure a favorable trade-off between the convergence speed and the communication cost. The following corollary, whose proof can be found in \Cref{sec:proof_of_corollay_1}, provides specific parameter choices, under which DESTRESS achieves the optimal per-agent IFO complexity. \begin{corollary}[Complexity for finding first-order stationary points] \label{corollary:network_sarah_non_convex_two} Under the conditions of \Cref{theorem:network_sarah_non_convex}, set \begin{equation}\label{eq:param_choices} S = \Big\lceil \sqrt{mn} \Big\rceil, \, b = \left\lceil \sqrt{m/n} \right\rceil,\, K_{\mathsf{out}} = \left\lceil \frac{\log (\sqrt{nb} + 1)}{(1 - \alpha)^{1/2}} \right\rceil,\, K_{\mathsf{in}} = \left\lceil \frac{\log 2}{(1 - \alpha)^{1/2}} \right\rceil, \, \eta = \frac{1}{160 L} \end{equation} and implement the mixing steps using Chebyshev's acceleration \cite{arioli2014chebyshev}. To reach an $\epsilon$-approximate first-order stationary point, DESTRESS takes $O\Big( m + \frac{(m/n)^{1/2} L}{\epsilon} \Big)$ IFO calls per agent, and $O\Big(\frac{1}{(1 - \alpha)^{1/2}} \cdot \big( (mn)^{1/2} + \frac {L}{\epsilon} \big) \Big)$ rounds of communication. \end{corollary} As elaborated in Section~\ref{sub:contributions}, DESTRESS achieves a network-independent IFO complexity that matches the optimal complexity in the centralized setting.
In addition, when the accuracy $\epsilon\lesssim L/(mn)^{1/2} $, DESTRESS reaches a communication complexity of $O\big( \frac{1}{(1 - \alpha)^{1/2}} \cdot \frac{L}{\epsilon} \big)$, which is independent of the sample size. \section{Numerical Experiments} \label{sec:numerical} This section provides numerical experiments on real datasets to evaluate our proposed algorithm DESTRESS against two existing baselines: DSGD \cite{nedic2009distributed,lian2017can} and GT-SARAH \cite{xin2020near}. To allow for reproducibility, all codes can be found at \begin{center} \href{https://github.com/liboyue/Network-Distributed-Algorithm}{https://github.com/liboyue/Network-Distributed-Algorithm}. \end{center} For all experiments, we set the number of agents $n = 20$, and split the dataset uniformly at random among the agents. We run each experiment on three communication graphs with the same data assignment and starting point: the Erd\H{o}s--R\'{e}nyi graph (with $p = 0.3$ as the connectivity probability), the grid graph, and the path graph. The mixing matrices are chosen as the symmetric fastest distributed linear averaging (FDLA) matrices \cite{Xiao2004} generated according to the different graph topologies, and the extra mixing steps are implemented by Chebyshev's acceleration \cite{arioli2014chebyshev} to save communications, as described earlier. To ensure convergence, DSGD adopts a diminishing step size schedule. All the parameters are tuned manually for best performance. We defer a detailed account of the baseline algorithms as well as the parameter choices to Appendix~\ref{sec:baseline}.
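As a concrete illustration of the extra mixing step, here is a minimal NumPy sketch of Chebyshev-accelerated averaging, under the assumption of a symmetric doubly stochastic mixing matrix $W$ whose non-principal eigenvalues lie in $[-\alpha,\alpha]$ (the function name and interface are ours):

```python
import numpy as np

def chebyshev_mix(W, x, K, alpha):
    """Apply the degree-K Chebyshev polynomial P_K(W) to x, where
    P_K(t) = T_K(t / alpha) / T_K(1 / alpha).

    Since P_K(1) = 1, the network average is preserved, while the
    deviation from consensus is damped by 1 / T_K(1 / alpha), which
    decays much faster in K than the plain power alpha**K.
    """
    if K == 0:
        return x
    # Three-term Chebyshev recursion for T_k(W / alpha) applied to x,
    # together with the scalar normalization T_k(1 / alpha).
    z_prev, z = x, (W @ x) / alpha
    t_prev, t = 1.0, 1.0 / alpha
    for _ in range(K - 1):
        z, z_prev = (2.0 / alpha) * (W @ z) - z_prev, z
        t, t_prev = (2.0 / alpha) * t - t_prev, t
    return z / t
```

On a ring of $5$ agents with lazy Metropolis weights, $5$ accelerated rounds drive the consensus error well below that of $5$ plain gossip rounds while keeping the average intact.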
\subsection{Regularized logistic regression} \label{sub:logistic_regression} To begin with, we employ logistic regression with nonconvex regularization to solve a binary classification problem using the Gisette dataset.\footnote{The dataset can be accessed at \href{https://archive.ics.uci.edu/ml/datasets/Gisette}{https://archive.ics.uci.edu/ml/datasets/Gisette}.} We split the Gisette dataset across $n=20$ agents, where each agent receives $m=300$ training samples of dimension $d=5000$. The sample loss function is given as \begin{align*} \ell(\x; \{ \bm f, l \}) = -l \log \Big(\frac{1}{1 + \exp( \bx^\top \bm f)} \Big) - (1 - l) \log \Big( \frac{\exp(\bx^\top \bm f)}{1 + \exp(\bx^\top \bm f)} \Big) + \lambda \sum_{i=1}^d \frac{x_i^2}{1 + x_i^2}, \end{align*} where $\{ \bm f, l \}$ represents a training tuple, $\bm f \in \R^d$ is the feature vector and $l \in \{0, 1\}$ is the label, and $\lambda$ is the regularization parameter. For this experiment, we set $\lambda = 0.01$. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/gisette-er} \\ (a) Erd\H{o}s--R\'{e}nyi graph \\ \vspace{0.05in} \includegraphics[width=\textwidth]{figures/gisette-grid} (b) Grid graph \\ \vspace{0.05in} \includegraphics[width=\textwidth]{figures/gisette-path} (c) Path graph \caption{The training loss and testing accuracy with respect to the number of communication rounds (left two panels) and gradient evaluations (right two panels) for DSGD, GT-SARAH and DESTRESS when training a regularized logistic regression model on the Gisette dataset. Due to the initial full-gradient computation, the gradient evaluations of DESTRESS and GT-SARAH do not start from $0$. \label{fig:gisette_expander} } \end{figure} \Cref{fig:gisette_expander} shows the loss and testing accuracy for all algorithms. DESTRESS significantly outperforms the other algorithms both in terms of communication and computation.
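For concreteness, the sample loss above (in its standard cross-entropy form, where both label terms carry a minus sign) and its gradient can be sketched as follows; the function name is ours:

```python
import numpy as np

def sample_loss_and_grad(x, f, l, lam=0.01):
    """Regularized logistic loss and its gradient (a sketch).

    Here p = 1 / (1 + exp(x^T f)) plays the role of the predicted
    probability of the label l = 1, and the nonconvex regularizer is
    lam * sum_i x_i^2 / (1 + x_i^2).
    """
    z = f @ x
    p = 1.0 / (1.0 + np.exp(z))
    loss = (-l * np.log(p) - (1 - l) * np.log(1 - p)
            + lam * np.sum(x**2 / (1 + x**2)))
    # d/dz of the cross-entropy term is sigmoid(z) - (1 - l),
    # and d/dx_i of the regularizer is 2 x_i / (1 + x_i^2)^2.
    grad = ((1 - p) - (1 - l)) * f + lam * 2 * x / (1 + x**2)**2
    return loss, grad
```

A finite-difference check confirms that the analytic gradient matches the loss.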
It is worth noting that DSGD converges very fast at the beginning of training, but cannot sustain the progress due to the diminishing schedule of step sizes. On the contrary, the variance-reduced algorithms can converge with a constant step size, and hence achieve better overall convergence. Moreover, due to the refined gradient estimation and information mixing designs, DESTRESS can tolerate a larger step size than GT-SARAH, which leads to the fastest convergence and best overall performance. In addition, a larger number of extra mixing steps leads to better performance when the graph topology becomes less connected. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/mnist-er} \\ (a) Erd\H{o}s--R\'{e}nyi graph \\ \vspace{0.05in} \includegraphics[width=\textwidth]{figures/mnist-grid} (b) Grid graph \\ \vspace{0.05in} \includegraphics[width=\textwidth]{figures/mnist-path} (c) Path graph \caption{The training loss and testing accuracy with respect to the number of communication rounds (left two panels) and gradient evaluations (right two panels) for DSGD, GT-SARAH and DESTRESS when training a one-hidden-layer neural network on the MNIST dataset. Due to the initial full-gradient computation, the gradient evaluations of DESTRESS and GT-SARAH do not start from $0$. \label{fig:nn} } \end{figure} \subsection{Neural network training}\label{sub:neural_network} Next, we compare the performance of DESTRESS against DSGD and GT-SARAH when training a one-hidden-layer neural network with $64$ hidden neurons and sigmoid activations for classifying the MNIST dataset \cite{deng2012mnist}. We evenly split the $60,000$ training samples among $20$ agents at random. \Cref{fig:nn} plots the training loss and testing accuracy against the number of communication rounds and gradient evaluations for all algorithms.
Again, DESTRESS significantly outperforms the other algorithms in terms of computation and communication costs due to the larger step size and extra mixing, which validates our theoretical analysis. \section{Conclusions}\label{sec:conclusion} In this paper, we proposed DESTRESS for decentralized nonconvex finite-sum optimization, and presented both its theoretical convergence guarantees and its empirical performance on real-world datasets. In sum, DESTRESS matches the optimal IFO complexity of centralized SARAH-type methods for finding first-order stationary points, and improves both the computation and communication complexities for a broad range of parameter regimes compared with existing approaches. A natural and important extension of this paper is to generalize and develop convergence guarantees of DESTRESS for finding second-order stationary points, which we leave to future work. \section*{Acknowledgements} This work is supported in part by ONR N00014-19-1-2404, by AFRL under FA8750-20-2-0504, and by NSF under CCF-1901199 and CCF-2007911. \bibliographystyle{alphaabbr}
\section*{Introduction} A huge amount of investigation in cosmology during the last quarter of a century has motivated theoreticians to develop new gravitational theories. Many of these theories are modifications of General Relativity (GR). Any modification of GR includes additional fields (such as a scalar field, torsion, a second metric tensor, etc.), or higher derivatives in the field equations, or extra spatial dimensions. It is impossible to avoid all of the above-mentioned features. Lovelock gravity \cite{lovelock} has no additional fields and no higher derivatives. It is based on\\ \textbf{Lovelock theorem}\\ If in an $n$-dimensional Riemannian space one needs a tensor $G_{\mu\nu}$ (the gravity field tensor) with the following features: \begin{enumerate} \item $G_{\mu\nu}$ is symmetric: $G_{\mu\nu}=G_{\nu\mu}$, \item $G_{\mu\nu}$ is divergence free: $\nabla^\mu G_{\mu\nu}=0$, \item $G_{\mu\nu}$ is a concomitant of the metric tensor and its first two derivatives:\\ $G_{\mu\nu}= G_{\mu\nu}(g_{\mu\nu},\d_\alpha g_{\mu\nu},\d_\alpha\d_\beta g_{\mu\nu})$, \end{enumerate} then the general expression for $G_{\mu\nu}$ is \begin{equation}\label{lovelock} G^\mu_{\phantom{\mu}\nu}=\sum_{p=1}^{m-1}\alpha_p G^{(p)\mu}_{\phantom{(p)\mu}\nu}+\Lambda \delta^{\mu}_{\phantom{\mu}\nu},\end{equation} where $$ m=\frac1 2 n,\qquad\mbox{if $n$ is even,}$$ $$ m=\frac1 2 (n+1),\qquad\mbox{if $n$ is odd,}$$ \begin{equation}\label{lovelocktensor} G^{(p)\mu}_{\phantom{(p)\mu}\nu}=\delta^{\mu\lambda_1\lambda_2 \cdots\lambda_{2p}}_{\nu\sigma_1\sigma_2\cdots\sigma_{2p}}R_ {\lambda_1\lambda_2}^{\phantom{\lambda_1\lambda_2}\sigma_1\sigma_2}R_{\lambda_3\lambda_4}^ {\phantom{\lambda_3\lambda_4}\sigma_3\sigma_4}\cdots R_{\lambda_{2p-1}\lambda_{2p}}^{\phantom{\lambda_{2p-1}\lambda_{2p}}\sigma_{2p-1}\sigma_{2p}},\end{equation} $\alpha_p$, $\Lambda$ are arbitrary constants, and $\delta^{\mu_1\cdots\mu_k}_{\nu_1\cdots\nu_k}$ is the multidimensional delta-symbol, which equals one if $\nu_1\cdots\nu_k$ is an even permutation of
$\mu_1\cdots\mu_k$, minus one if the permutation is odd, and zero otherwise. We will call the tensor (\ref{lovelocktensor}) the $p$-th order Lovelock tensor. It is easy to see that in 4-dimensional spacetime only the 0-th and 1-st order Lovelock tensors are nonzero, so \begin{equation} G_{\mu\nu}=\alpha(R_{\mu\nu}-\dfrac12Rg_{\mu\nu})+\beta g_{\mu\nu} \end{equation} and we recover the ordinary Hilbert-Einstein equations with a cosmological term. Hence, if we seek new results in Lovelock gravity, we should consider spacetimes with 5 or more dimensions. But in such a case we should explain the invisibility of the extra dimensions. We can do this by means of the Kaluza-Klein approach: the extra spatial dimensions are considered closed and small. But such an approach means that space is anisotropic. So we should look for anisotropic solutions of the gravity field equations. It is interesting to consider a maximally anisotropic space: isotropization of the 3-dimensional visible space might arise, or the invisible dimensions might behave in different ways. \section{Earlier-obtained anisotropic cosmological solutions} \subsection{Power-law solutions} Consider a metric tensor with power-law scale factors: \begin{equation}g_{\mu\nu}=\diag\{-1,t^{2p_1},\ldots,t^{2p_n}\},\end{equation} where $p_i,$ $i=1,\ldots,n$ are constants (power-law parameters). In the first order (i.e., in ordinary GR) we have the Kasner solution for empty space \cite{kasner}: \begin{equation}\label{Kasner-1}\sum_ip_i=1,\quad\sum_{i<j}p_ip_j=0,\end{equation} and the Jacobs solution for a maximally stiff fluid ($p=w\rho$) \cite{jacobs}: \begin{equation}\label{Jacobs-1}w=1,\quad\sum_ip_i=1,\quad \sum_{i<j}p_ip_j=-\dac{1}{4\alpha_1}\cdot\dac{8\pi G}{c^4}\varepsilon_0,\end{equation} where $\varepsilon_0$ is the initial matter density.
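Conditions of the type (\ref{Kasner-1}) are statements about elementary symmetric sums of the power-law parameters and are easy to check numerically; a small Python sketch (helper names are ours):

```python
from itertools import combinations
from math import prod

def kasner_conditions(ps):
    """Return (e_1, e_2) for the exponents ps: the total sum and the
    sum over pairs.  A first-order (GR) vacuum solution requires
    e_1 = 1 and e_2 = 0; the higher-order generalizations constrain
    e_1 and the order-2p elementary symmetric sum instead."""
    e1 = sum(ps)
    e2 = sum(a * b for a, b in combinations(ps, 2))
    return e1, e2

def elem_sym(ps, k):
    """Elementary symmetric sum of order k of the exponents."""
    return sum(prod(c) for c in combinations(ps, k))
```

For example, the classic axisymmetric Kasner exponents $(2/3, 2/3, -1/3)$ satisfy both first-order vacuum conditions.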
In the second order, the equations $\alpha_2{G^{(2)}}_{\mu\nu}=\varkappa T_{\mu\nu}$ have an analogue of the Kasner solution \begin{equation}\label{Kasner-2}\sum_ip_i=3,\quad\sum_{i<j<k<l}p_ip_jp_kp_l=0,\end{equation} discovered by N. Deruelle \cite{deruelle} and rediscovered by A. Toporensky and P. Tretyakov \cite{toporensky-tretyakov}, and an analogue of the Jacobs solution \begin{equation}\label{Jacobs-2}w=1/3,\quad\sum_ip_i=3,\quad \sum_{i<j<k<l}p_ip_jp_kp_l=-\dac{1}{96\alpha_2}\cdot\dac{8\pi G}{c^4}\varepsilon_0,\end{equation} discovered by the author \cite{kirnos-makarenko-pavluchenko-toporensky}. For an arbitrary $p$-th order ($\alpha_p{G^{(p)}}_{\mu\nu}=\varkappa T_{\mu\nu}$, hereinafter $\varkappa\equiv8\pi G/c^4$) the Kasner solution has been generalized by S. Pavluchenko \cite{pavluchenko} and, independently, by the author \cite{arbitrary_order}: \begin{equation}\label{Kasner}\sum_i p_i=2p-1,\quad\sum_{i_1<i_2<\ldots<i_{2p}}p_{i_1}p_{i_2}\cdots p_{i_{2p}}=0,\end{equation} and the Jacobs solution has been generalized by the author \cite{arbitrary_order}: \begin{equation}\label{Jacobs}\sum_i p_i=2p-1,\quad\sum_{i_1<i_2<\ldots<i_{2p}}p_{i_1}p_{i_2}\cdots p_{i_{2p}}=-\dac{1}{\alpha_p 2^p(2p)!}\cdot\dac{8\pi G}{c^4}\varepsilon_0.\end{equation} Unfortunately, all these solutions involve only one order of Lovelock gravity (without the lower orders) and, secondly, the solutions with matter require a specific value of the EoS parameter $w$. \subsection{Exponential solutions} Such shortcomings are absent for exponential solutions: \begin{equation}g_{\mu\nu}=\diag\{-1,e^{2H_1t},\ldots,e^{2H_nt}\},\end{equation} where $H_i,$ $i=1,\ldots,n$ are constants (Hubble parameters).
In the second order, the equations \begin{equation}{G^{(1)}}_{\mu\nu}+\alpha_2{G^{(2)}}_{\mu\nu}=\varkappa T_{\mu\nu}\end{equation} have the solution \cite{kirnos-pavluchenko-toporensky} \begin{equation}\begin{array}{l} \sum_i H_i=0,\\ \sum_i H_i^2=(1-3w)\varkappa_0,\vphantom{\dfrac12}\\ \sum_i H_i^4=\dfrac{w-1}{2\alpha_2}\varkappa_0+\dfrac{(1-3w)^2}{2}\varkappa_0^2 \end{array}\end{equation} (hereinafter $\varkappa_0\equiv\dfrac{8\pi G}{c^4}\varepsilon_0$). In the third order, the equations \begin{equation}\alpha_1{G^{(1)}}_{\mu\nu}+\alpha_2{G^{(2)}}_{\mu\nu}+\alpha_3{G^{(3)}}_{\mu\nu}=\varkappa T_{\mu\nu}\end{equation} have the solution \cite{arbitrary_order} \begin{equation}\label{3_order}\begin{array}{l} \sigma_1=0,\\ \sigma_4=\dfrac12\sigma_2^2-\dfrac{\alpha_1}{2\alpha_2}\sigma_2-\dfrac{1-5w}{16\alpha_2}\varkappa_0,\vphantom{\dfrac{\dfrac12}{\dfrac12}}\\ \sigma_6=\dfrac13\sigma_3^2+\dfrac14\sigma_2^3-\dfrac{3\alpha_1}{8\alpha_2}\sigma_2^2+\dfrac{1}{96}\(\dfrac{\alpha_1}{\alpha_2}-\dfrac{9(1-5w)}{2\alpha_2}\varkappa_0\) \sigma_2+\dfrac{1-3w}{384\alpha_3}\varkappa_0 \end{array}\end{equation} (hereinafter $\sigma_s\equiv\sum_i H_i^s,$ $s=1,2,\ldots$). \section{New solutions in 4th and 5th orders} In this paper we will obtain exponential solutions in the 4th and 5th orders of Lovelock gravity.
First, let us define the notation: \begin{equation}\sigma_s\equiv\sum_i H_i^s,\end{equation} \begin{equation}\zeta_p\equiv\sum_{i_1}H_{i_1}\sum_{i_2\neq i_1}H_{i_2}\sum_{i_3\neq i_1,i_2}H_{i_3}\cdots\sum_{i_p\neq i_1,i_2,\ldots i_{p-1}}H_{i_p},\end{equation} \begin{equation}\zeta_{p(j)}\equiv\sum_{i_1\neq j}H_{i_1}\sum_{i_2\neq j,i_1}H_{i_2}\sum_{i_3\neq j,i_1,i_2}H_{i_3}\cdots\sum_{i_p\neq j,i_1,i_2,\ldots i_{p-1}}H_{i_p},\end{equation} \begin{equation}\eta_{p(j)}\equiv\sum_{i_1\neq j}H_{i_1}^2\sum_{i_2\neq j,i_1}H_{i_2}\sum_{i_3\neq j,i_1,i_2}H_{i_3}\cdots\sum_{i_p\neq j,i_1,i_2,\ldots i_{p-1}}H_{i_p},\end{equation} where $s$, $p$ are arbitrary positive integers and the summation runs over all the spatial dimensions. Now let us find a relation between these quantities: \begin{equation}\label{zeta-recursion}\begin{array}{l}\displaystyle\zeta_p=\sum_{i_1}H_{i_1}\cdots\sum_{i_{p-1}\neq\ldots}H_{i_{p-1}}(\sigma_1-H_{i_1}-H_{i_2}-\cdots-H_{i_{p-1}})=\\ \displaystyle\quad{}=\sigma_1\zeta_{p-1}-(p-1)\sum_{i_1}H_{i_1}\cdots\sum_{i_{p-2}\neq\ldots}H_{i_{p-2}} \sum_{i_{p-1}\neq\ldots}H_{i_{p-1}}^2=\\ \displaystyle\quad{}=\sigma_1\zeta_{p-1}-(p-1)\sum_{i_1}H_{i_1}\cdots\sum_{i_{p-2}\neq\ldots}H_{i_{p-2}}(\sigma_2-H_{i_1}^2-\cdots-H_{i_{p-2}}^2)=\\ \displaystyle\quad{}=\sigma_1\zeta_{p-1}-(p-1)\sigma_2\zeta_{p-2}+(p-1)(p-2)\sum_{i_1}H_{i_1}\cdots\sum_{i_{p-3}\neq\ldots}H_{i_{p-3}}\sum_{i_{p-2}\neq\ldots}H_{i_{p-2}}^3=\\ \displaystyle\quad{}=\cdots= \sum_{k=1}^{p-2}(-1)^{k-1}\dac{(p-1)!}{(p-k)!}\sigma_k\zeta_{p-k}+(-1)^{p-2}(p-1)(p-2)\cdots 4\cdot 3\cdot 2\times\\ \displaystyle\quad{}\times\sum_{i_1}H_{i_1}(\sigma_{p-1}-H_{i_1}^{p-1})=\sum_{k=1}^p(-1)^{k-1}\dac{(p-1)!}{(p-k)!}\sigma_k\zeta_{p-k},\end{array}\end{equation} where we denote $\zeta_0\equiv 1$.
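The recursion derived above can be verified directly for small $p$ by comparing it against the defining sum over ordered tuples of distinct indices; a Python sketch (function names are ours):

```python
from itertools import permutations
from math import factorial, prod

def zeta_direct(H, p):
    """zeta_p from its definition: the sum over ordered p-tuples of
    distinct indices of the product of the corresponding H's."""
    return sum(prod(t) for t in permutations(H, p))

def zeta_recursive(H, p):
    """The same quantity via the recursion
    zeta_p = sum_{k=1}^p (-1)^(k-1) (p-1)!/(p-k)! sigma_k zeta_{p-k},
    with zeta_0 = 1 and sigma_k = sum_i H_i^k."""
    sigma = [None] + [sum(h**k for h in H) for k in range(1, p + 1)]
    zeta = [1.0]
    for q in range(1, p + 1):
        zeta.append(sum(((-1) ** (k - 1)
                         * (factorial(q - 1) // factorial(q - k)))
                        * sigma[k] * zeta[q - k]
                        for k in range(1, q + 1)))
    return zeta[p]
```

Note that the recursion holds for arbitrary $H_i$; the condition $\sigma_1=0$ is imposed only later.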
Thus, \begin{equation}\begin{array}{l}\zeta_0\equiv 1,\\ \displaystyle\zeta_p=\sum_{k=1}^p(-1)^{k-1}\dac{(p-1)!}{(p-k)!}\sigma_k\zeta_{p-k}.\end{array}\end{equation} Now express $\zeta_p$ (we use condition $\sigma_1=0$ obtained in \cite{kirnos-pavluchenko-toporensky}): \begin{equation}\label{zeta_p}\begin{array}{l}\zeta_0=1,\qquad\zeta_1=0,\qquad \zeta_2=-\sigma_2,\qquad\zeta_3=2\sigma_3,\qquad\zeta_4=3\sigma_2^2-6\sigma_4,\\ \zeta_5=-20\sigma_3\sigma_2+24\sigma_5,\qquad \zeta_6=-15\sigma_2^3+90\sigma_4\sigma_2+40\sigma_3^2-120\sigma_6,\vphantom{\dfrac12}\\ \zeta_7=210\sigma_2^2\sigma_3-504\sigma_2\sigma_5-420\sigma_3\sigma_4+720\sigma_7,\\ \zeta_8=105\sigma_2^4-1260\sigma_4\sigma_2^2-1120\sigma_3^2\sigma_2+3360\sigma_6\sigma_2+ 2688\sigma_5\sigma_3+1260\sigma_4^2-5040\sigma_8,\vphantom{\dfrac12}\\ \zeta_9=-2520\sigma_3\sigma_2^3+9072\sigma_5\sigma_2^2+12096\sigma_4\sigma_3\sigma_2-25920\sigma_7\sigma_2+2240\sigma_3^3+20160\sigma_6\sigma_3-\\ \qquad\quad{}\vphantom{\dfrac12}-18144\sigma_5\sigma_4+40320\sigma_9,\\ \zeta_{10}=-945\sigma_2^5+18900\sigma_4\sigma_2^3+25200\sigma_3^2\sigma_2^2-75600\sigma_6\sigma_2^2-120960\sigma_5\sigma_3\sigma_2- 56700\sigma_4^2\sigma_2 +\\ \qquad\quad{}+\vphantom{\dfrac12} 226800\sigma_8\sigma_2-50400\sigma_4\sigma_3^2+172800\sigma_7\sigma_3+151200\sigma_6\sigma_4+72576\sigma_5^2-362880\sigma_{10}. 
\end{array}\end{equation} Acting as in (\ref{zeta-recursion}) we obtain \begin{equation} \displaystyle\zeta_{p(j)}=\sum_{k=1}^p(-1)^{k-1}\dac{(p-1)!}{(p-k)!}(\sigma_k-H_j^k)\zeta_{p-k(j)},\end{equation} \begin{equation} \displaystyle\eta_{p(j)}=\sum_{k=1}^p(-1)^{k-1}\dac{(p-1)!}{(p-k)!}(\sigma_{k+1}-H_j^{k+1})\zeta_{p-k(j)}.\end{equation} Using these equations one can obtain \begin{equation}\label{zeta_p(j)}\begin{array}{l}\zeta_{1(j)}=-H_j,\qquad \zeta_{2(j)}=2H_j^2-\sigma_2,\qquad \zeta_{3(j)}=-6H_j^3+3\sigma_2H_j+2\sigma_3,\vphantom{\dac 1 2}\\ \zeta_{4(j)}=24H_j^4-12\sigma_2H_j^2-8\sigma_3H_j+3\sigma_2^2-6\sigma_4,\vphantom{\dac 1 2}\\ \zeta_{5(j)}=-120H_j^5+60\sigma_2H_j^3+40\sigma_3H_j^2-15\sigma_2^2H_j+ 30\sigma_4H_j-20\sigma_3\sigma_2+24\sigma_5,\vphantom{\dac 1 2}\\ \zeta_{6(j)}=720H_j^6-360\sigma_2H_j^4-240\sigma_3H_j^3+90\sigma_2^2H_j^2-180\sigma_4H_j^2+ 120\sigma_3\sigma_2H_j-\vphantom{\dac 1 2}\\ \qquad\quad{}-144\sigma_5H_j-15\sigma_2^3+90\sigma_4\sigma_2+40\sigma_3^2-120\sigma_6,\vphantom{\dac 1 2}\\ \zeta_{7(j)}=-5040H_j^7+2520\sigma_2H_j^5+1680\sigma_3H_j^4- 630\sigma_2^2H_j^3+1260\sigma_4H_j^3-840\sigma_3\sigma_2H_j^2+\vphantom{\dac 1 2}\\ \qquad\quad{}+1008\sigma_5H_j^2+105\sigma_2^3H_j-630\sigma_4\sigma_2H_j-280\sigma_3^2H_j+ 840\sigma_6H_j+210\sigma_3\sigma_2^2-\vphantom{\dac 1 2}\\ \qquad\quad{}-504\sigma_5\sigma_2-420\sigma_4\sigma_3+720\sigma_7,\vphantom{\dac 1 2}\\ \zeta_{8(j)}=40320H_j^8-20160\sigma_2H_j^6-13440\sigma_3H_j^5+5040\sigma_2^2H_j^4-10080\sigma_4H_j^4+6720\sigma_3\sigma_2H_j^3-\vphantom{\dac 1 2}\\ \qquad\quad{}\vphantom{\dfrac12}-8064\sigma_5H_j^3-840\sigma_2^3H_j^2+5040\sigma_4\sigma_2H_j^2+2240\sigma_3^2H_j^2-6720\sigma_6H_j^2- 1680\sigma_3\sigma_2^2H_j+\vphantom{\dac 1 2}\\ \qquad\quad{}+4032\sigma_5\sigma_2H_j+3360\sigma_4\sigma_3H_j-5760\sigma_7H_j+105\sigma_2^4-1260\sigma_4\sigma_2^2-1120\sigma_3^2\sigma_2+\vphantom{\dac 1 2}\\ \qquad\quad{}+3360\sigma_6\sigma_2+ 2688\sigma_5\sigma_3+ 
1260\sigma_4^2-5040\sigma_8,\vphantom{\dfrac12}\vphantom{\dac 1 2}\\ \displaystyle\zeta_{9(j)}=\sum_{s_1l_1+\ldots+s_kl_k+m=9}(-1)^{l_1+\ldots+l_k+9}\dfrac{9!}{s_1^{l_1}\cdot\ldots\cdot s_k^{l_k}l_1!\cdot\ldots\cdot l_k!}\sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k}H_j^m,\vphantom{\dac 1 2} \end{array}\end{equation} \begin{equation}\label{eta_p(j)}\begin{array}{l} \eta_{3(j)}=3\sigma_2H_j^2-\sigma_2^2-6H_j^4+2\sigma_3H_j+3\sigma_4,\vphantom{\dac 1 2}\\ \eta_{5(j)}=-120H_j^6+60\sigma_2H_j^4-15\sigma_2^2H_j^2-20\sigma_3\sigma_2H_j+ 40\sigma_3H_j^3+30\sigma_4H_j^2+\vphantom{\dac 1 2}\\ \qquad\qquad{}+3\sigma_2^3-18\sigma_4\sigma_2-8\sigma_3^2+24\sigma_5H_j+24\sigma_6,\vphantom{\dac 1 2}\\ \eta_{7(j)}=2520\sigma_2H_j^6-630\sigma_2^2H_j^4-840\sigma_3\sigma_2H_j^3+105\sigma_2^3H_j^2- 630\sigma_4\sigma_2H_j^2+\vphantom{\dac 1 2}\\ \qquad\quad{}+210\sigma_3\sigma_2^2H_j-504\sigma_5\sigma_2H_j-15\sigma_2^4+ 180\sigma_4\sigma_2^2+160\sigma_3^2\sigma_2-\vphantom{\dac 1 2}\\ \qquad\quad{}-480\sigma_6\sigma_2-5040H_j^8+ 1680\sigma_3H_j^5+1260\sigma_4H_j^4+1008\sigma_5H_j^3-280\sigma_3^2H_j^2+\vphantom{\dac 1 2}\\ \qquad\quad{}+ 840\sigma_6H_j^2-420\sigma_4\sigma_3H_j-384\sigma_5\sigma_3-180\sigma_4^2+ 720\sigma_7H_j+720\sigma_8,\vphantom{\dac 1 2}\\ \eta_{9(j)}=H_j\zeta_{9(j)}+105\sigma_2^5-2100\sigma_4\sigma_2^3-2800\sigma_3^2\sigma_2^2+8\cdot7\cdot6\cdot5\cdot5\sigma_6\sigma_2^2+ 10\cdot8\cdot7\cdot6\cdot4\sigma_5\sigma_3\sigma_2+\vphantom{\dac 1 2}\\ \qquad\quad{}+10\cdot9\cdot7\cdot10\sigma_4^2\sigma_2-7!\cdot5\sigma_8\sigma_2+10\cdot8\cdot7\cdot5\cdot2\sigma_4\sigma_3^2- 10\cdot8\cdot6\cdot5\cdot4\cdot2\sigma_7\sigma_3-\vphantom{\dfrac12}\\ \qquad\quad{}-10\cdot8\cdot7\cdot6\cdot5\sigma_6\sigma_4+8!\sigma_{10}-8\cdot7\cdot6\cdot4\cdot3\cdot2\sigma_5^2, \end{array}\end{equation} Now consider field equations. 
Taking metrics in the form \begin{equation}g_{\mu\nu}=\diag\{-1,e^{2H_1t},\ldots,e^{2H_nt}\},\end{equation} we have \begin{equation}\label{G00-exp-3} {G^{(p)0}}_0=2^p(2p)!\sum_{i_1<i_2<\cdots<i_{2p}}H_{i_1}H_{i_2}\cdots H_{i_{2p}}=2^p\zeta_{2p},\end{equation} \begin{equation}\begin{array}{ll}\displaystyle{G^{(p)j}}_j&\displaystyle=2^p(2p)!\sum_{\substack{i_2,i_3,\ldots,i_{2p}\neq j\\ i_2<i_3<\cdots<i_{2p}}} H_{i_2}^2H_{i_3}H_{i_4}\cdots H_{i_{2p}}+\\ &\displaystyle\quad{}+2^p(2p)!\sum_{\substack{i_1,i_2,\ldots,i_{2p}\neq j\\ i_1<i_2<\cdots<i_{2p}}} H_{i_1}H_{i_2}\cdots H_{i_{2p}}=2^p\cdot 2p\eta_{2p-1(j)}+2^p\zeta_{2p(j)}=\vphantom{\dac 1 2}\\ &\displaystyle\quad{}=2^p\eta_{2p-1(j)}-2^pH_j\zeta_{2p-1(j)}.\vphantom{\dac 1 2}\end{array}\end{equation} Using (\ref{zeta_p}), (\ref{zeta_p(j)}), (\ref{eta_p(j)}), one can obtain \begin{equation}\begin{array}{l}{G^{(1)j}}_j=2\sigma_2,\qquad {G^{(2)j}}_j=8\sigma_4-4\sigma_2^2,\\ {G^{(3)j}}_j=24\sigma_2^3-144\sigma_4\sigma_2-64\sigma_3^2+192\sigma_6,\vphantom{\dac 1 2}\\ {G^{(4)j}}_j=16(-15\sigma_2^4+180\sigma_4\sigma_2^2+160\sigma_3^2\sigma_2-480\sigma_6\sigma_2- 384\sigma_5\sigma_3-180\sigma_4^2+720\sigma_8),\vphantom{\dac 1 2}\\ {G^{(5)j}}_j=32\[\vphantom{\dfrac12}105\sigma_2^5-2100\sigma_4\sigma_2^3-2800\sigma_3^2\sigma_2^2+8400\sigma_6\sigma_2^2+ 560\cdot24\sigma_5\sigma_3\sigma_2+ 6300\sigma_4^2\sigma_2-\right.\vphantom{\dac 1 2}\\ \left.\qquad\quad{}-5\cdot7!\sigma_8\sigma_2+5600\sigma_4\sigma_3^2-48\cdot400\sigma_7\sigma_3-56\cdot300\sigma_6\sigma_4+8!\sigma_{10}-\dfrac{8!}{5}\sigma_5^2\].\end{array}\end{equation} Thus equations \begin{equation}\alpha_1{G^{(1)\mu}}_\nu+\alpha_2{G^{(2)\mu}}_\nu+\alpha_3{G^{(3)\mu}}_\nu+\alpha_4{G^{(4)\mu}}_\nu+\alpha_5{G^{(5)\mu}}_\nu=\varkappa {T^\mu}_\nu\end{equation} take form \begin{equation}\label{xi-eq}\left\{\begin{array}{l}-\xi_1-3\xi_2-5\xi_3-7\xi_4-9\xi_5=-\varkappa\varepsilon_0,\\ \xi_1+\xi_2+\xi_3+\xi_4+\xi_5=w\varkappa\varepsilon_0,\end{array}\right.\end{equation} where 
\begin{equation}\begin{array}{l} \xi_1\equiv2\alpha_1\sigma_2,\qquad\xi_2\equiv-4\alpha_2(\sigma_2^2-2\sigma_4),\\ \xi_3\equiv-8\alpha_3(-3\sigma_2^3+18\sigma_4\sigma_2+8\sigma_3^2-24\sigma_6),\vphantom{\dfrac12}\\ \xi_4\equiv16\alpha_4(-15\sigma_2^4+180\sigma_4\sigma_2^2+160\sigma_3^2\sigma_2-480\sigma_6\sigma_2- 384\sigma_5\sigma_3-180\sigma_4^2+720\sigma_8),\vphantom{\dfrac12}\\ \xi_5\equiv32\alpha_5\[105\sigma_2^5-2100\sigma_4\sigma_2^3-2800\sigma_3^2\sigma_2^2+8400\sigma_6\sigma_2^2+ 13440\sigma_5\sigma_3\sigma_2+6300\sigma_4^2\sigma_2 -\right.\vphantom{\dfrac12}\\ \left.\qquad\quad{}- 25200\sigma_8\sigma_2+5600\sigma_4\sigma_3^2-19200\sigma_7\sigma_3-16800\sigma_6\sigma_4-8064\sigma_5^2+40320\sigma_{10}\].\vphantom{\dfrac12} \end{array}\end{equation} In the 3rd order of Lovelock gravity ($\alpha_4=\alpha_5=0$) it is easy to obtain \begin{equation}\xi_2=-2\xi_1+\dac{5w-1}{2}\varkappa\varepsilon_0,\qquad \xi_3=\xi_1+\dac{1-3w}{2}\varkappa\varepsilon_0.\end{equation} From these equations we have solution (\ref{3_order}). 
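One can check by direct substitution that these third-order relations indeed solve the truncated system (\ref{xi-eq}) with $\xi_4=\xi_5=0$; a small numerical sketch (the function name is ours, and $\varkappa_0$ stands for $\varkappa\varepsilon_0$):

```python
def third_order_xi_residuals(xi1, w, kappa0):
    """Plug the third-order relations above into the truncated system
    (xi-eq) with xi4 = xi5 = 0; both residuals should vanish for any
    values of xi1, w and kappa0."""
    xi2 = -2.0 * xi1 + 0.5 * (5.0 * w - 1.0) * kappa0
    xi3 = xi1 + 0.5 * (1.0 - 3.0 * w) * kappa0
    energy = -xi1 - 3.0 * xi2 - 5.0 * xi3 + kappa0   # first equation
    pressure = xi1 + xi2 + xi3 - w * kappa0          # second equation
    return energy, pressure
```

Both residuals vanish identically in $\xi_1$, which is why the solution contains $\sigma_2$ and $\sigma_3$ as free parameters.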
In the 4th order ($\alpha_5=0$) one can obtain \begin{equation}\begin{array}{l}\xi_3=-2\xi_2-3\xi_1+\dfrac{7w-1}{2}\varkappa\varepsilon_0,\\ \xi_4=\xi_2+2\xi_1+\dfrac{1-5w}{2}\varkappa\varepsilon_0,\end{array}\end{equation} so the solution is \begin{equation}\begin{array}{l}\sigma_1=0,\vphantom{\dfrac12}\\ \sigma_6=-\dfrac 1 8 \sigma_2^3+\dfrac34\sigma_4\sigma_2+\dfrac13\sigma_3^2+\dfrac{\alpha_2}{24\alpha_3}\(\sigma_2^2-2\sigma_4\)-\dfrac{\alpha_1}{32\alpha_3}\sigma_2-\dfrac{1-7w}{384\alpha_3}\varkappa\varepsilon_0,\\ \sigma_8=-\dfrac{1}{16}\sigma_2^4+\dfrac14\sigma_4\sigma_2^2+\dfrac{\alpha_2}{36\alpha_3}\(\sigma_2^3-2\sigma_4\sigma_2\)-\dfrac{\alpha_1}{48\alpha_3}\sigma_2^2- \dfrac{1-7w}{576\alpha_3}\sigma_2\varkappa\varepsilon_0+\dfrac{8}{15}\sigma_5\sigma_3+\vphantom{\dfrac{\dfrac12}{\dfrac12}}\\ \qquad\quad{}+\dfrac14\sigma_4^2-\dfrac{\alpha_2}{2880\alpha_4}\(\sigma_2^2-2\sigma_4\)+ \dfrac{\alpha_1}{2880\alpha_4}\sigma_2+\dfrac{1-5w}{23040\alpha_4}\varkappa\varepsilon_0.\end{array}\end{equation} In the 5th order, equations (\ref{xi-eq}) imply \begin{equation}\begin{array}{l}\xi_4=\dfrac92w\varkappa_0-4\xi_1-3\xi_2-2\xi_3-\dfrac{\varkappa_0}{2},\\ \xi_5=-\dfrac72w\varkappa_0+3\xi_1+2\xi_2+\xi_3+\dfrac{\varkappa_0}2,\end{array}\end{equation} from which we have \begin{equation}\begin{array}{l}\sigma_1=0,\vphantom{\dfrac12}\\ \sigma_8=\dfrac{9w\varkappa_0}{23040\alpha_4}-\dfrac{\alpha_1}{1440\alpha_4}\sigma_2+\dfrac{\alpha_2}{960\alpha_4}\(\sigma_2^2-2\sigma_4\)+ \dfrac{\alpha_3}{720\alpha_4}\(-3\sigma_2^3+18\sigma_4\sigma_2+8\sigma_3^2-24\sigma_6\)+\\ \qquad\quad{}+\dfrac1{48}\sigma_2^4-\dfrac14\sigma_4\sigma_2- \dfrac29\sigma_3^2\sigma_2+ \dfrac23\sigma_6\sigma_2+\dfrac8{15}\sigma_5\sigma_3+\dfrac14\sigma_4^2-\dfrac{\varkappa_0}{23040\alpha_4},\vphantom{\dfrac{\dfrac12}{\dfrac12}}\\
\sigma_{10}=-\dfrac{7w\varkappa_0}{64\cdot8!\alpha_5}+\dfrac{3\alpha_1}{16\cdot8!\alpha_5}\sigma_2-\dfrac{\alpha_2}{4\cdot8!\alpha_5}\(\sigma_2^2-2\sigma_4\)-\dfrac{\alpha_3}{4\cdot8!\alpha_5}\(-3\sigma_2^3+18\sigma_4\sigma_2+8\sigma_3^2-\right.\\ \qquad\quad{}-\left.24\sigma_6\)-\dfrac{105}{8!}\sigma_2^5+\dfrac{2100}{8!}\sigma_4\sigma_2^3+\dfrac{2800}{8!}\sigma_3^2\sigma_2^2-\dfrac{8400}{8!}\sigma_6\sigma_2^2-\dfrac13\sigma_5\sigma_3\sigma_2-\dfrac{6300}{8!}\sigma_4^2\sigma_2+\vphantom{\dfrac{\dfrac12}{\dfrac12}}\\ \qquad\quad{}+\dfrac{25200}{8!}\sigma_8\sigma_2- \dfrac{5600}{8!}\sigma_4\sigma_3^2+\dfrac{19200}{8!}\sigma_7\sigma_3+\dfrac{16800}{8!}\sigma_6\sigma_4+\dfrac15\sigma_5^2+\dfrac{\varkappa_0}{64\cdot8!\alpha_5}.\end{array}\end{equation} \section{Exponential solution in an arbitrary order (supposition)} Generalizing these equalities, we may suppose that in an arbitrary order the equations \begin{equation} \sum_{i=1}^p\alpha_i{G^{(i)}}_{\mu\nu}=\varkappa T_{\mu\nu} \end{equation} take the form \begin{equation}\begin{array}{l}\left\{\begin{array}{l}\displaystyle\sum^p_{i=1}(2i-1)\xi_i=\varkappa_0,\\ \displaystyle\sum^p_{i=1}\xi_i=w\varkappa_0,\end{array}\right.\end{array}\end{equation} where \begin{equation}\begin{array}{l}\xi_i\equiv\alpha_i{G^{(i)j}}_j,\vphantom{\dfrac12}\\ \displaystyle{G^{(i)j}}_j=\dfrac{2^i}{2i-1}\sum_{s_1l_1+\ldots+s_kl_k=2i}(-1)^{l_1+\ldots+l_k+2i-1}\dfrac{(2i)!}{s_1^{l_1}\cdot\ldots\cdot s_k^{l_k}l_1!\cdot\ldots\cdot l_k!}\sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k},\end{array}\end{equation} so \begin{equation}\begin{array}{l}\displaystyle\xi_{p-1}=\[\dfrac{2p-1}2w-\dfrac12\]\varkappa_0-\sum_{i=1}^{p-2}(p-i)\xi_i,\\ \displaystyle\xi_p=\[\dfrac12-\dfrac{2p-3}{2}w\]\varkappa_0+\sum_{i=1}^{p-2}(p-i-1)\xi_i,\end{array}\end{equation} from which we have \begin{equation}\begin{array}{l}\displaystyle\dfrac{2^{p-1}\alpha_{p-1}}{2p-3}\sum_{s_1l_1+\ldots+s_kl_k=2p-2}(-1)^{l_1+\ldots+l_k+2p-3}\dfrac{(2p-2)!}{s_1^{l_1}\cdot\ldots\cdot
s_k^{l_k}l_1!\cdot\ldots\cdot l_k!} \sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k}=\\ \displaystyle\qquad\quad{}=\[\dfrac{2p-1}2w-\dfrac12\]\varkappa_0-\sum_{i=1}^{p-2}(p-i)\dfrac{2^i\alpha_i}{2i-1}\sum_{s_1l_1+\ldots+s_kl_k=2i}(-1)^{l_1+\ldots+l_k+2i-1}\times\\ \displaystyle\qquad\quad{}\times\dfrac{(2i)!}{s_1^{l_1}\cdot\ldots\cdot s_k^{l_k}l_1!\cdot\ldots\cdot l_k!} \sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k},\\ \displaystyle\dfrac{2^p\alpha_p}{2p-1}\sum_{s_1l_1+\ldots+s_kl_k=2p}(-1)^{l_1+\ldots+l_k+2p-1}\dfrac{(2p)!}{s_1^{l_1}\cdot\ldots\cdot s_k^{l_k}l_1!\cdot\ldots\cdot l_k!} \sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k}=\\ \displaystyle\qquad\quad{}=\[\dfrac12-\dfrac{2p-3}2w\]\varkappa_0+\sum_{i=1}^{p-2}(p-i-1)\dfrac{2^i\alpha_i}{2i-1}\sum_{s_1l_1+\ldots+s_kl_k=2i}(-1)^{l_1+\ldots+l_k+2i-1}\times\\ \displaystyle\qquad\quad{}\times\dfrac{(2i)!}{s_1^{l_1}\cdot\ldots\cdot s_k^{l_k}l_1!\cdot\ldots\cdot l_k!} \sigma_{s_1}^{l_1}\cdot\ldots \cdot\sigma_{s_k}^{l_k}.\end{array}\end{equation} Unfortunately, this is only a supposition, which I hope to prove in a future work. \section*{Conclusions} Anisotropic exponential cosmological solutions for a space of arbitrary dimension filled with ordinary matter in the 4th and 5th orders of Lovelock gravity were obtained. We have also conjectured a generalization of such solutions to an arbitrary order. All the solutions are represented as a set of conditions on the Hubble parameters. Unfortunately, it remains a problem to write down every Hubble parameter as a function of the parameters $n$, $\alpha_i$, $w$ and $\varkappa_0$. Moreover, such conditions may be incompatible. For the second order of Lovelock gravity this problem was investigated in \cite{chirkov-pavluchenko-toporensky}. \section*{Acknowledgements} The author is grateful to Alexey V. Toporensky and Alexander A. Reshetnyak for helpful discussions. The research was fulfilled within the RFBR Project No. 17-02-01333.
\section{Introduction} Coupled map lattices (CMLs), dynamical systems with discrete space and time, have been intensively investigated since the early 1980s as models of many spatiotemporal phenomena occurring in a wide variety of systems \cite{books}. One such phenomenon is synchronization and, in particular, amongst the various kinds of synchronized behavior, {\em complete synchronization} (CS) \cite{boccaletti} occurring in regular arrays of coupled chaotic systems. Intermittency plays an important role in the destabilization of the synchronized state\cite{intermit}. Intermittent transitions between laminar, quiescent states and irregular bursting have been investigated in many dynamical systems, and their scaling properties have been determined\cite{int_teo,int_exp,ott}. Theoretical derivations of these scalings usually assume Gaussian fluctuations for describing the irregular bursts between consecutive laminar regions. Although this may be a suitable approximation for many systems, there are cases where deviations from Gaussianity can be observed, especially in the vicinity of the synchronization threshold. It is our purpose to exhibit such anomalous behavior. Let us consider, as a paradigmatic example of a spatially extended system, periodic chains of $N$ one-dimensional chaotic maps $x \mapsto f(x)$ evolving through a distance-dependent diffusive coupling: \begin{equation} x^{(i)}_{t+1}=(1-\varepsilon)f(x^{(i)}_{t})+\frac{\varepsilon} {\eta}\sum^{N'}_{r=1} B(r) \left( f(x^{(i-r)}_{t})+f(x^{(i+r)}_{t}) \right), \label{CML} \end{equation} where $x^{(i)}_t$ represents the state variable of site $i$ $(i=1,2,...,N)$ at time $t$, $\varepsilon \ge 0$ is the coupling strength, $B(r)$ is an arbitrary function of $r$, and $\eta=2 \sum^{N'}_{r=1}B(r)$ is a normalization factor, with $N'=(N-1)/2$ for odd $N$.
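A single update of the lattice (\ref{CML}) can be sketched in a few lines of Python; here we take, as an example, the Ulam map $f(x)=1-2x^2$ as the local dynamics and pass the coupling profile $B(r)$ as an array (function and variable names are ours):

```python
import numpy as np

def cml_step(x, eps, B):
    """One update of the coupled map lattice, Eq. (CML), with the
    Ulam map f(x) = 1 - 2 x**2 as the local dynamics.  B is the
    array (B(1), ..., B(N')) of coupling weights; periodic boundary
    conditions are handled with np.roll."""
    fx = 1.0 - 2.0 * x**2
    Nprime = (x.size - 1) // 2
    eta = 2.0 * B.sum()
    coupling = np.zeros_like(x)
    for r in range(1, Nprime + 1):
        coupling += B[r - 1] * (np.roll(fx, r) + np.roll(fx, -r))
    return (1.0 - eps) * fx + (eps / eta) * coupling
```

Since the weights $(1-\varepsilon)$ and $\varepsilon B(r)/\eta$ sum to one, the update maps $[-1,1]^N$ into itself, and a completely synchronized state remains synchronized, as stated below for Eq. (\ref{CML}).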
CS takes place when the dynamical variables that define the state of each map adopt the same value for all the coupled maps at each time step $t$, i.e., $x^{(1)}_t=x^{(2)}_t=\ldots=x^{(N)}_t\equiv x^{(*)}_t$. It can be easily verified that this state is a solution of Eq. (\ref{CML}). The question is whether it is stable or not with respect to small perturbations in directions transversal to the CS state in CML phase space. A criterion for stability can be drawn from the asymptotic Lyapunov spectrum (LS). As long as the CS state lies along the direction associated with the largest exponent, it will be transversely stable if the $(N-1)$ remaining exponents are negative. Therefore, if there is a single attractor, negativity of the second largest Lyapunov exponent implies that the CS state is asymptotically attained\cite{gade}. Depending thus on the system size and on the parameters defining both the chaotic map and $B(r)$, there may exist an interval of values of the coupling strength $\varepsilon$ for which the CS state is asymptotically stable. Explicit results have been shown before, for instance, for algebraically decaying interactions, i.e., $B(r)=1/r^\alpha$, with $0\le\alpha$\cite{apbv03,abv04}, and for uniform interactions with a cut-off distance $\beta$, with $1\le\beta\le N'$\cite{gallas}. In those cases, for a given system size, one finds a {\em synchronization domain} in parameter space. Inside this domain, CS eventually occurs after a transient whose typical duration diverges as one approaches the critical frontier. Outside the synchronization domain, intermittent behavior may occur. A characterization of phenomena associated with a blowout bifurcation, such as intermittency\cite{ding_wang}, can be performed through the analysis of the distribution of largest transversal finite-time Lyapunov exponents (LTFEs).
We will show that for Ulam maps coupled through schemes of the form (\ref{CML}), the probability density function (PDF) of LTFEs deviates from a Gaussian law. This will be shown to have important consequences at criticality. Let us remark that analytical results are especially relevant in this field due to the very nature of the phenomena involved, sensitive to the unavoidable finite precision of numerical simulations \cite{egs}. \section{Finite-time Lyapunov spectrum} \label{sec_ftls} The dynamics of tangent vectors $\xi =(\delta x^{(1)},\delta x^{(2)},\ldots,$ $\delta x^{(N)})^T$ is obtained by differentiation of the evolution equations (\ref{CML}). In order to obtain finite-time exponents, we proceed analogously to what we have done before for the calculation of asymptotic exponents \cite{apbv03,abv04}. For completeness, let us review the steps. The tangent dynamics is given by $ \xi_{i+1}={\bf T}_i\xi_{i}$, where the Jacobian matrix ${\bf T}_i$ is \begin{equation} {\bf T}_i=\biggl( (1-\varepsilon)+\frac{\varepsilon} {\eta}{\bf B} \biggr){\bf D}_i, \label{tanmatrix} \end{equation} with the matrices ${\bf D}_i$ and ${\bf B}$ defined, respectively, by $D_{i}^{jk}= f^\prime(x^{(j)}_i) \delta_{jk}$ and $B_{jk}= B(r_{jk}) (1-\delta_{jk})$, where $r_{jk}=\mbox{min}_{l\in \cal{Z}}|j-k+lN|$. Once the initial conditions have been specified, the spectrum of finite-time Lyapunov exponents (FTEs) $\{ \bar\lambda_k(n) \}$, calculated over a time interval of length $n$, is extracted from the evolution of the initial tangent vector $\xi_0$: $\xi_n = {\bf \cal T}_n\xi_0$, where ${\bf \cal T}_n \equiv {\bf T}_{n-1}\dots {\bf T}_{1}{\bf T}_{0}$. The FTEs are obtained as $\bar\lambda_k(n)=\ln\bar\Lambda^{(k)}_n$, for $k=1,\ldots,N$, where $\{\bar\Lambda_n^{(k)}\}$ are the eigenvalues of $\hat{\Lambda}_n= ({\bf \cal T}^{T}_n {\bf \cal T}_n )^{\frac{1}{2n}}$ \cite{ruelle}.
In CS states, the dynamical variables of all maps coincide at each time step $i$, i.e., $x^{(1)}_i=x^{(2)}_i=\ldots=x^{(N)}_i\equiv x^{(*)}_i$. In this case, ${\bf D}_i=f^\prime(x^{(*)}_i)\openone_{N}$, thus ${\bf T}_i=f^\prime(x^{(*)}_i)\hat{\bf B}$, with $\hat{\bf B}\equiv(1-\varepsilon)\openone_{N}+\frac{\varepsilon}{\eta}{\bf B}$, and ${\bf \cal T}^{T}_n {\bf \cal T}_n = \bigl(\prod_{i=0}^{n-1} [f^\prime(x^{(*)}_i)]^{2}\bigr) \hat{\bf B}^{2n}$. Therefore, one arrives at the following expression for the spectrum of $N$ {\em finite-time} Lyapunov exponents, over a time interval of length $n$, in {\em CS states}: \begin{equation} \label{lambdak} \lambda_k(n)\,=\, \frac{1}{n}\sum^{n-1}_{i=0}\ln|f^\prime(x^{(*)}_i)| \,+\, c_k, \;\;\;\mbox{for $k=1,\ldots,N$}, \end{equation} with $c_k \equiv \ln|1-\varepsilon +\varepsilon b_k/\eta|$, where $b_k$, the eigenvalues of the interaction matrix ${\bf B}$, are given by $b_k=2\sum^{N^{\prime}}_{m=1}B(m)\cos(2\pi k m/N)$, for odd $N$. Here $x^{(*)}_i=f^i(x^{(*)}_0)$ is the $i$th iterate of the initial condition $x^{(*)}_0$, the same for the $N$ maps, since we are dealing with CS states. In the asymptotic case $n\to \infty$, and assuming ergodicity, the first term on the right-hand side of Eq. (\ref{lambdak}), which represents a time average, can be substituted by an average over the single-map attractor. In that case one obtains the asymptotic LS \begin{equation} \lambda_k\,=\,\lambda_{U} \,+\, c_k, \label{asympt} \end{equation} where $\lambda_{U}=\langle \ln|f^\prime(x^{(*)})| \rangle$ is the Lyapunov exponent of the uncoupled map. Notice that the parameters that define the particular uncoupled map affect only $\lambda_U$, while $\{ c_k \}$ are determined by the particular cyclic dependence on distance in the regular coupling scheme. As discussed in the Introduction, the asymptotic LS provides a criterion for synchronization, namely the negativity of the second largest (or largest transversal) asymptotic exponent $\lambda_\perp$.
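The constants $c_k$ depend only on the eigenvalues $b_k$ of the circulant interaction matrix ${\bf B}$, which are attained on Fourier modes. A minimal Python check (our own illustration) verifies that $v_j=\cos(2\pi kj/N)$ is an eigenvector of $(1-\varepsilon)\openone_N+(\varepsilon/\eta){\bf B}$ with eigenvalue $1-\varepsilon+\varepsilon b_k/\eta$, the logarithm of whose absolute value gives $c_k$:

```python
import math

def coupling_eigenpair(N, alpha, eps, k):
    """Return (mu, resid): mu = 1 - eps + eps*b_k/eta is the predicted
    eigenvalue of (1-eps)*I + (eps/eta)*B for B(r) = 1/r**alpha, and
    resid is the residual of the eigenvector check on the Fourier mode
    v_j = cos(2*pi*k*j/N)."""
    Np = (N - 1) // 2
    Bfun = lambda r: 1.0 / r ** alpha
    eta = 2.0 * sum(Bfun(r) for r in range(1, Np + 1))
    # b_k = 2 sum_m B(m) cos(2 pi k m / N), for odd N
    b_k = 2.0 * sum(Bfun(m) * math.cos(2 * math.pi * k * m / N)
                    for m in range(1, Np + 1))
    mu = 1.0 - eps + eps * b_k / eta
    v = [math.cos(2 * math.pi * k * j / N) for j in range(N)]
    Av = []
    for j in range(N):
        s = 0.0
        for l in range(N):
            if l == j:
                continue
            r = min(abs(j - l), N - abs(j - l))  # cyclic distance r_jl
            s += Bfun(r) * v[l]
        Av.append((1.0 - eps) * v[j] + eps / eta * s)
    resid = max(abs(Av[j] - mu * v[j]) for j in range(N))
    return mu, resid
```

For uniform coupling ($\alpha=0$) one recovers the textbook value $b_k=-1$ for $k\neq 0$, so that $\mu$ vanishes at $\varepsilon\, b_k/\eta = \varepsilon-1$.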
For the particular interaction $B(r)=1/r^\alpha$, which we will consider in numerical simulations, the above condition leads to \cite{apbv03,abv04} \begin{equation} \label{critical} \frac{ 1-{\rm e}^{-\lambda_U} } {1- b_1/\eta } < \varepsilon < \frac{ 1+{\rm e}^{-\lambda_U} } { 1- b_{N'}/\eta}. \end{equation} \section{Distribution of finite-time Lyapunov exponents} \label{sec_distrib} Unlike infinite-time exponents, local exponents strongly depend on the initial conditions. Starting from random initial conditions, the fluctuations in the values of $\lambda_k(n)$ arise from the summation in (\ref{lambdak}) only, since the $\{c_k\}$ are constant. As a consequence, all the $\lambda_k(n)$ of the spectrum will have a PDF with the same shape, differing only in the mean value $\langle \lambda_k(n) \rangle = \lambda_U+c_k$, which coincides with the asymptotic exponent $\lambda_k$. In particular, the variance of finite-time Lyapunov exponents (FTEs) in the CS states does not depend on $c_k$ (since it is an additive constant). Therefore, it is the same for all the exponents of the spectrum, as expected also from the fact that the source of the fluctuations is unique. Along a trajectory, fluctuations shift the finite-time Lyapunov spectrum as a whole. A further consequence is that the variance of the local exponents in CS states does not depend on the lattice parameters embodied in $\{c_k\}$, but only on the local features of the individual map. In fact, in CS states, all maps evolve with the dynamics of an uncoupled map. This being so, the PDFs of FTEs can be straightforwardly obtained from the PDF of the local map. Hence, let us omit the index $k$ for the moment. Our discussion will be valid for any $\lambda_k(n)$, in particular for $\lambda_\perp(n)$.
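As an aside, condition (\ref{critical}) is easy to evaluate numerically. A minimal sketch (our own illustration, assuming the Ulam map, for which $\lambda_U=\ln 2$):

```python
import math

def sync_interval(N, alpha, lam_U=math.log(2.0)):
    """Synchronization interval of Eq. (5) for B(r) = 1/r**alpha.
    lam_U defaults to ln 2, the Lyapunov exponent of the Ulam map."""
    Np = (N - 1) // 2
    B = [1.0 / r ** alpha for r in range(1, Np + 1)]
    eta = 2.0 * sum(B)
    b = lambda k: 2.0 * sum(B[m - 1] * math.cos(2 * math.pi * k * m / N)
                            for m in range(1, Np + 1))
    lo = (1.0 - math.exp(-lam_U)) / (1.0 - b(1) / eta)
    hi = (1.0 + math.exp(-lam_U)) / (1.0 - b(Np) / eta)
    return lo, hi
```

Consistently with the caption of Fig.~1 ($\alpha_c\simeq 0.867$ for $N=21$, $\varepsilon=0.8$), the interval contains $\varepsilon=0.8$ at $\alpha=0.8$ but not at $\alpha=0.9$.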
For the Ulam map, $x\mapsto4x(1-x)$, a smooth approximate expression for the PDF of FTEs is available \cite{ulam}, namely \begin{equation} \label{smooth} P(\lambda(n)) \simeq \frac{2n}{\pi^2} \ln (\coth{| n[\lambda(n)-\lambda]/2|})\, , \end{equation} valid for large enough $n$. For $\lambda(n)<\lambda$, expression (\ref{smooth}) is exact, however, for $\lambda(n)>\lambda$, the exact distribution presents a complex structure with $2^{n-1}$ spikes that get narrower and accumulate close to the mean value with increasing $n$\cite{ulam,ramaswamy}. Therefore, in the latter interval, expression (\ref{smooth}) constitutes a smooth approximation, such that the sharp spikes have been trimmed by finite size bins. Moreover, the exact PDF is non-null only for $\lambda(n)-\lambda\le \ln2$. Even the smooth distribution (\ref{smooth}) is markedly different from Gaussian. It is divergent at $\lambda(n)=\lambda$ and falls off with exponential tails. It is noteworthy that the variance decays anomalously as $1/n^2$ \cite{ulam,ts95} \begin{equation} \sigma^2(n)=\frac{\pi^2}{6n^2}\left(1-\frac{1}{2^{n}}\right) \;, \end{equation} instead of the usual $1/n$ decay. Also notice that the PDFs (\ref{smooth}) for different values of $n$ would collapse into a single shape via rescaling by $n$. \begin{figure}[htb] \begin{center} \includegraphics*[bb=20 270 560 720, width=0.45\textwidth]{figure1.eps} \end{center} \caption{Comparison between numerical and theoretical PDFs. They correspond to the largest transversal time-20 exponent of $N=21$ coupled Ulam maps, with algebraically decaying interactions, for $\varepsilon=0.8$ and different values of $\alpha$ (above $\alpha_c\simeq 0.867$ stability of the CS state is lost). Numerical PDFs are represented by symbols. Solid lines associated to full symbols correspond to the theoretical prediction given by Eq. (\ref{smooth}), while those associated to hollow symbols correspond to Gaussian fittings. Inset: semi-log representation to exhibit the tails. 
} \label{fig_distr} \end{figure} Fig.~\ref{fig_distr} exhibits numerical PDFs for CMLs together with the analytical prediction given by Eq. (\ref{smooth}). Numerical PDFs were built by choosing $10^4$ initial conditions and computing the second eigenvalue of the matrix $\hat{\Lambda}_n$ after a transient. Eq. (\ref{smooth}), obtained for uncoupled maps, is in excellent agreement with numerical results for LTFEs in the coherent states of CMLs, as expected. It is worth noting that, although expression (\ref{smooth}) was derived for CS states, it remains a good approximation even just outside the synchronization domain. This means that the correlations that lead to violation of the central limit theorem persist in that region. However, far enough from the threshold, the terms that contribute to FTEs become decorrelated by the ``bath'' of coupled chaotic maps, and the central limit theorem holds, leading to Gaussian shapes (see Fig.~\ref{fig_distr}). Notice that the PDF for $\alpha=1.0$, although Gaussian in the central part, still falls off to the right with an exponential tail. A detailed analysis of the correlations giving rise to (\ref{smooth}) can be found in Refs. \cite{ulam,ts95}. Basically, the autocorrelation function $C(n)$ of one-step FTEs is $C(n)=-\pi^2 2^{-n}/24$, that is, it decays exponentially with time $n$ (with a characteristic time equal to the inverse of $\lambda_U=\ln 2$). However, since successive one-step exponents are dependent through the deterministic logistic mapping, non-Gaussian PDFs of FTEs arise. \section{Consequences of fluctuating exponents} \label{sec_conseq} \subsection{Subcritical regime} For parameter values belonging to the synchronization domain, that is, for subcritical $\alpha$, all other parameters being fixed, the system eventually converges to the CS state.
In fact, asymptotically, CS states are stable since the distribution of LTFEs collapses to a Dirac delta function centered at $\lambda_\perp$, which is negative in that domain. The relaxation to a CS state can be measured, for instance, by means of either the distance to the synchronization manifold (SM), defined through $d(t)=\sqrt{ \sum_i(x^{(i)}_t-\langle{x}_t\rangle)^2 }$, or the order parameter $R(t)=\frac{1}{N}\bigl|\sum_j\exp(2\pi {\rm i}\, x^{(j)}_t)\bigr|$. Since, for small deviations from the SM, both quantities are related through $d^2\simeq (1-R^2)/2$, we will exhibit the time evolution of $d^2$ only. After a very brief transient, the decay to the CS state is exponential with a characteristic time given by $\tau_c=1/|\lambda_\perp|$ (see Fig.~\ref{fig_sub}), which diverges at the critical frontier. For the power-law interaction, $\lambda_\perp$ scales as $|\lambda_\perp|\sim |\alpha-\alpha_c|$ and $|\lambda_\perp|\sim |\varepsilon-\varepsilon_c|$ near the critical point. \begin{figure}[htb] \begin{center} \includegraphics*[bb=80 320 540 650, width=0.45\textwidth]{figure2.eps} \end{center} \caption{Relaxation to the synchronization manifold. Time series of $d^2$ for the same CML of Fig.~\ref{fig_distr} with $\alpha=0.8$ (subcritical). The dashed line corresponds to the exponential law indicated in the figure, for comparison. } \label{fig_sub} \end{figure} The fact that the distribution of LTFEs spreads over negative and positive values (Fig.~\ref{fig_distr}) implies that the exponents, computed over finite-time segments of a trajectory, fluctuate around zero. On the one hand, as one approaches the frontier subcritically, the mean value of the distribution shifts to zero from negative values. On the other hand, as one follows a trajectory for a longer time interval, the PDF of LTFEs concentrates around the mean. Then, there may be segments of trajectory in which the lattice is repelled from the SM. On average, however, it is attracted exponentially fast.
Due to the finite precision of computer calculations, the distance to the SM saturates (see Fig.~\ref{fig_sub}). Intrinsic noise, due to numerical truncation, may drive the state of the system slightly away from the saturation level. However, each time this happens, the distance decays, again exponentially fast, to its lower bound. Numerical results in the subcritical regime, illustrated in Fig.~\ref{fig_sub}, can be suitably described by $d(t)=d_o \exp[\lambda_\perp(t)\,t]$. Then, from Eq. (\ref{lambdak}), we have \begin{equation} \label{distance} d(t)\;=\; d_o\exp[\lambda_\perp \,t + \sum_{n=0}^t \zeta(n) ] \end{equation} where we have split off the fluctuating component $\zeta=\lambda_\perp(1)-\lambda_\perp$, corresponding to successive (time-correlated) centered one-step LTFEs. The evolution of the distance to the SM can also be modeled by an It\^o multiplicative stochastic differential equation for $d$, \begin{equation}\label{ito} \dot{d}=\lambda_\perp d+\zeta(t)d \,, \end{equation} where $\zeta(t)$ is a colored non-Gaussian noise and time is continuous. In particular, as shown in Sect. \ref{sec_distrib}, the distribution of $\lambda_\perp(1)$ in synchronized states follows that of one-step FTEs in the uncoupled map. An exact expression can be straightforwardly derived from the invariant measure of the Ulam attractor \cite{ramaswamy}. Then, one has \begin{equation} \label{one} P(\zeta) =\frac{2}{\pi}\frac{1}{\sqrt{4{\rm e}^{-2\zeta}-1}} \, , \end{equation} with $\zeta\in(-\infty,\ln2)$. Notice that $\zeta$, which has zero mean, is bounded from above. This explains the upper bound of the fluctuations superposed on the exponential decay in Fig.~\ref{fig_sub}. In fact, the fluctuations around the exponential envelope can be fully described by the statistics of time-one exponents.
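In CS states, the statistics of $\zeta$ (and, more generally, of time-$n$ exponents) can thus be sampled directly from the uncoupled Ulam map. The sketch below (our own illustration) draws initial conditions from the invariant measure $\rho(x)=1/(\pi\sqrt{x(1-x)})$ via $x=\sin^2(\pi u/2)$; simple checks recover the upper bound $\zeta<\ln 2$ of Eq. (\ref{one}), the zero mean of $\zeta$, and the anomalous variance decay $\sigma^2(n)=\frac{\pi^2}{6n^2}(1-2^{-n})$ quoted in Sect.~\ref{sec_distrib}.

```python
import math
import random

def sample_ftes(n, samples, seed=1):
    """Sample time-n Lyapunov exponents of the uncoupled Ulam map
    x -> 4x(1-x), with initial conditions drawn from its invariant
    measure rho(x) = 1/(pi*sqrt(x(1-x))) via x = sin^2(pi*u/2)."""
    rng = random.Random(seed)
    out = []
    for _ in range(samples):
        x = math.sin(0.5 * math.pi * rng.random()) ** 2
        s = 0.0
        for _ in range(n):
            # |f'(x)| = |4 - 8x|; guard against the measure-zero point x = 1/2
            s += math.log(max(abs(4.0 - 8.0 * x), 1e-300))
            x = 4.0 * x * (1.0 - x)
        out.append(s / n)
    return out

def sample_zeta(samples, seed=2):
    """zeta = lambda_perp(1) - lambda_perp reduces, in CS states, to the
    centered one-step exponent ln|f'(x)| - ln 2 of the uncoupled map."""
    return [v - math.log(2.0) for v in sample_ftes(1, samples, seed)]
```

Under Eq. (\ref{one}), the probability of a negative one-step fluctuation is $P(\zeta<0)=1/3$ (corresponding to $x\in(1/4,3/4)$), which the sampled histogram reproduces.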
The time series and distribution of $\zeta$ are presented in Fig.~\ref{fig_zeta} for supercritical $\alpha$, showing that even just outside the synchronization domain, the distribution of $\zeta$ follows that of the uncoupled map. \begin{figure}[htb] \begin{center} \includegraphics*[bb=100 200 508 650, width=0.45\textwidth]{figure3.eps} \end{center} \caption{Time evolution of $\zeta=\lambda_\perp(1)-\lambda_\perp$ (A) for the same CML of Fig.~\ref{fig_distr} with $\alpha=0.9$ (supercritical). Normalized histogram of $\zeta$ (B). Inset: semi-log representation to exhibit the exponential tail. Full lines correspond to Eq. (\ref{one}). } \label{fig_zeta} \end{figure} Breakdown of shadowability might occur \cite{shadow}, as exponents fluctuating about zero are a signature of unstable dimension variability. Taking into account that the PDF is non-null for $\lambda_\perp(n) \le \lambda_\perp +\ln 2$ \cite{ulam,ramaswamy}, if $\lambda_\perp(\alpha,\varepsilon,N)<-\ln 2$, then the finite-time exponents are negative for almost any initial condition. Thus, this point would correspond to the onset of shadowability. In contrast, for $-\ln 2<\lambda_\perp$, although the mean of the distribution may be negative, there is always a non-null fraction $f$ of positive exponents, given by $f= \int_0^\infty {\rm d} \lambda_\perp(n) P(\lambda_\perp(n))$, pointing to the possibility of loss of shadowability of numerical trajectories \cite{review}. Because $f$ grows from zero with a very small slope, since the positive tail of the PDF is approximately exponential, the onset may appear shifted towards the threshold in numerical computations \cite{validity}. \subsection{Supercritical regime} For supercritical $\alpha$, that is, outside the synchronization domain, CS states are not asymptotically stable, because $\lambda_\perp>0$.
Close to the threshold (up to $\alpha\simeq 1$ for $\epsilon=0.8$), the numerical estimate of the largest transversal exponent fairly coincides with $\lambda_\perp$ (calculated for synchronized states). Moreover, close enough to the boundary, correlated bursts away from the SM occur (see Fig.~\ref{fig_super}). Although this figure exhibits a time series up to $t=2000$, the same features are observed for longer runs (performed typically up to $t=10^7$). The intermittent behavior can also be understood in terms of the distribution of the finite-time largest transversal exponent $\lambda_\perp(n)$, which close to the threshold can be described by Eq. (\ref{smooth}). Although the average $\langle \lambda_\perp(n)\rangle = \lambda_\perp$ is positive, there is a non-null probability that the LTFE is negative, thus leading to intermittent behavior at finite times. The distribution of LTFEs indicates that there are time intervals during which the trajectories are either attracted to or repelled from the SM. Although the average duration of the synchronized time intervals increases when approaching the critical frontier, synchronization is not attained as a final stable state. \begin{figure}[htb] \begin{center} \includegraphics*[bb=80 320 540 700, width=0.45\textwidth]{figure4.eps} \end{center} \caption{Time series of $d^2$ for the same CML of Fig.~\ref{fig_distr} with $\alpha\simeq 0.9$ (supercritical). Inset: semi-log representation of the same data. The dashed line corresponds to the exponential law indicated in the figure, for comparison. } \label{fig_super} \end{figure} Intermittent bursts grow exponentially with a characteristic time given by $1/\lambda_\perp$ (see inset of Fig.~\ref{fig_super}). Therefore, the characteristic time obeys the same scaling laws as the subcritical characteristic time discussed above.
The distance to the SM fluctuates around a reference level ($\langle d\rangle\simeq 0.037 $ for the parameters in Fig.~\ref{fig_super}) that increases with $\alpha$. Nonlinearities in Eq.~(\ref{ito}) cause saturation of the growth experienced by the distance to the SM due to the positivity of $\lambda_\perp$, and keep the distances within a bounded interval. Close to the threshold, the average distance increases linearly with $\lambda_\perp\sim|\alpha-\alpha_c|$. Also, for increasing $\alpha$, the correlated bursts become more frequent (hence, their duration becomes shorter), such that far from the frontier the fluctuations become uncorrelated and the intermittent clustering effect disappears. Moreover, in that region, the numerical estimate of the largest transversal exponent significantly deviates from the one calculated for synchronized states, and new features, beyond the scope of the present work, occur. Therefore, we will focus on the near vicinity of the threshold. The histogram of logarithmic distances $y\equiv\ln d$ is presented in Fig.~\ref{fig_dista}. The distribution initially grows approximately as $\exp(ay)$ and, above the maximum value, falls off faster than exponentially. The coefficient of the exponential argument is $a\simeq 1$, at variance with the value $a=2\lambda_\perp/\sigma_1^2\equiv h$ (hyperbolicity exponent) \cite{ott}, where $\sigma_1^2$ is the variance of time-1 exponents, derived under the assumption of Gaussian fluctuations. In fact, in the continuum approximation, the evolution of $y\equiv\ln d$ follows, at first order, the Langevin equation \begin{equation} \label{langevin} \dot{y}=\lambda_\perp + \zeta(t) \, \end{equation} where $\zeta$ is a fluctuating quantity with zero mean and variance $\sigma_1^2$. If the fluctuations were white and Gaussian, then, from the stationary solution of the associated Fokker-Planck equation, the distribution of $y$ should increase following the law $P(y)\sim \exp(hy)$, with $h$ defined as above.
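For reference, the Gaussian prediction invoked here follows from a standard stationary Fokker--Planck argument (sketched below under the assumption of white Gaussian noise, which is precisely what fails in our case):

```latex
% Assuming white Gaussian noise, <zeta(t) zeta(t')> = sigma_1^2 delta(t-t'),
% the Langevin equation above leads to the Fokker-Planck equation
\begin{equation*}
\partial_t P(y,t) \;=\; -\lambda_\perp\,\partial_y P
    \;+\; \frac{\sigma_1^2}{2}\,\partial_y^2 P \,.
\end{equation*}
% A stationary solution with constant probability flux J satisfies
% J = lambda_perp P - (sigma_1^2/2) dP/dy, whose homogeneous part grows as
\begin{equation*}
P(y) \;\propto\; \exp\!\left(\frac{2\lambda_\perp}{\sigma_1^2}\,y\right)
    \;=\; \exp(hy)\,,
\end{equation*}
% valid for y well below the saturation level set by the nonlinearities.
```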
Multiplicative corrections to Eq. (\ref{langevin}), to model the upper-bounded behavior of $d$, will affect the shape of $P(y)$ mainly above the average value, and are not expected to affect the small-$d$ behavior. The main point is that the stochastic process $\{\zeta\}$ is neither white nor Gaussian, as can be neatly observed in Fig. \ref{fig_zeta}, which may explain the deviation from the $\exp(hy)$ law. \begin{figure}[htb] \begin{center} \includegraphics*[bb=60 380 520 670, width=0.45\textwidth]{figure5.eps} \end{center} \caption{Distribution of the logarithmic distance $y=\ln d$, for different values of $\alpha$. Inset: exponent $a$, resulting from the best exponential fit to the initial increasing regime, and hyperbolicity exponent $h$, as a function of $\alpha$. } \label{fig_dista} \end{figure} A universal result is that, at the onset of intermittency, the distribution of laminar phases (inter-burst intervals) decays as a power law \cite{int_teo}, implying the presence of laminar intervals of arbitrarily large size. In particular, the size of the average plateau diverges. Moving away from the onset, the tail of the distribution of laminar phases is gradually dominated by an exponential decay. In general, the distribution of laminar phases can be obtained by solving a first-return problem. For the power-law decay, the exponent $\beta=3/2$ was found to be universal, as long as the central limit approximation holds \cite{int_teo}. However, as we have seen, in our case, very close to the threshold, correlations persist and the distribution of finite-time exponents, even, and especially, in the central part (where it is divergent), deviates from the Gaussian approximation. Therefore, deviations from the 3/2 power law are expected. We measured inter-burst sizes, that is, the lengths of time segments during which the distance $d$ remains below a threshold value $d_o$.
Numerical distributions of inter-burst sizes, for values of the parameter $\alpha$ close to the synchronization threshold, are displayed in Fig.~\ref{fig_bursts}. In general, a rapid decay is observed for very small $\tau$ (not exhibited). Since a power-law behavior is expected for sufficiently large $\tau$, histogram heights for small values of $\tau$ are not exhibited in the figure, for the sake of clarity. One sees that the PDF of inter-burst intervals follows a power law with $\beta\simeq 1$, at clear variance with the value $\beta= 3/2$. A cross-over to an asymptotic exponential regime is always observed. We have already seen in Fig.~\ref{fig_distr} that, below $\alpha\simeq 1$, correlations persist. This may explain the power-law exponent observed in Fig.~\ref{fig_bursts}. \begin{figure}[htb] \begin{center} \includegraphics*[bb=60 400 550 705, width=0.45\textwidth]{figure6.eps} \end{center} \caption{Distribution of inter-burst times for the same CML of Fig.~\ref{fig_distr} with different (supercritical) values of $\alpha$ indicated on the graph. In all cases, $d_o\simeq 2\langle d\rangle$, but we verified that the decay laws do not substantially depend on the choice of the threshold $d_o$. Full lines, corresponding to the indicated power laws, were drawn for comparison. } \label{fig_bursts} \end{figure} \section{Summary} We have presented analytical results for the PDF of LTFEs in CS states of coupled Ulam maps. They were confirmed by numerical experiments performed for CMLs with interactions decaying with distance as a power law. The knowledge of the statistical properties of finite-time Lyapunov exponents allows us to understand the anomalies in bursting behavior close to the SM. As a consequence, universal laws derived under the assumption of Gaussian fluctuations do not hold generically.
Our results were obtained for local Ulam maps; however, the observed features may occur in a wider class of extended systems, as long as other deterministic chaotic or stochastic processes exhibit similar deviations from Gaussianity \cite{ramaswamy}. \section*{Acknowledgments} This work was partially supported by the Brazilian agencies CNPq and Funda\c{c}\~ao Arauc\'aria.
\section{Introduction} Detached, double-lined spectroscopic binaries that are also eclipsing provide an accurate determination of the stellar mass, radius and temperature of each of their individual components, and hence constitute a strong test of single-star stellar evolution theory \citep{last}. Eclipsing binary (EB) stars are very important for the study of stellar astrophysics. Their particular geometrical layout, their dynamics and radiative physics enable a detailed and accurate modelling and analysis of the acquired data, and allow many basic physical parameters of the components to be measured \citep{h2009}. Studies of binary stars by all the techniques available in modern astrophysics allow a wide range of parameters to be measured for each of the component stars, with some of them determined with very high accuracy (e.g., uncertainties of less than 1\%). This also applies to pre-main-sequence (PMS) stars. Through a complete analysis of spectroscopy and photometry of these systems, orbital and physical parameters of the two stars can be accurately derived. However, until recently there were only seven known low-mass pre-main-sequence EBs with $M$ $<$ 1.5 $M_\odot$: RXJ~0529.4+0041A \citep{cov2000,cov2004}, V1174~Ori \citep{stas2004}, 2MJ0535-05 \citep{stas2006,stas2007}, JW380 \citep{irwin}, Par~1802 \citep{cargi,stas2008}, ASAS~J0528+03 \citep{stempels}, and~MML53 \citep{hebb}. Multiple systems of two or more bodies seem to be the norm at all stellar evolutionary stages, according to observations, and it has been accepted for some time now that binarity and multiplicity constitute the principal channel of star formation \citep{Reip}. For this reason, knowledge of the formation of multiple stars becomes necessary to understand star formation in general \citep{del2004}. Moreover, observations of young multiple systems allow us to test evolutionary models in the early stages of stellar evolution.
On the other hand, it is important to increase the sample of young stars (especially PMS) because, while main-sequence, solar-type stars are well described by stellar evolution models (observations agree well with theoretical isochrones), recent measurements of the stellar properties of low-mass dwarfs and young PMS stars remain problematic for the existing models \citep{mo-ca2012}. In this paper we present the orbital and physical analysis of the eclipsing binary V1200~Centauri (HD~120778, HIP~67712, ASAS~J135218-3837.3; $\alpha$~=~13:52:17.51, $\delta$ = -38:37:16.82). It is a well-detached system with a circular orbit, a short orbital period ($\sim$2.5 days), and observed parameters consistent with the pre-main-sequence evolutionary stage. We also announce the discovery that it is a hierarchical triple system. This system was first studied in 1954 in the Cape Photographic Catalog 1950 \citep{jack}, which reported only magnitudes and the epoch. Then, in 1984, the spectral type of F5V was determined \citep{houk}. The system is described as an Eclipsing Algol (EA) by \citet{otero} and has a $V$ magnitude of 8.551 \citep{and-fra}. As a bright star, it was a subject of spectroscopic studies of the Geneva-Copenhagen survey \citep{Nordstr,holm2009}, where it was treated as a single star; however, due to the large brightness ratio in the visual, we can consider some of their results reliable. The parallax and proper motion are also known: $\pi$ = 8.43(94)~mas, $\mu_\alpha$ = -72.49(87), $\mu_\delta$ = -44.20(78)~mas~yr$^{-1}$ \citep{vLe07}. In Section 2 we describe the observations used to identify V1200~Cen as an eclipsing and spectroscopic binary with a circumbinary companion; in Section 3 we determine the orbital parameters of the system and physical properties of the component stars and compare them with theoretical isochrones. Finally, in Section 4 we discuss the future observations needed to fully analyse the system.
\section{Observations} \subsection{Echelle spectroscopy} Observations with the PUCHEROS instrument were carried out between January and May 2012 at the 50-cm telescope of the Observatory UC Santa Martina, located near Santiago, Chile. The telescope is a European Southern Observatory (ESO) instrument, formerly located at La Silla. PUCHEROS is the Pontificia Universidad Cat\'olica High Echelle Resolution Optical Spectrograph, developed at the Center of Astro-Engineering of Pontificia Universidad Cat\'olica de Chile \citep{infante,vanzi}. The spectrograph is based on a classic echelle design, fed by the telescope through an optical fibre, and it covers the visible range from 390 to 730 nm in one shot, reaching a spectral resolution of 20,000. It is located in a stable position, in a room adjacent to the telescope. The science image was typically created from four separate observations, 15 minutes each, combined later into one frame. For the wavelength calibration we used exposures of ThAr lamps taken before and after the science sequence. The data reduction of PUCHEROS spectra was performed with the task echelle in the IRAF package, following standard steps: CCD reduction, spectrum extraction and wavelength calibration. The data from PUCHEROS were supplemented with spectra observed with the CORALIE spectrograph at the 1.2 m Euler telescope. CORALIE is a fibre-fed cross-dispersed echelle spectrograph; it covers the spectral range from 380 to 690 nm, reaching a spectral resolution of 50,000. Observations were made in a simultaneous wavelength calibration mode, where the light of the object is collected by one fibre, and that of the ThAr lamp by the other. The spectra were taken between June 2012 and July 2013. CORALIE data were reduced with a dedicated, Python-based pipeline \citep{jor14}. \subsection{Photometry} Photometric data were obtained from the All-Sky Automated Survey (ASAS) \citep{pojm,pacz} and from the SuperWASP transiting planet survey \citep{poll2006}.
ASAS has produced an extensive catalogue of variable stars (ACVS) of the southern hemisphere\footnote{\texttt{http://www.astrouw.edu.pl/asas/?page=acvs}}. In this work, we use 495 data points from the third stage of the survey, obtained in the $V$ filter between 2000 and 2009. SuperWASP is a wide-field photometric variability survey designed to detect transiting gas-giant planets around bright main sequence stars. The survey cameras observe bright stars (V$\sim$ 9-13) at high precision (1\%) using a broad $V+R$ band filter. 3234 data points have been extracted from the SuperWASP public archive\footnote{\texttt{http://exoplanetarchive.ipac.caltech.edu/\\applications/ExoTables/search.html?dataset\\=superwasptimeseries}}. \section{Analysis} \subsection{Radial velocities and orbital solution} \begin{table*} \centering \caption{Radial velocity (RV) measurements of V1200~Cen, together with the final measurement errors ($\sigma$) and residuals from the final three-body fit ($O-C$). All values in km~s$^{-1}$. In the last column, 5/P denotes OUC-50cm/PUCHEROS and E/C Euler 1.2m/CORALIE observations.}\label{tab_rv} \begin{tabular}{lrrrrrrc} \hline\hline JD-2450000 & $v_1$ & $\sigma_1$ & $O-C_1$ & $v_2$ & $\sigma_2$ & $O-C_2$ & Tel./Sp. 
\\ \hline 5714.615861 & 45.958 & 0.646 & -0.340 & --- & --- & --- & 5/P \\ 5736.539995 & 64.395 & 0.494 & 0.222 & -127.519 & 2.640 & -0.757 & 5/P \\ 5737.639889 & -67.029 & 0.523 & 0.394 & 88.377 & 4.198 & -1.718 & 5/P \\ 5750.604835 & -62.826 & 2.446 & 1.482 & --- & --- & --- & 5/P \\ 5751.584224 & 74.651 & 0.536 & -0.788 & -126.370 & 4.324 & 3.856 & 5/P \\ 6066.642808 & 47.460 & 1.498 & -0.858 & --- & --- & --- & 5/P \\ 6066.665643 & 51.655 & 0.841 & 0.658 & --- & --- & --- & 5/P \\ 6078.565477 & -36.335 & 2.129 & 1.846 & --- & --- & --- & 5/P \\ 6080.625298 & -89.867 & 0.163 & -0.009 & 112.325 & 1.503 & 0.657 & E/C \\ 6081.564728 & 52.113 & 0.228 & -0.021 & -116.745 & 1.075 & -0.456 & E/C \\ 6179.474281 & -26.024 & 0.167 & 0.035 & 80.336 & 0.885 & -1.105 & E/C \\ 6346.690592 & -12.831 & 0.169 & -0.321 & 67.855 & 0.876 & 0.573 & E/C \\ 6348.857536 & -55.020 & 0.165 & 0.255 & 136.192 & 1.064 & 1.044 & E/C \\ 6349.894755 & 94.865 & 0.194 & 0.023 & -107.687 & 1.017 & -0.499 & E/C \\ 6397.520928 & 38.353 & 0.112 & -0.034 & -71.655 & 0.772 & 1.072 & E/C \\ 6398.517694 & -77.575 & 0.116 & 0.017 & 112.000 & 0.951 & -0.331 & E/C \\ 6497.610599 & -67.667 & 0.157 & -0.152 & 133.439 & 0.797 & 0.229 & E/C \\ 6498.610654 & 64.361 & 0.113 & 0.082 & -78.099 & 0.942 & 0.453 & E/C \\ \hline \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=\columnwidth]{RV_v1200.eps} \caption{Three-body orbital model of V1200~Cen based on the RV measurements from PUCHEROS (circles) and CORALIE (triangles). Filled symbols are for the primary component, and open ones for the secondary. Black dotted line on panel a) marks the systemic velocity $v_\gamma$. 
{\it a)} Keplerian orbit of the AB pair as a function of the orbital phase, with the perturbation removed; {\it b)} residuals of the full model as a function of phase; {\it c)} perturbation from the third body as a function of time, Keplerian orbit removed; {\it d)}~residuals of the full model as a function of time.}\label{fig_rv} \end{figure} Radial velocities (RVs) were measured with an implementation of the two-dimensional cross-correlation technique \textsc{todcor} \citep{zuc94}, with synthetic spectra used as templates. The formal RV measurement errors were computed from a bootstrap analysis of \textsc{todcor} maps created by adding randomly selected single-order \textsc{todcor} maps. The peaks of the cross-correlation function (CCF) coming from the two components were significantly different in height, due to the large brightness ratio of the two stars. The PUCHEROS spectra usually had a lower signal-to-noise (S/N) ratio (5-30) and the CCF peak from the secondary was not always recognized. The S/N of the CORALIE data was normally higher (25-60) and the secondary's CCF peak was more prominent, though still very low. With \textsc{todcor} we also tried to estimate the intensity ratio of the two components, but due to the low S/N of the cool secondary the results were very uncertain and we find them unreliable. The orbital solution was obtained simultaneously from all RV measurements with our simple procedure, which fits a double-Keplerian orbit using the Levenberg-Marquardt algorithm \citep[for a more detailed description, see][]{h2009}, and allows for Monte-Carlo and bootstrap analysis to obtain reliable estimates of the uncertainties. We used the photometric data first to find the orbital period, and to check for a possible eccentricity (see next Section). The eccentricity was found to be indistinguishable from zero, so it was held fixed at 0.0 in the subsequent steps.
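The model behind this fit -- a circular inner Keplerian orbit plus an eccentric outer Keplerian perturbation from the third body, as in Fig.~\ref{fig_rv} -- can be sketched as follows (our own illustration; the function names and the sign convention of the inner term are ours, not those of our actual fitting code):

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def rv_primary(t, P, T0, K1, vgamma, P3, T3, K12, e3, w3):
    """Primary RV: circular inner Keplerian plus eccentric outer
    perturbation (argument of pericentre w3 in radians)."""
    # inner (circular, e = 0) orbit of the eclipsing pair
    phi = 2.0 * math.pi * (t - T0) / P
    v_in = K1 * math.sin(phi)
    # outer (eccentric) Keplerian motion of the AB photocentre
    M3 = (2.0 * math.pi * (t - T3) / P3) % (2.0 * math.pi)
    E3 = kepler_E(M3, e3)
    # true anomaly from the eccentric anomaly
    nu = 2.0 * math.atan2(math.sqrt(1 + e3) * math.sin(E3 / 2),
                          math.sqrt(1 - e3) * math.cos(E3 / 2))
    v_out = K12 * (math.cos(nu + w3) + e3 * math.cos(w3))
    return vgamma + v_in + v_out
```

The perturbation term is bounded by $|v_{\rm out}|\le K_{12}(1+e_3)$, which sets the scale of the residual pattern seen in panel c) of Fig.~\ref{fig_rv}.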
In the final orbital analysis we also kept the period and $T_0$ fixed, at the values found with \textsc{jktebop} (see Table \ref{tab_lc}), i.e.: \begin{equation} \mathrm{Min~I} = 2451883.8813(19) + 2.4828752(22) \times E. \end{equation} The weights of the points were scaled according to the formal errors found by \textsc{todcor}. The procedure also allows for fitting for the differences between the zero points of different instruments, separately for each component. Those shifts were found to be comparable in value to their uncertainties, and different for each star. This can be explained by a mismatch between the spectrum and the template used, and the large rotational velocities of the two stars. We started with a purely double-Keplerian solution with no perturbations, but the solutions we were getting were not satisfactory, with a high reduced $\chi^2 \simeq 1600$, an $rms$ of the fit of about 10 km/s for both components, and residuals of the two components correlated, i.e. differing from the model by a similar value. We then used a modified version of our procedure to look for a third, circumbinary body in the system. We fitted for the outer orbit's period $P_3$, the amplitude of the inner pair's velocity variations $K_{12}$, the base epoch $T_3$, the eccentricity $e_3$, and the argument of pericentre $\omega_3$. We ran the fitting procedure again on the whole data set, and found a satisfactory solution, characterised by a much lower $rms$ -- 0.98 and 1.50 km/s for the primary and secondary, respectively -- and a much lower reduced $\chi^2 = 1.29$. All of our spectra have a barycentric correction applied: we used IRAF's \textit{bcvcor} for the PUCHEROS data, and in the case of the CORALIE data the correction is implemented in the reduction pipeline. Therefore, the scale of the perturbation and the outer orbit's eccentricity cannot be explained by an improper correction for the barycentre. Hereafter, following the usual convention, we will refer to the eclipsing pair as AB, and to the third body as C.
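The minimum masses and projected semi-major axis of the AB pair follow from the standard relations between the spectroscopic elements; a short cross-check of the values in Table \ref{tab_orbit}, using nominal physical constants (the function name and constants below are illustrative):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
R_SUN = 6.957e8      # solar radius [m]

def spectroscopic_elements(K1, K2, P_days, e=0.0):
    """M sin^3 i of both components [M_sun] and a12 sin i [R_sun]
    from the velocity amplitudes [km/s] and the period [days]."""
    K1, K2 = K1 * 1e3, K2 * 1e3                      # -> m/s
    P = P_days * 86400.0                             # -> s
    fac = P * (K1 + K2) ** 2 * (1.0 - e * e) ** 1.5 / (2.0 * math.pi * G)
    m1 = fac * K2 / M_SUN                            # M1 sin^3 i
    m2 = fac * K1 / M_SUN                            # M2 sin^3 i
    a12 = (K1 + K2) * P * math.sqrt(1.0 - e * e) / (2.0 * math.pi) / R_SUN
    return m1, m2, a12

# K1, K2 and P from the RV solution of V1200 Cen
m1, m2, a12 = spectroscopic_elements(78.23, 126.0, 2.4828752)
```

The returned values reproduce the tabulated $M_1 \sin^3 i = 1.352$, $M_2 \sin^3 i = 0.839$~M$_\odot$ and $a_{12} \sin i = 10.026$~R$_\odot$ to within the quoted uncertainties.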
We present all the measurements, together with their errors and residuals, in Table \ref{tab_rv}. In Table \ref{tab_orbit} we present the results of the full orbital analysis. The parameters of the circumbinary orbit should be treated as preliminary, but the binary pair's orbital elements are well constrained. The RV measurement errors were initially scaled in such a way that the final reduced $\chi^2$ was close to 1; the fit itself was not affected, as the relative weights were not changed. The parameter uncertainties were then found by a bootstrap analysis (10000 iterations). In this way we account for possible systematics and obtain reliable uncertainties on the final parameters. Our model, separated into the Keplerian and perturbation components, is presented in Figure \ref{fig_rv}. \begin{table} \centering \caption{Results of the RV analysis and the orbital parameters of V1200~Cen.}\label{tab_orbit} \begin{tabular}{lcc} \hline\hline Parameter & Value & $\pm$\\ \hline \multicolumn{3}{c}{\it AB eclipsing pair}\\ $K_1$ [km~s$^{-1}$] & 78.23 & 0.37 \\ $K_2$ [km~s$^{-1}$] & 126.0 & 1.1 \\ $v_\gamma$ [km~s$^{-1}$] & 10.92 & 0.94 \\ $a_{12} \sin{i}$ [R$_\odot$] & 10.026 & 0.058\\ $e_{12}$ & 0.0 & (fixed) \\ $q$ & 0.6208 & 0.0062\\ $M_1 \sin^3{i}$ [M$_\odot$] & 1.352 & 0.027 \\ $M_2 \sin^3{i}$ [M$_\odot$] & 0.839 & 0.012 \\ \multicolumn{3}{c}{\it Outer orbit}\\ $P_3$ [d] & 351.5 & 3.4 \\ $T_3$ [JD-2450000] & 5358.2 & 8.0 \\ $K_{12}$ [km~s$^{-1}$] & 18.37 & 0.11 \\ $e_3$ & 0.42 & 0.09 \\ $\omega_3$ [$^\circ$] & 156 & 7 \\ $a_3\sin{i_3}$ [AU] & 0.538 & 0.033\\ $f(M_3)$ [M$_\odot$] & 0.099 & 0.025\\ $M_3 (i_3=90^\circ)$ [M$_\odot$] & 0.662 & 0.066 \\ \multicolumn{3}{c}{\it Other fit parameters}\\ E/C$-$5/P$_1^a$ [km~s$^{-1}$] & 0.98 & 0.31 \\ E/C$-$5/P$_2^a$ [km~s$^{-1}$] & 5.32 & 2.00 \\ DoF & \multicolumn{2}{c}{20} \\ $rms_1$ [km~s$^{-1}$] & \multicolumn{2}{c}{0.68}\\ $rms_2$ [km~s$^{-1}$] & \multicolumn{2}{c}{1.40}\\ \hline \end{tabular} \\$^a$ E/C$-$5/P is the difference in spectrograph zero \\points measured for
each component separately. \end{table} It is worth noting that the mass function $f(M_3)$ was calculated from the full formula: \begin{equation} f(M_3) = \frac{M_3^3 \sin^3{i_3}}{(M_1+M_2+M_3)^2}, \end{equation} and is quite large. The minimum mass of the third body (for $i_3=90^\circ$) is a substantial fraction of the total mass of the eclipsing pair, so the popular approximation $(M_1+M_2+M_3)^2 \simeq (M_1+M_2)^2$ is not valid. We checked for tertiary eclipses in the residuals of the light curve models (see next Section) and found no obvious evidence for them, which means that the outer orbit's inclination differs from $\sim90^\circ$. We also failed to identify a tertiary peak in the CCF, which means that the tertiary component is significantly dimmer than the secondary. All this is consistent with, for example, a 0.7 M$_\odot$ star on a $70-75^\circ$ orbit. {It is also possible that component C is itself a binary composed of two similar, lower-mass stars, which together exceed the mass of B. This would be in agreement with \citet{tok14}, who found no hierarchical triples with outer orbits shorter than $\sim$1000 days. The mechanism leading to the formation of such a double-double system, with short inner orbital periods and an outer period shorter than $\sim$1000 d, assumes non-aligned outer and inner orbits \citep{whi01,tok14}. It is thus possible that in V1200~Cen the mutual inclination between the outer and inner orbits is large, making the Kozai mechanism possible (Kozai 1962)}. \subsection{Light curve modelling} \begin{figure} \centering \includegraphics[width=\columnwidth]{LC_v1200.eps} \caption{Observed ASAS (top) and SuperWASP (bottom) light curves with \textsc{jktebop} models over-plotted. Lower panels show the residuals of the fit. } \label{fig_lc} \end{figure} \begin{table*} \centering \caption{Parameters obtained from the ASAS and SuperWASP (SW) LC analysis.
The adopted values are weighted averages.}\label{tab_lc} \begin{tabular}{lcccccc} \hline\hline Parameter & ASAS Value & $\pm$ & SW Value & $\pm$ & Adopted Value & $\pm$ \\ \hline $P$ [d] & 2.4828778 & 0.0000043 & 2.4828752 & 0.0000025 & 2.4828752 & 0.0000022\\ $T_{0}$ [JD$-$2450000] & 1883.8789 & 0.0031 & 1883.8827 & 0.0024 & 1883.8813 & 0.0019 \\ $i$ [$^\circ$] & 81.9 & $^{+2.8}_{-1.3}$ & 81.6 & $^{+1.6}_{-1.3}$ & 81.8 & $^{+1.4}_{-1.2}$\\ $r_{1}$ & 0.137 & $^{+0.014}_{-0.015}$ & 0.138 & $^{+0.025}_{-0.034}$ & 0.137 & $^{+0.014}_{-0.015}$ \\ $r_{2}$ & 0.107 & $^{+0.024}_{-0.039}$ & 0.110 & $^{+0.038}_{-0.026}$ & 0.109 & $^{+0.022}_{-0.025}$ \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{inc_rr.eps} \caption{Results of the error analysis performed with \textsc{jktebop}, with Monte-Carlo on the ASAS data (red) and residual shifts on the SuperWASP data (green). The plots present the distribution of consecutive solutions in the $r_1 + r_2$ vs. $i$ (left) and $k=r_2/r_1$ vs. $i$ (right) planes. Black stars with error bars correspond to the adopted values with their $1\sigma$ uncertainties.} \label{fig_inc_rr} \end{figure*} Both ASAS and SuperWASP light curves were fitted using the \textsc{jktebop} code (Southworth et al. 2004a,b), which is based on the \textit{Eclipsing Binaries Orbit Program} \citep[\textsc{ebop};][]{pophet81}, and with \textsc{phoebe} \citep{PZ2005} -- an implementation of the WD code \citep{wd71}. \textsc{jktebop} determines the optimal model light curve that matches the observed photometry and reports the parameters obtained for the model. The derived parameters include the period $P$, time of minimum light $T_{0}$, surface brightness ratio, sum of the relative radii $(R_{1} + R_{2})/a$, ratio of the radii $R_2/R_1$, inclination $i$, eccentricity $e$, and argument of periastron $\omega$. The routine also takes into account limb darkening, gravity darkening, and reflection effects.
We adopted the logarithmic limb darkening law from \cite{claret}. To obtain the limb darkening coefficients we considered the ASAS light curve to be in Johnson's $V$ passband. For the SuperWASP filter we interpolated the coefficients from \cite{poll2006}. {We also checked the ASAS data for O-C timing variations, and did not find any significant ones. Thus we did not correct the light curves for the light-time effect induced by the third body.} We used \textsc{phoebe} to obtain the temperature of the secondary star, using the temperature of the primary star known previously from \citet{holm2009}. We did not use the latest Geneva-Copenhagen survey results, as they rely on infrared data, which in the case of V1200~Cen can be affected by the secondary and tertiary stars. The value of the mass ratio was found in the previous orbital analysis. Figure \ref{fig_lc} shows the observed ASAS and SWASP light curves together with their models. Reliable uncertainties were calculated with the Monte-Carlo method (10000 runs) for the ASAS data, and with the residual-shifts method \citep{sou08} for the SWASP data. Figure \ref{fig_inc_rr} shows the distribution of $r_1+r_2$ and $k$ as a function of the inclination angle. The usual correlation is clearly visible. The results of the LC analysis with \textsc{jktebop} are presented in Table \ref{tab_lc}. We found that the ratios of the radii ($k$) and effective temperatures are the most uncertain values, contributing substantially to the errors of physical parameters such as the absolute radii or luminosities. This is due to the large scatter of the photometric data and the low light contribution of star B (12\% in $V$). One way to constrain $k$ would be to use the intensity ratios from \textsc{todcor}, but, as mentioned, the S/N of the spectra was too low. Data of much higher S/N are needed, optimally taken in the infrared (both photometry and spectroscopy), where the secondary's contribution is larger.
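Combining the projected semi-major axis from the RV solution with the adopted inclination and fractional radii from the light-curve fit gives the absolute dimensions directly; a minimal cross-check using the values quoted in Tables \ref{tab_orbit} and \ref{tab_lc} (illustrative only):

```python
import math

# Inputs: a12 sin i from the RV solution, i and the fractional radii
# from the adopted light-curve fit (values quoted in the text's tables).
a_sini = 10.026                 # R_sun
i_deg = 81.8                    # degrees
r1, r2 = 0.137, 0.109           # fractional radii R/a

a = a_sini / math.sin(math.radians(i_deg))   # absolute semi-major axis [R_sun]
R1, R2 = r1 * a, r2 * a                      # absolute radii [R_sun]
```

This reproduces $a = 10.13$~R$_\odot$, $R_1 = 1.39$~R$_\odot$ and $R_2 = 1.10$~R$_\odot$ as listed later in Table \ref{tab_phys}.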
\subsection{Kinematics} Using the known parallax and proper motion \citep{leeuwen} and our value of the systemic velocity $v_\gamma$, we calculated the galactic velocities: $U=-36.7\pm3.3$, $V=-21.8\pm3.6$ and $W=-1.8\pm0.6$~km~s$^{-1}$ (no correction for the solar motion was applied). These values put V1200~Cen in the Hyades moving group \citep{sea07,zha09}, which suggests an age of $\sim$625~Myr. However, \citet{fam08} have shown that about half of the stars that reside in the same area of velocity space as the Hyades group actually do not belong to it. Nevertheless, V1200~Cen seems to {be a young system belonging} to the thin galactic disk. \subsection{Absolute parameters} \begin{table} \centering \caption{Physical parameters of V1200~Cen obtained with \textsc{jktabsdim} on the basis of the spectroscopic values of $T_{\rm{eff,1}}$ and $[Fe/H]$.}\label{tab_phys} \begin{tabular}{lcc} \hline\hline Parameter & Value & $\pm$ \\ \hline $a$ [R$_\odot$] & 10.13 & $^{+0.07}_{-0.06}$\\ $M_1$ [M$_\odot$] & 1.394 & 0.030 \\ $M_2$ [M$_\odot$] & 0.866 & 0.015 \\ $R_1$ [R$_\odot$] & 1.39 & $^{+0.14}_{-0.15}$\\ $R_2$ [R$_\odot$] & 1.10 & $^{+0.22}_{-0.25}$\\ log $g_{1}$ & 4.30 & $^{-0.09}_{+0.10}$\\ log $g_{2}$ & 4.29 & $^{-0.18}_{+0.20}$\\ $[Fe/H]$ & -0.18$^a$ & \\ $T_{\rm{eff,1}}$ [K] & 6266$^a$ & 94 \\ $T_{\rm{eff,2}}$ [K] & 4650$^b$ & 900 \\ $L_1$ [$\log($L/L$_\odot)$] & 0.42 & $^{+0.09}_{-0.10}$\\ $L_2$ [$\log($L/L$_\odot)$] & -0.29 & $^{+0.38}_{-0.40}$\\ $d$ [pc] & 98 & 11\\ \hline \end{tabular} \\$^a$ From \citet{holm2009}. \\$^b$ From the temperature ratio obtained with \textsc{phoebe}. \end{table} The absolute dimensions and distance were calculated using \textsc{jktabsdim} with the results obtained from the radial velocity and light curve analyses. The input parameters, with their respective uncertainties, were the velocity semi-amplitudes (km~s$^{-1}$), period (days), orbital inclination (degrees), fractional stellar radii (i.e.
in units of the orbital semi-major axis), effective temperatures, and apparent magnitudes in the $B$ and $V$ filters. We did not use the available $JHK$ photometry, as it could have been affected by the third star. For the interstellar reddening we adopted $E(B-V)=0$, and we observed no major differences between the distances obtained in the two filters, both in agreement with the value of the parallax 8.43$\pm$0.94~mas \citep[119$\pm$13~pc;][]{leeuwen}. We present the physical parameters of V1200~Cen in Table \ref{tab_phys}. \subsection{Evolutionary status} In Figure \ref{fig_iso} we present our results from Table \ref{tab_phys} plotted over theoretical stellar evolution models of \citet{sies} and from the Yonsei-Yale \citep[Y$^{2}$;][]{yi2001} set, for ages of 6, 10, 15, 20 and 625 Myr. For Y$^{2}$ we also include the 3 Gyr isochrone. The values from Table \ref{tab_phys} are marked as black points. The two most precise values we have are the mass and temperature of the primary. One can see that it is too cool for a main-sequence object, so it is either evolved, as the 3~Gyr isochrone suggests, or in its pre-main-sequence (PMS) stage. However, both the Y$^2$ and Siess models predict a drastic change in temperature and radius between 10 and 20 Myr. The primary's radius is much too small for an evolved star and agrees with the late PMS, while the temperature clearly suggests a younger age, inconsistent with the radius. We also note that the secondary's parameters are much better reproduced by PMS models, especially its radius, which is much larger than that of main-sequence objects of the same mass, as expected for stars that are still evolving onto the main sequence. The comparison of our results with the model predictions indicates that V1200~Cen is a pre-main-sequence system, but no single age is fully consistent with the data, and the resulting distance is only in fair agreement with the results from $Hipparcos$.
Unfortunately, the uncertainties are very large, especially for the secondary component, so our conclusions cannot be treated as final. The strongest constraint we have comes from the primary's temperature, which was derived spectroscopically by \citet{holm2009}. { We find almost exactly the same temperature -- 6263~K -- using the observed $B-V$ colour (0.475~mag) and the calibrations of \citet{sek2000}. One has to remember that Holmberg's temperature was derived under the assumption that the star is single.} Thus, it is { still possible} that the true $T_{\rm eff}$ is higher. We ran a series of tests to find the temperatures that give the best agreement with the $Hipparcos$ parallax, assuming their ratio to be the same as found by us with \textsc{phoebe}. { We found the best match to the observed distance for much higher temperatures of 6900 and 5120~K for the primary and secondary, respectively.} The resulting effective temperatures and related luminosities are summarised in Table \ref{tab_phys_hot}. { We plot them in Figure \ref{fig_iso} with red symbols.} \begin{table} \centering \caption{Radiative parameters of V1200~Cen obtained with \textsc{jktabsdim} by fitting the temperature scale to match the $Hipparcos$ distance.}\label{tab_phys_hot} \begin{tabular}{lcc} \hline\hline Parameter & Value & $\pm$ \\ \hline $T_{\rm{eff,1}}$ [K] & 6900 & 100$^a$ \\ $T_{\rm{eff,2}}$ [K] & 5120$^b$ & 900$^a$ \\ $L_1$ [$\log($L/L$_\odot)$] & 0.59 & $^{+0.09}_{-0.10}$\\ $L_2$ [$\log($L/L$_\odot)$] & -0.13 & $^{+0.36}_{-0.37}$\\ \hline \end{tabular} \\$^a$ Uncertainty assumed. \\$^b$ From the temperature ratio obtained with \textsc{phoebe}. \end{table} One can see that the new { higher} values of the radiative parameters fit the main-sequence or even 20~Myr models, and both radii still agree with the 30~Myr isochrone. The age of 30~Myr also happens to be the time scale of circularisation of the system's orbit. All in all, we get a self-consistent model of a { 30--625~Myr-old young multiple.
As an attempt to distinguish between the high and low temperature scales, we also ran a simplified spectral analysis with the Spectroscopy Made Easy package \citep[SME;][]{val96}. We took the CORALIE spectra, shifted them by the measured velocity of the primary, and stacked them together. We ran SME on a portion of the spectrum spanning 6190-6260~\AA, keeping [$Fe/H$], $\log(g)$ and $v_{\rm rot}$ at the values from, or expected from, Table \ref{tab_phys}. The secondary's contribution was treated as a contribution to the continuum, constant across the wavelength range. We obtained $T_{\rm eff,1}\sim6000$~K, favouring the lower temperature scale and younger ages (Fig. \ref{fig_iso}). We are aware that such an analysis is affected by the secondary, and we do not treat it as a proof of the correctness of Holmberg's temperature, but we find it unlikely to be off by almost 1000~K, i.e. between spectral types F0 and F8.} \begin{figure*} \begin{picture}(500,600) \put(0,0){\includegraphics[width=0.45\textwidth]{V1200CenML_iso.eps}} \put(250,0){\includegraphics[width=0.45\textwidth]{V1200CenML_isoSiess.eps}} \put(0,200){\includegraphics[width=0.45\textwidth]{V1200CenMTeff_iso.eps}} \put(250,200){\includegraphics[width=0.45\textwidth]{V1200CenMTeff_isoSiess.eps}} \put(0,400){\includegraphics[width=0.45\textwidth]{V1200CenMR_iso.eps}} \put(250,400){\includegraphics[width=0.45\textwidth]{V1200CenMR_isoSiess.eps}} \end{picture} \caption{Mass vs. radius, $\log(L)$ and $\log(T_{\rm eff})$. The left column shows our results and the isochrones from Y$^{2}$ for ages of 6, 10, 15, 20, 625 Myr and 3 Gyr. The right column shows the same but using the \citet{sies} isochrones (without 3 Gyr).
{ Black} symbols are for our results from Table \ref{tab_phys} and { red} symbols for the results from Table \ref{tab_phys_hot}.}\label{fig_iso} \end{figure*} \section{Conclusions} We report the discovery that the eclipsing binary V1200~Centauri is a triple, {likely a member of the Hyades moving group, although the largely inflated radius of the secondary may suggest that the system is a PMS one, around 30 Myr old}. There are few such objects known so far; however, they are very important for calibrating stellar evolution models at young ages, where stars change rapidly as they evolve onto the main sequence. Analysis of the ASAS and SuperWASP light curves, combined with radial velocity measurements, allowed us to obtain the absolute parameters of the system. Through our analysis of the radial velocities and the orbital solution we determined the presence of a third companion on a wide orbit. We reached a good precision in the mass determination (2.2 and 1.7\%), but the other parameters (radii, temperatures, luminosities) are not as well established. Further analysis of the system allowed us to compare our results with stellar evolution models, obtaining an approximate age of 30 Myr. Despite its brightness, data of higher S/N are required to better constrain the physical parameters of the system, especially the temperatures and the ratio of the radii. A possible solution would be to obtain photometry and spectroscopy in the IR, where components B and C contribute more than in the visual. The secondary eclipse would be deeper, and the influence of the third light should be detectable. Also, if RVs of star C were measured, a full dynamical solution of the system could be obtained, including the mass of the third star and the inclination of its orbit. This would also allow for comparing the isochrones with three stars, thus constraining the age and evolutionary status even better. With its probable age, V1200~Cen is an important object for studying tidal and third-body interactions in young binaries.
\newpage \section*{Acknowledgements} PUCHEROS was funded by CONICYT through project FONDECYT No. 1095187. J.C. acknowledges J.M. Fern\'andez for his assistance during all of the observations with PUCHEROS. K.G.H. acknowledges support provided by the National Astronomical Observatory of Japan as Subaru Astronomical Research Fellow and the Proyecto FONDECYT Postdoctoral No. 3120153. L.V. acknowledges support by Fondecyt 1095187, 1130849 and FONDEF CA13I10203. N.E. and R.B. are supported by CONICYT-PCHA/Doctorado Nacional. We also acknowledge the support provided by the Polish National Science Center through grants 2011/03/N/ST9/01819, 2011/01/N/ST9/02209 and 5813/B/H03/2011/40. Support for A.J. and M.C. is provided by the Ministry for the Economy, Development, and Tourism's Programa Iniciativa Cient\'{i}fica Milenio through grant IC\,120009, awarded to the Millennium Institute of Astrophysics (MAS), and by Proyecto Basal PFB-06/2007. M.C. acknowledges additional support by FONDECYT grant \#1141141.
\section{Introduction} The brain is widely understood to operate via densely-interconnected network behavior \cite{fornito_fundamentals_2016}. Many diseases have been proposed to be essentially ``connectivity diseases", such as schizophrenia, Alzheimer's, and other dementias \cite{van_den_heuvel_exploring_2010}. Hence, initiatives such as the Human Connectome Project \cite{sporns_human_2005} have high hopes for solving open problems in brain function and disease \cite{craddock_imaging_2013}. At the same time, modularity of function, to at least some degree, is also clearly evident in the brain. Brain lesions in specific locations commonly lead to specific deficits \cite{blumenfeld_neuroanatomy_2010}, for example Broca's aphasia resulting from lesions in Broca's area, or memory deficits from lesions in the hippocampus. Another source of support comes from the regional differences in cytoarchitecture, identified by Brodmann and others in cadavers \cite{zilles_centenary_2010}. Due to this combination of modular and network function, the brain is often described as having a hierarchical architecture \cite{van_den_heuvel_exploring_2010}. Brain parcellation, therefore, can be described as identification of the macroscopic-to-mesoscopic level (or levels) of this hierarchy. The simplest and most common approach to parcellation \cite{stanley_defining_2013} is the use of pre-defined neuroanatomical maps defining regions of interest (ROI) \cite{evans_brain_2012}. Functional imaging data, such as a functional magnetic resonance imaging (fMRI) scan, are first normalized into a common coordinate system such as Talairach coordinates \cite{lancaster_automated_2000}. Then, pre-defined masks provide a direct mapping of groups of image voxels onto regions describing the parcels. From here researchers can compute the net activity of each parcel by averaging the time courses of the contained voxels, then analyze the networked activity between the parcels.
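This ROI-based pipeline is short enough to sketch directly: average the voxel time courses within each predefined parcel, then correlate the resulting parcel signals. The array names and toy sizes below are hypothetical, purely for illustration:

```python
import numpy as np

# Toy fMRI data: T time points x V voxels (random stand-in for a real scan)
rng = np.random.default_rng(0)
T, V = 100, 6
data = rng.normal(size=(T, V))

# Predefined atlas mask: voxel -> parcel index (two parcels of three voxels)
parcel_of = np.array([0, 0, 0, 1, 1, 1])

# Net parcel activity: mean time course over the voxels in each parcel
parcel_ts = np.column_stack([data[:, parcel_of == p].mean(axis=1)
                             for p in np.unique(parcel_of)])

# Networked activity between parcels: parcel-by-parcel correlation matrix
network = np.corrcoef(parcel_ts, rowvar=False)
```

Real pipelines differ mainly in scale (tens of thousands of voxels, hundreds of parcels) and in the preprocessing applied before averaging.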
However, the brain is known to vary significantly between individuals, particularly in association regions of the cortex \cite{wang_parcellating_2015}, rendering the subsequent analyses inaccurate for such regions. This is problematic because the complex processes believed to involve such regions \cite{blumenfeld_neuroanatomy_2010} are among the most poorly-understood functions of the brain. Further, it is not clear that cytoarchitecture alone can serve to determine the functional divisions in the brain \cite{arslan_joint_2015}. A variety of data-driven methods for parcellation have been investigated \cite{thirion_which_2014,craddock_neuroimage_2018,de_reus_parcellation-based_2013, arslan_human_2018}. These use the similarities between the time courses of the voxels to group them into parcels \cite{blumensath_spatially_2013}. The simplest idea is to apply clustering methods, unsupervised learning approaches from the machine learning field \cite{hastie_unsupervised_2009}, directly to the voxel time courses. Multiple types of clustering have been adapted to parcellation, including $k$-means clustering \cite{goutte_clustering_1999, thirion_which_2014}, hierarchical clustering \cite{goutte_clustering_1999,filzmoser_hierarchical_1999,arslan_multi-level_2015,blumensath_spatially_2013}, and fuzzy clustering approaches \cite{baumgartner_fuzzy_1997,baumgartner_quantification_1998,golay_new_1998}. Related methods include the identification of boundaries between groups of similar time courses \cite{gordon_generation_2016}, matrix factorization methods \cite{blumensath_sparse_2014,cai_estimation_2018}, dictionary learning \cite{wang_cerebellar_2016}, and processing the time courses to form new types of features to cluster \cite{goutte_feature-space_2001,mezer_cluster_2009}. Fuzzy c-means was once considered the favorite \cite{goutte_feature-space_2001}, but research has largely taken a different direction in recent decades.
Despite several years of progress, the problem of individual parcellation continues to be considered unsolved, with no clearly-superior approach \cite{arslan_human_2018}. Recent research has sought to utilize the network connectivity itself to improve upon parcellation \cite{eickhoff_connectivity-based_2015}. The most direct approach is to apply a clustering method to the covariance matrix or a similar quantity describing correlations between time courses. Such analyses have primarily been limited to specific sub-regions such as the orbitofrontal cortex \cite{kahnt_connectivity-based_2012}, post-central gyrus \cite{roca_inter-subject_2010}, or medial frontal cortex \cite{kim_defining_2010}. This is likely due at least in part to the fact that clustering of correlations produces a poor parcellation at the scale of the entire brain. Approaches such as that of Yeo et al. \cite{thomas_yeo_organization_2011} yield small numbers (e.g., seven) of brain-wide networks, rather than spatially-localized parcels. A closely-related direction is based on the graph partitioning perspective, leveraging advances from spectral graph theory \cite{chung_spectral_1997} and the fast-growing field of network science \cite{brandes_what_2013}. Here the time courses are used to generate a large graph connecting all voxels, which may be weighted or binary, though to our knowledge it is always undirected and unsigned. Then a graph partitioning technique such as normalized cuts (Ncut) \cite{shi_normalized_2000} or spectral clustering \cite{weiss_segmentation_1999,meila_learning_2002,belkin_laplacian_2003,saerens_principal_2004,von_luxburg_tutorial_2007,meila_spectral_2015} is applied to this dense graph, defining parcels as subgraphs with denser internal connections.
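The spectral relaxation underlying these partitioning methods fits in a few lines of numpy. The following toy sketch (illustrative graph, deterministic farthest-point $k$-means initialization, and all names our own, not any cited implementation) recovers the two dense blocks of a small weighted graph:

```python
import numpy as np

def kmeans(X, K, iters=50):
    """Plain Lloyd k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, K):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def spectral_clustering(W, K):
    """Unnormalized Laplacian, K smallest eigenvectors, k-means on the rows."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = np.linalg.eigh(L)      # eigenvalues returned in ascending order
    V1 = vecs[:, :K]                 # eigenvectors of the K smallest eigenvalues
    return kmeans(V1, K)

# Toy graph: two dense 3-node blocks joined by one weak edge
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
W[2, 3] = W[3, 2] = 0.1
labels = spectral_clustering(W, 2)
```

Applied to a voxel-level graph, the rows of the eigenvector matrix play the role of low-dimensional voxel embeddings, and the cluster labels define the parcels.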
As with the clustering of correlations, immediate adaptations of these methods tend to lack the spatial continuity of ROI, instead resulting in a small number of brain-wide networks \cite{heuvel_normalized_2008,venkataraman_exploring_2009}, or being restricted in analysis to a single ROI \cite{shen_graph-theory_2010}. Some researchers have developed methods which achieve more realistic-looking parcels by imposing spatial constraints \cite{thirion_dealing_2006}, such as by only allowing adjacent voxels to be connected \cite{craddock_whole_2012,arslan_joint_2015}, or by combining neighbors into ``super-voxels" \cite{wang_supervoxel-based_2016,wang_generation_2018}. While spatial constraints or similar regularization techniques exhibit good reproducibility, this may simply be a result of bias, as such methods do not necessarily perform well in other metrics \cite{arslan_human_2018}. In this paper we present a new approach to parcellation based on taking an image science perspective on the problem of predicting activity within a network, as initially proposed in \cite{dillon_regularized_2017}. In imaging, a resolution cell is defined as the smallest region within which no further detail may be discerned. Here we analogously consider groups of voxels whose connectivity cannot be independently resolved, and suggest that this implies they belong in the same parcel. We will show that using clustering to produce a parcellation of resolution cells in this way yields a new variant of spectral clustering, which is able to form realistic-looking ROI without the need for strict distance regularization. We demonstrate the approach using real fMRI data, where we find that the resulting parcels are also more predictive of network activity.
\section{Methods} Spectral clustering is a very popular clustering technique \cite{von_luxburg_tutorial_2007} which has been used to solve problems in a wide range of areas such as image processing \cite{weiss_segmentation_1999}, graph theory \cite{saerens_principal_2004}, clustering on nonlinear manifolds \cite{belkin_laplacian_2003}, and brain parcellation as reviewed in the previous section. Spectral clustering is commonly described as a continuous relaxation of the normalized cut algorithm \cite{von_luxburg_tutorial_2007} for partitioning graphs. The discrete version of the normalized cut algorithm is NP-hard \cite{von_luxburg_tutorial_2007}, while the continuous approximation (which we call ``spectral clustering", often also referred to in the literature as simply ``the Ncut algorithm") can be performed with basic linear algebra methods. The most common form \cite{von_luxburg_tutorial_2007} of spectral clustering is provided in Algorithm 1. \begin{algorithm} \begin{algorithmic}[1] \caption{Spectral Clustering} \STATE Form an undirected, unsigned, weighted graph represented by the weighted adjacency matrix $\mathbf W$, computed from correlations between time courses. Choose the number of clusters $K$. \STATE Compute the graph Laplacian $\mathbf L = \mathbf D - \mathbf W$. The matrix $\mathbf D$ is a diagonal matrix where $D_{i,i}$ is the degree of node $i$. \STATE Compute the $K$ smallest eigenvalues $\lambda_1, ..., \lambda_K$ and corresponding eigenvectors $(\mathbf v_1, ..., \mathbf v_K) = \mathbf V_1$ from the eigenvalue decomposition $\mathbf L = \mathbf U \mathbf S \mathbf V^T$, where $\mathbf v_i$ is the $i$th column of $\mathbf V$. \STATE Apply $k$-means clustering to the rows of $\mathbf V_1$ with $K$ clusters.
\label{SC_algo} \end{algorithmic} \end{algorithm} In addition to approximately-optimal partitioning of graphs, other interpretations have been noted for spectral clustering \cite{meila_spectral_2015}, sometimes involving minor variants on Algorithm \ref{SC_algo}, such as by normalizing the Laplacian differently. In \cite{meila_learning_2002}, it is shown that one can also use the largest eigenvalues and corresponding eigenvectors of a normalized version of the adjacency matrix $\mathbf W$ (normalized to have unit row sums). The question of which variant works best has generally been left to empirical testing \cite{von_luxburg_tutorial_2007}. Our approach addresses the problem from an entirely different direction, based on the idea of resolving unknown edges, leading to a new variant of spectral clustering. In \cite{dillon_image_2016} the concept of resolution was applied to estimating the relation of functional imaging to phenotypes, resulting in modular regions suggestive of brain parcellation. In \cite{dillon_regularized_2017}, the resolution concept was applied to the relation between the activity of different voxels, i.e., network estimation, and used to perform brain parcellation. First we will review that approach here. Let $\mathbf A$ be a matrix containing fMRI data, where $\mathbf a_i$, the $i$th column of $\mathbf A$, contains the time series describing the activity of the $i$th voxel. We assume the data has been preprocessed to remove artifacts, and standardized. The neighborhood selection problem \cite{meinshausen_high-dimensional_2006} is the regression problem to estimate the functional connectivity of the $k$th voxel given all other voxels, which we formulate as \begin{align} \begin{array}{c} \mathbf x_k^* = \arg\underset{\mathbf x_k}{\min} \; \Vert \mathbf A \mathbf x_k - \mathbf a_k \Vert, \end{array} \label{eq_nbhd_opt} \end{align} where $\mathbf x_k^*$ is the $k$th column of the weighted adjacency matrix $\mathbf X$. 
Note that we do not restrict the diagonal of $\mathbf X$ to be zero, hence we allow self-loops in the network. A typical approach is to seek a sparse solution to Eq. (\ref{eq_nbhd_opt}), as in \cite{meinshausen_high-dimensional_2006}, by imposing an appropriate regularization term. This is especially valued in applications such as connectomics due to the high redundancy in the data, i.e., strong correlations between voxel time series. Of course the goal of parcellation is to identify modules based on these correlations. To that end we will use an approach from image science for analyzing the redundancy in an inverse problem. In this case, the inverse problem is the linear model, \begin{align} \mathbf A \mathbf x_k = \mathbf a_k, \label{eq_inverse_nbhd} \end{align} where we view $\mathbf a_k$ in Eq. (\ref{eq_nbhd_opt}) as a measured output, and $\mathbf x_k$ as an unknown input we wish to determine. The data matrix $\mathbf A$ serves as a forward model, an operator which transforms the input to the output, causing some degree of information loss. The resolution matrix \cite{jackson_interpretation_1972} describes this information loss for inverse problems, and is defined as \begin{align} \mathbf R = \mathbf A^\dagger \mathbf A, \end{align} where $\mathbf A^\dagger$ is the pseudoinverse of $\mathbf A$. The resolution matrix is sometimes described as an approximate identity matrix for the problem \cite{ganse_uncertainty_2013}. Consider that if $\mathbf A$ were invertible, then $\mathbf A^\dagger = \mathbf A^{-1}$ and hence $\mathbf R = \mathbf I$, the identity matrix. Further, an invertible $\mathbf A$ means Eq. (\ref{eq_inverse_nbhd}) can be solved exactly for $\mathbf x_k$ using the inverse, and indeed Eq. (\ref{eq_nbhd_opt}) will find this exact solution. Generally, the more $\mathbf R$ differs from the identity matrix, the more information loss there is, and the worse our estimate of the true connectivity will be. A simple example is depicted in Fig.
\ref{img_identity_vs_reso}. \begin{figure}[h!] \centering \scalebox{0.55}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{identity_matrix_image.png}} \scalebox{0.55}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{reso_matrix_image.png}} \caption{Identity matrix (left) versus resolution matrix (right) which depicts a blurring of each sample among the nearest three samples. If the resolution matrix equals the identity then we have maximum resolution, as we can perfectly reconstruct the unknown $\mathbf x_k$.} \label{img_identity_vs_reso} \end{figure} We can see that the resolution matrix provides a minimal choice of blurring region by solving for the best approximation to an inverse, using the following optimization program \begin{align} \begin{array}{c} \mathbf y_i^* = \arg\underset{\mathbf y_i}{\min} \; \Vert \mathbf A^T \mathbf y_i - \mathbf e_i \Vert_2^2, \end{array} \label{eq_resoopt} \end{align} where $\mathbf e_i$ is a vector of zeros with a value of one in the $i$th element. The solution to this problem is \begin{align} \mathbf y_i^* = (\mathbf A^T)^\dagger \mathbf e_i, \label{eq_y_pinv} \end{align} which is a row of the pseudoinverse. Further, note that $\mathbf A^T \mathbf y_i^* = \mathbf A^T (\mathbf A^T)^\dagger \mathbf e_i = \mathbf r_i$, the $i$th row of $\mathbf R$. So $\mathbf r_i$ is an optimal approximation to $\mathbf e_i$. Since $\mathbf e_i$ is the $i$th row of the identity matrix, $\mathbf R$ is an optimal approximation to the identity. In Fig. \ref{pic_network} we give a simple simulation to demonstrate the resolution matrix for a network, where correlation is due to connectivity rather than simply missing information. Fig. \ref{pic_network} depicts the network, data matrix of time courses, and resolution matrix. \begin{figure}[h!]
\centering \scalebox{0.27}{\includegraphics[clip=true, trim=0in 0.0in 0in 0.0in]{network_pic_2.png}} \scalebox{0.35}{\includegraphics[clip=true, trim=0in 0in 0in 0.15in]{networkroi01_sim0_A.pdf}} \scalebox{0.35}{\includegraphics[clip=true, trim=0in 0in 0in 0.15in]{networkroi01_sim0_R.pdf}} \caption{Simple network (left) consisting of densely-interconnected nodes in three groups; corresponding data matrix (middle) and resolution matrix (right) which is able to resolve the subnetworks individually, but not the nodes within, as the groups differ in their time courses but the nodes within each group have identical time courses.} \label{pic_network} \end{figure} We see that the resolution matrix, formed by applying the pseudoinverse of the data matrix to the data matrix itself, is able to separate the three groups but not resolve nodes any further. \subsection{Regularization} Thus far we have neglected the role of noise and other sources of error. The typical inverse problems approach to noise is to treat it as unwanted variation in the measured output data, which in the model of Eq. (\ref{eq_inverse_nbhd}) would imply the following, \begin{align} \mathbf A \mathbf x_k + \mathbf n_k = \mathbf a_k, \label{eq_inverse_nbhd_noisy} \end{align} where $\mathbf n_k$ is a vector of random noise. In inverse problems, such noise effects are known to cause a loss of resolution \cite{dillon_computational_2016}. In our adaptation of this perspective to connectivity estimation, however, we must take additional care; any signal noise in our measurements $\mathbf a_k$ will also afflict our forward model $\mathbf A$, as $\mathbf a_k$ is itself a column from $\mathbf A$. For example, we have $\mathbf A_{\mathbf N} = \mathbf A + \mathbf N$, where $\mathbf N$ is a matrix of random noise signals. In other words, signal noise directly becomes model error as well. This is especially problematic because it has the opposite effect, making the resolution appear to be higher than it truly is.
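This effect is easy to reproduce numerically. In the following sketch (synthetic data, analogous to the three-group simulation of Fig. \ref{pic_network}), the clean low-rank data yield a block-structured resolution matrix, while a small amount of added noise makes the data matrix full rank and collapses the resolution matrix to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
m, groups, per = 40, 3, 5
base = rng.standard_normal((m, groups))
A = np.repeat(base, per, axis=1)       # nodes within a group share one time course

R = np.linalg.pinv(A) @ A              # rank-3 data: block-structured resolution matrix
# R has entries 1/5 within each group block and 0 across groups.

A_noisy = A + 0.05 * rng.standard_normal(A.shape)
R_noisy = np.linalg.pinv(A_noisy) @ A_noisy
# Full-column-rank noisy data: R_noisy is (numerically) the identity matrix,
# a false appearance of maximal resolution.
```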
Consider the case where the true $\mathbf A$ should have low rank and be singular. The addition of white noise would result in an invertible (though perhaps poorly-conditioned) $\mathbf A_{\mathbf N}$, which would appear to have maximum resolution, yielding the identity matrix from $\mathbf R = \mathbf A_{\mathbf N}^\dagger \mathbf A_{\mathbf N} = \mathbf A_{\mathbf N}^{-1} \mathbf A_{\mathbf N} = \mathbf I$. This is demonstrated in Fig. \ref{fig_noisynetwork}, where a noisy version of the $\mathbf A$ matrix from Fig. \ref{pic_network} results in a resolution matrix that is approximately the identity. \begin{figure}[h!] \centering \scalebox{0.35}{\includegraphics[clip=true, trim=0in 0in 0in 0.15in]{networkroi01_sim0a_A.pdf}} \scalebox{0.35}{\includegraphics[clip=true, trim=0in 0in 0in 0.15in]{networkroi01_sim0a_R0.pdf}} \scalebox{0.35}{\includegraphics[clip=true, trim=0in 0in 0in 0.15in]{networkroi01_sim0a_R014.pdf}} \caption{Noisy version of data matrix (left), resolution matrix estimate with no regularization (middle), and with regularization (right), demonstrating false gains in resolution due to noise and their correction by regularization.} \label{fig_noisynetwork} \end{figure} Noise is commonly addressed in inverse problems by regularization of the resolution matrix using Tikhonov regularization \cite{an_simple_2012}. This effectively replaces Eq. (\ref{eq_nbhd_opt}) with the following regularized regression problem, \begin{align} \begin{array}{c} \mathbf x_k^* = \arg\underset{\mathbf x_k}{\min} \; \Vert \mathbf A \mathbf x_k - \mathbf a_k \Vert_2^2 + \mu \Vert \mathbf x_k \Vert_2^2, \end{array} \label{eq_nbhd_opt_l2} \end{align} which requires choice of a regularization parameter $\mu$.
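As a numerical sketch of this correction (synthetic three-group data, analogous to Fig. \ref{fig_noisynetwork}), Tikhonov regularization suppresses the false resolution gained from noise; here we use the standard ridge closed form $\mathbf A^T (\mathbf A \mathbf A^T + \mu \mathbf I)^{-1}$ with an illustrative choice of $\mu$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, groups, per = 40, 3, 5
n = groups * per
base = rng.standard_normal((m, groups))
A = np.repeat(base, per, axis=1) + 0.05 * rng.standard_normal((m, n))  # noisy block data

R_unreg = np.linalg.pinv(A) @ A        # ~identity: false resolution from the noise

mu = 1.0                               # illustrative regularization strength
A_pinv_mu = A.T @ np.linalg.inv(A @ A.T + mu * np.eye(m))
R_mu = A_pinv_mu @ A                   # regularized resolution matrix
# R_mu recovers the block structure of the underlying three-group network.
```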
In the underdetermined case (i.e., more voxels than time samples) this has the analytical solution, \begin{align} \mathbf x_k^* = \mathbf A^T (\mathbf A \mathbf A^T + \mu \mathbf I)^{-1}\mathbf a_k, \label{eq_xk_mu} \end{align} defining the $\ell_2$-regularized pseudoinverse, \begin{align} \mathbf A^\dagger_\mu = \mathbf A^T (\mathbf A \mathbf A^T + \mu \mathbf I)^{-1}, \label{pinv_mu} \end{align} and corresponding $\ell_2$-regularized resolution matrix \begin{align} \mathbf R_\mu = \mathbf A^\dagger_\mu \mathbf A. \label{eq_Rmu} \end{align} This is the approach taken in \cite{dillon_regularized_2017}. Note that a regularized resolution matrix is the product of the noisy dataset with a regularized version of its pseudoinverse. A difficulty of regularization methods is of course the need to choose a regularization parameter. However, we note that such decisions are already routinely made in fMRI preprocessing. In particular, a common practice is to truncate the singular value decomposition (SVD) of the data, in order to remove weak components \cite{caballero-gaudes_methods_2017}. Further, a number of researchers have developed techniques and guidelines for selection of the proper cutoff for such preprocessing steps \cite{strother_evaluating_2006, churchill_optimizing_2012,yourganov_dimensionality_2011}. Next we will show that if a dataset has been preprocessed in this way, then the resulting resolution matrix can be viewed as having been regularized. Consider the $m \times n$ dataset $\mathbf A$ where $m<n$ and $\mathbf A$ is full row rank. The SVD is \begin{align} \mathbf A = \mathbf U \mathbf S \mathbf V^T = \sum_{i=1}^m \sigma_i \mathbf u_i \mathbf v_i^T \label{eq_A_SVD} \end{align} where $\mathbf U$ and $\mathbf V$ are left and right singular vectors with columns $\mathbf u_i$ and $\mathbf v_i$, respectively, and $\mathbf S$ is an $m \times n$ diagonal matrix of singular values $\sigma_i$.
The truncated SVD of $\mathbf A$ is \begin{align} \mathbf A_r = \mathbf U_r \mathbf S_r \mathbf V_r^T = \sum_{i=1}^r \sigma_i \mathbf u_i \mathbf v_i^T, \label{eq_Ar} \end{align} where $r$ is the rank to which the SVD is truncated, $\mathbf U_r$ and $\mathbf V_r$ are the first $r$ columns of $\mathbf U$ and $\mathbf V$, respectively, and $\mathbf S_r$ is the leading $r \times r$ block of $\mathbf S$. The truncated-SVD (TSVD) regularized solution to Eq. (\ref{eq_inverse_nbhd_noisy}) would then be \cite{hansen_truncatedsvd_1987}, \begin{align} \hat{\mathbf x}_k = \mathbf V_r \mathbf S_r^{-1} \mathbf U_r^T \mathbf a_k = \mathbf A_r^\dagger \mathbf a_k, \label{eq_tsvd_soln} \end{align} where $\mathbf S_r^{-1}$ is the $r \times r$ diagonal matrix of the inverses of the first $r$ singular values. This method has long been known to produce similar results to those of $\ell_2$ regularization \cite{hansen_truncatedsvd_1987}. In Eq. (\ref{eq_tsvd_soln}) we have also defined the TSVD-regularized pseudoinverse $\mathbf A_r^\dagger$ by analogy with Eq. (\ref{pinv_mu}), \begin{align} \mathbf A_r^\dagger = \mathbf V_r \mathbf S_r^{-1} \mathbf U_r^T = \sum_{i=1}^r \sigma_i^{-1} \mathbf v_i \mathbf u_i^T. \end{align} Note that we use Greek subscripts to denote the $\ell_2$-regularized pseudoinverse, and Latin subscripts to denote the SVD-regularized version. By computing the resolution matrix using this dimensionality-reduced data set, we get the TSVD-regularized resolution matrix, \begin{align} \mathbf R_r = \mathbf A^\dagger_r \mathbf A_r = \mathbf A^\dagger_r \mathbf A. \end{align} So by leveraging results for optimal choice of preprocessing, in this case truncating the SVD of the data to eliminate weak components, we get a regularized resolution matrix estimate. \subsection{Spatial smoothing} Spatial smoothing is another common preprocessing step used for fMRI data \cite{strother_evaluating_2006}, which can also be viewed as a regularization of the resolution matrix.
In this case, the effect is immediately visible since the resolution matrix itself takes on the smoothing operation, as a ``blurring'' of the identity. Earlier, with Fig. \ref{img_identity_vs_reso}, we noted the interpretation of $\mathbf R$ as a blurring operator \cite{backus_resolving_1968}. Consider that the least-squares solution to Eq. (\ref{eq_inverse_nbhd}) is \begin{align} \hat{\mathbf x}_k &= \mathbf A^\dagger \mathbf A \mathbf x_k \notag \\ &= \mathbf R \mathbf x_k. \label{eq_a_hat} \end{align} So the least-squares solution $\hat{\mathbf x}_k$ is a ``blurred'' version of the true solution, with blurring described by the resolution matrix. In particular, note that this blurring operator projects the true solution onto the rowspace of $\mathbf A$. Such projections are known as estimable functions \cite{rodgers_estimable_1982}, meaning features which may be estimated even when the forward model is not invertible \cite{dillon_robust_2017}. In computational imaging, resolution cells may be viewed as local averages which may be estimated from the data. Similarly in a connectivity estimation problem, while we may not be able to estimate the edge weight for a particular edge to a node, we may be able to estimate the average weight over multiple edges connecting to the node. The averaging operator is the corresponding row of the resolution matrix. The spatial smoothing kernel, therefore, directly relates to the resulting blurring kernel of the resolution matrix. Spatial smoothing creates local correlations between nearby time-series, which the resolution matrix subsequently describes. This is demonstrated in the simulation of Fig. \ref{fig_noise_filt_sim}. \begin{figure}[h!]
\centering \scalebox{0.6}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{spectral02_1D_filter_sim.pdf}} \caption{Simulation of spatial smoothing of noisy data: the resulting resolution matrix is a smoothed approximate identity, with a blurring kernel closely matching the spatial smoothing kernel applied to the data.} \label{fig_noise_filt_sim} \end{figure} In this case the resolution matrix is an approximate identity, while the spatial smoothing produces a similarly-smoothed version of the identity matrix. We also see that the blurring kernel described by the resolution matrix columns is very similar to the smoothing kernel used on the data. This produces an important regularization effect in clustering techniques. \subsection{Resolution Clustering} The example in Fig. \ref{pic_network} motivates the idea of separating the sub-networks by clustering the resolution matrix. And indeed this was demonstrated in \cite{dillon_regularized_2017} using the method presented in Algorithm 2. \begin{algorithm} \begin{algorithmic}[1] \caption{Resolution Clustering} \STATE Form standardized data matrix $\mathbf A$ containing time series as columns. Choose regularization parameter $\mu$ and number of clusters $K$. \STATE Compute $\mathbf A^{\dagger}_\mu$, the regularized pseudoinverse of $\mathbf A$. \STATE Apply $k$-means clustering to the columns of $\mathbf R = \mathbf A^{\dagger}_\mu \mathbf A$ with $K$ clusters. \label{pinv_algo} \end{algorithmic} \end{algorithm} In \cite{dillon_regularized_2017} a memory-efficient clustering algorithm was also provided to handle large datasets $\mathbf A$ for which $\mathbf R = \mathbf A^{\dagger}_\mu \mathbf A$ would be too large to store in memory. For example, the Philadelphia Neurodevelopmental Cohort fMRI data volumes are $79 \times 95 \times 79$ voxels, for 124 time samples, resulting in an $\mathbf A$ matrix of size $592895 \times 124$.
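The factored clustering scheme developed below can be sketched as a $k$-means routine that clusters the columns of $\mathbf M = \mathbf P \mathbf Q$ without ever forming $\mathbf M$, using the center and label computations of Eqs. (\ref{eq_ci}) and (\ref{eq_lk}) (function and variable names here are our own, illustrative choices):

```python
import numpy as np

def factored_kmeans(P, Q, K, iters=20, seed=0):
    """k-means on the columns of M = P @ Q without forming M (illustrative sketch)."""
    n = Q.shape[1]
    rng = np.random.default_rng(seed)
    labels = rng.integers(K, size=n)                 # random initial labeling
    for _ in range(iters):
        C = np.empty((P.shape[0], K))
        for i in range(K):
            members = labels == i
            if members.any():
                # Eq. (eq_ci): center = P @ (mean of member columns of Q)
                C[:, i] = P @ Q[:, members].mean(axis=1)
            else:
                C[:, i] = P @ Q[:, rng.integers(n)]  # reseed an empty cluster
        # Eq. (eq_lk): label = argmin_i  c_i^T c_i - 2 c_i^T (P q_k),
        # computed for all i, k at once as (C^T P) Q without forming M.
        scores = (C * C).sum(axis=0)[:, None] - 2.0 * (C.T @ P) @ Q
        labels = scores.argmin(axis=0)
    return labels
```

For the resolution matrix one would call this with $\mathbf P = \mathbf A^\dagger_\mu$ and $\mathbf Q = \mathbf A$.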
The resulting resolution matrix would be $592895 \times 592895$ (as is the covariance matrix for forming network affinity matrices). However, we never need to store it in its entirety. We will describe that method here, as the approach can be used for clustering regularized resolution matrices as well as for clustering the sample covariance matrix via $\mathbf A^T \mathbf A$. A basic $k$-means algorithm is presented in Algorithm 3. \begin{algorithm} \begin{algorithmic}[1] \caption{$k$-means applied to data columns of matrix $\mathbf M$} \STATE Choose number of clusters $K$ and initialize cluster centers $\mathbf c_k, \; k=1,...,K$ \WHILE {Convergence criterion not met} \STATE Label each column with that of nearest cluster center: $l_k = \arg \min_i D_{ik}$, where $D_{ik}$ is distance between column $\mathbf m_k$ and cluster center $\mathbf c_i$ \STATE Recalculate cluster centers as mean over data columns with same label: $\mathbf c_i = \frac{1}{\vert S_i \vert}\sum_{j \in S_i} \mathbf m_j$, where $S_i = \left\{k | l_k = i \right\} $. \ENDWHILE \label{kmeans_algo} \end{algorithmic} \end{algorithm} Here we consider the general case of clustering columns of $\mathbf M = \mathbf P \mathbf Q$, where $\mathbf P$ (of size $n \times m$) and $\mathbf Q$ (of size $m \times n$) are stored in memory but $\mathbf M$ is too large to store. So for the resolution matrix, $\mathbf M = \mathbf R$, with $\mathbf P$ and $\mathbf Q$ given by $\mathbf A^\dagger$ and $\mathbf A$, respectively. First note that the squared distances between a given center $\mathbf c_i$ and a column $\mathbf m_k$ of $\mathbf M$ can be calculated as \begin{align} D_{ik}^2 &= \Vert \mathbf c_i - \mathbf m_k \Vert_2^2 \notag \\ &= \mathbf c_i^T \mathbf c_i + \mathbf m_k^T \mathbf m_k - 2 \mathbf c_i^T \mathbf m_k \notag \\ &= \mathbf c_i^T \mathbf c_i + \mathbf m_k^T \mathbf m_k - 2 \mathbf c_i^T \mathbf P \mathbf q_k.
\label{eq_Dik2} \end{align} Since we are only concerned with the class index $i$ of the cluster with the minimum distance to each column, we do not need to compute the $\mathbf m_k^T \mathbf m_k$ term. So we can compute \begin{align} l_k &= \arg \min_i D_{ik}^2 \notag \\ &= \arg \min_i \left\{ \mathbf c_i^T\mathbf c_i - 2 \mathbf c_i^T \mathbf P \mathbf q_k \right\}. \label{eq_lk} \end{align} By forming a matrix $\mathbf C$ with cluster centers $\mathbf c_i$ as columns, we can efficiently compute the cross term in brackets for all $i$ and $k$ as $(\mathbf C^T \mathbf P) \mathbf Q$, a $K$ by $n$ matrix. Along similar lines, we can efficiently compute the mean over columns in each cluster by noting that the mean over a set $S$ of columns can be written as \begin{align} \mathbf c_i &= \frac{1}{\vert S \vert}\sum_{j \in S} \mathbf m_j = \frac{1}{\vert S \vert}\sum_{j \in S} \mathbf P \mathbf q_j = \frac{1}{\vert S \vert} \mathbf P \sum_{j \in S} \mathbf q_j. \label{eq_ci} \end{align} So clustering of $\mathbf R$ requires additional storage only for a matrix the same size as $\mathbf A$ (i.e., $\mathbf A^\dagger$), and roughly double the number of calculations of conventional clustering, with two matrix-vector multiplies replacing each single one in the conventional algorithm. \subsection{Spectral Resolution Clustering} For TSVD regularization, we can use Algorithm 2 and the memory-efficient technique of the previous section, replacing $\mathbf A_\mu^\dagger$ with $\mathbf A_r^\dagger$. However, we can find a potentially even more efficient algorithm, which can utilize preprocessing calculations that are already being performed. First note that using the SVD of $\mathbf A = \mathbf U \mathbf S \mathbf V^T$, we get \begin{align} \mathbf R = \mathbf A^\dagger \mathbf A = \mathbf V \mathbf V^T.
\end{align} Similarly for the truncated SVD, $\mathbf A_r = \mathbf U_r \mathbf S_r \mathbf V_r^T$, we get \begin{align} \mathbf R_r = \mathbf A_r^\dagger \mathbf A = \mathbf A_r^\dagger \mathbf A_r = \mathbf V_r \mathbf V_r^T. \label{eq_Rr_svd} \end{align} Using the result from the previous section, now with $\mathbf P = \mathbf V_r$ and $\mathbf Q = \mathbf V_r^T$, we have from Eq. (\ref{eq_ci}), \begin{align} \mathbf c_i &= \frac{1}{\vert S \vert} \mathbf P \sum_{j \in S} \mathbf q_j \notag \\ &= \frac{1}{\vert S \vert} \mathbf V_r \sum_{j \in S} \mathbf v_r^{(j)}, \label{eq_ci_VVT} \end{align} where $\mathbf v_r^{(j)}$ is the (transposed) $j$th row of $\mathbf V_r$. Then combining Eq. (\ref{eq_ci_VVT}) and Eq. (\ref{eq_Dik2}), we get \begin{align} D_{ik}^2 &= \Vert \mathbf c_i - \mathbf m_k \Vert_2^2 \notag \\ &= \left\Vert \frac{1}{\vert S \vert} \mathbf V_r \sum_{j \in S} \mathbf v_r^{(j)} - \mathbf V_r \mathbf v_r^{(k)} \right\Vert_2^2 \notag \\ &= \left\Vert \frac{1}{\vert S \vert} \sum_{j \in S} \mathbf v_r^{(j)} - \mathbf v_r^{(k)} \right\Vert_2^2, \label{eq_Dik2_svd} \end{align} where the last step follows from the orthonormality of the columns of $\mathbf V_r$. So the distances are equal to the distances between rows of $\mathbf V_r$ and centers resulting from the average over the rows of $\mathbf V_r$ corresponding to the cluster. The method is summarized in Algorithm 4. \begin{algorithm} \begin{algorithmic}[1] \caption{TSVD-regularized Resolution Clustering.} \STATE Form standardized data matrix $\mathbf A$ containing time series as columns. Choose rank truncation $r$ and number of clusters $K$. \STATE Compute the $r$ singular vectors $(\mathbf v_1, ..., \mathbf v_r) = \mathbf V_r$ corresponding to the largest $r$ singular values of $\mathbf A$. \STATE Apply $k$-means clustering to the rows of $\mathbf V_r$ with $K$ clusters.
\label{TSVD_spectral_reso_algo} \end{algorithmic} \end{algorithm} Next, consider that typical approaches to spectral clustering start by forming the graph from $\mathbf A$, for example by computing pairwise correlations between columns (representing time series) then zeroing values below a threshold or otherwise computing a distance metric that yields a non-negative adjacency matrix. If we stopped at the raw correlation estimate, and did not perform these heuristic adjustments, we would compute a scaled version as $\mathbf A^T \mathbf A$, which can be viewed as a dense signed adjacency matrix itself. The eigenvalue decomposition of this matrix gives \begin{align} \mathbf A^T \mathbf A = \mathbf V \boldsymbol{\Lambda} \mathbf V^T, \end{align} where $\boldsymbol{\Lambda}$ is a diagonal matrix of eigenvalues. So Algorithm 4 can be viewed as a close relative of Algorithm 1 applied to this version of an adjacency matrix (recall that \cite{meila_learning_2002} showed that one can equivalently use eigenvectors of a version of the adjacency matrix). Hence our principled goal of segmenting network resolution yields a variation on spectral clustering. We also have a direct interpretation of the choice of model order (i.e., the truncation rank $r$) based on regularization of the neighborhood estimation problem for predicting node activity. And further, this choice may be considered separately from the choice of number of clusters. Next we show how to relate the $\ell_2$-regularization method to this same framework. If we input the SVD of $\mathbf A$ from Eq. (\ref{eq_A_SVD}) into Eq. (\ref{pinv_mu}), we get \cite{hansen_truncatedsvd_1987} \begin{align} \mathbf A^\dagger_\mu &= \sum_{i=1}^m \frac{\sigma_i}{\sigma_i^2 + \mu} \mathbf v_i \mathbf u_i^T. \label{pinv_mu_svd} \end{align} Inputting this into the resolution matrix of Eq.
(\ref{eq_Rmu}) gives \begin{align} \mathbf R_\mu &= \sum_{i=1}^m \frac{\sigma_i^2}{\sigma_i^2 + \mu} \mathbf v_i \mathbf v_i^T = \mathbf V_\mu \mathbf V_\mu^T, \label{eq_Rmu_svd} \end{align} where we have defined $\mathbf V_\mu$ as a matrix of weighted singular vectors \begin{align} \mathbf V_\mu = \mathbf V \mathbf D_{\mathbf w}, \end{align} using $\mathbf D_{\mathbf w}$, a diagonal matrix with diagonal $\mathbf w$ where $w_i = \sqrt{\frac{\sigma_i^2}{\sigma_i^2 + \mu}}$. This suggests the general method of Algorithm 5, where we have the following options for $\mathbf w$, \begin{align} (\mathbf w_r)_i &= \begin{cases} 1, & i \le r \\ 0, & i > r \end{cases} \\ (\mathbf w_\mu)_i &= \sqrt{\frac{\sigma_i^2}{\sigma_i^2 + \mu}}. \end{align} \begin{algorithm} \begin{algorithmic}[1] \caption{Weighted Spectral Resolution Clustering.} \STATE Form standardized data matrix $\mathbf A$ containing time series as columns. Choose rank truncation $r$, number of clusters $K$, and weighting $\mathbf w$. \STATE Compute the $r$ singular vectors $(\mathbf v_1, ..., \mathbf v_r) = \mathbf V_r$ corresponding to the largest $r$ singular values of $\mathbf A$. \STATE Apply $k$-means clustering to the rows of $\mathbf V_r \mathbf D_{\mathbf w}$ with $K$ clusters, where $\mathbf D_{\mathbf w}$ is the diagonal matrix with the first $r$ elements of $\mathbf w$ on the diagonal. \label{weighted_spectral_reso_algo} \end{algorithmic} \end{algorithm} This further provides the ability to generalize to other weightings which may yield better estimators in the neighborhood estimation problem. \section{Real Data Results} We used data from the Philadelphia Neurodevelopmental Cohort \cite{satterthwaite_philadelphia_2016}, which contains three scans for each subject, a resting-state scan and two task scans. The data was preprocessed using SPM, which included spatial smoothing with a 5 mm kernel, and registration to normalized coordinates.
We then formed masks of the brain region by setting a threshold, and selected the 100 subjects that had the highest common overlap between masks. The data was downsampled by a factor of three in each dimension, making it small enough to form the adjacency matrix in memory for other spectral clustering methods (as efficient code for most methods was not available). Examples of resolution estimates for this dataset are provided in Fig. \ref{fig_example_pnc_reso_cells}. \begin{figure}[h!] \centering \scalebox{0.5}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi34_ex1_dir123.png}} \caption{Single resolution cell for voxel in right precentral gyrus for individuals from PNC dataset; visualized as top view of brain, with each voxel colored by magnitude of its contribution to the resolution cell; the stronger the signal, the more ``unresolvable'' the given voxel is from the chosen point in precentral gyrus; each column shows the resolution cell for a different scan, and each row represents a different subject. Note that this is a column of the resolution matrix, reformed into a 3D image, then viewed from above.} \label{fig_example_pnc_reso_cells} \end{figure} \subsection{Hyperparameter Estimation} We used a cross-validation method to determine the parameters for the TSVD and $\ell_2$-regularized methods, based on the prediction of activity. Each choice of parameter was evaluated by the following steps, for each subject: \begin{enumerate} \item Separate data into training and test sets, forming separate matrices $\mathbf A^{(train)}$ and $\mathbf A^{(test)}$; standardize and preprocess independently (to the degree possible). \item Using training set data, compute regularized predictor $\mathbf x_k^{(train)}$ for every voxel $k$, via Eq. (\ref{eq_xk_mu}) or (\ref{eq_tsvd_soln}), depending on method. \item Set predictor values in vicinity of voxel $k$ to zero in order to eliminate self-loops.
\item Rescale predictor to compensate for exclusion of vicinity of voxel $k$. \item Using test set data, compute residual $\Vert\mathbf A^{(test)} \mathbf x_k^{(train)} - \mathbf a_k^{(test)}\Vert$ for every voxel $k$. \item Average residual over all voxels and all cross-validation folds, and choose parameter which minimizes the residual. \end{enumerate} For the training and test sets, we tried splitting single scans into three parts, by separating the time series into thirds. We also tried using one of a given subject's scans as training set and testing against the other two. The latter method has the advantage that the training and test sets are preprocessed completely independently (i.e., normalization of coordinates and temporal filtering) so there is no opportunity for sharing information. We found the optimal values to be roughly the same for either approach to cross-validation, so we provide the results for the multiple-scan approach. We used a similar procedure to test different choices of temporal filtering, but applying filtering resulted in only a small variation in prediction accuracy, so we omitted this preprocessing step. Recall that $\mathbf x_k$ can be viewed as a column of a weighted adjacency matrix $\mathbf X$. So in effect we are using $\mathbf A^{(train)}$ to estimate connectivity, then using this network to predict the activity on $\mathbf A^{(test)}$. It is necessary to exclude the region around the $k$th voxel because this would include a self-loop, which makes predictions that are trivially accurate.
To correct for this exclusion of part of the predictor, we estimate an optimal scalar for the predictor (also using the training set) as \begin{align} \alpha^* = \arg \min_\alpha \sum_k\Vert\mathbf A^{(train)} \mathbf x_k^{(train)}\times \alpha - \mathbf a_k^{(train)}\Vert^2 \label{eq_optimal_alpha} \end{align} which has the analytical solution, \begin{align} \alpha^* = \frac {\sum_{i,j} A^{(train)}_{i,j}\left(\mathbf A^{(train)} \mathbf X^{(train)}\right)_{i,j}} {\sum_{i,j} \left(\mathbf A^{(train)} \mathbf X^{(train)}\right)_{i,j}^2}. \end{align} We computed prediction residual with and without this scalar adjustment. Figure \ref{fig_cv_svd_vs_rr} gives the results of the three-fold cross-validation test for ten different choices of regularization for both the $\ell_2$ and TSVD methods. For the TSVD, the ten steps correspond to tenths of the total number of nonzero singular values. So the minimum around 3 to 5 on the horizontal axis corresponds to 30 to 50 percent dimensionality reduction for the TSVD method. For the $\ell_2$ regularization we used the following multiples of the maximum singular value: 0.001, 0.01, 0.1, 0.2, 0.3, 0.5, 1.0, 5.0, 10.0. So the minimum around 3 to 5 corresponds to $0.1 \sigma_{max}$ to $0.3 \sigma_{max}$. This range of tests was repeated for multiple choices of spatial smoothing and multiple choices of the exclusion region. We see that if the exclusion region is small (5 mm) then a choice of no regularization was optimal, which suggests self-loops were still having an effect due to spatial smoothing. Otherwise we had relatively consistent results with the $\ell_2$-regularization outperforming the TSVD method, and minima in the 3 to 5 range. \begin{figure}[h!]
\centering \scalebox{0.6}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi38_cv_filt012_RR_vs_SVD_dist5.png}} \scalebox{0.6}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi38_cv_filt012_RR_vs_SVD_dist10.png}} \scalebox{0.6}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi38_cv_filt012_RR_vs_SVD_dist15.png}} \caption{Average 3-fold cross-validated fractional residual plotted versus regularization parameter (10 choices). Excluded region size = 5 mm (top), 10 mm (middle), 15 mm (bottom). Spatial smoothing kernel size = 5 mm (left column), 10 mm (middle column), 15 mm (right column). Plots are given both with and without the $\alpha^*$ scaling. Optimal of 3-5 for SVD corresponds to 30-50 percent singular vectors retained. Optimal of 3-5 for L2 corresponds to $0.1 \sigma_{max}$ to $0.3 \sigma_{max}$.} \label{fig_cv_svd_vs_rr} \end{figure} \subsection{Parcel Comparisons} Next we computed individual parcellations for each of the three scans for the 100 subjects. We set the number of clusters to $K=116$ (so we could compare to the predefined parcellation), and used random starting clusters. The methods used are listed below: \begin{description}[font=\sffamily\bfseries, leftmargin=1.5cm, style=nextline] \item[Rr] $k$-means clustering of columns of $\mathbf R_r$. I.e., truncated-SVD clustering method of Algorithm 4, with cutoff of 40 percent singular values. \item[Rl] $k$-means clustering of columns of $\mathbf R_\mu$. I.e., $\ell_2$-regularized clustering method of Algorithm 2, with $\mu = 0.3 \sigma_{max}$. \item[A] $k$-means clustering of the time-series corresponding to individual voxels. I.e. clustering of (standardized) columns of the matrix $\mathbf A$. \item[Ar] $k$-means clustering of columns of the dimensionality-reduced matrix $\mathbf A_r$ of Eq. (\ref{eq_Ar}), with a cutoff of 40 percent singular values. \item[AA] $k$-means clustering of columns of the scaled sample covariance matrix $\mathbf A^T\mathbf A$.
\item[WNN] Spatially-constrained spectral clustering; weighted adjacency matrix formed by pairwise voxel correlations greater than 0.4 for nearest neighbors (adjacent voxels) only. \item[NN] Spectral clustering of binary graph formed by connecting nearest neighbors (adjacent voxels) only; imaging data was not used. \item[XYZ] $k$-means clustering of $3 \times n$ position matrix; each column is the three-dimensional location of a voxel. \item[AAL] The 116 Automated Anatomical Labeling regions of interest \cite{tzourio-mazoyer_automated_2002}. \item[RNG] Parcels defined by randomly labeling each voxel. \end{description} We tried the standard approaches to spectral clustering but were not able to get reasonable-looking parcellations with them, even with high levels of spatial smoothing. Figure \ref{fig_example_parcels} shows an example of the parcellations for a single subject for three of the methods. \begin{figure}[h!] \centering \scalebox{0.75}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi41_noroimask_slice.pdf}} \caption{Horizontal slice of parcellation for single subject, for three different methods (rows) and three different degrees of spatial smoothing (columns). Top row is the ``A'' method; middle row shows the ``AA'' method; bottom row gives the ``Rr'' method; left column is with low filtering (5 mm kernel); middle column is medium filtering (10 mm kernel); right column is high filtering (15 mm kernel).} \label{fig_example_parcels} \end{figure} First we tested the consistency of results from using different scans for the same subject by computing the average over dice coefficients between a cluster in one scan and the most-similar cluster in the comparison scan. The results are given in Fig. \ref{fig_dice}, for both low (5 mm kernel) and high (15 mm kernel) spatial smoothing. \begin{figure}[h!] 
\centering \scalebox{0.5}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi41_run20_dice_hilo_bar.pdf}} \caption{Average dice coefficients comparing parcellations between different scans of same subject. Spatial smoothing improves the dice coefficients for the data-dependent methods due to effectively imposing distance regularization.} \label{fig_dice} \end{figure} The averages were rather low, which may result from several causes. Notably, the scans were of different types, resting-state versus different task scans, which can account for roughly 30 percent of the variance in the time-series \cite{wang_parcellating_2015}. Further, while we chose the subjects with the best-aligned scans, they were still not perfectly aligned. Consider that the XYZ and NN methods essentially produce tilings of the volume which should be approximately equal for the scans depending on how well-aligned they are. Hence these may serve as something of an upper limit on the data-dependent methods' reproducibility. We find that the WNN method produces the most similar clusters of the data-dependent methods, though at low smoothing its dice coefficients are only slightly higher than the resolution-based methods. At high smoothing the WNN method appears identical to the NN method. We also note that sizable improvements in dice coefficients can be achieved by spatial smoothing, likely due to its effective distance regularization, probably biasing results towards the XYZ and NN methods. Fig. \ref{fig_r2} gives a metric of the average parcel size, computed as the square root of the average squared distance between the parcel centroid and each voxel in the parcel. \begin{figure}[h!] \centering \scalebox{0.5}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi41_run20_r2_hilo_bar.pdf}} \caption{Root mean-squared cluster size.
Spatial smoothing reduces the parcel size due to effectively imposing distance regularization.} \label{fig_r2} \end{figure} Here spatial smoothing reduces the size of parcels for the data-dependent methods, except for WNN. As more compact parcels fit better with our prior belief about the modularity of the brain (see Fig. \ref{fig_example_parcels}), we presume smaller parcels here are generally better, at least to some degree. Under this assumption, the best results are achieved by the WNN method, though its immunity to the distance-regularizing effect of spatial smoothing, plus its similarity to the NN method, makes the validity of the parcel sizes questionable. The next-smallest clusters are achieved by the resolution-based methods. Lastly, we computed metrics to test the homogeneity and separation of the parcels. We tested homogeneity with two metrics: the first was the average unexplained variance within parcels, i.e., the fraction of signal energy remaining after removing the average signal in the parcel, averaged over parcels. Second, we computed the average absolute Pearson correlation between every pair of time series within each parcel, excluding self-correlations. We tested parcel separation by computing the average absolute correlation between the average signals of different parcels. Our results agreed with those reported elsewhere \cite{arslan_human_2018}, where it was noted that despite the superior repeatability (i.e., higher dice coefficients) of spectral clustering, its homogeneity was inferior to that of $k$-means clustering (which we are calling the ``A'' method). This makes sense, as $k$-means is explicitly a greedy algorithm for optimizing this metric. However, we further tested the validity of these results by calculating the metrics for parcels generated using one scan, when applied to another scan for the same subject. Fig. \ref{fig_fuev_x-y} gives the fractional unexplained variance for all possible combinations of scans.
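The two homogeneity metrics can be sketched in a few lines (a hypothetical helper written for illustration, not the code used for this analysis; \texttt{X} holds one voxel time series per row):

```python
import numpy as np

def parcel_homogeneity(X, labels):
    """Average fractional unexplained variance and average absolute
    pairwise Pearson correlation within parcels.
    X: (n_voxels, n_timepoints) voxel time series.
    labels: (n_voxels,) integer parcel assignment."""
    fuev, pcorr = [], []
    for k in np.unique(labels):
        P = X[labels == k]
        if len(P) < 2:
            continue
        resid = P - P.mean(axis=0)          # remove the parcel-average signal
        fuev.append((resid ** 2).sum() / (P ** 2).sum())
        C = np.corrcoef(P)
        iu = np.triu_indices(len(P), k=1)   # exclude self-correlations
        pcorr.append(np.abs(C[iu]).mean())
    return float(np.mean(fuev)), float(np.mean(pcorr))
```

The parcel-separation metric follows the same pattern, using the parcel-average signals in place of the individual voxel time series.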
\begin{figure}[h!] \centering \scalebox{0.5}{\includegraphics[clip=true, trim=0in 0in 0in 0in]{networkroi41_run20_fuevAAL_CV_barchart_allcombos.pdf}} \caption{Average fractional unexplained variance from applying the parcellation from one scan to another scan; figure title $x$-$y$ refers to parcellation scan $x$ and test scan $y$. Note the ``A'' method is best only for same-scan tests, while the resolution-based methods are consistently best for cross-scan tests.} \label{fig_fuev_x-y} \end{figure} In this case we found the resolution-based methods to be consistently superior; when the same scan was used both to produce the parcels and to test them (i.e., scans $1$-$1$, $2$-$2$ and $3$-$3$), the ``A'' method is best (lowest unexplained variance), but for the other charts, the Rl and Rr methods had the lowest unexplained variance. We further see that the $\ell_2$ method maintains a small advantage over TSVD, which agrees with the superior predictive performance of $\ell_2$-regularization from Fig. \ref{fig_cv_svd_vs_rr}. The averages for tests of parcels on the same scan of a subject versus their other scans are provided in Table \ref{table_mets_x-y}, where we see consistent performance across the different metrics; the resolution-based methods have the lowest unexplained variance, highest internal correlation, and lowest between-parcel correlations for cross-scan tests.
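The scan-to-scan consistency measure used earlier (for each parcel, the Dice coefficient with the most-similar parcel in the comparison scan, averaged over parcels) also admits a compact sketch; again, a hypothetical reimplementation for illustration only:

```python
import numpy as np

def mean_best_dice(labels_a, labels_b):
    """Average best-match Dice coefficient between two parcellations of
    the same voxels. For each parcel in labels_a, take the Dice score
    with the most-similar parcel in labels_b, then average."""
    scores = []
    for a in np.unique(labels_a):
        A = labels_a == a
        best = 0.0
        for b in np.unique(labels_b):
            B = labels_b == b
            d = 2.0 * np.logical_and(A, B).sum() / (A.sum() + B.sum())
            best = max(best, d)
        scores.append(best)
    return float(np.mean(scores))
```

Note the measure is not symmetric in its arguments; averaging both directions is one common symmetrization.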
\begin{table}[] \centering \caption{Metrics for the different clustering methods, measured on the same scan on which clustering was performed versus on a different (cross) scan to test generalizability; the resolution-based methods (Rr and Rl) perform consistently better on cross-scan tests, with the lowest unexplained variance, highest internal correlation, and lowest correlations between parcels.} \begin{tabular}{l|cc|cc|cc} & \multicolumn{2}{c}{Unexplained Variance} & \multicolumn{2}{c}{Internal Correlation} & \multicolumn{2}{c}{Parcel Correlation} \\ Method & Same & Cross & Same & Cross & Same & Cross \\ \hline Rr & 0.340 & 0.358 & 0.653 & 0.635 & 0.468 & 0.493 \\ Rl & 0.314 & 0.352 & 0.680 & 0.638 & 0.454 & 0.492 \\ A & 0.291 & 0.370 & 0.704 & 0.620 & 0.447 & 0.504 \\ Ar & 0.292 & 0.374 & 0.702 & 0.618 & 0.446 & 0.506 \\ AA & 0.347 & 0.464 & 0.644 & 0.525 & 0.486 & 0.582 \\ WNN & 0.359 & 0.378 & 0.635 & 0.606 & 0.517 & 0.521 \\ NN & 0.373 & 0.380 & 0.618 & 0.611 & 0.539 & 0.537 \\ XYZ & 0.392 & 0.391 & 0.600 & 0.601 & 0.538 & 0.538 \\ AAL & 0.397 & 0.396 & 0.591 & 0.594 & 0.536 & 0.536 \\ RNG & 0.654 & 0.653 & 0.333 & 0.334 & 0.961 & 0.961 \end{tabular} \label{table_mets_x-y} \end{table} \section{Discussion} In summary, we demonstrated how the concept of resolution could be adapted to the neighborhood regression problem for estimating network connectivity. We showed that the intuitive idea of clustering this resolution matrix led to a new kind of spectral clustering. Further, we found different algorithm variants depending on the form of regularization used for the neighborhood prediction; an SVD truncation-based regularization led to a more traditional algorithm based on clustering of singular vectors, while an $\ell_2$-penalized regularization led to an algorithm based on clustering weighted singular vectors.
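The distinction between the two variants can be sketched as follows (a minimal illustration of the weighting idea, in our own notation; the full method additionally involves the neighborhood exclusion and preprocessing steps discussed below, which are omitted here):

```python
import numpy as np

def resolution_embedding(A, lam=None, rank=None):
    """Voxel embedding whose rows would then be clustered (e.g. k-means).
    A: (n_timepoints, n_voxels) data matrix.
    rank: TSVD variant -- keep the leading right singular vectors (0/1 weights).
    lam:  l2 variant   -- weight vector i by d_i^2 / (d_i^2 + lam)."""
    U, d, Vt = np.linalg.svd(A, full_matrices=False)
    if rank is not None:
        w = np.zeros_like(d)
        w[:rank] = 1.0                 # hard truncation
    else:
        w = d ** 2 / (d ** 2 + lam)    # smooth ridge shrinkage
    return Vt.T * w                    # columns scaled by the weights
```

Hard truncation recovers the traditional spectral-clustering embedding; the ridge weights shade smoothly between it and the raw data.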
This provides a new perspective on spectral clustering which allows more principled decisions on open questions such as the choice of model order, as well as on data preprocessing decisions, which are typically made independently and heuristically. We tested the approach for parcellation of fMRI data, and found that the proposed methods yielded parcels which were more consistent across scans as well as more spatially compact than conventional approaches based on clustering of voxel time series or their correlations. Further, while the spatially-constrained spectral clustering method produced parcels with higher dice coefficients and smaller average size, the unexplained variance is higher for this method, meaning the parcels are a worse approximation to the measured data overall. It is particularly interesting that the resolution-based methods outperform the basic time-series clustering (the ``A'' method) for the cross-scan tests of unexplained variance, suggesting that their basis for clustering is more robust. The fact that the ``A'' method is superior when tested on the same scans provides validation that the method was performed properly. Also, the fact that the methods WNN, NN, and XYZ, which produce more compact clusters (and indeed slightly more similar clusters), performed worse on the unexplained variance shows that the success of the resolution-based methods here is not simply due to increased consistency or compactness of clusters. Further, the lack of improvement of the ``Ar'' method over the ``A'' method suggests it is not simply a benefit of the regularization. As noted above, the homogeneity metrics are biased towards the basic ``A'' method; it is a method which groups the most similar time series, and the homogeneity metrics test which method successfully grouped the most similar time series.
A metric which might be more interesting from a hierarchical network perspective is something similar to our cross-validated preprocessing, where we test which parcellation can produce the best network. We might measure this by testing how well we can use the signals from other parcels to predict a given parcel's signal. The difficulty with this metric is that it rewards bad parcellations. If a certain module with high internal connectivity is broken into multiple parcels, those parcels can be used to more reliably predict their neighbors (which should be in the same parcel). This is of course compounded by spatial smoothing, which creates correlations between nearby voxels. A more efficient method for determining the regularization in the preprocessing stage would also be valuable. The exclusion of the local neighborhood (to prevent self-loops from biasing every result towards the unregularized extreme) prevents the exploitation of the low-rank structure of the covariance matrix, as each neighborhood has a different exclusion region. This renders the approach difficult for very large datasets, and requires subsequent parcellation metrics or later analysis stages to determine the optimal preprocessing parameters. \section{Acknowledgments} The authors wish to thank the NIH (R01 GM109068, R01 MH104680, R01 MH107354) and NSF (1539067) for their partial support.
\section{Introduction}\label{sec:introduction} \setcounter{footnote}{0} It is widely believed that supernova remnants (SNRs) are the primary sources of Galactic cosmic rays (CRs) observed on Earth, up to the knee energy at $\sim10^{15}$~eV, as first proposed by \cite{Ginzburg1961}. CRs can be accelerated to very high energies at collisionless shocks driven by supernova (SN) explosions through diffusive shock acceleration \citep[DSA;][]{Axford1977, Krymskii1977, Bell1978a,Bell1978b, Blandford1978}. During this process the kinetic energy released in SN explosions has to be transferred to CRs with an efficiency of $\sim$ 10\% \citep{Ginzburg1964}. A description of the acceleration process can be achieved within a non-linear DSA theory \citep{Malkov2001} with magnetic field amplification, probably by the accelerated particles themselves, due to the streaming instability \citep{Bell2004, Amato2006, Caprioli2008}. Strong evidence for large magnetic fields in the shock region is given by the observation of narrow filaments of non-thermal X-ray radiation in young SNRs \citep{Ballet2006, Vink2012}. The non-thermal emission produced through the interaction of accelerated particles with radiation and/or matter in the environment of the SNR, via synchrotron (SC), inverse Compton (IC), non-thermal bremsstrahlung, and hadronic interactions with subsequent $\pi^0$ decay, gives information about the particle acceleration mechanisms at work in these sources. RCW 86 \citep{Rodgers1960}, also known as MSH 14$-$6{\em 3} \citep{Mills1961} or G315.4$-$2.3, is a SNR located in the southern sky. The origin of this SNR is still debated, but recent studies \citep{Williams2011, Broersen2014} suggest that RCW 86 is associated with the historical SN 185 \citep{Stephenson2002} and is the result of a Type Ia explosion, also supported by the large amount of Fe \cite[$\sim$1 M$_{\odot}$;][]{Yamaguchi2011}.
A large shell ($\sim 40'$ in diameter) is clearly detected in radio \citep{Kesteven1987}, optical \citep{Smith1997}, infrared \citep{Williams2011}, X-rays \citep{Pisarski1984} and very-high-energy (VHE; E $>$ 0.1 TeV) $\gamma$-rays \citep{Aharonian2009, Abramowski2015}. At high-energy (HE, 0.1 $<$ E $<$ 100 GeV) $\gamma$-rays, \cite{Lemoine2012} derived upper limits on the flux and \cite{Yuan2014} reported the detection of a pointlike $\gamma$-ray source matching the position of RCW 86. In 2015, RCW 86 was reported for the first time as an extended source (with a radius of $0\fdg27$ above 50 GeV) in the Second Catalog of Hard \textit{Fermi}-LAT Sources \citep{2FHL2015}. X-ray observations of RCW 86 reveal a non-spherically symmetric shell with both thermal ($0.5-2$ keV) and non-thermal ($2-5$ keV) emission, with different morphologies. The soft X-rays are related to optical emission from non-radiative shocks and IR emission from collisionally heated dust, whereas the hard X-ray continuum, located mostly in the southwestern part of the remnant, is due to SC radiation coming from electrons accelerated at the reverse shock of the remnant, as suggested by its spatial correlation with the strong Fe$-$K line emission \citep{Rho2002}. Using \emph{Suzaku} telescope data, \cite{Ueno2007} produced a map of the Fe$-$K line emission in the southwestern part of the remnant, showing that the Fe$-$K line emission correlates well with the radio SC emission. Furthermore, the higher temperature plasma, which mostly contains the strong Fe$-$K line emission, suggests that this line originates from Fe-rich ejecta heated by a reverse shock \citep{Ueno2007, Yamaguchi2008}. This X-ray SC radiation is produced by TeV electrons accelerated at the shock as confirmed by the VHE $\gamma$-ray emission detected with the H.E.S.S. experiment \citep{Aharonian2009, Abramowski2015}. 
The distance of RCW 86 is estimated to be 2.5 $\pm$ 0.5 kpc through the recent proper motion measurements by \cite{Helder2013}, combined with plasma temperature measurements based on the broad H$\alpha$ lines. The ambient density around RCW 86 is inhomogeneous, and the shock speed, as well as the magnetic field, changes along the shell-like structure. In particular, in the southwest and northwest regions shocks are slow, around $\sim$ 600--800 km s$^{-1}$ \citep{Long1990, Ghavamian2001}, and post-shock densities are relatively high \cite[$\sim$ 2 cm$^{-3}$;][]{Williams2011}. In contrast, faster shocks \cite[$\sim$ 2700 km s$^{-1}$ and 6000 $\pm$ 2800 km s$^{-1}$;][]{Vink2006,Helder2009} and lower densities \cite[$\sim$ 0.1 -- 0.3 cm$^{-3}$;][]{Yamaguchi2008} have been measured in the northeast (NE) region. The large size of this young remnant as well as the asymmetry in its morphology can be explained by an off-center explosion in a low-density cavity, as proposed by \cite{Williams2011}. Here we present the results of a deep morphological analysis with the new \textit{Fermi}-LAT event reconstruction set, Pass 8, as well as a study of the broadband emission using the available information in the radio, X-ray and VHE $\gamma$-ray domains. \section{\textit{Fermi}-LAT and Pass 8 description} The \textit{Fermi}-LAT is a $\gamma$-ray telescope which detects photons by conversion into electron--positron pairs in the energy range from 20 MeV to more than 500 GeV, as described in \cite{Atwood2009}. The LAT is made of a high-resolution converter/tracker (for direction measurement of the incident $\gamma$-rays), a CsI(Tl) crystal calorimeter (for energy measurement), and an anti-coincidence detector to reject the charged-particle background. The LAT has a large effective area ($\sim$ 8200 cm$^{2}$ on-axis above 1~GeV), a wide field of view ($\sim$ 2.4 sr) and good angular resolution (with a 68$\%$ containment radius of $\sim 0\fdg8$ at 1 GeV).
Since the launch of the spacecraft in June 2008, the LAT event-level analysis has been periodically upgraded to take advantage of the increasing knowledge of the \textit{Fermi}-LAT performance as well as the environment in which it operates. Following Pass 7, released in August 2011, Pass 8 is the latest version of the \textit{Fermi}-LAT data\footnote{\textit{Passes} correspond to the release of upgraded versions of the LAT event-level analysis framework.}. The development of Pass 8 was the result of a long-term effort aimed at a radical revision of the entire event-level analysis, and it aims to realize the full scientific potential of the LAT \citep{Atwood2013}. Combining the improvements in effective area, point-spread function and energy resolution with the large amount of data collected by the LAT since its launch, Pass 8 is a powerful tool to identify and study extended $\gamma$-ray sources. \section{\textit{Fermi}-LAT observations and data analysis} We analysed 6.5 years of data collected between August 4$^{th}$, 2008 and January 31$^{st}$, 2015 within a $15^\circ \times 15^\circ$ region centered on the position of RCW 86. We used events with energies between 100 MeV and 500 GeV with a maximum zenith angle of $100^\circ$ to limit the contamination due to the Earth limb. To ensure good-quality data, we excluded the time intervals during which the \textit{Fermi} spacecraft crossed the South Atlantic Anomaly. We used version \verb+10-00-03+ of the ScienceTools and the \verb+P8R2_V6+ Instrument Response Functions (IRFs) with the event class \verb+SOURCE+, which corresponds to the best compromise between the number of selected photons and the charged-particle residual background for the study of point-like or slightly extended sources. Two tools were used for the analysis: \verb+gtlike+ for the spectral analysis and \verb+pointlike+ for the spatial analysis.
\verb+gtlike+ is a binned maximum likelihood method \citep{Mattox1996} implemented in the \textit{Fermi} Science Tools. \verb+pointlike+ is an alternative code used for fast analysis of \textit{Fermi}-LAT data and able to characterize the extension of a source \citep{Kerr2011}. These tools fit a source model to the data along with models for the residual charged particles and diffuse $\gamma$-ray emission. The Galactic diffuse emission was modeled by the standard LAT diffuse emission ring-hybrid model \verb+gll_iem_v06.fits+, and the residual background and extragalactic radiation were described by a single isotropic component with the spectral shape given in the tabulated model \verb+iso_P8R2_SOURCE_V6_v06.txt+. The models are available from the \textit{Fermi} Science Support Center. Sources located in the $15^\circ \times 15^\circ$ region centered on RCW 86 and included in the \textit{Fermi}-LAT Third Source Catalogue \cite[][hereafter 3FGL]{Acero2015}, based on the first four years of Pass 7 data, were added to our spectral-spatial model of the region. Only sources within 5$^\circ$ of the position of RCW 86 were refitted, in addition to the Galactic diffuse and isotropic emission. The energy dispersion, defined in terms of the fractional difference between the reconstructed and true energy of the events, was taken into account in both the spatial and spectral analyses to account for the imperfection of the energy reconstruction. \subsection{Morphological analysis} As it is critical to have good angular resolution for the morphological analysis, we performed the spatial analysis above 1 GeV. The resulting PSF is $\sim 0\fdg27$ for a photon index of 1.5 (see Section 3.2).
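Schematically, the quantity that a binned maximum likelihood fit maximizes is the Poisson likelihood of the predicted counts in each spatial-spectral bin; the following is a generic sketch of the statistic, not the Science Tools implementation:

```python
import numpy as np

def binned_loglike(counts, model):
    """Poisson log-likelihood of a predicted counts map, dropping the
    model-independent log(n!) term. Both inputs are per-bin arrays of
    observed and predicted counts."""
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    return float(np.sum(counts * np.log(model) - model))
```

Per bin, this is maximized when the predicted counts equal the observed counts, which is why adding freedom to the source model can only increase the likelihood.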
We first fitted the $15^\circ \times 15^\circ$ region centered on RCW 86 with the 3FGL sources and the Galactic diffuse and isotropic emission templates, and computed the \textit{Fermi}-LAT Test Statistic ($TS$) map ($4^\circ \times 4^\circ$ with 0.02 degrees per pixel, see Figure~\ref{fig:tsmap}). The $TS$ is defined as twice the difference between the log-likelihood $L_1$ obtained by fitting a source model plus the background model to the data, and the log-likelihood $L_0$ obtained by fitting the background model only, i.e., $TS = 2(L_1-L_0)$. Figure~\ref{fig:tsmap} contains the $TS$ value for a point source of fixed photon index $\Gamma = 2$ in each pixel of the map, thus giving a measure of the statistical significance for the detection of a $\gamma$-ray source with that spectrum in excess of the background. The $TS$ map revealed significant emission coincident with the position of RCW 86, coming essentially from the NE region of the remnant (there is no significant variation of the background emission within $0\fdg5$). Additional excess $\gamma$-ray emission was also detected in the region of interest. Therefore, we added four new sources, denoted with the identifiers Src1, Src2, Src3, and Src4 (only Src1 is visible in the field of view of Figure~\ref{fig:tsmap}), to our model to take these signals into account. These regions of $\gamma$-ray emission are considered to be background sources that were not detected in 3FGL but are revealed here by the larger dataset and the increased effective area of Pass 8. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure1.eps} \caption{Test Statistic ($TS$) map above 1 GeV centered on RCW 86. The green cross indicates a point-like source that has been added to the source model to fit background emission in the vicinity of RCW 86.
\label{fig:tsmap}} \end{figure} To determine the best morphology of RCW 86, the data were fitted with different spatial models (point-source, disk, ring) while fitting the spectrum of the source (normalization and spectral index) simultaneously. The results of this first analysis are reported in Table~\ref{tab:results_ptlike}. The significance of the extension of RCW 86 is quantified with $TS_{\rm ext}$, which is defined as twice the difference between the log-likelihood of an extended source model and the log-likelihood of a point-like source model. For the uniform disk hypothesis, the source is significantly extended ($TS_{\rm ext}$ = 68, which corresponds to a significance of $\sim 8\sigma$ for the extension) with respect to the LAT point-spread function (PSF). The fitted radius, $0\fdg37 \pm 0\fdg02_{\rm stat}$, is in good agreement with the size of the SNR as seen in radio \citep{Kesteven1987}, infrared \citep{Williams2011}, X-rays \citep{Pisarski1984} and VHE $\gamma$-rays \citep{Abramowski2015}. Using a ring as a spatial model, the log-likelihood is improved in comparison to that obtained with a uniform disk by only 2.6$\sigma$, which is not enough to claim that the source has a shell-like morphology. The spatial analysis was confirmed by \verb+gtlike+, as shown in Table~\ref{tab:results_gtlike}. \begin{table*}[ht] \caption{Centroid and extension fits of the LAT data using pointlike above 1 GeV. Uncertainties are statistical errors (68\% containment). N$_{\rm dof}$ corresponds to the number of degrees of freedom for each model.\label{tab:results_ptlike}} \centering \begin{tabular}{lcccccc} \hline \hline Spatial Model & $TS_{\rm ext}$ & R.A. ($^\circ$) & Dec. 
($^\circ$) & Radius ($^\circ$) & Inner Radius ($^\circ$) & N$_{\rm\ dof}$\rule[-4pt]{0pt}{12pt}\\ \hline Point Source & - & 220.56 $\pm$ 0.07 & $-$62.25 $\pm$ 0.02 & - & - & 3\rule[-3pt]{0pt}{12pt}\\ Disk & 68 & 220.73 $\pm$ 0.04 & $-$62.47 $\pm$ 0.03 & $0.37 \pm 0.02$ & - & 4\rule[-3pt]{0pt}{12pt}\\ Ring & 75 & 220.74 $\pm$ 0.02 & $-$62.51 $\pm$ 0.02 & $0.31 \pm 0.02$ & $0.21 \pm 0.02$ & 5\rule[-3pt]{0pt}{12pt}\\ \hline \end{tabular} \end{table*} In addition to geometrical models, we fitted the LAT data with MWL morphological templates to evaluate the correspondence of the $\gamma$-ray emission above 1 GeV from RCW 86 with different source morphologies (see Figure~\ref{fig:tsmap_mwl}). For that purpose, we compared the $TS$ obtained with the best-fit uniform disk model (see Table \ref{tab:results_ptlike}) with those obtained with the MWL morphological templates. The radio data are from the Molonglo Observatory Synthesis Telescope (MOST) at 843 MHz \citep{Murphy2007} and the TeV data from H.E.S.S. \citep{Abramowski2015}. Concerning the X-ray data, we used observations from the \textit{XMM-Newton} space telescope, in the energy ranges $0.5-1$ keV and $2-5$ keV \citep{Broersen2014}, to estimate the correlation with the thermal and non-thermal X-ray emission separately. The analysis revealed a good match between the HE $\gamma$-ray emission and the VHE $\gamma$-ray and non-thermal X-ray emissions. However, the radio and thermal X-ray signals do not match the HE $\gamma$-ray emission well. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[height=1.25in]{figure2_most.eps} & \includegraphics[height=1.25in]{figure2_hess.eps}\\ \includegraphics[height=1.25in]{figure2_xmm_therm.eps} & \includegraphics[height=1.25in]{figure2_xmm_nontherm.eps}\\ \end{tabular} \caption{Test Statistic ($TS$) map above 1 GeV centered on RCW 86 with MWL contours.
All data sets have been smoothed such that their angular resolution is similar to the Fermi-LAT PSF (0$\fdg$27 at 68\% C.L.). The radio (top-left), TeV (top-right), thermal (bottom-left) and non-thermal (bottom-right) X-ray data are from \cite{Murphy2007}, \cite{Abramowski2015} and \cite{Broersen2014} respectively. \label{fig:tsmap_mwl}} \end{figure} \begin{table}[ht] \caption{$TS$ values obtained with gtlike above 1 GeV. Numbers in parentheses correspond to the values obtained after fitting separately the two hemispheres shown in Figure~\ref{fig:divided_templates}. \label{tab:results_gtlike}} \centering \begin{tabular}{lcc} \hline \hline Template & $TS$ & N$_{\rm dof}$\rule[-4pt]{0pt}{12pt}\\ \hline Point Source & 34 & 4\rule[-3pt]{0pt}{12pt}\\ Disk & 97 (99) & 5 (8)\rule[-3pt]{0pt}{12pt}\\ Ring & 104 (105) & 6 (9)\rule[-3pt]{0pt}{12pt}\\ MOST (843 MHz) & 77 (86) & 2 (5)\rule[-3pt]{0pt}{12pt}\\ \textit{XMM-Newton} ($0.5-1$ keV) & 62 (90) & 2 (5)\rule[-3pt]{0pt}{12pt}\\ \textit{XMM-Newton} ($2-5$ keV) & 91 (101) & 2 (5)\rule[-3pt]{0pt}{12pt}\\ H.E.S.S. & 98 (100) & 2 (5)\rule[-3pt]{0pt}{12pt}\\ \hline \end{tabular} \end{table} As RCW 86 is known to present an asymmetry in its morphology from radio to VHE $\gamma$-rays \citep{Broersen2014}, we divided the spatial models and fit separately the two half-templates in order to quantify the difference between the NE and the SW region of the SNR, using the improved PSF of the Pass 8 data. We determined the best (most significant) angle of division by computing the log-likelihood for 18 regularly spaced angles, from $0^\circ$ to $170^\circ$, $0^\circ$ corresponding to a division along the line north/south in equatorial coordinates (the white dashed line in Figure~\ref{fig:divided_templates}). This analysis was performed for all the templates and we obtained the same best angle (the green dashed line in Figure~\ref{fig:divided_templates}) for all of them. 
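For reference, the $TS$ values compared throughout this analysis translate into Gaussian-equivalent significances via Wilks' theorem; with one additional degree of freedom, the significance is simply $\sqrt{TS}$, which reproduces the $\sim 8\sigma$ extension significance quoted earlier for $TS_{\rm ext} = 68$ (a sketch of the usual rule of thumb, not the exact calculation performed for the paper):

```python
import math

def ts_to_sigma(ts):
    """Gaussian-equivalent significance of a likelihood-ratio Test
    Statistic with one additional free parameter (Wilks' theorem:
    TS follows a chi-squared distribution with 1 dof)."""
    return math.sqrt(ts)

sigma = ts_to_sigma(68)  # TS_ext for the uniform-disk extension
```

For more than one additional degree of freedom, the conversion instead uses the tail probability of a chi-squared distribution with the corresponding number of dof.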
By comparing the results for the non-divided and divided templates presented in Table~\ref{tab:results_gtlike}, we notice that the division significantly improves the likelihood for the MOST and the \textit{XMM-Newton} templates. Moreover, the non-thermal X-ray template is as good as the disk and the H.E.S.S. templates when it is divided. This indicates that the X-ray and radio morphologies do not reproduce the HE $\gamma$-ray signal well in the case of a single region, as will be confirmed in Section 7 with the broadband modeling of the spectrum. However, the likelihood is not much improved when dividing the disk and the H.E.S.S. template, showing that these templates describe well the whole SNR as seen with \textit{Fermi}-LAT. The non-divided H.E.S.S. template provides the highest $TS$ of all the non-divided (and divided, when taking into account the number of degrees of freedom) templates when fitting the HE $\gamma$-ray emission. As a consequence, this template was used to perform the spectral analysis. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure3_division.eps} \caption{The white dashed line corresponds to the north/south line in equatorial coordinates ($\theta = 0^\circ$) while the green one represents the best angle of division ($\theta = 110^\circ$). The green circle corresponds to the best-fit disk model provided by pointlike in Table \ref{tab:results_ptlike} and the cyan quadrants represent the two regions defined in \cite{Abramowski2015} that are studied in Section 3.2. \label{fig:divided_templates}} \end{figure} \subsection{Spectral analysis} To study the spectrum of RCW 86, we performed a maximum likelihood fit with \verb+gtlike+ in the energy range 100 MeV -- 500 GeV, using the non-divided H.E.S.S. template as a spatial model.
The \textit{Fermi}-LAT data are well described by a power law function ($TS$ = 99), with a photon index of $1.42\ \pm\ 0.1_{\rm stat}\ \pm\ 0.06_{\rm syst}$ and an energy flux above 100 MeV of ($2.91$ $\pm$ $0.8_{\rm stat}$ $\pm$ $0.12_{\rm syst}$) $\times$ $10^{-11}$ erg cm$^{-2}$ s$^{-1}$. One should note that the photon index $\Gamma$ is linked to the radio energy index $\alpha$ (defined as $S_{\nu} \propto \nu^{-\alpha}$) by the relation $\Gamma$ = 1 + $\alpha$. For RCW 86, $\alpha$ is found to be 0.6 and is therefore consistent with a hard index in GeV. We also performed several fits while fixing the index of the power law at different values: 1.5, 1.6, 1.7 and 1.8. In each case, we measured the deterioration of the log-likelihood by computing the difference between the $TS$ of the best-fit and the $TS$ obtained with the fixed index. As a result, we excluded $\Gamma > 1.7$ at more than 3$\sigma$. Although a broken power-law seems more likely when considering both \textit{Fermi}-LAT and H.E.S.S. data, the log-likelihood is improved by only 2$\sigma$ ($\Delta TS$ = 7 for 2 more degrees of freedom) when fitting the spectra with this function. The four additional background sources (Src1, Src2, Src3, and Src4) were taken into account in these fits and their best-fit positions and spectral parameters are given in Table~\ref{tab:specAna_bkgdsrc}. Despite having a $TS$ lower than 25, Src1 and Src2 were kept in our background model to avoid any contamination to the RCW 86 spectrum. \begin{table*}[ht] \caption{Best spectral fit parameters obtained for the nearby background sources Src1, Src2, Src3 and Src4 with gtlike using the H.E.S.S. template for RCW 86 above 100 MeV. The errors are statistical errors only. \label{tab:specAna_bkgdsrc}} \centering \begin{tabular}{lccccc} \hline \hline Source Name & R.A. ($^\circ$)& Dec. 
($^\circ$)& Spectral Index & Flux ($\times$10$^{-12}$ erg/cm$^{2}$/s) & $TS$\rule[-4pt]{0pt}{12pt}\\ \hline Src1 & 220.66 $\pm$ 0.05 & $-$63.21 $\pm$ 0.02 & $1.69 \pm 0.20$ & $5.11^{+3.41}_{-2.51}$ & 24\rule[-3pt]{0pt}{12pt}\\ Src2 & 223.15 $\pm$ 0.07 & $-$63.00 $\pm$ 0.04 & $1.82 \pm 0.24$ & $4.19^{+3.41}_{-2.25}$ & 18\rule[-3pt]{0pt}{12pt}\\ Src3 & 224.15 $\pm$ 0.14 & $-$63.22 $\pm$ 0.05 & $2.56 \pm 0.13$ & $7.11^{+1.53}_{-1.65}$ & 26\rule[-3pt]{0pt}{12pt}\\ Src4 & 222.50 $\pm$ 0.09 & $-$60.36 $\pm$ 0.05 & $2.54 \pm 0.12$ & $15.3^{+2.67}_{-2.61}$ & 52\rule[-3pt]{0pt}{12pt}\\ \hline \end{tabular} \end{table*} Systematic errors are defined as Err$_{\rm syst} = \sqrt{(Err_{\rm iem})^2 + (Err_{\rm irf})^2 + (Err_{\rm model})^2}$. This expression takes into account the imperfection of the Galactic diffuse emission model (Err$_{\rm iem}$), uncertainties in the effective area calibration (Err$_{\rm irf}$) and uncertainties on the source shape (Err$_{\rm model}$). The first one was estimated by using alternative Interstellar Emission Models (IEM), as described in \cite{SNRCAT2015}. For the second one, we applied scaling functions that change the effective area \citep[$\pm$ 3\% for 100 MeV -- 100 GeV and $\pm$ $3\% + 10\% \times (\log (E/{\rm MeV})-5)$ above 100 GeV][]{Fermi2012}. The third one was obtained by fitting the $\gamma$-ray emission with the best spatial models (the disk and the ring) provided by \verb+pointlike+. Figure~\ref{fig:sed} shows the Spectral Energy Distribution (SED) of RCW 86. The \textit{Fermi}-LAT spectral points were obtained by dividing the 100 MeV -- 500 GeV range into nine logarithmically spaced energy bins. We assumed a power law with a spectral index of 1.4 and used the H.E.S.S. spatial model. In addition to RCW 86, only very bright and close sources ($TS >$ 500 and within a radius of 5$^\circ$), as well as the Galactic diffuse and the isotropic emission were fitted. We derived the 95\% C.L. 
upper limit in the 0.1 -- 1.7 GeV band, combining the first three bins, in which no signal was detected by the \textit{Fermi}-LAT. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure4_sed.eps} \caption{Spectral Energy Distribution of RCW 86 from 100 MeV to 50 TeV with the Fermi-LAT and H.E.S.S. \citep{Abramowski2015} data points shown as black circles and triangles, respectively. The black solid line passing through the H.E.S.S. points corresponds to the best-fit spectrum from \cite{Abramowski2015}. The smaller and larger errors on the \textit{Fermi} fluxes are statistical and total errors (quadratic sum of statistical and systematic errors), respectively. The range of upper limit values (black and red lines) corresponds to the uncertainty in the diffuse modeling. The dark gray shaded area (delimited by black dashed lines) represents the 68\% confidence band of the fitted \textit{Fermi}-LAT spectrum.\label{fig:sed}} \end{figure} To further investigate the variation of the physical conditions in RCW 86, we performed a spectral analysis between 100 MeV and 500 GeV using the H.E.S.S. template divided in half along the green dashed line shown in Figure~\ref{fig:divided_templates}. We obtained an index of 1.36 $\pm$ 0.17$_{\rm stat}$ and an energy flux above 100 MeV of (1.69 $\pm$ 0.63) $\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ for the upper region, and an index of 1.62 $\pm$ 0.17$_{\rm stat}$ and an energy flux of (0.97 $\pm$ 0.31) $\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ for the lower region. The index seems to be harder in the upper region, but there is no significant difference when taking the errors into account. In addition, we also studied the spectrum of the two specific areas that were defined in \cite{Abramowski2015}, as shown in Figure~\ref{fig:divided_templates}. The NE and SW quadrants were fitted separately.
For each fit, the quadrant was separated from the rest of the disk, and the quadrant and its complementary region were fitted simultaneously. The analysis of the \textit{Fermi}-LAT data revealed significant $\gamma$-ray emission at $\sim 4.7 \sigma$ ($TS$ = 22) in the NE region but no signal was detected in the SW region ($TS <$ 3). The spectrum of the NE signal is well fitted by a power-law function with a hard index of $1.33 \pm 0.20$ and an energy flux of $(1.2 \pm 0.5_{\rm stat})$ $\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$, and we derived a 95\% C.L. upper limit for the SW region (1.09 $\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$). \section{Radio Continuum Data} RCW~86 is included in the second Molonglo Galactic Plane Survey (hereafter MGPS-2) performed by the Molonglo Observatory Synthesis Telescope (MOST), at 843~MHz with a bandwidth of 3 MHz and a resolution of $\approx 45 \times 51$~arcsec$^2$ \citep{Murphy2007}. However, the MOST data are missing structures on scales larger than $\sim 20{-}30$~arcmin, and this survey does not recover the total radio emission from RCW~86 \cite[its integrated flux density is $\approx 20$~Jy in MGPS-2, whereas $\approx 55$~Jy is expected from previous observations, e.g.][]{Caswell1975}. Instead, to obtain radio flux densities for regions of the SNR, we used Parkes survey observations at 2.4~GHz, with a resolution of $10.2 \times 10.6$~arcmin$^2$, from \cite{Duncan1995}. The integrated flux density of RCW~86 in this survey is $\approx 25$~Jy, in reasonable agreement with that expected, showing that this single-dish survey is not missing flux from this source. Given the low resolution of this survey, the flux densities in the NE and SW quadrants of \cite{Abramowski2015} were obtained by integrating out to somewhat larger radii of 40~arcmin, which gives 4.3~Jy in the NE quadrant and 10.4~Jy in the SW quadrant.
\section{X-ray observations} To estimate the non-thermal X-ray emission from the NE and SW regions of RCW 86, we analyzed the spectra of these two regions using data from the EPIC-MOS2 instrument of XMM-Newton. Since the SNR is larger than the XMM-Newton field of view, the NE and SW regions were split over several observations; we accounted for this by analyzing the spectra of mutually exclusive regions which together cover the regions indicated in Figure~\ref{fig:divided_templates}. For the NE we used observations 0208000101 (Jan. 26th 2004, 59.992 ks) and 0504810301 (Aug. 25th 2007, 72.762 ks). For the SW region we used observations 0110010701 (Aug. 16th 2000, 23.314 ks) and 0504810401 (Aug. 23rd 2007, 116.782 ks). The spectra presented in Figure~\ref{fig:plot_xrays} were extracted using the standard XMM-Newton analysis package XMM SAS, version 14.0 \footnote{See http://xmm.esac.esa.int/sas/ for more information about the software.}. The background spectra were obtained from empty regions in the field, taken from the same observations. The extracted spectra were then analyzed with the spectral analysis software xspec 12.8 \citep{Arnaud1996} using the ``vnei'' model plus a power-law component, both corrected for Galactic absorption. The absorption columns are (2.5 $\pm$ 0.1) $\times$ 10$^{21}$ cm$^{-2}$ for the southern part and (4.8 $\pm$ 0.1) $\times$ 10$^{21}$ cm$^{-2}$ for the northern part. From the best-fit models we obtained the fluxes in the 3--5 keV band, which for RCW 86 is totally dominated by synchrotron (SC) emission. The fluxes are estimated to be $10.0 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ and $4.3 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ for the SW and NE regions, respectively. The flux measurements have errors of the order of 5--10\%, mostly dominated by systematic errors, as absolute flux calibration of X-ray instruments is accurate at the 5\% level.
For both regions, the power law index is measured to be $3.0 \pm 0.2$, with the error mostly due to variations within each region. \begin{figure}[ht] \centering \includegraphics[height=2.4in]{figure5-1_xray_raw_spectra_north.eps} \includegraphics[height=2.4in]{figure5-2_xray_raw_spectra_south.eps} \caption{Observed spectra for the split northern regions (Top) and the southern regions (Bottom). Due to the limited field of view of \textit{XMM-Newton}, two pointings were needed to cover each quadrant shown in Figure~\ref{fig:divided_templates}, hence the black and red curves in both plots. Solid lines show the spectral fits while the dotted lines give the contributions of the thermal components, which are negligible above 2 keV.\label{fig:plot_xrays}} \end{figure} \section{The surrounding interstellar medium} We have analyzed the cold neutral gas in the environs of RCW 86 to investigate the characteristics of the surrounding gas. To carry out this search we used data at $\lambda$ = 21 cm acquired with the Australia Telescope Compact Array (ATCA) on March 24, 2002. To recover the missing short spatial frequencies, the ATCA data were combined in the u-v plane with single dish observations performed with the Parkes radio telescope. The final data are arranged in a cube with an angular resolution of $2^\prime.7 \times 2^\prime.5$ (R.A. $\times$ Dec.) and a 1$\sigma$ rms noise in line-free channels of about 1 K. The cube covers the velocity range -120.00 to +126.00 km s$^{-1}$ with a velocity resolution of 0.82 km s$^{-1}$. The whole H{\sc i} cube was inspected searching for imprints in the surrounding medium that might have been produced by the SN explosion and/or its precursor star. Different morphological signatures can be left in the interstellar gas by these expanding events, like cavities blown up by the stellar wind of the pre-supernova, bubbles surrounded by a higher density neutral shell, accelerated clouds seen in projection against the center of the SNR, etc. 
These kinds of features have been identified in association with several Galactic SNRs (see e.g. \cite{Park2013} and references therein). In the case of RCW 86 we detected the presence of an elongated cavity, about 1$\fdg$5 in size, that runs almost parallel to the Galactic plane, in the velocity interval between $\sim -38$ km~s$^{-1}$ and $\sim -32$ km~s$^{-1}$ (all velocities refer to the Local Standard of Rest, LSR). Within this velocity range, more precisely between $\sim -35$ km~s$^{-1}$ and $\sim -33$ km~s$^{-1}$, the SNR appears surrounded by a tenuous, approximately circular H{\sc i} shell with variable brightness distribution. These morphological findings are in very good agreement with the predictions made on the basis of radio continuum, X-rays, infrared observations and hydrodynamic simulations \citep{Vink1997, Dickel2001, Williams2011, Broersen2014}. After applying a circular rotation model for our Galaxy for $l = 315\fdg4$, $b = -2\fdg3$, the LSR radial velocity interval of the observed features translates into a distance of $\sim 2.5 \pm 0.3$ kpc. This distance is in very good agreement with that previously obtained for RCW 86 on the basis of optical measurements of proper motions of the filaments \citep{Rosado1996, Sollerman2003}, suggesting that this gas is located at the same distance as RCW 86. In addition to the morphological signatures, an independent test of the adopted central radial velocity can be done by comparing the absorbing column density N$_{\rm H}$ integrated between us and the SNR with the best fits of Yamaguchi et al. (2011) derived from X-ray observations. From our H{\sc i} data we obtain N$_{\rm H}$ = 2.6 $\times 10^{21}$ cm$^{-2}$ (for the whole annulus shown in Figure~\ref{fig:annulus}), in good concordance with the values N$_{\rm H}$ = 2.9 or 2.8 $\pm$ 0.3 $\times 10^{21}$ cm$^{-2}$, obtained from X-ray data (where the different values depend on the model) and the absorption column given in Section 5 for the southern region.
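The kinematic distance quoted above can be reproduced with a simple flat-rotation-curve sketch. The Galactocentric radius follows from the observed LSR velocity, and the near-side solution of the resulting quadratic gives the distance; $R_0 = 8.5$ kpc and $V_0 = 220$ km s$^{-1}$ are standard values assumed here, not taken from the text.

```python
import math

# Flat-rotation-curve kinematic distance for the HI features toward
# RCW 86 (l = 315.4 deg, b = -2.3 deg). R0 and V0 are assumed
# standard values, not quoted in the text.
R0, V0 = 8.5, 220.0          # kpc, km/s
l, b = math.radians(315.4), math.radians(-2.3)

def galactocentric_radius(v_lsr):
    # v_lsr = V0 * (R0/R - 1) * sin(l) * cos(b) for a flat curve
    return R0 / (1.0 + v_lsr / (V0 * math.sin(l) * math.cos(b)))

def near_distance(v_lsr):
    # Solve d'^2 - 2 R0 cos(l) d' + (R0^2 - R^2) = 0, with d' = d cos(b),
    # and keep the near-side root.
    R = galactocentric_radius(v_lsr)
    half = R0 * math.cos(l)
    d_proj = half - math.sqrt(half**2 - (R0**2 - R**2))
    return d_proj / math.cos(b)

d = near_distance(-34.0)     # central velocity of the cavity/shell
print(round(d, 1))           # ~2.5 kpc, matching the adopted distance
```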
The apparent discrepancy with the absorption columns mentioned in Section 5 for the northern region can be due to the presence of H$_2$, a contribution to which $\lambda$ 21 cm observations are not sensitive. As the distribution of the molecular gas (CO in particular) perpendicular to the plane has a half-thickness of $\sim$ 55 pc in the inner Galaxy, the presence of some molecular gas fragmented in small isolated clouds is natural at the height of RCW 86 ($\sim$ 100 pc). The line of sight where the X-ray absorption was calculated might have crossed one of these cloudlets. Overall, the fact that these H{\sc i} features simultaneously fulfill morphological and kinematical criteria strongly suggests that the neutral gas observed in this velocity interval is physically associated with the SNR. Figure~\ref{fig:nhd} (Top) shows the local H{\sc i} distribution in a large field (over 5 square degrees) in the direction of RCW 86, as observed around the radial velocity of $\sim - 34$ km s$^{-1}$. The white contours show the radio continuum emission at 843 MHz from the MGPS-2 data. Figure~\ref{fig:nhd} (Bottom) shows the same as in Fig.~\ref{fig:nhd} (Top) but in a smaller region around RCW 86 and using a different scale so as to emphasize the fainter inner shell. Figure~\ref{fig:radial_profile} displays a radial profile traced across the line shown in Fig.~\ref{fig:nhd} (Bottom). The arrows indicate the approximate locations of the walls of the outer cavity and the inner shell. The H{\sc i} observations can be used to carry out independent estimates of the volume density of the SNR environs. We considered four regions corresponding to the four quadrants of two concentric circles traced with inner and outer radii coincident with the radio shell (as shown in Figure~\ref{fig:annulus}).
In this estimate, two aspects have to be considered: the background contribution, which accounts for emission from distant gas detected at the same radial velocities in this direction of the Galaxy, and the geometry of the associated gas along the line of sight. For the first issue we subtracted a uniform background of T=25 K, a value estimated by inspecting regions of the observed field that are free of structures down to the angular resolution of the data. This assumption is reasonable since the neutral gas located at the far distance is at a height well below the Galactic plane in the direction of $b=-2\fdg3$. Concerning the three-dimensional distribution of the adjacent gas, we tested the two usually adopted geometries: a cylindrical ring with a depth similar to the SNR diameter (the case where the gas accompanies a barrel-shaped SNR), and a spherical shell surrounding the SNR. The latter case is consistent with the geometry suggested by the Balmer-dominated filaments that encircle almost the complete periphery of RCW~86 \citep{Smith1997}. It is known that the Balmer-dominated filaments arise from relatively high-velocity shocks passing through partially neutral gas. It is natural then to assume that the cold H{\sc i} mimics the optically depicted SNR. In both geometries the results were very similar, and in what follows we quote the average of the two. For region 1 (NW quadrant) n$_{\rm H} \sim 1.5$ cm$^{-3}$; for region 2 (NE quadrant) n$_{\rm H} \sim 1$ cm$^{-3}$; for region 3 (SE quadrant) n$_{\rm H} \sim 1$ cm$^{-3}$ and for region 4 (SW quadrant) n$_{\rm H} \sim 1.2$ cm$^{-3}$. In all cases the intrinsic error of the quoted numbers is about 30\% taking into account the uncertainty in the distance (of 25\%) and the approximate background subtraction. For the interior of the SNR we estimated n$_{\rm H} \sim 0.5$ cm$^{-3}$.
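As a rough illustration of how such volume densities follow from 21 cm data, the sketch below converts an integrated brightness-temperature excess into a column density (optically thin assumption) and divides by an assumed line-of-sight depth. The 8 K excess, 6 km s$^{-1}$ width and 29 pc depth are illustrative placeholders chosen for the sketch, not the measured values.

```python
# Order-of-magnitude HI volume density estimate of the kind used in
# the text: column density from the brightness temperature integrated
# over the velocity range of the shell (optically thin 21 cm line),
# divided by an assumed line-of-sight depth.
PC_CM = 3.086e18                      # cm per parsec

def column_density(t_excess_k, delta_v_kms):
    # N_H [cm^-2] = 1.823e18 * integral of T_B dv [K km/s]
    return 1.823e18 * t_excess_k * delta_v_kms

def volume_density(n_col, depth_pc):
    return n_col / (depth_pc * PC_CM)

# e.g. an 8 K excess (after subtracting the 25 K background) over a
# 6 km/s interval, with a depth comparable to the SNR diameter at
# 2.5 kpc (~29 pc for a ~40 arcmin angular size):
n_h = volume_density(column_density(8.0, 6.0), 29.0)
print(round(n_h, 2))                  # ~1 cm^-3, the order found in the text
```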
The complete analysis of the H{\sc i} in the direction of RCW 86 will be published elsewhere. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure6-1_nhd.eps} \includegraphics[width=\columnwidth]{figure6-2_nhd.eps} \caption{Neutral hydrogen distribution around RCW 86. Top: the H{\sc i} distribution is displayed with a linear scale between 32.8 and 89.8 K in a large field around RCW 86 at the LSR radial velocity of -34 km s$^{-1}$. Bottom: the same, but in a smaller field in the vicinity of RCW 86, displayed between 32.8 and 73.3 K to emphasize the presence of the internal shell. The white contours show the radio continuum emission at 843 MHz from the MGPS-2 data. The line included in the bottom panel indicates the direction where the distribution profile shown in Figure~\ref{fig:radial_profile} was extracted. \label{fig:nhd}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{figure7_radial_profile.eps} \caption{Radial profile of the H{\sc i} distribution measured along the green line shown in Figure~\ref{fig:nhd} (Bottom). The arrows indicate the presence of an inner shell within a more extended cavity. The x-axis corresponds to the equatorial coordinates (in degrees) while the y-axis shows the temperature (in kelvin). \label{fig:radial_profile}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{figure8_annulus.eps} \caption{Neutral hydrogen distribution around RCW 86 at the LSR radial velocity of -34 km s$^{-1}$.
The green annulus delimits the regions where the atomic density was estimated.\label{fig:annulus}} \end{figure} \section{Discussion} \subsection{Broadband modeling} The main difficulty in determining the origin of the $\gamma$-ray emission of RCW 86 lies in the competition between two major channels of $\gamma$-ray production: the IC scattering of high-energy leptons on local photon fields (leptonic scenario) or the decay of neutral pions produced by the interaction between accelerated protons and interstellar clouds located near the remnant (hadronic scenario). We performed a broadband modeling of the non-thermal emission of RCW 86 using radio, X-ray and VHE $\gamma$-ray data, in addition to the \textit{Fermi}-LAT observations. This modeling (shown in Figure~\ref{fig:sed_modeling}) aims to constrain key parameters such as the average magnetic field and the fraction of the total SN energy which is transferred into protons and electrons. The particle spectra are assumed to follow a power-law function with an exponential cutoff dN/dE $\propto$ E$^{-\Gamma} \times$ exp(-E/E$_{\rm max}$) with the same index for both distributions (electron and proton), starting at 511 keV for the electrons and 1 GeV for protons. The escape of accelerated particles confined in the magnetic field of the shock is taken into account, assuming a shell thickness of $0\fdg1$. This value was obtained by fitting the \textit{Fermi}-LAT data with a ring model and is in agreement with the estimates of \cite{Abramowski2015}. We define $\eta_{\rm e,p}$ as the ratio of the total energy injected into accelerated particles $W_{\rm e,p}$ to the standard energy of a Type Ia SN explosion $E_{\rm SN}$, assumed to be 10$^{51}$ erg. The so-called electron-to-proton ratio $K_{\rm ep}$ is also computed at a momentum of 1 GeV c$^{-1}$, and may be compared to the value measured in cosmic rays ($K_{\rm ep} \sim$ 10$^{-2}$). \subsection{Modeling of the whole SNR} Here we present the results of two leptonic scenario models.
Since a pure hadronic scenario requires unlikely parameter values such as a very hard spectral index for protons \citep[as already suggested in][]{Lemoine2012} and a high magnetic field ($B > 100$~$\mu$G), we did not consider this case. The presence of a high magnetic field is not excluded in very thin regions, near the shock, but it is very unlikely to have such high values for the whole remnant. Moreover, a spectral index softer than 1.7 is excluded at more than 3$\sigma$, as described in Section 3.2. The hadronic model relying on the interactions between escaped protons and a dense interstellar medium in the vicinity of the remnant, as proposed in \cite{Gabici2009}, also seems to be ruled out by the non-detection of molecular clouds in the NE part of the remnant, where the $\gamma$-ray signal detected by the \textit{Fermi}-LAT is strongest. Figure~\ref{fig:sed_modeling} shows the results for a one-zone model (top), in which we assumed that SC and IC photons are produced by electrons confined in the same emitting region with a constant magnetic field, and a two-zone model (bottom), in which we considered two different populations of radio, X-ray and $\gamma$-ray emitting particles. Parameters of the latter population were obtained with a $\chi^2$ fit without considering the radio-emitting population, whose parameters were determined afterwards with respect to the previous results. The two-zone model is motivated by the poor correlation between the radio and the $\gamma$-ray data, as shown in Section 3, and by several publications which reported large variations of the physical conditions in RCW 86 \citep{Vink2006, Broersen2014}. To be conservative regarding the fraction of the energy injected in protons, the only photon field that was taken into account for the IC scattering of electrons is the Cosmic Microwave Background (CMB). The best-fit parameters for these two models are given in Table~\ref{tab:models_parameters}.
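The normalization of the assumed particle spectrum to a fraction of $E_{\rm SN}$ can be illustrated numerically. The sketch below uses the two-zone spectral index quoted later in the text, together with illustrative values of $E_{\rm max}$ and $\eta_{\rm p}$, and evaluates the energy integral with a simple log-grid midpoint rule; it is not the $\chi^2$ machinery actually used for the fits.

```python
import math

# Sketch: dN/dE = A * E^-Gamma * exp(-E/Emax), with the prefactor A
# chosen so the total energy above 1 GeV equals eta_p * E_SN.
# Emax = 1 TeV and eta_p = 2% are illustrative inputs here.
ERG_PER_GEV = 1.602e-3
E_SN = 1e51                                   # erg

def total_energy_integral(gamma, e_max_gev, e_min_gev=1.0, n=20000):
    """Integral of E * E^-gamma * exp(-E/Emax) dE (midpoint rule on
    a logarithmic grid); result in GeV^2 per unit A."""
    total = 0.0
    log_min, log_max = math.log(e_min_gev), math.log(100 * e_max_gev)
    step = (log_max - log_min) / n
    for i in range(n):
        e = math.exp(log_min + (i + 0.5) * step)
        total += e * e**(-gamma) * math.exp(-e / e_max_gev) * e * step
    return total

def normalization(eta_p, gamma, e_max_gev):
    """Prefactor A [GeV^-1] such that int E dN/dE dE = eta_p * E_SN."""
    w_p = eta_p * E_SN / ERG_PER_GEV          # target energy in GeV
    return w_p / total_energy_integral(gamma, e_max_gev)

A = normalization(0.02, 2.2, 1000.0)          # 2% of E_SN into protons
```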
For the whole remnant, the radio points in the one-zone model imply a soft spectral index ($\sim$ 2.4) which would lead to a very strong bremsstrahlung component below 1 GeV. To reconcile this low-energy component with the new \textit{Fermi}-LAT upper limit at 1 GeV derived in Section 3.2, a maximum density of 0.1 cm$^{-3}$ needs to be assumed. In the case of a two-zone model, we obtained a more reasonable index ($\sim$2.2) with a density of 1.0 cm$^{-3}$ and with 2\% of $E_{\rm SN}$ going into protons. Considering a distance of 2.5 kpc and a shell thickness of $0\fdg1$, the energy density of CRs is estimated at $\sim$40 eV cm$^{-3}$. In both cases, the magnetic field is around 10 $\mu$G, in agreement with previous modeling by \cite{Lemoine2012}, \cite{Yuan2014} and \cite{Abramowski2015}. \subsection{Modeling of the NE and SW regions} In addition to the modeling of the whole SNR, we studied the broadband signal emitted by the NE and SW regions defined in \cite{Abramowski2015} and for which we performed a spectral analysis in Section 3.2 of this paper. In this work, we gathered radio (Section 4), X-ray (Section 5), GeV (Section 3.2) and TeV data \citep{Abramowski2015} for these two regions and performed a modeling assuming that each region sees a different population of emitting particles. The spectral points for the NE region were obtained by dividing the 100 MeV -- 500 GeV energy range into three logarithmically spaced bins only (instead of nine for the whole remnant) because of the reduced statistics. We derived a 95\% C.L. upper limit for the first bin and obtained significant fluxes for the two other bins. To limit the number of fitted parameters, the density was assumed to be 1.0 cm$^{-3}$ for both regions (which is in agreement with the values derived in Section 6).
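The quoted CR energy density can be checked with a back-of-the-envelope computation: the energy in protons (2\% of $E_{\rm SN}$) divided by the volume of a thin spherical shell of angular thickness $0\fdg1$ at 2.5 kpc. The 20 arcmin shell radius below is an assumption based on the $\sim$40 arcmin angular size of the remnant, not a value stated in the text.

```python
import math

PC_CM = 3.086e18                                   # cm per parsec
ERG_PER_EV = 1.602e-12

def angular_size_pc(arcmin, distance_kpc):
    return math.radians(arcmin / 60.0) * distance_kpc * 1000.0

def shell_volume_cm3(radius_pc, thickness_pc):
    # thin spherical shell: 4 pi R^2 dR
    return 4.0 * math.pi * (radius_pc * PC_CM)**2 * (thickness_pc * PC_CM)

w_p = 0.02 * 1e51                                  # erg injected in protons
r = angular_size_pc(20.0, 2.5)                     # ~14.5 pc shell radius
dr = angular_size_pc(6.0, 2.5)                     # 0.1 deg ~ 4.4 pc
u_cr = w_p / shell_volume_cm3(r, dr) / ERG_PER_EV  # eV cm^-3
print(round(u_cr))    # ~37 eV cm^-3, consistent with the quoted ~40
```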
We used the best-fit index previously obtained for the whole remnant in the two-zone model (2.21) and fixed it for both the electron and proton distributions in the two regions, since there is no evidence in favour of different injection slopes in the remnant. The energy injected in protons was fixed to 0.5\% of $E_{\rm SN}$ (which corresponds to a quarter of the value used for the whole SNR) and the density at 1.0 cm$^{-3}$. Results are shown in Figure~\ref{fig:sed_modeling_regions}, and Table~\ref{tab:models_parameters} summarizes the parameters for the two models, obtained with a $\chi^2$ fit. We note that the magnetic field is slightly higher in the SW than in the NE, implying a magnetic-field gradient in a direction away from the Galactic plane, possibly due to the shock interaction with a denser medium. Another interesting point is that the magnetic field of the NE region is very close to the one obtained for the whole SNR, which is consistent with the fact that most of the GeV emission is detected in the NE part of the remnant. Moreover, $E_{\rm max}$ is also higher in the SW than in the NE, which is in agreement with the values of the magnetic field: at early times, when the maximum energy is not limited by SC losses, a higher magnetic field implies a higher $E_{\rm max}$. Overall, the MWL data indicate variations of the magnetic field within the SNR. The radio emission corresponds to regions with high magnetic fields whereas the GeV emission detected by \textit{Fermi}-LAT corresponds to regions with mixed magnetic fields. Since the H.E.S.S. map shows brighter emission from the interior of the remnant than the radio and X-ray maps do, the reverse shock could also be responsible for the CR acceleration but with a lower magnetic field.
In the near future, a deep study of RCW 86 with the Cherenkov Telescope Array \citep{CTA2011} and ASTRO-H \citep{Takahashi2014} could constrain the magnetic field and provide a precise map of its fluctuations at smaller scales. \section{Conclusions} Analyzing more than 6 years of \textit{Fermi}-LAT Pass 8 data, we present the first deep study of the morphology and spectrum of the young SNR RCW 86. The spectrum is described by a pure power-law function with an index of $1.42 \pm 0.1_{\rm stat} \pm 0.06_{\rm syst}$ in the LAT energy range (0.1--500 GeV). The broadband emission from radio to TeV cannot be described by a pure hadronic scenario due to the very hard spectral index in the GeV range, the high magnetic field needed and the lack of a high-density medium. The two-zone model provides new constraints on the fraction of the total energy injected into protons, and the most conservative value amounts to $\sim 2 \times 10^{49}$ erg for a density of 1 cm$^{-3}$. Finally, the non-detection of the SW region of RCW 86, which is very bright in radio, X-rays and at TeV energies, provides specific constraints on this part of the remnant, in terms of the acceleration mechanism as well as the gas density and the magnetic field. \textit{Acknowledgments}. The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. GD and EG are members of CIC-CONICET (Argentina), and LD is a Fellow of CONICET (Argentina). They are supported through grants from CONICET and ANPCyT (Argentina). We thank Estela Reynoso and Anne Green, who collaborated in the first stages of the H{\sc i} data acquisition and processing. \bibliographystyle{apj}
\section{Introduction} The search for concrete materials to realize novel topological states of matter is an exciting frontier in condensed matter physics \cite{Hasan2010, Qi2011, Ando2013}. In that search, topological superconductors attract particular attention due to the potential appearance of exotic quasiparticles called Majorana fermions at their boundaries \cite{Qi2011, Alicea2012, Beenakker2013, Elliott2015, Sato2017}. The superconductors derived from topological insulators (TIs) are expected to be a fertile ground in this respect, owing to the strong spin-orbit coupling which may give rise to an unconventional momentum-dependent superconducting gap even for the isotropic pairing force coming from conventional electron-phonon interactions \cite{Fu2010, Ando2015}. The first such material was Cu$_x$Bi$_2$Se$_3$ \cite{Hor2010}, which is synthesized by intercalating Cu into the van der Waals gap of the prototypical TI material Bi$_2$Se$_3$. Cu$_x$Bi$_2$Se$_3$ shows superconductivity with $T_c \simeq$ 3 K for $x \simeq$ 0.3, and early point-contact spectroscopy measurements pointed to the occurrence of topological superconductivity associated with surface Majorana fermions \cite{Sasaki2011}. Recent measurements of its bulk superconducting properties have elucidated \cite{Matano2016,Yonezawa2017} that it realizes a topological superconducting state which spontaneously breaks in-plane rotational symmetry in a two-fold symmetric manner, even though the crystal lattice symmetry is three-fold. Such an unconventional state is consistent with one of the four possible superconducting states constrained by the $D_{3d}$ lattice symmetry of Bi$_2$Se$_3$ \cite{Fu2010, Ando2015}; this state, named $\Delta_{4x}$ or $\Delta_{4y}$ state depending on the direction of nodes or gap minima, is characterized by a nematic order parameter and hence is called a {\it nematic superconducting state} \cite{Fu2014}.
It was reported that Sr$_x$Bi$_2$Se$_3$ \cite{Liu2015} and Nb$_x$Bi$_2$Se$_3$ \cite{Qiu2015} also realize the nematic superconducting state \cite{Nikitin2016, Pan2016, Shen2017, Asaba2017, Du2017, Kuntsevich2018, Smylie2018}. An interesting superconducting compound related to Cu$_x$Bi$_2$Se$_3$ is Cu$_{x}$(PbSe)$_{5}$(Bi$_{2}$Se$_{3}$)$_{6}$ (CPSBS), which was discovered in 2014 \cite{Sasaki2014}. Its parent compound (PbSe)$_{5}$(Bi$_{2}$Se$_{3}$)$_{6}$ (PSBS) can be viewed as a natural heterostructure formed by a stack of two-quintuple-layer (QL) Bi$_{2}$Se$_{3}$ units alternating with one-bilayer PbSe units \cite{Nakayama2012, Fang2014, Segawa2015, Nakayama2015, Momida2018}. Since the binary compound PbSe is a topologically trivial insulator, PSBS consists of ultra-thin TI layers separated by trivial-insulator layers. When Cu is intercalated into the van der Waals gap in the Bi$_{2}$Se$_{3}$ unit of PSBS, superconductivity with $T_c$ = 2.8~K shows up and nearly 100\% superconducting volume fraction can be obtained for $x \simeq$ 1.5. Since the structural unit responsible for superconductivity in CPSBS is essentially Cu$_x$Bi$_2$Se$_3$, one would expect the same unconventional superconducting state to be realized in CPSBS. Nevertheless, there is a marked difference between the two compounds: whereas there is strong evidence that CPSBS has gap nodes \cite{Sasaki2014}, Cu$_x$Bi$_2$Se$_3$ is fully gapped \cite{Kriener2011}. Hence, it is important to clarify the nature of the nodal superconducting state in CPSBS. In this paper, we report our discovery of two-fold symmetry in the upper critical field $H_{c2}$ and the specific heat $c_p$ in their dependencies on the magnetic-field direction in the basal plane. The pattern of the two-fold symmetry indicates that the gap nodes are lying in the mirror plane of the crystal, suggesting that the $\Delta_{4x}$ state with symmetry-protected nodes is realized in CPSBS. 
This is in contrast to the $\Delta_{4y}$ state realized in Cu$_x$Bi$_2$Se$_3$, in which the nodes are not protected by symmetry and thus are lifted to form gap minima. We argue that the likely cause of the $\Delta_{4x}$ state is the weak distortion of the Bi$_2$Se$_3$ lattice imposed by the PbSe units. This establishes CPSBS as a nematic topological superconductor with symmetry-protected nodes. High-quality PSBS single crystals were grown by using a modified Bridgman method following Refs. \cite{Sasaki2014, Segawa2015}. X-ray Laue images were used for identifying the crystallographic $a$ axis upon cutting the pristine crystals, which were then electrochemically treated to intercalate Cu following the recipe of Kriener {\it et al.} \cite{Kriener2011b}, and the superconductivity was activated by annealing. The precise $x$ value determined from the weight change \cite{Kriener2011b} was 1.47 for both samples presented here. The superconducting shielding fraction (SF) of the samples was measured in a commercial SQUID magnetometer. Further experimental details are given in the Supplemental Material \cite{Suppl}. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig1.pdf} \caption{(a) Schematic pictures of the setups to measure the in-plane magnetic-field-direction dependencies of the out-of-plane resistance $R_{\perp}$ and the specific heat $c_p$. (b) The monoclinic $a$ axis of CPSBS lies in the mirror plane of the Bi$_2$Se$_3$ layers which nearly preserve the trigonal symmetry. (c) Temperature dependence of $R_{\perp}$ in sample A used for the resistive $H_{c2}$ measurements. (d) The zero-field-cooled (ZFC) magnetization data showing shielding fractions of 75~\% and 104~\% in samples A (red) and B (green), respectively.
(e) Temperature dependence of the electronic specific heat $c_{\rm el}$ in 0 T for sample B used for detailed $c_{\rm el}(H)$ measurements; the solid line is the theoretical curve for a line-nodal superconducting gap \cite{Won1994} assuming a superconducting volume fraction of 85~\%. To facilitate the comparison with Cu$_x$Bi$_2$Se$_3$, the molar volume is taken here for 1 mole of Bi$_2$Se$_3$.} \end{figure} To elucidate the possible in-plane anisotropy of the superconducting state, we employed measurements of both the out-of-plane resistance $R_{\perp}$ and the specific heat $c_p$ in various orientations of the in-plane magnetic field $H$ [see Fig. 1(a) for configurations]. From the $R_{\perp}(H)$ data, the upper critical field $H_{c2}$ was extracted by registering the field where 50\% of the normal-state resistance is recovered. Note that $R_{\perp}$ measurements do not impose any in-plane anisotropy associated with the current direction. The $c_p$ measurements were performed with a standard relaxation method using a home-built calorimeter \cite{Suppl} optimized for small heat capacities. Both measurements were done in a split-coil magnet with a $^3$He insert (Oxford Instruments Heliox), with which the magnetic-field direction with respect to the sample holder can be changed with a high accuracy ($\pm1\degree$) by rotating the insert in the magnet. With a manual second rotation axis on the cold finger, measurements with $H$ rotating in either the $ab$ or $ac^*$ plane were possible (note that $\vec{c^*} \parallel \vec{a} \times \vec{b}$ \cite{Segawa2015, Suppl}). We estimate the possible misalignment of the magnetic field to be $\pm 2\degree$. The two samples used here for measuring $R_{\perp}$ and $c_p$ showed SFs of 75\% and 104\%, respectively [Fig. 1(d)]. Here, no demagnetization correction is applied, since the magnetic field was applied parallel to the wide face of the platelet-shaped samples so that the demagnetization factor was $>$ 0.95.
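The resistive criterion described above (the field at which 50\% of the normal-state resistance is recovered) amounts to a simple interpolation of each $R(H)$ sweep. A minimal sketch, with synthetic data in place of the measured curves:

```python
# Linear interpolation of an R(H) sweep to find the field at which
# half of the normal-state resistance R_n is recovered (the 50%
# criterion described in the text). The sample data are synthetic.
def hc2_midpoint(fields, resistances, r_normal):
    target = 0.5 * r_normal
    for (h1, r1), (h2, r2) in zip(
            zip(fields, resistances), zip(fields[1:], resistances[1:])):
        if r1 <= target <= r2:          # crossing within this segment
            return h1 + (target - r1) * (h2 - h1) / (r2 - r1)
    raise ValueError("50% level not crossed in the sweep")

# Synthetic broadened transition with R_n = 1.0:
H = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
R = [0.0, 0.1, 0.35, 0.7, 0.95, 1.0]
print(hc2_midpoint(H, R, 1.0))          # ~2.486 T for this sweep
```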
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig2.pdf} \caption{(a) $R_{\perp}(H)$ curves measured for three principal directions of applied magnetic fields, showing a clear difference in $H_{c2}$. (b) Magnetic-field-direction dependencies of $H_{c2}$ obtained from the $R_{\perp}(H)$ data in the in-plane rotation (main panel) and the out-of-plane rotation (inset); the angles $\varphi$ and $\psi$ are measured from the $a$ axis.} \end{figure} The temperature dependence of $R_{\perp}$ presents a weak upturn below $\sim$100 K [Fig. 1(c)], which reflects the quasi-two-dimensional (2D) electronic states of CPSBS. The $R_{\perp}(H)$ curves measured at 0.5 K with the applied magnetic field in the three orthogonal directions, $a$, $b$, and $c^*$ axes, are shown in Fig. 2(a). One can immediately see that the $H_{c2}$ values for the three magnetic-field directions are different; the smallest value for $H \parallel c^*$ is a consequence of the quasi-2D nature and was already reported \cite{Sasaki2014}, but the anisotropy between $H \parallel a$ and $H \parallel b$ is a new observation. The precise in-plane magnetic-field-direction dependence of $H_{c2}$ at 0.5 K is shown in the main panel of Fig. 2(b), where one can see clear two-fold symmetry with maxima at $H \parallel a$ and the variation $\Delta H_{c2}^{\parallel}$ of $\sim$0.25 T. As explained in detail in the Supplemental Material \cite{Suppl}, the $a$ axis in CPSBS is parallel to the mirror plane and hence the direction of $H_{c2}$ maxima is 90$^{\circ}$ rotated from that in Cu$_x$Bi$_2$Se$_3$ \cite{Yonezawa2017}. We note that anisotropic $H_{c2}$ measurements with the current along the $b$ axis were also performed, and $H_{c2}$ was not affected by the current direction \cite{Suppl}. Also, the $H_{c2}$ anisotropy in $R_{\perp}(H)$ was reproduced in one more sample \cite{Suppl}. For comparison, the magnetic-field-direction dependence of $H_{c2}$ at 0.5 K in the $ac^*$ plane is shown in the inset of Fig.
2(b), where the magnitude of the variation in $H_{c2}$, $\Delta H_{c2}^{\perp}$, is about 1.0 T. This $\Delta H_{c2}^{\perp}$ value means that, for the observed two-fold in-plane anisotropy with $\Delta H_{c2}^{\parallel} \sim$ 0.25 T to be ascribed to an accidental $c^*$-axis component of $H$, a sample misalignment of $\sim$30$\degree$ would be necessary. This is obviously beyond the possible error in our experimental setup, and one can conclude that the two-fold in-plane anisotropy is intrinsic. Owing to its volume sensitivity, the $c_p(T)$ data provide a better estimate of the superconducting volume fraction (VF) than the diamagnetic SF. After subtracting the phononic contribution \cite{Suppl}, the electronic specific heat $c_{\rm el}$ shows a clear anomaly associated with the superconducting transition; Fig. 1(e) shows a plot of $c_{\rm el}/T$ vs $T$, which is fitted with a line-nodal gap theory \cite{Won1994} used for CPSBS in Ref. \cite{Sasaki2014}. This fitting yields a superconducting VF of 85\% for this sample, which is used for further $c_{\rm el}$ measurements. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig3.pdf} \caption{Magnetic-field dependencies of $c_{\rm el}/T$ at (a) 2.01 K, (b) 1.01 K, (c) 0.76 K, and (d) 0.50 K measured in $H \parallel a$ and $H \parallel b$. The dashed line in panel (d) shows the $\sqrt{H}$ behavior expected for a superconducting gap with line nodes, as was already reported in Ref. \cite{Sasaki2014}.} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig4.pdf} \caption{(a),(b) Magnification of the $c_{\rm el}(H)$ behavior near the $H_{c2}$ at (a) 0.76 K and (b) 1.01 K for $H \parallel a$ and $H \parallel b$, showing how the $H_{c2}$ values were extracted.
(c),(d) Temperature dependencies of $H_{c2}$ extracted from (c) the middle-point in the $R_{\perp}(H)$ transitions in sample A and (d) disappearance of the superconducting contribution in $c_{\rm el}$ in sample B, for the principal magnetic-field directions; the solid lines are fits to the empirical $\sim 1-(T/T_c)^{1.2}$ dependence.} \end{figure} The magnetic-field dependencies of $c_{\rm el}$ at various temperatures for both $H \parallel a$ and $H \parallel b$ are shown in Fig. 3; the data presented here were obtained after subtracting the Schottky anomaly \cite{Suppl}, using the same $g$-factor as that reported in Ref. \cite{Sasaki2014}. One can see that at 2.01 and 1.01 K, $c_{\rm el}$ changes little above a certain $H$ value, which we identify as $H_{c2}$. However, at lower temperatures ($\lesssim$ 0.5 K) the change in the $c_{\rm el}(H)/T$ behavior across $H_{c2}$ becomes less evident and we lose the sensitivity to determine $H_{c2}$. As a result, the in-plane anisotropy in $H_{c2}$ is best visible in $c_{\rm el}$ at intermediate temperatures around 1 K [Figs. 4(a) and 4(b)]. In our analysis of $c_{\rm el}(H)$, $H_{c2}$ was determined as the crossing point of the two linear fittings of the $c_{\rm el}/T$ vs $H$ data below and above $H_{c2}$, as shown in Figs. 4(a) and 4(b); here, one can see that the difference in $H_{c2}$ for $H \parallel a$ and $H \parallel b$ is more clearly discernible at 1.01 K, with $\Delta H_{c2} \sim$ 0.34 T, than at 0.76 K. Importantly, $H_{c2}$ is larger for $H \parallel a$, which is consistent with the result of the $R_{\perp}(H)$ measurements. The temperature dependencies of $H_{c2}$ extracted from $R_{\perp}(H)$ and $c_{\rm el}(H)$ for the principal magnetic-field orientations are plotted in Figs. 4(c) and 4(d), respectively. The absolute values of $H_{c2}$ in the two panels are different, mainly because Fig. 4(c) shows the mid-point of the transition while Fig. 4(d) shows the complete suppression.
Nevertheless, the in-plane anisotropy is consistently found in the $R_{\perp}(H)$ and $c_{\rm el}(H)$ results. In Figs. 4(c) and 4(d), the $H_{c2}(T)$ data are fitted empirically with $H_{c2}(T) = H_{c2}(0) \left[1-(T/T_c)^{a} \right]$ with $a \approx$ 1.2; the inapplicability of the conventional Werthamer-Helfand-Hohenberg theory for $H_{c2}(T)$ was already reported for CPSBS and was discussed to be a possible consequence of unconventional pairing \cite{Sasaki2014}. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig5.pdf} \caption{(a),(b) Change in $c_{\rm el}/T$ as a function of the angle $\varphi$ of the applied in-plane magnetic field at constant strengths of $H$ across $H_{c2}$ (2.4 -- 6.0 T, the data are shifted for clarity) at 0.76 and 1.01 K, presenting two-fold symmetric oscillations at $H < H_{c2}$. (c) Dependence of the oscillation amplitude on the strength of $H$ at 0.76 and 1.01 K, demonstrating its quick disappearance above $H_{c2}$. (d) Schematic pictures of the $\Delta_{4y}$ and $\Delta_{4x}$ gaps, which are realized in Cu$_x$Bi$_2$Se$_3$ and CPSBS, respectively, in relation to the Bi$_2$Se$_3$ lattice.} \end{figure} To supplement the conclusion from $H_{c2}$, we have also measured the detailed magnetic-field-direction dependence of $c_{\rm el}$ at 0.76 and 1.01 K in various strengths of the in-plane magnetic field from 2.4 to 6.0 T [Figs. 5(a) and 5(b)]. One can clearly see two-fold symmetric variations where the maxima occur at $H \parallel a$ for $H < H_{c2}$, but the anisotropy quickly disappears for $H > H_{c2}$ [see also Fig. 5(c)]. This disappearance of the anisotropy in the normal state strongly supports the interpretation that the anisotropy is due to the nematicity which arises spontaneously in the superconducting state. It also demonstrates that the observed $c_p$ anisotropy cannot be due to some $g$-factor anisotropy which might show up through the Schottky anomaly. 
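The two steps of this analysis — extracting $H_{c2}$ as the crossing point of straight-line fits to $c_{\rm el}/T$ vs $H$, and fitting the resulting $H_{c2}(T)$ points with the empirical law — can be sketched as follows. All numerical values here are synthetic or hypothetical, chosen only to illustrate the procedure, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hc2_from_crossing(H, c, H_split):
    """H_c2 as the crossing point of straight-line fits to c_el/T vs H
    below and above H_split (the construction of Figs. 4(a),(b))."""
    lo = H < H_split
    m1, b1 = np.polyfit(H[lo], c[lo], 1)    # superconducting-state branch
    m2, b2 = np.polyfit(H[~lo], c[~lo], 1)  # normal-state branch
    return (b2 - b1) / (m1 - m2)

def hc2_empirical(T, Hc2_0, Tc, a):
    """Empirical form H_c2(T) = H_c2(0) [1 - (T/Tc)^a]."""
    return Hc2_0 * (1.0 - (T / Tc) ** a)

# synthetic c_el/T data with a kink at H_c2 = 3.5 T
H = np.linspace(2.0, 5.0, 31)
c = np.where(H < 3.5, 1.0 + 0.8 * (H - 2.0), 2.2)
hc2 = hc2_from_crossing(H, c, 3.5)

# hypothetical H_c2(T) points fitted with the empirical law
T = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
Hc2_T = hc2_empirical(T, 4.0, 2.9, 1.2) + 0.02 * np.array([1, -1, 1, -1, 1])
popt, _ = curve_fit(hc2_empirical, T, Hc2_T, p0=[4.0, 2.9, 1.2])
print(hc2, popt)
```

In practice the choice of the split field and of the fit windows introduces a small systematic uncertainty in $H_{c2}$, which is why the anisotropy is quoted where the kink is sharpest.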
It is useful to note that in Cu$_x$Bi$_2$Se$_3$, a sign change in the magnetic-field-direction dependence of $c_{\rm el}$ was observed \cite{Yonezawa2017}; namely, the maxima in $c_{\rm el}$ were observed for $H$ normal to a mirror plane at high $T$ and/or high $H$, but at low $T$ and low $H$, $c_{\rm el}$ presented minima in this direction. Such a switching behavior was explained as a result of the competition between the Doppler-shift effect and the vortex-scattering effect discussed by Vorontsov and Vekhter (VV) \cite{Vorontsov2006}. According to VV, the latter effect is dominant at higher $H$ at any temperature in a nodal superconductor and causes the maxima in $c_{\rm el}$ to appear for $H$ in the nodal direction. In view of the VV theory, the two-fold in-plane anisotropy in CPSBS with maxima in $c_{\rm el}$ appearing for $H \parallel a$ near $H_{c2}$ points to the realization of the $\Delta_{4x}$-type superconducting gap, which has gap nodes in the mirror plane [see Fig. 5(d)]. This conclusion is different from that for Cu$_x$Bi$_2$Se$_3$ \cite{Yonezawa2017}, where the $\Delta_{4y}$ state is realized. Note that the direction of maxima in $H_{c2}(\varphi)$ \cite{Venderbos2016} is also consistent with the $\Delta_{4x}$ gap in CPSBS and with the $\Delta_{4y}$ gap in Cu$_x$Bi$_2$Se$_3$. While the $\Delta_{4x}$ state was originally predicted for the three-dimensional ellipsoidal Fermi surface of Bi$_2$Se$_3$ to have point nodes \cite{Fu2010}, the quasi-2D nature of the Fermi surface in CPSBS \cite{Nakayama2015} extends the original point nodes into line nodes along the $c^*$ direction, at least in the extreme 2D limit \cite{Hashimoto2014}. As pointed out by Fu \cite{Fu2014}, the nodes in the $\Delta_{4x}$ state are protected by mirror symmetry, which explains why CPSBS is a nodal superconductor despite its essential similarity to Cu$_x$Bi$_2$Se$_3$. 
It is useful to mention that the crystallographic symmetry of CPSBS belongs to the monoclinic space group $C2/m$ \cite{Fang2014, Segawa2015, Momida2018}, which means that the lattice is actually two-fold symmetric \cite{Foot2}. The monoclinic nature arises from the fact that PSBS is a heterostructure of two dissimilar crystal symmetries, the trigonal lattice of Bi$_2$Se$_3$ and the square lattice of PbSe (see \cite{Suppl} for details). The lowering of the symmetry makes one of the three equivalent mirror planes of Bi$_2$Se$_3$ the only mirror plane, which contains the monoclinic $a$ axis; in fact, there is a weak but finite uniaxial distortion \cite{Suppl} in the Bi$_2$Se$_3$ QL units in PSBS \cite{Fang2014, Segawa2015}. According to the theory \cite{Fu2014, Venderbos2016}, under the constraint of the $D_{3d}$ point group, an odd-parity superconducting state which breaks in-plane rotation symmetry must obey $E_u$ symmetry and in general has a nematic gap function $\Delta(\mathbf{k}) = \eta_1 \Delta_{4x} + \eta_2 \Delta_{4y}$, where the two nodal gap functions $\Delta_{4x}$ and $\Delta_{4y}$ are degenerate and $\vec{\eta} = (\eta_1, \eta_2)$ can be viewed as the nematic director. This is why the $E_u$ state is called nematic. However, for the physical properties to present a two-fold anisotropy, a uniaxial symmetry-breaking perturbation is necessary \cite{Venderbos2016}. In CPSBS, the weak uniaxial lattice distortion, which leads to the $C2/m$ symmetry, is apparently responsible for lifting the degeneracy between $\Delta_{4x}$ and $\Delta_{4y}$ and for making the nematic director take the definite direction $\vec{\eta}$ = (1,0).
Such a situation is rather similar to that realized in the high-$T_c$ cuprate YBa$_2$Cu$_3$O$_y$, in which a tiny orthorhombic distortion dictates the orientation of the spontaneously-formed nematic state \cite{Ando2002}, although the nematicity concerns the normal state in YBa$_2$Cu$_3$O$_y$ whereas it concerns the superconducting state in CPSBS. We note that the ARPES measurements on superconducting CPSBS found no clear evidence for a two-fold-symmetric Fermi-surface distortion within the experimental error of $\sim$2 \% \cite{Nakayama2015}. Hence, the possible anisotropy in the Fermi velocity $v_{\rm F}$ cannot be large enough to directly account for the observed two-fold anisotropy in $H_{c2}$ of $\sim$10\%, but it must be responsible for lifting the degeneracy in the nematic state. The emerging picture is that the microscopic physics of electrons in doped Bi$_2$Se$_3$ chooses the $E_u$ superconducting state as the most energetically favorable one, and then a symmetry-breaking perturbation sets the direction of $\vec{\eta}$, so that a two-fold anisotropy shows up \cite{noteMR}. In this regard, there is a report that the two-fold anisotropy in Sr$_x$Bi$_2$Se$_3$ correlates with a weak structural distortion \cite{Kuntsevich2018}. Interestingly, the existence of gap nodes has been suggested for Nb$_x$Bi$_2$Se$_3$ \cite{Smylie2016, Smylie2017}, which implies that the symmetry-breaking perturbation in Nb$_x$Bi$_2$Se$_3$ is different from that in Cu$_x$Bi$_2$Se$_3$ and prompts $\vec{\eta}$ to take (1,0). In summary, we found that both the $H_{c2}$ and the $c_p$ of superconducting CPSBS present a two-fold-symmetric in-plane anisotropy with maxima occurring for $H \parallel a$. This points to the realization of the $\Delta_{4x}$-type superconducting gap associated with symmetry-protected line nodes extending along the $c^*$ direction. Hence, CPSBS is a nematic topological superconductor differing from Cu$_x$Bi$_2$Se$_3$ in the orientation of the nematic director.
\begin{acknowledgments} We thank T. Sato for re-analyzing the raw data of Ref. \cite{Nakayama2015} to check for possible Fermi-surface anisotropy in CPSBS. We also thank Y. Vinkler-Aviv for helpful discussions about the point group. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project number 277146847 - CRC 1238 (Subprojects A04 and B01). \end{acknowledgments}
\section{Introduction} \label{sec:intro} Planck's constant $h$ plays a central role in quantum mechanics. It was first introduced in 1900 by Max Planck in his study of blackbody radiation~\cite{planck} as the proportionality constant between the minimal increment of energy of a charged oscillator in a cavity hosting blackbody radiation and the frequency of its associated electromagnetic wave. In 1905 Albert Einstein explained the photoelectric effect by postulating that luminous energy can be absorbed or emitted only in discrete amounts, called quanta~\cite{einstein}. The light quantum behaves as an electrically neutral particle and was later called the ``photon''. The Planck-Einstein relation, $E=h\nu$, connects the photon energy with its associated wave frequency. Nowadays, the measurement of Planck's constant is ordinarily performed by physics students in many educational laboratories, both in universities and in high schools. The most common techniques exploit the blackbody radiation (see refs.~\cite{george,manikopoulos,crandall,dryzek,brizuela}), the emission of light by LEDs when a forward bias is applied (see ref.~\cite{nieves}) or the photoelectric effect (see refs.~\cite{oleary,hall,bobst,barnett,garver}). Almost all measurements of $h$ exploiting the photoelectric effect are based on the principle of the experiment carried out by Millikan in the years from 1912 to 1915~\cite{millikan}.
It is worth pointing out here that, although the title of his 1916 article is ``A direct photoelectric determination of Planck's $h$'', in his experiment Millikan could not measure $h$ itself, but rather the ratio $h/e$ between Planck's constant and the electron charge; then, using the value of $e$ that he had previously measured~\cite{millikan2,millikan3}, he was able to evaluate $h$ \footnote{When discussing his results with sodium, Millikan writes: \begin{quote} ``We may conclude then that the slope of the volt-frequency line for sodium is the mean of $4.124$ and $4.131$, namely $4.128 \times 10^{-15}$ which, with my value of $e$, yields $h=6.569 \times 10^{-27} \units{erg~sec}$''. \end{quote} }. Although the apparatus used by Millikan was rather complex, the method chosen for the measurement of $h/e$ is quite simple. The detectors basically consisted of a metal surface, which was illuminated with different monochromatic light sources, and a collector electrode, kept at a lower potential with respect to the metal. For each frequency of the incident light, the potential was adjusted until no current flowed through the collector, thus allowing the evaluation of the ``stopping potential''. It is straightforward to show that the stopping potential increases linearly with the frequency of the incident light, and the slope of the straight line is given by $h/e$. A linear fit of the stopping potentials at different frequencies therefore allows one to evaluate Planck's constant if the value of the electric unit charge $e$ is known. In the various didactic experiments proposed to measure the ratio $h/e$ exploiting the photoelectric effect, a variety of devices and light sources are used (see again the examples in refs.~\cite{oleary,hall,bobst,barnett,garver}). In this paper we present a novel didactic experiment for measuring $h/e$ using a photomultiplier tube (PMT) and a set of light emitting diodes (LEDs).
We propose this experiment to undergraduate physics students attending our introductory laboratory course on quantum physics. PMTs are very common devices in atomic and nuclear physics, and can be easily available in a didactic laboratory. The main advantage of a PMT with respect to a conventional photoelectric cell resides in the fact that photoelectrons extracted at the cathode are considerably amplified (the typical gain is $\sim 10^{5} \div 10^{6}$), thus providing detectable output currents even when a large fraction of them is repelled back to the photocathode. This feature will help in evaluating the stopping potential, as we will discuss later in Sec.~\ref{sec:analysis}. The paper is organized as follows: in Sec.~\ref{sec:setup} we describe the instrumentation and the theoretical principles of the measurement; in Sec.~\ref{sec:analysis} we propose a method to analyze the data collected in the experiment; finally in Sec.~\ref{sec:discussion} we discuss the results obtained and some possible strategies to improve the experiment. \section{Experimental setup} \label{sec:setup} PMTs are widely used in many fields of physics to convert an incident flux of light into an electric signal. A PMT is a vacuum tube consisting of a photocathode and an electron multiplier, composed of a set of electrodes called ``dynodes'' kept at increasing potentials. Incident light enters the tube through the photocathode, and electrons (photoelectrons) are extracted as a consequence of the photoelectric effect. Photoelectrons are accelerated by an appropriate electric field towards the first dynode of the multiplier, where a few secondary electrons are extracted. The multiplication process is repeated through all the dynodes, until the electrons ejected from the last dynode are finally collected by the anode, which produces the current signal that can be read out. A PMT can be operated either in pulsed mode or with a continuous light flux.
In our experiment we used a Philips XP 2008 PMT~\cite{philips}. The photocathode is a thin film (a few $\units{nm}$ thick) made of a Sb-Cs alloy deposited over a glass window, and is sensitive to a range of wavelengths that extends from approximately $280 \units{nm}$ to $700 \units{nm}$. The upper limit of this interval is set by the work function of the metal alloy, while the lower limit is set by the glass of the window, which is opaque to UV photons. The photocathode works in transmission mode, i.e. photoelectrons are collected from the opposite side of incident light. The electron multiplier consists of a set of $10$ dynodes, each made of a Be-Cu alloy. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{Divider.eps} \caption{Electric scheme of the voltage divider of the Philips XP 2008 PMT used in the experiment. The photocathode is indicated with K, the anode with A and the various dynodes with \mbox{Dy$_i$}. The values of the resistances are $R_{0}=1\units{M\Omega}$, $R=150\units{k\Omega}$ and $R_{L}=100\units{k\Omega}$. The resistance $R_{1}$ of the potentiometer connecting the cathode with the first dynode is allowed to vary in the range $[0,10]\units{k\Omega}$. The capacitors in the last stages are used to keep the voltage differences stable, and their capacitance is $C_{S}=0.01\units{nF}$.} \label{fig:divider} \end{figure} Fig.~\ref{fig:divider} shows the electric scheme of the voltage divider used to provide the voltage differences to the dynodes. Unlike a standard PMT voltage divider, here the negative high voltage is supplied to the first dynode (not to the photocathode, which is grounded through the resistor $R_{0}$), thus ensuring that the photocathode K is kept at a higher voltage than the first dynode Dy1. The voltage difference between the two electrodes can be adjusted by changing the variable resistance $R_{1}$, and is given by: \begin{equation} V_{K} - V_{Dy1} = \frac{R_{1}V_{0}}{R_{0}+R_{1}} \approx \frac{R_{1}}{R_{0}}V_{0}.
\label{eq:vkd} \end{equation} In writing eq.~\ref{eq:vkd} we took into account the fact that $R_{1} \ll R_{0}$ (see Fig.~\ref{fig:divider}). If a high voltage $V_{0}=1000\units{V}$ is supplied to the PMT, a maximum voltage difference of $10\units{V}$ can be applied between K and Dy1. Since $V_{K} > V_{Dy1}$, photoelectrons extracted from K will be slowed down in their motion towards Dy1 and eventually sent back to K. On the other hand, like in standard PMT voltage dividers, the dynodes and the anode are kept at increasing potentials (if $V_{0}=1000\units{V}$, the average voltage differences between each pair of dynodes will be of the order of $100\units{V}$). In this way, photoelectrons eventually reaching Dy1 will be multiplied, producing a detectable current signal at the anode A, and consequently a voltage difference across the load resistor $R_{L}$. \setlength{\tabcolsep}{12pt} \begin{table}[t] \begin{tabular}{lcc} \hline LED & Peak wavelength ($\units{nm}$) & Peak frequency ($\units{10^{14} Hz}$) \\ \hline red & $631 (17)$ & $4.75 (0.13)$ \\ yellow & $585 (15)$ & $5.11 (0.13)$ \\ green & $563 (12)$ & $5.32 (0.11)$ \\ blue & $472 (14)$ & $6.34 (0.18)$ \\ violet & $403 (6)$ & $7.42 (0.08)$ \\ \hline \end{tabular} \caption{Main features of the emission spectra of the LEDs used in the measurement. The wavelength and frequency spectra $(dN/d\nu \propto (1/\nu^{2}) dN/d\lambda)$ have been fitted with gaussian functions. Here we report the mean values and, in brackets, the standard deviations of each fit function.} \label{tab:LED} \end{table} To perform our measurements we used five LEDs, emitting visible light of different colors ranging from red to violet. We preliminarily measured their emission spectra using an OCEAN OPTICS HR2000+ spectrometer~\cite{ocean}. Tab.~\ref{tab:LED} shows the peak values of the wavelengths and frequencies of each LED. The emission spectra of each LED have been fitted with gaussian functions. 
The values of the peak wavelengths (frequencies) and the corresponding standard deviations are reported in Tab.~\ref{tab:LED}. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{Layout.eps} \caption{Photo of the experimental setup. The window of the photocathode is coupled to a plastic support with a hole drilled at its center, in which the different LEDs can be inserted. The PMT and the support are wrapped with black tape, to prevent external light from entering the device. Two digital multimeters are used to measure the voltage differences between the photocathode and the first dynode and across the load resistor. The knob on the left of the PMT is connected to a potentiometer, which allows one to adjust the voltage of the first dynode. } \label{fig:setup} \end{figure} Fig.~\ref{fig:setup} shows the experimental setup. The photocathode window is coupled to a plastic support with a hole drilled at its center where the different LEDs can be inserted. The PMT and the support are wrapped with black tape, to prevent external light from entering the device. The hole is also covered with black tape when a LED is inserted to perform a measurement. The voltage differences between K and Dy1 and across the load resistor $R_{L}$ are measured by two digital multimeters. The knob placed on the left of the PMT is connected to a potentiometer which allows the user to adjust the value of $R_{1}$ and consequently the voltage $V_{K}-V_{Dy1}$. The high voltage is supplied to the PMT by means of a CAEN N471A NIM power supply module~\cite{caen} (not shown in the figure). In our measurements we operated the PMT with high voltages in the range $700-1000\units{V}$. This choice allows us to keep a high PMT gain without incurring saturation effects due to the large number of electrons flowing across the last dynodes. The students should investigate the dependence of the voltage difference $V_{L}$ across the load resistor on the retarding potential $V_{R}=V_{K}-V_{Dy1}$ for the various LEDs.
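As a quick numerical check of eq.~\ref{eq:vkd} with the component values of Fig.~\ref{fig:divider} ($R_{0}=1\units{M\Omega}$, $V_{0}=1000\units{V}$), the exact divider expression can be compared with the $R_{1} \ll R_{0}$ approximation (a sketch for illustration only):

```python
R0 = 1.0e6   # ohm, cathode-to-ground resistor
V0 = 1000.0  # volt, supplied high voltage
for R1 in (1.0e3, 5.0e3, 10.0e3):      # potentiometer settings (ohm)
    exact = R1 * V0 / (R0 + R1)        # full divider expression
    approx = (R1 / R0) * V0            # R1 << R0 approximation
    print(R1, round(exact, 3), approx)
```

At the maximum setting $R_{1}=10\units{k\Omega}$ the approximation overestimates the retarding potential by about 1\%, which is small compared with the other uncertainties of the measurement.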
The value of $V_{L}$ is proportional to the anode current and consequently to the rate of photoelectrons collected by Dy1. During a measurement, the voltage across the LED must be kept constant, thus ensuring that the intensity of the light entering the PMT is also constant. Photoelectrons extracted from the photocathode will have different initial kinetic energies up to a maximum value given by: \begin{equation} E_{K,max} = h \nu - W \end{equation} where $h \nu$ is the energy of incident photons and $W$ is the work function of the photocathode. If $V_{R}=0$, all the photoelectrons extracted from K will be able to reach Dy1, and a current will flow through $R_{L}$. If $V_{R}$ is increased, only the more energetic photoelectrons will be collected by Dy1 and therefore the output current will decrease. When $eV_{R} \geq E_{K,max}$ the photoelectrons will not be allowed to reach Dy1 and the current flowing through $R_{L}$ is expected to vanish. The value \begin{equation} V_{S} = \frac{E_{K,max}}{e} = \frac{h \nu - W}{e} \label{eq:vs} \end{equation} represents the stopping potential, which depends on the energy of incident photons and on the work function of the photocathode. From the plots of $V_{L}$ as a function of $V_{R}$ (hereafter we will refer to these plots as ``photoelectric curves''), the students will be able to evaluate the stopping potential $V_{S}$ for each LED. The values of $V_{S}$ will then be plotted against the frequency $\nu$ of the incident light, and the data will be fitted with a straight line. According to eq.~\ref{eq:vs}, the value of $h/e$ will be derived from the slope of the line, while the value of $W/e$ will be derived from the intercept.\footnote{The intercept corresponds to the ratio $W/e$ with a change of sign.
If voltages are measured in units of $\units{V}$, the value of $W/e$ will also be in units of $\units{V}$, and will correspond to the value of $W$ in units of $\units{eV}$.} \section{Data analysis} \label{sec:analysis} \begin{figure}[t] \includegraphics[width=0.47\columnwidth]{rosso_04.eps} \includegraphics[width=0.47\columnwidth]{giallo_03.eps} \includegraphics[width=0.47\columnwidth]{verde_03.eps} \includegraphics[width=0.47\columnwidth]{blu_01.eps} \includegraphics[width=0.47\columnwidth]{viola_01.eps} \caption{Examples of photoelectric curves obtained with the various LEDs. The continuous red and green lines represent the fits of the regions $V_{R}<V_{S}$ and $V_{R}>V_{S}$ with the two functions in eq.~\ref{eq:fit}. The values of the fitted parameters and of the $\chi^{2}$ are shown in the top right panels of each plot, with the same color code. The dashed lines are obtained extrapolating the two fit functions outside the corresponding fit regions. A black star is drawn at the intersection point between the two fit functions. The abscissa of the intersection point provides the estimate of the stopping potential. A zoom of the region where the stopping potential is found is shown in the inset of each plot. } \label{fig:photocurves} \end{figure} Fig.~\ref{fig:photocurves} shows some examples of photoelectric curves obtained when the PMT is illuminated with the various LEDs. As expected, the value of $V_{L}$ decreases with increasing $V_{R}$, but never drops to zero. This behavior can be explained by taking into account that a fraction of the incident photons can pass through the photocathode without interacting, and can extract photoelectrons from the first dynode. These electrons are accelerated towards Dy2, thus contributing to the output signal because of the high PMT gain. Hence, even when $V_{R}>V_{S}$, a background current will flow through the load resistor $R_{L}$, and consequently a steady positive value of $V_{L}$ will be measured. 
The fraction of photons extracting photoelectrons from the first dynode changes with the photon energy, as the absorption probabilities in the photocathode and in the first dynode are strongly dependent on the photon energy. Another possible contribution to the background anode current could be due to ambient light entering the device, but we ruled out this contribution by performing a preliminary set of measurements with the LEDs turned off, in which we observed $V_{L}=0$ for any value of $V_{R}$. Finally, it is also worth pointing out here that the electron optics of a PMT is designed for electrodes kept at increasing potentials. Therefore electrons emitted from the photocathode are accelerated towards the first dynode and are focused onto its center regardless of their emission angle, thus ensuring optimal collection efficiency. Setting in our device a retarding potential between K and Dy1, we introduce a distortion in the electron optics of the PMT, which affects the trajectories of photoelectrons, preventing them from reaching the first dynode. However, even when $V_{R}>V_{S}$, some photoelectrons travelling in weaker field regions might be able to reach the first dynode, contributing to the output signal. We performed several sets of measurements, changing either the high voltage $V_{0}$ supplied to the PMT or the intensity of the light emitted by the various LEDs. An increase of $V_{0}$ will result in an increase of the gain of the electron multiplier, while an increase of the light intensity will result in an increase of the number of photoelectrons. In particular, we observe that, if the light intensity is kept constant and $V_{0}$ is changed, the shape of the photoelectric curves does not change, but the values of $V_{L}$ corresponding to a given $V_{R}$ increase with increasing $V_{0}$.
Similarly, if $V_{0}$ is kept constant and the light intensity is changed, the shape of the photoelectric curves does not change, but the values of $V_{L}$ increase with increasing light intensity. This behavior is observed for a wide range of high voltages ($V_{0} \sim 700 \div 1100 \units{V}$) and LED intensities (here the range depends on the color of the LEDs). However, if the voltage across the load resistor becomes too large ($V_{L} \gtrsim 1\units{V}$), saturation effects might occur due to the large number of electrons moving across the last dynodes, because the capacitors in the last stages of the voltage divider might not be able to keep the voltage differences stable. Another feature of the photoelectric curves shown in Fig.~\ref{fig:photocurves} is that the transition between the regime in which photoelectrons emitted from K are collected by Dy1 and the regime in which photoelectrons are repelled is not sharp, i.e. the slope of the photoelectric curve changes smoothly with $V_{R}$, thus making the determination of $V_{S}$ not straightforward. This behavior is due to the spread in the photoelectron kinetic energies when they are emitted from the photocathode. It is also worth noting that, since the photons emitted by LEDs are not monochromatic (as shown in Tab.~\ref{tab:LED} the widths of the frequency spectra are $\sim 2 \div 3\%$ of the corresponding peak values), the photoelectric curves cannot be described in terms of a single value of the stopping potential, but it would be more appropriate to take into account the dependence of the stopping potential on the frequency. Hereafter we will neglect this dependence and we will assume that each photoelectric curve can be described in terms of the stopping potential $V_{S}$ corresponding to the peak frequency of the LED.
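The size of the neglected dependence can be estimated from eq.~\ref{eq:vs}: a frequency spread $\sigma_{\nu}$ maps into a stopping-potential spread $(h/e)\,\sigma_{\nu}$. A back-of-the-envelope sketch using the widths of Tab.~\ref{tab:LED} and the accepted value of $h/e$:

```python
h_over_e = 4.136e-15  # V s, accepted value of h/e
# peak frequency and Gaussian width (units of 1e14 Hz) from Tab. I
leds = {"red": (4.75, 0.13), "yellow": (5.11, 0.13), "green": (5.32, 0.11),
        "blue": (6.34, 0.18), "violet": (7.42, 0.08)}
for name, (nu, sigma) in leds.items():
    spread = h_over_e * sigma * 1e14   # spread of V_S in volts
    print(name, round(spread, 3))
```

The resulting spreads are $\sim 0.03$--$0.08\units{V}$, i.e. a few percent of the stopping potentials themselves, which supports the single-$V_{S}$ approximation.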
The determination of either an analytical or a numerical model of the photoelectric curves would be rather complex and perhaps would go beyond the scope of an introductory laboratory course for undergraduate students. Therefore, to analyze the data collected by the students carrying out the experiment, we developed a phenomenological approach. After the analysis of many photoelectric curves obtained in different conditions (different LED intensities and different PMT high voltages), we noticed that the asymptotic behavior of all photoelectric curves can be adequately described by the following functions: \begin{equation} V_{L} = \left\{ \begin{array}{ll} a_{1} + b_{1} e^{-\frac{V_{R}}{c_{1}}} & \textrm{for}~V_{R} < V_{S} \\ a_{2} + b_{2} V_{R} & \textrm{for}~V_{R} > V_{S} \\ \end{array} \right. \label{eq:fit} \end{equation} For each photoelectric curve we select two sets of points, belonging to the regions $V_{R}<V_{S}$ and $V_{R}>V_{S}$, and we fit these points with the functions in eq.~\ref{eq:fit}, thus determining the parameters $a_{1}$, $b_{1}$, $c_{1}$, $a_{2}$ and $b_{2}$. The fits are performed using the free data analysis software ROOT~\cite{root}, provided by CERN. We then define the value of the stopping potential $V_{S}$ as the abscissa of the intersection point of the two curves, which can be evaluated by solving the following non-linear equation: \begin{equation} a_{1} + b_{1} e^{-\frac{V_{S}}{c_{1}}} = a_{2} + b_{2} V_{S}. \label{eq:vsfit} \end{equation} The previous equation, which gives $V_{S}$ as a function of the parameters $a_{1}$, $b_{1}$, $c_{1}$, $a_{2}$ and $b_{2}$, cannot be solved analytically, but can easily be solved numerically, for instance using the bisection method. This procedure is graphically illustrated in the plots of Fig.~\ref{fig:photocurves}, where we superimposed on each photoelectric curve the functions obtained from the two fits, also showing the position of the intersection between the two curves.
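A minimal implementation of this bisection step, together with the numerical derivative used in the error propagation described next, could look as follows; the parameter values are hypothetical, not the fitted ones:

```python
import math

def solve_vs(a1, b1, c1, a2, b2, lo=0.0, hi=5.0, tol=1e-9):
    """Solve a1 + b1*exp(-V/c1) = a2 + b2*V for V by bisection."""
    f = lambda v: a1 + b1 * math.exp(-v / c1) - (a2 + b2 * v)
    assert f(lo) * f(hi) < 0, "root must be bracketed in [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical fit parameters (voltages in volts)
p = dict(a1=0.05, b1=0.60, c1=0.40, a2=0.10, b2=-0.002)
vs = solve_vs(**p)
# numerical partial derivative dV_S/da1: shift a1 by a small delta,
# re-solve, and take the finite-difference ratio
da1 = 1e-6
dvs_da1 = (solve_vs(**{**p, "a1": p["a1"] + da1}) - vs) / da1
print(round(vs, 3), round(dvs_da1, 2))
```

The same finite-difference scheme, repeated for the other four parameters, provides all the partial derivatives needed for the standard error propagation formula.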
As we anticipated in Sec.~\ref{sec:intro}, it is worth pointing out here that the detection of a significant steady background current instead of a slowly vanishing current helps to better define the stopping potential. To evaluate the error on $V_{S}$ we use the standard error propagation formula, starting from the errors on $a_{1}$, $b_{1}$, $c_{1}$, $a_{2}$ and $b_{2}$, which are computed by the ROOT software when performing the fits. However, since $V_{S}$ is an implicit function of the parameters, a numerical approach is also needed to evaluate its partial derivatives with respect to the various parameters. For instance, to evaluate the partial derivative $\partial V_{S} / \partial a_{1}$, we start from the set of fitted parameters and we change $a_{1}$ into $a_{1}'=a_{1}+\delta a_{1}$ \footnote{According to the definition of derivative, the condition $\delta a_{1} \rightarrow 0$ must be fulfilled, and therefore one must choose $\delta a_{1}$ such that $| \delta a_{1} | \ll |a_{1}| $.}; then we solve eq.~\ref{eq:vsfit} with the value of $a_{1}'$, obtaining a new solution $V_{S}'$, and finally we evaluate the partial derivative as $\partial V_{S} / \partial a_{1} \approx \delta V_{S} / \delta a_{1}$, where $\delta V_{S}=V_{S}'-V_{S}$. In the same way we calculate the partial derivatives of $V_{S}$ with respect to the other parameters. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{Fit.eps} \caption{Stopping potentials for the various LEDs as a function of frequency. The horizontal error bars represent the width of the spectra shown in Tab.~\ref{tab:LED}, while the vertical error bars represent the uncertainties on the stopping potentials, evaluated as discussed in sec.~\ref{sec:analysis}. The points are well fitted with a straight line. The fit results are summarized in the top panel (the parameters $p_{0}$ and $p_{1}$ are the intercept and the slope, respectively), where the $\chi^{2}$ of the fit is also shown.
} \label{fig:stoppingpotentials} \end{figure} The procedure used to evaluate $h/e$ from the measured stopping potentials is illustrated in Fig.~\ref{fig:stoppingpotentials}, where the stopping potentials obtained from the analysis of the photoelectric curves shown in Fig.~\ref{fig:photocurves} are plotted against the frequency of the incident light. The error bars associated with the LED frequencies are the widths of their emission spectra, which are taken from Tab.~\ref{tab:LED}, while those associated with the stopping potentials are calculated following the approach described above. A linear fit of the experimental points is then performed. In the example shown in Fig.~\ref{fig:stoppingpotentials}, the fit procedure yields a $\chi^{2}/d.o.f.=0.037/3$, which suggests that the error bars associated with the stopping potentials are overestimated, a feature which might be a consequence of the phenomenological model that we adopted to describe the photoelectric curves. According to eq.~\ref{eq:vs}, the slope of the line corresponds to $h/e$, while its intercept corresponds to $W/e$. Assuming for the electron charge the current value $e=1.60 \times 10^{-19}\units{C}$, the fit of the data shown in Fig.~\ref{fig:stoppingpotentials} yields for Planck's constant a value $h = (6.75 \pm 1.11) \times 10^{-34} \units{J \cdot s}$ and for the work function of the photocathode a value $W = (0.78 \pm 0.43) \units{eV}$. \section{Discussion and conclusions} \label{sec:discussion} The measurement of $h/e$ proposed in the present paper yields an uncertainty of about $20\%$ on the value of $h/e$ and an uncertainty larger than $50\%$ on $W/e$. The main sources of error are the spreads on the LED frequencies and the uncertainties on the values of the stopping potentials. To mitigate the effects of the frequency spreads, one could use monochromatic light sources, coupling the LEDs to appropriate filters, or even use laser sources.
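The linear extraction of $h$ and $W$ from the stopping potentials described in this section can be sketched as follows. The LED frequencies and the reference values of $h$ and $W$ below are illustrative choices used only for a self-consistency check, not our measured data.

```python
import numpy as np

# Linear relation V_S = (h/e)*nu - W/e (eq. vs).  All numbers below are
# illustrative assumptions: synthetic stopping potentials are generated from
# reference values of h and W, then the fit must recover them.
e = 1.60e-19                          # C
h_ref = 6.63e-34                      # J s
W_ref = 0.78 * e                      # J (assumed work function)

nu = np.array([4.5e14, 5.1e14, 5.6e14, 6.4e14, 7.4e14])   # Hz (LED-like peaks)
V_S = (h_ref / e) * nu - W_ref / e                         # volts

# Rescale the abscissa to O(1) so the degree-1 least-squares fit stays
# well conditioned, then undo the rescaling on the slope.
x = nu * 1e-14
slope_scaled, intercept = np.polyfit(x, V_S, 1)
h_meas = slope_scaled * 1e-14 * e     # slope in V/Hz times e  ->  J s
W_meas = -intercept * e               # J
```

In the actual analysis, the stopping potentials carry the uncertainties propagated from the curve fits, so a weighted least-squares fit (with the resulting $\chi^{2}$) replaces the plain fit shown here.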
The uncertainties on the stopping potentials could also be reduced with a more appropriate modeling of the photoelectric curves, which goes beyond the scope of an introductory laboratory course. Despite the poor precision attained, we strongly believe that this measurement of $h/e$ is extremely useful from the educational point of view, because not only does it allow students to understand the main features of the photoelectric effect, but it also stimulates further considerations about the physics involved in the measurement and about the technique adopted.
\section{Introduction} In 1872 Felix Klein formulated his Erlangen program as such: ``A manifold is given and with it a group of transformations. [..] Develop the theory of invariants with regard to this group'' (\cite{Klein1927}, p. 28). According to him, Sophus Lie accepted this program and spread it among his students.\footnote{The history of the ``Erlanger Programm'' is much more complicated, though, cf. \cite{Rowe89}, \cite{Hawk1989}. For the importance of Klein's ideas for physics cf. \cite{Kastrup87}.} In the first part of what follows, in the spirit of Klein, several new concepts will be introduced and investigated: {\em weak (Lie) motions} (cf. section \ref{section:weakmotion}) and {\em groups of extended motions} (cf. section \ref{section:extmotion}). The latter concept is related to a suggested widening of the physicists' concept of a Lie algebra to particular tangent Lie algebroids ({\em extended Lie algebras}). Some of the corresponding finite transformations forming groups are presented: they are no longer Lie groups. I will also propose an extension of the Cartan-Killing form which up to now seemingly has not been studied. Its definition allows the introduction of Riemannian and Lorentz metrics on the sections of a subbundle of the tangent bundle. The mathematical literature on algebroids and groupoids (e.g., \cite{Mack2005}) has led to a few formal applications to Lagrangian mechanics \cite{Liber1996}, \cite{Martin2001}, \cite{Grabo2006}. The particular tangent Lie algebroids presented here are an example of such structures much closer to physics than the examples usually given by mathematicians. \section{Lie-dragging} \label{section:Liedrag} \subsection{Preliminaries} \label{subsection: prelim} In metric geometry, the concept of symmetry may be expressed by an isometry of the metrical tensor $g_{ab}$ of such a space. This means that this tensor field remains unchanged along the flow of a vector field $X$.
An expression for this demand may be formulated with the help of the Lie derivative, defined for tangent vector fields $X:=\xi^{a} \frac{\partial}{\partial x^{a}}, Y:=\eta^{a} \frac{\partial}{\partial x^{a}}$ by:\begin{equation} {\cal{L}}_{X} Y = [X, Y], \label{Liederi}\end{equation} where $[.~,~.] $ denotes the Lie-bracket $[A, B] = AB-BA$. If (\ref{Liederi}) is expressed by the components $\xi^{a}, \eta^{a}$ of the tangent vectors $X, Y$, then \begin{equation} {\cal{L}}_{\xi} \eta^{a} = \eta^{a}_{~, c} \xi^{c} - \eta^{c} \xi^{a}_{~, c}~, \label{Liedericomp}\end{equation} where $\eta^{a}_{~, c}=\frac{\partial \eta^{a}}{\partial x^{c}}$. If ${\cal{L}}_{X} Y =0$, the vector field $X$ is called a symmetry of the vector field $Y$.\footnote{Such symmetries play an important role for the integration of differential equations, cf. \cite{Wino97}.} The Leibniz rule holds for the Lie derivative.\footnote{Latin indices from the beginning ($a, b, c,..$) and end of the alphabet ($r,s, t,..$) run from 1 to n or 0 to n-1 where n is the dimension of the space considered. Indices from the middle ($i, j, k, l, ..$) may take other values. The summation convention is used except when indicated otherwise.} From (\ref{Liederi}) we have \begin{equation} {\cal{L}}_{Z} {\cal{L}}_{X} Y = [Z, [X,Y]], \label{Liederi2}\end{equation} and with help of the Jacobi identity: \begin{equation} {\cal{L}}_{Z} {\cal{L}}_{X} Y + {\cal{L}}_{Y} {\cal{L}}_{Z} X + {\cal{L}}_{X} {\cal{L}}_{Y} Z= [Z, [X, Y]] + [Y, [Z, X]] + [X, [Y, Z]] = 0. \label{Jaco}\end{equation} From (\ref{Jaco}): \begin{equation} {\cal{L}}_{Z} {\cal{L}}_{X} Y - {\cal{L}}_{X} {\cal{L}}_{Z} Y= [[Z,X], Y] = {\cal{L}}_{[Z,X]} Y = {\cal{L}}_{{\cal{L}}_{Z}X} Y~.
\label{redu}\end{equation}\\ For a Lie group, a special subspace of the tangent space is formed by the infinitesimal generators $X_{(i)}:=\xi_{(i)}^{a}\frac{\partial}{\partial x^{a}},~ (i, j, l = 1, 2, .., p)$ of a Lie-algebra \begin{equation} [X_{(i)}, X_{(j)}] = c_{ij}^{~~l} X_{(l)}~, \label{Liealgeb}\end{equation} with structure constants\footnote{In current mathematical literature, the definition of a Lie algebra is much more general. It is defined either as a module ${\cal B}(M)$ of the set of all $ C^{\infty}$-vector fields on a $C^{\infty}$-manifold with a multiplication introduced via the Lie-bracket, or as a finite-dimensional vector space $V$ over the real or complex numbers with a bilinear multiplication on it defined by an anti-commuting bracket $[~ ,~ ]$ satisfying the Jacobi identity (\ref{Jaco}).} $ c_{ij}^{~~l}$. From (\ref{Liealgeb}) we obtain: \begin{equation}{\cal{L}}_{X_{i}} {\cal{L}}_{X_{j}} X_{k} = c_{jk}^{~~l} c_{il}^{~~m}X_{m}\label{Liealg2}\end{equation} such that according to (\ref{Jaco}):\begin{equation} c_{jk}^{~~l} c_{il}^{~~m} +c_{ij}^{~~l} c_{kl}^{~~m}+c_{ki}^{~~l} c_{jl}^{~~m} =0. \label{Jaco2}\end{equation} A symmetric bilinear form, the Cartan-Killing form, may be introduced:\begin{equation} \sigma_{ij}:= c_{il}^{~~m}c_{jm}^{~~~l}~.\end{equation} If it is nondegenerate, i.e., for semisimple Lie groups, $\sigma_{ij}$ can be used as a metric in group space. In section \ref{section:algebext}, we will permit that the structure constants become directly dependent on the components $\xi^{a}_{~i}$ of the vector fields $X_{i}(x)$: they will become {\em structure functions}.\footnote{The {\em structure constants} in (\ref{Liealgeb}) are brought into the definitions of a Lie algebra presented in the previous footnote by the choice of a basis $\{Y_1, Y_2,.. ,Y_n\}$ of $V$. The multiplicative action is determined for all vectors $ X, Y$ of $V$ only if all brackets $[X, Y]$ are known.
According to one author: ``We 'know' them by writing them as linear combinations of the $Y_{i}$. The coefficients $ c_{ij}^{~~l}$ in the relations $[Y_{i}, Y_{j}]= c_{ij}^{~~l} Y_{l}$ are called structure constants'' (\cite{Samel89}, pp. 1, 5). This recipe no longer works for vector fields which cannot be generated by linear combinations with constant coefficients from a basis. Cf. section \ref{section:algebext}.}\\ \subsection{Lie-dragging (with examples)} \label{subsection:Liedragging} Under ``Lie-dragging'' with regard to an arbitrary $C^{\infty}$ vector field $X=\xi^{a} \frac{\partial}{\partial x^{a}}$ we understand the operation of the Lie derivative on any geometric object {\em without} the simultaneous requirement that the result be zero.\footnote{This use of the name ``Lie-dragging'' is different from the one in \cite{Schutz82}. By (\ref{Liederi}), the Lie-dragging of a vector field is expressed.} Applied to the metric $g_{ab}$, this means \begin{equation}{\cal{L}}_{\xi} g_{ab} = \gamma_{ab}~, \label{liedrag} \end{equation} where $\gamma_{ab}$ is a symmetric tensor of any rank between 0 and n (in n-dimensional space). In the sequel we will be interested in the case $\gamma_{ab} \neq \lambda g_{ab}$. For a tensor field, Lie-dragging neither conserves the rank of the field, nor, if it is exerted on a symmetric bilinear form, its signature. The quest for the conditions under which Lie-dragging leads to a specific rank or signature of a tensor field could be among the first mathematical investigations into the concept (with rank 0 of $\gamma_{ab}$ being set aside). Also, the vector fields $X$ might be classified according to whether Lie-dragging with them leads to a prescribed rank for given metric $g_{ab}$. In any case, not every arbitrary $\gamma_{ab}$ can be reached by Lie-dragging (cf.
Appendix 1).\\ \noindent Equation (\ref{liedrag}) can be read in different ways:\\ \noindent A) Given a single vector field (a set of vector fields) {\em and} an arbitrary metric $g_{ab}$; the set of all possible bilinear forms $\gamma_{ab}$ is to be determined by a straightforward calculation. This is an intermediate step for the determination of weak Lie motions of $g_{ab}$.\\ \noindent B) Given a single vector field (a set of vector fields) and a fixed target tensor $\gamma_{ab}$; the metrics $g_{ab}$ which are Lie-dragged into it are to be determined. This requires solving a system of 1st-order PDEs.\\ \noindent C) Given both a start metric $g_{ab}$ and a target metric $\gamma_{ab}$. The task is to determine the vector fields $X$ dragging the one into the other.\footnote{If we ask for both, ${\cal{L}}_{X}g_{ab} = \gamma_{ab}$ and ${\cal{L}}_{X}\gamma_{ab} = g_{ab}$, then we are back to weak homothetic mappings for both $g$ and $\gamma$. Cf. next section.} \\ As a first example of Lie-dragging in space-time leading to tensors of lower rank, we look at the Kasner metric: \begin{equation} ds^2 = (dx^0)^2 - (x^0)^{2p_1}(dx^1)^2 - (x^0)^{2p_2}(dx^2)^2 -(x^0)^{2p_3}(dx^3)^2~, \label{Kasner}\end{equation} an exact solution of Einstein's vacuum field equations if $p_1 + p_2 + p_3 = 1 = (p_1)^2+(p_2)^2+(p_3)^2~, p_1, p_2, p_3$ constants. Lie-dragging with \begin{center} $X=\delta_0^a \frac{\partial}{\partial x^a}$ \end{center} leads to a bilinear form of rank 3, i.e., after a coordinate change, to the space sections: \begin{equation} ds^2 = - (y^0)^{2p_1}(dy^1)^2 - (y^0)^{2p_2}(dy^2)^2 - (y^0)^{2p_3}(dy^3)^2. \nonumber \end{equation} Unlike this, Lie-dragging of (\ref{Kasner}) with \begin{center}$X= f(x^0)\delta_1^a \frac{\partial}{\partial x^a}$ \end{center} leads to a tensor of rank 2: $\gamma_{ab}= 2 \frac{df(x^0)}{dx^0}~ g_{1(a}\delta_{b)}^{~0}$. In the second example, a Lie-dragged metric of rank 1 is prescribed.
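Before turning to that second example, the two Kasner draggings above can be checked mechanically. The sketch below uses sympy with the concrete (assumed) exponents $p_1=p_2=2/3$, $p_3=-1/3$ and the arbitrary choice $f(x^0)=(x^0)^2$.

```python
import sympy as sp

# Lie-dragging gamma_ab = g_ab,c xi^c + g_cb xi^c_,a + g_ac xi^c_,b applied to
# the Kasner metric; the exponents (2/3, 2/3, -1/3) satisfy the two Kasner
# conditions p1+p2+p3 = 1 = p1^2+p2^2+p3^2 (an illustrative choice).
t, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', positive=True)
coords = [t, x1, x2, x3]
p = [sp.Rational(2, 3), sp.Rational(2, 3), sp.Rational(-1, 3)]
g = sp.diag(1, -t**(2 * p[0]), -t**(2 * p[1]), -t**(2 * p[2]))

def lie_drag(xi):
    """Lie derivative of the metric along xi, componentwise."""
    gamma = sp.zeros(4, 4)
    for a in range(4):
        for b in range(4):
            gamma[a, b] = sum(sp.diff(g[a, b], coords[c]) * xi[c]
                              + g[c, b] * sp.diff(xi[c], coords[a])
                              + g[a, c] * sp.diff(xi[c], coords[b])
                              for c in range(4))
    return gamma.applyfunc(sp.simplify)

gamma_time = lie_drag([1, 0, 0, 0])    # X = d/dx^0: spatial diagonal survives
gamma_x1 = lie_drag([0, t**2, 0, 0])   # X = f(x^0) d/dx^1 with f = (x^0)^2
```

The first dragging leaves a bilinear form of rank 3, the second one of rank 2, with only the $\gamma_{01}=\gamma_{10}=f'(x^0)\,g_{11}$ components surviving, as stated in the text.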
Let \begin{equation} {\cal{L}}_{\xi} g_{ab} = X_a X_b ~,\end{equation} with the vector field $X$ tangent to a null geodesic:\begin{equation}(\overset{g}{\nabla}_{b}X_{a}) X^{b}= 0~, g_{ab} X^{a}X^{b}= 0\label{examp1} ~. \end{equation} From the definition of $ {\cal{L}}_{\xi} g_{ab}$ given in (\ref{isomet2}) and (\ref{examp1}), it follows that $(X^{s} \xi_s)_{,a}X^a =0 $: $X^{s} \xi_s$ must be constant along the geodesic. (\ref{examp1}) leads to a restriction on $\xi$ for given null geodesic, or on $X^a$ if the vector field $\xi$ is given. $X^a$ generates a super-weak motion (cf. section \ref{section:weakmotion}).\\ The collineations presented in section \ref{section:motions} are also examples of Lie-dragging. \section{Motions and Collineations} \label{section:motions} On a manifold with differentiable metric structure, a motion is defined by the vanishing of the Lie-derivative of the metric with regard to the tangent vector field $X=\xi^a\frac{\partial}{\partial x^{a}}$: \begin{eqnarray} {\cal{L}}_{X} g(Y,Z) = 0 = X g(Y,Z) - g(Z,{\cal{L}}_{X}Y) - g(Y,{\cal{L}}_{X}Z) \nonumber\\ = X g(Y,Z) - g(Z,[X,Y]) - g(Y,[X,Z]), \label{Liecoofree}\end{eqnarray} where $X, Y, Z$ are tangent vector fields. In local coordinates, (\ref{Liecoofree}) reads as: \begin{equation} \gamma_{ab}= {\cal{L}}_{\xi} g_{ab}=0= g_{ab,c}~\xi^{c} + g_{cb}~\xi^{c}_{~,a} + g_{ac}~\xi^{c}_{~,b} ~, \label{isomet}\end{equation} with $g_{ab}= g_{ba}$. The vector field $\xi$ is named a {\em Killing vector}; its components generate an infinitesimal symmetry transformation:\footnote{For mechanical systems in phase space, this infinitesimal symmetry transformation is applied to the generalized coordinates and supplemented by an infinitesimal transformation for the momenta: $p_a \rightarrow p_{a'}=p_a + \eta_a $ with an additional infinitesimal generator $\eta_a$. Cf. \cite{Pang2009}. The authors use the name ``weak-Lie'' symmetry for what we would name Lie symmetry.} $x^{i} \rightarrow x^{i'}= x^{i} + \xi^{i}$.
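An elementary component check of the Killing equation (\ref{isomet}) in the Euclidean plane, with one vector field that is a motion and one that is not (both fields are arbitrary illustrations):

```python
import sympy as sp

# gamma_ab = g_ab,c xi^c + g_cb xi^c_,a + g_ac xi^c_,b  (eq. isomet)
# for g_ab = delta_ab in two dimensions.
x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
g = sp.eye(2)

def lie_metric(xi):
    L = sp.zeros(2, 2)
    for a in range(2):
        for b in range(2):
            L[a, b] = sum(sp.diff(g[a, b], X[c]) * xi[c]
                          + g[c, b] * sp.diff(xi[c], X[a])
                          + g[a, c] * sp.diff(xi[c], X[b]) for c in range(2))
    return L

rot_gamma = lie_metric([-x2, x1])     # rotation generator: a Killing vector
stretch_gamma = lie_metric([x1, 0])   # not Killing: gamma_11 = 2
```

The rotation generator annihilates the metric, while the stretch along $x^1$ produces a nonvanishing $\gamma_{ab}$, i.e., a genuine Lie-dragging in the sense of the previous section.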
(\ref{isomet}) may be expressed in a different form:\footnote{Symmetrization brackets are used: $A_{(r}B_{s)}= \frac{1}{2}(A_rB_s + A_sB_r);~A_{[r}B_{s]}= \frac{1}{2}(A_rB_s - A_sB_r)$.}\begin{equation}{\cal{L}}_{\xi} g_{ab} = 2\overset{g}{\nabla}_{(a}\xi_{b)} = 0 .\label{isomet2}\end{equation} In (\ref{isomet2}), $\overset{g}{\nabla}$ is the covariant derivative with respect to the metric $g_{ab}$ (Levi-Civita connection), and $\xi_a =g_{ab}\xi^{b}$. From (\ref{isomet}) we can conclude that ${\cal{L}}_{\xi}ds= 0$ for all $dx^a$, i.e., all distances remain invariant. A consequence of (\ref{isomet}) is that the motions $\xi$ form a Lie group and the corresponding infinitesimal generators $X_{(i)}:=\xi_{(i)}^{\sigma} \frac{\partial}{\partial x^{\sigma}}$ a Lie algebra (\ref{Liealgeb}) (cf. \cite{Yano57}). \\ As an example for a group of motions in 3-dimensional Euclidean space, we start from a Lie group $G_3$ acting on $V_3$ with finite transformations: \begin{equation} x^{1'} = x^{1} + c_{1},~ x^{2'} = x^{2} + c_{2}x^{1},~x^{3'} = x^{3} + c_{3} ~.\label{fintrans} \end{equation} The corresponding Lie algebra is (\cite{Petrov1969}, p. 213): \begin{equation} [X_1, X_2]=0,~ [X_1, X_3]=0,~[X_2, X_3]= X_1~. \label{LieG3} \end{equation} Lie-dragging with the vector fields $\xi^a_1=\delta^a_2,~\xi^a_2=\delta^a_3,~\xi^a_3=-\delta^a_1 + x^3\xi^a_1 $ gives:\begin{eqnarray}{\cal{L}}_{\xi_1} g_{ab} = g_{ab,2}=: \overset{(1)}{\gamma}_{ab}~,~{\cal{L}}_{\xi_2} g_{ab} = g_{ab,3}=: \overset{(2)}{\gamma}_{ab}~, ~\nonumber\\{\cal{L}}_{\xi_3} g_{ab} = -g_{ab,1} + x^3 g_{ab,2} + 2 g_{2(a}\delta_{b)}^3 =: \overset{(3)}{\gamma}_{ab}~. \end{eqnarray} All $ \overset{(i)}{\gamma}_{ab}$ can have full rank.
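That the three dragging fields realize the algebra (\ref{LieG3}) can be verified directly. In the sketch below $\xi_3 = -\partial_1 + x^3 \partial_2$, the form consistent with the Lie-dragged expression for $\overset{(3)}{\gamma}_{ab}$ above.

```python
import sympy as sp

# Commutators of xi_1 = d/dx^2, xi_2 = d/dx^3, xi_3 = -d/dx^1 + x^3 d/dx^2:
# the only nonvanishing bracket should be [X_2, X_3] = X_1, as in eq. (LieG3).
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def bracket(u, v):
    """Components of [U, V] = L_U V, cf. eq. (Liedericomp)."""
    return [sum(sp.diff(v[a], X[c]) * u[c] - sp.diff(u[a], X[c]) * v[c]
                for c in range(3)) for a in range(3)]

xi1 = [0, 1, 0]
xi2 = [0, 0, 1]
xi3 = [-1, x3, 0]

b12 = bracket(xi1, xi2)
b13 = bracket(xi1, xi3)
b23 = bracket(xi2, xi3)
```

The check confirms $[\xi_1,\xi_2]=[\xi_1,\xi_3]=0$ and $[\xi_2,\xi_3]=\xi_1$.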
The demand $\overset{(i)}{\gamma}_{ab}=0,~ i= 1, 2, 3 ,$ makes this $G_3$ a group of motions, whence follows: \begin{gather*} g_{ab}= \begin{pmatrix} \alpha_{11}^{(0)} & \alpha_{12}^{(0)} & P_1 \\ \alpha_{21}^{(0)} & \alpha_{22}^{(0)} & P'_1 \\ P_1 & P'_1 & P_2 \end{pmatrix}, \label{G3}\end{gather*} where $P_1= \alpha_{12}^{(0)} x^{1} + \alpha_{13}^{(0)}, P'_1 = \alpha_{22}^{(0)} x^{1} + \alpha_{23}^{(0)}$ and $P_2 =\alpha_{22}^{(0)} (x^{1})^{2} + 2 \alpha_{23}^{(0)}x^{1} + \alpha_{33}^{(0)}$ with $\alpha_{33}^{(0)}, \alpha_{1p}^{(0)}, \alpha_{2p}^{(0)}~, (p = 1, 2, 3)$ constants. We will see in section \ref{G3weak} how the metric looks if the group is demanded to be a complete set of {\em weak} (Lie) motions.\\ Further types of symmetries are defined by the vanishing of the Lie derivative applied to other geometric objects like the connection (``affine collineations'' ${\cal{L}}_{\xi}~ \Gamma^{~~~c}_{ab}(g)= 0$, cf. \cite{Maar87}), the curvature tensor (``curvature collineations'' ${\cal{L}}_{\xi}~R^{c}_{~dab}(g)=0$, cf. \cite{KaLeDa1969}), or the Ricci tensor (``Ricci'' or ``contracted curvature collineations'' ${\cal{L}}_{\xi}~R^{c}_{~abc}(g)=0,$ cf. \cite{Collin1970}). Another generalization is the concept of conformal Killing vector, defined by: \begin{equation} {\cal{L}}_{\xi} g_{ab} = \lambda (x^1,..x^n) g_{ab} ~. \label{isomet3}\end{equation} A subcase is given by {\em homothetic} motions with $\lambda = \lambda_{0} =$ const. Conformal Killing vectors are included in what follows. Thus, (\ref{isomet}) and (\ref{isomet3}) are particular subcases of Lie-dragging: they constitute a fixed point in the map of symmetric differentiable tensor fields $g_{ab}$ of full rank defined by Lie-dragging. \section{Weak Lie motions (weak symmetries)} \label{section:weakmotion} In the 1980s, a concept of ``p-invariance'' was introduced \cite{Papa83}:\begin{equation} {\cal{L}}_{\xi}\cdots{\cal{L}}_{\xi}~g_{ab} = 0~, \end{equation} with p Lie derivatives, $p>1$, acting on the metric.
At the time, an application for $p=2$ was given in Einstein-Maxwell theory \cite{Goe84}. In the following we will concentrate on the case $p=2$.\\ {\em Definition 1}:\\ An infinitesimal point transformation $x \rightarrow x+\xi$ satisfying \begin{equation}{\cal{L}}_{\xi}{\cal{L}}_{\xi}g_{ab}=0,~{\cal{L}}_{\xi}g_{ab}\neq 0, \label{isomet5}\end{equation} generates a ``weak Lie motion''.\\ A coordinate-free formulation of (\ref{isomet5}) follows from applying the Leibniz rule twice: \begin{eqnarray} {\cal{L}}_{W}{\cal{L}}_{Z}~ g(X,Y)= WZ~ g(X,Y) - W g([Z,X],Y) - W g(X,[Z,Y]) - Z g([W,X],Y) \nonumber\\ + g([Z,[W,X]],Y) + g([W,X],[Z,Y]) - Z g(X,[W,Y]) + g([Z,X],[W,Y]) + g(X,[Z,[W,Y]])~, \nonumber\end{eqnarray} with $W=Z=\xi$ in (\ref{isomet5}). \noindent If applied to other geometric objects, we call (\ref{isomet5}) ``weak symmetry''.\footnote{In the set of solutions of (\ref{isomet5}), the isometries (motions) must also occur. We speak of {\em genuine} weak Lie motions when motions are to be excluded.} We also use the expression {\em weak isometry}. \noindent (\ref{isomet5}) can be read in two ways:\\ - The metric $g_{ab}$ is given; determine the generator $\xi$ of a weak Lie motion;\\ - A vector field or a Lie algebra is given; determine the metric $g_{ab}$ which allows these fields as weak Lie motions.\\ As has been pointed out in \cite{Papa83}, a disadvantage of the new concept is that ${\cal{L}}_{\xi}{\cal{L}}_{\xi}g^{ab} = 0$ does not follow from ${\cal{L}}_{\xi}{\cal{L}}_{\xi}g_{ab} = 0$ for ${\cal{L}}_{\xi}g_{ab}\neq 0$. In fact:\begin{equation}{\cal{L}}_{\xi}{\cal{L}}_{\xi}g^{ab} = -g^{as}g^{bt}{\cal{L}}_{\xi}{\cal{L}}_{\xi}g_{st} + 2g^{at}g^{bp}g^{sq}({\cal{L}}_{\xi}g_{pq}) ({\cal{L}}_{\xi}g_{st})~. \label{difsym}\end{equation} Consequently, in general ${\cal{L}}_{\xi}{\cal{L}}_{\xi}g^{ab} = 0$ and ${\cal{L}}_{\xi}{\cal{L}}_{\xi}g_{ab} = 0$ define slightly different invariance concepts. If both conditions are imposed, ${\cal{L}}_{\xi}g_{ab} = \Phi (x) k_{a}k_{b}$ with the null vector $ k_{a}~ (g^{rs} k_{r}k_{s}=0)$, and arbitrary scalar function $\Phi$ follows.
In this case, we call the weak motion generated by $X =\xi^{a} \frac{\partial}{\partial x^{a}}$ a {\em super weak motion}. It entails the existence of a null vector $ k_{a}$ with ${\cal{L}}_{\xi} k^{a} = - k^{a}{\cal{L}}_{\xi}(\ln \Phi)$.\footnote{In general relativity, $T^{ab}= \Phi (x) k^{a}k^{b}$ describes a null-fluid. What is called here super-weak motion would have been named {\em cosymmetric-2-invariance} in (\cite{Papa83}, p. 138).} In Euclidean space ${\cal{L}}_{\xi}g_{ab} = 0$ results. For $p>2$ the situation would become still more complicated.\\ \subsection{First examples and generalizations} \subsubsection{Weak symmetries} \label{subsubsection:weaksym} That a weak symmetry can indeed be weaker than a symmetry is seen already when the Lie derivative is applied twice to a function $f(x^1,... x^n)$: \begin{equation} {\cal{L}}_{X}{\cal{L}}_{X}f = {\cal{L}}_{\xi}{\cal{L}}_{\xi}f = XXf \overset{!}{=} 0~.\label{Lie2}\end{equation} In n-dimensional Euclidean space $R^n$, for a translation in the direction of the k-axis with $\xi^{i}=\delta^{i}_{(k)}$, we obtain from (\ref{Lie2}) $f= x^k f_1(x^1,..,x^{k-1}, x^{k+1},..,x^n) + f_2(x^1,..,x^{k-1}, x^{k+1},..,x^n)$ in place of $f= f(x^1,..,x^{k-1}, x^{k+1},..,x^n)$ for ${\cal{L}}_{\xi}f \overset{!}{=} 0$. For the full translation group of $R^n$, (\ref{Lie2}) leads to a polynomial of degree $n$ in the variables $(x^1,..,x^{n})$ with constant coefficients and linear in each variable $(x^1,..,x^{n})$. Thus, for $n=3$, $f=c_{123} x^1 x^2 x^3 + \Sigma_{r, s=1;r<s}^{3}c_{rs} x^{r} x^{s} +\Sigma_{s=1}^{3}c_{s}x^{s} + c_{0}$ as compared to $f=f_0$ for the translation group as a group of motions.\footnote{Note that this result follows only if definition 3 for a complete set of weak symmetries is applied, cf.
the next section.}\\ For a rotation $R^{i}_{k} = x^{i}\frac{\partial}{\partial x^{k}}-x^{k}\frac{\partial}{\partial x^{i}}~ (i, k$ {\em fixed}), a function satisfying ${\cal{L}}_{\xi}{\cal{L}}_{\xi}f \overset{!}{=}0$ \linebreak is given by $f= \alpha_1(x^1,..,x^{i-1}, x^{i+1},..x^{k-1}, x^{k+1},..x^n)\times \arctan\frac{x^{i}}{x^{k}}$ \linebreak + $\alpha_2(x^1,..,x^{i-1}, x^{i+1},..x^{k-1}, x^{k+1},..x^n)$, with ${\cal{L}}_{\xi}f= -\alpha_{1} \neq 0 $ for this rotation. For the full rotation group $SO(3)$ in 3-dimensional space, $f= f (\sqrt{(x^1)^2 + (x^2)^2 +(x^3)^2}~)$ follows: no genuine weak motion is possible in this case. These examples show that the set of weak-Lie invariant {\em functions} can be larger.\\ A {\em generalization} of a subgroup of the abelian translation group in an n-dimensional Euclidean space is given by: \begin{equation} x^{1'} = x^{1} + G^{1}(x^{k+1},.., x^{n}),..,x^{k'} = x^{k} + G^{k}(x^{k+1},..,x^{n}), x^{(k+1)'}=x^{k+1},..,x^{n'}=x^{n}, \label{freefunc}\end{equation} with arbitrary $C^{\infty}$ functions $G^1, G^2,.., G^k$. Weak Lie symmetry under this group for the function $f(x^{1},..,x^{n})$ leads to the same result as for the translation group, although (\ref{freefunc}) is no longer a Lie group. A link between weak Lie symmetry of scalars and weak Lie motions can be found in conformally flat metrics: $g_{ab}= f(x^1, x^2,.., x^n) \eta_{ab}$ due to \begin{equation} {\cal{L}}_{\xi} {\cal{L}}_{\xi} g_{ab}= ({\cal{L}}_{\xi} {\cal{L}}_{\xi} f) \eta_{ab} + 2 {\cal{L}}_{\xi} f~ {\cal{L}}_{\xi} \eta_{ab} + f~{\cal{L}}_{\xi} {\cal{L}}_{\xi} \eta_{ab}~. \end{equation} In the special case of (\ref{isomet3}) one obtains: \begin{equation} {\cal{L}}_{\xi} {\cal{L}}_{\xi} g_{ab} = (\lambda^2+ \lambda_{,s}\xi^{s}) g_{ab} ~,~ {\cal{L}}_{\xi} {\cal{L}}_{\xi} g^{ab} = (\lambda^2- \lambda_{,s}\xi^{s}) g^{ab} ~. \label{isomet4}\end{equation} Hence, in this case nothing new is obtained by letting the Lie-derivative act twice.
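Both scalar examples above can be verified mechanically with sympy; $\alpha_1, \alpha_2$ are taken as constants here, standing in for functions of the suppressed coordinates.

```python
import sympy as sp

# (i) Full translation group of R^3: each second derivative of the
# multilinear polynomial vanishes, while the first derivatives do not.
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
c0, c1, c2, c3, c12, c13, c23, c123 = sp.symbols('c0 c1 c2 c3 c12 c13 c23 c123')

f_trans = (c123 * x1 * x2 * x3 + c12 * x1 * x2 + c13 * x1 * x3 + c23 * x2 * x3
           + c1 * x1 + c2 * x2 + c3 * x3 + c0)
second = [sp.diff(f_trans, v, 2) for v in (x1, x2, x3)]   # L_X L_X f per axis
first = [sp.diff(f_trans, v) for v in (x1, x2, x3)]       # L_X f, nonzero

# (ii) Single rotation R^1_2 = x^1 d/dx^2 - x^2 d/dx^1 acting on
# f = alpha1*arctan(x^1/x^2) + alpha2.
alpha1, alpha2 = sp.symbols('alpha1 alpha2')
f_rot = alpha1 * sp.atan(x1 / x2) + alpha2
rot = lambda h: x1 * sp.diff(h, x2) - x2 * sp.diff(h, x1)

once = sp.simplify(rot(f_rot))        # equals -alpha1: not a motion
twice = sp.simplify(rot(rot(f_rot)))  # vanishes: a genuine weak symmetry
```

The rotation case reproduces ${\cal L}_{\xi}f=-\alpha_1\neq 0$ together with ${\cal L}_{\xi}{\cal L}_{\xi}f=0$.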
The concept of conformal Killing vector could also be weakened to {\em weak conformal Killing vector} by the demand: \begin{equation} {\cal{L}}_{\xi}{\cal{L}}_{\xi}g_{ab}= \lambda(x^i)g_{ab}~,~{\cal{L}}_{\xi}g_{ab}\neq \mu(x^j) g_{ab}~.\end{equation} \subsubsection{Weak collineations} For {\em weak Lie affine collineations}, we find:\begin{eqnarray} {\cal{L}}_{\xi} {\cal{L}}_{\xi} \Gamma^{~~~c}_{ab}(g) = \xi^{s} \overset{g}{\nabla}_{(a}[{\cal{L}}_{\xi} \Gamma^{~~~c}_{b)s}(g)] + [{\cal{L}}_{\xi} \Gamma^{~~~c}_{bs}(g)]\overset{g}{\nabla}_{a}\xi^{s} \nonumber\\ + [{\cal{L}}_{\xi} \Gamma^{~~~c}_{as}(g)] \overset{g}{\nabla}_{b}\xi^{s} - [{\cal{L}}_{\xi} \Gamma^{~~~s}_{ab}(g)]\overset{g}{\nabla}_{s}\xi^{c}~.\label{bicol1}\end{eqnarray} Insertion of $ {\cal{L}}_{\xi}~ \Gamma^{~~~c}_{ab}(g) = \overset{g}{\nabla}_{a} \overset{g}{\nabla}_{b}\xi^{c}+ R^{c}_{~bda}(g)\xi^{d}$ into (\ref{bicol1}) leads to:\begin{eqnarray} {\cal{L}}_{\xi} {\cal{L}}_{\xi} \Gamma^{~~~c}_{ab}(g) = \xi^{s} \overset{g}{\nabla}_{(a} \overset{g}{\nabla}_{b)}\overset{g}{\nabla}_{s}\xi^{c} + \xi^{s} [\overset{g}{\nabla}_{(a} R^{c}_{~|ds|b)}]\xi^{d} + \xi^{s} R^{c}_{~|ds|(b} \overset{g}{\nabla}_{a)}\xi^{d}\nonumber\\ + R^{c}_{~dbs}\xi^{d} ~\overset{g}{\nabla}_{a}\xi^{s} + R^{c}_{~das}\xi^{d} ~\overset{g}{\nabla}_{b}\xi^{s} - R^{s}_{~dab}\xi^{d} ~\overset{g}{\nabla}_{s}\xi^{c} + \overset{g}{\nabla}_{b}\overset{g}{\nabla}_{s}\xi^{c}\overset{g}{\nabla}_{a}\xi^{s} \nonumber \\+ \overset{g}{\nabla}_{a}\overset{g}{\nabla}_{s}\xi^{c}\overset{g}{\nabla}_{b}\xi^{s} -\overset{g}{\nabla}_{a}\overset{g}{\nabla}_{b}\xi^{s}\overset{g}{\nabla}_{s}\xi^{c}~. \end{eqnarray} In Minkowski space, one obtains the condition: \begin{equation}\xi^{s}\partial_{a} \partial_{b} \partial_{s}~ \xi^{c} + \partial_{b} \partial_{s} \xi^{c}~ \partial_{a} \xi^{s} + \partial_{a} \partial_{s} \xi^{c}~ \partial_{b} \xi^{s} - \partial_{a} \partial_{b} \xi^{s}~ \partial_{s} \xi^{c} = 0~ .
\end{equation} to be satisfied by the generators of the weak Lie affine collineation. A particular solution is given by $ \xi^{c}= \beta^{c}f(\alpha_{rs} x^{r}x^{s})$ with constants $\alpha_{rs}, \beta^{c}$ and $\beta^{s}\alpha_{sa}= 0$ and arbitrary $C^{3}$-function $f$. If spaces with a Riemannian (Lorentzian) metric are considered, one obtains the following expression for weak affine collineations: \begin{equation}{\cal{L}}_{\xi}{\cal{L}}_{\xi}\{_{ab}^{c}\}= -g^{cp}g^{sq} \gamma_{pq}[\overset{g}{\nabla}_{(a}\gamma_{b)s} -\frac{1}{2}\overset{g}{\nabla}_{s}{\cal{L}}_{\xi}\gamma_{ab}] + g^{cs} [\overset{g}{\nabla}_{(a}{\cal{L}}_{\xi}\gamma_{b)s} - \frac{1}{2}\overset{g}{\nabla}_{s}\gamma_{ab} - {\cal{L}}_{\xi}\{_{ab}^{t}\}\gamma_{st}]~, \label{riemcollin} \end{equation} where $\gamma_{ab}$ was defined in (\ref{liedrag}). The concept of weak Lie curvature collineations could also be introduced:~ ${\cal{L}}_{\xi}{\cal{L}}_{\xi}R^{c}_{~dab}(g)=0$. This concept leads to 4th-order PDEs. \subsection{Complete sets of weak Lie motions} \label{subsection:weakogroup} If $g_{ab}$ allows the maximal group of motions with $(^{n+1}_{~~2})$ parameters, no genuine weak Lie motions exist. If $g_{ab}$ allows an r-parameter group of motions, then $(^{n+1}_{~~2}) -r$ genuine weak Lie motions may exist. The case of a Lie group with $(^{n+1}_{~~2}) - 1$ parameters acting as an isometry group cannot occur in n-dimensional space (Fubini 1903). Hence, in space-time, which allows a 10-parameter group as maximal group, no 9-parameter group of motions exists. For 4-dimensional Lorentz-space (with signature $\pm 2$), 8-parameter Lie groups are likewise excluded as isometry groups (Jegorov 1955) (\cite{Petrov1969}, p. 134).\footnote{This does not hold in Finsler geometry, where an 8-parameter Lie group is admitted. Cf.
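The particular solution of the Minkowski condition can be checked mechanically. The sketch below makes the concrete (assumed) choices $\beta=(1,0,0,0)$, $\alpha_{rs}=\mathrm{diag}(0,1,1,1)$ (so that $\beta^{s}\alpha_{sa}=0$) and $f=\sin$; since the condition involves only partial derivatives, no index raising is needed.

```python
import itertools
import sympy as sp

# Condition: xi^s d_a d_b d_s xi^c + d_b d_s xi^c d_a xi^s
#            + d_a d_s xi^c d_b xi^s - d_a d_b xi^s d_s xi^c = 0
# for xi^c = beta^c f(alpha_rs x^r x^s) with beta^s alpha_sa = 0.
x = sp.symbols('x0 x1 x2 x3')
u = x[1]**2 + x[2]**2 + x[3]**2        # alpha_rs x^r x^s for the chosen alpha
xi = [sp.sin(u), 0, 0, 0]              # beta^c f(u) with beta = (1,0,0,0)

def d(expr, *idx):
    """Repeated partial derivatives d_{i1} d_{i2} ... expr."""
    for i in idx:
        expr = sp.diff(expr, x[i])
    return expr

residuals = [
    sp.simplify(sum(xi[s] * d(xi[c], a, b, s)
                    + d(xi[c], b, s) * d(xi[s], a)
                    + d(xi[c], a, s) * d(xi[s], b)
                    - d(xi[s], a, b) * d(xi[c], s)
                    for s in range(4)))
    for a, b, c in itertools.product(range(4), repeat=3)]
```

Every term vanishes because each contraction over $s$ meets either $\beta^{s}\alpha_{st}=0$ or a derivative of $\xi$ with respect to $x^{0}$, on which $u$ does not depend.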
\cite{Bogo1977}, \cite{Bogo1994}, \cite{Bogoen1999}.} Thus, besides the maximal group, the largest group of motions in space-time is a 7-parameter group.\footnote{Petrov's claim that for 4-dimensional Lorentz spaces 7-parameter Lie groups are excluded is not correct, cf. (\cite{Petrov1969}, p. 134), (\cite{bible}, p. 122).} In this case, the largest group of weak Lie motions would then be a 3-parameter Lie group.\\ According to (\ref{redu}), a consequence for weak motions is:\footnote{If an extended Lie algebra is used, on the r.h.s. of (\ref{Liebrak2}) the term $2 c_{ij~, (a}^{~~k} g_{b)c}\xi_{k}^{c}$ must be added.} \begin{equation} ({\cal{L}}_{\xi_{i}}{\cal{L}}_{\xi_{j}} - {\cal{L}}_{\xi_{j}}{\cal{L}}_{\xi_{i}}) g_{ab}= {\cal{L}}_{({\cal{L}}_{\xi_{i}}\xi_{j})} g_{ab}= {\cal{L}}_{c_{ij}^{~k}\xi_{k}} g_{ab} \\=c_{ij}^{~k}{\cal{L}}_{\xi_{k}}g_{ab}.\label{Liebrak2}\end{equation} \noindent (\ref{Liebrak2}) provides a hint about how a {\em group of weak Lie symmetries} is to be defined when a set of vector fields $\xi, \eta, \zeta, ..$ has been found satisfying (\ref{isomet5}). For genuine weak motions, not all of the following equations can be satisfied: ${\cal{L}}_{\eta}{\cal{L}}_{\xi}g_{ab}= 0,~$ ${\cal{L}}_{\xi}{\cal{L}}_{\eta}g_{ab}= 0,~{\cal{L}}_{\eta}{\cal{L}}_{\zeta}g_{ab}= 0,~{\cal{L}}_{\zeta}{\cal{L}}_{\eta}g_{ab}= 0,~ {\cal{L}}_{\zeta}{\cal{L}}_{\xi}g_{ab}= 0, ~{\cal{L}}_{\xi}{\cal{L}}_{\zeta}g_{ab}= 0,~ ... .$ If the r vectors $\xi_{(k)}, k= 1, 2, .. , r$ are the infinitesimal generators of a Lie group, the above demand {\em in general} leads into an impasse: instead of its intended role as a weak Lie-invariance group, it reduces to an isometry group. This is due to (\ref{redu}) or (\ref{Liebrak2}). An exception holds if some of the vector fields commute.
Consequently, the following definition may be introduced:\\ {\em Definition 2} (strong complete set):\\ A Lie algebra presents a strong complete set of weak Lie symmetries if at least one of the corresponding Lie algebra elements does not generate a motion ($ {\cal{L}}_{\xi_{(j)}} g_{ab}\neq 0$ for one $(j)$, at least) and the following $(^{m+1}_{~2})~, m>1$ conditions hold:\begin{equation} {\cal{L}}_{\xi_{(i)}}{\cal{L}}_{\xi_{(j)}} g_{ab}= 0,~ \end{equation} for $(i)=(j)$ and $(i)<(j), (i), (j) = 1, 2, .., m$ or, for $(i)=(j)$ and $(i)>(j),~ (i), (j) = 1, 2, .., m.$\\ The remaining ${\cal{L}}_{\xi_{(i)}}{\cal{L}}_{\xi_{(j)}} g_{ab}\neq 0$ for $(i)>(j)~ [(i)<(j)]$ are then determined through (\ref{redu}). In general, we will demand that none of the vector fields $X_{(i)}$ generates a motion. A less demanding definition would be:\\ {\em Definition 3} (complete set):\\ A Lie algebra leads to a complete set of weak Lie symmetries if each of its infinitesimal operators $X_{i}=\xi_{(i)}^{~a}\frac{\partial}{\partial x^{a}}$ generates a weak Lie motion: ${\cal{L}}_{\xi_{(i)}}{\cal{L}}_{\xi_{(i)}} g_{ab} = 0,~{\cal{L}}_{\xi_{(i)}} g_{ab} \neq 0$ for every $ i = 1, 2, ...,m.$\\ In section \ref{subsection:G3weak}, examples will be given showing that the alternative definitions 2 and 3 for complete sets of weak Lie symmetries lead to different results. In general, we will prefer definition 2.\\ As will be seen in the next section, a consequence is that if $g(X,Y)$ allows the {\em maximal} group of motions, weak Lie motions for $g(X,Y)$ do not exist or reduce to conformal motions. As an example: in 2-dimensional Euclidean space with a 3-parameter maximal group (two translations and one rotation), no genuine weak (Lie) motion exists. The other extremal case is the non-existence of genuine weak Lie motions, e.g., for the rotation group together with definition 2.
The Kasner metric (\ref{Kasner}), which allows three space translations as isometries, is a candidate for not leading to genuine weak Lie motions. \section{Weak Lie invariance} \label{section:weakinvar} \noindent We now want to determine the metrics allowing a time translation and the rotation group as weak Lie motions. The group is chosen such that, as an isometry group, it describes {\em static, spherically symmetric (s.s.s.) metrics}. Thus we have to allow for four vector fields $\xi_{(i)}, i=1, 2, 3, 4 $ forming a Lie algebra with a 2-parameter abelian subalgebra and then drag the arbitrary metric $g_{ab}$ twice. At first, definition 3 is applied and the target metric $\gamma_{ab}$ is calculated. \subsection{Weakly static metrics.} To begin, we demand that only the time translation $T= X_1$ with components $ \xi_{(1)}^{s}= \delta_{0}^{s}$ generates a weak motion: ${\cal{L}}_{X_1}{\cal{L}}_{X_1}g_{ab} = 0$. The resulting class of metrics is: \begin{equation} g_{ab} = x^0 c_{ab}(x^1, x^2, x^3) + d_{ab}(x^1, x^2, x^3)~,\label{metgstat}\end{equation} with arbitrary symmetric tensors $c_{ab}, d_{ab}$. The class remains invariant with regard to linear transformations in time $x^0 \rightarrow \alpha(x^1, x^2, x^3) x^0 +\beta(x^1, x^2, x^3)$, with $\alpha, \beta$ arbitrary functions. \subsection{Weak spherical symmetry} Now, the three generators of spatial rotations SO(3) in a representation using polar coordinates $x^1= r, x^{2} = \theta, x^{3} = \phi$ are added.
Its corresponding generators are: \begin{equation} \xi_{(2)}^{s}= \delta_{3}^{s}, \xi_{(3)}^{s}= -sin x^3 \delta_{2}^{s} - cos x^{3} ctg x^{2}~ \delta_{3}^{s}, \xi_{(4)}^{s}= cos x^3 \delta_{2}^{s} - sin x^{3} ctg x^{2}~ \delta_{3}^{s}~.\end{equation} Lie-dragging with the time translation and with $ \xi_{(2)}$ forming the abelian subgroup leads to $ \overset{1}{\gamma}_{ab}=g_{ab,0}~, \overset{2}{\gamma}_{ab}=g_{ab,3},$ and to the weakly Lie invariant metric (i.e., with ${\cal{L}}_{X_1}{\cal{L}}_{X_1}g_{ab} = 0,~{\cal{L}}_{X_{2}}{\cal{L}}_{X_{2}}g_{ab} = 0)$ \begin{equation} g_{ab} = x^0 x^3 c_{ab}(x^1,x^2) + x^0 d_{ab}(x^1,x^2) + x^3 e_{ab}(x^1,x^2) + f_{ab}(x^1,x^2) \label{metg4}\end{equation} with four arbitrary bilinear forms $ c_{ab}, d_{ab}, e_{ab}, f_{ab}$. Lie-dragging with $\xi_{(3)}$ and $\xi_{(4)}$ applied to any of these bilinear forms results in the following equations (using $f_{ab}$ for the presentation):\begin{eqnarray} \overset{3}{\gamma}_{ab}= -sin x^3 f_{ab,2} - 2 cos x^3 f_{2(a}\delta_{b)}^{3} + 2 sin x^{3} ctg x^{2}f_{3(a}\delta_{b)}^{3} + 2\frac{cos x^3}{sin^2 x^2}f_{3(a}\delta_{b)}^{2}~,\\ \overset{4}{\gamma}_{ab}= cos x^3 f_{ab,2} - 2 sin x^3 f_{2(a}\delta_{b)}^{3} - 2 cos x^{3} ctg x^{2}f_{3(a}\delta_{b)}^{3} + 2\frac{sin x^3}{sin^2 x^2}f_{3(a}\delta_{b)}^{2}~.~~\end{eqnarray} The demand $\overset{2}{\gamma}_{ab}= \overset{3}{\gamma}_{ab}= \overset{4}{\gamma}_{ab}=0 $, i.e., that {\em spherical symmetry} hold, leads to $c_{ab}=e_{ab}=0$ and to the well-known result for $f_{ab}, d_{ab}$: \begin{equation}f_{ab}= \alpha(x^1)\delta_{a}^{0} \delta_{b}^{0} - \beta(x^1)\delta_{a}^{1} \delta_{b}^{1}- \epsilon(x^1)[\delta_{a}^{2} \delta_{b}^{2} + sin^2 x^{2} \delta_{a}^{3} \delta_{b}^{3}]\label{sssol}\end{equation} with two free functions $\alpha(x^1), \epsilon(x^1)$.\footnote{One of the functions $\alpha(x^1), \beta(x^1)$ is superfluous because, locally, a 2-dimensional space is conformally flat.
$f_{01}=f_{23}=0$ follows from the rotation group acting on a 2-dimensional subspace. In addition, here $f_{02}= f_{03}= f_{12}= f_{13}= 0$ has been used.}\\ If definition 3 for complete sets of weak symmetries is applied, two further PDEs must be satisfied. If all generators of the rotation group are taken into account, then the result is \begin{equation} \gamma_{ab} = x^0 d_{ab}(x^1,x^2) + f_{ab}(x^1,x^2) \label{metg5}\end{equation} with two bilinear forms $ d_{ab}, f_{ab}$ having the same form: \begin{equation}f_{ab}= \alpha(x^1)\delta_{a}^{0} \delta_{b}^{0} - \beta(x^1)\delta_{a}^{1} \delta_{b}^{1}- [x^{2}\epsilon_{1}(x^1)+ \epsilon_{2}(x^1)][\delta_{a}^{2} \delta_{b}^{2} + sin^2 x^{2} \delta_{a}^{3} \delta_{b}^{3}]~\label{nsssol}.\end{equation} For the proof, we do not reproduce here the lengthy full expressions for \linebreak ${\cal{L}}_{\xi_{(3)}} {\cal{L}}_{\xi_{(3)}} f_{ab} =0 $ and ${\cal{L}}_{\xi_{(4)}} {\cal{L}}_{\xi_{(4)}} f_{ab} =0$, but give only the equations for the components $f_{22}, f_{33}$: \begin{eqnarray}{\cal{L}}_{\xi_{(3)}} {\cal{L}}_{\xi_{(3)}} f_{22} = -sin^2 x^3 f_{22,2,2} + 2\frac{cos^{2} x^3}{sin^2 x^2}[-f_{22} + \frac{f_{33}}{sin^2 x^2}] \overset{!}{=} 0~,\\{\cal{L}}_{\xi_{(4)}} {\cal{L}}_{\xi_{(4)}} f_{22} = -cos^2 x^3 f_{22,2,2} + 2\frac{sin^{2} x^3}{sin^2 x^2}[-f_{22} + \frac{f_{33}}{sin^2 x^2}] \overset{!}{=} 0~. \end{eqnarray} The consequences $ f_{22,2,2}=0$ and $f_{33}=sin^2 x^2 f_{22}$ are obvious.
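The rotation generators $\xi_{(2)}, \xi_{(3)}, \xi_{(4)}$ used above can be checked mechanically: they close into $so(3)$, the spherically symmetric angular block with constant coefficient $\epsilon$ is annihilated by a single Lie-dragging, while the $x^2$-dependent coefficient of (\ref{nsssol}) is not. The following sympy sketch (an added check, working only in the $(\theta,\phi)$-block on which the generators act) confirms this:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')   # x^2, x^3
coords = (th, ph)
n = 2

def bracket(X, Y):
    # [X,Y]^a = X^c d_c Y^a - Y^c d_c X^a
    return [sp.simplify(sum(X[c]*sp.diff(Y[a], coords[c])
                            - Y[c]*sp.diff(X[a], coords[c]) for c in range(n)))
            for a in range(n)]

def lie_metric(X, g):
    # (L_X g)_ab = X^c d_c g_ab + g_cb d_a X^c + g_ac d_b X^c
    out = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            out[a, b] = sum(X[c]*sp.diff(g[a, b], coords[c])
                            + g[c, b]*sp.diff(X[c], coords[a])
                            + g[a, c]*sp.diff(X[c], coords[b]) for c in range(n))
    return out.applyfunc(sp.simplify)

xi2 = [sp.Integer(0), sp.Integer(1)]
xi3 = [-sp.sin(ph), -sp.cos(ph)*sp.cot(th)]
xi4 = [ sp.cos(ph), -sp.sin(ph)*sp.cot(th)]

# closure into so(3): [xi2,xi3] = -xi4, [xi2,xi4] = xi3, [xi3,xi4] = -xi2
eq = lambda U, V: all(sp.simplify(u - v) == 0 for u, v in zip(U, V))
assert eq(bracket(xi2, xi3), [-c for c in xi4])
assert eq(bracket(xi2, xi4), xi3)
assert eq(bracket(xi3, xi4), [-c for c in xi2])

# constant-coefficient sphere block: an isometry of all three generators
eps, e1, e2 = sp.symbols('epsilon epsilon1 epsilon2')
g_iso = sp.diag(eps, eps*sp.sin(th)**2)
for xi in (xi2, xi3, xi4):
    assert lie_metric(xi, g_iso) == sp.zeros(2, 2)

# with the x^2-dependent coefficient the first dragging no longer vanishes
u = e1*th + e2
g_weak = sp.diag(u, u*sp.sin(th)**2)
assert sp.simplify(lie_metric(xi3, g_weak)[0, 0] + sp.sin(ph)*e1) == 0
```

(Sign conventions for the dragged components differ from the text by the overall minus sign in (\ref{nsssol}); the nonvanishing of the first dragging for $\epsilon_1 \neq 0$ is the point.)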
That (\ref{nsssol}) is a genuine solution is shown by $\gamma_{22}= {\cal{L}}_{\xi_{(3)}} f_{22}= -sin x^3~ \epsilon_1 (x^1)\neq 0$ and by $\gamma_{33}= {\cal{L}}_{\xi_{(3)}} f_{33}= -sin x^3 sin^2 x^2~ \epsilon_1 (x^1)\neq 0$ if $\epsilon_1 (x^1)\neq 0.$ The surface $x^1= const, x^0= const$ has Gaussian curvature: \begin{equation} K = \frac{1}{2(\epsilon_1 x^2 + \epsilon_2)^2}[-\epsilon_1 ctg x^2 + 2\epsilon_1 x^2 + 2 \epsilon_2 +\frac{(\epsilon_1)^2}{\epsilon_1 x^2 + \epsilon_2}].\end{equation} $\epsilon_1, \epsilon_2 $ are now constants. For $\epsilon_1 \rightarrow 0$ we obtain the constant curvature of the 2-sphere.\\ The time translation and the 3 generators of the rotation group form a complete set of weak Lie motions; this shows that definition 3 is not empty. However, if it is asked that the rotation group generate a {\em strong} set of weak symmetries according to definition 2, then the result is very restrictive. The conditions ${\cal{L}}_{\xi_{(2)}} {\cal{L}}_{\xi_{(3)}} f_{ab} = 0 = {\cal{L}}_{\xi_{(2)}} {\cal{L}}_{\xi_{(4)}} f_{ab}$ for equations (\ref{metg4}), (\ref{nsssol}) lead to the remaining metric tensor of (\ref{metg5}). If $ {\cal{L}}_{\xi_{(3)}} {\cal{L}}_{\xi_{(4)}}\gamma_{33} = 0 $ is studied for $f_{ab}$, then $ {\cal{L}}_{\xi_{(3)}} {\cal{L}}_{\xi_{(4)}} f_{ab} \neq 0 $ due to the only nonvanishing expression ${\cal{L}}_{\xi_{(3)}} {\cal{L}}_{\xi_{(4)}} f_{33} = sin x^2 cos x^2~ \epsilon_1(x^1)$ for $\epsilon_1(x^1)\neq 0.$ Thus the demand that the rotation group in 3 dimensions generates a {\em strong} set of weak Lie symmetries according to definition 2 enforces $\epsilon_{1}(x^1)= 0$ and reduces to an isometry. Nevertheless, the resulting spherically symmetric metric is only weakly static.
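The curvature expression above can be reproduced symbolically. A sketch (assuming the induced 2-metric $ds^2 = u\, d\theta^2 + u \sin^2\theta\, d\phi^2$ with $u = \epsilon_1 \theta + \epsilon_2 > 0$, $0 < \theta < \pi$, and the standard Gaussian-curvature formula for an orthogonal metric):

```python
import sympy as sp

th = sp.symbols('theta')
e1, e2 = sp.symbols('epsilon1 epsilon2')

u = e1*th + e2                     # epsilon_1 x^2 + epsilon_2
E = u                              # g_{theta theta}
G = u*sp.sin(th)**2                # g_{phi phi}
sqrtEG = u*sp.sin(th)              # sqrt(EG), valid for u > 0, 0 < theta < pi

# Gaussian curvature of ds^2 = E dtheta^2 + G dphi^2 (no phi dependence):
# K = -1/(2 sqrt(EG)) * d/dtheta( G_theta / sqrt(EG) )
K = sp.simplify(-sp.diff(sp.diff(G, th)/sqrtEG, th)/(2*sqrtEG))

# the expression given in the text
K_text = (-e1*sp.cot(th) + 2*e1*th + 2*e2 + e1**2/u)/(2*u**2)

d = sp.simplify(K - K_text)
assert d == 0 or d.equals(0)
# the limit epsilon_1 -> 0 gives the constant curvature 1/epsilon_2 of a sphere
assert sp.simplify(K.subs(e1, 0) - 1/e2) == 0
```

The check also makes the limit statement explicit: for $\epsilon_1 \rightarrow 0$ one recovers $K = 1/\epsilon_2$.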
\\ \subsection{The group $G_3$ acting as a group of weak Lie motions} \label{subsection:G3weak} In taking up the example of a $G_3$ acting on $V_3$ from section \ref{section:Liedrag} with Lie algebra (\ref{LieG3}), we first apply definition 3 to a scalar $f(x^1, x^2, x^3)$. If the generators are to lead to motions, then the only solution is $f= constant$. Definition 3 for a complete set of weak Lie motions leads to:\footnote{The calculations are sketched in appendix 2.} \begin{equation} f= a_0 x^2 x^3 + b_0 x^{1} (x^2-x^1 x^3) + c_0 x^1 x^3 + b_1x^2 + c_1 x^3 + d_1 x^1 + d_0~, \label{eqdef3}\end{equation} while definition 2 results in: \begin{equation} f= c_0 (x^1 x^3 + x^2) + c_1 x^3 + d_1 x^1 + d_0~.\label{eqdef4}\end{equation} We note that the only one of the 9 possible demands so far unused, i.e., ${\cal{L}}_{\xi_{(3)}}{\cal{L}}_{\xi_{(2)}}g_{ab} = 0 $, reduces (\ref{eqdef4}) to \begin{equation} f= c_1 x^3 + d_1 x^1 + d_0~.\label{eqdef5} \end{equation} Applying $G_3$ to the metric, the following weakly Lie-invariant metric is obtained: \begin{gather*} {\gamma}_{ab}=\\ x^{2} \begin{pmatrix} \overset{(0)}{\alpha}_{11} & \overset{(0)}{\alpha}_{12} & P_1 \\ \overset{(0)}{\alpha}_{21} & \overset{(0)}{\alpha}_{22} & P'_1 \\ P_1 & P'_1 & P_2 \end{pmatrix} + x^{3} [~ x^{1}\begin{pmatrix} \overset{(0)}{\alpha}_{11} & \overset{(0)}{\alpha}_{12} & P_1 \\ \overset{(0)}{\alpha}_{21} & \overset{(0)}{\alpha}_{22} & P'_1 \\ P_1 & P'_1 & P_2 \end{pmatrix} + \begin{pmatrix}\overset{(0)}{\beta}_{11} & \overset{(0)}{\beta}_{12} & \tilde{P}_1 \\ \overset{(0)}{\beta}_{21} & \overset{(0)}{\beta}_{22} & \tilde{P'}_1 \\ \tilde{P}_1 & \tilde{P'}_1 & \tilde{P}_2 \end{pmatrix}] + \begin{pmatrix} Q_1 & Q_1 & Q_2 \\ Q_1 & \tilde{Q}_1 & Q_2 \\ Q_2 & Q'_2 & Q_3 \end{pmatrix}, \label{G3weak}\end{gather*} where $P_i, P'_i, \tilde{P}_i, Q_i, \tilde{Q}_i, Q'_i$ are polynomials in the coordinate $x^1$ of order $i$, the coefficients of which are not all independent:\\ $P_1 =\overset{(0)}{\alpha}_{12} x^1
+c_{13}, P_1'= \overset{(0)}{\alpha}_{22} x^1 + c_{23}, P_2 = \overset{(0)}{\alpha}_{22} (x^{1})^2 + 2c_{23}x^1 + c_{33},\\ Q_1= \overset{(0)}{l}_{11}x^1+\overset{(0)}{m}_{11}, Q_1= \overset{(0)}{l}_{12}x^1+\overset{(0)}{m}_{12}, \tilde{Q}_1= \overset{(0)}{l}_{22}x^1+\overset{(0)}{m}_{22}, \\Q_2= \overset{(0)}{l}_{12}(x^1)^2+\overset{(0)}{m}_{13}x^1 + \overset{(0)}{k}_{13},~Q_2= \overset{(0)}{l}_{22}(x^1)^2+\overset{(0)}{m}_{23}x^1 + \overset{(0)}{k}_{23},\\ Q_3= \overset{(0)}{l}_{22}(x^1)^3+\overset{(0)}{m}_{23}(x^1)^2 + \overset{(0)}{k}_{33}x^1 + m_{33} $ and $\overset{(0)}{\alpha}_{ab}, (a, b = 1, 2),~ c_{ij}, \overset{(0)}{l}_{ij},~ \overset{(0)}{m}_{ij}$ and $ \overset{(0)}{k}_{ij}$ are constants. In the polynomials $\tilde{P}_1,\tilde{P'}_1,\tilde{P}_2,$ the constants $\alpha_{ab}, c_{ab}$ are replaced by the set of independent constants $\beta_{ab}, d_{ab}$. Thus definition 2 is not empty. Two independent matrices of the type that occurred for the group acting as an isometry group appear now, together with a third, new matrix.\\ Definition 3 leads to a different complete set of weak Lie motions for which the metric takes the form: $g_{ab} = d_{ab}(x^1) x^2 + e_{ab}(x^1) x^3 + \epsilon_{ab}(x^1)$ with $d, e, \epsilon$ expressed by matrices of the form: \begin{gather*} \begin{pmatrix} P_{11} & P_{12} & Q_1\\ P_{12} & P_{22} & Q_2\\ Q_1 & Q_2 & M \end{pmatrix}, \label{G3weakdef3}\end{gather*} where $P_{ik}$ are polynomials of 1st degree, $Q_i$ of 2nd degree, and $M$ a polynomial of 3rd degree. \section{A new algebra structure} \label{section:newalgeb} For Lie-dragging, up to now we have mostly taken vector fields forming Lie algebras corresponding to Lie groups of point transformations. In the following, after an introductory section, we consider more general types both of groups and algebras in sections \ref{section:algebext} and \ref{section:extmotion}.
\subsection{Lie-dragging for vector fields not forming Lie algebras} \label{subsection: non-Lie} Already in (\ref{freefunc}) of section \ref{subsubsection:weaksym}, vector fields containing free functions were considered. We now continue with vector fields $X_1=\xi^{r}\frac{\partial}{\partial x^{r}}; ~X_2= \eta^{s}\frac{\partial}{\partial x^{s}}$ with $ \xi^{r}= f(x^0)\delta_{1}^{r}, \eta^{s}= h(x^{1})\delta_{0}^{s}$ such that \begin{equation}[X_1, X_2] = f(x^0) H(x^{1}) X_2 - h(x^{1}) F(x^0) X_1~.\label{extLiealg}\end{equation} Here, $ F(x^0)= \frac{d(ln f(x^{0}))}{dx^{0}}, H(x^{1})= \frac{d(ln h(x^{1}))}{dx^{1}}$. The finite transformations belonging to $X_1$ and $X_2$, respectively, are generalized time- and space-translations \begin{equation}x^0 \rightarrow x^{0'} = x^0 + h(x^1) ; ~x^1 \rightarrow x^{1'} = x^1 + f(x^0)~ \label{trafo}\end{equation} leaving invariant the time interval $|x^{0}_{(i)} -x^{0}_{(j)}|$ and the space interval $|x^{1}_{(i)} - x^{1}_{(j)}|$ between two events $(x^{0}_{(i)},x^{1}_{(i)})$ and $(x^{0}_{(j)},x^{1}_{(j)})$. Each of the transformations \begin{equation} x^0 \rightarrow x^{0'} = x^0 + h(x^1) ; ~x^1 \rightarrow x^{1'} = x^1 + a ,\label{trafospez1}\end{equation} and \begin{equation} x^1 \rightarrow x^{1'} = x^1 + f(x^0) ; ~x^0 \rightarrow x^{0'} = x^0 + b\label{trafospez2}\end{equation} forms a group: $ x^{0''} = x^{0'} + k(x^{1'}) = x^0 + h(x^1) + k(x^{1}+a),~ x^{1''} = x^{1}+ A+ a$, and $ x^{1''} = x^{1'} + g(x^{0'}) = x^{1} + f(x^0) + g(x^0+b),~ x^{0''} = x^{0} + B +b$. However, these groups are {\em not} Lie groups: in part, the Lie-group parameters have been replaced by arbitrary functions. In this case, the algebra (\ref{extLiealg}) reduces to either \begin{equation}[X_1, X_2] = H(x^1) X_2~. \label{specextLiealg1}\end{equation} or to \begin{equation}[X_1, X_2] = - F(x^0) X_1~. \label{specextLiealg2}\end{equation} Likewise, (\ref{extLiealg}), (\ref{specextLiealg1}), and (\ref{specextLiealg2}) are not Lie algebras.
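The bracket (\ref{extLiealg}) can be confirmed mechanically. A sympy sketch (an added check, with arbitrary smooth functions $f$, $h$):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
coords = (x0, x1)
f = sp.Function('f')(x0)
h = sp.Function('h')(x1)

def bracket(X, Y):
    # [X,Y]^a = X^c d_c Y^a - Y^c d_c X^a
    return [sum(X[c]*sp.diff(Y[a], coords[c])
                - Y[c]*sp.diff(X[a], coords[c]) for c in range(2))
            for a in range(2)]

X1 = [sp.Integer(0), f]   # f(x^0) d/dx^1
X2 = [h, sp.Integer(0)]   # h(x^1) d/dx^0

F = sp.diff(sp.log(f), x0)   # F = d ln f / dx^0
H = sp.diff(sp.log(h), x1)   # H = d ln h / dx^1

lhs = bracket(X1, X2)
rhs = [f*H*X2[a] - h*F*X1[a] for a in range(2)]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```

Setting $f = \mathrm{const}$ or $h = \mathrm{const}$ in this script reproduces the reduced cases (\ref{specextLiealg1}) and (\ref{specextLiealg2}).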
Both transformations (\ref{trafo}) applied together: $ x^{0''} = x^{0'} + k(x^{1'}) = x^0 + h(x^1) + k(x^{1} + f(x^{0})), ~ x^{1''} = x^{1'} + g(x^{0'}) = x^1 + f(x^0) + g(x^{0} + h(x^{1}))$ do not even form a group. The class of functions involved may be narrowed considerably by the demand that the function $f$ be of a fixed special type, e.g., a polynomial of degree $p$, or $f(x^0) = a sin x^0 + b cos x^0 $. In these cases, just one function with constant coefficients occurs in the group; the group transformations change only the coefficients. (\ref{trafospez2}) is a subgroup of the so-called {\em Mach-Poincar\'e group} $G_{4}(3)$ \cite{treder1972}, (\cite{goenner1981}, pp. 85-101):\begin{equation} x^{a'} = A_{~r}^{a} x^{r} + f^{a}(x^{0}),~ x^{0'}= x^{0} + b,~~ A_{~r}^{a}A_{~b}^{r}= \delta_{b}^{a}.\end{equation} This group plays a role in Galilean relative mechanics. A generalization is the group $G_{1}(6)$ of transformations leaving invariant the observables describing a {\em rigid body}; six free functions of $x^{0}$ and one Lie-group parameter appear: \begin{equation} x^{i'} = A_{~j}^{i}(x^{0}) x^{j} + f^{i}(x^{0}),~ x^{0'}= x^{0} + b,~~ A_{~j}^{i}(x^{0})A_{~k}^{j}(x^{0})=\delta_{k}^{i},~ (i,j =1,2,3).\end{equation} The corresponding seven algebra generators are: \begin{eqnarray} T = \frac{\partial}{\partial x^{0}},~ X_{i}= f_{i}(x^{0})\frac{\partial}{\partial x^{i}}~~(i~ not~ summed),\nonumber ~~~~~~\\ Y_1 = \omega^{2}_{3}(x^{3}\frac{\partial}{\partial x^{2}}- x^{2}\frac{\partial}{\partial x^{3}} ), Y_2 = \omega^{1}_{3}(x^{3}\frac{\partial}{\partial x^{1}}- x^{1}\frac{\partial}{\partial x^{3}}), Y_3 = \omega^{1}_{2}(x^{2}\frac{\partial}{\partial x^{1}}- x^{1}\frac{\partial}{\partial x^{2}}) \nonumber\\ \end{eqnarray} with $ \omega^{i}_{j}= \omega^{i}_{j}(x^{0})$.
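Representative brackets of these generators can be verified mechanically before writing out the full algebra below; a sympy sketch (my own check, with the logarithmic-derivative coefficients written out explicitly):

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
coords = (x0, x1, x2, x3)
n = 4

def bracket(X, Y):
    # [X,Y]^a = X^c d_c Y^a - Y^c d_c X^a
    return [sum(X[c]*sp.diff(Y[a], coords[c])
                - Y[c]*sp.diff(X[a], coords[c]) for c in range(n))
            for a in range(n)]

def eq(U, V):
    return all(sp.simplify(u - v) == 0 for u, v in zip(U, V))

f1, f2, f3 = [sp.Function(s)(x0) for s in ('f1', 'f2', 'f3')]
w23, w13, w12 = [sp.Function(s)(x0) for s in ('w23', 'w13', 'w12')]

T  = [sp.Integer(1), 0, 0, 0]
X1 = [0, f1, 0, 0]
X2 = [0, 0, f2, 0]
Y1 = [0, 0, w23*x3, -w23*x2]
Y2 = [0, w13*x3, 0, -w13*x1]
Y3 = [0, w12*x2, -w12*x1, 0]

assert eq(bracket(T, X1), [sp.diff(sp.log(f1), x0)*c for c in X1])
assert eq(bracket(X1, X2), [0, 0, 0, 0])
assert eq(bracket(T, Y1), [sp.diff(sp.log(w23), x0)*c for c in Y1])
assert eq(bracket(Y1, Y2), [-(w13*w23/w12)*c for c in Y3])
assert eq(bracket(X1, Y3), [-(f1/f2)*w12*c for c in X2])
```

The remaining brackets of the algebra can be checked in the same way.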
The corresponding algebra is given by:\begin{eqnarray} [T, T] = 0,~[ T, X_{i}]= F_{i}(x^0) X_{i},~ F_{i}= \frac{d}{dx^{0}}ln(f_{i} (x^{0})),~[X_{i}, X_{j}]= 0 ,~(i,j = 1,2,3)~~~~\nonumber\\~ [T, Y_1]= \Omega^{2}_{3}(x^{0})Y_1,~[T, Y_2]= \Omega^{3}_{1}(x^{0})Y_2, ~[T, Y_3]= \Omega^{1}_{2}(x^{0})Y_3,~ \Omega^{i}_{j}=\frac{d}{dx^{0}}ln(\omega^{i}_{j}(x^{0})),\nonumber\\~ [Y_1, Y_2]= -\frac{\omega^{1}_{3}\omega^{2}_{3}}{\omega^{1}_{2}}~Y_3,~ [Y_2, Y_3]= -\frac{\omega^{1}_{2}\omega^{1}_{3}}{\omega^{2}_{3}}~Y_1,~ [Y_1, Y_3]= -\frac{\omega^{1}_{2}\omega^{3}_{2}}{\omega^{1}_{3}}~Y_2,~~~~~\nonumber \\~ [X_1, Y_1] = 0,~ [X_1, Y_2]= \frac{f_1(x^{0})}{f_3(x^{0})}\omega^{1}_{3}~X_3,~ [X_1, Y_3] = - \frac{f_1(x^{0})}{f_2(x^{0})}\omega^{1}_{2}~X_2,~~~~~\nonumber \\~[X_2, Y_1]= \frac{f_2(x^{0})}{f_3(x^{0})}\omega^{2}_{3}~X_3,~ [X_2, Y_2] = 0,~ [X_2, Y_3] = \frac{f_2(x^{0})}{f_1(x^{0})}\omega^{1}_{2}~X_1, ~~~~~~~\nonumber \\~ [X_3, Y_1]= \frac{f_3(x^{0})}{f_2(x^{0})}\omega^{2}_{3}~X_2,~ [X_3, Y_2] = - \frac{f_3(x^{0})}{f_1(x^{0})}\omega^{1}_{3}~X_1,~ [X_3, Y_3] = 0 .~~~~~~ \label{rigbody} \end{eqnarray} There exist further groups of this non-Lie type occurring in classical mechanics like Weyl's kinematical group $G_{3}(6)$ and the covariance group of the Hamilton-Jacobi equation $G_{7}(3)$ or, as a subgroup in non-relativistic quantum mechanics, the covariance group of the Schr\"odinger equation $G_{12}(0)$, cf. \cite{goenner1981}. The structure functions of all these groups depend on a single coordinate, the time. \section{Extended Lie Algebras} \label{section:algebext} In the following, we will deal with a subbundle of the tangent bundle of n-dimensional Euclidean or Lorentz space.
We will permit that the structure constants in the defining relations for a Lie algebra become dependent on the components $\xi^{a}_{~i}$ of the vector fields $X_{i}(x)$: they will become {\em structure functions}.\\ {\em Definition 4}:\\ The algebra \begin{equation} [X_i, X_j ]= c_{ij}^{~k}(x^1, x^2, ..., x^{r}) X_k \label{Killing}\end{equation} with structure functions $c_{ij}^{~k}(x^1, x^2, ...,x^{r})$ is called an {\em extended Lie algebra}.\\ The Lie algebra elements form an ``involutive distribution''. This is ``a smooth distribution $V$ on a smooth manifold $M$, i.e., a smooth vector subbundle of the tangent bundle'' $TM$. The Lie brackets constitute the composition law; the injection $V \hookrightarrow TM$ functions as the anchor map (cf. \cite{Marle2008}, p. 13). This is a simple example of a {\em tangent Lie algebroid} (cf. also (\cite{Mack2005}, p. 100 and example 2.7, p. 105)).\footnote{Closely related, but different structures are {\em family of Lie algebras} \cite{DouLaz66}, \cite{Copp77} and {\em variable Lie algebras} (\cite{LepLud94}, p. 115).} Nevertheless, the involutive distribution used here can also be considered a subset of the infinite-dimensional ``Lie-algebra'' ${\cal B}(M)$ of footnote 5. After completion of the paper, I learned of some of the historical background of (\ref{Killing}): It has already occurred as the condition for closure of a complete set of linear, homogeneous operators belonging to a complete system of 1st order PDE's in Jacobi's famous paper of 1862 (\cite{Jacobi1862}, \S 26, p. 40).\footnote{In Jacobi's paper, (\ref{Killing}) is used in phase space such that the structure functions depend on both coordinates and momenta: $c_{ij}^{~k}(x^1, x^2, ...,x^{r}, p_1, p_2, \ldots, p_r)$. It is in Clebsch's paper of 1866 (\cite{Clebsch1866}, \S 1) in connection with his definition of a complete system of linear PDE's that the r.h.s. of (\ref{Killing}) depends only on the coordinates. Cf. also equation (3.1) in \cite{Hawk1989}, p.
311.} \\ (\ref{Liealg2}) must then be replaced by \begin{equation}{\cal{L}}_{X_{i}} {\cal{L}}_{X_{j}} X_{k} = (c_{jk}^{~~l} c_{il}^{~~m} + X_{i} c_{jk}^{~~m}) X_{m}~,\label{Liealg3}\end{equation} and (\ref{Jaco2}) by \begin{equation} c_{jk}^{~~l} c_{il}^{~~m} +c_{ij}^{~~l} c_{kl}^{~~m}+c_{ki}^{~~l} c_{jl}^{~~m} + X_{i} c_{jk}^{~~m} + X_{k} c_{ij}^{~~m} + X_{j} c_{ki}^{~~m}=0. \label{Jaco3}\end{equation} An {\em extended Cartan-Killing} form can be defined acting as a symmetric metric on the sections of the subtangent bundle. An asymmetric form could be defined as well. \\ {\em Definition 5} (generalized Cartan-Killing form):\\ The generalized Cartan-Killing bilinear form $\tau$ is defined by: \begin{equation} \tau_{ij}:= \sigma_{ij} + 2 X_{(i} c_{j)m}^{~~~m} = c_{il}^{~~m}c_{jm}^{~~~l} + 2 X_{(i} c_{j)m}^{~~~m}~.\label{CaKiext}\end{equation} The generalized Cartan-Killing form now depends on the base points of the fibres in the tangent bundle. It may be interpreted as a metric.\\ To use the example of the group $G_{1}(6)$ given in section \ref{subsection: non-Lie}: the structure functions for the corresponding extended algebra (\ref{rigbody}) of rigid body transformations are shown in appendix 4.
From them, calculation of the extended Cartan-Killing form leads to a Lorentz metric with signature (1,3) of rank 4 within a degenerated 7-dimensional bilinear form: \begin{gather} \tau_{ij}= \begin{pmatrix} \tau_{00} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \tau_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \tau_{55} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 &\tau_{66} \\ \end{pmatrix} \end{gather} where $\tau_{00}= \sum_{i=1}^{3}\frac{\ddot{f}_{i}}{f_{i}}+ \frac{\ddot{\omega}_{~3}^{2}}{\omega_{~3}^{2}} + \frac{\ddot{\omega}_{~1}^{3}}{\omega_{~1}^{3}}+ \frac{\ddot{\omega}_{~2}^{1}}{\omega_{~2}^{1}}$, and $\tau_{44}= -4(\omega_{~3}^{2})^2,~ \tau_{55}= -4(\omega_{~3}^{1})^2,~ \tau_{66}= -4(\omega_{~2}^{1})^2.$ By projection into the 4-dimensional space with coordinates $0, 4, 5, 6$ and signature (1,3), we surprisingly arrive at the general class of one-dimensional gravitational fields \cite{bible}. For special values of the $f_{i}$ and $\omega^{i}_{j}$, the Kasner metric (\ref{Kasner}) can be derived by this approach. All the pre-relativistic groups mentioned at the end of the previous section lead to Cartan-Killing forms depending on just one coordinate, the time.\\ The following definition introduces a new class of extended motions and a new class of weak extended motions, the infinitesimal generators of which form an extended Lie algebra.\\ {\em Definition 6} (extended motions):\\ Let $x \rightarrow x+\xi,~ y \rightarrow y + \eta$ be infinitesimal transformations forming a continuous group, the corresponding algebra of which is an extended Lie algebra according to definition 4.
Then, the vector fields $X=\xi^{c} \frac{\partial}{\partial x^{c}},~Y = \eta^{c} \frac{\partial}{\partial x^{c}}$ with ${\cal L}_{X} g_{ab}=0,~{\cal L}_{Y} g_{ab}=0$ are called {\em extended motions}.\\ An analogous formulation is:\\ {\em Definition 7} (extended weak motions):\\ Let $x \rightarrow x+\xi,~ y \rightarrow y + \eta$ be infinitesimal transformations forming a continuous group, the corresponding algebra of which is an extended Lie algebra according to definition 4. Then, the vector fields $X=\xi^{c} \frac{\partial}{\partial x^{c}},~Y = \eta^{c} \frac{\partial}{\partial x^{c}}$ with ${\cal L}_{X} {\cal L}_{X}g_{ab}=0,~{\cal L}_{Y}{\cal L}_{Y} g_{ab}=0$ are called {\em extended weak motions}.\\ \section{Extended motions and extended weak (Lie) motions} \label{section:extmotion} In section \ref{subsection: non-Lie}, we have given examples of non-Lie groups leading to extended Lie algebras. How will the corresponding extended motions and extended weak (Lie) motions differ? These concepts are exemplified here with the simplest non-Lie group (\ref{trafospez2}). The tangent vectors $X,~Y$ with the algebra (\ref{specextLiealg2}) form an extended motion (${\cal L}_{X} g_{ab}=0,~{\cal L}_{Y} g_{ab}=0$) for all metrics of maximal rank 3: \begin{gather} g_{ab}= \begin{pmatrix} \alpha_{00}(x^2, x^3) & 0 & \alpha_{02}(x^2, x^3) & \alpha_{03}(x^2, x^3) \\ 0 & 0 & 0 & 0 \\ \alpha_{02}(x^2, x^3) & 0 & \alpha_{22}(x^2, x^3) & \alpha_{23}(x^2, x^3) \\ \alpha_{03}(x^2, x^3) & 0 & \alpha_{23}(x^2, x^3) & \alpha_{33}(x^2, x^3) \end{pmatrix}, \label{extmot}\end{gather} with arbitrary functions $\alpha_{ab}$ due to arbitrariness of $f(x^0)$.
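A direct check of this statement is straightforward. The sympy sketch below (an added verification, with $X = f(x^0)\partial/\partial x^1$ for a generic $f$ and $Y = \partial/\partial x^0$) confirms that every metric of this class is annihilated by a single Lie-dragging:

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
coords = (x0, x1, x2, x3)
n = 4

def lie_metric(X, g):
    # (L_X g)_ab = X^c d_c g_ab + g_cb d_a X^c + g_ac d_b X^c
    out = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            out[a, b] = sum(X[c]*sp.diff(g[a, b], coords[c])
                            + g[c, b]*sp.diff(X[c], coords[a])
                            + g[a, c]*sp.diff(X[c], coords[b]) for c in range(n))
    return out.applyfunc(sp.simplify)

f = sp.Function('f')(x0)
X = [0, f, 0, 0]          # f(x^0) d/dx^1
Y = [1, 0, 0, 0]          # d/dx^0

def alpha(i, j):
    return sp.Function('alpha%d%d' % (i, j))(x2, x3)

# the rank-3 class: row and column of x^1 vanish, no x^0 or x^1 dependence
g = sp.Matrix([[alpha(0, 0), 0, alpha(0, 2), alpha(0, 3)],
               [0,           0, 0,           0          ],
               [alpha(0, 2), 0, alpha(2, 2), alpha(2, 3)],
               [alpha(0, 3), 0, alpha(2, 3), alpha(3, 3)]])

assert lie_metric(X, g) == sp.zeros(4, 4)
assert lie_metric(Y, g) == sp.zeros(4, 4)
```

The vanishing of the $x^1$-row and $x^1$-column is exactly what absorbs the arbitrariness of $f(x^0)$.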
This is to be compared with the motions derived from $X=\frac{\partial}{\partial x^{1}}, Y=\frac{\partial}{\partial x^{0}} $ forming an abelian Lie algebra and leading to \begin{equation} g_{ab} = \alpha_{ab}( x^2, x^3)~.\end{equation} The corresponding extended weak (Lie) motions (${\cal L}_{X} {\cal L}_{X}g_{ab}=0,~{\cal L}_{Y}{\cal L}_{Y} g_{ab}=0$) are given by: \begin{gather} g_{ab}= \begin{pmatrix} x^1\alpha_{00} +\beta_{00} & \beta_{01} & x^1\alpha_{02} + \beta_{02} & x^1\alpha_{03} + \beta_{03}\\ \beta_{01} & 0 & \beta_{12} & \beta_{13} \\ x^1\alpha_{02} + \beta_{02} & \beta_{12} & x^1\alpha_{22} + \beta_{22} & x^1\alpha_{23} + \beta_{23} \\ x^1\alpha_{03} + \beta_{03} & \beta_{13} & x^1\alpha_{23}+ \beta_{23} & x^1\alpha_{33}+ \beta_{33} \end{pmatrix}, \label{G3weak3}\end{gather} where $\alpha_{ab}=\alpha_{ab}( x^2, x^3);~\beta_{ab}=\beta_{ab}( x^2, x^3),~\alpha_{1a} = 0,~ \beta_{11}= 0$. For comparison, the weak (Lie) motions generated by the translations given above lead to the class of metrics: \begin{equation} g_{ab} = x^1\alpha_{ab}( x^2, x^3) + \beta_{ab}( x^2, x^3)~.\end{equation} \section{Two-dimensional extended Lie algebras} \label{section:classextalg} In section \ref{subsection: non-Lie} we have given the example (\ref{extLiealg}) showing that (\ref{Killing}) is not empty. As for Lie algebras, the question of a classification of extended Lie algebras in $n$-dimensional space arises. This being a topic of its own, we start here by considering the case $n=2$ only, without proving completeness of the result.
We begin with:\footnote{The coordinates $x^0, x^1$ of section \ref{subsection: non-Lie} are replaced by $x^1, x^2$.} \begin{equation} [X_{1}, X_{2}]= c_{12}^{~~1} X_{1} + c_{12}^{~~2} X_{2} \label{starteq}\end{equation} with $X_{1}=\xi^{1}\frac{\partial}{\partial x_1} + \xi^{2}\frac{\partial}{\partial x_2}, X_{2} = \eta^{1}\frac{\partial}{\partial x_1} + \eta^{2}\frac{\partial}{\partial x_2}.$ This is a system of two equations for the 6 unknowns $\xi^i, \eta^j$ and $c_{12}^{~~i}, i= 1, 2$:\begin{equation} [X_{1}, X_{2}]= [\xi^1 \eta^1_{~,1}+ \xi^2 \eta^1_{~,2} - \eta^1 \xi^1_{~,1} -\eta^2 \xi^1_{~,2}] \frac{\partial}{\partial x_1} + [\xi^1 \eta^2_{~,1}+ \xi^2 \eta^2_{~,2} - \eta^1 \xi^2_{~,1} -\eta^2 \xi^2_{~,2}] \frac{\partial}{\partial x_2}.~ \label{eq2ext}\end{equation} We distinguish two cases according to whether the vector fields are unaligned or aligned. In the {\em first case}, for $\xi^{1}\neq 0,~\eta^{2}\neq 0$ :\begin{equation} [X_{1}, X_{2}]= [(\xi^1)^2 \frac{\partial}{\partial x_1} (\frac{\eta^1}{\xi^1}) + \xi^2 \eta^1_{~,2}- \eta^2 \xi^1_{~,2}] \frac{\partial}{\partial x_1} + [\xi^1 \eta^2_{~,1}- \eta^1 \xi^2_{~,1} - (\eta^2)^2 \frac{\partial}{\partial x_2} (\frac{\xi^2}{\eta^2})] \frac{\partial}{\partial x_2}~.\label{eq2ext1} \end{equation} Here, the simplification $\xi^{2} = \eta^{1}= 0$ does not restrict generality. In the solution, two free functions $\xi^{1}, \eta^2$ remain; they are contained in the expressions for the structure functions:\begin{equation}c_{12}^{~~1}=-\frac{\eta^2}{\xi^1} \xi^{1}_{~,2}~,~c_{12}^{~~2}= \frac{\xi^1}{\eta^2} \eta^{2}_{~,1}~.
\end{equation} The further simplification $\xi^1=\eta^2$ leads to:\begin{equation} [X_{1}, X_{2}]= -\xi^{1}_{~,2} X_{1} + \xi^{1}_{~,1} X_{2} \label{example1}\end{equation} with arbitrary $\xi^{1}=\xi^{1} (x^1, x^2).$ Calculation of the extended Cartan-Killing form (\ref{CaKiext}) results in: \begin{gather*}\tau_{ik}=\begin{pmatrix} ~(\xi^{1}_{~,1})^2 + \xi^{1}\xi^{1}_{~,1,1} & \xi^{1}_{~,1}\xi^{1}_{~,2}+\xi^{1}\xi^{1}_{~,1,2} ~\\ ~ \xi^{1}_{~,1}\xi^{1}_{~,2}+\xi^{1}\xi^{1}_{~,1,2} & (\xi^{1}_{~,2})^2 + \xi^{1}\xi^{1}_{~,2,2} \end{pmatrix} ~,\end{gather*} or simply \begin{equation} \tau_{ij}= \frac{1}{2} [ (\xi^{1})^{2}]_{,ij}~.\end{equation} In general $det(\tau_{ik})\neq 0$.\\ In order to find (\ref{specextLiealg2}) in this formalism, we must start from (\ref{trafospez2}) and set $\xi^1= f(x^0), \eta^0= 1$ such that $c_{12}^{~~1}= -F(x^0),~ c_{12}^{~~2} = 0$. As the only dependence is on $x^0$, the Cartan-Killing form degenerates (does not have full rank). This also happens for the algebra (\ref{rigbody}).\\ For 2-dimensional Lorentz space, one of the generators can be lightlike. We use only the simplification $\xi^{2} = 0$ and $\eta^{1}= \pm \eta^{2}$ such that in this case the relation: \begin{equation} [X_{1}, X_{2}]= [\pm(\xi^1)^2 \frac{\partial}{\partial x_1} (\frac{\eta^2}{\xi^1}) - \eta^2 \xi^1_{~,2}] \frac{\partial}{\partial x_1} + \xi^1 \eta^2_{~,1} \frac{\partial}{\partial x_2}\end{equation} follows. Again, we can set $\xi^1=\eta^2$ and come back to (\ref{example1}). The two different Lie algebras allowed in 2-dimensional space can be obtained from (\ref{example1}) by special choice of $\xi^1$. By redefinition of the algebra elements in the sense of \begin{equation} X_1 \rightarrow Y_1 = f(x^1, x^2) X_1 + g(x^1, x^2) X_2~,~ X_2 \rightarrow Y_2 = m(x^1, x^2) X_1 + p(x^1, x^2) X_2~\end{equation} with arbitrary functions $f, g, m, p$, from (\ref{starteq}) it may be possible to come back to the canonical form for the non-abelian Lie algebra.
However, this is an open question.\footnote{It depends on whether solutions of certain nonlinear 1st order PDEs exist.}\\ In the {\em second case} of aligned tangent vectors we can set $\xi^2=0=\eta^2$. From (\ref{eq2ext}) we retain as the only structure function: \begin{equation} c_{12}^{~~1} = \xi^{1} \eta^{1}_{~,1}- \eta^{1} \xi^{1}_{~,1}~.\end{equation} The Cartan-Killing form then is: \begin{gather}\tau_{ik}=\begin{pmatrix} 0 & - \xi_{1} (\xi^{1} \eta^{1}_{~,1}- \eta^{1} \xi^{1}_{~,1})\\ - \xi_{1} (\xi^{1} \eta^{1}_{~,1}- \eta^{1} \xi^{1}_{~,1})~ &~ (\xi^{1} \eta^{1}_{~,1}- \eta^{1} \xi^{1}_{~,1})^{2}-\eta_{1}( \xi^{1} \eta^{1}_{~,1,1}- \eta^{1} \xi^{1}_{~,1,1})~ \end{pmatrix} ~.\end{gather}\\ In general, there is a wealth of possibilities available for setting up extended Lie algebras. A particular choice for the structure functions would be\footnote{In (\ref{Killing2}), the summation convention is used for unbracketed indices.} \begin{equation}c_{(i) (j)}^{~~~~(k)}:= \xi^{r}_{(i)} \xi_{(j)}^{s} g_{rs} [\delta^{(k)}_{(i)} - \delta^{(k)}_{(j)}] \label{Killing2}~.\end{equation} In Euclidean space $ g_{rs}= \delta_{rs}$, in Lorentz space $ g_{rs}= \eta_{rs}$ are the simplest choices. It is not difficult to calculate the extended Cartan-Killing form which depends only on the inner products $~ \xi^{r}_{(i)} \xi_{(j)}^{s} g_{rs}, (i), (j) = 1, 2, , ..., m;~ r,s = 1, 2, ..., n ~(0, 1, 2, ..., n-1)$.\\ \section{Discussion and conclusion} When Lie-dragging is seen as a mapping in the space of metrics, it may be asked whether it could provide a method for generating solutions of Einstein's equations from known solutions. It is easily shown that the Schwarzschild vacuum solution, the Robertson-Walker metric with flat 3-spaces, and the Kasner metric cannot be obtained by Lie-dragging of {\em Minkowski} space.
On the other hand, the metric (\ref{metg4}) which is weakly Lie-invariant with respect to the group (T, SO(3)) trivially contains cosmological solutions of Einstein's equations. If the metric $x^0 d_{ab}(x^1, x^2) $ with spherical symmetry and with flat space sections is chosen, by a transformation of the time coordinate we arrive at the line element \begin{equation} ds^2 = (d\tau)^2 - \frac{2}{3} \tau^{3/2}[(dr)^2 + r^2 (d\theta)^2 + r^2 sin^2\theta (d\phi)^2]~.\end{equation} It describes a cosmic substrate with the equation of state $p=-\frac{1}{9}\mu$, where $p$ denotes the pressure and $\mu$ the energy density of the material. This equation of state for $w=-\frac{1}{9}$ is non-phantom because of $-1< w$, but it does not lead to accelerated expansion of the universe, which occurs for $-1< w<-\frac{1}{3}$. It remains to be seen whether the anisotropic line element \begin{equation} ds^2 = (d\tau)^2 - c_0 \tau^{2/3}[c_1 \theta r + c_2] [(dr)^2 + r^2 (d\theta)^2 + r^2 sin^2\theta (d\phi)^2]\end{equation} can satisfy Einstein's equations with a reasonable matter distribution. In view of the fact that Lie-dragging does not preserve the rank of the metric, its efficiency for generating interpretable gravitational fields is reduced considerably. Surprisingly, by studying the rigid body transformations $G_{1}(6)$ as a group of extended motions, we arrived at the complete class of one-dimensional gravitational fields including the Kasner metric. More generally, a close relation to {\em finite} transformation groups in classical, non-relativistic mechanics containing arbitrary functions has been established. It remains to be clarified whether a connection to gauge theory in physics exists. A classification of solutions of Einstein's equations with regard to weak (Lie) symmetries could be made. Although this might be a further help for deciding whether two solutions are transformable into each other or not, the calculational effort looks extensive.
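The equation-of-state claim can be checked against the standard flat FRW relations (a sketch added for completeness; units $8\pi G = c = 1$, scale factor read off from the line element as $a \propto \tau^{3/4}$, and the Friedmann equations $\mu = 3H^2$, $\dot H = -\frac{1}{2}(\mu + p)$ are assumed):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)

a = tau**sp.Rational(3, 4)        # a^2 ~ tau^{3/2} as in the line element
H = sp.diff(a, tau)/a             # Hubble rate

mu = 3*H**2                       # Friedmann constraint (8 pi G = 1)
p = -2*sp.diff(H, tau) - 3*H**2   # from  Hdot = -(mu + p)/2

w = sp.simplify(p/mu)
assert w == sp.Rational(-1, 9)    # non-phantom (w > -1), non-accelerating (w > -1/3)
```

The resulting $w = -\frac{1}{9}$ indeed lies in the non-phantom, non-accelerating range quoted above.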
Weak Lie-invariance as a weakened concept of ``symmetry'' has been introduced and its consequences presented through a number of examples. It has also led to the introduction of a new type of algebra (``extended Lie algebra'') which is an example of a tangent Lie algebroid. In each {\em fibre} of a subbundle of the tangent bundle, the ``extended Lie algebra'' reduces to a Lie algebra. With the help of an extended Cartan-Killing form, Riemann or Lorentz metrics have been constructed on such an algebroid. A particular example is provided by the non-Lie groups of classical mechanics mentioned above. The ensuing possible geometries could be studied and classified in the spirit of Felix Klein. A classification of non-Lie groups leading to extended Lie algebras and of the extended Lie algebras could also be of interest. A further study of the concept of extended Lie algebras is needed and might be of some relevance. Whether there are noteworthy applications in geometry and physics beyond those established here for classical mechanics and the Schr\"odinger equation remains to be seen. \section{Acknowledgments} My sincere thanks go to A. Papadopoulos for inviting me to the Strasbourg conference at IRMA. Remarks by P. Cartier and Y. Kosmann-Schwarzbach, participants at the conference, were quite helpful. I am also grateful for advice on mathematical concepts like algebroids by H. Sepp\"anen and Ch. Zhu of the Mathematical Institute of the University of G\"ottingen as well as to my colleague F. M\"uller-Hoissen, Max Planck Institute for Dynamics and Self-Organization, G\"ottingen, for a clarifying conceptual discussion. As a historian of mathematics, E. Scholz, University of Wuppertal, guided me to the relevant historical literature.
\section{Introduction} One of the classical combinatorial optimization problems studied in computer science is \emph{Bin Packing}. It appeared as one of the prototypical $\mathbf{NP}$-hard problems already in the book of Garey and Johnson~\cite{GareyJohnson79}, but it had been studied long before in operations research in the 1950's, for example by~\cite{TrimProblem-Eiseman1957}. We refer to the survey of Johnson~\cite{BinPackingSurvey84} for a complete historic account. Bin packing also serves as a good case study for the development of techniques in approximation algorithms. The 1970's brought simple greedy heuristics such as \emph{First Fit}, analyzed by Johnson~\cite{Johnson73}, which requires at most $1.7\cdot OPT + 1$ bins, and \emph{First Fit Decreasing}~\cite{JohnsonFFD74}, which yields a solution with at most $\frac{11}{9} OPT + 4$ bins (see~\cite{FFDtightBound-Dosa07} for a tight bound of $\frac{11}{9} OPT + \frac{6}{9}$). Later, an \emph{asymptotic PTAS} was developed by Fernandez de la Vega and Lueker~\cite{deLaVegaLueker81}. One of their main technical contributions was an \emph{item grouping technique} to reduce the number of different item types. The algorithm of de la Vega and Lueker finds solutions using at most $(1+\varepsilon)OPT + O(\frac{1}{\varepsilon^2})$ bins, while the running time is either of the form $O(n^{f(\varepsilon)})$ if one uses dynamic programming or of the form $O(n \cdot f(\varepsilon))$ if one applies linear programming techniques. A big leap forward in approximating bin packing was made by Karmarkar and Karp in 1982~\cite{KarmarkarKarp82}. First, they showed how a certain exponential-size LP can be approximately solved in polynomial time; second, they provided a sophisticated rounding scheme which produces a solution with at most $OPT + O(\log^2 OPT)$ bins, corresponding to an \emph{asymptotic FPTAS}. 
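To make the greedy heuristics concrete, the following is a minimal Python sketch of \emph{First Fit} and \emph{First Fit Decreasing} (an illustration of the classical heuristics, not code from the works cited above):

```python
def first_fit(sizes):
    """Place each item (a size in [0,1]) into the first open bin with
    enough remaining capacity; open a new bin if none fits."""
    remaining = []  # remaining capacity of each open bin
    for s in sizes:
        for i, cap in enumerate(remaining):
            if s <= cap + 1e-12:
                remaining[i] = cap - s
                break
        else:
            remaining.append(1.0 - s)
    return len(remaining)

def first_fit_decreasing(sizes):
    """First Fit run on the items sorted from largest to smallest."""
    return first_fit(sorted(sizes, reverse=True))
```

For example, \texttt{first\_fit([0.5, 0.7, 0.5, 0.3])} opens only two bins, since the two items of size $0.5$ share one bin and the items $0.7$ and $0.3$ share the other.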
It will be convenient throughout this paper to allow a more compact form of input, where $s \in [0,1]^n$ denotes the vector of different item sizes and $b \in \setN^n$ denotes the multiplicity vector, meaning that we have $b_i$ copies of item type $i$. In this notation we say that $\sum_{i=1}^n b_i$ is the \emph{total number of items}. The linear program that we mentioned earlier is called the \emph{Gilmore-Gomory LP relaxation}~\cite{TrimProblem-Eiseman1957,Gilmore-Gomory61} and it is of the form \begin{equation} \label{eq:GilmoreGomory} \min\left\{ {\bf{1}}^Tx \mid Ax \geq b, x \geq \bm{0} \right\}. \end{equation} Here, the constraint matrix $A$ consists of all column vectors $p \in \setZ_{\geq 0}^n$ that satisfy $\sum_{i=1}^n p_i s_i \leq 1$. The linear program has variables $x_p$ that give the number of bins that should be packed according to the \emph{pattern} $p$. We denote the value of the optimal fractional solution to \eqref{eq:GilmoreGomory} by $OPT_f$, and the value of the best integral solution by $OPT$. As mentioned before, the linear program~\eqref{eq:GilmoreGomory} has an exponential number of variables, but only $n$ constraints. A fractional solution $x$ of cost $\bm{1}^Tx \leq OPT_f + \delta$ can be computed in time polynomial in $\sum_{i=1}^n b_i$ and $1/\delta$~\cite{KarmarkarKarp82} using the Gr{\"o}tschel-Lov{\'a}sz-Schrijver variant of the Ellipsoid method~\cite{GLS-algorithm-Journal81}. An alternative and simpler way to solve the LP approximately is via the Plotkin-Shmoys-Tardos framework~\cite{FractionalPackingAndCovering-PlotkinShmoysTardos-Journal95} or the multiplicative weight update method. See the survey of \cite{MWU-Survey-Arora-HazanKale2012} for an overview. 
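For intuition, the columns of $A$ can be enumerated explicitly when $n$ is tiny. The following Python sketch (our own illustration; the actual algorithms never enumerate the exponentially many patterns, but solve the LP via a separation oracle) generates all vectors $p \in \setZ_{\geq 0}^n$ with $\sum_{i=1}^n p_i s_i \leq 1$:

```python
def all_patterns(sizes, eps=1e-9):
    """Enumerate all valid patterns p in Z_{>=0}^n with sum_i p_i*s_i <= 1,
    i.e. the columns of the Gilmore-Gomory constraint matrix A."""
    def rec(i, capacity):
        if i == len(sizes):
            yield ()
            return
        max_copies = int((capacity + eps) // sizes[i])
        for k in range(max_copies + 1):
            for tail in rec(i + 1, capacity - k * sizes[i]):
                yield (k,) + tail
    return list(rec(0, 1.0))
```

For the sizes $(0.5, 0.3)$ this yields seven patterns, from the empty pattern $(0,0)$ up to $(2,0)$, $(1,1)$ and $(0,3)$.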
The best known lower bound on the integrality gap of the Gilmore-Gomory LP is an instance where $OPT = \left\lceil OPT_f \right\rceil + 1$; Scheithauer and Terno~\cite{BinPacking-MIRUP-ScheithauerTerno97} conjecture that these instances represent the worst case additive gap. While this conjecture is still open, it is understandable that the best approximation algorithms are based on rounding a solution to this amazingly strong Gilmore-Gomory LP relaxation. For example, the Karmarkar-Karp algorithm operates in $\log n$ iterations in which one first groups the items such that only $\frac{1}{2}\sum_{i \in [n]} s_i$ many different item sizes remain; then one computes a basic solution $x$, buys $\lfloor x_p\rfloor$ copies of each pattern $p$ and continues with the residual instance. The analysis provides a $O(\log^2 OPT)$ upper bound on the \emph{additive} integrality gap of \eqref{eq:GilmoreGomory}. The rounding mechanism in the recent paper of the second author~\cite{DBLP:conf/focs/Rothvoss13} uses an algorithm by Lovett and Meka that was originally designed for discrepancy minimization. The Lovett-Meka algorithm~\cite{DiscrepancyMinimization-LovettMekaFOCS12} can be conveniently summarized as follows: \begin{theorem}[Lovett-Meka '12] Let $v_1,\ldots,v_m \in \setR^n$ be vectors, let $x_{\textrm{start}} \in [0,1]^n$ be a starting point, and let $\lambda_1,\ldots,\lambda_m \geq 0$ be parameters so that $\sum_{j=1}^m e^{-\lambda_j^2/16} \leq \frac{n}{16}$. Then in randomized polynomial time one can find a vector $x_{\textrm{end}} \in [0,1]^n$ so that $\left|\left<x_{\textrm{end}}-x_{\textrm{start}},v_j\right>\right| \leq \lambda_j \cdot \|v_j\|_2$ for all $j \in \{ 1,\ldots,m\}$ and at least half of the entries of $x_{\textrm{end}}$ lie in $\{0,1\}$. 
\end{theorem} Intuitively, the points $x_{\textrm{end}}$ satisfying the linear constraints $|\left<x_{\textrm{end}} - x_{\textrm{start}},v_j\right>| \leq \lambda_j \cdot \|v_j\|_2$ form a polytope and the distance of the $j$th hyperplane to the start point is exactly $\lambda_j$. Then the condition $\sum_{j=1}^m e^{-\lambda_j^2/16} \leq \frac{n}{16}$ essentially says that the polytope is going to be ``large enough''. The algorithm of \cite{DiscrepancyMinimization-LovettMekaFOCS12} itself consists of a random walk through the polytope. For more details, we refer to the very readable paper of \cite{DiscrepancyMinimization-LovettMekaFOCS12}. \begin{center} \psset{unit=1.9cm} \begin{pspicture}(-1.0,-1.2)(1.1,1.2) \pspolygon[linestyle=none,fillstyle=solid,fillcolor=lightgray](-1,-0.25)(0.25,1)(1,0.75)(1,0.25)(-0.25,-1)(-1,-0.75) \pspolygon[linewidth=1pt](-1,-1)(-1,1)(1,1)(1,-1)(-1,-1) \cnode*(0,0){2.5pt}{origin} \nput{0}{origin}{$x_{\textrm{start}}$} \psline[linewidth=1pt](-1,-0.25)(0.25,1) \psline[linewidth=1pt](1,0.25)(-0.25,-1) \psline[linewidth=1pt](0.25,1)(1,0.75) \psline[linewidth=1pt](-0.25,-1)(-1,-0.75) \pnode(-0.375,0.375){A} \pnode(-0.75,0.75){B} \psline[linewidth=2pt](-1,-0.25)(0.25,1) \ncline[arrowsize=5pt]{->}{A}{B} \nbput[labelsep=2pt]{$v_j$} \pnode(1,0.5){y} \psdots[linecolor=black,linewidth=1.5pt](y) \nput[labelsep=4pt]{0}{y}{$x_{\textrm{end}}$} \psline(-1,-1)(-1,-1.1) \rput[c](-1,-1.2){$0$} \psline( 1,-1)( 1,-1.1) \rput[c](1,-1.2){$1$} \psline(-1,-1)(-1.1,-1) \rput[r](-1.1,-1){$0$} \psline(-1, 1)(-1.1, 1) \rput[r](-1.1, 1){$1$} \ncline[linecolor=black,linewidth=1.5pt,arrowsize=6pt,nodesepA=1pt,nodesepB=1pt]{<->}{origin}{A} \nbput[labelsep=0pt]{$\lambda_j$} \end{pspicture} \end{center} The bin packing approximation algorithm of Rothvoss~\cite{DBLP:conf/focs/Rothvoss13} consists of logarithmically many runs of Lovett-Meka. 
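The condition $\sum_{j=1}^m e^{-\lambda_j^2/16} \leq \frac{n}{16}$ can be read as a budget: a tight constraint with $\lambda_j = 0$ costs a full unit, while loose constraints are almost free. The following small Python check (our own illustration, not part of the Lovett-Meka algorithm) verifies that with $m$ identical constraints the uniform choice $\lambda = 4\sqrt{\ln(16m/n)}$ exhausts the budget exactly, while only about $n/16$ constraints can be made perfectly tight:

```python
import math

def lm_budget_ok(lams, n):
    """Check the Lovett-Meka condition: sum_j exp(-lam_j^2/16) <= n/16."""
    return sum(math.exp(-l * l / 16.0) for l in lams) <= n / 16.0 + 1e-9

def uniform_lambda(m, n):
    """Smallest uniform lambda making m identical constraints fit the budget:
    m * exp(-lam^2/16) = n/16  <=>  lam = 4*sqrt(ln(16*m/n)).
    Requires 16*m > n so that the logarithm is positive."""
    return 4.0 * math.sqrt(math.log(16.0 * m / n))
```

This is where error bounds of order $\sqrt{\log(m/n)}$ in discrepancy applications come from.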
To be able to use the Lovett-Meka algorithm effectively, Rothvoss needs to rebuild the instance in each iteration and ``glue'' clusters of small items together to larger items. His procedure is only able to do that for items of size at most $\frac{1}{\textrm{polylog}(n)}$, and each of the iterations incurs a loss in the objective function of $O(\log \log n)$. In contrast, we present a procedure that can cluster even items of size up to $\Omega(1)$. Moreover, Rothvoss' algorithm only uses two values for the error parameters, namely $\lambda_j \in \{ 0,O(\sqrt{\log \log n})\}$. In contrast, we use the full spectrum of parameters to achieve only a \emph{constant} loss in each of the logarithmically many iterations. \subsection{Our contribution} Our main contribution is the following theorem: \begin{theorem} \label{thm:MainContribution} For any Bin Packing instance $(s,b)$ with $s_1,\ldots,s_n \in [0,1]$, one can compute a solution with at most $OPT_f + O(\log OPT_f)$ bins, where $OPT_f$ denotes the optimal value of the Gilmore-Gomory LP relaxation. The algorithm is randomized and the expected running time is polynomial in $\sum_{i=1}^n b_i$. \end{theorem} The recent book of Williamson and Shmoys~\cite{DesignOfApproxAlgosWilliamsonShmoys} presents a list of 10 open problems in approximation algorithms. Problem $\#3$ in the list is whether the Gilmore-Gomory LP has a constant integrality gap; hence we make progress towards that question. We want to remark that the original algorithm of Karmarkar and Karp has an additive approximation ratio of $O(\log OPT_f \cdot \log( \max_{i,j} \{ \frac{s_i}{s_j} \}))$. For \emph{3-partition} instances where all item sizes are strictly between $\frac{1}{4}$ and $\frac{1}{2}$, this results in an $O(\log n)$ guarantee, which coincides with the guarantees of Rothvoss~\cite{DBLP:conf/focs/Rothvoss13} and this paper if applied to those instances. 
A paper of Eisenbrand et al.~\cite{BinPackingViaPermutationsSODA2011} gives a reduction of those instances to minimizing the discrepancy of 3 permutations. Interestingly, shortly afterwards Newman and Nikolov~\cite{CounterexampleBecksPermutationConjecture-FOCS12} showed that there are instances of 3 permutations that do require a discrepancy of $\Omega(\log n)$. It seems unclear how to realize those permutations with concrete sizes in a bin packing instance --- however any further improvement for bin packing even in that special case with item sizes in $]\frac{1}{4},\frac{1}{2}[$ would need to rule out such a realization as well. The second author is willing to conjecture that the integrality gap for the Gilmore-Gomory LP is indeed $\Theta(\log n)$. \section{A 2-stage packing mechanism} \label{sec:Deficiency} It is well-known that for the kind of approximation guarantee that we aim to achieve, one can assume that the items are not too tiny. In fact it suffices to prove an additive gap of $O(\log \max\{ n,\frac{1}{s_{\min}} \})$ where $n$ is the number of different item sizes and $s_{\min}$ is a lower bound on all item sizes. Note that in the following, ``polynomial time'' always means polynomial in the total number of items $\sum_{i=1}^n b_i$. \begin{lemma} \label{lem:AssumptionSizesAtLeast1-n} Assume for a monotone function $f$, there is a polynomial time $OPT_f + f(\max\{n,\frac{1}{s_{\min}}\})$ algorithm for Bin Packing instances $(s,b)$ with $s \in [0,1]^n$ and $s_1,\ldots,s_{n} \geq s_{\min} > 0$. Then there is a polynomial time algorithm that for \emph{all} instances finds a solution with at most $OPT_f + f(OPT_f) + O(\log OPT_f)$ bins. \end{lemma} For a proof, we refer to Appendix~A. From now on we assume that we have $n$ different item sizes with all sizes satisfying $s_i \geq s_{\min}$ for some given parameter $s_{\min}$ (as a side remark, the reduction in Lemma~\ref{lem:AssumptionSizesAtLeast1-n} will choose $s_{\min} = \Theta(\frac{1}{OPT_f})$). 
Starting from a fractional solution $x$ to \eqref{eq:GilmoreGomory} our goal is to find an integral solution of cost $\bm{1}^Tx + O(\log \max\{n,\frac{1}{s_{\min}}\})$. Another useful standard argument is as follows: \begin{lemma} \label{lem:GreedyPacking} Any bin packing instance $(s,b)$ can be packed in polynomial time into at most $2\sum_{i=1}^n s_ib_i+1$ bins. \end{lemma} \begin{proof} Simply assign the items greedily and open new bins only if necessary. If we end up with $k$ bins, then at least $k-1$ of them are at least half full, which means that $\sum_{i=1}^n s_ib_i \geq \frac{1}{2}(k-1)$. Rearranging gives the claim. \end{proof} Now, we come to the main mechanism that allows us to improve over Rothvoss~\cite{DBLP:conf/focs/Rothvoss13}. Consider an instance $(s,b)$ and a fractional LP solution $x$. We could imagine the assignment of items in the input to slots in $x$ as a fractional matching in a bipartite graph, where we have nodes $i \in [n]$ on the left hand side, each with \emph{demand} $b_i$, and nodes $(p,i)$ on the right hand side with \emph{supply} $x_p \cdot p_i$. Instead, our idea is to employ a \emph{2-stage packing}: first we pack items into \emph{containers}, then we pack containers into bins. Here, a container is a multiset of items. Before we give the formal definition, we want to explain our construction with a small example that is visualized in Figure~\ref{fig:AssignmentExample}. The example has $n=3$ items of size $s = (0.3,0.2,0.1)$ and multiplicity vector $b = (2,1,7)$. Those items are assigned into containers $C_1,C_2,C_3$ which also have multiplicities. In this case we have $y_{C_1} = y_{C_2} = 1$ copies of the first two containers and $y_{C_3} = 2$ copies of the third container. Moreover, in our example we have 3 patterns $p_1,p_2,p_3$, each with fractional value $x_{p_1}=x_{p_2} = x_{p_3} = \frac{1}{2}$. 
For example, item $2$ is packed into container $C_1$ and that container is assigned with a fractional value of $\frac{1}{2}$ each to pattern $p_2$ and $p_3$. \begin{figure} \begin{center} \psset{xunit=4cm,yunit=0.8cm} \begin{pspicture}(0,-0.5)(3,9) \rput[r](-0.2,8.5){items} \rput[r](-0.2,4.5){containers} \rput[r](-0.2,0.5){bins} \drawRect{fillcolor=black!50!white,fillstyle=solid,linewidth=0.5pt}{0.4}{4}{0.6}{1} \drawRect{fillcolor=black!40!white,fillstyle=solid,linewidth=0.5pt}{1.6}{4}{0.4}{1} \drawRect{fillcolor=black!20!white,fillstyle=solid,linewidth=0.5pt}{2.6}{4}{0.3}{1} \rput[c](0.4,3.7){\psline{|-|}(0,0)(0.6,0) \cnode*(0.3,0){3.0pt}{C1}} \rput[c](1.6,3.7){\psline{|-|}(0,0)(0.4,0) \cnode*(0.2,0){2.5pt}{C2}} \rput[c](2.6,3.7){\psline{|-|}(0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{C3}} \drawRect{fillcolor=black!20!white,fillstyle=solid,linewidth=0.5pt}{0}{0}{0.3}{1} \drawRect{fillcolor=black!20!white,fillstyle=solid,linewidth=0.5pt}{0.3}{0}{0.3}{1} \drawRect{fillcolor=black!40!white,fillstyle=solid,linewidth=0.5pt}{0.6}{0}{0.4}{1} \drawRect{}{0}{0}{1}{1} \drawRect{fillcolor=black!40!white,fillstyle=solid,linewidth=0.5pt}{1.2}{0}{0.4}{1} \drawRect{fillcolor=black!50!white,fillstyle=solid,linewidth=0.5pt}{1.6}{0}{0.6}{1} \drawRect{}{1.2}{0}{1}{1} \drawRect{fillcolor=black!50!white,fillstyle=solid,linewidth=0.5pt}{2.4}{0}{0.6}{1} \drawRect{fillcolor=black!20!white,fillstyle=solid,linewidth=0.5pt}{3.0}{0}{0.3}{1} \drawRect{}{2.4}{0}{1}{1} \rput[c](0,1.3){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{p11}} \rput[c](0.3,1.3){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{p12}} \rput[c](0.6,1.3){\psline{|-|}(0.0,0)(0.4,0) \cnode*(0.2,0){2.5pt}{p13}} \rput[c](1.2,1.3){\psline{|-|}(0.0,0)(0.4,0) \cnode*(0.2,0){2.5pt}{p21}} \rput[c](1.6,1.3){\psline{|-|}(0.0,0)(0.6,0) \cnode*(0.3,0){2.5pt}{p22}} \rput[c](2.4,1.3){\psline{|-|}(0.0,0)(0.6,0) \cnode*(0.3,0){2.5pt}{p31}} \rput[c](3.0,1.3){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{p32}} 
\nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-20,angleB=135]{->}{C1}{p31} \naput[labelsep=0pt,npos=0.3]{$\frac{1}{2}$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-20,angleB=135]{->}{C1}{p22} \nbput[labelsep=0pt,npos=0.05]{$\frac{1}{2}$} \ncline[nodesepA=1pt,nodesepB=1pt,angleA=-20,angleB=135]{->}{C2}{p13} \nbput[labelsep=0pt,npos=0.8]{$\frac{1}{2}$} \ncline[nodesepA=1pt,nodesepB=1pt,angleA=-20,angleB=135]{->}{C2}{p21} \nbput[labelsep=0pt,npos=0.8]{$\frac{1}{2}$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-135,angleB=45]{->}{C3}{p11}\nbput[labelsep=0pt,npos=0.9]{$\frac{1}{2}$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-120,angleB=45]{->}{C3}{p12}\nbput[labelsep=0pt,npos=0.95]{$\frac{1}{2}$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-45,angleB=90]{->}{C3}{p32}\naput[labelsep=0pt,npos=0.5]{$\frac{1}{2}$} \rput[c](0.5,-0.5){pattern $p_1$: $x_{p_1} = \frac{1}{2}$} \rput[c](1.7,-0.5){pattern $p_2$: $x_{p_2} = \frac{1}{2}$} \rput[c](2.9,-0.5){pattern $p_3$: $x_{p_3} = \frac{1}{2}$} \drawRect{fillstyle=vlines,linewidth=0.5pt}{0.5}{8}{0.3}{1} \drawRect{fillstyle=hlines,linewidth=0.5pt}{1.6}{8}{0.2}{1} \drawRect{fillstyle=crosshatch,linewidth=0.5pt}{2.6}{8}{0.1}{1} \drawRect{fillstyle=vlines,linewidth=0.5pt}{0.4}{4}{0.3}{1} \drawRect{fillstyle=hlines,linewidth=0.5pt}{0.7}{4}{0.2}{1} \drawRect{fillstyle=crosshatch,linewidth=0.5pt}{0.9}{4}{0.1}{1} \drawRect{fillstyle=vlines,linewidth=0.5pt}{1.6}{4}{0.3}{1} \drawRect{fillstyle=crosshatch,linewidth=0.5pt}{1.9}{4}{0.1}{1} \drawRect{fillstyle=hlines,linewidth=0.5pt}{2.6}{4}{0.2}{1} \drawRect{fillstyle=crosshatch,linewidth=0.5pt}{2.8}{4}{0.1}{1} \rput[c](0.5,7.7){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{i1}} \rput[c](1.6,7.7){\psline{|-|}(0.0,0)(0.2,0) \cnode*(0.1,0){2.5pt}{i2}} \rput[c](2.6,7.7){\psline{|-|}(0.0,0)(0.1,0) \cnode*(0.05,0){2.5pt}{i3}} \rput[c](0.4,5.3){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{i11}} \rput[c](0.7,5.3){\psline{|-|}(0.0,0)(0.2,0) \cnode*(0.1,0){2.5pt}{i12}} 
\rput[c](0.9,5.3){\psline{|-|}(0.0,0)(0.1,0) \cnode*(0.05,0){2.5pt}{i13}} \rput[c](1.6,5.3){\psline{|-|}(0.0,0)(0.3,0) \cnode*(0.15,0){2.5pt}{i21}} \rput[c](1.9,5.3){\psline{|-|}(0.0,0)(0.1,0) \cnode*(0.05,0){2.5pt}{i22}} \rput[c](2.6,5.3){\psline{|-|}(0.0,0)(0.2,0) \cnode*(0.10,0){2.5pt}{i31}} \rput[c](2.8,5.3){\psline{|-|}(0.0,0)(0.1,0) \cnode*(0.05,0){2.5pt}{i32}} \ncline[nodesepA=1pt,nodesepB=1pt,angleA=-20,angleB=135]{->}{i1}{i11} \nbput[labelsep=0pt,npos=0.8]{$1$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-40,angleB=135]{->}{i1}{i21} \nbput[labelsep=0pt,npos=0.8]{$1$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-135,angleB=90]{->}{i2}{i12} \nbput[labelsep=0pt,npos=0.2]{$1$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-135,angleB=45]{->}{i3}{i13} \nbput[labelsep=0pt,npos=0.2]{$1$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-135,angleB=60]{->}{i3}{i22} \naput[labelsep=0pt,npos=0.6]{$1$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-90,angleB=90]{->}{i3}{i31} \nbput[labelsep=0pt,npos=0.5]{$2$} \nccurve[nodesepA=1pt,nodesepB=1pt,angleA=-45,angleB=70]{->}{i3}{i32} \naput[labelsep=0pt,npos=0.5]{$2$} \drawRect{fillcolor=black!50!white,fillstyle=none,linewidth=1.5pt}{0.4}{4}{0.6}{1} \drawRect{fillcolor=black!40!white,fillstyle=none,linewidth=1.5pt}{1.6}{4}{0.4}{1} \drawRect{fillcolor=black!20!white,fillstyle=none,linewidth=1.5pt}{2.6}{4}{0.3}{1} \rput[c](0.2,4.8){cont. $C_1$} \rput[c](0.2,4.2){$y_{C_1} = 1$} \rput[c](1.4,4.8){cont. $C_2$}\rput[c](1.4,4.2){$y_{C_2} = 1$} \rput[c](2.4,4.8){cont. $C_3$}\rput[c](2.4,4.2){$y_{C_3} = 2$} \rput[c](0.3,8.7){item $i_1$} \rput[c](0.3,8.2){$b_1 = 2$} \rput[c](1.4,8.7){item $i_2$} \rput[c](1.4,8.2){$b_2 = 1$} \rput[c](2.4,8.7){item $i_3$} \rput[c](2.4,8.2){$b_3 = 7$} \end{pspicture} \caption{Example for assigning items to containers and containers to patterns.\label{fig:AssignmentExample}} \end{center} \end{figure} The reader might have noticed that we do allow that some copies of item $i_3$ are assigned to slots of a larger item $i_2$. 
On the other hand, we have $b_3=7$ copies of item 3, but only 6 slots in containers that we could use. So there will be 1 unit that we won't be able to pack. Similarly, we have $y_{C_3}=2$ copies of container $C_3$, but only $\frac{3}{2}$ slots in the patterns. Later we will say that the \emph{deficiency} of the 2-stage packing is $1 \cdot s_3 + \frac{1}{2} \cdot s(C_3)$ where $s(C_3)$ is the size of container $C_3$. Now, we want to give the formal definitions. We call any vector $C \in \setZ_{\geq 0}^n$ with $\sum_{i=1}^n s_iC_i \leq 1$ a \emph{container}. Here $C_i$ denotes the number of copies of item $i$ that are in the container. The \emph{size} of the container is denoted by $s(C) := \sum_{i=1}^n C_is_i$. Let $\pazocal{C} := \{ C \in \setZ_{\geq 0}^n \mid s(C) \leq 1\}$ be the set of all containers. As we will pack containers into bins, we want to define a \emph{pattern} as a vector of the form $p \in \setZ_{\geq 0}^{\pazocal{C}}$ where $p_C$ denotes the number of times that the pattern contains container $C$. Of course the sum of the sizes of the containers should be at most 1; thus \[ \pazocal{P} := \Big\{ p \in \setZ_{\geq 0}^{\pazocal{C}} \mid \sum_{C \in \pazocal{C}} p_C \cdot s(C) \leq 1\Big\} \] is the set of all \emph{(valid) patterns}. Now suppose we have an instance $(s,b)$ and a fractional vector $x \in \mathbb{R}_{\geq 0}^{\pazocal{P}}$. To keep track of which containers should be used in the intermediate packing step, we also need to maintain an integral vector $y \in \mathbb{Z}_{\geq 0}^{\pazocal{C}}$. We say that a bipartite graph $G=(V_\ell\cup V_r,E)$ is a \emph{packing graph} if each $v\in V_\ell\cup V_r$ has an associated size $s(v)\in [0,1]$ and multiplicity $\text{mult}(v)\in \setR_{\ge 0}$, and the edge set is given by $E=\{(u,v)\in V_\ell\times V_r \mid s(u)\le s(v)\}$. 
An \emph{assignment} in a packing graph is a function $a:E\rightarrow \setR_{\ge 0}$ so that for any $v\in V_\ell\cup V_r$, we have $\sum_{e\in \delta(v)}a(e)\le \text{mult}(v),$ where $\delta(v)$ denotes the set of edges incident to $v$. The \emph{deficiency} of a packing graph is the total size of left nodes that fail to be packed in an optimal assignment. That is, $$\text{def}(G):=\min_{a \text{ assignment of } G} \Big\{ \sum_{v\in V_\ell} s(v)\cdot(\text{mult}(v) - a(\delta(v))) \Big\}.$$ The edge set of those graphs is extremely simple, so that one can directly obtain the deficiency as follows: \begin{observation} \label{obs:GreedyAssignment} For any packing graph, an optimal assignment $a : E \to \setR_{\geq 0}$ which attains $\textrm{def}(G)$ can be obtained as follows: go through the nodes $v \in V_r$ in any order. Take the node $u \in V_{\ell}$ of maximum size that has some capacity left and satisfies $s(u) \leq s(v)$. Increase $a(u,v)$ as much as possible. \end{observation} In this paper we further restrict ourselves to \emph{left-integral} packing graphs --- that is, for any $v\in V_\ell$, $\text{mult}(v)\in \setZ_{\ge 0}$. We construct two packing graphs: one responsible for the assignment of items to containers and one for assigning containers to bins. \begin{itemize} \item \emph{Assigning items to containers:} Given $b\in\setZ_{\ge 0}^n$ and $y\in\setZ_{\ge 0}^\pazocal{C}$, we define a packing graph $G_1(b,y)$ as follows. The left nodes of the graph are defined by $V_\ell=[n]$, with sizes $s_i$ and multiplicities $b_i$. The right nodes are defined by $V_r = \{ (i,C) : C \in \pazocal{C}, i \in C \}$ with the size of node $(i,C)$ given by $s_i$ and multiplicity by $y_C\cdot C_i$. \item \emph{Assigning containers to patterns:} Given $y\in\setZ_{\ge 0}^\pazocal{C}$ and $x\in\setR_{\ge 0}^\pazocal{P}$, we define a packing graph $G_2(x,y)$. The left nodes are given by $\pazocal{C}$ with sizes given by the sizes of containers, and multiplicities $y_C$. 
The right nodes are given by $ \{ (C,p) : C \in \pazocal{C}, p \in \pazocal{P} \}$, with the size of node $(C,p)$ given by $s(C)$ and the multiplicity by $x_p \cdot p_{C}$. \end{itemize} We then define the deficiency of the pair $(x,y)$ with item multiplicities $b$ to be the sum $$\textrm{def}_b(x,y) := \textrm{def}(G_1(b,y)) + \textrm{def}(G_2(x,y)).$$ In later sections we will often leave off the $b$ to simplify notation. We should discuss why the 2-stage packing via the containers is useful. First of all, it is easy to find \emph{some} initial configuration. \begin{lemma} \label{lem:StartingSolution} For any bin packing instance $(s,b)$, one can compute a ``starting solution'' $x \in \mathbb{R}_{\geq 0}^{\pazocal{P}}$ and $y \in \mathbb{Z}_{\geq 0}^{\pazocal{C}}$ in polynomial time so that $\bm{1}^Tx \leq OPT_f + 1$ and $\textrm{def}(x,y) = 0$ with $|\textrm{supp}(x)| \leq n$. \end{lemma} \begin{proof} As we already argued, one can compute a fractional solution $x$ for \eqref{eq:GilmoreGomory} in polynomial time that has cost $\bm{1}^Tx \leq OPT_f + 1$. We simply use singleton containers $\{i\}$ for all items $i \in \{ 1,\ldots,n\}$ and set $y_{\{i\}} := b_i$. \end{proof} Next, we argue that our notion of deficiency is actually meaningful for recovering an assignment of items to bins. \begin{lemma} \label{lem:PackingItemsWith2DefItems} Suppose that $x \in \setZ_{\geq 0}^{\pazocal{P}},y\in \setZ_{\geq 0}^{\pazocal{C}}$ are both integral. Then there is a packing of all items into at most $\bm{1}^Tx + 2\textrm{def}(x,y) + 1$ bins. \end{lemma} \begin{proof} Since $x$ and $y$ are both integral, all multiplicities in $G_1$ and $G_2$ will be integral and we can find two integral assignments $a_1,a_2$ attaining $\textrm{def}(x,y)$. Buy all the patterns suggested by $x$. Use $a_2$ to pack the containers in $y$. Then use $a_1$ to map the items to containers. There are some items that will not be assigned --- their total size is $\textrm{def}(G_1(b,y))$. 
Moreover, there might also be containers in $y$ that have not been assigned; their total size is $\textrm{def}(G_2(x,y))$. We pack items and containers greedily into at most $2\textrm{def}(x,y) + 1$ many extra bins using Lemma~\ref{lem:GreedyPacking}. \end{proof} In each iteration of our algorithm, it will be useful for us to be able to fix the integral part of $x$ and focus solely on the fractional part. \begin{lemma} \label{lem:SplittingOffFractionalPart} Suppose $x\in \setR_{\geq 0}^\pazocal{P}, y\in\setZ_{\ge 0}^\pazocal{C}$, and $b\in \setZ_{\ge 0}^n$. If $\hat{x}_p=\lfloor x_p \rfloor$ for all patterns $p$, then there exist vectors $\hat{y}\in\setZ_{\ge 0}^\pazocal{C}$, $\hat{b}\in\setZ_{\ge 0}^n$ with $\hat{b} \leq b$ so that $\text{def}_{\hat{b}}(\hat{x},\hat{y})=0$ and $\text{def}_{b-\hat{b}}(x-\hat{x},y-\hat{y})=\text{def}_b(x,y)$. \end{lemma} \begin{proof} Let us imagine that we replace each node $(C,p)$ in $G_2(x,y)$ with two copies, a ``red'' node and a ``blue'' node. The red copy receives an integral multiplicity of $\text{mult}_\text{red}(C,p)=\hat{x}_p p_C$ while the blue copy receives a fractional multiplicity of $\text{mult}_\text{blue}(C,p)=(x_p-\hat{x}_p)\cdot p_C$. Now we apply Observation~\ref{obs:GreedyAssignment} to find the best assignment $a$. Crucially, we set up the order of the right hand side nodes so that we first process the red integral nodes and then the blue fractional ones. Note that the assignment that this greedy procedure computes is optimal and moreover, the assignments for red nodes will be integral. For each container $C$ on the left, we define $\hat{y}_C$ to be the total red multiplicity of its targets under this optimal assignment. Then $\text{def}(G_2(\hat{x},\hat{y}))=0$ and $\text{def}(G_2(x-\hat{x},y-\hat{y}))=\text{def}(G_2(x,y))$. 
In the graph $G_1(b,y)$, all multiplicities are integral anyway, so we can trivially find an integral vector $\hat{b}$ so that $\text{def}(G_1(\hat{b},\hat{y}))=0$ and $\text{def}(G_1(b-\hat{b},y-\hat{y}))=\text{def}(G_1(b,y))$. \end{proof} Define $\text{supp}(x) := \{ p \in \pazocal{P} : x_p > 0\}$ as the support of $x$ and $\textrm{frac}(x) := \{ p \in \pazocal{P}: 0<x_p<1\}$ as the patterns in $x$ that are still fractional. Now we have enough notation to state our main technical theorem: \begin{theorem} \label{thm:OneIteration} Let $(s,b)$ be an instance with $s_1,\ldots,s_n \geq s_{\min}>0$. Let $y \in \setZ_{\geq 0}^{\pazocal{C}}$ and $x \in [0,1[^{\pazocal{P}}$ with $|\text{supp}(x)|\geq L\log(\frac{1}{s_{\min}})$, where $L$ is a large enough constant. Then there is a randomized polynomial time algorithm that finds $\tilde{y} \in \setZ_{\geq 0}^{\pazocal{C}}$ and $\tilde{x} \in \setR_{\geq 0}^{\pazocal{P}}$ with $\bm{1}^T\tilde{x} = \bm{1}^Tx$ and $\textrm{def}(\tilde{x},\tilde{y}) \leq \textrm{def}(x,y) + O(1)$ while $|\textrm{frac}(\tilde{x})| \leq \frac{1}{2}|\textrm{frac}(x)|$. \end{theorem} While it will take the remainder of this paper to prove the theorem, the algorithm behind the statement can be split into the following two steps: \begin{enumerate} \item[(I)] \emph{Rebuilding the container assignment}: We will change the assignments for the pair $(x,y)$ so that for every container of a size class $\sigma$ that the patterns in $\textrm{supp}(x)$ use, they use nearly $(\frac{1}{\sigma})^{1/2}$ copies in total, while no individual pattern in $\textrm{supp}(x)$ contains more than $(\frac{1}{\sigma})^{1/4}$ copies of the same container. \item[(II)] \emph{Application of Lovett-Meka}: We will apply the Lovett-Meka algorithm to sparsify the fractional solution $x$. Here, the vectors $v_j$ that comprise the input for the LM-algorithm will correspond to sums over intervals of rows of the constraint matrix $A$. 
Recall that the error bound provided by Lovett-Meka crucially depends on the lengths $\|v_j\|_2$. The procedure in $(I)$ will ensure that the Euclidean length of those vectors is small. \end{enumerate} Once we have proven Theorem~\ref{thm:OneIteration}, the main result easily follows: \begin{proof}[Proof of Theorem~\ref{thm:MainContribution}] We compute a fractional solution $x$ to \eqref{eq:GilmoreGomory} of cost $\bm{1}^Tx \leq OPT_f+1$. In fact, we can assume that $x$ is a basic solution to the LP and hence $|\textrm{supp}(x)| \leq n$. We construct a container assignment $y$ consisting only of singletons, see Lemma~\ref{lem:StartingSolution}. Then for $\log(n)$ iterations, we first use Lemma~\ref{lem:SplittingOffFractionalPart} to split the current solution $x$ as $x = x^{\textrm{int}} + x^{\textrm{frac}}$ where $x^{\textrm{int}}_p = \lfloor x_p \rfloor$ and obtain a corresponding split $y = y^{\textrm{int}} + y^{\textrm{frac}}$. Then we run Theorem~\ref{thm:OneIteration} with input $(x^{\textrm{frac}},y^{\textrm{frac}})$ and denote the result by $(\tilde{x}^{\textrm{frac}},\tilde{y}^{\textrm{frac}})$. Finally we update $x := x^{\textrm{int}} + \tilde{x}^{\textrm{frac}}$ and $y := y^{\textrm{int}} + \tilde{y}^{\textrm{frac}}$. As soon as $|\textrm{frac}(x)| \leq O(\log \frac{1}{s_{\min}})$, we can just buy every pattern in frac$(x)$. In each iteration the deficiency increases by at most $O(1)$. At the end, we use Lemma~\ref{lem:PackingItemsWith2DefItems} to actually pack the items into bins. We arrive at a solution of cost $OPT_f + O(\log \max\{ n, \frac{1}{s_{\min}} \})$ which is enough, using Lemma~\ref{lem:AssumptionSizesAtLeast1-n}. \end{proof} We will describe the implementation of $(I)$ in Section~\ref{sec:RebuildingContainers} and then $(II)$ in Section~\ref{sec:ApplyingLM}. \section{Rebuilding the container assignment\label{sec:RebuildingContainers}} In this section we assume that we are given $x\in [0,1[^\pazocal{P}$ with $|\text{supp}(x)|=m$. 
To ease notation, we will only write the nonzero parts of $x$, so that if $\text{supp}(x)=\{p_1,p_2,\ldots,p_m\}$, then $x=(x_{p_i})_{i=1}^m$. We update $x$ by altering the patterns that make up its support. Even though some patterns could become identical, we continue to treat them as separate patterns. Originally, we had defined $A$ as the incidence matrix of the Gilmore-Gomory LP in \eqref{eq:GilmoreGomory} where the rows correspond to items. Due to our 2-stage packing, we actually consider the patterns to be multi-sets of containers, not items anymore. Hence, let us for the rest of the paper redefine the meaning of $A$. Now, the rows of $A$ correspond to the containers in $\pazocal{C}$ ordered from largest to smallest, and the columns represent the patterns in $\text{supp}(x)$. As we perform the grouping and container-forming operations, we update the columns of the matrix. The resulting columns then yield a new fractional solution $\tilde{x}$ by taking $x_{p_i}$ copies of the pattern now in column $i$. We will now describe our grouping and container reassignment operations, keeping track of what happens to the fractional solution as well as to the corresponding matrix. First, we need a lemma that tells us how rebuilding the fractional solution affects the deficiency. To have some useful notation, define $\text{mult}(C,x):=\sum_{p\in\pazocal{P}}\text{mult}(C,p)=\sum_{p\in\pazocal{P}}x_pp_C$ to be the number of times that the patterns cover container $C \in \pazocal{C}$. Now, if $\sum_{s(C)\ge s}\tilde{y}_C\le \sum_{s(C)\ge s}y_C$ for all $s\ge 0$, then we write $\tilde{y} \preceq y$. Moreover, if $\sum_{s(C)\ge s}\text{mult}(C,\tilde{x})\ge \sum_{s(C)\ge s}\text{mult}(C,x)$ for all $s \geq 0$, then we write $\tilde{x}\succeq x$. Observe that if $\tilde{y}\preceq y$ and $\tilde{x}\succeq x$, then $\text{def}(G_2(\tilde{x},\tilde{y})) \leq \text{def}(G_2(x,y))$. 
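The dominance orders compare cumulative size profiles, and since these sums are step functions of $s$, it suffices to check them at the sizes that actually occur. A small Python sketch (hypothetical helper names, for illustration only), representing a multiset of containers as a dict from container size to multiplicity:

```python
def tail_count(profile, s):
    """Total multiplicity of containers of size >= s in the profile."""
    return sum(cnt for size, cnt in profile.items() if size >= s)

def dominated(new, old):
    """Check that `new` precedes `old` in the cumulative order: for every
    threshold s, `new` uses at most as many containers of size >= s as
    `old` does.  Testing the sizes that occur in either profile suffices."""
    return all(tail_count(new, s) <= tail_count(old, s)
               for s in set(new) | set(old))
```

For instance, replacing one container of size $0.5$ by one of size $0.4$ preserves the order, while introducing a container larger than any in the old profile does not.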
\begin{lemma} \label{lem:DefIncrease}Suppose that $\sigma \ge 0$ and $t_\sigma\ge 0$ are such that $$\sum_{s(C)\ge s}\text{mult}(C,\tilde{x}) \ge\left\{\begin{array}{lll}\sum_{s(C)\ge s}\text{mult}(C,x) & \text{ if } & s> \sigma\\ \sum_{s(C)\ge s}\text{mult}(C,x)-t_\sigma & \text{ if } & s\le \sigma\end{array}\right.$$ Then $\text{def}(\tilde{x},y)\le \text{def}(x,y)+\sigma \cdot t_\sigma$. \end{lemma} \begin{proof} Let $C_0$ be the largest container of size at most $\sigma$, and let $x'$ be the vector representing $t_\sigma$ copies of the pattern containing a single copy of $C_0$. Then $\tilde{x}+x'\succeq x$, and so $\text{def}(G_2(\tilde{x}+x',y))\le\text{def}(G_2(x,y))$. But if $y'$ is the vector representing $t_\sigma$ copies of $C_0$, then $\text{def}(G_2(\tilde{x},y))=\text{def}(G_2(\tilde{x}+x',y+y'))$, since we can find an optimal assignment taking the containers in $y'$ to those of $x'$. Since the total size of $y'$ is at most $\sigma t_\sigma$, we have $\text{def}(G_2(\tilde{x},y))\le \text{def}(G_2(\tilde{x}+x',y))+\sigma t_\sigma \le \text{def}(G_2(x,y))+ \sigma t_\sigma$, and therefore $\text{def}(\tilde{x},y)\le \text{def}(x,y)+\sigma t_\sigma$. \end{proof} If $\sigma$ is a power of 2, say $\sigma = 2^{-\ell}$ for $\ell \in \setZ_{\geq 0}$, then we say the \emph{size class} of $\sigma$ is the set of items with sizes between $\frac{1}{2}\sigma$ and $\sigma$. In the next lemma, we round containers in patterns down so that each container type in size class $\sigma$ is either not used at all or is used at least $\frac{\delta}{\sigma}$ times. \begin{lemma}[Grouping] Let $(s,b)$ be a bin packing instance with $y \in \mathbb{Z}_{\geq 0}^{\pazocal{C}}$ and $x \in [0,1[^{\pazocal{P}}$.
For any size class $\sigma$ and $\delta>0$, we can find $\tilde{x} \in [0,1[^{\pazocal{P}}$ so that \begin{enumerate} \item $\bm{1}^T\tilde{x} = \bm{1}^Tx$ \item $|\textrm{supp}(\tilde{x})| \le |\textrm{supp}(x)|$ \item For each container type $C$ in size class $\sigma$, either $\text{mult}(C,\tilde{x})=0$ or $s(C)\cdot\text{mult}(C,\tilde{x})\geq \delta$. In all other size classes, the multiplicities of containers in patterns do not change. \item $\textrm{def}(\tilde{x},y) \leq \textrm{def}(x,y) + O(\delta)$. \end{enumerate} \end{lemma} \begin{proof} Assume containers are sorted by size, from largest to smallest. Define $S_\delta$ to be the set of containers in size class $\sigma$ not satisfying condition (3) above. In other words, \\$S_\delta:=\{C \text{ in size class } \sigma\mid 0<s(C)\cdot\text{mult}(C,x)<\delta\}$. For a subset $H\subset S_\delta$, define the weight of $H$ to be $w(H):=\sum_{C\in H} s(C)\cdot \text{mult}(C,x)$. Note that, by the definition of $S_\delta$, the weight of a single container is less than $\delta$. Hence we can partition $S_\delta=H_1\dot\cup H_2 \dot\cup ... \dot\cup H_r$ so that: \begin{enumerate} \item $w(H_k)\in[2\delta,3\delta], \forall k=1,...,r-1$. \item $w(H_r)\le 3\delta$. \item $C\in H_k, C'\in H_{k+1} \textrm{ implies } s(C)\ge s(C')$. \end{enumerate} For each $k=1,...,r-1$ and container $C \in H_k$, we replace containers of type $C$ in all patterns $p \in \text{frac}(x)$ with the smallest container type appearing in $H_k$. For all $C \in H_r$, remove containers of type $C$ from all patterns $p \in \text{frac}(x)$. Call the updated vector $\tilde{x}$. We see immediately that $\bm{1}^T\tilde{x} = \bm{1}^Tx$ and $|\textrm{supp}(\tilde{x})|\le |\textrm{supp}(x)|$. Moreover, since every container type $C$ appearing in $\tilde{x}$ now has an entire group using it, and the size of each container decreased by at most a factor of $2$ when rounding down (all containers in a group lie in the same size class), we have $s(C)\cdot \text{mult}(C,\tilde{x})\ge \delta$, and so condition (3) is satisfied.
To complete the proof, it remains to show that $\textrm{def}(G_2(\tilde{x},y))\le \textrm{def}(G_2(x,y)) + O(\delta)$. Now, for any $i$, there is at most one group $H_k$ whose containers (partly) changed from being larger than $s(C_i)$ to smaller. The weight of this group is at most $3\delta$, and so $\sum_{j\le i}\text{mult}(C_j,x)-\sum_{j\le i} \text{mult}(C_j,\tilde{x})\le\frac{6\delta}{\sigma}$. Since this holds for all $i$, we can therefore apply Lemma~\ref{lem:DefIncrease} to conclude that $\textrm{def}(G_2(\tilde{x},y))\le \textrm{def}(G_2(x,y)) + O(\delta)$. \end{proof} We now remark what happens to the associated matrix $A$ under this grouping operation. Write $A,\tilde{A}$ as our original and updated matrices, and $A_C,\tilde{A}_C$ as the rows for container $C$. For container types $C$ in size class $\sigma$, either $\tilde{A}_{C}x=0$ or $s(C)\cdot \tilde{A}_{C}x\ge \delta$. For all other size classes, $\tilde{A}_{C}=A_{C}$. In particular, notice that we have either $\|\tilde{A}_C\|_1=0$ or $s(C) \cdot \|\tilde{A}_C\|_1 \ge \delta.$ Before we introduce the next main lemma --- how to reassign containers --- we prove a useful result about decomposing packing graphs in a nice way. For a visualization of the following lemma, see Figure~\ref{fig:ExampleDecomposition}. 
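As an aside, the greedy partition of $S_\delta$ used in the grouping proof is straightforward to carry out: scan the containers from largest to smallest and close a group as soon as its weight reaches $2\delta$. The sketch below is purely illustrative (the input format, (size, multiplicity) pairs sorted by size with each weight below $\delta$, is a hypothetical choice, not the paper's data structure):

```python
def greedy_groups(containers, delta):
    """Partition containers (given as (size, mult) pairs sorted by
    size descending, each of weight size*mult < delta) into
    consecutive groups H_1, ..., H_r.  A group is closed as soon as
    its weight reaches 2*delta; since each single weight is below
    delta, every closed group has weight in [2*delta, 3*delta).
    Only the last group may be lighter."""
    groups, current, w = [], [], 0.0
    for size, mult in containers:
        current.append((size, mult))
        w += size * mult
        if w >= 2 * delta:
            groups.append(current)
            current, w = [], 0.0
    if current:  # leftover group H_r of weight < 2*delta
        groups.append(current)
    return groups

def weight(group):
    return sum(s * m for s, m in group)
```

Closing a group the moment its weight reaches $2\delta$ is exactly why the upper bound $3\delta$ holds: the last container added contributed less than $\delta$.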
\begin{figure} \begin{center} \begin{pspicture}(0,0)(5.5,8) \multido{\N=7+-1,\n=1+1}{7}{% \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](0,\N){7pt}{v\n} \rput[c](v\n){$v_{\n}$} \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](2,\N){7pt}{u\n} \rput[c](u\n){$u_{\n}$} \nput[labelsep=5pt]{0}{u\n}{$\red{0.3}\black{+}\blue{0.4}$} \nput[labelsep=5pt]{180}{v\n}{$1$} } \rput[c](4.2,3.8){\Huge{\gray{=}}} \rput[c](1,0){$(a) \; \; \textrm{mult}(v)$ } \pnode(-15pt,7.2){A} \pnode(-15pt,8){B} \ncline[linewidth=0.75pt]{->}{B}{A} \nput{90}{B}{$\textrm{mult}(v_i)$} \pnode(3,7.2){C} \pnode(3,8.5){D} \ncline[linewidth=0.75pt]{->}{D}{C} \nput{90}{D}{$\textrm{mult}_{\textrm{red}}(v_i)+\textrm{mult}_{\text{blue}}(u_i)$} \end{pspicture} \begin{pspicture}(0,0)(5.4,8) \multido{\N=7+-1,\n=1+1}{7}{% \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](0,\N){7pt}{v\n} \rput[c](v\n){$v_{\n}$} \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](2,\N){7pt}{u\n} \rput[c](u\n){$u_{\n}$} \nput[labelsep=5pt]{0}{u\n}{$\red{0.3}$} } \nput[labelsep=5pt]{180}{v1}{\red $0$} \nput[labelsep=5pt]{180}{v2}{\red $0$} \nput[labelsep=5pt]{180}{v3}{\red $0$} \nput[labelsep=5pt]{180}{v4}{\red $1$} \nput[labelsep=5pt]{180}{v5}{\red $0$} \nput[labelsep=5pt]{180}{v6}{\red $0$} \nput[labelsep=5pt]{180}{v7}{\red $1$} \ncline[linewidth=0.75pt]{->}{v4}{u1} \naput[labelsep=1pt,npos=0.5]{$0.3$} \ncline[linewidth=0.75pt]{->}{v4}{u2} \naput[labelsep=1pt,npos=0.9]{$0.3$} \ncline[linewidth=0.75pt]{->}{v4}{u3} \naput[labelsep=1pt,npos=0.9]{$0.3$} \ncline[linewidth=0.75pt]{->}{v4}{u4} \naput[labelsep=1pt,npos=0.7]{$0.1$} \ncline[linewidth=0.75pt]{->}{v7}{u4} \naput[labelsep=1pt,npos=0.5]{$0.2$} \ncline[linewidth=0.75pt]{->}{v7}{u5} \naput[labelsep=1pt,npos=0.9]{$0.3$} \ncline[linewidth=0.75pt]{->}{v7}{u6} \naput[labelsep=1pt,npos=0.9]{$0.3$} \ncline[linewidth=0.75pt]{->}{v7}{u7} \naput[labelsep=1pt,npos=0.7]{$0.2$} \rput[c](3.9,3.8){\Huge{\gray{+}}} \rput[c](1,0){$(b) \; \; 
\textrm{mult}_{\textrm{red}}(v)$ } \pnode(-15pt,7.2){A} \pnode(-15pt,8){B} \ncline[linewidth=0.75pt]{->}{B}{A} \nput{90}{B}{$\textrm{mult}_{\textrm{red}}(v_i)$} \pnode(2.7,7.2){C} \pnode(2.7,8){D} \ncline[linewidth=0.75pt]{->}{D}{C} \nput{90}{D}{$\textrm{mult}_{\textrm{red}}(u_i)$} \end{pspicture} \begin{pspicture}(0,0)(2,8) \multido{\N=7+-1,\n=1+1}{7}{% \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](0,\N){7pt}{v\n} \rput[c](v\n){$v_{\n}$} \cnode[linewidth=0.5pt,fillstyle=solid,fillcolor=lightgray](2,\N){7pt}{u\n} \rput[c](u\n){$u_{\n}$} \nput[labelsep=5pt]{0}{u\n}{$\blue{0.4}$} } \nput[labelsep=5pt]{180}{v1}{\blue $1$} \nput[labelsep=5pt]{180}{v2}{\blue $1$} \nput[labelsep=5pt]{180}{v3}{\blue $1$} \nput[labelsep=5pt]{180}{v4}{\blue $0$} \nput[labelsep=5pt]{180}{v5}{\blue $1$} \nput[labelsep=5pt]{180}{v6}{\blue $1$} \nput[labelsep=5pt]{180}{v7}{\blue $0$} \ncline[linewidth=0.75pt]{->}{v1}{u1} \naput[labelsep=1pt,npos=0.5]{$0.4$} \ncline[linewidth=0.75pt]{->}{v2}{u2} \naput[labelsep=1pt,npos=0.5]{$0.4$} \ncline[linewidth=0.75pt]{->}{v3}{u3} \naput[labelsep=1pt,npos=0.5]{$0.4$} \ncline[linewidth=0.75pt]{->}{v5}{u4} \naput[labelsep=1pt,npos=0.6]{$0.4$} \ncline[linewidth=0.75pt]{->}{v5}{u5} \naput[labelsep=1pt,npos=0.5]{$0.4$} \ncline[linewidth=0.75pt]{->}{v6}{u6} \naput[labelsep=1pt,npos=0.5]{$0.4$} \rput[c](1,0){$(c) \; \; \textrm{mult}_{\textrm{blue}}(v)$ } \pnode(-15pt,7.2){A} \pnode(-15pt,8){B} \ncline[linewidth=0.75pt]{->}{B}{A} \nput{90}{B}{$\textrm{mult}_{\textrm{blue}}(v_i)$} \pnode(2.7,7.2){C} \pnode(2.7,8){D} \ncline[linewidth=0.75pt]{->}{D}{C} \nput{90}{D}{$\textrm{mult}_{\textrm{blue}}(u_i)$} \end{pspicture} \caption{Visualization of the decomposition of the left hand side multiplicities from Lemma~\ref{decomposition}. Here, $V_{\ell}=\{v_1,\ldots,v_7\}$ and $V_{r} = \{u_1,\ldots,u_7\}$. Assume that $s(u_i) = s(v_i)$ and $s(v_1) > \ldots > s(v_7)$. Nodes are labelled with their multiplicities. 
In $(b)$ and $(c)$ we also depict the assignments corresponding to the deficiencies. \label{fig:ExampleDecomposition}} \end{center} \end{figure} \begin{lemma} \label{decomposition} Suppose $G=(V_\ell\cup V_r,E)$ is a left-integral packing graph as in Section \ref{sec:Deficiency}, and that for every $v\in V_r$, we are given red and blue multiplicities so that $\text{mult}(v)=\text{mult}_\text{red}(v)+\text{mult}_\text{blue}(v)$. Suppose further that all nodes $v\in V_r$ of size greater than $\sigma$ have $\text{mult}_\text{red}(v)=0$. Then we can find left-integral packing graphs $G_\text{red}$ and $G_\text{blue}$ with the same edges, nodes, and sizes of $G$ but with multiplicities satisfying $\text{mult}_\text{red} + \text{mult}_\text{blue}=\text{mult}$. Moreover, we have $\text{def}(G_\text{red})=0$ and $\text{def}(G_\text{blue})\le \text{def}(G)+\sigma$.\end{lemma} \begin{proof} By allowing fractional red and blue multiplicities, we can find initial values for the red and blue multiplicities of left nodes so that $\text{def}(G_\text{red})=0$ and $\text{def}(G_\text{blue})=\text{def}(G)$. To enforce integrality, we will update these multiplicities by swapping (fractional parts of) larger red nodes for smaller blue nodes. Suppose nodes on the left with positive red multiplicity are ordered by size, so that $\sigma \ge s(v_1)\ge s(v_2)\ge ... \ge s(v_\ell)$. While the multiplicities are not all integral, let $i$ be the index of the largest $v_i$ with $\text{mult}_\text{red}(v_i)$ not integral. If $i<\ell$, decrease $\text{mult}_{\text{red}}(v_i)$ to $\lfloor\text{mult}_{\text{red}}(v_i)\rfloor$, and increase $\text{mult}_\text{red}(v_{i+1})$ by the same amount. If $i=\ell$, simply decrease $\text{mult}_\text{red}(v_\ell)$ to $\lfloor\text{mult}_\text{red}(v_\ell)\rfloor.$ Notice that the deficiency of the red graph has not increased, since we are either replacing nodes with smaller nodes or decreasing the multiplicity of the last node. 
Moreover, we notice that for any size $s$, the total red multiplicity of nodes at least size $s$ has decreased by at most $1$. Therefore in the complementary blue graph, $$ \sum_{v \in V_{\ell}:s(v)\ge s}(\text{mult}'_\text{blue}(v)-\text{mult}_\text{blue}(v))\le 1.$$ The additional blue nodes we fail to pack will therefore all have size at most $\sigma$ and their total multiplicity will be at most $1$, so the deficiency of the blue graph increases by at most $\sigma$. \end{proof} A key technical ingredient for our algorithm is to be able to replace sets of identical copies of a container in patterns of $x$ by a bigger container that contains the union of the smaller containers. \begin{lemma}\label{lem:Gluing} Given a pair $(x,y)$ with $x \in \setR_{\geq 0}^{\pazocal{P}}$ and $y \in \setZ_{\geq 0}^{\pazocal{C}}$. Let $k \in \setN$ and $0<\sigma \leq 1$ be two parameters. Let $\tilde{x} \in \setR_{\geq 0}^{\pazocal{P}}$ be the vector that emerges if for all containers $C$ with $\frac{1}{2}\sigma\le s(C)\le \sigma$ and all patterns $p$ we replace $k\cdot\lfloor \frac{p_C}{k}\rfloor$ copies of $C$ by $\lfloor\frac{p_C}{k}\rfloor$ copies of the container that is $k \cdot C$. Then there is a $\tilde{y} \in \setZ_{\geq 0}^{\pazocal{C}}$ so that $\textrm{def}(\tilde{x},\tilde{y}) \leq \textrm{def}(x,y) + O(k\sigma)$. 
\end{lemma} \begin{figure} \begin{center} \psset{xunit=2.0cm,yunit=0.6cm} \begin{pspicture}(-1,-0.4)(8,5) \rput[r](-0.2,4.5){{\bf container:}} \rput[r](-0.2,0.5){{\bf patterns:}} \rput[c](0.5,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0}{0}{0.25}{1} \pnode(0.125,0){c1} } \rput[c](1.25,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0}{0}{0.23}{1} \pnode(0.125,0){c2} } \rput[c](2.0,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0}{0}{0.21}{1} \pnode(0.125,0){c3} } \rput[c](3.5,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0}{0}{0.19}{1} \pnode(0.125,0){c4} } \rput[c](4.25,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0}{0}{0.17}{1} \pnode(0.125,0){c5} } \rput[c](5.0,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0}{0}{0.17}{1} \pnode(0.125,0){c6} } \rput[c](0,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0.0}{0}{0.25}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0.25}{0}{0.25}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0.5}{0}{0.25}{1} \pnode(0.125,1){p11} \pnode(0.375,1){p12} \pnode(0.625,1){p13} \rput[c](0.5,-1){$x_{p_1} = 0.4$} } \rput[c](1.3,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0.0}{0}{0.23}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0.23}{0}{0.23}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0.46}{0}{0.23}{1} \pnode(0.125,1){p21} \pnode(0.375,1){p22} \pnode(0.625,1){p23} \rput[c](0.5,-1){$x_{p_2} = 0.4$} } \rput[c](2.6,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0.0}{0}{0.21}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0.21}{0}{0.21}{1} 
\drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0.42}{0}{0.21}{1} \pnode(0.1,1){p31} \pnode(0.3,1){p32} \pnode(0.5,1){p33} \rput[c](0.5,-1){$x_{p_3} = 0.4$} } \rput[c](3.9,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0.0}{0}{0.19}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0.19}{0}{0.19}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0.38}{0}{0.19}{1} \pnode(0.1,1){p41} \pnode(0.3,1){p42} \pnode(0.5,1){p43} \rput[c](0.5,-1){$x_{p_4} = 0.4$} } \rput[c](5.2,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.0}{0}{0.17}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.17}{0}{0.17}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.34}{0}{0.17}{1} \pnode(0.1,1){p51} \pnode(0.3,1){p52} \pnode(0.5,1){p53} \rput[c](0.5,-1){$x_{p_5} = 0.4$} } \ncline[linewidth=0.75pt]{->}{c1}{p11} \nbput[labelsep=1pt,npos=0.7]{$.4$} \ncline[linewidth=0.75pt]{->}{c1}{p12} \naput[labelsep=1pt,npos=0.7]{$.4$} \ncline[linewidth=0.75pt]{->}{c1}{p13} \naput[labelsep=1pt,npos=0.2]{$.2$} \ncline[linewidth=0.75pt]{->}{c2}{p13} \nbput[labelsep=1pt,npos=0.5]{$.2$} \ncline[linewidth=0.75pt]{->}{c2}{p21} \nbput[labelsep=1pt,npos=0.5]{$.4$} \ncline[linewidth=0.75pt]{->}{c2}{p22} \naput[labelsep=1pt,npos=0.5]{$.4$} \ncline[linewidth=0.75pt]{->}{c3}{p23} \nbput[labelsep=1pt,npos=0.7]{$.4$} \ncline[linewidth=0.75pt]{->}{c3}{p31} \nbput[labelsep=1pt,npos=0.7]{$.4$} \ncline[linewidth=0.75pt]{->}{c3}{p32} \naput[labelsep=1pt,npos=0.5]{$.2$} \ncline[linewidth=0.75pt]{->}{c4}{p32} \nbput[labelsep=1pt,npos=0.5]{$.2$} \ncline[linewidth=0.75pt]{->}{c4}{p33} \naput[labelsep=1pt,npos=0.5]{$.4$} \ncline[linewidth=0.75pt]{->}{c4}{p41} \naput[labelsep=1pt,npos=0.5]{$.4$} \ncline[linewidth=0.75pt]{->}{c5}{p42} \nbput[labelsep=1pt,npos=0.5]{$.4$} 
\ncline[linewidth=0.75pt]{->}{c5}{p43} \naput[labelsep=1pt,npos=0.5]{$.4$} \ncline[linewidth=0.75pt]{->}{c5}{p51} \naput[labelsep=1pt,npos=0.3]{$.2$} \ncline[linewidth=0.75pt]{->}{c6}{p51} \nbput[labelsep=1pt,npos=0.3]{$.2$} \ncline[linewidth=0.75pt]{->}{c6}{p52} \naput[labelsep=1pt,npos=0.6]{$.4$} \ncline[linewidth=0.75pt]{->}{c6}{p53} \naput[labelsep=1pt,npos=0.5]{$.4$} \end{pspicture} \begin{pspicture}(-1,-0.8)(8,8.5) \pnode(3,7.25){A} \pnode(3,5.25){B} \ncline[linecolor=gray,linewidth=5pt,arrowsize=11pt]{->}{A}{B} \naput{reassignment} \rput[r](-0.2,4.5){{\bf super-container:}} \rput[r](-0.2,0.5){{\bf patterns:}} \rput[c](1.3,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0}{0}{0.25}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0.25}{0}{0.23}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0.48}{0}{0.21}{1} \drawRect{linewidth=1.5pt,fillstyle=none}{0}{0}{0.69}{1} \pnode(0.34,0){c1} \rput[c](-0.2,0.5){$C_1$} } \rput[c](3.5,4){ \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0}{0}{0.19}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.19}{0}{0.17}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.36}{0}{0.17}{1} \drawRect{linewidth=1.5pt,fillstyle=none}{0}{0}{0.53}{1} \pnode(0.26,0){c2} \rput[c](-0.2,0.5){$C_2$} } \rput[c](0,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!70!white}{0.0}{0}{0.75}{1} \pnode(0.375,1){p1} \rput[c](0.5,-1){$x_{p_1} = 0.4$} } \rput[c](1.3,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!50!white}{0.0}{0}{0.69}{1} \pnode(0.34,1){p2} \rput[c](0.5,-1){$x_{p_2} = 0.4$} } \rput[c](2.6,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!35!white}{0.0}{0}{0.63}{1} \pnode(0.31,1){p3} \rput[c](0.5,-1){$x_{p_3} = 0.4$} } \rput[c](3.9,0){ 
\drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!25!white}{0.0}{0}{0.57}{1} \rput[c](0.5,-1){$x_{p_4} = 0.4$} } \rput[c](5.2,0){ \drawRect{linewidth=0.75pt}{0}{0}{1}{1} \drawRect{linewidth=0.75pt,fillstyle=solid,fillcolor=black!15!white}{0.0}{0}{0.51}{1} \rput[c](0.5,-1){$x_{p_5} = 0.4$} } \nccurve[angleA=-135,angleB=45,linewidth=1pt]{->}{c2}{p1} \nbput{$.4$} \nccurve[angleA=-135,angleB=45,linewidth=1pt]{->}{c2}{p2} \naput[npos=0.6,labelsep=0pt]{$.4$} \nccurve[angleA=-135,angleB=45,linewidth=1pt]{->}{c2}{p3} \naput{$.2$} \end{pspicture} \caption{Visualization of the reassignment in Lemma~\ref{lem:Gluing} for $k=3$. The upper packing graph is the red part of $G_2(x,y)$ with the optimal assignment $a$, assuming that each container has multiplicity $1$. The lower graph gives the red part of $G_2(\tilde{x},\tilde{y})$ with the constructed assignment $a$ that we give in the analysis. Darker colors indicate larger containers.} \end{center} \end{figure} \begin{proof} Consider the graph $G_2(x,y)$ as in section \ref{sec:Deficiency}. For every right node $(C,p)$, we assign $\text{mult}_\text{red}(C,p)= k\cdot \lfloor \frac{p_C}{k} \rfloor \cdot x_p$ for $C$ in size class $\sigma$, and $\text{mult}_\text{red}(C,p)=0$ for all other $C$. We set $\text{mult}_\text{blue}(C,p)=\text{mult}(C,p)-\text{mult}_\text{red}(C,p)$. By Lemma \ref{decomposition}, we can find integral red and blue multiplicities of left nodes so that $\text{def}(G_\text{red})=0$ and $\text{def}(G_\text{blue})\le \text{def}(x,y)+\sigma$. The red and blue graphs can now be treated separately, and so we restrict our attention to the red graph since it represents precisely the containers that we want to reassign. For all nodes $(C,p)$ on the right of the red graph, we combine the copies of $C$ in pattern $p$ into containers of type $k\cdot C$. For clarity we refer to these larger containers as super-containers. 
Similarly, we look at the containers of the left nodes, ordered from largest to smallest and taken with multiplicity. In consecutive sets of cardinality $k$, we combine the containers into super-containers, except perhaps fewer than $k$ of the smallest ones. Write $C_i$ to represent the $i$th largest super-container on the left. We claim that all super-containers except $C_1$ can be packed into the right nodes. To see how to pack them, let $a$ be an optimal assignment in the original red graph. For all $i$, $a$ assigned the containers making up $C_i$ to some combination of large-enough containers of total multiplicity $k$. All such containers became part of super-containers in the new graph, and the total multiplicity of their contribution to these super-containers is exactly $1$. These super-containers are not necessarily all large enough to fit $C_i$, but they are all large enough to fit $C_{i+1}$, and this is exactly where we send $C_{i+1}$. With this assignment, at most one super-container and $k$ containers were left unpacked, and so the deficiency of the updated red graph is at most $2k\sigma$. For all containers $C$, we let $\tilde{y}_C=\text{mult}_\text{red}(C)+\text{mult}_\text{blue}(C)$. We note that we only changed $y$ by rearranging the containers, and in particular we did not change the item multiplicities. Therefore we know that $\text{def}(G_1(\tilde{y}))=\text{def}(G_1(y))$. With this definition of $\tilde{y}$, we note that $G_{\text{red}}+G_{\text{blue}}$ is precisely $G_2(\tilde{x},\tilde{y}).$ We therefore have $\text{def}(G_2(\tilde{x},\tilde{y}))\le \text{def}(G_2(x,y))+O(k\sigma)$, and so the total increase in deficiency is at most $O(k\sigma)$. \end{proof} We are now ready to give our second main lemma of this section. \begin{lemma} [Reassigning containers] Suppose $x\in \setR_{\ge 0}^{\pazocal{P}}, y\in \setZ_{\ge 0}^{\pazocal{C}}$, and $\sigma<2^{-4}$. 
Then we can combine containers in size class $\sigma$ in $x$ and $y$ into larger containers, yielding new solutions $\tilde{x}, \tilde{y}$ satisfying the following conditions. \begin{enumerate} \item $\mathbf{1}^T\tilde{x}=\mathbf{1}^Tx$. \item $|\text{supp}(\tilde{x})|\le |\text{supp}(x)|$. \item For all patterns $p\in \text{supp}(\tilde{x})$ and containers $C$ in size class $\sigma$, $p_C\le (\frac{1}{\sigma})^{1/4}$. \item Multiplicities of containers in smaller size classes in patterns in supp$(x)$ are not affected. \item $\text{def}(\tilde{x},\tilde{y})\leq \text{def}(x,y)+O(\sigma^{3/4})$. \end{enumerate} \end{lemma} \begin{proof} We apply Lemma~\ref{lem:Gluing} with parameter $k=\lfloor(\frac{1}{\sigma})^{1/4}\rfloor$ and obtain a pair $(\tilde{x},\tilde{y})$ so that $\text{def}(\tilde{x},\tilde{y})\le \text{def}(x,y)+O(k\sigma)\le \text{def}(x,y)+O(\sigma^{3/4})$, and so condition $(5)$ is satisfied. Since we have updated $x$ by altering the patterns in its support, conditions $(1)$ and $(2)$ are also satisfied. In the process of Lemma~\ref{lem:Gluing}, we decreased $p_C$ for $C$ in size class $\sigma$ to at most $k$. Since $\sigma<2^{-4}$, we know that $k\ge 2$, and so the containers we created are in strictly larger size classes. Therefore conditions $(3)$ and $(4)$ are satisfied. \end{proof} Let us say briefly what the container reassignment does to the associated matrix $A$. If $\tilde{A}_C$ is any row of the updated matrix corresponding to a container in size class $\sigma$, we know $\tilde{A}_C$ is entrywise less than or equal to $A_C$ and $\|\tilde{A}_C\|_{\infty}\le (\frac{1}{\sigma})^{1/4}$. In all rows corresponding to smaller size classes, $\tilde{A}_C=A_C$. Before we talk about applying Lovett-Meka, we want to summarize the results of our grouping and container reassignment.
We summarize the procedure: \begin{enumerate} \item[(1)] For size classes $s_{\min} \le \sigma \le 2^{-72}$, starting with the smallest, do: \begin{enumerate} \item[(2)] Group the containers in size class $\sigma$ with $\delta=\sqrt{\sigma}$. \item[(3)] Whenever we find more than $(\frac{1}{\sigma})^{1/4}$ copies of the same container in one pattern, we put them together in a larger container. \end{enumerate} \item[(4)] For $\sigma>2^{-72}$, group the containers in size class $\sigma$ with $\delta=64$. \end{enumerate} In the following we will call a size class $\sigma$ \emph{small} if $\sigma\le 2^{-72}$ and \emph{large} otherwise. First note that the increase in deficiency of the entire procedure is at most\\ \[ \sum_{\sigma\in2^{-\setN}}(O(\sigma^{1/2})+O(\sigma^{3/4}))+72 \cdot 64 =O(1). \] Let $A$ be the matrix we obtain at the end of this procedure. In addition, we would like to keep much of the group structure that was created during the procedure. Define the \emph{shadow incidence matrix} $\tilde{A}$ to be the matrix that agrees with $A$ on large size classes, but for small size classes represents the incidences \emph{after step $(2)$}, but \emph{before step $(3)$}. We can imagine that whenever a container is put into a larger container, its incidence entry \emph{remains} in $\tilde{A}$. In particular a container might be put into containers iteratively and hence it may contribute to several incidences in $\tilde{A}$ but only one in $A$. Note that $\tilde{A}$ is entrywise at least as large as $A$. For all containers $C\in \pazocal{C}$, let $A_C$ denote the row of $A$ corresponding to $C$, and $\tilde{A}_C$ the corresponding row of $\tilde{A}$. Recall that $A$ and $\tilde{A}$ contain columns for patterns in $\textrm{frac}(x)$. 
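As a quick numerical sanity check on the $O(1)$ bound above: the geometric sums over the size classes $\sigma = 2^{-\ell}$ converge to a small constant (the hidden constants of the $O(\cdot)$ terms are not modeled here; this is only a check of convergence).

```python
# Sum of sigma^(1/2) + sigma^(3/4) over size classes sigma = 2^(-l),
# l = 1, 2, ...; the tail beyond l = 400 is negligible.
total = sum(2.0 ** (-l / 2) + 2.0 ** (-l * 3 / 4) for l in range(1, 400))
# total converges to roughly 3.88, so the whole expression is O(1).
```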
Now, let us summarize the properties that the container-forming procedure provides: \begin{enumerate} \item[(A)] For a container $C$ in size class $\sigma$ one has $\|\tilde{A}_C\|_1 \geq (\frac{1}{\sigma})^{1/2}$ if $\sigma$ is small, and $\|\tilde{A}_C\|_1=\|A_C\|_1 \geq 64$ if $\sigma$ is large. \item[(B)] For a container $C$ in a small size class $\sigma$, and column $j=1,...,m,$ one has $A_{Cj} \leq (\frac{1}{\sigma})^{1/4}$. \item[(C)] One has \[ \sum_{i=1}^s \|\tilde{A}_{C_i}\|_1 \cdot s(C_i)^{17/16} \leq 24\sum_{i=1}^s \|A_{C_i}\|_1 \cdot s(C_i). \] \end{enumerate} Here (A) follows from the fact that after step (2), we have at least $(\frac{1}{\sigma})^{1/2}$ incidences for each container. (B) follows since after step (3), there are at most $(\frac{1}{\sigma})^{1/4}$ containers of each type in a pattern. The condition in (C) can be understood as follows: if we have a container of size $s(C)$, then the containers in it may appear many times in $\tilde{A}$ but only in smaller size classes. By discounting smaller incidences, we can upper-bound the contribution of the shadow incidences by the contribution of the actual containers. To make this more concrete, consider a container $C$ appearing in $A$ in some size class. If this container came from $k$ smaller containers, then those smaller containers have size at most $2\cdot \frac{s(C)}{k}$. Here the factor $2$ comes from the fact that during grouping our container could have been rounded down by a factor of $2$. Therefore the contribution of the shadow incidences of these smaller containers to the left hand side is $(\frac{2s(C)}{k})^{17/16}\cdot k=s(C)^{17/16}\cdot 2^{17/16}k^{-1/16}.$ But we chose the parameters so that whenever we combine $k$ containers we have $k\ge 2^{18}$ and so the contribution is at most $2^{-1/16}\cdot s(C)^{17/16}$. The shadow incidences $\ell$ levels down similarly contribute at most $(2^{-1/16})^\ell\cdot s(C)^{17/16}$.
Then the total contribution of the shadows of $C$ to the left hand side of property (C) is at most \[ \sum_{\ell \geq 0} (2^{-1/16})^\ell s(C)^{17/16}\leq 24 \cdot s(C)^{17/16} \leq 24\cdot s(C). \] \section{Applying the Lovett-Meka algorithm\label{sec:ApplyingLM}} Using the grouping and container reassignment above, we can replace $y$ with $\tilde{y}$ and $x$ with $\tilde{x}$ so that the incidence matrix $A$ and shadow matrix $\tilde{A}$ satisfy properties $(A)-(C)$. We now want to create intervals of the rows of $A$ and $\tilde{A}$ in a nice way so that we can apply Lovett-Meka and make $x$ more integral. Formally, we will argue the following: \begin{claim} Suppose $x\in [0,1[^\pazocal{P},y\in \setZ^\pazocal{C}_{\geq 0}$, $A$ is the incidence matrix of $x$, and $\tilde{A}$ is a matrix so that $A$ and $\tilde{A}$ satisfy conditions $(A)+(B)+(C)$. Then there is a randomized polynomial time algorithm to find a vector $\tilde{x}$ satisfying \begin{itemize*} \item $\bm{1}^T\tilde{x} = \bm{1}^Tx$ \item $\textrm{def}(\tilde{x},y) \leq \textrm{def}(x,y) + O(1)$ \item $|\textrm{frac}(\tilde{x})| \leq \frac{1}{2} |\textrm{frac}(x)|$ \end{itemize*} \end{claim} Suppose the containers appearing in the patterns in supp$(x)$ are $C_1,...,C_s$, ordered from largest to smallest. As we fix the fractional solution $x$ for now, let us denote $n(i) := \sum_{p \in \textrm{frac}(x)} p_{C_i} = \|A_{C_i}\|_1$ as the number of incidences of container $C_i$ in $A$. Similarly, let $\tilde{n}(i) = \|\tilde{A}_{C_i}\|_1$ be the number of incidences in the shadow matrix $\tilde{A}$. Again, we have $n(i) \leq \tilde{n}(i)$ for all $i$. Finally, let us denote $\tilde{n}_{\sigma} := \sum_{i\textrm{ in class }\sigma}\tilde{n}(i)$ as the total number of shadow incidences that occur for size class $\sigma$, and analogously $n_{\sigma} := \sum_{i\textrm{ in class }\sigma} n(i)$. For a fixed constant $K>0$, and for each small size class $\sigma$, we first create level $0$ intervals of the rows as follows.
For any row $i$ satisfying $\tilde{n}(i)>\frac{1}{2}K(\frac{1}{\sigma})^{17/16}$, we let $\{i\}$ be its own interval. We then subdivide the remaining rows into intervals so that $\tilde{n}(I)\le K(\frac{1}{\sigma})^{17/16}$ for each interval $I$. We need a total of at most $\frac{4}{K}\sigma^{17/16}\tilde{n}_\sigma+1$ intervals on level $0$. Now, given an interval $I$ on level $\ell$ with $|I|>1$, we will subdivide $I$ into at most $3$ intervals on level $\ell+1$. First, for any row $i \in I$ with $\tilde{n}(i)>(\frac{1}{2})^{\ell+1}K({\frac{1}{\sigma}})^{17/16}$, let $\{i\}$ be its own interval. We then subdivide the remaining rows into intervals so that $\tilde{n}(I)\le (\frac{1}{2})^{\ell}K(\frac{1}{\sigma})^{17/16}$. Since none of the rows $i\in I$ became its own interval on level $\ell$, we also know that $\tilde{n}(i)\le (\frac{1}{2})^{\ell}K(\frac{1}{\sigma})^{17/16}$, and so in fact this bound holds for every interval on level $\ell+1$. The number of intervals on level $\ell$ is at most $3^\ell\cdot (\frac{4}{K}\sigma^{17/16}\tilde{n}_\sigma+1)$. For large size classes $\sigma$, create an interval for each row $\{i\}$. Due to the grouping procedure, the size of each interval is at least $64$. All such intervals are level zero, and we do not create any higher levels. Let us abbreviate all intervals on level $\ell$ for size class $\sigma$ as $\pazocal{I}_{\sigma,\ell}$. We denote $\pazocal{I}_{\sigma} := \bigcup_{\ell \geq 0} \pazocal{I}_{\sigma,\ell}$ as the whole family for size class $\sigma$ and $\pazocal{I} := \bigcup_{\sigma} \pazocal{I}_{\sigma}$ as the union over all size classes. For an interval $I$, we define the vector \[ v_I := \sum_{i \in I} A_i \] as the sum of the corresponding rows in the incidence matrix. For an interval $I \in \pazocal{I}_{\sigma,\ell}$, we define $\lambda_I := \ell$ (that means the parameter just denotes the level on which it lives). 
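The recursive interval construction above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the input is a hypothetical list of shadow row sums $\tilde{n}(i)$ for one small size class, and `T` stands for the threshold $K(\frac{1}{\sigma})^{17/16}$. Rows heavier than half the current bound become singletons; the rest are grouped greedily, and every non-singleton interval is re-split on the next level with the singleton threshold halved.

```python
def subdivide(rows, n, bound):
    """One subdivision round: every row with weight n[i] > bound/2
    becomes its own singleton interval; the remaining rows are
    grouped greedily into consecutive intervals of weight <= bound."""
    intervals, buf, w = [], [], 0.0
    for i in rows:
        if n[i] > bound / 2:
            if buf:
                intervals.append(buf)
                buf, w = [], 0.0
            intervals.append([i])
        else:
            if w + n[i] > bound:
                intervals.append(buf)
                buf, w = [], 0.0
            buf.append(i)
            w += n[i]
    if buf:
        intervals.append(buf)
    return intervals

def build_intervals(n, T):
    """Return (interval, level) pairs for one size class, where n[i]
    plays the role of the shadow row sum and T plays K*(1/sigma)^(17/16).
    Children of a level-l interval are built with group bound T*(1/2)^l,
    i.e. a singleton threshold of T*(1/2)^(l+1).  Assumes n[i] > 0,
    so the recursion terminates once the bound drops below min(n)."""
    result = []

    def refine(I, level):
        result.append((I, level))
        if len(I) > 1:
            for J in subdivide(I, n, T * 0.5 ** level):
                refine(J, level + 1)

    for I in subdivide(list(range(len(n))), n, T):
        refine(I, 0)
    return result
```

For instance, with row sums `[10, 1, 1, 1, 1, 8]` and `T = 8`, the heavy rows `0` and `5` become level-0 singletons, while the block `[1, 2, 3, 4]` is split over several levels until it decomposes into singletons.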
The input for the Lovett-Meka algorithm will consist of the pairs $\{ (v_I,\lambda_I)\}_{I \in \pazocal{I}}$ where we use $\lambda_I \geq 0$ as the parameter for a constraint with normal vector $v_I$. Additionally, we add a single vector $v_{\textrm{obj}} := \bm{1}$ with parameter $\lambda_{\textrm{obj}} := 0$ to control the objective function. There are two things to show. First we argue that the parameters are chosen so that the condition of the Lovett-Meka algorithm is actually satisfied: \begin{lemma} Suppose that $|\text{supp}(x)|\ge L\log(\frac{1}{s_{\min}})$. For $K,L$ large enough constants, one has \[ \sum_{I \in \pazocal{I}} e^{-\lambda_I^2/16} +1 \le \frac{1}{16} \cdot |\textrm{supp}(x)| \] \end{lemma} \begin{proof} On level $0$, we have $|\pazocal{I}_{\sigma,0}| \le \frac{4}{K}\sigma^{17/16} \cdot \tilde{n}_{\sigma}+1$ many intervals and hence on level $\ell \geq 0$ there are $|\pazocal{I}_{\sigma,\ell}| \le 3^{\ell} \cdot (\frac{4}{K}\sigma^{17/16}\cdot \tilde{n}_{\sigma}+1)$ many. We can calculate that \begin{eqnarray*} \sum_{I \in \pazocal{I}} e^{-\lambda_I^2/16} &=& \sum_{\sigma \text{ small}} \sum_{\ell \geq 0} e^{-\ell^2/16} \cdot |\pazocal{I}_{\sigma,\ell}| + \sum_{\sigma \text{ large}} \sum_{\ell \geq 0} e^{-\ell^2/16} \cdot |\pazocal{I}_{\sigma,\ell}| \\ &\le& \sum_{\sigma \text{ small}} \sum_{\ell \geq 0} e^{-\ell^2/16} \cdot 3^{\ell} \cdot (\frac{4}{K}\sigma^{17/16} \cdot \tilde{n}_{\sigma}+1) + \sum_{\sigma \text{ large}} |\pazocal{I}_{\sigma,0}|\\ &\stackrel{\textrm{for } K,L \textrm{ large enough}}{\leq}& \frac{1}{64}|\text{supp}(x)|+\frac{1}{128 \cdot 24}\sum_{\sigma \text{ small}} \sigma^{17/16} \cdot \tilde{n}_{\sigma} +\frac{1}{64}|\text{supp}(x)|\\ & \stackrel{\textrm{property }(C)}{\leq}& \frac{1}{64}|\text{supp}(x)|+\frac{1}{128}\sum_{\sigma} \sigma \cdot n_{\sigma} +\frac{1}{64}|\text{supp}(x)|\\ &\leq& \frac{3}{64}\cdot |\text{supp}(x)| \le \frac{1}{16} \cdot |\textrm{supp}(x)|-1. 
\end{eqnarray*} We used that the total size for each pattern is at most $1$, and so the sum of the sizes of all incidences in the matrix $A$ is at most $|\text{supp}(x)|$. \end{proof} Now, suppose we do run the Lovett-Meka algorithm and obtain a solution $\tilde{x}$ with $|\text{frac}(\tilde{x})| \leq \frac{1}{2}|\textrm{frac}(x)|$ so that \[ |\left<v_I,x-\tilde{x}\right>| \leq \lambda_I \cdot \|v_I\|_2 \quad \forall I \in \pazocal{I} \quad \textrm{and} \quad \bm{1}^Tx = \bm{1}^T\tilde{x}. \] The following is crucial to our error analysis: the lengths $\|v_I\|_2$ that appear in the error bound are not too large and in particular the ratio $\frac{\|v_I\|_2}{\tilde{n}(I)}$ decreases with smaller container sizes. \begin{lemma} Fix an interval $I \in \pazocal{I}_{\sigma,\ell}$ where $\sigma$ is small. Then $\|v_I\|_2 \leq \tilde{n}(I) \cdot \sigma^{1/8}$. \end{lemma} \begin{proof} Recall that $v_I = \sum_{i \in I} A_{C_i}$ where each row $A_{C_i}$ has a row-sum of $\|A_{C_i}\|_1 \leq \|\tilde{A}_{C_i}\|_1$. We have $\tilde{n}(i)= \|\tilde{A}_{C_i}\|_1\geq (\frac{1}{\sigma})^{1/2}$, while $\|A_{C_i}\|_{\infty} \leq (\frac{1}{\sigma})^{1/4}$. Therefore, we have \[ \|A_{C_i}\|_2 \leq \sqrt{\|A_{C_i}\|_1 \cdot \|A_{C_i}\|_{\infty}} \leq \sqrt{\|\tilde{A}_{C_i}\|_1 \cdot \|A_{C_i}\|_{\infty}} = \|\tilde{A}_{C_i}\|_1\sqrt{\frac{\|A_{C_i}\|_{\infty}}{\|\tilde{A}_{C_i}\|_1}} \leq \tilde{n}(i) \cdot \sigma^{1/8}. \] Then by the triangle inequality $\|v_I\|_2 \leq \sum_{i \in I} \|A_{C_i}\|_2 \leq \tilde{n}(I) \cdot \sigma^{1/8}$. \end{proof} The next step is to argue that the error in terms of the deficiency is small. Recall that we still assume that containers are sorted so that $1 \geq s(C_1) \geq s(C_2) \geq \ldots \geq s(C_s)>0$. \begin{lemma}\label{lem:LMRounding} Let $C_i$ be a container in small size class $\sigma$. Then \[ \Big|\sum_{j \leq i} A_{C_j}(x-\tilde{x}) \Big| \leq O\left(\frac{1}{\sigma}\right)^{15/16}.
\] If $C_i$ is a large container, then $\sum_{j\leq i}A_{C_j}(x-\tilde{x})=0$. \end{lemma} \begin{proof} If $C_i$ is a container in small size class $\sigma$, we can write the interval $\{ 1,\ldots,i\} = \dot{\bigcup}_{I \in \pazocal{I}(i)} I$ as the disjoint union of intervals $\pazocal{I}(i) \subseteq \pazocal{I}$ from our collection so that the only intervals $I \in \pazocal{I}(i)$ with $\lambda_I > 0$ that we are using are from class $\sigma$ and we only take at most three intervals from each level; for all three such intervals on level $\ell$, we have $\|v_I\|_2 \le \tilde{n}(I)\sigma^{1/8}\le K\cdot2^{-\ell}\left(\frac{1}{\sigma}\right)^{15/16}$. Consequently, we can bound \begin{eqnarray*} \Big|\sum_{j \leq i} A_{C_j}(\tilde{x}-x)\Big| &\leq& \sum_{I \in \pazocal{I}(i)} \lambda_I \cdot \|v_I\|_2 \leq \sum_{\ell \geq 0} 3\ell \cdot K\cdot2^{-\ell}\left(\frac{1}{\sigma}\right)^{15/16}\\ &=& O(1) \cdot \left(\frac{1}{\sigma}\right)^{15/16}. \end{eqnarray*} If $C_i$ is a large container, we can write $\{1,\ldots,i\}$ as a disjoint union of intervals with $\lambda=0$, and so the statement holds. \end{proof} It remains to argue why $\textrm{def}(\tilde{x},y) \leq \textrm{def}(x,y) + O(1)$ for one application of Lovett-Meka. First notice that $A_{C_j}\tilde{x}=\text{mult}(C_j,\tilde{x})$ and $A_{C_j}x=\text{mult}(C_j,x)$. Therefore by Lemmas \ref{lem:DefIncrease} and \ref{lem:LMRounding} the rounding of each size class $\sigma$ increases the deficiency by at most $O(1)\cdot(\frac{1}{\sigma})^{15/16}\cdot \sigma=O(1)\cdot \sigma^{1/16}$. Summing over all size classes gives a total increase in deficiency \[ O(1)\cdot \sum_{\sigma \in 2^{-\setN}} \sigma^{1/16} \leq O(1). \] \bibliographystyle{alpha}
\section{Introduction} In this report, we discuss the phase diagram of the charge-neutral quark matter under beta-equilibrium constraint, taking into account the q-$\bar{\rm q}$ vector interaction and/or U(1)$_A$-anomaly-induced chiral-diquark coupling\cite{arXiv:1102.3263}. \subsection{Effect of the vector-type interaction in chiral transition} The significance of the vector-vector interaction $\sim G_V(\bar{q}\gamma^{\mu}q)^2$ for the chiral phase transition in hot and dense quark matter has long been known, and was clearly demonstrated by Kitazawa et al.\ \cite{hep-ph/0207255}, who showed that the increase of the vector coupling $G_V$ enlarges the crossover region, and the celebrated QCD critical point\cite{288973} eventually disappears completely from the phase diagram for a sufficiently large coupling: See Fig.~13 of \citen{hep-ph/0207255}. When the possible color-superconducting (CSC) phase transition is taken into consideration, the phase boundary for the chiral-to-CSC phase transition becomes a crossover in the low temperature ($T$) region including vanishing temperature, as is shown in Fig.~8 of \citen{hep-ph/0207255} by Kitazawa et al. \begin{figure}[thb] \begin{center} {\includegraphics[scale=.65]{distfunc.eps}} \caption{ Schematic figures showing that the diquark gap plays a role similar to that of temperature for the chiral transition: (a) the distribution function at finite $T$, $n = 1/(e^{(p-\mu)/T}+1)$, and (b) that for the CSC phase with a diquark gap $\Delta$. Taken from \citen{addenda}.} \end{center} \label{fig:analogy} \end{figure} The reason why the phase transition involving the chiral restoration becomes so weak and turns into a crossover is a smearing of the Fermi surface due to the diquark gap, which is analogous to the smearing at finite $T$, where the quark distribution function is smeared and the chiral transition is known to become a crossover at zero chemical potential: See Fig.~1.
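The analogy in Fig.~1 can be made concrete in a few lines of Python. The thermal occupation is the Fermi-Dirac function quoted in the caption; for the gapped case we use the standard BCS occupation $v_p^2$, a textbook form that is an assumption here since it is not written out in the text. Both drop from 1 to 0 across the Fermi surface over a width set by $T$ and $\Delta$, respectively.

```python
import math

def n_thermal(p, mu, T):
    """Fermi-Dirac occupation n = 1/(exp((p - mu)/T) + 1), Fig. 1(a)."""
    return 1.0 / (math.exp((p - mu) / T) + 1.0)

def n_gapped(p, mu, delta):
    """Schematic BCS occupation v_p^2 = (1 - xi/sqrt(xi^2 + Delta^2))/2
    with xi = p - mu (standard textbook form; an assumption here, not
    quoted in the text): the gap smears the Fermi surface much like
    temperature does."""
    xi = p - mu
    return 0.5 * (1.0 - xi / math.sqrt(xi * xi + delta * delta))
```

Both functions equal $1/2$ exactly at $p=\mu$, which is the sense in which the diquark gap mimics temperature for the chiral transition.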
Notice that a positive energy state of the fermion has a positive matrix element $\la \bar{q}q \ra$ while the vacuum chiral condensate has a negative value, and hence the presence of the positive-energy fermions acts to decrease the chiral condensate in absolute value. Such a competition between the chiral and CSC phase transitions is enhanced by the vector interaction because the repulsive term $G_V\rho^2$ postpones the emergence of high density, although the large Fermi sphere is preferable for the formation of the CSC phase\cite{hep-ph/0207255}. \subsection{Effect of electric charge neutrality} In asymmetric homogeneous quark matter under electric charge neutrality, u, d and s quarks have different densities, namely $\rho_d >\rho_u > \rho_s$, i.e., mismatched Fermi surfaces, which disfavors the pairing. However, when the system is heated, the resulting smeared Fermi surfaces at finite $T$ make the diquark pairing with mismatched Fermi surfaces possible, and hence the diquark gap shows an abnormal temperature dependence, attaining its maximum at a finite temperature\cite{hep-ph/0302142}. Then, the competition between the chiral and the diquark correlations becomes largest at an intermediate $T$, which can lead to a nontrivial impact on the chiral-to-CSC transition\cite{hep-ph/0509172,arXiv:0808.3371}. Indeed, Zhang, Fukushima and Kunihiro \cite{arXiv:0808.3371} showed that there can appear a crossover region in the phase boundary sandwiched between two novel critical end points, although the resulting phase diagram is strongly dependent on the choice of the parameters in the model Lagrangian.
\subsection{Combined effect of vector interaction and electric charge neutrality} The combined effect of the vector interaction and the charge-neutrality condition was examined by Zhang and Kunihiro\cite{arXiv:0904.1062}: Although the structure of the phase diagram depends on the choice of the strengths of the interaction, there can appear three crossover regions or two first-order critical lines whose ends terminate at critical points: the appearance of a crossover region in the low $T$ region including zero temperature is due to the vector coupling, whereas that in the intermediate $T$ region is understood to be due to the abnormal temperature dependence of the diquark gap inherent for the asymmetric quark matter with mismatched Fermi surfaces. \section{Incorporation of anomaly-induced chiral-diquark coupling} Recently, Zhang and Kunihiro\cite{arXiv:1102.3263} investigated the phase diagram taking into account the anomaly term $\mathcal{L}_{\chi{d}}^{(6)}=\frac{K'}{8}\sum_{i,j,k=1}^3\sum_{\pm} [(\bar{\psi}t_i^{f}t^{c}_k(1 \pm \gamma_5){\psi}_C)(\bar{\psi}_C t_j^{f}t^{c}_k(1 \pm\gamma_5){\psi})(\bar{\psi}_i( 1 \pm \gamma_5)\psi_j)]$, as well as the standard Kobayashi-Maskawa-'t Hooft term (with $K$ being the strength) in addition to the four-Fermi scalar-pseudoscalar ($G_S$) and the U(3)$_L\times$U(3)$_R$-invariant vector-axial-vector interactions. Then the constituent quark masses $M_i$ and the dynamical Majorana masses $\Delta_i$ are expressed in terms of the chiral and diquark condensates, $\sigma_i = \langle \bar\psi_i\psi_i \rangle$ and $s_i =\langle \bar{\psi}_C i\gamma_5 t_i^ft_i^c \psi \rangle$, as follows ($G_D$ being the diquark coupling): $M_i = m_i - 4G_S \sigma_i+K|\varepsilon_{ijk}|\sigma_j\sigma_k+ \frac{K'}{4}|s_i|^2$, and $ \Delta_i=2(G_D-\frac{K'}{4}\sigma_i)s_i $, respectively. Here $m_u=m_d=5.5$ MeV and $m_s=140.7$ MeV denote the respective current quark masses.
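As a quick illustration of how these formulas couple the two order parameters, the sketch below evaluates $M_i$ and $\Delta_i$ for hypothetical condensate and coupling values; none of the numbers are fitted parameters of the cited model, and the term $K|\varepsilon_{ijk}|\sigma_j\sigma_k$ is read here simply as the product of the other two flavors' chiral condensates.

```python
def constituent_mass(m, sigma_i, sigma_j, sigma_k, s_i, G_S, K, Kp):
    """M_i = m_i - 4 G_S sigma_i + K |eps_ijk| sigma_j sigma_k
             + (K'/4) |s_i|^2,
    with the anomaly term read as the product of the other two
    flavors' chiral condensates (an interpretation, not a quote)."""
    return m - 4.0 * G_S * sigma_i + K * sigma_j * sigma_k + 0.25 * Kp * s_i**2

def majorana_gap(sigma_i, s_i, G_D, Kp):
    """Delta_i = 2 (G_D - (K'/4) sigma_i) s_i: with the vacuum sign
    sigma_i < 0 of the chiral condensate, a positive K' enhances
    the diquark gap."""
    return 2.0 * (G_D - 0.25 * Kp * sigma_i) * s_i
```

With these sign conventions a negative $\sigma_i$ and a positive $K'$ raise both the constituent mass (via $|s_i|^2$) and the Majorana gap, which is exactly the chiral-diquark coupling discussed next.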
The notable point is that $\mathcal{L}_{\chi{d}}^{(6)}$ induces the chiral-diquark coupling, as is manifested in the fourth and second terms of $M_i$ and $\Delta_i$, respectively\cite{arXiv:1003.0408,arXiv:1007.5198}. \begin{figure} \hspace{-.0\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv0k2.eps} \centerline{(a) $K'/K=2.0$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv0k225.eps} \centerline{(b) $K'/K=2.25$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv0k24.eps} \centerline{(c) $K'/K=2.4$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv0k28.eps} \centerline{(d) $K'/K=2.8$} \end{minipage} \hspace{.0\textwidth} \caption{The phase diagrams for various values of $K'$ in the two-plus-one-flavor Nambu-Jona-Lasinio model with the charge-neutrality and $\beta$-equilibrium being kept, but without the vector interaction. The thick-solid, thin-solid and dashed lines denote the first-order, second-order and chiral crossover critical lines, respectively. Adapted from \citen{arXiv:1102.3263}. } \label{fig: pdzeroGv} \end{figure} \subsection{Without the vector term} In Fig.\ref{fig: pdzeroGv}, we first show the phase diagrams of the charge-neutral quark matter for various ratios $K'/K$ when the vector term is absent ($G_V=0$)\cite{arXiv:1102.3263}: When the ratio $K'/K$ is as small as 2.0, we have the standard phase diagram with a single critical point, although the existence of the combined chiral-diquark phase in the low-$T$ region is notable. When $\kr$ is increased up to 2.25, a crossover window opens in the intermediate-$T$ region, inherent to the charge-neutral system with mismatched Fermi surfaces; we note that the competition between the chiral and diquark correlations is enhanced by the new anomaly term.
When $K'$ is further increased, the diquark correlation becomes so large that the chiral transition in the CSC phase becomes smooth, and eventually the first-order critical line disappears completely from the phase diagram. Notice, however, that the crossover window never opens in the low-$T$ region including zero temperature, as is the case without the charge neutrality constraint\cite{arXiv:1007.5198}, which is contrary to the observation in \citen{arXiv:1003.0408}. \begin{figure} \hspace{-.0\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv025k055.eps} \centerline{(a) $K'/K=0.55$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv025k057.eps} \centerline{(b) $K'/K=0.57$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv025k07.eps} \centerline{(c) $K'/K=0.70$} \end{minipage} \hspace{-.05\textwidth} \begin{minipage}[t]{.27\textwidth} \includegraphics*[width=\textwidth]{pdggv025k1.eps} \centerline{(d) $K'/K=1.0$} \end{minipage} \caption{The phase diagrams of charge-neutral quark matter for several values of $K'/K$ and fixed $G_V/G_S=0.25$. Adapted from \citen{arXiv:1102.3263}.} \label{fig:pdGVfixed} \end{figure} \subsection{Combined effect of vector and anomaly terms} We show in Fig.\ref{fig:pdGVfixed} the phase diagrams when the vector term is included in addition to the anomaly terms, under the charge-neutrality constraint\cite{arXiv:1102.3263}. Again a crossover region appears at intermediate $T$, but this time for much smaller $\kr$. When $\kr$ is slightly increased to 0.57, a new crossover window, with a new critical point attached, opens in the low-$T$ region including zero temperature, which is again due to the chiral-diquark competition enhanced by the vector term as well as the anomaly term.
As $\kr$ is increased further, the island of the first-order critical line for the chiral restoration disappears owing to the strong diquark correlation, and the first-order critical line ceases to exist for a realistic value $\kr=1.0$. We note that the unstable region characterized by the chromomagnetic instability (bordered by the dash-dotted line) tends to shrink and ultimately vanishes from the phase diagram\cite{arXiv:1102.3263}. \section{Summary} The diquark-chiral coupling induced by the $U(1)_A$ anomaly plays a role similar to that of the vector interaction for the phase diagram of quark matter under the charge-neutrality constraint. As a result, the phase boundary for the chiral-color-superconducting (CSC) phase transition can have multiple alternating windows of crossover/first-order transition lines terminating at critical point(s). The message to be taken from the present mean-field level calculation is that QCD matter under the charge neutrality constraint is soft for the simultaneous formation of chiral and diquark condensates around the would-be phase boundary, implying a possible absence of the first-order transition line and critical points. Since the chiral transition at finite density involves a change of baryon density, the soft mode is actually a combined fluctuation of the chiral, diquark and baryon-density modes\cite{Kunihiro:2010vh}. T. K. was supported by the Grant-in-Aid for the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' and also by a Grant-in-Aid for Scientific Research (No.20540265, 2334006) from MEXT of Japan. Z. Z. was supported by the Fundamental Research Funds for the Central Universities of China.
\section{Introduction} Over the past several decades chemistry research has made large strides forward in the description of chemical reactions occurring in environments as diverse as combustion, the earth's atmosphere, and interstellar media, where temperature and pressure vary over multiple orders of magnitude \cite{Howard1981,Millar1988}. Here, crossed molecular beam experiments have been instrumental in verifying and validating theoretical models of the reactions \cite{Neumark1985,Alagia1996,Castillo1998} that range from classical trajectory calculations and semiclassical theories to explicit quantum dynamics methods. The different theoretical approaches have been extensively reviewed in the literature \cite{Nyman,Althorpe,Hu,Guo}. However, these studies were mostly restricted to temperatures above 1 K where typically many angular momentum partial waves contribute to the overall rate coefficients. Only recently has it become possible to investigate chemical reactions between small molecules at temperatures well below 1 mK \cite{Science08,Science2010} where quantum effects and threshold phenomena begin to dominate the collisional outcome. These novel capabilities pave the way to explore the fundamental principles of molecular reactivity at the very quantum limit, where a single collisional partial wave or mechanical orbital angular momentum can dominate the reaction. In fact, in many cases the collision has zero orbital angular momentum (except in collisions of identical fermions for which the lowest allowed partial wave is a $p$-wave), and thus has no centrifugal barrier. Ultracold collisions between neutral alkali-metal atoms have been studied and quantified ever since the first laser cooling experiments of the late 1980s.
Two of the most important outcomes were a thorough understanding of the non-classical scattering from the long-range dispersion potentials as well as the ability to significantly change the collision cross-section or scattering length with magnetic fields of the order of 100 G. This control of the scattering length is made possible by Feshbach resonances, weakly bound molecular states whose energy relative to that of the scattering atoms changes with magnetic field. Several excellent reviews on these topics have been published recently; see Refs.~\cite{Kohler2006,Chin2010,Kotochigova2014}. On the other hand, reactions cannot occur in atomic collisions. State-changing processes, however, are allowed. The electron or nuclear spin of the ground-state alkali-metal atoms can be reoriented, converting a fraction of a kelvin of internal energy into kinetic energy. While numerous theoretical predictions of ultracold chemical reactions have been reported since 2001 \cite{bala2001,krems,weck2006,Quemener1}, the controlled study of a chemical reaction with ultracold molecules started with the successful creation of a near quantum-degenerate gas of $^{40}$K$^{87}$Rb molecules in their absolute ro-vibrational ground state at a temperature of a few hundred nanokelvin by two JILA groups \cite{Science08}. In this experiment ultracold fermionic $^{40}$K atoms and bosonic $^{87}$Rb atoms were bound together by transferring population from a Feshbach molecular state to the absolute ground state using a single optical Raman transition. Since these molecules were created in an optical trap they can collide with each other and with residual ultracold atoms and undergo chemical reactions, essentially at the single partial wave level. The first measurement of the reaction rate coefficient between ultracold KRb molecules and K atoms was made at JILA \cite{Science2010}.
The atom-molecule reaction rate coefficient was surprisingly high (on the order of 10$^{-10}$ cm$^3$/s) even at temperatures below 1 $\mu$K. Quantum defect theory (QDT) calculations \cite{Julienne2010,Kotochigova2010} showed that the reaction is nearly universal, suggesting that the long-range van der Waals interaction plays a prominent role in the reaction dynamics. Recently, ultracold $^{87}$Rb$^{133}$Cs molecules in their rovibrational ground state were produced at Innsbruck University \cite{Nagerl2014}. These RbCs molecules are collisionally stable as atom exchange reactions to form homonuclear dimers are energetically forbidden \cite{JHutson2010}. They found that the former obeys the universal regime whereas departures from universality were noted for the latter. Explicit measurement of the reaction rate coefficient for the Li+CaH$\to$ LiH+Ca reaction was recently reported by Singh et al. at 1 K \cite{Singh}. In this case, the buffer-gas cooling method was employed for the CaH molecule, which limits the translational temperatures to about 1 K. A number of experimental groups around the world are working to create other alkali-metal and/or alkaline-earth molecules \cite{Takahashi2011,Ketterle2012,Zwierlein2012,Tiemann2013,Gupta2014} in their stable ground states using a combination of magneto-association via Feshbach resonances and two-photon Raman photoassociation. Some of these molecules can undergo exothermic reactions; others are endothermic and need to be activated, for example by transfer to excited vibrational levels. Both ultracold molecular experiments and theoretical modeling of collisions between alkali-metal and alkaline-earth molecules have focused on total or integrated reaction rates. The next logical step is to measure and calculate final-state-resolved distributions. On the theory side this means using approaches that go beyond a ``simple'' universal QDT.
In fact, a detailed understanding of the reaction mechanism and product rovibrational distribution requires a rigorous quantum treatment. While it is possible to combine such treatments with QDTs to yield full rovibrationally resolved reaction rate coefficients as demonstrated recently for the D+H$_2\to$ HD+H reaction \cite{Jisha14}, additional efforts are needed for complex systems composed of alkali-metal and alkaline-earth-metal atoms. Over the years researchers have identified several issues that can be used as guidelines to set up improved simulations. Chemical processes have been categorized by the presence or absence of a reaction barrier. Barrier-less reactions are often described by capture theory, which suggests that their dynamics is principally controlled by the long-range potentials \cite{Clary2008}. On the other hand, for some systems tunneling or coupling to a single scattering resonance or long-lived collisional complex dominates the reaction, and advanced multi-channel QDT based on statistical interpretations may be applied \cite{Rackham2003,Gonzales2007}. It is also important to understand the relative influence of the two- and three-body terms of the potential energy surfaces (PESs) on the collision dynamics. The three-body terms are influential when all three atoms are close together and fast moving, whereas two-body potentials dominate at long range, where at least one atom is far away. Naturally, one would like to understand whether these concerns affect reactions at very low temperatures. Using approximate quantum calculations based on knowledge of the long-range interactions, Mayle et al.~\cite{Bohn1,Bohn2} predict that narrow resonances might dominate molecular collisions as a function of an applied electric field. Finally, we note that in collisions between three or more atoms there can exist intersecting PESs with the same symmetry, i.e. conical intersections \cite{Hutson2009,JHutson2010}.
They are known to significantly affect reactions under certain circumstances. For ground-state alkali-metal trimers, intersecting PESs exist at C$_{2v}$ symmetry \cite{JHutson2010}. Moreover, at ultracold temperatures a full quantum dynamics calculation might need to include coupling between potential surfaces due to the hyperfine interactions between electronic and nuclear spins of the reactants. Several excellent reviews on chemical reactions of molecules at ultracold temperatures \cite{Quemener1,Quemener2} discuss these and some other questions. The goal of this paper is to take an initial step toward addressing some of the questions raised above. In particular, we would like to compare the performance of universal models and statistical quantum-mechanical (SQM) approaches for ultracold reactions to a numerically exact quantum mechanical (EQM) method formulated in hyperspherical coordinates. We apply these approaches to the alkali-rare-earth LiYb molecule colliding with a Li alkali-metal atom at collisional energies $E/k$ from 0.1 $\mu$K to 1 K, where $k$ is the Boltzmann constant. These molecules can be created by photo/magnetoassociation from ultracold Li and Yb atoms and are the subject of on-going ultracold experiments \cite{Takahashi2011,Gupta2011}. A quantum mechanical description of this reaction is challenging, but simpler than for alkali-metal systems, as there are no conical intersections. We ignore the effects of hyperfine interactions. Despite these simplifications, a full quantum calculation of this reaction is a computationally demanding task due to the high density of states of both LiYb and Li$_2$ molecules. For this reason, we restrict the EQM treatment to total angular momentum quantum number $J=0$ (s-wave scattering in the initial LiYb channel) and adopt a $J$-shifting method \cite{Bowman91} to evaluate temperature-dependent rate coefficients.
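In a common form of $J$-shifting, the $J>0$ contributions are approximated by shifting the $J=0$ result by the rotational energy $BJ(J+1)$ of the intermediate complex, so the thermal rate is the $J=0$ rate multiplied by a rotational sum. The sketch below shows this generic form; the rotational constant is a hypothetical placeholder, and this is an assumption about the method's general structure rather than the specific implementation used in this work.

```python
import math

def j_shift_factor(B, T, kB=1.0, j_max=400):
    """Sum over total angular momentum J of (2J+1) exp(-B J(J+1)/(kB T)).
    Multiplying a J=0 rate coefficient by this factor gives the
    J-shifting estimate of the full thermal rate; B is a
    (hypothetical) rotational constant of the intermediate complex."""
    return sum((2 * J + 1) * math.exp(-B * J * (J + 1) / (kB * T))
               for J in range(j_max + 1))
```

For $k_BT \gg B$ the factor approaches the classical rotational partition function $k_BT/B$, while for $k_BT \ll B$ only $J=0$ survives and the factor tends to 1.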
We hope to be able to transfer our insights from these studies to more complex systems composed of alkali-metal and non-alkali-metal atoms. \begin{figure}[h] \includegraphics[scale=0.30,trim=0 40 0 20,clip]{Fig1.pdf} \caption{Energetics of the LiYb+Li$\to$Li$_2$+Yb reaction. The $j=0$ vibrational levels of the X$^2\Sigma^+$ potential of the reactant $^6$Li$^{174}$Yb molecule are shown on the left as horizontal red lines. The $j'=0$ vibrational levels of the X$^1\Sigma^+_g$ potential of the product $^6$Li$_2$ molecule are shown on the right as horizontal blue lines. The zero of energy is located at the $v=0$, $j=0$ level of the $^6$Li$^{174}$Yb molecule. (Energies are divided by Planck's constant $h$ and the speed of light $c$.) The inset shows a blowup of the energy levels near the $v=0$ level of $^6$Li$^{174}$Yb. For clarity the rotational progressions are not shown. } \label{energetics} \end{figure} The paper is organized as follows. Section \ref{sec:potential} describes our calculation of the ground-state LiYbLi trimer potential, including a description of the interpolation between the {\it ab initio} points and the smooth connection to the long-range form of the potential. Section \ref{sec:dispersion} describes a separate electronic-structure calculation of the dispersion potential between a LiYb molecule at its equilibrium separation and a Li atom. The dispersion coefficient is evaluated in terms of an integral over the dynamic polarizability of LiYb and Li as a function of imaginary frequencies \cite{Stone,Kotochigova2010}. This coefficient is used in determining the reaction rate coefficients within the universal QDT treatment. Section \ref{sec:theory} describes the EQM, SQM, and universal calculations for the isotopes $^6$Li and $^{174}$Yb. We present the results of these models in Sec.~\ref{sec:results}. We also show a comparison of rate coefficients based on the full trimer potential and the pair-wise potential.
Finally, state-to-state reaction rate coefficients derived from the SQM and EQM methods are analyzed and discussed. A summary and conclusions are presented in Sec.~\ref{sec:conclusion}. \section{Trimer potential energy surface}\label{sec:potential} The chemical reaction between a LiYb molecule and a Li atom is illustrated by the pathway \begin{equation} {\rm Li}(1){\rm Yb} + {\rm Li}(2) \rightarrow [{\rm LiYbLi}] \rightarrow {\rm Li}_2 + {\rm Yb}\,, \end{equation} where initially the short-ranged, strong bond between the first Li(1) atom and the Yb atom weakens as the second Li(2) atom approaches. An intermediate three-particle ``collision complex'' [LiYbLi] is formed. Finally, during the next stage of the reaction a short-range bond between Li(1) and Li(2) is formed and the Yb atom moves away quickly. The energetics of this reaction is shown in Fig.~\ref{energetics}. The interaction potential of this reaction depends on three independent variables: the molecular bond lengths $R_{\rm Li(1)Yb}$, $R_{\rm Li(2)Yb}$, and $R_{\rm Li(1)Li(2)}$ for the separation between Li(1) + Yb, Li(2) + Yb, and the two Li atoms, respectively. The PES is an important part of the quantum dynamics calculations. No prior calculations of the PES for the LiYb+Li reaction exist. We have computed the multi-dimensional ground-state potential surface of the ``collision complex'' by solving the Schr\"{o}dinger equation for the electron motion with the nuclei held in fixed positions. Such calculations are computationally expensive as the energies of many molecular geometries are needed. We use the {\it ab~initio} coupled-cluster method with single, double, and perturbative triple excitations (CCSD(T)) of the computational chemistry package CFOUR \cite{cfour}. The trimer potential is improved by first subtracting the pair-wise, dimer potentials obtained at the same level of electronic structure theory.
The remainder is the non-additive three-body potential $ V^{(3)}(R_{\rm Li(1)Yb},R_{\rm Li(2)Yb},R_{\rm Li(1)Li(2)})$. Earlier studies for the quartet potential of homonuclear and heteronuclear alkali-metal trimers \cite{SoldanHomo1,SoldanHomo2,SoldanHetero} showed that non-additive effects are significant. An improved trimer potential is then created by adding the accurate experimental Li$_2$ ground state potential \cite{Barakat1986} and an {\it ab initio} theoretical LiYb potential determined with a larger basis set \cite{KotochigovaLiYb} to the three-body potential. No spectroscopic measurement of the LiYb potential exists at this time. This adjustment leads to the correct treatment of the long range, with at least one atom far away from the others. In section~\ref{sec:results} we will compare reaction rate coefficients in the low-temperature regime based on the full trimer potential surface to those based on a pairwise-additive potential (which ignores the three-body potential). For the coupled-cluster calculations we applied the aug-cc-pCVTZ basis set for the Li atom \cite{Prascher2011}, whereas we chose a basis set constructed from the (15s 14p 12d 11f 8g)/[8s 8p 7d 7f 5g] wave functions of Dolg and Cao \cite{Dolg2001,Dolg2013} for the ytterbium atom. This ytterbium basis relies on a relativistic pseudopotential that describes the inner orbitals up to the 3d$^{10}$ shell. Only the 2s valence electrons of Li and the 4f$^{14}$ and 6s$^2$ valence electrons of Yb are correlated in the {\it ab initio} calculation. The {\it ab~initio} non-additive part of the trimer potential is fit to the generalized power series expansion of Ref.~\cite{Aguado1992} given by \begin{eqnarray} \lefteqn{ V^{(3)}(R_{\rm Li(1)Yb},R_{\rm Li(2)Yb},R_{\rm Li(1)Li(2)}) =} \\ &&\quad\quad \sum^m_{i,j,k} d_{ijk} \rho^i_{\rm Li(1)Yb}\rho^j_{\rm Li(2)Yb} \rho^k_{\rm Li(1)Li(2)}\, , \nonumber \end{eqnarray} where the scaled length $\rho_{AB} = R_{AB} e^{-\beta_{AB}R_{AB}}$.
The powers $i$, $j$, and $k$ satisfy the conditions $i+j+k \leq m$, $i+j+k \neq i \neq j \neq k$ for $m>0$ to ensure that the potential goes to zero when one of the internuclear separations is zero \cite{Aguado1992}. The coefficients $d_{ijk}$ and $\beta_{AB}$ serve as linear and non-linear fit parameters, respectively, and are determined iteratively. Symmetry under interchange of the Li atoms ensures that $d_{ijk}=d_{jik}$ and $\beta_{\rm Li(1)Yb}=\beta_{\rm Li(2)Yb}$. For $m=8$ we obtain a root-mean-square (rms) deviation smaller than $\delta V^{(3)}=0.0004833$ a.u. for all 591 data points. The optimal 13 linear $d_{ijk}$ and two non-linear $\beta_{AB}$ coefficients are listed in Table~\ref{3B-param}. \begin{table}[b] \caption{Parameters $d_{ijk}$ and $\beta_{AB}$ for the non-additive component of the three-body potential of LiYbLi as defined in the text. We have $\beta_{\rm LiYb}=0.7110242142956382$ and $\beta_{\rm LiLi}=0.2079741859771922$. Coefficients are in atomic units of the Hartree energy $E_h$ and Bohr radius $a_0$. }\label{3B-param} \begin{tabular}{c c c| r} i & j & k & \multicolumn{1}{c}{$d_{ijk}$} \\ \hline 1 & 0 & 1 & $-0.3791234233645178$ \\ 1 & 1 & 0 & $-12.07092112030131 $ \\ 1 & 1 & 1 & $6.778574385332172 $ \\ 2 & 0 & 1 & $0.9609698323047215$ \\ 2 & 1 & 0 & $18.08003175501403 $ \\ 0 & 1 & 2 & $0.4946458265430991 $ \\ 2 & 1 & 1 & $-27.85537833078476 $ \\ 1 & 1 & 2 & $ 2.029448131083818 $ \\ 2 & 0 & 2 & $-0.8786730695382046 $ \\ 2 & 2 & 0 & $104.7435507501138 $ \\ 3 & 0 & 1 & $0.07103760674735782$ \\ 3 & 1 & 0 & $39.61820620689986 $ \\ 0 & 1 & 3 & $-0.1402545726485475 $ \end{tabular} \end{table} The advantage of the separation of the full potential into an additive and non-additive part is that the two-body pair-wise potentials can be replaced by either a more-advanced, high-precision electronic structure calculation or by an ``experimental'' potential that reproduces the binding energies of all-observed dimer ro-vibrational levels. 
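To make the expansion concrete, the sketch below evaluates $V^{(3)}$ directly from the tabulated coefficients. The handling of the $d_{ijk}=d_{jik}$ symmetry (adding a mirrored $(j,i,k)$ term whenever $i\neq j$) is our reading of the table, not a statement of how the fit of Ref.~\cite{Aguado1992} is actually implemented, and the function name \texttt{v3} is ours.

```python
import math

# d_ijk coefficients from Table 1 (atomic units); entries with i != j
# also contribute a mirrored (j, i, k) term via d_ijk = d_jik.
D_IJK = {
    (1, 0, 1): -0.3791234233645178,
    (1, 1, 0): -12.07092112030131,
    (1, 1, 1): 6.778574385332172,
    (2, 0, 1): 0.9609698323047215,
    (2, 1, 0): 18.08003175501403,
    (0, 1, 2): 0.4946458265430991,
    (2, 1, 1): -27.85537833078476,
    (1, 1, 2): 2.029448131083818,
    (2, 0, 2): -0.8786730695382046,
    (2, 2, 0): 104.7435507501138,
    (3, 0, 1): 0.07103760674735782,
    (3, 1, 0): 39.61820620689986,
    (0, 1, 3): -0.1402545726485475,
}
BETA_LIYB = 0.7110242142956382
BETA_LILI = 0.2079741859771922

def v3(r1yb, r2yb, rll):
    """Non-additive three-body potential (hartree) from
    V3 = sum d_ijk rho1^i rho2^j rho3^k with rho = R exp(-beta R);
    distances in bohr."""
    rho1 = r1yb * math.exp(-BETA_LIYB * r1yb)
    rho2 = r2yb * math.exp(-BETA_LIYB * r2yb)
    rho3 = rll * math.exp(-BETA_LILI * rll)
    total = 0.0
    for (i, j, k), d in D_IJK.items():
        total += d * rho1**i * rho2**j * rho3**k
        if i != j:  # symmetry under Li(1) <-> Li(2)
            total += d * rho2**i * rho1**j * rho3**k
    return total
```

By construction the result is symmetric under exchange of the two Li-Yb bond lengths, and, because every term carries at least two nonzero powers, $V^{(3)}$ vanishes whenever one atom is removed to infinity.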
In this paper we use the spectroscopically-accurate X$^1\Sigma_g^+$ potential for Li$_2$ \cite{Barakat1986} and our previously determined {\it ab initio} X$^2\Sigma^+$ potential for LiYb \cite{KotochigovaLiYb}. Both pair-wise potentials were extended to large internuclear separations using the best-known van der Waals coefficients \cite{C6_LiYb,C6_Li2}. The diatomic vibrational energies computed using these pair-wise potential curves are shown in Fig.~\ref{energetics}. It is seen that the LiYb($v=0,j=0$)+Li reaction can populate vibrational levels as high as $v'=19$ of the Li$_2$ molecule at collision energies in the ultracold regime. A cut through our improved three-dimensional PES as a function of the LiYb and Li$_2$ bond lengths with the atoms restricted to a linear geometry is shown in Fig.~\ref{3Bsurface}. The reactant and product states are situated in the pair-wise potential wells when either $R_{\rm Li(1)Li(2)}$ or $R_{\rm LiYb}$ is large. We find that the optimized geometry, where the potential has its absolute minimum, is indeed linear, with the two Li atoms on the same side of the Yb atom (the same equilibrium configuration as N$_2$O, for example). It occurs at $R_{\rm Li(1)Yb}=7.00 a_0$, $R_{\rm Li(2)Yb}= 12.25 a_0$, and $R_{\rm Li(1)Li(2)}= 5.25 a_0$. In fact, the bond between the Li atoms is so strong that the Yb atom cannot get in between them and the Li-Li separation is close to that of the isolated dimer. The atomization energy, the energy difference between the absolute minimum and three free atoms, is $V_{\rm a}=0.045241$ a.u. ($V_{\rm a}/(hc)=9929.0$ cm$^{-1}$). The dissociation energy from the optimized geometry to the LiYb + Li limit is $V_{\rm d1}=0.0368949$ a.u. ($V_{\rm d1}/(hc)=8097.5$ cm$^{-1}$), while that to the Li$_2$ + Yb limit is $V_{\rm d2}=0.007188$ a.u. ($V_{\rm d2}/(hc)=1577.6$ cm$^{-1}$).
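As a quick consistency check on these energies, converting from hartree to wavenumbers with $1\,E_h \leftrightarrow 219474.63$ cm$^{-1}$ reproduces the values given in parentheses; a short Python snippet:

```python
# Hartree -> cm^-1 conversion factor (CODATA)
HARTREE_TO_INVCM = 219474.6313632

v_a  = 0.045241    # atomization energy, E_h
v_d1 = 0.0368949   # dissociation energy to LiYb + Li, E_h
v_d2 = 0.007188    # dissociation energy to Li2 + Yb, E_h

va_cm  = v_a  * HARTREE_TO_INVCM   # ~9929 cm^-1
vd1_cm = v_d1 * HARTREE_TO_INVCM   # ~8097.5 cm^-1
vd2_cm = v_d2 * HARTREE_TO_INVCM   # ~1577.6 cm^-1
```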
\begin{figure} \includegraphics[scale=1,trim=0 25 0 25,clip]{Fig2.pdf} \caption{A three-dimensional view of the PES in atomic units for the reaction ${\rm Li}(1){\rm Yb} + {\rm Li}(2) \rightarrow [{\rm LiYbLi}] \rightarrow {\rm Li}_2 + {\rm Yb}$ as a function of bond lengths $R_{\rm LiYb}$ and $R_{\rm Li(1)Li(2)}$. The angle between $R_{\rm Li(1)Li(2)}$ and $R_{\rm Li(2)Yb}$ is fixed at $180^\circ$. The zero of energy corresponds to the three separated atoms. Topographical contours of equal energy are shown on the base of the figure. From inside out their energies are $-0.04 E_h$, $-0.03 E_h$, $-0.02 E_h$, and $-0.0075 E_h$, respectively. } \label{3Bsurface} \end{figure} \section{Atom-Dimer dispersion potentials} \label{sec:dispersion} In this section we determine the long-range dispersion potential between a polar LiYb molecule in the lowest vibrational level ($v=0$) of the X$^2\Sigma^+$ potential and a lithium atom. We evaluate its isotropic and anisotropic contributions for rotation-less $j=0$ and slowly-rotating $j=1$ LiYb molecules. Later these coefficients will be used to evaluate universal reaction rates in Sec.~\ref{sec:universal}. We calculate the atom-molecule van der Waals coefficients by integrating over imaginary frequencies $i\omega$, and summing over components $i$ and $j$, the product of the LiYb and Li dynamic polarizability tensors $\alpha_{ij}(i\omega)$~\cite{Stone}. For polar molecules, which have a non-zero permanent dipole moment, there are contributions to the polarizability from ro-vibrational transitions within the ground-state potential as well as those to excited electronic potentials. The contribution from transitions within the ground state is only important when the permanent dipole moment is large. For example, Ref.~\cite{Kotochigova2010} showed that the ground-state contribution dominates for a heavy $v=0$, $j=0$ RbCs molecule and is small but non-negligible for the lighter KRb.
The LiYb molecule has a very small permanent dipole moment of $0.011 ea_0$ \cite{KotochigovaLiYb} at the equilibrium separation $R_e = 6.71 a_0$, and transitions to the electronically excited states dominate. Here, $e$ is the charge of the electron. The importance of excited electronic states in the calculation of the polarizability of the vibrational ground state of LiYb allows us to make a simplification. We can neglect vibrational averaging and only have to determine the polarizability, and thus the dispersion coefficients, at $R_e$. Formally, the isotropic and anisotropic dispersion coefficients are \cite{Stone,Kotochigova2010} \begin{eqnarray} C^{\rm iso}_6 &=& \frac{3}{\pi} \int_0^\infty d\omega \, \bar\alpha^{\rm LiYb}(i\omega,R_e) \, \bar\alpha^{\rm Li}(i\omega) \label{C6atmoliso} \end{eqnarray} and \begin{eqnarray} C^{\rm aniso}_{6,20} &=& \frac{1}{\pi} \int_0^\infty d\omega \, \Delta\alpha^{\rm LiYb}(i\omega,R_e) \, \bar\alpha^{\rm Li}(i\omega)\,, \label{C6atmolaniso} \end{eqnarray} respectively, where for both atom and molecule $\bar\alpha=( \alpha_{xx} + \alpha_{yy}+\alpha_{zz})/3$ and $\Delta\alpha=\alpha_{zz} - (\alpha_{xx}+\alpha_{yy})/2$ in terms of the diagonal $x$, $y$, and $z$ components of the polarizability tensor. For the molecule the components are in the body-fixed frame with $z$ along the internuclear axis and $\alpha_{xx}=\alpha_{yy}$. The diagonal dynamic polarizabilities $\alpha^{\rm LiYb}_{ii}(\omega,R_e)$ are first calculated as a function of {\it real} frequency $\omega$ using the coupled-cluster method of CFOUR with single and double excitations (CCSD) \cite{Kallay2006}. The Li and Yb basis sets are the same as in the calculation of the trimer surface described in Section~\ref{sec:potential}. We then fit \begin{equation} \alpha^{\rm LiYb}_{ii}(\omega,R_e) = \sum_k \frac{A_{k}}{1-(\omega/\eta_k)^2} \label{polar} \end{equation} with parameters $A_{k}$ and $\eta_k$.
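To make the procedure concrete, the sketch below performs the Casimir--Polder quadrature of Eq.~(\ref{C6atmoliso}) after the analytic continuation $\omega\to i\omega$ of the fit form in Eq.~(\ref{polar}). When each species is represented by a single pole, the integral has the closed-form London limit $C_6=\tfrac{3}{2}A_1A_2\,\eta_1\eta_2/(\eta_1+\eta_2)$, which the numerical quadrature must reproduce. The pole parameters used here are hypothetical illustration values, not our fitted ones.

```python
import math

def alpha_imag(A, eta, w):
    # One-pole polarizability A/(1-(w/eta)^2) continued to imaginary frequency i*w
    return A / (1.0 + (w / eta) ** 2)

def c6_iso(A1, eta1, A2, eta2, n=4001):
    """(3/pi) * int_0^inf alpha1(iw) alpha2(iw) dw via Simpson's rule,
    after the substitution w = tan(t) with t in [0, pi/2)."""
    h = (math.pi / 2.0) / (n - 1)
    s = 0.0
    for k in range(n):
        t = k * h
        if k == n - 1:
            f = 0.0  # integrand vanishes as w -> infinity
        else:
            w = math.tan(t)
            f = (alpha_imag(A1, eta1, w) * alpha_imag(A2, eta2, w)
                 / math.cos(t) ** 2)  # dw = dt / cos(t)^2
        s += f * (1 if k in (0, n - 1) else (4 if k % 2 else 2))
    return (3.0 / math.pi) * s * h / 3.0

# Hypothetical single-pole parameters (atomic units), for illustration only
A1, eta1 = 160.0, 0.070
A2, eta2 = 250.0, 0.080

numeric = c6_iso(A1, eta1, A2, eta2)
london = 1.5 * A1 * A2 * eta1 * eta2 / (eta1 + eta2)  # closed-form limit
```

With several poles per species, as in our actual fits, the same quadrature applies term by term.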
The $A_k$ and $\eta_k$ are related to the oscillator strength and transition frequency between the ground and excited state $k$, respectively. We analytically continue to imaginary frequencies and perform the integral over frequency to determine the dispersion coefficients. Finally, we find that the isotropic $C^{\rm iso}_6$ coefficient for the $v=0$ X$^2\Sigma^+$ LiYb molecule colliding with a Li atom is $3086 E_h a_0^6$ for both $j=0$ and $j=1$. The anisotropic $C^{\rm aniso}_{6,20}$ coefficient is $776 E_ha_0^6$ for the $j=1$ molecule, while for the rotation-less $j=0$ molecule it is zero. We verified that the contribution to $C_6$ from transitions within the ground state, which to a good approximation is given by $d_e^4/[(4\pi\epsilon_0)^2 6 B_e]$ \cite{Barnett2006}, is negligible. Here, $d_e$ is the permanent dipole moment at $R_e$, $B_e=1.05 \times10^{-6}E_h$ or $B_e/(hc)=0.230$ cm$^{-1}$ is the $^6$Li$^{174}$Yb rotational constant at $R_e$, and $\epsilon_0$ is the electric constant. The van der Waals length $R_{\rm vdW} = (2\mu C^{\rm iso}_6/\hbar^2)^{1/4}/2$ for the isotropic dispersion potential is $45.0 a_0$ for $^6$Li$^{174}$Yb colliding with $^6$Li. \section{Quantum dynamics theory}\label{sec:theory} In this and the following section we describe and compare the predictions of three scattering approaches of different levels of complexity. We begin with a description of each approach. In all three formalisms the effects due to the weak hyperfine and magnetic-field-induced Zeeman interactions of the Li atoms as well as any electric-field-induced level shifts of the polar LiYb molecule are omitted. For ground-state LiYb + Li collisions this implies that we only need to model couplings between the relative orbital angular momenta of the three atoms. In fact, the sum of these orbital angular momenta, i.e.\ the total angular momentum $J$, and its space-fixed projection $M$ are conserved.
Parity under spatial inversion, labeled by $p=\pm 1$, and particle-exchange symmetry for identical particles within a diatomic molecule, labeled by $q=\pm 1$, are also conserved quantities. Here, $q=\pm 1=(-1)^{j'}$ corresponds to even and odd rotational levels $j'$ of Li$_2$ \cite{Miller}. The symmetries of the Hamiltonian ensure that the reaction rates are independent of $M$. \subsection{Exact quantum-mechanical method}\label{sec:EQM} The formalism for atom-diatom reactive scattering is well developed~\cite{Del1,Del2,Miller,Launay1,pack,Launay2,Hu}. Only a brief account relevant to the present context is provided here. We use the approach developed by Pack and Parker~\cite{pack} based on the adiabatically adjusting principal axis hyperspherical (APH) coordinates $\left(\rho, ~\theta,~\phi\right)$. This single set of coordinates is convenient for the description of an atom-diatom chemical reaction as it evenhandedly describes all three arrangement channels, $\tau$, in an A+BC system. By contrast, one needs three sets of mass-scaled Jacobi coordinates ($S_{\tau}$, $s_{\tau}$, $\gamma_\tau$), one per arrangement, to describe a chemical reaction~\cite{pack}. Here, $S_\tau$ is the atom-molecule center-of-mass separation, $s_{\tau}$ is the diatom separation, and $\gamma_\tau$ is the angle between $S_\tau$ and $s_\tau$. The hyper radius $\rho$ is $\rho=\sqrt{S_\tau^2+s_\tau^2}$, while expressions for the hyper angles $\theta$ and $\phi$ are given in Ref.~\cite{pack}. Outside the region of strong interactions, where the three-body term has nearly decayed to zero, the three sets of Delves hyperspherical coordinates (DC), ($\rho$, $\theta_\tau$, $\gamma_\tau$) with $\theta_\tau=\arctan{(s_\tau/S_\tau)}$, are used~\cite{Del1,Del2}. The hyper radius in DC is the same as in APH but its hyper angles are defined differently and depend on the arrangement channel.
In our approach, we adopt the APH coordinates ($\rho$, $\theta$, $\phi$) in the strong interaction region (the inner, chemically important region) and the DC ($\rho$, $\theta_\tau$, $\gamma_\tau$) in the outer region. Finally, asymptotic boundary conditions are applied in Jacobi coordinates to evaluate the scattering matrix $S^{J,pq}_{f\leftarrow i}(E)$ for conserved $J$, $p$, and $q$. The indices $i$ and $f$ describe the initial and final scattering channels and $E$ is the initial collision energy. In the inner region, where APH coordinates are used, the Hamiltonian for a triatomic system is \begin{equation} H = -\frac{\hbar^2}{2\mu\rho^5}\frac{\partial}{\partial\rho}\rho^5\frac{\partial}{\partial\rho} + \frac{\hat{\Lambda}^2}{2\mu\rho^2} +V(\rho,\theta,\phi)\,, \end{equation} where $\mu=\sqrt{m_A m_B m_C/(m_A+m_B+m_C)}$ is the three-body reduced mass, $\hat{\Lambda}$ is the grand angular momentum operator \cite{Brian_APH}, and $V(\rho,\theta,\phi)$ is the adiabatic potential energy surface of the triatomic system. The total trimer wave function in this region for a given $J$, $M$, $p$, and $q$ is expanded as \cite{Brian_APH,Gagan13,Gagan13_1} \begin{equation} \Psi^{JM,pq} = 4\sqrt{2}\sum_t \frac{1}{\rho^{5/2}}\Gamma^{J,pq}_t(\rho)\Phi^{JM,pq}_{t}(\Xi;\rho)\,, \end{equation} where the sum $t$ is over five-dimensional (5D) surface functions $\Phi^{JM,pq}_t(\Xi;\rho)$ with $\Xi=(\theta,\phi,\alpha,\beta,\eta)$, where $\alpha$, $\beta$, and $\eta$ are Euler angles that orient the trimer in space. The other terms, $\Gamma^{J,pq}_{t}(\rho)$, are $\rho$-dependent radial coefficients. The orthonormal surface functions $\Phi^{JM,pq}_t(\Xi;\rho)$ depend parametrically on the hyper radius $\rho$. For each $\rho$ the surface functions are eigensolutions of the Hamiltonian $\hat{\Lambda}^2/(2\mu\rho^2)+V(\rho,\theta,\phi)$.
To evaluate the 5D surface functions $\Phi^{JM,pq}_{t}(\Xi;\rho)$, we expand in terms of primitive orthonormal basis functions in $\Xi$ given by $d^{l}_{\mu,\nu}(\theta)(e^{im\phi}/\sqrt{2\pi} ) \tilde{D}^J_{\Omega M}(\alpha,\beta,\eta)$, where $d^{l}_{\mu,\nu}(\theta)$ is expressed in terms of Jacobi polynomials $P_{l-\mu}^{(\mu-\nu,\mu+\nu)}(\cos \theta)$ \cite{Brian_APH}, $\tilde{D}^J_{\Omega M}$ are normalized Wigner rotation matrices, and $\Omega$ is the projection of $J$ on the body-fixed axis. The basis function labels $\mu$, $\nu$, $l$, and $m$ can be integral or half-integral depending upon the values of the total angular momentum $J$, $\Omega$, and the inversion parity $p$~\cite{Brian_APH}. We introduce $l_{\rm max}$ and $m_{\rm max}$ where $\mu\le l\le l_{\rm max}$ and $|m|\le m_{\rm max}$. The parameters $l_{\rm max}$ and $m_{\rm max}$ control the size of the basis sets in $\theta$ and $\phi$. A hybrid discrete variable representation (DVR) in $\theta$ and a finite basis representation (FBR) in $\phi$ are used to solve the eigenvalue problem involving the surface-function Hamiltonian. The Implicitly Restarted Lanczos Method (IRLM) of Sorensen \cite{IRLM1} and the Sylvester algorithm \cite{Sylvester1} are used for the diagonalization of the DVR Hamiltonian, which includes tensor products of kinetic energy operators. Additionally, a sequential diagonalization truncation (SDT) technique \cite{STD1,STD2} keeps the Hamiltonian matrix at a manageable size.
Outside the region of strong interaction, we use DC and the total wave function is expanded in a complete set of $\rho$-dependent vibrational wave functions $\Upsilon^{Jq}_{n}(\theta_{\tau};\rho)$, coupled angular functions ${\cal Y}^{JM,pq}_{n}$, and radial functions $\Gamma^{J,pq}_{n}$ to yield \begin{equation} \Psi^{JM,pq} = 2\sum_{n} \frac{1}{\rho^{5/2}}\Gamma^{J,pq}_{n}(\rho) \frac{\Upsilon^{J,q}_{n}(\theta_{\tau};\rho)}{\sin 2\theta_{\tau}} {\cal Y}^{JM,pq}_{n}(\hat{S}_{\tau},\hat{s}_{\tau})\,, \end{equation} where $n$ denotes the collective molecular quantum numbers $\{v_{\tau},j_{\tau},\ell_{\tau}\}$. The angles $\hat S_\tau$ and $\hat s_\tau$ are related to the Euler angles via $d\hat Sd\hat s=d\alpha\sin\beta d\beta d\eta \sin\gamma d\gamma$. The vibrational wave functions $\Upsilon^{Jq}_{n}(\theta_{\tau};\rho)$ depend parametrically on $\rho$ and are computed using a one-dimensional Numerov propagator in $\theta_\tau$ \cite{Brian_Delves}. The Hamiltonian in the DC has a form similar to that in the APH coordinates, except that $\hat{\Lambda}^2$ takes a different form \cite{Brian_Delves} and the three-body PES is expressed in different variables. On substitution of $\Psi^{JM,pq}$ into the time-independent Schr\"{o}dinger equation $H\Psi^{JM,pq}=E_{\rm tot}\Psi^{JM,pq}$ one obtains a set of coupled equations in $\Gamma^{J,pq}_{n}(\rho)$. Using a sector-adiabatic approach in $\rho$, where $\rho$ is partitioned into a large number of sectors, the surface functions are evaluated at the center of each sector. Assuming that the surface functions do not change within a sector, the solution of the Schr\"{o}dinger equation is obtained by propagating the radial equations from a small value of $\rho$ in the classically forbidden region to a large asymptotic value $\rho=\rho_{\rm max}$. Here, we propagate the R-matrix $R(\rho) = \Gamma(\rho)\left(d\Gamma(\rho)/d\rho\right)^{-1}$ for each collision energy using the log-derivative method of Johnson \cite{John}.
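The log-derivative idea can be illustrated on a trivial single-channel case. The sketch below is a bare Riccati-equation integrator, not Johnson's invariant-imbedding algorithm used in the production code: for a free particle the log-derivative $y=\psi'/\psi$ obeys $y'=-(y^2+k^2)$, whose analytic solution is $y(\rho)=k\cot(k\rho)$.

```python
import math

def propagate_logderiv(y0, rho0, rho1, k, n=20000):
    """RK4 propagation of the single-channel log-derivative (Riccati)
    equation y' = -(y**2 + k**2) for a free particle (V = 0)."""
    def rhs(y):
        return -(y * y + k * k)
    h = (rho1 - rho0) / n
    y = y0
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

k = 1.0
y_start = k / math.tan(k * 0.5)                   # analytic k*cot(k*rho) at rho = 0.5
y_end = propagate_logderiv(y_start, 0.5, 2.5, k)  # should equal k*cot(k*2.5)
```

Propagating $y$ rather than $\psi$ avoids the overflow of exponentially growing solutions in classically forbidden regions, which is the practical motivation for the log-derivative method.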
Scattering boundary conditions are applied at $\rho_{\rm max}$ to evaluate the scattering S-matrix. Details of the numerical integration, mapping between basis functions in the APH and DC coordinates, and asymptotic matching in Jacobi coordinates are given in Refs.~\cite{Brian_APH,Brian_Delves}. The $S$-matrix elements are used to calculate the partial reactive rate coefficient for a given $J$, $p$, and $q$, \begin{equation} K^{J,pq}_i(E)= \frac{1}{2j_i + 1}v_r \frac{\pi}{k_r^2}\sum_{f} \left|S^{J,pq}_{f\leftarrow i}(E)\right|^2\,, \end{equation} where the sum $f$ is over all product (Li$_2$) ro-vibrational states ($v_f$,$j_f$). In the usual way, we have also averaged over initial $m_{j_i}$ and summed over all the final $m_{j_f}$. Here $v_r=\hbar k_r/\mu_{\rm A,BA}$ is the incident relative velocity and $\hbar^2k_r^2/(2\mu_{\rm A,BA})= E$ is the relative kinetic energy in the incident channel with the reduced mass $\mu_{\rm A,BA}=m_A(m_B+m_A)/(2m_A+m_B)$. In order to construct the total reaction rate coefficient the role of the nuclear spin $I$ of the two identical $^{6}$Li nuclei must be considered. Following Ref.~\cite{Miller} we define the symmetrized rate coefficient \begin{eqnarray} \bar{K}^{J,pq}_{i}(E) = \frac{2I+1+q(-1)^{2I}}{2(2I+1)}K^{J,pq}_{i}(E)\,, \label{symmetrized-cross} \end{eqnarray} for a given $p$, $q$, and $J$, and the total reaction rate coefficient becomes \begin{eqnarray} K_i(E)= \sum_J (2J+1) \sum_{p} {\bar K}^{J,pq}_{i}(E) \,. \end{eqnarray} Since $^6$Li has nuclear spin $I=1$, this leads to weight factors 2/3 and 1/3 for even and odd $^6$Li$_2$ rotational levels $j_f$, respectively. The EQM calculations involve the numerical computation of the 5D hyperspherical surface functions in the APH and DC and log-derivative propagation of the coupled-channel (CC) equations in these coordinates, followed by asymptotic matching to Li$_2$ and LiYb ro-vibrational states in Jacobi coordinates. We have restricted the calculations to total angular momentum $J=0$.
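The nuclear-spin weights quoted above follow directly from Eq.~(\ref{symmetrized-cross}); a short Python check (the function name and the doubled-spin argument are our own convention):

```python
def spin_weight(two_I, q):
    """Exchange-symmetry weight (2I+1+q(-1)^{2I}) / (2(2I+1));
    two_I = 2I, so half-integer spins are handled with integers."""
    g = two_I + 1                       # 2I + 1
    sign = 1 if two_I % 2 == 0 else -1  # (-1)^{2I}
    return (g + q * sign) / (2.0 * g)

# 6Li has I = 1: weight 2/3 for even j' (q = +1) and 1/3 for odd j' (q = -1)
even_w = spin_weight(2, +1)
odd_w = spin_weight(2, -1)
```

The same expression gives the familiar 1/4 and 3/4 para/ortho weights for a pair of $I=1/2$ nuclei.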
For the inner region ranging from $\rho=6.0a_0$ to $33.89a_0$, the number of APH surface-function basis states in $\theta$ and $\phi$ is controlled by $l_{\rm max}$ and $m_{\rm max}$. For computational efficiency this hyperradial range was further divided into the three regions $6.0 a_0<\rho < 13.98 a_0$, $13.98 a_0<\rho< 20.00 a_0$, and $20.00 a_0<\rho < 33.89 a_0$ with $l_{\rm max}=119,179,\,399$ and $m_{\rm max}=220,280,\,440$, respectively. For $J=0$ this leads to 5D surface-function matrices of dimension 52\,920, 100\,980, and 352\,400. Fortunately, the dimensionality of these large matrices can be significantly reduced to 23\,986, 42\,769, and 136\,489 by the SDT procedure, leading to considerable savings in computational time. Furthermore, the explicit construction of these matrices is avoided by using an efficient sparse-matrix diagonalization methodology (IRLM). Finally, a logarithmic spacing in $\rho$ is adopted with 88, 122 and 175 sectors, respectively. We compute the 950 lowest-energy surface functions for $J=0$, leading to an equivalent number of coupled-channel equations. Asymptotically, these channels correspond to different ro-vibrational levels of the LiYb and Li$_2$ molecules. Among these, 636 are open channels and the remaining 314 are closed channels. Delves coordinates are used in the outer region, extending from $\rho=33.89a_0$ to $\rho_{\rm max}=107.48a_0$. A logarithmic spacing in $\rho$ similar to that in the inner region is employed here. The number of basis functions in this region is controlled by an energy cutoff, which is taken to be 0.9 eV relative to the minimum energy of the asymptotic Li$_2$ diatomic potential. As discussed previously, a one-dimensional Numerov method is used to compute the adiabatic surface functions $\Upsilon^{Jq}_{n}(\theta_{\tau};\rho)$. Consequently, solution of the adiabatic problem in the Delves coordinates is fast compared to the APH part.
The computational time for the log-derivative propagation of the radial equations is comparable to that in the inner region. We have verified that convergence of the scattering matrices was reached at $\rho_{\rm max}=107.48a_0$ by comparing with results obtained at $\rho_{\rm max}=118a_0$. At $\rho=\rho_{\rm max}$, we match the DC wave functions to ro-vibrational levels of the LiYb and Li$_2$ molecules defined in Jacobi coordinates. This includes vibrational levels $v=0-4$ for LiYb and $v'=0-22$ for Li$_2$. For LiYb, rotational quantum numbers up to $j=54$, 47, 38, 27 and 3 are incorporated in the vibrational levels $v=0-4$ and for Li$_2$ rotational quantum numbers up to $j'=101$, 98, 95, 92, 90, 88, 85, 82, 79, 76, 73, 70, 66, 63, 60, 56, 52, 48, 43, 39, 33, 27 and 17 are included in vibrational levels $v'=0-22$, respectively. \subsection{Statistical quantum-mechanical method}\label{sec:SQM} The SQM has been developed to treat complex-forming atom-diatom reactions \cite{Rackham2003,Rackham2001,Gonzales2007}. The method has been successfully employed in recent investigations of the low-energy dynamics of the D$^+ +$H$_2 \rightarrow$ DH + H$^+$ reaction \cite{GSH:JPCA14,GH:IRPC14,GHS:JCP13}. In particular, statistical predictions were found to be in almost perfect agreement with both experimental and quantum-mechanical rate coefficients down to 11 K. The SQM assumes that the process proceeds via the formation of an intermediate three-body species, with a sufficiently long lifetime, between reactants and products.
Consequently, the state-to-state reaction probability $P^{J}_{f \leftarrow i}(E)$, for conserved total angular momentum $J$ and total energy $E$, can be approximated by the product of the probability $p^J_{i}(E)$ that the complex is formed from the initial reactant channel $i$ and the fraction $p^J_{f}(E)/ \sum_{c} p^{J}_{c}(E)$ of complexes fragmenting into the final product channel $f$ (with Li$_2$ ro-vibrational state $v'j'\Omega'$) as follows: \begin{equation} P^{J}_{f\leftarrow i}(E) = {p^{J}_{i}(E) p^{J}_{f}(E) \over \sum_{c} p^{J}_{c}(E)}\,. \label{Sapprox} \end{equation} The sum over $c$ in Eq. (\ref{Sapprox}) runs over all energetically open states, $E_c\le E$, in both reactant and product channels at the total angular momentum $J$. To further simplify the SQM simulations we have used the centrifugal sudden (CS) approximation \cite{Rackham2001}, in which channel states are uniquely specified by the rovibrational quantum numbers $v$ and $j$ and the projection $\Omega$. Here, $\Omega$ is the body-fixed projection of the diatomic rotational angular momentum $\vec\jmath$ on the atom-diatom axis. For a collision energy of $E/k=0.1$ K we have verified that a proper treatment of the Coriolis coupling between $\Omega$ states does not significantly modify the predicted rate coefficient. The capture probabilities in each separate chemical arrangement $\tau$ are calculated as \begin{equation} p^{J}_{c}(E) = 1 - \sum_{c'} |S^{J}_{c' \leftarrow c}(E) |^2\,, \label{capture} \end{equation} by solving the corresponding close-coupled equations in the radial Jacobi coordinate $R_\tau$ \cite{Rackham2003,Rackham2001} by means of a time-independent log-derivative propagator \cite{Monolopoulos1986} between $R_{\rm c}$, where the complex is assumed to form, and the asymptotic separation $R_{{\rm max}}$.
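A minimal numerical illustration of Eq.~(\ref{Sapprox}) is given below. The channel labels and capture probabilities are hypothetical, chosen only to exhibit two structural properties of the statistical ansatz: the symmetry $P^J_{f\leftarrow i}=P^J_{i\leftarrow f}$ and conservation of the captured flux, $\sum_f P^J_{f\leftarrow i}=p^J_i$ when the sum runs over all open channels.

```python
def statistical_probability(p, i, f):
    """State-to-state probability P_{f<-i} = p_i * p_f / sum_c p_c, where p
    maps each energetically open channel label to its capture probability."""
    return p[i] * p[f] / sum(p.values())

# Hypothetical capture probabilities for two reactant and three product channels
p = {"LiYb(v=0,j=0)": 0.90, "LiYb(v=0,j=1)": 0.80,
     "Li2(v'=0)": 0.95, "Li2(v'=1)": 0.85, "Li2(v'=2)": 0.60}

# Summing over every open channel f recovers the capture probability p_i
total = sum(statistical_probability(p, "LiYb(v=0,j=0)", f) for f in p)
```

Restricting the sum over $f$ to the product arrangement then gives the reaction probability used in the rate expressions below.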
Finally, the total reaction rate coefficient for the ro-vibrational level $vj$ of the LiYb molecule is given by \begin{equation} \label{ics} K_{vj}(E) = \sum_{v'j'} K_{v'j',vj}(E)\,, \end{equation} where the $vj\to v'j'$ state-to-state rate coefficient is \begin{equation} \label{ics2} K_{v'j',vj}(E) = \frac{1}{2j+1} \sum_{J\Omega} \sum_{\Omega'} {v_{r} (2J+1)} {\pi \over k^2_{r}} |S^{J}_{v'j'\Omega', vj\Omega}(E)|^2\,, \end{equation} with $|S^{J}_{v'j'\Omega', vj\Omega}(E)|^2=P^{J}_{f\leftarrow i}(E)$ and sums over the body-fixed projections $\Omega'$ and $\Omega$ as well as over $J$; the factor $2J+1$ accounts for the degenerate space-fixed projections of $J$. The state-to-state rate is averaged over the $2j+1$ degenerate space-fixed projections of $\vec\jmath$ of the initial ro-vibrational level. \subsection{Universal model}\label{sec:universal} The universal model (UM) is a further simplification of the reaction dynamics, valid for the rotation-less $v=0$ and $j=0$ LiYb molecule at ultracold collision energies. The model is based on a modification of the approach developed in Refs.~\cite{Mies84,Julienne}. For sufficiently large separations $R>R_u$ between a rotation-less LiYb molecule and Li, coupling to other ro-vibrational states is negligible and the long-range interaction potential is an attractive isotropic van der Waals potential $-C^{\rm iso}_6/R^6$. Similar to the SQM, we assume a scattering wavefunction that satisfies completely absorbing boundary conditions at the universal capture radius $R_u$. For these approximations to be valid the universal radius needs to satisfy the conditions $R_{u} \ll R_{\rm vdW}$ and $C^{\rm iso}_6/R_{u}^6 \sim 2 B_e$, where $B_e$ is the rotational constant of the $v=0$ LiYb molecule. As an aside we note that the second condition ensures that $R_u\gg R_c$, as expected. The coefficient $C^{\rm iso}_6$ has been determined in Sec.~\ref{sec:dispersion}.
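In the Wigner threshold regime an absorbing-boundary model of this type admits a well-known closed-form $s$-wave limit, $K^{\rm univ}=2(h/\mu)\bar a$ with the mean scattering length $\bar a\approx 0.4779888\,R_{\rm vdW}$. The back-of-the-envelope sketch below (masses approximated by integer mass numbers; an order-of-magnitude estimate, not the full numerical solution of the model) recovers the $R_{\rm vdW}\approx 45\,a_0$ quoted in Sec.~\ref{sec:dispersion}:

```python
# SI constants
H = 6.62607015e-34          # Planck constant, J s
ME = 9.1093837015e-31       # electron mass, kg
A0 = 5.29177210903e-11      # Bohr radius, m
AMU_IN_ME = 1822.888486     # atomic mass unit in electron masses

C6_ISO = 3086.0             # isotropic C6 in E_h a0^6, quoted above

# Li + LiYb reduced mass, approximated with mass numbers 6 and 180
mu_amu = 6.0 * 180.0 / (6.0 + 180.0)
mu_au = mu_amu * AMU_IN_ME  # in atomic units (hbar = m_e = 1)

r_vdw = (2.0 * mu_au * C6_ISO) ** 0.25 / 2.0      # van der Waals length, bohr
abar = 0.4779888 * r_vdw * A0                     # mean scattering length, m
k_univ = 2.0 * (H / (mu_au * ME)) * abar * 1.0e6  # universal s-wave rate, cm^3/s
```

The resulting rate is of order $10^{-10}$ cm$^3$/s, the typical universal scale for light reduced masses.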
Under these assumptions the scattering of a rotation-less molecule with an atom is described by the single-channel potential $-C^{\rm iso}_6/R^6+\hbar^2J(J+1)/(2\mu R^2)$, since for a $j=0$ molecule the total angular momentum $J$ of the trimer equals the relative orbital angular momentum between the atom and dimer. The corresponding Schr\"odinger equation with short-range boundary conditions is numerically solved for $R>R_u$ and the total reaction rate coefficient for collision energy $E$ is given by \begin{equation} K^{\rm univ}_{v=0,j=0}(E) = \sum_{J} (2J+1)v_r\frac{\pi}{k_r^2} \left( 1- |S^J_{\rm el}(E)|^2 \right)\,, \label{lossrate} \end{equation} where $S^J_{\rm el}(E)$ is the elastic S-matrix element found by fitting the solution to in- and out-going spherical waves. Due to the absorbing boundary condition at $R=R_u$ we have $|S^J_{\rm el}(E)|<1$. \section{Results and Comparison} \label{sec:results} \begin{figure} \includegraphics[scale=0.33,trim=0 10 0 0,clip]{Fig3.pdf} \caption{Top panel: the EQM reaction rate coefficient for the collision of the $v=0$, $j=0$ ro-vibrational level of $^6$Li$^{174}$Yb with a $^6$Li atom as a function of relative collision energy $E$ for total angular momentum $J=0$. Lower panel: the thermally-averaged reaction rate coefficient summed over total angular momenta $J$ using the $J$-shifting approach as a function of temperature $T$. In both panels black and blue lines correspond to rate coefficients to form even and odd rotational levels of the $^6$Li$_2$ product molecule, respectively. Solid and dashed lines are rate coefficients from calculations using the full trimer and the additive pair-wise potential, respectively. } \label{full-vs-pairwise} \end{figure} In this section we describe and discuss our results based on the three computational methods. We start with the EQM calculations.
The upper panel of Fig.~\ref{full-vs-pairwise} shows the $J=0$ EQM reactive rate coefficient for $^6$Li$^{174}$Yb($v=0,j=0$)+$^6$Li collisions as a function of the incident kinetic energy. Results are presented for even and odd rotational levels of diatomic Li$_2$ as well as for the full three-body and additive pairwise potentials. The $J=0$ results correspond to $s$-wave scattering in the incident channel and only $s$-waves contribute for energies below 100 $\mu$K. The rate coefficients for the two potentials are similar for $E/k>10^{-3}$ K, while significant differences are seen for lower energies, with the zero-temperature rate coefficients differing by a factor of two. We also observe that the onset of the Wigner-threshold regime, where the rate coefficient approaches a constant for $E\to 0$, is shifted to slightly lower energies for the pairwise potential. This may be due to the slightly different bound-state spectrum of the Li$_2$Yb complex for the two PESs. \begin{figure*} \includegraphics[scale=0.7,trim=0 30 0 0,clip]{Fig4a.pdf} \includegraphics[scale=0.7,trim=0 20 0 0,clip]{Fig4b.pdf} \caption{The $J=0$ state-to-state EQM reaction rate coefficient as a function of initial relative collision energy $E$ and vibrational state of the $^6$Li$_2$ product molecule. The left panel corresponds to results based on the calculation with the full trimer potential, whereas the right panel shows rates for the additive pair-wise potential. } \label{CC-VibRate} \end{figure*} For collision energies $E/k>10^{-3}$ K, non-zero angular momenta $J$ need to be included. However, for our system this is not computationally feasible in the EQM approach. Instead, we adopt a $J$-shifting approximation~\cite{Bowman91}, which was shown to work reasonably well for barrier-less reactions involving non-alkali metal atom systems. Details can be found in Refs.~\cite{Gagan13,Gagan13_1}.
In the lower panel of Fig.~\ref{full-vs-pairwise} we show the thermally-averaged reactive rate coefficients for the full trimer and pairwise PES as a function of temperature, evaluated using the $J$-shifting approach. Since the scattering calculation was only performed for collision energies up to 1 K, the Boltzmann average over collision energies limits the evaluation of the rate coefficient to temperatures up to 0.1 K. Results are presented for both even and odd rotational levels of the Li$_2$ molecule. For the full trimer potential, the rate coefficients in the zero-temperature limit for the even and odd Li$_2$ rotational levels are 2.61$\times10^{-10}$ cm$^3$/s and 1.11$\times10^{-10}$ cm$^3$/s, while for the pairwise potential they are 5.33$\times10^{-10}$ cm$^3$/s and 2.49$\times10^{-10}$ cm$^3$/s, respectively. The EQM calculations allow the study of state-to-state reaction rates and, in particular, the distribution over the vibrational and rotational levels of the $^6$Li$_2$ molecule. Figure \ref{CC-VibRate} plots the $J=0$ rate coefficients to form $^6$Li$_2$ vibrational states (summed over all open rotational states) as a function of collision energy. The left and right panels correspond to the cases where the full three-body and pair-wise potentials have been used, respectively. For both cases vibrational levels as high as $v'=19$ are populated. The level $v'=15$ is the most populated level for the calculation with the full trimer potential, although vibrational levels $v'$=1, 2, 3, and 9 are comparably populated, indicating a broad range of vibrational excitation for the $^6$Li$_2$ product. On the other hand, vibrational levels $v'=1$ to $v'=4$ are the most strongly populated for the calculation with the pair-wise potential. Their rate coefficients are about twice as high as those of the other vibrational levels.
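The thermal average used for the lower panel of Fig.~\ref{full-vs-pairwise} is the standard Maxwell--Boltzmann integral $K(T)=\frac{2}{\sqrt{\pi}}(k_BT)^{-3/2}\int_0^\infty K(E)\,\sqrt{E}\,e^{-E/k_BT}\,dE$. A short sketch (our own illustration, using a simple trapezoid rule) that verifies the normalization, i.e.\ that an energy-independent rate coefficient averages to itself:

```python
import math

def thermal_average(rate_of_E, kT, n=100001, emax_factor=40.0):
    """Maxwell-Boltzmann average of a rate coefficient K(E) at temperature
    T, with kT in the same units as E; trapezoid rule on [0, emax]."""
    emax = emax_factor * kT
    h = emax / (n - 1)
    s = 0.0
    for i in range(n):
        e = i * h
        f = rate_of_E(e) * math.sqrt(e) * math.exp(-e / kT)
        s += f * (0.5 if i in (0, n - 1) else 1.0)
    return (2.0 / math.sqrt(math.pi)) * kT ** (-1.5) * s * h

# Normalization check: an energy-independent K(E) must average to itself
const = thermal_average(lambda e: 3.7e-10, kT=1.0)
```

In the Wigner regime $K(E)$ is nearly constant, which is why the thermally-averaged rate approaches the zero-energy limit of the upper panel.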
Figures \ref{CC-RotRate-even} and \ref{CC-RotRate-odd} show the $J=0$ rate coefficients to form even and odd $j'$ levels in the $v'=15$ vibrational level of $^6$Li$_2$, respectively. In each figure the top and bottom panels correspond to collision energies $E/k=10^{-4}$ K and 1 K, respectively, and rotational levels as high as $j'=44$ are populated. Differently colored bars correspond to predictions for the full three-body and pair-wise potentials. For collision energies below 0.01 K (primarily the Wigner threshold regime), the relative distribution is independent of $E$. For the full trimer potential the $v'=15$ rate coefficients in Fig.~\ref{CC-RotRate-even} are dominated by the two highest rotational levels, $j'=42$ and 44. In other words, rotational levels where the relative kinetic energy between the Li$_2$ dimer and Yb is smallest are preferentially produced. On the other hand, the calculations with the pair-wise potential show that the levels $j'=22$, $32$, and $42$ are most populated. At $E/k=1$ K a broader range of rotational levels is populated, with the highest population for $j'=0$ and 26 for the full trimer potential and $j'=0$, 32 and 38 for the pair-wise potential. Similar results have been observed for other barrier-less reactions involving non-alkali metal atom systems such as OH+O and O($^1$D)+H$_2$ \cite{Gagan13,Gagan13_1}. Figure \ref{CC-RotRate-odd} shows results for the rotational distribution of the odd $j'$ levels in the $v'=15$ vibrational level. At $E/k=10^{-4}$ K calculations for the full trimer potential reveal that the $j'=21$ and $41$ levels are most populated, whereas for the pairwise potential these are the $j'=3$ and $39$ levels. At $E/k=1$ K the population of $j'=35, 37, 39$ and $41$ dominates for the trimer potential, and that of the $j'=19$ and $33$ levels for the pair-wise potential. Overall, the highly-excited rotational levels are more populated than the lower rotational levels.
This is partly driven by the anisotropy of the interaction potential and a compromise between conservation of internal energy and rotational angular momentum. \begin{figure} \includegraphics[scale=0.32,trim=0 25 0 50,clip]{Fig5.pdf} \caption{The EQM reaction rate coefficient as a function of the even rotational quantum number of the $v'=15$ vibrational level of the $^6$Li$_2$ molecule. The total angular momentum is $J=0$. Upper and lower panels show the rate coefficient for an initial collision energy of $E/k=10^{-4}$ K and 1 K, respectively. The black and red bars in both panels correspond to results of a calculation with the full trimer potential and additive pair-wise potential, respectively. } \label{CC-RotRate-even} \end{figure} \begin{figure} \includegraphics[scale=0.32,trim=0 25 0 50,clip]{Fig6.pdf} \caption{The EQM reaction rate coefficient as a function of the odd rotational quantum number of the $v'=15$ vibrational level of the $^6$Li$_2$ molecule. The total angular momentum is $J=0$. Upper and lower panels show the rate coefficient for an initial collision energy of $E/k=10^{-4}$ K and 1 K, respectively. The black and red bars in both panels correspond to results of a calculation with the full trimer potential and additive pair-wise potential, respectively. } \label{CC-RotRate-odd} \end{figure} We now turn to the results obtained with the SQM method. In this study, the calculation for the LiYb+Li reactant arrangement was performed for $R_{\rm c}=7 a_0$ and a variable $R_{\rm max}$ depending on the energy under consideration, but with a largest value of $100 a_0$, whereas for the Li$_2$+Yb product arrangement those radii are $11.1 a_0$ and $69.5 a_0$, respectively. These values were selected after numerical tests to guarantee the convergence of both the individual capture probabilities, $p^J_i(E)$, and the total reaction probabilities, $P^J_{f\leftarrow i}(E)$.
The SQM calculations in the reactant arrangement involve only the LiYb ground vibrational state and rotational states up to $j= 16$, whereas in the product arrangement the rovibrational states of the Li$_2$ molecule extend up to $v'=16$ and $j'= 95$. Comparisons made at $E/k \sim 0.1$ K revealed no significant differences between the CS approximation and a proper treatment of the Coriolis coupling term within the coupled-channel framework. Figure \ref{SQM_VibRate} shows the SQM reaction rate coefficient to produce vibrational levels $v'=0-17$ of $^6$Li$_2$ as a function of collision energy $E$. The left panel shows the rate for $J=0$, while the right panel includes the sum over all $J$. The full trimer potential has been used in these calculations. The figure shows that the SQM calculation predicts rate coefficients that decrease with increasing $v'$. This contrasts with the EQM data, which predict a far more complex $v'$ dependence. The difference may be attributed to the three-body forces not being accurately included in the SQM calculations. Experimental measurements of these state-to-state reaction rates are clearly needed, although a ground-state LiYb molecule has not yet been produced. \begin{figure} \includegraphics[scale=0.34,trim=5 30 0 60,clip]{Fig7.pdf} \caption{The state-to-state SQM reaction rate coefficient for vibrational states $v'$ of $^6$Li$_2$ as a function of relative collision energy. The left panel corresponds to the results restricted to total angular momentum $J=0$ and the right panel shows the rate coefficients summed over all $J$. The results are based on the full trimer potential. } \label{SQM_VibRate} \end{figure} The rotational dependence of the rate coefficient from SQM for three vibrational levels $v'$ is shown in Fig.~\ref{SQM-RotRate}. The number of $j'$ levels that can be populated follows from energy conservation and decreases with increasing $v'$.
For small $v'$ the $j'$ dependence is fairly smooth and gently approaches zero for larger $j'$, while for higher $v'$ more structure is predicted, with a maximum near the largest energetically accessible $j'$. For example, for $v'=15$ the rotational states around $j'=40$ are most populated. These trends coincide with those predicted by EQM for the full trimer potential. \begin{figure} \includegraphics[scale=0.37,trim=50 40 0 70,clip]{Fig8.pdf} \caption{The state-to-state SQM reaction rate coefficient for the rotational distribution in the $v'=0$ (upper panel), $v'=5$ (middle panel), and $v'=15$ (lower panel) vibrational levels of the $^6$Li$_2$ molecule as a function of the rotational quantum number. The calculation is for $J=0$ and $E/k=10^{-4}$ K and is based on the full trimer potential. } \label{SQM-RotRate} \end{figure} \begin{figure} \includegraphics[scale=0.35,trim=22 30 0 70,clip]{Fig9.pdf} \caption{The reaction rate coefficient for the $v=0$, $j=0$ ro-vibrational level of $^6$Li$^{174}$Yb colliding with $^6$Li as a function of collision energy, restricted to total angular momentum $J=0$ (left panel) and the thermally-averaged rate coefficient summed over total angular momenta $J$ as a function of temperature (right panel). The blue, red, and green curves correspond to the exact (EQM), statistical (SQM), and universal quantum-defect model (QDT), respectively. Solid and dashed lines for the EQM and SQM calculations correspond to calculations based on the full trimer potential and the pair-wise additive potentials, respectively. We used $C_6=3086 E_h a_0^6$ in the UM.} \label{Compare} \end{figure} For ultracold molecular reactions the universal model (UM) has been very successful in qualitatively, and sometimes quantitatively, describing the observed reaction rates \cite{Julienne2010,Kotochigova2010}. It depends solely on the dispersion coefficient between LiYb($v$=0) and Li and can only predict the total rate coefficient.
In Fig.~\ref{Compare} we compare the UM rate coefficients for the $^6$Li+$^6$Li$^{174}$Yb$(v=0,j=0)$ reaction with those of our other two calculations. For comparison purposes, the EQM results include contributions from both even and odd rotational levels of Li$_2$. Results for both the full trimer and pair-wise potentials are given where appropriate. From the figure it is clear that the results from the different calculations, with different potentials and varying degrees of approximation, begin to merge for energies or temperatures above $10^{-3}$ K. Hence, the rate coefficient is largely insensitive to the model and potential for collision energies above a mK. Rate coefficients from the SQM and UM models attain constant values for temperatures below $10^{-5}$ K in accordance with the Wigner threshold behavior. The EQM results, however, reach this regime only at about $10^{-5}$ K, presumably due to contributions from short-range interactions. This may also explain why the SQM calculations on the pair-wise additive and full trimer potentials yield comparable results. The SQM approach neglects most of the region of the potential between reactants and products where the intermediate complex is assumed to form. It is this region where specific features of the full potential enter, and these are not fully taken into account in the SQM approach. The EQM results from the pair-wise additive and full trimer potentials show a factor of two difference in the ultracold regime, indicating the sensitivity of the results to fine details of the interaction potential. This is also evident from the product-resolved rate coefficients presented in Figs.~\ref{CC-VibRate}, \ref{CC-RotRate-even} and \ref{CC-RotRate-odd}. \section{Conclusion} \label{sec:conclusion} We have investigated the chemical reaction between an ultracold LiYb molecule and an ultracold Li atom.
This type of system was previously completely unexplored in terms of its short- and long-range electronic potential surface as well as its scattering properties and reactivity. In this paper we reported the first calculation of the ground-state electronic surface of the LiLiYb triatomic complex. We found that this collisional system possesses a deep potential energy surface that has its absolute minimum at a linear geometry with an atomization energy of 9929.0 cm$^{-1}$ and accommodates many bound and quasi-bound states that are accessible in ultracold collisions, making quantum dynamics simulations extremely challenging. In addition, we performed a separate calculation of the long-range van der Waals potential between a Li atom and the LiYb molecule in the $v=0$ vibrational level based on the dynamic polarizabilities of Li and LiYb. These van der Waals coefficients were used to estimate the universal reaction rate coefficient. We explored the reactivity of this system at the quantum level using three different computational methods. These include an exact quantum mechanical method based on a rigorous close-coupling approach in hyper-spherical coordinates that makes a minimal number of assumptions. The EQM method predicts both total and state-to-state reaction rate coefficients, which we hope will stimulate the development of state-selective detection of the product molecules in ultracold reactions. This is one of the major challenges for ultracold chemistry in going beyond integrated reaction rate constants. The high accuracy of the reaction rates comes at the expense of model complexity and computational time. We also explored two approximate quantum-mechanical methods to describe the reaction rate and capture the main features of the complex dynamics. One is the so-called statistical method, which assumes that the reactivity from reactants to products is controlled by one long-lived and short-ranged resonant state.
The long-range scattering is described by separate coupled-channel calculations in Jacobi coordinates for the reactant (LiYb+Li) and product (Li$_2$+Yb) arrangements. This model makes predictions for state-to-state rate coefficients as well. Finally, we used the universal QDT model in the reactant arrangement, which assumes that at a carefully chosen separation between LiYb($v=0$, $j=0$) and Li there is unit probability of a reaction. Reflection only occurs on the entrance-channel van der Waals potential. Consequently, this model does not depend on details of the strong short-ranged chemical interactions, and only the total reaction rate coefficient can be calculated. The total reaction rate coefficients as calculated from the three models agree for collision energies (or temperatures) above 10$^{-3}$ K. Only for smaller collision energies and, in particular, in the Wigner threshold regime do they differ, by a factor of two. It was surprising to see that in the Wigner threshold regime the universal model predicts a rate that lies between the EQM and SQM values. In the language of the universal model this suggests that there is a significant probability that flux is returned from the short-range region. The incoming and outgoing fluxes can then interfere. In the EQM calculation these fluxes interfere in such a way that the reaction rate coefficient is significantly enhanced, whereas in the statistical model it is reduced. The disagreement between EQM and SQM in the Wigner threshold regime suggests that the underlying SQM assumption of complex-forming dynamics for the reaction must be relaxed. Both EQM and SQM calculations have been performed with the full three-body potential as well as the pair-wise potential. We conclude that only in the Wigner threshold regime, with collision energies well below 10$^{-3}$ K, is the EQM model sensitive to the presence of the three-body contribution.
On the other hand, the SQM model shows no such sensitivity due to its neglect of three-body forces in the chemically important region. \section*{Acknowledgments} The Temple University and UNLV teams acknowledge support from the Army Research Office, MURI grant No.~W911NF-12-1-0476, and the National Science Foundation, grant Nos.~PHY-1308573 (S.K.) and PHY-1205838 (N.B.). TGL acknowledges support from Project No. FIS2011-29596-C02-01 of the Spanish MICINN. BKK acknowledges that part of this work was done under the auspices of the US Department of Energy under Project No. 20140309ER of the Laboratory Directed Research and Development Program at Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the US Department of Energy under contract DE-AC52-06NA25396.
\section{Introduction} The magnetic moment of the electron, which is responsible for the interaction with the magnetic field in the Born approximation, can be written in the standard form \begin{eqnarray} \vec{\mu}=g(\frac{e\hbar}{2mc})\vec{s} \end{eqnarray} where $\vec{s}$, $e$, and $m$ are the spin, electric charge, and mass of the electron. The coefficient $g$ is called the Lande g-factor or gyromagnetic factor. The standard prediction of the Dirac equation is $g=2$. The deviation from the Dirac value \begin{eqnarray} a_{e}=(g-2)/2 \end{eqnarray} is known as the anomalous magnetic moment. The first result for the anomalous magnetic moment of the electron was calculated from Quantum Electrodynamics (QED) using radiative corrections by Schwinger in 1948 as $a_{e}=\frac{\alpha}{2\pi}$ \cite{schwinger}. Since then, physicists have successively improved the accuracy of this quantity from both theoretical and experimental points of view. These works have provided stringent tests of QED and have led to the precise determination of the fine structure constant $\alpha$, based on the fact that $a_{e}$ is insensitive to the weak and strong interactions. Similar studies have been done for muons. Since the higher-loop corrections are mass dependent, $a_{\mu}$ is expected to include weak and hadronic contributions. This offers a sensitivity to new physics enhanced by a relative factor of $(m_{\mu}/m_{e})^{2}\sim 4\times 10^{4}$ compared to the case of $a_{e}$. Several detailed Standard Model tests have been done using the accurate value of the anomalous magnetic moment of the muon \cite{brown}. The anomalous magnetic moment $a_{\tau}$ of the $\tau$ lepton would be much better suited to constrain new physics due to its large mass. However, a spin precession experiment cannot provide a direct measurement of $a_{\tau}$ at present because of the short lifetime of the $\tau$. We therefore need collider experiments with high accuracy to produce $\tau$ leptons.
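As a quick numerical cross-check (an illustration only; the value of $\alpha$ below is an assumed input, not taken from this paper), the Schwinger term can be evaluated directly:

```python
import math

# Leading-order (Schwinger) anomalous magnetic moment: a = alpha / (2*pi).
# The fine structure constant value is an assumed input for this sketch.
alpha = 1.0 / 137.035999
a_schwinger = alpha / (2.0 * math.pi)
print(a_schwinger)  # approximately 1.16e-3
```

The result, $\approx 1.16\times 10^{-3}$, is indeed the dominant part of the QED value of $a_{\tau}$ quoted below, with the higher-loop corrections contributing only at the per-cent level.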
The latest QED contribution to the anomalous magnetic moment $a_{\tau}$ from higher-loop corrections is given by the theoretical result \cite{passera} \begin{eqnarray} a^{QED}_{\tau}=117324\times 10^{-8} \end{eqnarray} with an uncertainty of $2\times 10^{-8}$. The experimental limits at 95\% CL were obtained by the L3 and OPAL collaborations in radiative $Z\to \tau\tau\gamma$ events at LEP \cite{l3,opal} \begin{eqnarray} -0.052 < a_{\tau} < 0.058 \,\,\, \mbox{(L3)}\\ -0.068 < a_{\tau} < 0.065 \,\,\, \mbox{(OPAL)} \end{eqnarray} and later by the DELPHI collaboration \cite{delphi} based on the process $e^{+}e^{-} \to e^{+}e^{-}\tau^{+}\tau^{-}$ \begin{eqnarray} -0.052 < a_{\tau} < 0.013 \end{eqnarray} It is clear that at least one order of magnitude improvement is needed to determine $a_{\tau}$. In the coupling of the $\tau$ lepton to a photon, another interesting contribution comes from CP-violating effects, which generate an electric dipole moment. CP violation has been observed in the system of $K^{0}$ mesons \cite{cp1}. This phenomenon is described within the SM by the complex couplings in the Cabibbo-Kobayashi-Maskawa (CKM) matrix of the quark sector \cite{ckm}. There is no CP violation in the leptonic couplings of the SM. Nevertheless, CP violation in the quark sector induces electric dipole moments of the leptons at the three-loop level \cite{hoog}. This SM contribution to the electric dipole moment of the leptons can be shown to be too small to detect. Another source of CP-violating couplings of leptons is neutrino mixing, if neutrinos are massive \cite{barr}. This kind of CP violation is also shown to be undetectable through the electric dipole moment of the $\tau$ lepton. Supersymmetry (SUSY) \cite{ellis}, additional Higgs multiplets \cite{weinberg}, left-right symmetric models \cite{pati}, and leptoquarks \cite{ma} are possible sources of CP violation.
Some loop diagrams are proportional to the fermion masses, which makes the $\tau$ the lepton most sensitive to CP violation. Therefore, larger effects may arise from physics beyond the SM. Only upper limits on the electric dipole moment of the $\tau$ lepton have been obtained so far from experiments at 95\% CL \cite{l3,opal,delphi} \begin{eqnarray} |d_{\tau}| < 3.1\times 10^{-16}\,\, \mbox{e cm}\,\, \mbox{(L3)} \\ |d_{\tau}| < 3.7\times 10^{-16}\,\, \mbox{e cm}\,\, \mbox{(OPAL)} \\ |d_{\tau}| < 3.7\times 10^{-16}\,\, \mbox{e cm}\,\, \mbox{(DELPHI)} \end{eqnarray} More stringent limits were set by BELLE \cite{belle} \begin{eqnarray} -0.22<Re(d_{\tau}) < 0.45 \,\,( 10^{-16}\,\, \mbox{e cm}) \\ -0.25<Im(d_{\tau}) < 0.08 \,\,( 10^{-16}\,\, \mbox{e cm}) \end{eqnarray} Further limits have been derived from earlier LEP results \cite{cornet} or obtained by indirect methods and an early study of heavy ion collisions \cite{masso}. The couplings of the $\tau$ lepton to a photon can be parametrized by replacing the pointlike factor $\gamma^{\mu}$ by $\Gamma^{\mu}$ as follows \cite{grimus} \begin{eqnarray} \Gamma^{\mu}=F_{1}(q^{2})\gamma^{\mu}+F_{2}(q^{2}) \frac{i}{2m_{\tau}}\sigma^{\mu\nu}q_{\nu}+F_{3}(q^{2}) \frac{1}{2m_{\tau}}\sigma^{\mu\nu}q_{\nu}\gamma^{5} \end{eqnarray} where $F_{1}(q^{2})$, $F_{2}(q^{2})$ and $F_{3}(q^{2})$ are form factors related to the electric charge, the anomalous magnetic dipole moment, and the electric dipole moment, respectively. Here $q$ is the momentum transfer to the photon and $\sigma^{\mu\nu}=\frac{i}{2}(\gamma^{\mu}\gamma^{\nu}- \gamma^{\nu}\gamma^{\mu})$. The asymptotic values of the form factors in the limit $q^{2}\to 0$ are called moments and describe the static properties of the fermions: \begin{eqnarray} F_{1}(0)=1,\,\,\, a_{\tau}=F_{2}(0), \,\,\, d_{\tau}=\frac{e}{2m_{\tau}}F_{3}(0) \end{eqnarray} In the next section, we give some details of the equivalent photon approximation and forward detector physics at the LHC.
Then we study the sensitivity of the process $pp \to pp \tau^{+}\tau^{-} $ to the anomalous electromagnetic moments of the $\tau$ lepton via the subprocess $\gamma\gamma \to \tau^{+}\tau^{-} $. \section{$\gamma\gamma$ Scattering at LHC} Two-photon scattering at the Large Hadron Collider (LHC) is becoming an interesting additional tool to search for physics within the Standard Model (SM) or beyond it. Forward detectors at ATLAS and CMS are being developed to detect particles missed by the central detectors, whose pseudorapidity coverage is $|\eta|<2.5$ for the tracking system and $|\eta|<5.0$ for the calorimetry. In many cases, elastic scattering and ultraperipheral collisions fall outside the acceptance of the central detectors. According to the program of the ATLAS and CMS Collaborations, forward detectors will be installed in a region roughly 100 m to 400 m from the interaction point \cite{royon}. With this new equipment, the aim is to investigate soft and hard diffraction, low-x dynamics with forward jet studies, high energy photon induced interactions, large rapidity gaps between forward jets, and luminosity monitoring \cite{royon,khoze, schul}. These dedicated detectors may tag protons with an energy fraction loss $\xi =E_{loss}/E_{beam}$ far away from the interaction point. This property allows for high energy photon induced interactions with exclusive final states in the central detectors. In the current program of ATLAS and CMS, the positions of the forward detectors are planned to give an overall acceptance region of $0.0015<\xi<0.5$ \cite{royon2,albrow}. A location of the forward detectors closer to the interaction point leads to higher $\xi$. Almost real photons are emitted by each proton and interact with each other to produce exclusive final states. In this work, we are interested in $\tau$ lepton pairs in the final state, $\gamma\gamma \to \tau^{+}\tau^{-}$. The deflected protons and their energy loss will be detected by the forward detectors far away from the interaction point, as mentioned above.
The final $\tau$ leptons with rapidity $|\eta|<2.5$ and $p_{T}>20$ GeV will be identified by the central detector. Photons emitted at small angles by the protons show a spectrum of virtuality $Q^{2}$ and energy $E_{\gamma}$. To handle this kind of process, the equivalent photon approximation \cite{budnev,baur} is used. The proton-proton case differs from the pointlike electron-positron case by the inclusion of the electromagnetic form factors in the equivalent photon spectrum and effective $\gamma\gamma$ luminosity \begin{eqnarray} dN=\frac{\alpha}{\pi}\frac{dE_{\gamma}}{E_{\gamma}} \frac{dQ^{2}}{Q^{2}}[(1-\frac{E_{\gamma}}{E}) (1-\frac{Q^{2}_{min}}{Q^{2}})F_{E}+\frac{E^{2}_{\gamma}}{2E^{2}}F_{M}] \end{eqnarray} where \begin{eqnarray} Q^{2}_{min}=\frac{m^{2}_{p}E^{2}_{\gamma}}{E(E-E_{\gamma})}, \;\;\;\; F_{E}=\frac{4m^{2}_{p}G^{2}_{E}+Q^{2}G^{2}_{M}} {4m^{2}_{p}+Q^{2}} \\ G^{2}_{E}=\frac{G^{2}_{M}}{\mu^{2}_{p}}=(1+\frac{Q^{2}}{Q^{2}_{0}})^{-4}, \;\;\; F_{M}=G^{2}_{M}, \;\;\; Q^{2}_{0}=0.71 \mbox{GeV}^{2} \end{eqnarray} Here $E$ is the energy of the proton beam, related to the photon energy by $E_{\gamma}=\xi E$, and $m_{p}$ is the mass of the proton. The squared magnetic moment of the proton is $\mu^{2}_{p}=7.78$, and $F_{E}$ and $F_{M}$ are functions of the electric and magnetic form factors. The cross section requires the integration of the subprocess $\gamma\gamma \to \tau^{+}\tau^{-}$ over the photon spectrum \begin{eqnarray} d\sigma=\int{\frac{dL^{\gamma\gamma}}{dW}} d\sigma_{\gamma\gamma \to \tau\tau}(W)dW \end{eqnarray} where the effective photon luminosity $dL^{\gamma\gamma}/dW$ is given by \begin{eqnarray} \frac{dL^{\gamma\gamma}}{dW}=\int_{Q^{2}_{1,min}}^{Q^{2}_{max}} {dQ^{2}_{1}}\int_{Q^{2}_{2,min}}^{Q^{2}_{max}}{dQ^{2}_{2}} \int_{y_{min}}^{y_{max}} {dy \frac{W}{2y} f_{1}(\frac{W^{2}}{4y}, Q^{2}_{1}) f_{2}(y,Q^{2}_{2})}. \end{eqnarray} with \begin{eqnarray} y_{min}=\mbox{MAX}(W^{2}/(4\xi_{max}E), \xi_{min}E), \;\;\; y_{max}=\xi_{max}E, \;\;\; f=\frac{dN}{dE_{\gamma}dQ^{2}}.
\end{eqnarray} Here $W$ is the invariant mass of the two-photon system, $W=2E\sqrt{\xi_{1}\xi_{2}}$, and $Q^{2}_{max}$ is the maximum virtuality. The behaviour of the effective $\gamma\gamma$ luminosity is shown in Fig.~\ref{fig1} as a function of the invariant mass of the two-photon system. The $Q^{2}_{max}$ dependence of the effective $\gamma\gamma$ luminosity is not distinguishable in Fig.~\ref{fig1} between $Q^{2}_{max}=(1-4)$ $\mbox{GeV}^{2}$. This is due to the electromagnetic dipole form factors of the protons, which fall steeply as functions of $Q^{2}$ and therefore cause only a very slow increase of the $\gamma\gamma$ luminosity as $Q^{2}_{max}$ increases. This is explicitly shown in Table \ref{tab1}, where the cross sections are calculated as described in the next section. From Table \ref{tab1} we see that the $Q^{2}_{max}$ dependence does not create considerable uncertainty. Thus, it is reasonable to take $Q^{2}_{max}$ as (1-2) $\mbox{GeV}^{2}$. \FIGURE{\epsfig{file=fig1.eps} \caption{Effective $\gamma\gamma$ luminosity as a function of the invariant mass of the two photon system.} \label{fig1}} \TABLE{ \begin{tabular}{|c|c|c|}\hline $Q^{2}_{max}(GeV^{2})$ & $\sigma^{0}$(fb) & $\sigma^{0}$(fb) \\ \hline & $ 0.0015<\xi<0.5$ & $ 0.01<\xi<0.15$ \\ \hline 0.5 & 167.6 & 10.4 \\ 0.8 & 171.3 & 10.7 \\ 1 & 172.3 & 10.8 \\ 1.5 & 173.3 & 10.9 \\ 1.8 & 173.5 & 10.9 \\ 2 & 173.6 & 10.9 \\ 3 & 173.8 & 11.0 \\ 4 & 173.8 & 11.0 \\ \hline \end{tabular} \caption{$Q^{2}_{max}$ dependence of the cross sections in the equivalent photon approximation for the process $ pp \to p\tau^{+}\tau^{-}p$ without anomalous couplings of the tau lepton. Two intervals of forward detector acceptance $\xi$ are considered. For $Q^{2}_{max}=(1-4)$ $GeV^{2}$ the cross sections do not change appreciably. \label{tab1}}} There are experimental uncertainties in the dipole form factors in Eq. (2.3). In Ref. \cite{arington} these uncertainties are given for the region $Q^{2}=0.007-5.850$ $GeV^{2}$.
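A minimal numerical sketch of the equivalent photon spectrum of Eqs. (2.1)-(2.3) follows; the 7 TeV beam energy and the constants in the function signature are assumptions made for illustration only:

```python
import math

def photon_flux(E_gamma, Q2, E=7000.0, mp=0.938272, alpha=1.0 / 137.036):
    """Equivalent photon spectrum dN/(dE_gamma dQ^2) of a proton.

    Energies in GeV, virtualities in GeV^2. A sketch of Eqs. (2.1)-(2.3),
    not production code.
    """
    Q2_min = mp**2 * E_gamma**2 / (E * (E - E_gamma))
    if Q2 <= Q2_min:
        return 0.0
    # Dipole form factors with Q0^2 = 0.71 GeV^2 and mu_p^2 = 7.78
    GE2 = (1.0 + Q2 / 0.71) ** -4
    GM2 = 7.78 * GE2
    FE = (4.0 * mp**2 * GE2 + Q2 * GM2) / (4.0 * mp**2 + Q2)
    FM = GM2
    return (alpha / math.pi) / (E_gamma * Q2) * (
        (1.0 - E_gamma / E) * (1.0 - Q2_min / Q2) * FE
        + E_gamma**2 / (2.0 * E**2) * FM
    )
```

The steep fall of the dipole form factors is visible directly: the flux at $Q^{2}=2$ $\mbox{GeV}^{2}$ is orders of magnitude below its value near $Q^{2}_{min}$, which is why enlarging $Q^{2}_{max}$ beyond (1-2) $\mbox{GeV}^{2}$ barely changes the cross sections in Table \ref{tab1}.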
The change in the photon flux $f(E_{\gamma}, Q^{2})$ from the uncertainties in the electric and magnetic form factors can be calculated with the help of the expression \begin{eqnarray} \delta f=\sqrt{(\frac{\partial{f}}{\partial{G_{E}}}\delta G_{E})^2 +(\frac{\partial{f}}{\partial{G_{M}}}\delta G_{M})^2} \end{eqnarray} Using some of the uncertainties in Ref. \cite{arington}, we obtain the relative changes in the photon flux $\delta f/f$. The results are shown in Table \ref{tab2} for two photon energies. The uncertainty in the photon flux from both protons leads to a relative uncertainty in the cross section $\delta\sigma/\sigma$ of around 0.03 on average, depending on the photon energy, for the process $ pp \to p\tau^{+}\tau^{-}p$ with $Q^{2}_{max}=2$ $GeV^{2}$. \TABLE{ \begin{tabular}{|c|c|c|c|c|}\hline $\xi$ &$Q^{2}(GeV^{2})$ & $\delta G_{E}/G_{D}$ & $\delta G_{M}/(\mu_{p}G_{D})$ & $\delta f/f$ \\ \hline 0.01 & 0.022 & 0.003 & 0.019 & 0.006 \\ 0.01 & 0.115 & 0.011 & 0.007 & 0.018 \\ 0.01 & 0.528 & 0.013 & 0.009 & 0.015 \\ 0.01 & 1.020 & 0.017 & 0.006 & 0.013 \\ 0.01 & 2.070 & 0.038 & 0.006 & 0.017 \\ \hline 0.15 & 0.022 & 0.003 & 0.019 & 0.090 \\ 0.15 & 0.115 & 0.011 & 0.007 & 0.016 \\ 0.15 & 0.528 & 0.013 & 0.009 & 0.015 \\ 0.15 & 1.020 & 0.017 & 0.006 & 0.013 \\ 0.15 & 2.070 & 0.038 & 0.006 & 0.017 \\ \hline \end{tabular} \caption{Relative change in the photon flux $\delta f/f$ due to the experimental uncertainties in the dipole form factors. The values in the middle three columns are taken from Ref. \cite{arington}. \label{tab2}}} Let us briefly discuss bremsstrahlung lepton pair production, which is one of the possible backgrounds to the equivalent photon approximation. In this process, there is a virtual photon exchanged between the two protons and a bremsstrahlung photon emitted by one of the protons. The bremsstrahlung photon creates a lepton pair.
The square of the matrix element includes electromagnetic form factors at each photon-proton vertex, as given in Ref.~\cite{baur2} \begin{eqnarray} |M_{if}|^{2} \to |M_{if}|^{2} |F_{A}(q^{2}_{1})|^{2} |F_{B}(q^{2}_{1})|^{2}|F_{T}(q^{2}_{2})|^{2} \end{eqnarray} where $q_{1}$ is the momentum transfer between the two protons and $q_{2}$ is identical to the momentum of the lepton pair. $F_{A}(q^{2}_{1})$ and $F_{B}(q^{2}_{1})$ are elastic form factors in the space-like region, and $F_{T}(q^{2}_{2})$ is the form factor in the time-like region. For high $q_{2}^{2}$ the form factor $|F_{T}(q^{2}_{2})|^{2}$ suppresses the cross section, since at large $q^{2}$ the form factors behave like $1/q^{4}$. In our work, as will be seen in the next section, each tau lepton in the final state has $p_{T}>20$ GeV. Therefore the minimum $q_{2}^{2}$ value is $4(m_{\tau}^{2}+p_{T}^{2})=1612$ $\mbox{GeV}^{2}$, which makes the cross section for bremsstrahlung tau pair production completely negligible. Two-photon exchange interactions with invariant diphoton mass $W > 1$ TeV are highly interesting for probing more accurate values of the SM parameters, and also deviations from the SM, with the available luminosity. \section{Cross Sections And Sensitivity} There are t- and u-channel Feynman diagrams for the subprocess $\gamma\gamma \to \tau^{+}\tau^{-}$, where both vertices contain anomalous couplings.
The squared amplitude can be written in terms of the following reduced amplitudes, \begin{eqnarray} A_{1}&&=\frac{1}{2m^{4}}[48F_{1}^{3}F_{2}(m^{2}-\hat{t}) (m^{2}+\hat{s}-\hat{t})m^{4}-16F_{1}^{4}(3m^{4}-\hat{s}m^{2} +\hat{t}(\hat{s}+\hat{t}))m^{4} \nonumber \\ &&+2F_{1}^{2}(m^{2}-\hat{t}) (F_{2}^{2}(17m^{4}+(22\hat{s}-26\hat{t})m^{2}+\hat{t} (9\hat{t}-4\hat{s})) \nonumber \\ &&+F_{3}^{2} (17m^{2}+4\hat{s}-9\hat{t})(m^{2}-\hat{t}))m^{2} \nonumber \\ &&+12F_{1}F_{2}(F_{2}^{2}+F_{3}^{2})\hat{s}(m^{3}-m\hat{t})^{2} -(F_{2}^{2}+F_{3}^{2})^{2}(m^{2}-\hat{t})^{3} (m^{2}-\hat{s}-\hat{t})] \end{eqnarray} \begin{eqnarray} A_{2}&&=-\frac{1}{2m^{4}}[48F_{1}^{3}F_{2}(m^{4}+ (\hat{s}-2\hat{t})m^{2}+\hat{t}(\hat{s}+\hat{t}))m^{4}\nonumber \\ &&+16F_{1}^{4}(7m^{4}-(3\hat{s}+4\hat{t})m^{2}+ \hat{t}(\hat{s}+\hat{t}))m^{4} \nonumber \\ &&+2F_{1}^{2}(m^{2}-\hat{t}) (F_{2}^{2}(m^{4}+(17\hat{s}-10\hat{t})m^{2} +9\hat{t}(\hat{s}+\hat{t})) \nonumber \\ &&+F_{3}^{2}(m^{2}-9\hat{t})(m^{2}-\hat{t}-\hat{s})) m^{2}+(F_{2}^{2}+F_{3}^{2})^{2}(m^{2}-\hat{t})^{3} (m^{2}-\hat{s}-\hat{t})] \end{eqnarray} \begin{eqnarray} A_{12}&&=\frac{1}{m^{2}}[-16F_{1}^{4}(4m^{6}-m^{4}\hat{s}) +8F_{1}^{3}F_{2}m^{2}(6m^{4}-6m^{2}(\hat{s}+2\hat{t})-\hat{s}^{2} +6\hat{t}^{2}+6\hat{s}\hat{t}) \nonumber \\ &&+F_{1}^{2}(F_{2}^{2}(16m^{6}-m^{4}(15\hat{s}+32\hat{t})+ m^{2}(-15\hat{s}^{2}+14\hat{t}\hat{s}+16\hat{t}^{2}) +\hat{s}\hat{t}(\hat{s}+\hat{t})) \nonumber \\ &&+F_{3}^{2}(16m^{6}- m^{4}(15\hat{s}+32\hat{t})+m^{2}(-5\hat{s}^{2}+14\hat{t}\hat{s} +16\hat{t}^{2})+\hat{s}\hat{t}(\hat{s}+\hat{t}))) \nonumber \\ &&-4F_{1}F_{2}(F_{2}^{2}+F_{3}^{2})\hat{s}(m^{4}+ m^{2}(\hat{s}-2\hat{t})+\hat{t}(\hat{s}+\hat{t}))\nonumber \\ &&-4F_{1}F_{3}(F_{2}^{2}+F_{3}^{2})(2m^{2}-\hat{s}-2\hat{t}) \epsilon_{\mu\nu\rho\sigma}p_{1}^{\mu}p_{2}^{\nu}p_{3}^{\rho} p_{4}^{\sigma} \nonumber \\ &&-2(F_{2}^{2}+F_{3}^{2})^{2}\hat{s} (m^{4}-2\hat{t}m^{2}+\hat{t}(\hat{s}+\hat{t}))] \end{eqnarray} where $p_{1}$, $p_{2}$, $p_{3}$ and $p_{4}$ are
the momenta of the incoming photons and final $\tau$ leptons. The Mandelstam variables are defined as $\hat{s}=(p_{1}+p_{2})^{2}$, $\hat{t}=(p_{1}-p_{3})^{2}$ and $\hat{u}=(p_{1}-p_{4})^{2}$, and $m$ is the $\tau$ lepton mass. The squared amplitudes are \begin{eqnarray} |M_{1}|^{2}&&=\frac{16\pi^{2}\alpha^{2}}{(\hat{t}-m^{2})^{2}}A_{1} \\ |M_{2}|^{2}&&=\frac{16\pi^{2}\alpha^{2}}{(\hat{u}-m^{2})^{2}}A_{2} \\ |M_{12}|^{2}&&=\frac{16\pi^{2}\alpha^{2}}{(\hat{t}-m^{2}) (\hat{u}-m^{2})}A_{12} \end{eqnarray} The cross section for the process $pp \to pp\tau^{+}\tau^{-}$ without anomalous couplings is given in Table \ref{tab3} at the LHC energy $\sqrt{s}=14$ TeV for rapidity $|\eta|<2.5$ and transverse momentum $p_{T}>20$ GeV of the final $\tau$ leptons. A possible background is the diffractive double pomeron exchange (DPE) production of tau pairs created via the Drell-Yan process. The DPE production cross section can be obtained within the factorized Ingelman-Schlein \cite{ingelman} model, where the concept of the diffractive parton distribution function (DPDF) is introduced. The convolution integral for the subprocess $ q\bar{q}\to \tau\tau$ is given by \begin{eqnarray} \sigma&&=\int dx_{1} dx_{2} d\beta_{1} d\beta_{2} f_{\mathbb{P}/p}(x_{1},t) f_{\mathbb{P}/p}(x_{2},t) \nonumber \\ &&\sum_{i,j=1}^{3} \left [f_{i}(\beta_{1},Q^{2}) f_{j}(\beta_{2},Q^{2})+ f_{j}(\beta_{1},Q^{2}) f_{i}(\beta_{2},Q^{2})\right ] \hat{\sigma}(q\bar{q}\to \tau\tau) \end{eqnarray} where $f_{\mathbb{P}/p}(x_{1},t)$ is the pomeron flux emitted by one of the protons and $f_{i}(\beta_{1},Q^{2})$ is the light quark distribution function coming from the structure of the pomeron. Here $x_{1}$, $x_{2}$ denote the momentum fractions of the protons carried by the pomeron fluxes, and $\beta_{1}$, $\beta_{2}$ represent the longitudinal momentum fractions of the pomeron carried by the struck quarks. The double pomeron exchange production cross section should be multiplied by the gap survival probability 0.03 for the LHC.
The measurements of the pomeron flux and DPDF were performed at HERA, with their uncertainties \cite{aktas,aktas2}. The uncertainty in the DPDF was obtained as (5-10)\% for light quarks in Fig.~11 of Ref. \cite{aktas2}. We have determined the uncertainty in the pomeron flux as (8-10)\% using the uncertainties of the flux parameters given in Ref. \cite{aktas}. Taking the maximum values of each of the uncertainties above, the combined uncertainty due to both the DPDF and the pomeron flux from one proton is estimated to be 14\%. The overall uncertainty related to pomerons arising from both protons is then expected to be 20\%, using a root sum-of-the-squares approach. \TABLE{ \begin{tabular}{|c|c|c|}\hline $\xi$ &$\sigma^{\mathbb{P}}$ (fb) & $\sigma^{0}$ (fb)\\ \hline 0.0015-0.5 & 28.4$\pm$ 2.8 & 173$\pm$ 2.6 \\ 0.0015-0.15 & 27.2$\pm$ 2.7 & 173$\pm$ 2.6 \\ 0.01-0.15 & 4.6$\pm$ 0.5 & 10.9$\pm$ 0.2 \\ \hline \end{tabular} \caption{Cross sections $\sigma^{\mathbb{P}}$ obtained by double pomeron exchange production of tau pairs, multiplied by the gap survival probability 0.03. For comparison, the cross sections $\sigma^{0}$ for the same final state obtained by the equivalent photon approximation at tree level (without anomalous couplings) are given. In both cases the LHC energy $\sqrt{s}=14$ TeV and the transverse momentum and rapidity cuts $p_{T}>20$ $GeV$ and $|\eta|< 2.5$ are taken into account for each final $\tau$ lepton. The uncertainty in $\sigma^{\mathbb{P}}$ is due to the pomeron flux and DPDF. The uncertainty in $\sigma^{0}$ is related to the dipole form factors in the equivalent photon spectrum. \label{tab3}}} Taking $t=-1$ $GeV^{2}$ and $Q^{2}=2$ $GeV^{2}$, the calculated cross sections are given in Table \ref{tab3} for three acceptance regions of the forward detectors. In the computation the sum over the three light quarks in Eq. (3.7) has been performed. The measurements at HERA for the pomeron flux and DPDF cover the ranges $8.5<Q^{2}<1600$ $GeV^{2}$ and $0.08<|t|<0.5$ $GeV^{2}$.
Using NLO DGLAP equations, the DPDF were evolved to higher and lower scales beyond the measured range, and grids were provided for $1<Q^{2}<30000$ $GeV^{2}$ in the H1 2006 DPDF Fits. The data were also analysed by integrating the cross section over the range $t_{min}<|t|<1$ $GeV^{2}$ \cite{aktas}. Possible additional uncertainties from these extrapolations are expected to be compensated by choosing the maximum individual uncertainties before combination. The sensitivity to the anomalous couplings increases at higher energies because of the term $\sigma_{\mu\nu}q^{\nu}$. For invariant two-photon masses $W>1$ TeV and sufficient luminosity, we expect far better results than at LEP energies. First we place bounds on the tau anomalous magnetic moment by a $\chi^{2}$ analysis, keeping $F_{3}=0$: \begin{eqnarray} \chi^{2}=\frac{(\sigma(F_{2})-\sigma^{0})^{2}} {(\sigma^{0}+\sigma^{\mathbb{P}})^{2}\delta^{2}}\\ \delta=\sqrt{(\delta^{st})^{2}+(\delta^{sys})^{2}} \\ \delta^{st}=\frac{1}{\sqrt{N^{0}}}\\ N^{0}=L_{int}(\sigma^{0}+\sigma^{\mathbb{P}}) BR \end{eqnarray} where $\sigma^{0}$, $N^{0}$ and $\delta$ are the cross section, the number of events, and the uncertainty without anomalous couplings, and $L_{int}$ is the integrated luminosity of the LHC. The contributions of the pomeron background do not appear in the numerator because they cancel each other. Thus, the pomeron contribution in the denominator is not expected to be very influential even though it has a large 20\% uncertainty. Let us now determine the effect of the uncertainties in $\sigma(F_{2})$, $\sigma^{0}$, and the pomeron background on the $\chi^{2}$ function. The sources of the uncertainties of $\sigma(F_{2})$ and $\sigma^{0}$ are connected to the dipole form factors in the equivalent photon spectrum, as explained before. The 3\% uncertainty of $\sigma(F_{2})$ and $\sigma^{0}$ leads to the $\delta^{sys}$ values shown in Table \ref{tab4}.
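The quadrature combinations used in this analysis are straightforward to reproduce; the sketch below checks the pomeron-related 14\% and 20\% figures and the statistical-plus-systematic combination entering $\delta$ (all inputs are the rounded values quoted in the text):

```python
import math

# Pomeron-related uncertainties: DPDF (5-10)% and flux (8-10)%, taken at
# their maxima and combined in quadrature (root sum of squares).
dpdf, flux = 0.10, 0.10
one_proton = math.hypot(dpdf, flux)                # per-proton, ~14%
both_protons = math.hypot(one_proton, one_proton)  # both protons, ~20%

# Statistical + systematic combination defined in the chi^2 analysis,
# delta = sqrt(delta_st^2 + delta_sys^2); delta_st here is a rounded
# illustrative value for L = 100 fb^-1.
delta_st, delta_sys = 0.0104, 0.02
delta = math.hypot(delta_st, delta_sys)
```

This confirms that the combined uncertainty is dominated by whichever individual contribution is largest, which is why the small pomeron term barely changes $\delta^{sys}$ in Table \ref{tab4}.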
The total systematic uncertainty is formed by combining the individual contributions in quadrature, as given in the last column of Table \ref{tab4}. In our calculations, the individual uncertainties have been kept at their maximum values and have been treated as uncorrelated in order to obtain a larger systematic uncertainty $\delta^{sys}$. \TABLE{ \begin{tabular}{|c|c|c|c|c|c|}\hline $L_{int}(fb^{-1})$ & $\xi$ &$\delta^{sys}(\sigma(F_{2,3}))$ & $\delta^{sys}(\sigma^{0})$ & $\delta^{sys}(\sigma^{\mathbb{P}})$ & $\delta^{sys}$ \\ \hline 50 & 0.0015-0.5&0.016& 0.016 & 0.002 & 0.02 \\ 100 & 0.0015-0.5&0.014 & 0.014 & $<$0.002 & 0.02\\ 200 & 0.0015-0.5 & 0.012& 0.012 & $<$0.002 & $<$0.02 \\ \hline 50 & 0.0015-0.15&0.016& 0.016 & 0.002 & 0.02 \\ 100 & 0.0015-0.15&0.014 & 0.014 & $<$0.002 & 0.02\\ 200 & 0.0015-0.15 & 0.012& 0.012 & $<$0.002 & $<$0.02 \\ \hline 50 & 0.01-0.15&0.025 & 0.025 & 0.007 & 0.04 \\ 100 & 0.01-0.15&0.020 & 0.020 & 0.006 & 0.03\\ 200 & 0.01-0.15 & 0.018 & 0.018 & 0.005 & $<$0.03 \\ \hline \end{tabular} \caption{Systematic uncertainties in the $\chi^2$ function depending on luminosity and acceptance region $\xi$. The last column gives the combined uncertainties in quadrature. Values with the $<$ character denote uncertainties less than the specified values. \label{tab4}}} In this work all computations are done in the laboratory frame of the two protons. For the signal we consider one of the tau leptons decaying hadronically and the other leptonically, with branching ratios 65\% and 35\%. The joint branching ratio of the tau pair then becomes BR=0.46.
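The uncertainty bookkeeping above can be sketched numerically: the quadrature combinations, the joint branching ratio BR, and the statistical uncertainty $\delta^{st}$ all follow from a few lines of arithmetic. The cross sections below are taken from Table \ref{tab3} for $\xi=0.0015-0.5$, and the luminosity and systematic uncertainty are one of the scenarios considered in the text; everything else is quoted from the formulas above:

```python
import math

# --- Pomeron-related uncertainty (root sum of squares) ---
dpdf_unc = 0.10          # DPDF uncertainty, upper end of (5-10)%
flux_unc = 0.10          # pomeron flux uncertainty, upper end of (8-10)%
one_proton = math.hypot(dpdf_unc, flux_unc)        # ~0.14, quoted as 14%
both_protons = math.hypot(one_proton, one_proton)  # ~0.20, quoted as 20%

# --- Joint branching ratio: one tau hadronic (65%), one leptonic (35%),
#     with a factor 2 for the two orderings ---
BR = 2 * 0.65 * 0.35     # 0.455, quoted as 0.46

# --- Statistical uncertainty for xi = 0.0015-0.5, L_int = 50 fb^-1 ---
sigma0 = 173.0           # fb, EPA cross section without anomalous couplings
sigmaP = 28.4            # fb, double pomeron exchange background
L_int = 50.0             # fb^-1, integrated luminosity

N0 = L_int * (sigma0 + sigmaP) * BR   # number of events, Eq. for N^0
delta_st = 1.0 / math.sqrt(N0)        # statistical uncertainty

# --- Combined uncertainty with delta_sys = 0.02 (Table 4, first row) ---
delta = math.sqrt(delta_st**2 + 0.02**2)

print(round(both_protons, 2), round(BR, 3), round(delta_st, 4), round(delta, 4))
```

With these inputs $\delta^{st}\approx 0.015$, comparable in size to the systematic contributions listed in Table \ref{tab4}.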
\TABLE{ \begin{tabular}{|c|c|c|c|}\hline $L_{int}(fb^{-1})$ & $\xi$ & $a_{\tau}$ & $|d_{\tau}|$(e cm) \\ \hline 50 & 0.0015-0.5&-0.0062, 0.0042 &0.23$\times 10^{-16}$ \\ 100 & 0.0015-0.5&-0.0057, 0.0037 & 0.21$\times 10^{-16}$ \\ 200 & 0.0015-0.5 &-0.0054, 0.0034 & 0.19$\times 10^{-16}$ \\ \hline 50 & 0.0015-0.15&-0.0063, 0.0043 &0.23 $\times 10^{-16}$ \\ 100 & 0.0015-0.15&-0.0058, 0.0037 & 0.22 $\times 10^{-16}$ \\ 200 & 0.0015-0.15 &-0.0055, 0.0034 & 0.20 $\times 10^{-16}$ \\ \hline 50 & 0.01-0.15&-0.0048, 0.0045 &0.19 $\times 10^{-16}$ \\ 100 & 0.01-0.15&-0.0042, 0.0038 & 0.16 $\times 10^{-16}$ \\ 200 & 0.01-0.15 &-0.0036, 0.0032 & 0.14 $\times 10^{-16}$ \\ \hline \end{tabular} \caption{Sensitivity of the process $pp\to p \tau^{+}\tau^{-}p$ to the tau anomalous magnetic moment $a_{\tau}$ and electric dipole moment $d_{\tau}$ at 95\% C.L. for $\sqrt{s}=14$ TeV, integrated luminosities $L_{int}=50,\,\,100,\,\,200$ $fb^{-1}$ and three intervals of forward detector acceptance $\xi$. Only one of the moments is assumed to deviate from zero at a time. The total systematic uncertainty used in the $\chi^{2}$ function has been taken as $\delta^{sys}=0.01$.
\label{tab5}}} \TABLE{ \begin{tabular}{|c|c|c|c|c|}\hline $L_{int}(fb^{-1})$ & $\xi$ &$\delta^{sys}$ & $a_{\tau}$ & $|d_{\tau}|$(e cm) \\ \hline 50 & 0.0015-0.5&0.02 &-0.0071, 0.0051 &0.28$\times 10^{-16}$ \\ 100 & 0.0015-0.5&0.02 &-0.0068, 0.0048 & 0.26$\times 10^{-16}$ \\ 200 & 0.0015-0.5 & 0.02& -0.0066, 0.0046 & 0.26$\times 10^{-16}$ \\ \hline 50 & 0.0015-0.15& 0.02 &-0.0073, 0.0051 &0.28 $\times 10^{-16}$ \\ 100 & 0.0015-0.15&0.02 &-0.0070, 0.0048 & 0.27 $\times 10^{-16}$ \\ 200 & 0.0015-0.15 &0.02 &-0.0067, 0.0048 & 0.27 $\times 10^{-16}$ \\ \hline 50 & 0.01-0.15&0.04 &-0.0054, 0.0050 &0.21 $\times 10^{-16}$ \\ 100 & 0.01-0.15&0.03 &-0.0046, 0.0042 & 0.18 $\times 10^{-16}$ \\ 200 & 0.01-0.15 &0.03 & -0.0043, 0.0038 & 0.17 $\times 10^{-16}$ \\ \hline \end{tabular} \caption{The same as Table \ref{tab5} but for the systematic uncertainties shown in the third column. \label{tab6}}} Table \ref{tab5} and Table \ref{tab6} show the constraints on the anomalous magnetic moment and electric dipole moment of the tau lepton obtained using different systematic uncertainties in the $\chi^{2}$ function, for comparison. The acceptance region $\xi=0.01-0.15$ appears more sensitive to the anomalous couplings. The limits improve on the DELPHI results by one order of magnitude, and the electric dipole moment limits are slightly better than those of BELLE. At this point a remark is in order. Experimentally, the anomalous magnetic and electric dipole moments can be extracted by comparing the measured cross section with QED expectations. At LEP \cite{delphi}, for example, fits to the measured cross section were performed taking $a_{\tau}$ and $d_{\tau}$ as parameters, based on the $\tau\tau\gamma$ vertex parametrization given by (1.12). Our predictions for the cross sections in the $\chi^{2}$ function, however, are theoretical; this distinction should be taken into account when comparing our limits with those of LEP.
The quadratic and quartic terms in $F_{3}$ are not CP violating, except for the term with the Levi-Civita tensor in the interference amplitude $A_{12}$; however, its contribution to the cross section is zero. That is why the magnitudes of the negative and positive parts of the limits on $d_{\tau}$ are the same. It may thus be possible to measure the tau anomalous moments once efficient tau identification is available. The tau is the heaviest charged lepton; it decays into lighter leptons (electrons and muons) and light hadrons such as $\pi$'s and $K$'s, with a lifetime of $3.0 \times 10^{-13}$ s. The primary decay channels with one charged particle (one-prong decays) are \begin{eqnarray} &&\tau \to \nu_{\tau}+\ell+\bar{\nu}_{\ell}, \,\,\,\,\ell=e, \mu \\ &&\tau \to \nu_{\tau}+ \pi^{\pm} \\ &&\tau \to \nu_{\tau}+ \pi^{\pm}+ \pi^{0} \\ &&\tau \to \nu_{\tau}+ \pi^{\pm}+ \pi^{0}+\pi^{0} \end{eqnarray} and with three charged particles (three-prong decays) \begin{eqnarray} \tau \to \nu_{\tau}+ 3\pi^{\pm}+n\pi^{0} \end{eqnarray} One-prong decays make up 85\% of all tau decays, and three-prong decays the remaining 15\%. The particles produced in tau decays are called tau jets, since the number of daughter particles is always greater than one. One-prong leptonic jets are identified by algorithms similar to those used for prompt electrons and muons. Identification of hadronic tau jets is more complicated than the leptonic modes because of the QCD jet background. However, tau jets are highly collimated and can be distinguished from the background by their topology. Dedicated algorithms for hadronic tau jets have been developed by the ATLAS \cite{atlas} and CMS \cite{cms} groups. These algorithms allow a good separation between tau jets and fake jets for some LHC processes. Nevertheless, the tau identification efficiency depends on the specific process, the background processes, some kinematic parameters and the luminosity.
Studies of tau identification have not yet been finalized for the LHC detectors. In any case, the identification efficiency can be determined as a function of transverse momentum and rapidity. In our study we have required $p_{T}>20$ GeV and $|\eta|<2.5$ for a good $\tau$ selection, as used in most ATLAS and CMS studies. A realistic efficiency would require a detailed study based on our specific process, including the properties of both the central and forward detectors of the ATLAS and CMS experiments. We expect highly efficient $\tau$ identification due to the clean final state of the process $\gamma\gamma \to \tau^{+}\tau^{-}$ compared to the LHC environment itself.
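The kinematic selection used throughout ($p_{T}>20$ GeV and $|\eta|<2.5$ for each final-state tau) can be expressed as a simple filter; only the two cuts are taken from the text, and the candidate values below are purely illustrative:

```python
# Each tau candidate is a (pT [GeV], eta) pair; both taus of a pair
# must pass the selection used throughout this work.
PT_MIN = 20.0   # GeV, transverse momentum cut
ETA_MAX = 2.5   # pseudorapidity acceptance

def passes_selection(pt, eta):
    """Single-tau selection: pT > 20 GeV and |eta| < 2.5."""
    return pt > PT_MIN and abs(eta) < ETA_MAX

def select_pairs(pairs):
    """Keep tau pairs in which both final-state taus pass the cuts."""
    return [p for p in pairs if all(passes_selection(pt, eta) for pt, eta in p)]

# Illustrative candidates (values are made up):
events = [
    ((35.0, 1.2), (28.0, -0.7)),   # both taus pass
    ((15.0, 0.3), (40.0, 1.9)),    # first tau fails the pT cut
    ((50.0, 2.8), (22.0, 0.1)),    # first tau outside |eta| < 2.5
]
print(len(select_pairs(events)))   # 1
```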
\section{Introduction} Data augmentation is a popular step in training deep learning models. It becomes necessary when there is insufficient training data for a particular task: more training data can be generated through synthetic modification of the available dataset. The advantage of doing this is that it makes the model robust to noise and invariant to certain properties, e.g., translation, illumination, size, rotation, etc. This is especially useful when working with satellite datasets, as they usually contain few images; training data are difficult or expensive to acquire, since manual data labeling can be very time consuming.\ Training a model on a very small dataset may lead to poor performance when it is applied to a new, never-seen-before dataset. Augmentation techniques such as random flipping, rotation, zooming and channel averaging have been successfully applied to RGB images; however, these techniques are of limited use when applied to satellite imagery and might not have a significant impact, as a result of the inherent uniformity of the images, as also reported by \cite{s21238083}. Despite the unique challenges associated with training deep learning models on such images, it is widely accepted that data augmentation can lead to significant improvements in the performance of these models by making them more robust to noise and less likely to overfit.\ In addition, it is general knowledge in machine learning and deep learning that having more data has the potential to increase model accuracy. Even in scenarios where a lot of data is available for training, data augmentation can still help to increase the amount of relevant data in the dataset. We therefore propose the use of a generative model based on Generative Adversarial Networks (GANs), which can produce realistic satellite images, to improve the performance of deep learning models on the task of Land Use and Land Cover classification. The rest of this paper is organized as follows.
The related works and similar literature are presented in Section 2. Section 3 describes the dataset used in this work. The baseline model selected and the techniques used to achieve the baseline are described in Section 4. The model we are proposing is presented in Section 5, while the experiments performed using GANs are presented in Section 6. Sections 7 and 8 present the results and discussion, respectively. \section{Related Works} \subsection{Augmentation using Geometric Techniques} Data augmentation increases the amount of data available to train a model. The primary advantage of doing this is that it makes the model more robust and less susceptible to overfitting \cite{Dave2020-zz}\cite{Ghaffar2019-nw}. In addition, it can improve model performance by mimicking images with different features \cite{Lei2019-er}. Different augmentation techniques have been shown to improve model performance differently, depending on the quantity and quality of these features.\ Small satellite image datasets have been used for training deep learning models in existing works. For instance, \cite{Wicaksono2019-fo} and \cite{Saralioglu2022-oe} used a single image for classification and semantic segmentation tasks, respectively. However, data augmentation has become more popular in recent times. Several relevant augmentation techniques were compared in \cite{Abdelhack2020ACO}; horizontal and vertical flipping had the highest accuracy among the techniques considered for classifying satellite images. Image zooming, or scale augmentation, was used by \cite{Lei2019-er}: the image is zoomed in or out depending on a rate magnitude. Rotation augmentation is another relevant technique, in which several copies of an image are produced by rotating it through different angles. Furthermore, the authors in \cite{Yu2017DeepLI} used flips, translation and rotation in remote sensing scene classification.
However, \cite{michannel2021} concluded that geometric transformations like rotation, zooming and translation are of limited use for medium- and low-resolution satellite data, as they do not provide enough variability.\ Existing literature in the domain of remote sensing has used multi-temporal satellite data for both classification and semantic segmentation. \cite{Persson2018-gv} shows that combining images of the same location from several years, taken by the same sensor, improves model performance. Using multi-temporal data, \cite{Skriver2012-nl} found the best date of observation based on the available data. The authors in \cite{8903738} combined images from different dates by taking a weighted average of the spectral values. \cite{Tompson2015-dc} proposed channel dropout as a means of reducing overfitting of a CNN model trained on RGB images; the technique sets the pixel values of a channel to zero with some predefined probability. Color jittering is another data augmentation technique that has been successfully applied to RGB images: the pixel values in each channel are multiplied by a random number within a fixed range. \cite{Taylor2017-nl} used this technique to augment an RGB image dataset.\ Recently, \cite{michannel2021} showed that a Mix Channel approach, where a channel of the original image is replaced with the same channel of another image of the same location taken on a different date, outperformed state-of-the-art models. The Mix Channel approach showed better generalizability than generic data augmentation techniques like color jittering and geometric transformations. Experiments in \cite{michannel2021} also showed that channel dropout had promising results: even though it did not outperform the Mix Channel approach, it outperformed the baseline model significantly.\ \subsection{Augmentation with GANs} Generative Adversarial Networks (GANs) have also been used for data augmentation on traditional RGB images and satellite images.
This is an unsupervised way of generating data \cite{gautam2020realistic}. A generative model of this kind comprises a generator and a discriminator, which compete as in a game: the former learns to generate realistic images and to trick the discriminator, which attempts to differentiate between real and synthesized images. Examples of GANs that have been applied to satellite images are DCGAN \cite{radford2015unsupervised}\cite{gautam2020realistic}, CycleGAN \cite{ren2020cycle} and SSSGAN \cite{rs13193984}. In \cite{gautam2020realistic}, the authors identified the challenge of the discriminator loss tending towards infinity while the generator loss immediately tends towards zero during training; high-resolution images were generated using a progressively growing GAN. \cite{Abady2020-ba} applied GANs to satellite images for both image generation and image style transfer, with visually promising results; however, no attempt was made to use the synthetically generated images for classification or segmentation tasks.\ Conditional GANs have also been applied to satellite images in \cite{kulkarni2020semantic}. The authors observed better performance for fully supervised training and achieved it with only a slight increase in the number of parameters. MARTA GAN was used in \cite{Lin2017MARTAGU}; compared to DCGAN, MARTA GAN can generate images at a higher resolution. \ \subsection{Land Use and Land Cover Classification (LULC) using EuroSAT data} Similar work has been done on improving image classification performance for LULC. In \cite{Xu2013RemoteSI}, principal component analysis was used to improve the task and to reduce the redundancy of the remote sensing images; it outperformed the maximum likelihood method. \cite{basu2015deepsat} proposed a classification framework that extracts features from a remote sensing input image, normalizes them, and feeds them to a Deep Belief Network for classification.
In \cite{liu2016active}, an active learning framework based on weighted incremental dictionary learning was proposed; compared with other active learning algorithms, their method was found to be more effective and efficient. Pixel- and object-based classification is investigated in \cite{doi:10.1080/22797254.2020.1790995}. In this project, we take this work a step further by investigating how GAN-generated images improve the LULC task. Two GAN models are considered, namely the Deep Convolutional Generative Adversarial Network (DCGAN) and the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP). Although DCGAN does not scale well to generating higher-resolution images, we found that it performs well on 64x64 images, as also confirmed by \cite{Gautam2020-yd}, and that it learns latent-space representations well. Since the original dataset contains 64x64 images and this work aims to add the generated images back into the training dataset to evaluate performance, we chose DCGAN, as it performs well at the image resolution we were working with. The use of ProGAN, a GAN architecture based on a progressively growing training approach that increases image resolution during training, was also considered; however, implementing ProGAN would have meant increasing the dimensions of all the images in the original dataset as well. Our choice of WGAN-GP was mainly motivated by addressing possible instability in the training process and reaching faster convergence; this was apparent in how much less time it took to generate the WGAN-GP images compared to the DCGAN images for the same number of epochs.
\section{Dataset} The dataset used for this task is the EuroSAT deep learning benchmark for LULC classification \cite{helber2019eurosat}. The EuroSAT dataset consists of images captured by the advanced-resolution cameras on the Sentinel-2 satellite (released in 2015). Sentinel-2 is a remote land monitoring system operated under the European Space Agency. It provides images with two different spectral resolutions, i.e., numbers of channels per image: Red-Green-Blue (3-channel RGB) and multispectral (13 channels). Sentinel-2 captures the global Earth land surface with a Multispectral Imager (MSI) that detects and monitors the physical characteristics of an area by measuring reflected and emitted radiation at a distance, with a revisit time of 5 days, and is expected to provide data for the next 20-30 years. \ The EuroSAT dataset is a novel dataset for remote sensing and for capturing land use changes. It consists of high-resolution satellite images of rural and urban scenes. The dataset is patch-based and provides macro-level details of the mapped area. It consists of 10 labeled LULC classes, namely Forest, Annual Crop, Highway, Herbaceous Vegetation, Pasture, Residential, River, Industrial, Permanent Crop, and Sea/Lake \cite{helber2019eurosat}. Its high-resolution satellite images therefore provide a clearer representation of objects. It contains 27000 labeled, geo-referenced images, each of 64x64 pixels with a spatial resolution of 10 m (the mission produces about 3.2 TB of data, roughly 1.6 TB of compressed images, per day), with about 2000-3000 images per class \cite{helber2019eurosat}.
\ Past works on LULC problems have utilized the multispectral version of the Sentinel-2 EuroSAT dataset for training, which showed stable learning and good performance in differentiating the classes, although training was very slow. The RGB version is widely used because of its fast training and acceptable accuracy, although it is also known for instability during learning \cite{isprs-archives-XLIII-B3-2021-369-2021}; it is the version used in this project. \cite{naushad2021deep} established a benchmark accuracy of 99.17\% on the RGB version. The dataset used for this project can be found online at \footnote{\url{https://www.kaggle.com/datasets/raoofnaushad/eurosat-sentinel2-dataset}}. The generated GAN images were renamed and added to this dataset folder, and this was used to perform an ablation study. The images from the Kaggle link are in jpg format, while the generated images are in png format. The images were grouped into folders by class for image generation; for model training, the label of each image was extracted from its filename, which contains the class to which the image belongs. \section{Baseline Selection} Two models are used in this project, namely VGG16 and Wide ResNet50. To ensure early convergence, pretrained weights are used. VGG16 and Wide ResNet50 are proposed in \cite{naushad2021deep}, after experiments with different architectures. In \cite{naushad2021deep}, four experiments are carried out using VGG16 and Wide ResNet50: LULC is performed on the EuroSAT dataset with and without augmentation for both models. The authors report that the Wide ResNet50 model with augmentation performed best. Early stopping was implemented both in the baseline model and in our experiments. The difference in the number of epochs reflects how quickly the model converges and stops improving. This was also confirmed when the baseline was re-established.
The results are presented in Table 1. \begin{table}[H] \caption{Baseline implementation} \smallskip \centering \begin{tabular}{c c c c c } \hline\hline Model & (b)Epochs & (b)Accuracy (\%) & (e)Epochs & (e)Accuracy (\%)\\ \hline VGG16 Accuracy without augmentation & 18 & 98.14 & 20 & 98.12 \\ VGG16 Accuracy with augmentation & 21 & 98.55 & 24 & 98.65 \\ Wide Resnet50 Accuracy without augmentation & 14 & 99.04 &24 & 98.81\\ Wide Resnet50 Accuracy with augmentation & 23 & 99.17 & 21 & 99.20 \\ \hline \end{tabular} \\ b = baseline , e = our experiment \label{table:nonlin} \end{table} \section{Model Description} \begin{figure}[H] \centering \includegraphics[width=14cm]{Images/a1.eps} \caption{Work-layout of the project} \end{figure} Figure 1 shows the different experiments carried out, with and without data augmentation. The augmentations used in this project are geometric transformations and DCGAN- and WGAN-GP-generated images. These augmentation techniques provide more training data so that the model does not overfit. It was observed that the training accuracy reaches 100\% when training the Wide ResNet50 model, while the validation accuracy was 98.81\%. \subsection{Mathematical Model} \subsubsection{GAN} The Generative Adversarial Network is modelled mathematically as a two-player game between the Generator (G) and the Discriminator (D): given a training set, the generator attempts to generate new data with statistics similar to those of the training data, while the discriminator tries not to be fooled, differentiating the fake data from the real data.
The min-max game equations are adapted from \cite{curto2017high} and entail the following objective function\ \begin{equation} \min\limits_{{(G)}}\max\limits_{{(D)}}V(D,G) = E_{x\sim P_{data}}[\log D(x)]+ E_{z\sim P_z} [\log(1-D(G(z)))] \end{equation} where $x$ is a ground truth image sampled from the true distribution $p_{data}$ and $z$ is a noise sample drawn from $p_{z}$ (a uniform or normal distribution). $G$ and $D$ are parametric functions, where $G: p_{z}\to p_{data}$ maps samples from the noise distribution $p_{z}$ to the data distribution $p_{data}$. The goal of the discriminator is to minimize \begin{equation} L^{(D)} = -\frac{1}{2}E_{x\sim p_{data}}[\log D{(x)}] - \frac{1}{2}E_{z\sim p_{z}}[\log (1-D(G(z)))] \end{equation} If we differentiate this w.r.t. $D(x)$ and set the derivative equal to zero, we obtain the optimal strategy \begin{equation} D(x)= \frac{p_{data} (x)}{p_{z}(x)+ p_{data}(x)} \end{equation} This can be understood as follows: the input is evaluated under the distribution of the data, $p_{data}(x)$, and under the generator distribution, $p_{z}$. Provided $D$ has enough capacity, it can achieve this optimum. Note that the discriminator does not have access to the data distribution; it is learned through training, and the same applies to the generator distribution. Provided $G$ has enough capacity, it will set $p_{z}= p_{data}$. The result is \begin{equation}D(x)= \frac{1}{2}\end{equation} which is the Nash equilibrium. The generator is then a perfect generative model, sampling from $p(x)$. \subsubsection{DCGAN} The DCGAN mathematical model is similar to that of the standard GAN. DCGAN differs in that it uses convolutional layers to improve stability during training. Also, in place of fully connected layers, the generator upsamples using transposed convolution layers.
The rectified linear unit (ReLU) is used in all generator layers except the output layer, which uses a hyperbolic tangent (Tanh). The discriminator downsamples using strided convolutional layers instead of max pooling, and Leaky ReLU is used in all its layers. Furthermore, batch normalization is used in both the generator and the discriminator. The generator and discriminator loss functions are given by \begin{equation} L_G^{(DCGAN)} = E_{z \sim{P_{z}(z)}}[\log D(G(z))] \end{equation} \begin{equation} L_D^{(DCGAN)} = E_{x\sim P_{data}(x)}[\log D(x)]+ E_{z\sim P_{z}(z)}[\log(1- D(G(z)))] \end{equation} \subsubsection{WGAN-GP} Wasserstein GAN tackles the problem of GAN loss functions being sensitive to hyperparameter choice and random initialization. It minimizes an approximation of the Wasserstein-1 distance (the earth mover's distance) between the distribution of real images and the distribution of images generated by the GAN. The discriminator must be a K-Lipschitz function, meaning that its first derivative is bounded everywhere by a constant. Wasserstein GAN with gradient penalty (WGAN-GP) is an update of the originally introduced Wasserstein GAN. WGAN-GP introduces a gradient-penalty approach, which solves the vanishing and exploding gradient problem of the early WGAN, a problem caused by the weight clipping it used.
The generator and discriminator loss functions are: \begin{equation} L_G^{(WGAN\_GP)} = - E_{z \sim P_{(z)}}[ D(G(z))] \end{equation} \begin{equation} L_D^{(WGAN\_GP)} = E_{x\sim P_{data}(x),\,z\sim P_{z}(z)}\left[D(\widetilde{x})-D(x)+ \lambda \left(\parallel \nabla_{\hat{x}} D(\hat{x}) \parallel_{2}-1\right)^{2}\right] \end{equation} where $\epsilon \sim U[0,1]$, $\widetilde{x} = G(z)$ and $\hat{x} = \epsilon x + (1- \epsilon) \widetilde{x}$. The gradient penalty serves as an efficient weight constraint that enhances the stability of WGAN over weight clipping \cite{gulrajani2017improved}. It is the second term of equation 8, a differentiable function that enforces the Lipschitz constraint by penalizing the gradient norm of the critic's output for random samples \cite{gulrajani2017improved}. The properties of the optimal WGAN critic are also given in \cite{gulrajani2017improved}. \subsection{VGG16 and ResNet50} \begin{figure}[H] \centering \includegraphics[width=14cm, height= 8cm]{Images/vgg16resnet50.eps} \caption{ Model architectures (a) Modified VGG16 architecture with training and freezing layers, and (b) Wide ResNet-50 architecture with training and freezing layers \cite{naushad2021deep}} \end{figure} VGG16 is a deep convolutional network proposed by K. Simonyan and A. Zisserman of the University of Oxford. The model achieved 92.7\% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes. Each convolutional block comprises multiple convolutional layers; the lower layers learn low-level features while the deeper layers learn high-level features. The ResNet architecture adapts convolutional neural networks to tackle the vanishing/exploding gradient problem. In this technique, the model uses skip connections; the advantage is that any layer that hurts the architecture's performance can be skipped. The ResNet architecture performs considerably well on image classification tasks.
There are different variants of ResNet; for this project, the Wide ResNet-50 is used. In both cases, pretrained weights are used, and the pretrained model is modified by adding fully connected and dropout layers, along with ReLU and log-softmax activation functions. Both models expect normalized images in mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be 224. Some techniques are also used to improve the models' computation time and performance: early stopping, data augmentation and adaptive learning rates. Early stopping terminates training at an arbitrary number of epochs when the model's performance stops improving. The early stopping criterion used in this project terminates training as soon as the generalization loss exceeds a certain threshold \cite{Prechelt1998-ts}. We split the EuroSAT dataset into training and validation sets in a 3:1 (75/25) proportion and trained each model so that training stops when performance stops improving; the threshold on the number of consecutive epochs without improvement is set to 3, meaning that if training does not improve for 3 consecutive epochs, it terminates. Early stopping has generally been shown to improve the performance of deep networks: gradient descent with early stopping is provably robust for deep neural networks, adversarial networks included, which validates the empirical robustness of deep networks as well as widely adopted heuristics to eliminate overfitting \cite{arxiv.2007.10099}.\\ In addition, we implemented an adaptive learning rate method to enhance model performance. This technique is based on the momentum method with an added adaptive property.
The adaptive property adjusts the step size according to the magnitude of the cost function: it retains the strength of the momentum method while adding adaptivity, so that the step size shrinks as the optimizer approaches the minimum of the cost function. In other words, the learning rate is slowly reduced as training approaches convergence \cite{app11020850}. To reiterate, the augmentations used are geometric augmentation, DCGAN and WGAN-GP. \subsection{Geometric Augmentation} The geometric augmentation methods used are random horizontal and vertical flips and random rotation. The authors of the baseline did not state their reasons for choosing these augmentation methods over alternatives such as color jittering or random cropping; however, to allow a fair comparison with the baseline, the same geometric augmentation methods are retained. \subsection{Evaluation Metric} The task is classification, and the metric used is accuracy, a widely used metric for classification tasks. It is the same metric used in the baseline, which makes it possible to compare our experiments with the baseline. The model predicts which class an image belongs to; accuracy is the number of correctly predicted images divided by the total number of predictions. \begin{equation} Accuracy = \frac{\tilde{y}}{y} \end{equation} where $\tilde{y}$ is the number of correct predictions and $y$ is the total number of predictions. For GAN models, the Frechet Inception Distance is widely used to assess the quality of generated images; however, we opted not to use it here, as we think the effect of the generated images on the classification model's performance is a better test of the images generated for this particular task.
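The early-stopping rule described above (training terminates after 3 consecutive epochs without improvement) can be sketched as follows. The class name and the choice of validation accuracy as the monitored metric are illustrative; only the patience of 3 is taken from the text:

```python
class EarlyStopping:
    """Stop training when the monitored metric has not improved
    for `patience` consecutive epochs (patience = 3 in this work)."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_accuracy):
        """Record one epoch's metric; return True when training should stop."""
        if val_accuracy > self.best:
            self.best = val_accuracy
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Illustrative validation accuracies per epoch:
stopper = EarlyStopping(patience=3)
history = [0.90, 0.95, 0.96, 0.96, 0.955, 0.958, 0.957]
for epoch, acc in enumerate(history):
    if stopper.step(acc):
        print(f"stopped after epoch {epoch}")   # stops at epoch 5
        break
```

Here the best accuracy (0.96) is reached at epoch 2; epochs 3, 4 and 5 bring no improvement, so training terminates at epoch 5.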
\section{Experiments} To investigate whether our GAN-generated satellite images can improve the generalizability of a classification model for the LULC task, we designed two image generation experiments using a Deep Convolutional GAN (DCGAN) and a Wasserstein GAN with gradient penalty (WGAN-GP). The DCGAN training was done on an NVIDIA Tesla P100 GPU available on Kaggle. The WGAN-GP training was done on an AWS G5 2xlarge instance. Our code is available at: \footnote{\url{https://github.com/Oarowolo11/11785-Project}} \subsection{DCGAN Training} The dataset consists of 27,000 satellite images in 10 classes; each image is a 64x64 RGB image, so all images in the dataset are already of the same size. We normalize the images so that pixel values are mapped to the range (-1, 1), because normalized images have been shown to improve GAN training \cite{kurach2019large}. Following the DCGAN paper \cite{radford2015unsupervised}, we initialized the weights of convolutional layers to have zero mean and a standard deviation of 0.02, while BatchNorm layers were initialized from a normal distribution with mean 1.0 and standard deviation 0.02. The generator is a deep convolutional network with 5 blocks, where each block consists of a transposed convolution, batch norm and an activation layer. A tanh activation is used in the final layer because the images have been normalized to lie between -1 and 1. The generator receives an input noise vector of 100 dimensions to generate a 64x64 RGB image. The discriminator is also a deep convolutional network with 5 blocks. It performs binary classification of an input image into a fake or real class. We used binary cross entropy as the loss function for both networks. The learning rate was 0.0002 and the Adam optimizer was used for both networks, with beta coefficients of 0.5 and 0.999. We trained the model to generate images for each class separately. Each class in the training data had between 2000 and 3000 images.
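The weight initialization described above (convolution weights from $N(0,\,0.02)$, BatchNorm scales from $N(1.0,\,0.02)$) can be sketched numerically with NumPy. The layer shape matches the first generator block in Table 2 (a 100-dimensional noise input mapped to 512 feature maps with 4x4 kernels, i.e., 819,200 parameters), but the helper names and the zero bias for BatchNorm are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_conv(shape, rng):
    """DCGAN-style convolution init: zero mean, standard deviation 0.02."""
    return rng.normal(0.0, 0.02, size=shape)

def init_batchnorm(num_features, rng):
    """DCGAN-style BatchNorm init: scale ~ N(1.0, 0.02), bias 0 (assumed)."""
    gamma = rng.normal(1.0, 0.02, size=num_features)
    beta = np.zeros(num_features)
    return gamma, beta

# First transposed-convolution layer of the generator:
# 100-dim noise -> 512 feature maps with 4x4 kernels (819,200 weights).
w = init_conv((100, 512, 4, 4), rng)
gamma, beta = init_batchnorm(512, rng)

print(round(float(w.std()), 3))      # ~0.02
print(round(float(gamma.mean()), 2)) # ~1.0
```

In a PyTorch implementation the same rule is usually applied with an init function passed to `Module.apply`, but the statistics are the ones checked here.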
We trained each GAN model for 300 epochs. 256 images were generated for each class, making a total of 2,560; the generated images constituted about 10\% of the original training dataset. We visually inspected the quality of the generated images and observed the effect of the batch size on image quality: batch sizes greater than 16 produced images of visibly worse quality. This may be because exposing the discriminator to many images per batch allows it to overpower the generator, leading to poorer generated images. Sample DCGAN-generated images for each class are presented in the figures below: \begin{figure}[ht] \centering \includegraphics[width=14cm]{Images/MuktiClass2dc.PNG.eps} \includegraphics[width=14cm]{Images/MultiClass2.PNG.eps} \caption{Sample DCGAN Generated Images} \end{figure} Tables 2 and 3 show the model structure for the generator and discriminator. \begin{table}[H] \caption{DCGAN model structure-Generator} \smallskip \centering \begin{tabular}{c c c} \hline\hline Layer(type)& Output Shape & Param \# \\ \hline ConvTranspose2d-1 & [-1,512,4,4] & 819,200 \\ BatchNorm2d-2& [-1,512,4,4] & 1,024 \\ ReLU-3 & [-1,512,4,4] & 0 \\ ConvTranspose2d-4 & [-1,256,8,8]& 2,097,152\\ BatchNorm2d-5& [-1,256,8,8] & 512\\ ReLU-6 & [-1,256,8,8] & 0 \\ ConvTranspose2d-7 & [-1,128,16,16] & 524,288 \\ BatchNorm2d-8& [-1,128,16,16] & 256\\ ReLU-9 & [-1,128,16,16] & 0 \\ ConvTranspose2d-10 & [-1,64,32,32]& 131,072 \\ BatchNorm2d-11& [-1,64,32,32] & 128\\ ReLU-12 & [-1,64,32,32] & 0 \\ ConvTranspose2d-13 & [-1,3,64,64]& 3,072\\ Tanh-14 & [-1,3,64,64]& 0\\ \hline \end{tabular} \label{table:nonlin} \end{table} \begin{table}[H] \caption{DCGAN model structure-Discriminator} \smallskip \centering \begin{tabular}{c c c} \hline\hline Layer(type)& Output Shape & Param \# \\ \hline Conv2d-1 & [-1,64,32,32] & 3,072 \\ LeakyReLU-2 & [-1,64,32,32] & 0 \\ Conv2d-3 & [-1,128,16,16]& 131,072\\ BatchNorm2d-4& [-1,128,16,16] &
256 \\ LeakyReLU-5 & [-1,128,16,16] & 0 \\ Conv2d-6 & [-1,256,8,8] & 524,288 \\ BatchNorm2d-7& [-1,256,8,8] & 512\\ LeakyReLU-8 & [-1,256,8,8] & 0 \\ Conv2d-9 & [-1,512,4,4]& 2,097,152 \\ BatchNorm2d-10& [-1,512,4,4] & 1,024\\ LeakyReLU-11 & [-1,512,4,4] & 0 \\ Conv2d-12 & [-1,1,1,1]& 8,192\\ Sigmoid-13 & [-1,1,1,1]& 0\\ Flatten-14 & [-1,1]& 0\\ \hline \end{tabular} \label{table:nonlin} \end{table} \subsection{WGAN-GP Training} We trained a WGAN-GP to generate 256 images for each class. According to \cite{gulrajani2017improved}, this method does not necessarily generate better images than the DCGAN approach, but it does provide an advantage in the form of better training stability. We used a generator architecture similar to the one used in DCGAN, and we kept the learning rate and optimizer the same for comparison with DCGAN. We used the code provided by \cite{radford2015unsupervised} to compare against their results. The results obtained are presented in Figure 4, and the architectures for the generator and critic are given in Tables 4 and 5.
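The gradient penalty at the heart of WGAN-GP can be illustrated with a toy example: sample points on straight lines between real and fake batches, then penalize the critic's gradient norm for deviating from 1. The linear "critic" below is purely didactic (its gradient is analytic and constant), not the network trained in our experiments.

```python
import numpy as np

# Toy illustration of the WGAN-GP gradient penalty idea: sample points
# on straight lines between real and fake samples, then penalize the
# critic's gradient norm for deviating from 1. The linear "critic"
# f(x) = w . x is didactic only -- its gradient is w everywhere.

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 2))              # a "real" batch
fake = rng.normal(size=(4, 2))              # a "generated" batch
eps = rng.uniform(size=(4, 1))
interp = eps * real + (1 - eps) * fake      # points between real and fake

def gradient_penalty(w, n_points):
    """Mean squared deviation of the critic's gradient norm from 1."""
    grad = np.tile(w, (n_points, 1))        # df/dx at each interpolate
    grad_norm = np.linalg.norm(grad, axis=1)
    return np.mean((grad_norm - 1.0) ** 2)

print(gradient_penalty(np.array([0.0, 1.0]), 4))  # 0.0: gradient norm is 1
print(gradient_penalty(np.array([0.0, 2.0]), 4))  # 1.0: norm 2 is penalized
```

In the real algorithm the critic is a neural network and the gradient at each interpolated point is obtained by backpropagation; the penalty term is then added to the critic loss with a weight (10 in \cite{gulrajani2017improved}).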
\begin{figure}[H] \centering \includegraphics[width=14cm]{Images/MultiClassP3.PNG.eps} \includegraphics[width=14cm]{Images/MultiClassP4.PNG.eps} \caption{Sample of WGAN-GP Generated Images} \end{figure} \begin{table}[H] \caption{WGAN-GP model structure-Generator} \smallskip \centering \begin{tabular}{c c c} \hline\hline Layer(type)& Output Shape & Param \# \\ \hline dense (Dense) & (None, 32768) & 4,227,072 \\ ReLU (ReLU) & (None, 32768) & 0 \\ reshape (Reshape) & (None, 8, 8, 512) & 0 \\ Conv2DTranspose & (None, 16,16,256) & 2,097,152\\ BatchNorm& (None, 16,16,256) & 1,024\\ ReLU & (None, 16,16,256) & 0 \\ Conv2DTranspose & (None, 32, 32, 128) & 524,288 \\ BatchNorm & (None, 32, 32, 128) & 512\\ ReLU & (None, 32, 32, 128) & 0 \\ Conv2DTranspose & (None, 64, 64, 4)& 8,192 \\ BatchNorm & (None, 64, 64, 4) & 16\\ ReLU & (None, 64, 64, 4) & 0 \\ Conv2DTranspose & (None, 64, 64, 3)& 195\\ \hline \end{tabular} \label{table:nonlin} \end{table} \begin{table}[H] \caption{WGAN-GP model structure-Critic} \smallskip \centering \begin{tabular}{c c c} \hline\hline Layer(type)& Output Shape & Param \# \\ \hline Conv2D & (None, 32, 32, 64) & 3,136 \\ LeakyReLU & (None, 32, 32, 64) & 0 \\ Conv2D & (None, 16, 16, 128) & 131,200 \\ LeakyReLU & (None, 16,16,128) & 0\\ Conv2D & (None, 8,8,128) & 262,272\\ LeakyReLU & (None, 8,8,128) & 0 \\ Flatten & (None, 8192) & 0 \\ dropout (Dropout) & (None, 8192) & 0\\ Dense & (None, 1) & 8,192 \\ \hline \end{tabular} \label{table:nonlin} \end{table} \section{Results} \cite{naushad2021deep} trained four models using VGG16 and ResNet50 as pretrained base models, and four experiments were performed. First, each of the pretrained models was used for classification without geometric augmentation; in the second round of experiments, random horizontal flips, random vertical flips, and random rotations were applied as geometric augmentations. We then repeated these experiments with the GAN-generated images added to the original images.
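The experimental setup above, where the GAN-generated images (256 per class, about 10\% of the data) are appended to the original training set, can be sketched as follows. The helper and the uniform 2,700-images-per-class split are illustrative assumptions, not the paper's actual file layout.

```python
# Sketch of the setup described above: GAN-generated images (up to 256
# per class) are appended to the original training set before the
# classifier is trained. The helper name and the uniform per-class
# split are illustrative assumptions, not the paper's actual code.

def build_training_set(original, generated, per_class=256):
    """Concatenate original samples with up to `per_class` generated
    samples per class; each sample is an (image, class_label) pair."""
    by_class = {}
    for img, label in generated:
        by_class.setdefault(label, []).append((img, label))
    extra = []
    for label, items in by_class.items():
        extra.extend(items[:per_class])
    return list(original) + extra

# 10 classes, 2,700 original images each (27,000 total), 256 GAN images each:
original = [("real_img", c) for c in range(10) for _ in range(2700)]
generated = [("gan_img", c) for c in range(10) for _ in range(256)]
train = build_training_set(original, generated)
print(len(train))  # 27000 + 2560 = 29560
```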
The results obtained are presented in Table 6. The different experiments train for different numbers of epochs because of the early stopping applied. \begin{table}[h] \caption{Experiment results} \centering \begin{tabular}{lcccccc} \toprule \multirow{2}{*}[-0.5\dimexpr \aboverulesep + \belowrulesep + \cmidrulewidth]{Model} & \multicolumn{2}{c}{Baseline} & \multicolumn{2}{c}{DCGAN} & \multicolumn{2}{c}{WGAN-GP} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-7} & Epochs & Accuracy & Epochs & Accuracy & Epochs & Accuracy \\ \midrule VGG16 without augmentation & 18 & 98.14 & 14 & 98.17 & 15 & 98.2 \\ VGG16 with augmentation & 21 & 98.55 & 25 & 98.52 & 25 & 98.38\\ ResNet50 without augmentation & 14 & 99.04 & 21 & 98.81 & 18 & 98.88\\ ResNet50 with augmentation & 23 & 99.17 & 21 & 99.15 & 25 & 99.12\\ \bottomrule \end{tabular} \end{table} Some of the generated images are presented in Figures 3 and 4. \section{Discussion} The results show that the GAN-augmented datasets achieved performance comparable to the original dataset. The type of GAN architecture, however, seems to have no obvious effect on model performance. This may be attributed to the fact that the number of images added to each dataset is only about 10\%. While there is no significant difference in model accuracy with the GAN-generated images, the important point is that model performance did not worsen, which shows that the GAN images in both cases are of comparable quality to the actual dataset. GAN images could, however, show a significant improvement in model performance for a smaller dataset: smaller datasets are more sensitive to an increase in their size, and a classification model trained on a smaller dataset may see the biggest positive impact from the use of GAN-generated images. Also, geometric augmentation has a positive effect on model accuracy in every experiment.
A combination of geometric augmentation and GAN-generated images could be useful where data is limited, as GAN images could reduce the tendency of the model to overfit when geometric augmentation alone is used. Furthermore, the already high accuracy of the baseline models makes it especially difficult to improve on these results. However, our work shows that augmenting the base dataset with GAN-generated images does not worsen the performance of the classification models; hence, our generated images are of sufficient quality for the task. This can improve the generalizability of the classification models and will prove especially useful when limited training data is available. \section{Conclusion} GAN-generated satellite images can improve the generalizability of deep classification models. We used two GAN architectures, DCGAN and WGAN-GP, to generate artificial satellite images. The type of architecture used had no apparent effect on model performance. The main advantage of WGAN-GP reported in the literature is training stability when training deeper residual networks; in this case, it offered no advantage over DCGAN. Geometric augmentation can be used in combination with GAN augmentation for improved model performance. For future research, the effect of GAN-generated images on a smaller dataset can be investigated. The effect of GAN augmentation on image datasets with severe class imbalance can also be explored. Finally, determining the effect of GAN-generated images on other tasks, such as semantic segmentation and built-structure counting, can also be considered. \section{Division of work} Members of the group contributed to different aspects of the project, both technical and non-technical. The search for the baseline was done by every member of the group, and every member contributed to report writing and editing. Specifically, Olayiwola Arowolo contributed to writing the literature review, experiments, results, and discussion.
He also actively participated in generating images using DCGAN and in setting up the environment with the dataset for image generation. Olayiwola also managed the Git repository for the project. Peter Owoade researched GANs and probable types to consider, and researched the implementation of different GANs. He actively participated in generating images using WGAN-GP and contributed to reestablishing the benchmark model. He was in charge of correspondence with the project mentor, and he made the presentation slides for the project review. Opeyemi Ajayi researched GANs and probable types, and actively contributed to writing the dataset, introduction, and mathematical model sections of the project. She also generated some images using DCGAN and contributed to the reestablishment of the baseline model. She set up the work environment which was used for training, and was also in charge of setting up meeting links and reminders to keep the team on track. Oluwadara Adedeji was in charge of writing the manuscript in LaTeX. He also contributed to the literature review, baseline, and mathematical model sections of the report. He participated in image generation using WGAN-GP and the reestablishment of the baseline model. He contributed to designing the workflow and the layout of images and tables in the report. He was also in charge of correspondence with the TA mentor. \section{Acknowledgement} We would like to thank Jacob Lee (project mentor) and Iffanice Houndayi (TA mentor) for providing valuable advice during this project. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Brydges and Imbrie \cite{bi}, and Kenyon and Winkler \cite{kw} study the space of branched polymers of order $n$ in $\mathbb{R}^2$ and $\mathbb{R}^3$. In their definition, a {\bf branched polymer} of order $n$ in $\mathbb{R}^D$ is a connected set of $n$ labeled unit spheres in $\mathbb{R}^D$ with nonoverlapping interiors. To each polymer $P$ corresponds a graph $G_P$ on the vertex set $[n]$ where $(i, j) \in E(G_P)$ if and only if spheres $i$ and $j$ touch in the polymer. The space of branched polymers of order $n$ in $\mathbb{R}^D$ can be parametrized by the $D$-dimensional angles at which the spheres are attached to each other. If $G_P$ has a cycle, this parametrization becomes ambiguous. However, since polymers containing a cycle of touching spheres have probability zero, this ambiguity is not of concern. Brydges and Imbrie \cite{bi}, and Kenyon and Winkler \cite{kw} compute the volume of the space of branched polymers in $\mathbb{R}^2$ and $\mathbb{R}^3$, which are $(n-1)! (2 \pi)^{n-1}$ and $n^{n-1} (2 \pi)^{n-1}$, respectively. They show that if we relax the requirement that the spheres in the polymer have radii $1$, the volume of the space of branched polymers in $\mathbb{R}^2$ remains the same, while the volume of the space of branched polymers in $\mathbb{R}^3$ changes. Intrigued by the robustness of the space of branched polymers in $\mathbb{R}^2$ under the change of radii, we generalize this notion differently from how it is done in \cite{bi, kw}. Under our notion of polymers, the volume of the space of polymers is independent of the radii in all cases. We associate the space of branched polymers to any hyperplane arrangement ${\mathcal{A}}$. The polymers corresponding to the braid arrangement $\mathcal{B}_n$ coincide with the definition of branched polymers in $\mathbb{R}^2$ given above. We also broaden the notion of branched polymers by not requiring polymers to be connected.
We define the volume of the space of such polymers and show that it is invariant under the change of radii. In the case of the braid arrangement, the volume of the space of connected branched polymers associated to $\mathcal{B}_n$ is $(n-1)! (2 \pi)^{n-1}=(-2 \pi)^{r(\mathcal{B}_n)} {\chi}_{\mathcal{B}_n}(0)$, where $r(\mathcal{B}_n)$ is the rank of $\mathcal{B}_n$ and $\chb(t)$ is the characteristic polynomial of $\mathcal{B}_n$. In our generalized notion of volume, weighted with $q$, the volume of the space of branched polymers associated to $\mathcal{B}_n$ is $(-2 \pi)^{r(\mathcal{B}_n)} {\chi}_{\mathcal{B}_n}(-q)$. The volume of the space of connected branched polymers associated to $\mathcal{B}_n$ is a specialization of this $q$-volume at $q=0$. A theorem of the same flavor holds for any of our polymers. \vspace{.05in} \noindent {\bf Theorem.} {\it The $q$-volume of the space of branched polymers associated to a central hyperplane arrangement ${\mathcal{A}}$ is $(-2 \pi)^{r({\mathcal{A}})}{\chi}_{{\mathcal{A}}}(-q)$. Furthermore, the volume of the space of connected branched polymers associated to ${\mathcal{A}}$ is a specialization of its $q$-volume at $q=0$. } \vspace{.05in} In the case that ${\mathcal{A}}$ is a graphical arrangement, we recover the $G$-polymers of \cite{kw}, and the Theorem can be rephrased in terms of the chromatic polynomial of the graph $G$. We also relate the volume of the space of branched polymers to broken circuits and the Orlik-Solomon algebra. Finally, we prove that the cohomology ring of the space of branched polymers is isomorphic to the Orlik-Solomon algebra, and conjecture the same for the space of connected branched polymers. The outline of the paper is as follows. In Section \ref{sec:braid} we explain how the notion of branched polymers in $\mathbb{R}^2$ from \cite{bi, kw} translates to polymers associated to the braid arrangement $\mathcal{B}_n$.
In Section \ref{sec:arr} we give a general definition of connected branched polymers, as well as branched polymers (where we do not require connectivity) associated to any arrangement ${\mathcal{A}}$. We define the notion of $q$-volume of the space of branched polymers and restate the Theorem about the value of the $q$-volume of the space of branched polymers. In Section \ref{sec:proof} we prove the Theorem. In Section \ref{sec:gpoly} we recover the $G$-polymers of \cite{kw} from the graphical arrangement ${\mathcal{A}}_G$. We relate the volumes of branched polymers to broken circuits in Section \ref{sec:nbc}. We conclude in Section \ref{sec:os} by proving that the cohomology ring of the space of branched polymers is the Orlik-Solomon algebra, and we conjecture that the cohomology ring of the space of connected branched polymers is the same. \section{Branched polymers and the braid arrangement} \label{sec:braid} In this section we explain how to think of the space of connected branched polymers and its volume defined in \cite{bi, kw} in terms of the braid arrangement $\mathcal{B}_n$. We also give a more general definition of the space of branched polymers associated to $\mathcal{B}_n$ equipped with the notion of $q$-volume. For $q=0$ we recover the volume of the space of connected branched polymers in the sense of \cite{bi, kw}. \footnote{Note that our terminology differs slightly from that of \cite{bi, kw}. The notion of ``branched polymer" in \cite{bi, kw} corresponds to our ``connected branched polymers."} Let $V:=\mathbb{C}^n / (1, \ldots, 1) \mathbb{C} \cong \mathbb{C}^{n-1}$. A point $\mbox{\bf{x}} \in V$ has coordinates $(x_1, \ldots, x_n)$ considered modulo simultaneous shifts of all $x_i$'s. 
The {\bf braid arrangement} $\mathcal{B}_n$ in $V$, $n >1$, consists of the ${n \choose 2}$ hyperplanes given by the equations $$h_{ij}(\mbox{\bf{x}})=x_i-x_j=0, \mbox{ for } 1\leq i<j\leq n, \mbox{ where } {\bf x} \in V.$$ Let $$H_{ij}=\{\mbox{\bf{x}} \in V \mid h_{ij}(\mbox{\bf{x}})=0\}.$$ Define the {\bf space of branched polymers} associated to the arrangement $\mathcal{B}_n$ and nonnegative scalars $R_{ij}$, $1\leq i<j\leq n$ to be $$P_{\mathcal{B}_n}=\{ \mbox{\bf{x}} \in V \mid |h_{ij}(\mbox{\bf{x}})|\geq R_{ij} \mbox{ for all } H_{ij} \in \mathcal{B}_n \}.$$ A connected branched polymer of size $n$ is a connected collection of $n$ labeled disks in $\mathbb{R}^2$ with nonoverlapping interiors (considered up to translation). Think of the collection of $n$ disks in $\mathbb{R}^2=\mathbb{C}$ as a point $\mbox{\bf{x}} \in V,$ where $x_k$ is the center of the $k^{th}$ disk. Denote by $r_k$ the radius of the $k^{th}$ disk and let $R_{ij}=r_i+r_j$. The condition that the disks do not overlap can be written as $|x_i-x_j|\geq R_{ij}$. Disks $i$ and $j$ touch exactly if $|x_i-x_j|= R_{ij}$. Thus the space $P_{\mathcal{B}_n}$ consists of points corresponding to branched polymers, which are not necessarily connected. \begin{figure}[htbp] \begin{center} \includegraphics[scale=.75]{conn.png} \caption{A connected branched polymer.} \label{fig0} \end{center} \end{figure} It is clear from the definition of $P_{\mathcal{B}_n}$ that it decomposes as the disjoint union $$P_{\mathcal{B}_n}=\bigsqcup_{S \subset \mathcal{B}_n} P_{\mathcal{B}_n}^S,$$ where $ P_{\mathcal{B}_n}^S$ consists of all points $\mbox{\bf{x}} \in P_{\mathcal{B}_n} $ such that $|x_i-x_j|= R_{ij}$ exactly if $H_{ij} \in S$. 
That is, $$ P_{\mathcal{B}_n}^S=\{\mbox{\bf{x}} \in \mathbb{C}^n \mid |h_{ij}(\mbox{\bf{x}})|> R_{ij}\mbox{ for } H_{ij}\in \mathcal{B}_n \backslash S, \mbox{ }|h_{ij}(\mbox{\bf{x}})|=R_{ij}\mbox{ for } H_{ij} \in S \}.$$ We can consider $\mathcal{B}_n$ as a matroid with ground set the set of hyperplanes in $\mathcal{B}_n$ and independence given by linear independence of the normals to the hyperplanes. Connected branched polymers, in the sense of \cite{bi, kw}, correspond to points in the strata $ P_{\mathcal{B}_n}^S$ of $P_{\mathcal{B}_n}$ such that $S$ contains a base of ${\mathcal{B}_n}$. Otherwise, the configuration of disks would not be connected. Thus, the {\bf space of connected branched polymers} corresponding to $\mathcal{B}_n$ is $$CP_{\mathcal{B}_n}=\bigsqcup_{S \subset \mathcal{B}_n \mbox{ } : \mbox{ } r(S)=r({\mathcal{B}_n})} P_{\mathcal{B}_n}^S,$$ where $r(S)$ denotes the rank of the arrangement $S$, which is the dimension of the space which the normals to the hyperplanes in $S$ span. We now define the notion of the volume of the space $CP_{\mathcal{B}_n}$, which coincides with the definition given in \cite{bi, kw}. Let $S=\{h_{i_1 j_1}, \ldots, h_{i_{n-1} j_{n-1}}\}$ be a base of $\mathcal{B}_n$. Let us embed $P_{\mathcal{B}_n}^S$ into $[0, 2\pi]^{n-1}$ by $$\mbox{\bf{x}} \mapsto \varphi(\mbox{\bf{x}})= (\varphi_1, \ldots, \varphi_{n-1}),$$ where $h_{i_k j_k}(\mbox{\bf{x}})=e^{i \varphi_{k}} R_{i_k j_k}, k \in [n-1] $. Define the volume of $P_{\mathcal{B}_n}^S$ as the volume of its image in $[0, 2\pi]^{n-1}$: $$\vol P_{\mathcal{B}_n}^S=\vol (\{ \varphi(\mbox{\bf{x}}) \mid \mbox{\bf{x}} \in P_{\mathcal{B}_n}^S\}).$$ If $S$ is a dependent set in $\mathcal{B}_n$, then $P_{\mathcal{B}_n}^S$ has a lower dimension (${\rm dim} \mbox{ } P_{\mathcal{B}_n}^S<n-1$), so we let $\vol P_{\mathcal{B}_n}^S=0$. 
Let $$\vol CP_{\mathcal{B}_n}=\sum_{S \mbox{ } : \mbox{ } r(S)=r({\mathcal{B}_n})}\vol P_{\mathcal{B}_n}^S=\sum_{S \mbox{ } : \mbox{ } S \in \mathcal{B}({\mathcal{B}_n})} \vol P_{\mathcal{B}_n}^S,$$ where $\mathcal{B}(\mathcal{M})$ denotes the set of bases of a matroid $\mathcal{M}$. Recall that the characteristic polynomial of $\mathcal{B}_n$ is $\chb(t)=(t-1)\cdots (t-(n-1))$ \cite[p. 414]{s}. By \cite[Theorem 2]{kw}, $\vol CP_{\mathcal{B}_n}=(n-1)! (2 \pi)^{n-1}$. Observe that it equals \begin{equation} \label{vol} \vol CP_{\mathcal{B}_n} =(-2 \pi)^{r(\mathcal{B}_n)} \chb(0),\end{equation} where $r(\mathcal{B}_n)=n-1$ is the rank of $\mathcal{B}_n$. Equation (\ref{vol}) is a special case of Theorem \ref{01} which we state and prove in the following sections. \section{Polymers associated to a hyperplane arrangement} \label{sec:arr} In this section we associate branched polymers to any central hyperplane arrangement. We calculate the volume of the space of connected branched polymers as well as the $q$-volume of the space of all branched polymers. Let $\{h_1, \ldots, h_N\} \subset (\mathbb{C}^r)^*$, and assume that $h_1, \ldots, h_N$ span $(\mathbb{C}^r)^*$. Let ${\mathcal{A}}=\{H_1, \ldots, H_N\} $ be the hyperplane arrangement where the hyperplanes are $$H_i=\{ \mbox{\bf{x}} \in \mathbb{C}^r \mid h_{i}(\mbox{\bf{x}})=0\}.$$ Note that $r=r({\mathcal{A}})$ is the rank of the arrangement. Define the {\bf space of branched polymers } associated to the arrangement ${\mathcal{A}}$ and nonnegative scalars $R_{i}$, $1\leq i\leq N$ to be $$P_{{\mathcal{A}}}=\{ \mbox{\bf{x}} \in \mathbb{C}^r \mid |h_{i}(\mbox{\bf{x}})|\geq R_{i} \mbox{ for all } H_i \in {\mathcal{A}} \}.$$ This space can be thought of as $\mathbb{C}^r$ with $N$ tubes removed. 
Namely, let $$T_i=\{ \mbox{\bf{x}} \in \mathbb{C}^r \mid |h_i(\mbox{\bf{x}})|<R_i\}, \mbox{ } i \in [N],$$ be the $i^{th}$ tube, which is the set of points in $\mathbb{C}^r$ at distance less than $R_i/ ||h_i||$ from the hyperplane $H_i$. Clearly then $P_{{\mathcal{A}}}$ is the complement to all the tubes in $\mathbb{C}^r$. The space $P_{\mathcal{A}}$ is related to the well-studied space $C_{\mathcal{A}}=\mathbb{C}^r \backslash \bigcup_{H \in {\mathcal{A}}} H,$ the complement of the arrangement ${\mathcal{A}}$ in $\mathbb{C}^r$. The space of branched polymers $P_{{\mathcal{A}}}$ decomposes as the disjoint union \begin{equation} \label{bp} P_{{\mathcal{A}}}=\bigsqcup_{S \subset {\mathcal{A}}} P_{{\mathcal{A}}}^S,\end{equation} where $$ P_{{\mathcal{A}}}^S=\{\mbox{\bf{x}} \in \mathbb{C}^r \mid |h_{i}(\mbox{\bf{x}})|> R_ {i}\mbox{ for } H_i \in {\mathcal{A}} \backslash S, \mbox{ } |h_{i}(\mbox{\bf{x}})|=R_ {i} \mbox{ for } H_i \in S \}.$$ Consider ${\mathcal{A}}$ as the matroid with ground set the hyperplanes in ${\mathcal{A}}$ and independence given by linear independence of the normals to the hyperplanes. Define the {\bf space of connected branched polymers} associated to ${\mathcal{A}}$ as \begin{equation} \label{conn} CP_{\mathcal{A}}=\bigsqcup_{S \subset {\mathcal{A}} \mbox{ } : \mbox{ } r(S)=r({\mathcal{A}})} P_{{\mathcal{A}}}^S. \end{equation} \medskip We now define the notions of volume $\vol$ and $q$-volume $\qvol$ of $P_{{\mathcal{A}}}^S$ for any set $S \subset {\mathcal{A}}$ with respect to ${\mathcal{A}}$. Let $\vol P_{{\mathcal{A}}}^S=0$ for any dependent set $S$, since ${\rm dim} \mbox{ } P_{{\mathcal{A}}}^S< r({\mathcal{A}})$. Let $S=\{h_{i_1 }, \ldots, h_{i_{r} }\}$ be a base of ${\mathcal{A}}$. Embed $P_{{\mathcal{A}}}^S$ into $[0, 2\pi]^{r}$ by $$\mbox{\bf{x}} \mapsto \varphi(\mbox{\bf{x}})=(\varphi_1, \ldots, \varphi_{r}),$$ where $h_{i_k }(\mbox{\bf{x}})=e^{i \varphi_{k}} R_ {i_k }, k \in [r] $.
Define the volume of $P_{{\mathcal{A}}}^S$ as the volume of its image in $[0, 2\pi]^{r}$: $$\vol P_{{\mathcal{A}}}^S=\vol (\{ \varphi(\mbox{\bf{x}}) \mid \mbox{\bf{x}} \in P_{{\mathcal{A}}}^S\}).$$ For any subset $S=\{H_{i_1}, \ldots, H_{i_l}\} $ of ${\mathcal{A}}$, let $$\langle S \rangle_{{\mathcal{A}}}:=\{ H_i \in {\mathcal{A}} \mid h_i \mbox{ is in the span } \langle h_{i_1}, \ldots, h_{i_l} \rangle \}.$$ We consider $\langle S \rangle_{{\mathcal{A}}}$ as a hyperplane arrangement in $\mathbb{C}^r / (H_{i_1} \cap \ldots \cap H_{i_l}) \cong \mathbb{C}^{r'}.$ Its rank is $r(\langle S \rangle_{{\mathcal{A}}})=r'$. Define $$\vol P_{{\mathcal{A}}}^S:=\vol P_{\langle S \rangle_{{\mathcal{A}}}}^S \cdot (2\pi)^{r({\mathcal{A}})-r(\langle S \rangle_{{\mathcal{A}}})},$$ and $$\qvol P_{{\mathcal{A}}}^S:=\vol P_{{\mathcal{A}}}^S \cdot q^{r({\mathcal{A}})-r(\langle S \rangle_{{\mathcal{A}}})}.$$ Finally, define $$\vol CP_{{\mathcal{A}}}=\sum_{S \in \mathcal{B}({\mathcal{A}})} \vol P_{{\mathcal{A}}}^S, \mbox{ }\vol P_{{\mathcal{A}}}=\sum_{S \in \mathcal{I}({\mathcal{A}})} \vol P_{{\mathcal{A}}}^S, $$ and $$\qvol P_{{\mathcal{A}}}=\sum_{S \in \mathcal{I}({\mathcal{A}})} \qvol P_{{\mathcal{A}}}^S, $$ where $\mathcal{I}(\mathcal{M})$ denotes the independent sets of the matroid $\mathcal{M}$. \medskip \begin{theorem} \label{01} \begin{equation} \label{beta1} \qvol P_{\mathcal{A}}=(-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(-q),\end{equation} where $\cha(t)$ is the characteristic polynomial of the central hyperplane arrangement ${\mathcal{A}}$ in $\mathbb{C}^r$ with $\dim(\bigcap_{H \in {\mathcal{A}}} H)=0$. {\it In particular,} \begin{equation} \label{nobeta} \vol CP_{\mathcal{A}}=(-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(0).\end{equation} \end{theorem} \medskip We prove Theorem \ref{01} in the next section. \medskip We now point out an alternative way of thinking about the $q$-volume. Recall that $\langle S \rangle_{\mathcal{A}}$ has the structure of a matroid.
Two hyperplanes $H_k$ and $H_l$ are in the same {\bf connected component} of $\langle S \rangle_{\mathcal{A}}$ if they are in a common circuit of the matroid defined by $\langle S \rangle_{\mathcal{A}}$. Let the components of $\langle S \rangle_{\mathcal{A}}$ be $C_1, \ldots, C_k$. Then it is possible to write $S=\bigsqcup_{l=1}^k S_l$ such that $C_i=\langle S_i \rangle_{\mathcal{A}}$. We call $S_1, \ldots, S_k$ the {\bf components} of $S$. The volume $\qvol P_{\langle S \rangle_{{\mathcal{A}}}}^S$, where $S_1, \ldots, S_k$ are the components of $S$, can be written as $$\qvol P_{\langle S \rangle_{{\mathcal{A}}}}^S=\qvol P_{\langle S_1 \rangle_{\mathcal{A}}}^{S_1} \cdots \qvol P_{\langle S_k \rangle_{\mathcal{A}}}^{S_k}.$$ \section{The proof of Theorem \ref{01}} \label{sec:proof} In this section we prove Theorem \ref{01}. We first state a few lemmas we use in the proof. \begin{lemma} \label{inv} (Invariance Lemma) The volume $ \sum_{S \mbox{ }: \mbox{ } |S|=s, \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} \vol P_{{\mathcal{A}}}^S$ is independent of the $R_ i$, $i \in [N]$, where $s \in \{0, 1, \ldots, r({\mathcal{A}})\}$. \end{lemma} Lemma \ref{inv} is more general than the Invariance Lemma stated and proved in \cite{kw}, but the techniques used in \cite{kw} apply just as well in this case, yielding the proof of Lemma \ref{inv}. \begin{lemma} \label{a'} Let ${\mathcal{A}}'={\mathcal{A}} \backslash \{H_1\}$, such that $r({\mathcal{A}})=r({\mathcal{A}}')$. Then as $R_ 1 \rightarrow 0$, $$\vol P_{{\mathcal{A}}'}^{S}=\vol P_{{\mathcal{A}}}^{S},$$ for any independent set $S \subset {\mathcal{A}}$ with $H_1 \not \in S$.
\end{lemma} \proof Note that $r(\langle S \rangle_{\mathcal{A}})=r(\langle S \rangle_{{\mathcal{A}}'}).$ Let the set of components of $\langle S \rangle_{{\mathcal{A}}}$ be $\{\langle S_1 \rangle_{{\mathcal{A}}}, \ldots, \langle S_k \rangle_{{\mathcal{A}}} \} $, and the set of components of $\langle S \rangle_{{\mathcal{A}}'}$ be $\{ \langle S_{1i} \rangle_{{\mathcal{A}}'}\}_{i \in I_1} \cup \ldots \cup \{\langle S_{ki} \rangle_{{\mathcal{A}}'}\}_{i \in I_k}.$ Then $\bigcup_{i \in I_l} S_{li}=S_l$, $ l \in [k]$: if two elements $x$ and $y$ are in the same component of $\langle S \rangle_{{\mathcal{A}}'}$, then there is a circuit in ${\mathcal{A}}$ containing $x$ and $y$ and not containing $H_1$. Also, if two elements $x$ and $y$ lie in different components of $\langle S \rangle_{{\mathcal{A}}'}$ and there is a circuit in ${\mathcal{A}}$ containing $x$ and $y$, then this circuit necessarily contains $H_1$; moreover, if $x'$ and $x$ are in a circuit of ${\mathcal{A}}$ and $x$ and $y$ are in a circuit of ${\mathcal{A}}$, then so are $x'$ and $y$. We have $$\vol P_{{\mathcal{A}}}^{S}=\prod_{j=1}^k \vol P_{\langle S_j \rangle_{\mathcal{A}}}^{S_j} \cdot (2 \pi)^{r({\mathcal{A}})-r(\langle S \rangle_{\mathcal{A}})}$$ and $$\vol P_{{\mathcal{A}}'}^{S} =\prod_{j=1}^k \prod_{i \in I_j} \vol P_{\langle S_{ji} \rangle_{{\mathcal{A}}'}}^{S_{ji}}\cdot (2 \pi)^{r({\mathcal{A}}')-r(\langle S \rangle_{{\mathcal{A}}'})}.$$ We claim that for any $j \in [k]$, \begin{equation} \label{equal} \vol P_{\langle S_j \rangle_{{\mathcal{A}}}}^{S_j}=\prod_{i \in I_j} \vol P_{\langle S_{ji} \rangle_{{\mathcal{A}}'}}^{S_{ji}}.\end{equation} Since $r({\mathcal{A}})-r(\langle S \rangle_{\mathcal{A}})=r({\mathcal{A}}')-r(\langle S \rangle_{{\mathcal{A}}'})$, equation (\ref{equal}) suffices to prove $\vol P_{{\mathcal{A}}}^{S}=\vol P_{{\mathcal{A}}'}^{S}$.
Equation (\ref{equal}) follows, since, as $R_ 1 \rightarrow 0$, the presence of the hyperplane $H_1$ in ${\mathcal{A}}$ imposes no constraints on the angles $\varphi_{j}$, for $j \in S_{ji}$. Furthermore, if $ H_l \in \langle S_{jl} \rangle_{{\mathcal{A}}'}$ and $ H_m \in \langle S_{jm} \rangle_{{\mathcal{A}}'}$, $l \neq m$, then as $R_ 1 \rightarrow 0$, the angles $\varphi_l$ and $\varphi_m$ vary independently of each other, as all circuits containing both of them also contain $H_1$. \qed \begin{lemma} \label{a''} Let ${\mathcal{A}}''={\mathcal{A}} / \{H_1\}$ be the contraction of ${\mathcal{A}}$ with respect to the hyperplane $H_1$ such that $r({\mathcal{A}})=r({\mathcal{A}}'')+1$. Then as $R_ 1 \rightarrow 0$, $$2 \pi \cdot \vol P_{{\mathcal{A}}''}^{S\backslash \{H_1\}}=\vol P_{{\mathcal{A}}}^{S},$$ for any independent set $S\subset {\mathcal{A}}$ with $H_1 \in S$. \end{lemma} \proof Note that since $S$ is an independent set in ${\mathcal{A}}$, so is $S\backslash \{H_1\}$ in ${\mathcal{A}}''$. Also, $r(\langle S \rangle_{\mathcal{A}})=r(\langle S\backslash \{H_1\} \rangle_{{\mathcal{A}}''})+1$ and $r({\mathcal{A}})-r(\langle S \rangle_{\mathcal{A}})=r({\mathcal{A}}'')-r(\langle S\backslash \{H_1\} \rangle_{{\mathcal{A}}''}).$ Thus, it suffices to prove that $2 \pi \cdot \vol P_{\langle S\backslash \{H_1\} \rangle_{{\mathcal{A}}''}}^{S\backslash \{H_1\}}=\vol P_{\langle S \rangle_{{\mathcal{A}}}}^{S}.$ Let the set of components of $\langle S \rangle_{{\mathcal{A}}}$ be $\{\langle S_1 \rangle_{{\mathcal{A}}}, \ldots, \langle S_k \rangle_{{\mathcal{A}}} \} $ with $H_1 \in S_1$. Note that any circuit in $\langle S \rangle_{\mathcal{A}}$ not involving $H_1$ automatically carries over to a circuit in $\langle S \rangle_{{\mathcal{A}}''}$ (easy to see if we assume, without loss of generality, that $h_1(\mbox{\bf{x}})=x_n$). Also, any circuit in $\langle S \rangle_{{\mathcal{A}}''}$ automatically lifts to a circuit in $\langle S \rangle_{{\mathcal{A}}}$.
Thus, the set of components of $\langle S \backslash \{H_1\}\rangle_{{\mathcal{A}}''}$ is $\{\langle S_{1j} \rangle_{{\mathcal{A}}''}\}_{j \in I_1} \cup \{\langle S_2 \rangle_{{\mathcal{A}}''}, \ldots, \langle S_k \rangle_{{\mathcal{A}}''} \} $, where $\bigcup_{j \in I_1} S_{1j}=S_1$. As $R_ 1 \rightarrow 0$, the volumes satisfy $\vol P_{\langle S_i \rangle_{{\mathcal{A}}''}}^{S_i}=\vol P_{\langle S_i \rangle_{{\mathcal{A}}}}^{S_i},$ $i \in \{2, \ldots, k\}$. Finally, if $ H_l \in \langle S_{1l} \rangle_{{\mathcal{A}} }$ and $ H_m \in \langle S_{1m} \rangle_{{\mathcal{A}}}$, $l \neq m$, then as $R_ 1 \rightarrow 0$, the angles $\varphi_l$ and $\varphi_m$ vary independently of each other, as all circuits containing both of them also contain $H_1$. Thus, $2\pi \cdot \prod_{j \in I_1} \vol P_{\langle S_{1j} \rangle_{{\mathcal{A}}''}}^{S_{1j}}=\vol P_{\langle S_1 \rangle_{{\mathcal{A}}}}^{S_1},$ the factor of $2 \pi$ accounting for the fact that $H_1 \in S_1$ and, as $R_ 1 \rightarrow 0$, the angle $\varphi_1$ ranges freely between $0$ and $2\pi$. \qed \medskip \noindent {\it Proof of Theorem \ref{01}.} Equation (\ref{nobeta}) is a consequence of (\ref{beta1}) by the definitions of the spaces $P_{\mathcal{A}}$ and $CP_{\mathcal{A}}$ and their notions of volume. To prove equation (\ref{beta1}) we consider two cases depending on whether there is a hyperplane $H \in {\mathcal{A}}$ such that $r({\mathcal{A}})=r({\mathcal{A}}')$, for ${\mathcal{A}}'={\mathcal{A}} \backslash \{H\}$. Suppose ${\mathcal{A}}$ is such that if we delete any hyperplane $H$ from it, obtaining the hyperplane arrangement ${\mathcal{A}}'={\mathcal{A}} \backslash \{H\}$, then $r({\mathcal{A}}')=r({\mathcal{A}})-1$. This means that the hyperplanes of ${\mathcal{A}}$ constitute a base.
Then, \begin{align*}\qvol P_{{\mathcal{A}}}&=\sum_{S \mbox{ }: \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} q^{r({\mathcal{A}})-|S|} \vol P_{{\mathcal{A}}}^S\\ &= \sum_{i=0}^{r({\mathcal{A}})} { r({\mathcal{A}}) \choose i} q^{r({\mathcal{A}})-i} (2 \pi)^{r({\mathcal{A}})} \\ &=(1+q)^{r({\mathcal{A}})} (2 \pi)^{r({\mathcal{A}})}\\ &=(-2 \pi)^{r({\mathcal{A}})} (-q-1)^{r({\mathcal{A}})} \\ &=(-2 \pi)^{r({\mathcal{A}})}\cha(-q) . \end{align*} The second equality holds since the independent sets of size $i$ are exactly the $i$-subsets of the base ${\mathcal{A}}$, of which there are ${ r({\mathcal{A}}) \choose i}$, and $ \vol P_{{\mathcal{A}}}^S=(2 \pi)^{r({\mathcal{A}})}$ for any set $S$, since ${\mathcal{A}}$ is a base itself. The last equality holds since if ${\mathcal{A}}$ itself is a base, then $\cha(t)=(t-1)^{r({\mathcal{A}})}$. Suppose now that there exists a hyperplane $H \in {\mathcal{A}}$ such that $r({\mathcal{A}})=r({\mathcal{A}}')$ for ${\mathcal{A}}'={\mathcal{A}} \backslash \{H\}$. Assume without loss of generality that $H=H_1$. If ${\mathcal{A}}'$ and ${\mathcal{A}}''$ are the deletion and restriction of ${\mathcal{A}}$ with respect to the hyperplane $h_1(\mbox{\bf{x}})=0$, then $r({\mathcal{A}})=r({\mathcal{A}}')=r({\mathcal{A}}'')+1$. Recall that \begin{equation} \label{rec1} \cha(t)=\chi_{{\mathcal{A}}'}(t)-\chi_{{\mathcal{A}}''}(t).\end{equation} \medskip Let $$R({\mathcal{A}}, q)= (-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(-q)$$ denote the right hand side of equation (\ref{beta1}). By (\ref{rec1}) $$R({\mathcal{A}}, q)=R({\mathcal{A}}', q)+2 \pi R({\mathcal{A}}'', q).$$ In particular, $$[q^p]R({\mathcal{A}}, q)=[q^p]R({\mathcal{A}}', q)+2 \pi [q^p]R({\mathcal{A}}'', q),$$ for any integer power $p$ of $q$.
To prove the theorem it suffices to show that the same recurrence holds for \begin{equation} \label{rec2} L({\mathcal{A}}, q)=\sum_{S \mbox{ }: \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} q^{r({\mathcal{A}})-|S|} \vol P_{{\mathcal{A}}}^S,\end{equation} since the case when we cannot use the recurrence is exactly when removing any hyperplane from ${\mathcal{A}}$ reduces the rank, and we proved the validity of Theorem \ref{01} for it above. Recurrence (\ref{rec2}) holds, since \begin{align*} \sum_{S \mbox{ }: \mbox{ } |S|=s, \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} & \vol P_{{\mathcal{A}}}^S= \sum_{S' \mbox{ }: \mbox{ } |S'|=s, \mbox{ } {S'} \in \mathcal{I}({{\mathcal{A}}'})} \vol P_{{\mathcal{A}}'}^{S'} + \\ &+2 \pi \sum_{S'' \mbox{ }: \mbox{ } |S''|=s-1, \mbox{ } {S''} \in \mathcal{I}({{\mathcal{A}}''})} \vol P_{{\mathcal{A}}''}^{S''},\end{align*} since Lemma \ref{a'} and \ref{a''} imply \begin{equation} \label{notin} \sum_{S' \mbox{ }: \mbox{ } |S'|=s, \mbox{ } {S'} \in \mathcal{I}({{\mathcal{A}}'})} \vol P_{{\mathcal{A}}'}^{S'} =\sum_{S \mbox{ }: \mbox{ } H_1 \not \in S, \mbox{ } |S|=s, \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} \vol P_{{\mathcal{A}}}^S\end{equation} and \begin{equation} \label{in} 2 \pi \sum_{S'' \mbox{ }: \mbox{ } |S''|=s-1, \mbox{ } {S''} \in \mathcal{I}({{\mathcal{A}}''})} \vol P_{{\mathcal{A}}''}^{S''}=\sum_{S \mbox{ }: \mbox{ } H_1 \in S, \mbox{ } |S|=s, \mbox{ } S \in \mathcal{I}({{\mathcal{A}}})} \vol P_{{\mathcal{A}}}^S. \end{equation} \qed \begin{example} By Theorem \ref{01} the $q$-volume of the space of branched polymers associated to the braid arrangement is \begin{align} \label{uh} \qvol \mbox{ } P_{\mathcal{B}_n}&=(-2 \pi)^{r(\mathcal{B}_n)} {\chi}_{\mathcal{B}_n}(-q)\\ \nonumber&=(2 \pi)^{n-1} (q+1)\cdots (q+(n-1)). \end{align} A special case of (\ref{uh}) is equation (\ref{vol}). 
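For a quick check of (\ref{uh}), take $n=3$: $$\qvol \mbox{ } P_{\mathcal{B}_3}=(2 \pi)^{2} (q+1)(q+2), \qquad \vol CP_{\mathcal{B}_3}=\qvol \mbox{ } P_{\mathcal{B}_3} \Big|_{q=0}=2 (2 \pi)^{2},$$ and, more generally, setting $q=0$ in (\ref{uh}) gives $\vol CP_{\mathcal{B}_n}=(n-1)! \, (2 \pi)^{n-1}$.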
\end{example} \section{G-polymers} \label{sec:gpoly} In this section we discuss $G$-polymers, as defined in \cite[Section 3]{kw}, and show how they correspond to graphical arrangements in our setup. We also rewrite the $q$-volume of the space of generalized $G$-polymers in the language of the chromatic polynomial of $G$, which is a specialization of its Tutte polynomial. Given a graph $G=([n], E)$, let $R_ {ij}$ be positive scalars for each edge $(i, j)$ of $G$. A {\bf $G$-polymer}, following Kenyon and Winkler, is a configuration of points $(x_1, \ldots, x_n) \in \mathbb{C}^n$ such that \begin{itemize} \item $|x_i-x_j|\geq R_ {ij}$, for any edge $(i, j)$ of $G$ \item $ |x_i-x_j|= R_ {ij}$ for all edges $(i, j)$ of some connected subgraph of $G$. \end{itemize} If $G$ is not connected, there are no $G$-polymers. The volume of the space of $G$-polymers is defined by the angles made by the vectors from $x_i$ to $x_j$, where $(i, j) \in E$ is such that $|x_i -x_j|=R_ {ij}$. See \cite{kw} for further details. Recall that the {\bf graphical arrangement} ${\mathcal{A}}_G$ corresponding to the graph $G=([n], E)$ consists of the $|E|$ hyperplanes $$H_{ij}=\{\mbox{\bf{x}} \in \mathbb{C}^n \mid x_i-x_j=0\} \mbox{ for } (i, j) \in E, i<j.$$ We consider ${\mathcal{A}}_G$ as a hyperplane arrangement in $\mathbb{C}^n / ( \bigcap_{(i, j) \in E} H_{ij}).$ The space of connected branched polymers associated to ${\mathcal{A}}_G$, as defined in (\ref{conn}), coincides with the space of $G$-polymers. Let $\chi_G(t)$ be the chromatic polynomial of the graph $G$, $\widetilde{\chi}_{G}(t)=\chi_G(t)/t^{k(G)}$, where $k(G)$ is the number of components of $G$ and $r(G)= n-k(G)$. A special case of Theorem \ref{01} is the following proposition.
\begin{proposition} \label{chrom} {\rm (\cite[Theorem 4]{kw})} \begin{equation} \label{ag} \vol CP_{{\mathcal{A}}_G}=(-2 \pi)^{r(G)} \widetilde{\chi}_{G}(0).\end{equation} \end{proposition} Proposition \ref{chrom} follows from Theorem \ref{01} since $\widetilde{\chi}_G(t)=\chi_{{\mathcal{A}}_G}(t)$ (\cite[Theorem 2.7.]{s}) and $r(G)=r({\mathcal{A}}_G)$. Recall also that the chromatic polynomial $\chi_G(t)$ of a graph $G$ is a special case of its Tutte polynomial $T_G(x, y)$: $\chi_G(t)=(-1)^{r(G)} t^{k(G)}T_G(1-t, 0)$ (\cite[Theorem 6, p. 346]{b}). In this light, Proposition \ref{chrom} coincides with \cite[Theorem 4]{kw}, which is stated in terms of the Tutte polynomial of $G$. The {\bf space of generalized $G$-polymers} is $P_{{\mathcal{A}}_G}$ as defined in (\ref{bp}). A special case of Theorem \ref{01} is the following proposition. \begin{proposition} \label{G} \begin{align*}\qvol P_{{\mathcal{A}}_G}&=(-2 \pi)^{r(G)} \widetilde{\chi}_{G}(-q). \end{align*} \end{proposition} As Proposition \ref{G} illustrates, the notion of $q$-volume of the space of generalized $G$-polymers allows for recovering the entire chromatic polynomial of~ $G$. \section{Symmetric polymers} \label{sec:sym} In this section we specialize our results to the type $B_n$ Coxeter arrangement. This arrangement naturally gives rise to polymers symmetric with respect to the origin. 
The {\bf type $B_n$ Coxeter arrangement} ${\mathcal{A}}(B_n)$, $n >1$, in $\mathbb{C}^n$, consists of the $2{n \choose 2}+n$ hyperplanes $$H_{ij}^-=\{ \mbox{\bf{x}} \in \mathbb{C}^n \mid x_i-x_j=0\}, H_{ij}^+=\{ \mbox{\bf{x}} \in \mathbb{C}^n \mid x_i+x_j=0\},$$ $$H_{k}=\{ \mbox{\bf{x}} \in \mathbb{C}^n \mid x_k=0\}, \mbox{ for } 1\leq i<j\leq n, k \in [n].$$ We consider ${\mathcal{A}}(B_n)$ as a hyperplane arrangement in $V=\mathbb{C}^n / ( \bigcap_{H \in {\mathcal{A}}(B_n)} H).$ \medskip The space of branched polymers associated to ${\mathcal{A}}(B_n)$ and nonnegative scalars $R_ {ij}^+, R_ {ij}^-, R_ k$, for $1\leq i<j\leq n, k \in [n] $ is, by definition, \begin{align*}P_{{\mathcal{A}}(B_n)}=\{ \mbox{\bf{x}} \in V \mid |x_i+x_j|\geq R_ {ij}^+, |x_i-x_j|\geq R_ {ij}^-, |x_k|\geq R_ {k}, \\\mbox{ for } 1\leq i<j\leq n, k \in [n] \}.\end{align*} To obtain symmetric polymers, place a disk of radius $r_k$ around the points $x_k$ and $-x_k$ in the plane for all $k \in [n]$. Let $R_{ij}^+=R_{ij}^-=r_i+r_j$ and $R_k=r_k$, for $1\leq i<j\leq n, k \in [n]$. Then the condition $|x_i+x_j|\geq R_{ij}^+$ ensures that the disks around $x_i$ and $-x_j$, and $-x_i$ and $x_j$ do not overlap, the condition $|x_i-x_j|\geq R_ {ij}^-$ ensures that the disks around $x_i$ and $x_j$, and $-x_i$ and $-x_j$ do not overlap, and the condition $|x_k|\geq R_ {k}$ ensures that the disks around $x_k$ and $-x_k$ do not overlap. Since the characteristic polynomial of ${\mathcal{A}}(B_n)$ is given by $$\chi_{{\mathcal{A}}(B_n)} (t)=(t-1)\cdot (t-3) \cdots (t-(2n-1)), \mbox{ \cite[p. 451]{s}}$$ Theorem \ref{01} specializes to the following corollary.
\begin{corollary} \label{cor:sym} \begin{equation} \qvol P_{{\mathcal{A}}(B_n)}=(2 \pi)^n(q+1)\cdot (q+3) \cdots (q+(2n-1)).\end{equation} In particular, \begin{equation} \vol CP_{{\mathcal{A}}(B_n)}=(2 \pi)^n 1\cdot 3 \cdots (2n-1).\end{equation} \end{corollary} \medskip \begin{figure}[htbp] \begin{center} \includegraphics[scale=.75]{sym1.png} \caption{The graph representing a base and the corresponding polymer. The half-line drawn on the figure is the positive $x$-axis. The parametrization can be done by the angles which the dashed line segments make with the $x$-axis.} \label{fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=.75]{sym2.png} \caption{The graph representing a base and the corresponding polymer. The half-line drawn on the figure is the positive $x$-axis. The parametrization can be done by the angles which the dashed line segments make with the $x$-axis.} \label{fig2} \end{center} \end{figure} We now outline the geometric picture of connected branched polymers associated to ${\mathcal{A}}(B_n)$. Since $\vol CP_{{\mathcal{A}}(B_n)}=\sum_{S \in \mathcal{B}({{\mathcal{A}}(B_n)})} \vol P_{{\mathcal{A}}(B_n)}^S$ we restrict our attention to the polymers corresponding to the bases of ${\mathcal{A}}(B_n)$. Biject a subset $S$ of hyperplanes of ${\mathcal{A}}(B_n)$ with a graph $G_S$ on vertex set $[n]$ by adding an edge labeled by $+$ between vertices $i$ and $j$ if $H_{ij}^+ \in S$, by adding an edge labeled by $-$ between vertices $i$ and $j$ if $H_{ij}^- \in S$, and adding a loop at vertex $k$ if $H_k \in S$. Then the bases of ${\mathcal{A}}(B_n)$ correspond to graphs $G$ on vertex set $[n]$ with $n$ edges such that each component of $G$ contains exactly one cycle or one loop, and if it contains a cycle then the cycle has an odd number of edges labeled by $+$. It is easy to see that if there is more than one loop in $G_S$, for some base $S$, then the stratum $P_{{\mathcal{A}}(B_n)}^S$ is empty. 
This is true, since a loop based at vertex $k$ in $G_S$ corresponds to two disks of radius $R_ k=r_k$ around the points $x_k$ and $-x_k$, touching at the origin. If there were also a loop at vertex $l \neq k$, it would be impossible to also satisfy $|x_k+x_l|\geq r_l+r_k$ and $|x_k-x_l|\geq r_l+r_k$ if $k<l$ or $|x_l-x_k|\geq r_l+r_k$ if $k>l$. The disks corresponding to a component of $G_S$ with a loop at $k$ look like a tree symmetric across the origin and can be parametrized by the angles which the edges of ``half'' of the tree make with the $x$-axis and by the angle made by the segment $0-x_k$ and the $x$-axis; see Figure \ref{fig1}. The disks corresponding to a component of $G_S$ with a cycle are symmetric with respect to the origin and contain a cycle of disks arranged symmetrically around the origin. If the cycle is of length $2m$, then the polymer is parametrized by $m+1$ angles which the edges of the cycle make with the $x$-axis, as well as the angles that ``half'' of the remaining edges make with the $x$-axis; see Figure \ref{fig2}. \section{Volumes of branched polymers and broken circuits} \label{sec:nbc} In this section we relate the volumes of branched polymers associated to ${\mathcal{A}}$ and broken circuits of ${\mathcal{A}}$. While the volume $\vol \mbox{ } CP_{{\mathcal{A}}}$ is invariant under changing the radii $R_i$ by the Invariance Lemma, the individual volumes of the strata $ P_{{\mathcal{A}}}^S$ change. By picking the radii $R_ i$ appropriately, it is possible to construct a stratification in which only strata corresponding to certain special bases $S$ appear, and they all have the same volume. The mentioned special bases are the no broken circuit bases, as we show in Theorem \ref{bc}. A {\bf broken circuit} of ${\mathcal{A}}$ with respect to a linear ordering $\mathcal{O}$ of the hyperplanes of ${\mathcal{A}}$ is a set $C-\{u\},$ where $C$ is a circuit and $u$ is the largest element of $C$ in the linear ordering $\mathcal{O}$.
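As a small illustration of the definition, let ${\mathcal{A}}$ be the braid arrangement $\mathcal{B}_3$ with hyperplanes $H_{12}<H_{13}<H_{23}$, where $H_{ij}=\{\mbox{\bf{x}} \mid x_i-x_j=0\}$. The only circuit is $C=\{H_{12}, H_{13}, H_{23}\}$, since $(x_1-x_2)+(x_2-x_3)=x_1-x_3$, so the only broken circuit is $C-\{H_{23}\}=\{H_{12}, H_{13}\}$. The subsets containing no broken circuit number $1$, $3$ and $2$ in sizes $0$, $1$ and $2$, respectively, matching, up to sign, the coefficients of $\chi_{\mathcal{B}_3}(t)=(t-1)(t-2)=t^2-3t+2$.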
While the exact set of broken circuits depends on the linear order we choose, remarkably, the number of sets $S$ of hyperplanes of size $k$, for an arbitrary fixed $k$, such that $S$ contains no broken circuit is the same; see \cite[Theorem 4.12]{s}. \begin{theorem} \label{bc} Let ${\mathcal{A}}=\{H_1, \ldots, H_N\}$ be a hyperplane arrangement with $R_ 1 \ll \cdots \ll R_N$. Then the space $CP_{\mathcal{A}}$ is a disjoint union of $(-1)^{r({\mathcal{A}})}\cha(0)$ tori $(S^1)^{r({\mathcal{A}})}$. More precisely, if $\mathcal{O}$ is the order $H_1 <\cdots<H_N$ on the hyperplanes of ${\mathcal{A}}$ and $BC_\mathcal{O}({\mathcal{A}})$ are all subsets of ${\mathcal{A}}$ containing no broken circuits with respect to $\mathcal{O}$, then for $R_ 1 \ll \cdots \ll R_N$ \begin{equation} \label{eq:tori} CP_{{\mathcal{A}}}=\bigsqcup_{S \in \mathcal{B}({{\mathcal{A}}}) \cap BC_\mathcal{O}({\mathcal{A}})} P_{{\mathcal{A}}}^S,\end{equation} where $\vol \mbox{ } P_{{\mathcal{A}}}^S=(2\pi)^{r({\mathcal{A}})}$ for $S \in \mathcal{B}({{\mathcal{A}}}) \cap BC_\mathcal{O}({\mathcal{A}})$. \end{theorem} \proof By \cite[Theorem 4.12]{s} \begin{equation} \label{412} \# \{S \mid |S|=i, S \in BC_\mathcal{O}({\mathcal{A}}) \} =(-1)^{r({\mathcal{A}})}[q^{r({\mathcal{A}})-i}]\cha(-q).\end{equation} Thus, $|\mathcal{B}({{\mathcal{A}}}) \cap BC_\mathcal{O}({\mathcal{A}})|=(-1)^{r({\mathcal{A}})}\cha(0)$, and it suffices to prove the second part of the theorem's statement. Recall that by definition $$ CP_{\mathcal{A}}=\bigsqcup_{S \subset {\mathcal{A}} \mbox{ } : \mbox{ } r(S)=r({\mathcal{A}})} P_{{\mathcal{A}}}^S. $$ We first show that if $S$ contains a broken circuit, then $P_{{\mathcal{A}}}^S$ is empty. Let $\{H_{i_1}, \ldots, H_{i_k}\} \subset S$, $i_1<\cdots <i_k$, be a broken circuit. Then there exists $H_i \in {\mathcal{A}}$ such that $\{H_{i_1}, \ldots, H_{i_k}, H_i\}$ is a circuit and $i_k<i$. Let $h_i =\sum_{j=1}^k c_j h_{i_j}$, $c_j\neq 0$.
Then for any $\mbox{\bf{x}} \in P_{{\mathcal{A}}}^{S}$, $$\sum_{j=1}^k |c_j| R_{i_k} \geq \sum_{j=1}^k |c_j| R_{i_j}=\sum_{j=1}^k |c_j h_{i_j}(\mbox{\bf{x}})|\geq |\sum_{j=1}^k c_j h_{i_j}(\mbox{\bf{x}})|=|h_i(\mbox{\bf{x}})|\geq R_{i}.$$ However, $$\sum_{j=1}^k |c_j| R_{i_k} < R_{i}, \mbox{ since } R_{i_k} \ll R_i.$$ Thus, $P_{{\mathcal{A}}}^{S}$ is empty. Next we prove that $\vol P_{{\mathcal{A}}}^S=(2\pi)^{r({\mathcal{A}})}$ for $S \in \mathcal{B}({{\mathcal{A}}}) \cap BC_\mathcal{O}({\mathcal{A}})$. Let $S=\{H_{j_1}, \ldots, H_{j_r}\}$ with ${j_1}< \cdots<{j_r}$. Since $S$ contains no broken circuit, it follows that if $ i \in [N]\backslash\{{j_1}, \ldots, {j_r}\}$, then $i<j_r$. To prove that $\vol P_{{\mathcal{A}}}^S=(2\pi)^{r({\mathcal{A}})}$, it suffices to show that the equalities $|h_{j_l}(\mbox{\bf{x}})|=R_{j_l}$, $l \in [r]$, imply $|h_{i}(\mbox{\bf{x}})|>R_{i}$ for $ i \in [N]\backslash\{{j_1}, \ldots, {j_r}\}$. Since $S$ is a base, $S\cup \{H_i\}$ contains a circuit $C$ whose maximal element is $H_{j_k}\neq H_i$, $k \in [r]$, since $S \in BC_\mathcal{O}({\mathcal{A}})$. Thus, $$h_{j_k}(\mbox{\bf{x}})=c_i h_i(\mbox{\bf{x}})+\sum_{l \in [r]\backslash \{k\}}c_{j_l}h_{j_l}(\mbox{\bf{x}}),$$ where $c_i, c_{j_1}, \ldots, c_{j_r}$ are scalars with $c_i \neq 0$.
In particular, $$|h_{j_k}(\mbox{\bf{x}})| \leq |c_i|| h_i(\mbox{\bf{x}})|+\sum_{l \in [r]\backslash \{k\}}|c_{j_l}||h_{j_l}(\mbox{\bf{x}})|.$$ Since $|h_{j_l}(\mbox{\bf{x}})|=R_{j_l}$ for $l \in [r]$ and $R_{j_k} \gg R_i$, it follows that $|h_{i}(\mbox{\bf{x}})|> R_{i}.$ \qed \vspace{.05in} \noindent {\it An alternative proof of Theorem \ref{01}.} By definition, \begin{equation} \label{um2} \qvol P_{\mathcal{A}}=\sum_{i=0}^{r({\mathcal{A}})} \sum_{S \in \mathcal{S}_i} (q \cdot 2\pi)^{r({\mathcal{A}})-i} \vol CP_{\langle S \rangle_{\mathcal{A}}},\end{equation} where $\mathcal{S}_i$ is a collection of independent sets $S$ of cardinality $i$ such that the arrangements $\langle S \rangle_{\mathcal{A}}$ for $S \in \mathcal{S}_i$ run over all rank $i$ subarrangements of ${\mathcal{A}}$. By Theorem \ref{bc} the right hand side of equation (\ref{um2}) is \begin{align} \label{um3} \sum_{i=0}^{r({\mathcal{A}})} \sum_{S \in \mathcal{S}_i} (q \cdot 2\pi)^{r({\mathcal{A}})-i} \# \{B \mid B \in \mathcal{B}({\langle S \rangle_{\mathcal{A}}}) \cap BC_\mathcal{O}(\langle S \rangle_{\mathcal{A}}) \} \cdot (2\pi)^i= \\ \label{um4} \sum_{i=0}^{r({\mathcal{A}})} q^{r({\mathcal{A}})-i} \cdot (2\pi)^{r({\mathcal{A}})}\# \{S \mid |S|=i, S \in BC_\mathcal{O}({\mathcal{A}}) \}.
\end{align} Since by \cite[Theorem 4.12]{s} \begin{equation} \label{4.12} \# \{S \mid |S|=i, S \in BC_\mathcal{O}({\mathcal{A}}) \} =(-1)^{r({\mathcal{A}})}[q^{r({\mathcal{A}})-i}]\cha(-q),\end{equation} it follows from equations (\ref{um2}), (\ref{um3}), (\ref{um4}) and (\ref{4.12}) that $$ \qvol P_{\mathcal{A}}=(-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(-q).$$ In particular, $$\vol CP_{\mathcal{A}}=(-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(0).$$ \qed Note the striking similarity between the formula for the $1/t$-volume of branched polymers $P_{{\mathcal{A}}}$, \begin{equation} \label{eq:vol} {\rm vol}_{1/t} \mbox{ } P_{{\mathcal{A}}}=(-2 \pi)^{r({\mathcal{A}})} {\chi}_{{\mathcal{A}}}(-1/t)\end{equation} and the generating function \begin{equation} \label{eq:os} \sum_{i\geq 0} {\rm rank } \mbox{ } H^i(C_{\mathcal{A}}, \mathbb{Z})t^i=(-t)^{r({\mathcal{A}})} \cha(-1/t),\end{equation} where $C_{\mathcal{A}}=\mathbb{C}^r \backslash \bigcup_{H \in {\mathcal{A}}} H$ is the complement of the arrangement ${\mathcal{A}}$ in $\mathbb{C}^r$ and $H^i(C_{\mathcal{A}}, \mathbb{Z})$ is the $i^{th}$ graded piece of the cohomology ring $H^*(C_{\mathcal{A}}, \mathbb{Z})$; see \cite{os}. The right hand sides of (\ref{eq:vol}) and (\ref{eq:os}) are in fact identical, if we normalize appropriately. This is no surprise, since we related ${\rm vol}_{1/t} \mbox{ } P_{{\mathcal{A}}}$ to no broken circuits in Theorem \ref{bc}, and since the cohomology ring $H^*(C_{\mathcal{A}}, \mathbb{Z})$ of $C_{\mathcal{A}}$ is isomorphic to the Orlik-Solomon algebra $A({\mathcal{A}})$ associated to the hyperplane arrangement ${\mathcal{A}}$, and the $i^{th}$ graded piece of $A({\mathcal{A}})$ has a basis in terms of no broken circuits. \section{Cohomology rings of the spaces $P_{\mathcal{A}}$ and $CP_{\mathcal{A}}$ and the Orlik-Solomon algebra} \label{sec:os} In this section we prove results about, and formulate a conjecture on, the cohomology rings of the spaces $P_{\mathcal{A}}$ and $CP_{\mathcal{A}}$.
The well-known Orlik-Solomon algebra is isomorphic to the cohomology ring $H^*(C_{\mathcal{A}}, \mathbb{Z})$ of $C_{\mathcal{A}}$, where $C_{\mathcal{A}}=\mathbb{C}^r \backslash \bigcup_{H \in {\mathcal{A}}} H$ is the complement of the arrangement ${\mathcal{A}}$ in $\mathbb{C}^r$. We prove that the spaces $P_{\mathcal{A}}$ and $C_{\mathcal{A}}$ are homotopy equivalent, and we conjecture that the same is true under certain circumstances for the spaces $CP_{\mathcal{A}}$ and $C_{\mathcal{A}}$. \begin{proposition} \label{f} The map $f: C_{\mathcal{A}} \rightarrow P_{\mathcal{A}}$ defined by $$f : (x_1, \ldots, x_r) \mapsto {\rm max}_{i \in [r]}(1, \frac{R_ i}{|h_i(\mbox{\bf{x}})|}) \cdot (x_1, \ldots, x_r),$$ where ${\mathcal{A}}$ is a hyperplane arrangement in $\mathbb{C}^r$, $$C_{\mathcal{A}}=\mathbb{C}^r \backslash \bigcup_{H \in {\mathcal{A}}} H,$$ $$P_{{\mathcal{A}}}=\{ \mbox{\bf{x}} \in \mathbb{C}^r \mid |h_{i}(\mbox{\bf{x}})|\geq R_ {i} \}$$ is a deformation retraction. \end{proposition} \proof It is straightforward to check that $f$ satisfies the following three conditions: (i) $f$ is continuous, (ii) $f |_{P_{\mathcal{A}}}=id$, (iii) $f$ is homotopic to the identity. \qed \begin{corollary} \label{corf} The spaces $P_{\mathcal{A}}$ and $C_{\mathcal{A}}$ are homotopy equivalent. In particular, the cohomology rings of the spaces $C_{\mathcal{A}}$ and $P_{\mathcal{A}}$, where ${\mathcal{A}}$ is a hyperplane arrangement in $\mathbb{C}^r$, satisfy $$H^*(P_{\mathcal{A}}, \mathbb{Z})\cong H^*(C_{\mathcal{A}}, \mathbb{Z}).$$ \end{corollary} In Corollary \ref{bc1} we proved that for ${\mathcal{A}}=\{H_1, \ldots, H_N\}$ and $R_1\ll \cdots \ll R_N$ the space $CP_{\mathcal{A}}$ is a disjoint union of $|\cha(0)|$ tori $(S^1)^{r({\mathcal{A}})}$. In particular, $CP_{\mathcal{A}}$ is disconnected and cannot be homotopy equivalent to $C_{\mathcal{A}}$.
However, if the scalars $R_i$, $i \in [N]$, satisfy certain generalized triangle inequalities, we conjecture that the spaces $CP_{\mathcal{A}}$ and $C_{\mathcal{A}}$ are homotopy equivalent, so in particular, the space $CP_{\mathcal{A}}$ is connected. Let ${\mathcal{A}}=\{H_1, \ldots, H_N\} $ be the hyperplane arrangement where the hyperplanes are $H_i=\{ \mbox{\bf{x}} \in \mathbb{C}^r \mid h_{i}(\mbox{\bf{x}})=0\}.$ Define the subset $T_{\mathcal{A}}$ of $ \mathbb{R}_{>0}^N$ to consist of all points ${\bf R}=(R_1, \ldots, R_N) \in \mathbb{R}_{>0}^N$ satisfying the inequalities $$ R_{i_0} \leq |c_1| R_{i_1}+\cdots + |c_k|R_{i_k},$$ whenever $$ h_{i_0}=c_1 h_{i_1}+\cdots + c_k h_{i_k}.$$ \begin{example} The set $T_{\mathcal{B}_n}$ consists of points ${\bf R}=(R_{ij})_{1\leq i< j\leq n}$, where $R_{ij}$ denotes the coordinate corresponding to the hyperplane $H_{ij}=\{ \mbox{\bf{x}} \in V \mid x_{i}-x_j=0\},$ for which the $R_{ij}$'s satisfy the triangle inequalities $R_{ij} \leq R_{ik}+R_{kj}$, for all $1\leq i< k<j \leq n$ and $R_{ij} \leq R_{ik}+R_{jk}$, for all $1\leq i< j<k \leq n$. In particular, $(R_{ij}=r_i+r_j)_{1\leq i< j\leq n} \in T_{\mathcal{B}_n}$, where $r_i$, $i \in [n]$, are nonnegative scalars. \end{example} \begin{conjecture} \label{?} For $(R_1, \ldots, R_N) \in T_{\mathcal{A}}$ the spaces $CP_{\mathcal{A}}$ and $C_{\mathcal{A}}$ are homotopy equivalent. In particular, $H^*(CP_{\mathcal{A}}, \mathbb{Z})\cong H^*(C_{\mathcal{A}}, \mathbb{Z}).$ \end{conjecture} A proof of Conjecture \ref{?} would yield another explanation of why the top nonzero cohomology of $C_{\mathcal{A}}$ is in dimension $n$ even though $C_{\mathcal{A}}$ has $2n$ real dimensions: the space $CP_{\mathcal{A}}$ has only $n$ real dimensions.
\section{Introduction}\label{sec:intro} Galaxies can be divided into two main types, blue star-forming galaxies that typically have disky morphologies, and red passive galaxies that on the contrary are bulge-dominated (e.g. \citealt{Gadotti2009}; \citealt{Bluck2014}; \citealt{Whitaker2015}). These two types can be easily identified in a color-magnitude diagram where they form, respectively, the blue cloud and the red sequence (e.g. \citealt{Strateva2001}; \citealt{Blanton2003}; \citealt{Kauffmann2003b}; \citealt{Baldry2004}). According to the current theory of galaxy evolution, galaxies move from the blue cloud to the red sequence (\citealt{Cowie1996}; \citealt{Baldry2004}; \citealt{Perez2008}; \citealt{Fritz2014}). The mechanisms involved in such a transformation, called galaxy quenching, are still a matter of study as they have to explain how the morphological transformation occurs, how star formation ceases, whether the environment and a galaxy's stellar mass play a role, and when and on which timescale such a process occurs. Galaxy evolution models reproduce quenching by shutting off the cold gas supply in a galaxy (e.g. \citealt{Gabor2010}). This can occur by inhibiting the cold gas from entering a galaxy or from producing stars, or through ejective feedback mechanisms that remove gas from the galaxy. One of the most viable ejective mechanisms is provided by powerful active galactic nuclei (AGN), powered by accretion onto supermassive black holes (SMBH, \citealt{Lynden1969}). The discovery that black hole (BH) masses of nearby bulges correlate with the stellar velocity dispersion, mass and luminosity of the bulge (e.g. \citealt{Magorrian1998}, \citealt{Ferrarese2000}, \citealt{Gebhardt2000}, \citealt{Gultekin2009}, \citealt{Kormendy2013}), and the similarities between the evolution of the star formation rate density and the growth of the AGN (e.g.
\citealt{Madau1996}, \citealt{Hasinger2005}, \citealt{Hopkins2007}, \citealt{Aird2015}) led to the idea that SMBH and their host-galaxies are tightly linked, despite the different size scales involved. Most galaxies have gone through an active phase during which the SMBH has accreted material, grown, and possibly supplied the energy to influence the host galaxy on large scale distances (e.g. \citealt{Cattaneo2009}, \citealt{Kauffmann2000}). This could be possible through large-scale outflows expelling a large fraction of the gas from the host-galaxy (e.g. \citealt{Silk1998}), where a small fraction of the energy released by the BH accretion would be sufficient to heat and blow out the host-galaxy gas content. By including AGN feedback in numerical simulations and semi-analytical models of galaxy evolution, good agreement with the observations has been obtained, such as the suppression of star formation at the highest stellar masses, which appears necessary to recover the properties of the local galaxy population (e.g. \citealt{DiMatteo2005}, \citealt{Croton2006}, \citealt{Schaye2015}, \citealt{Manzoni2021}). However, the impact AGN might have on their hosts and on the star formation activity is still a matter of numerous investigations (e.g. \citealt{Alexander2012}, \citealt{Kormendy2013}). Previous works have studied the link between the AGN and their host galaxies, but with conflicting results. Some find that the strength of the AGN activity strongly correlates with the SFR of their host (e.g. \citealt{Mullaney2012}, \citealt{Chen2013}, \citealt{Hickox2014}, \citealt{Lanzuisi2017}, \citealt{Stemo2020}, \citealt{Zhuang2020}), whereas others find that SFR is weakly or not correlated with the AGN luminosity (e.g. \citealt{Azadi2015}, \citealt{Stanley2015,Stanley2017}, \citealt{Shimizu2017}).
A dependence on redshift and luminosity seems to exist, with higher luminosity AGN (L$_{AGN}$ > 10$^{44}$ erg s$^{-1}$) and lower redshift (z < 1) galaxies exhibiting a steep correlation, while no correlation is found for lower luminosities or AGN at higher redshifts (e.g., \citealt{Shao2010}; \citealt{Harrison2012}; \citealt{Rosario2012}; \citealt{Santini2012}). These inconclusive results could be due to different binning methods, e.g. AGN luminosity is averaged in bins of host properties such as SFR and stellar mass, or the SFR is averaged in bins of AGN luminosity. As \cite{Hickox2014} point out, the different results likely arise from AGN luminosity varying on timescales shorter than that of SFR. Other factors include the sample size (e.g. \citealt{Harrison2012}, \citealt{Page2012}), selection effects, low number statistics, SFR measurements and the mutual dependence of AGN luminosity and SFR on stellar mass (\citealt{Harrison2017}). Most star-forming galaxies show a tight correlation between the star formation rate (SFR) and the stellar mass, referred to as the Main Sequence (MS) of star formation (e.g. \citealt{Daddi2007}, \citealt{Elbaz2007}, \citealt{Rodighiero2011}, \citealt{Whitaker2012}, \citealt{Speagle2014}, \citealt{Schreiber2015}). Early studies found that SFR increases linearly with stellar mass, with a normalization that varies with redshift (as well as with the choice of initial mass function, IMF, and/or SFR indicator). Recent studies have found that this relation is linear for stellar masses up to $\sim$ 10$^{10}$ M$_{\odot}$ and actually flattens towards higher stellar masses (e.g. \citealt{Whitaker2014}, \citealt{Schreiber2015}). The most luminous AGN tend to reside in the most massive galaxy hosts, therefore the mutual dependence of AGN strength and SFR on stellar mass could lead to the correlation observed for AGN luminosity and SFR (e.g. as demonstrated by \citealt{Stanley2017}, \citealt{Yang2017}).
Several studies have used X-ray luminosity as an AGN strength indicator, however as discussed in \cite{Hickox2014}, it traces the instantaneous AGN activity on timescales much shorter than the timescale for star formation (>100 Myr). [OIII] luminosity instead, which is produced in the narrow line region, traces the AGN activity on longer timescales, resulting in a stronger correlation between the AGN luminosity and the SFR, as found by \cite{Zhuang2020}. Other studies explored the AGN activity comparing the host galaxy properties with those of star-forming galaxies. Some find that AGN host galaxies mainly lie above or on the MS of galaxies (e.g. \citealt{Silverman2009}, \citealt{Santini2012}; \citealt{Mullaney2012}), whereas others find that most AGN host galaxies are below the MS, suggesting that AGN activity might regulate star formation inside their host galaxies through feedback mechanisms (e.g. \citealt{Bongiorno2012}; \citealt{Mullaney2015}, \citealt{Shimizu2015}). These discordant results can be due to different AGN selection techniques and SFR indicators. Measuring AGN and star formation activity in these systems is therefore crucial to determine whether these processes are causally linked or not. AGN can be identified at different wavelengths, in the X-rays, mid-IR, radio and optical bands. The central source ionizes the gas located at kiloparsec scale distances, showing characteristic emission-line intensity ratios discernible from those coming from normal star-forming regions. Therefore one simple and physical method for classifying AGN is to examine their emission line ratios. AGN can be classified into two classes depending on whether the central engine is viewed directly (Type I) or is obscured by a dusty torus (Type II) (e.g. \citealt{Antonucci1993}, \citealt{Urry2000}).
The spectra of Type I AGN show broad permitted emission lines (full width at half maximum (FWHM) $\ge$ 2000 km/s), originating from the so-called broad line region (BLR); on the contrary, those of Type II AGN show narrow permitted and forbidden lines. Type II AGN can be identified in spectroscopic surveys by using the ratio of specific emission lines such as [OIII] over H$\beta$ and [NII] or [SII] over H$\alpha$ up to z$\sim$0.5 and [OII] over H$\beta$ at z$\geq$ 0.5 up to z$\sim$1. In Type I AGN the optical continuum is dominated by non-thermal emission, making it a challenge to study the host galaxy properties. We have therefore focused our analysis on Type II AGN. The SFR can be estimated from the spectral energy distribution (SED) of an AGN, however the energy output can affect the entire SED, contaminating the SFR indicators usually used for star-forming galaxies. Broadband SEDs and the infrared band are frequently used to calculate SFRs for X-ray selected AGN (e.g. \citealt{Mullaney2012}, \citealt{Stemo2020}). The AGN contribution to the infrared luminosity, if not taken into account, would overestimate the infrared-based SFR (\citealt{Zhuang2018}) as well as other SFR indicators (e.g. \citealt{Azadi2015}, \citealt{Ho2005}). Optical spectral features can be used to measure the stellar properties of host galaxies, such as the 4000 \AA\ break, the strengths of the H$\delta$ absorption (e.g. \citealt{Kauffmann2003c}) and the [OII] emission line (e.g. \citealt{Ho2005}, \citealt{Zhuang2019}). Indeed these have been extensively used to measure the properties of statistical samples of AGN host galaxies (e.g. \citealt{Kauffmann2003a}; \citealt{Silverman2009}; \citealt{Ho2005}).
\cite{Kauffmann2003a} have analysed a large sample of Type II AGN galaxies selected from the Sloan Digital Sky Survey (SDSS) at low redshift 0.02 < z < 0.3, and showed that AGN are typically hosted by massive galaxies ($>3\times 10^{10}$ M$_{\odot}$) with properties similar to ordinary early-type galaxies, with spectral signatures of young stellar populations (10$^8$-10$^9$ yr) in AGN exhibiting high [OIII] luminosity (L$_{[OIII]}$>10$^7$ L$_{\odot}$). Analysing SDSS DR7 galaxies which also include Type II AGN and LINERs, \cite{Leslie2016} show that AGN activity plays an important role in quenching star formation in massive galaxies. Indeed they find that the SFR in these objects is below the value expected according to the MS. \cite{Ho2005} used [OII]$\lambda$3727 as a tracer of ongoing star formation in a statistical sample of AGN, finding that optically selected AGN host galaxies exhibit low levels of SFR despite their abundant molecular gas content. This finding suggests that such systems are less efficient in forming stars with respect to galaxies with similar molecular content, possibly due to the activity of the central nucleus. We extend such analysis to higher redshift and to stellar masses lower than the typical values probed by the SDSS survey. Using the VIPERS (e.g. \citealt{Guzzo2014}, \citealt{Garilli2014}, \citealt{Scodeggio2018}) and VVDS spectroscopic surveys (e.g. \citealt{LeFevre2013}), we aim at investigating whether the SFR of the host galaxies, relative to that expected at a given stellar mass and redshift for a normal star-forming galaxy, changes as a function of AGN power and galaxy stellar mass, in a statistical sample of Type II AGN galaxies at 0.5 $\le$ z $\le$ 0.9, selected on the basis of their optical emission lines (\citealt{Lamareille2010}). The paper is organized as follows. In Sec. \ref{sec:vipers} and \ref{sec:vvds} we summarize the VIPERS and VVDS survey properties; in Sec.
\ref{sec:analysis} the spectroscopic analysis, the Type II AGN sample selection, and the SED fitting analysis are presented; Sec. \ref{sec:results} discusses the properties of Type II AGN host galaxies in the SFR-stellar mass plane, with the discovery of two distinct populations, along with their spectral properties and tentative evidence of AGN feedback quenching star formation. Sec. \ref{sec:summary} provides a summary of the paper. Throughout this work, we assume a standard cosmological model with $\Omega_M$=0.3, $\Omega_{\Lambda}$=0.7, and H$_0$=70 km s$^{-1}$ Mpc$^{-1}$. \section{The sample} The goal of the present paper is to select and study the properties of narrow emission line AGN at intermediate redshifts (0.5 < z < 0.9). For this purpose, we have collected spectroscopic and photometric data from the VIPERS survey (\citealt{Guzzo2014}, \citealt{Garilli2014}, \citealt{Scodeggio2018}) and the VVDS survey (\citealt{LeFevre2005}). The resulting sample is referred to as the VIMOS sample throughout the paper. In the local universe, the ionization source, either AGN or star formation, can be identified by using the intensity ratios of emission lines such as [OIII]$\lambda$5007, H$\beta$, H$\alpha$, [NII]$\lambda$6583 and [SII]$\lambda\lambda$6717,6731 through specific diagnostic diagrams (\citealt{Baldwin1981}, BPT), which are accessible with ground-based optical telescopes up to z$\le$0.5. At higher redshift, however, the H$\alpha$, [NII] and [SII] lines are redshifted into the NIR range and can no longer be used, and alternative diagrams have therefore been proposed. It is possible to use the [OII] emission line doublet, which enters the optical spectra at z $\geq$ 0.5, and the optical diagnostic diagram originally proposed by \cite{Rola1997} and further improved by \cite{Lamareille2010}, based on the ratios [OIII]/H$\beta$ vs. [OII]/H$\beta$ (also known as the blue diagram).
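As an illustration, the blue-diagram demarcations of \cite{Lamareille2010} (given explicitly in Sec. \ref{sec:selection}) can be turned into a simple classifier. This is a schematic sketch, not the pipeline actually used here; in particular, the handling of points to the right of the divergence of the SFG/AGN curve (log [OII]/H$\beta$ $\ge$ 0.92), which we place on the AGN side, is our own assumption.

```python
import math

def classify_blue_diagram(log_oiii_hb, log_oii_hb):
    """Schematic classification in the [OIII]/Hbeta vs [OII]/Hbeta (blue)
    diagram, using the Lamareille (2010) demarcations on EW ratios."""
    x, y = log_oii_hb, log_oiii_hb
    # SFG/AGN boundary; the curve diverges at x = 0.92, so points beyond
    # it are treated as AGN-side (an assumption of this sketch).
    agn_boundary = 0.11 / (x - 0.92) + 0.85 if x < 0.92 else -math.inf
    if y > agn_boundary:
        # Seyfert 2 / LINER demarcation line
        return "Seyfert 2" if y > 0.95 * x - 0.4 else "LINER"
    # below the AGN curve: SF/AGN mixing region for log [OIII]/Hbeta > 0.3
    return "SF/AGN candidate" if y > 0.3 else "star-forming"
```

For example, a source with log [OIII]/H$\beta$ = 0.8 and log [OII]/H$\beta$ = 0.2 falls in the Seyfert 2 region.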
\subsection{VIPERS survey}\label{sec:vipers} The VIPERS spectroscopic survey was designed to sample galaxies at redshift 0.5 $\lower.5ex\hbox{\ltsima}$ z $\lower.5ex\hbox{\ltsima}$ 1.2, selected from the Canada-France-Hawaii Telescope Legacy Survey Wide (CFHTLS Wide) over the W1 and W4 fields (\citealt{Guzzo2014}, \citealt{Garilli2014}, \citealt{Scodeggio2018}). The observations were carried out with the ESO VIsible Multi-Object Spectrograph (VIMOS) on Unit 3 of the ESO Very Large Telescope (VLT), using the low resolution red grism (R $\sim$ 210) and a slit width of 1 arcsec, covering the spectral range 5500-9500 \AA\ with a dispersion of 7.14 \AA\ per pixel. To achieve a useful spectral quality in a limited exposure time, a bright magnitude limit of i$\rm_{AB}$ \lower.5ex\hbox{\ltsima} 22.5 was adopted, while low redshift (z $<$ 0.5) galaxies were removed using a color-color selection in the (r-i) vs. (u-g) plane. Full information regarding observations, data reduction and selection criteria can be found in \cite{Garilli2014} and \cite{Guzzo2014}. In this work, we use the final VIPERS data from the Public Data Release 2 (PDR-2, \citealt{Scodeggio2018}), containing 91507 galaxies with a measured redshift. To collect a reliable sample of sources hosting AGN, we adopted the so-called blue diagram (\citealt{Lamareille2010}). From the PDR-2 VIPERS catalogue, we selected sources with highly reliable [OIII]$\lambda$5007, [OII]$\lambda$3726 and H$\beta\lambda$4861 ([OIII], [OII] and H$\beta$ hereafter) line measurements, i.e. satisfying the following constraints: the distance between the expected position and the Gaussian peak must be within 7 \AA\ ($\sim$ 1 pixel); the FWHM of the line must be between 7 and 22 \AA\ (from 1 to 3 pixels); the Gaussian amplitude and the observed peak flux must differ by no more than 30\%; and the equivalent width (EW) must be detected at 3.5$\sigma$, or the flux at $\lower.5ex\hbox{\gtsima}$ 8$\sigma$. The final VIPERS sample consists of 7125 galaxies.
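The line-quality constraints above can be summarized as a single filter; the following sketch uses the thresholds quoted in the text, while the variable names are ours, not those of the VIPERS pipeline.

```python
def passes_vipers_line_cuts(offset_A, fwhm_A, amp, peak_flux,
                            ew, ew_err, flux, flux_err):
    """Sketch of the VIPERS emission-line quality cuts described in the text."""
    if abs(offset_A) > 7.0:            # peak within 7 A (~1 pixel) of expected position
        return False
    if not (7.0 <= fwhm_A <= 22.0):    # FWHM between 1 and 3 pixels
        return False
    if abs(amp - peak_flux) > 0.3 * peak_flux:  # amplitude vs observed peak within 30%
        return False
    # EW detected at >= 3.5 sigma OR line flux at >= 8 sigma
    return ew >= 3.5 * ew_err or flux >= 8.0 * flux_err
```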
\subsection{VVDS survey}\label{sec:vvds} We complemented the VIPERS data with the VIMOS VLT Deep Survey (VVDS). This survey was designed to study the evolution of galaxies, large scale structures and AGN in the redshift range 0 $<$ z $<$ 6.7, using the VIMOS spectrograph with the same instrument configuration as in VIPERS. VVDS is the combination of magnitude-limited surveys: WIDE, which covers three fields (1003+01, 1400+05, 2217+00) down to I$\rm_{AB}$=22.5; DEEP, targeting the 0226-04 and ECDFS fields down to I$\rm_{AB}$=24; and Ultra-Deep (0226-04 field), in the magnitude range 23 < I$\rm_{AB}$ < 24.75. We have used the final VVDS dataset (\citealt{LeFevre2013}), collecting sources with reliable redshift estimates (redshift flag z$\rm_{flag}=2,3,4$, corresponding to a 75\% to 100\% probability that the redshift is correct). Furthermore, we focused on targets with [OIII] and H$\beta$ fluxes detected at $\ge$ 5 and 2$\sigma$, respectively, and FWHM > 7 \AA\ (1 pixel), which ensures a clean sample, as confirmed by visual inspection. The final VVDS sample consists of 1663 galaxies. The requirement to have both [OII] and [OIII] in the observed spectrum limits our sample to the redshift range 0.5 < z < 0.9. Applying the above selections, our starting VIMOS sample comprises 8788 galaxies. \section{Analysis}\label{sec:analysis} \subsection{Spectroscopic analysis} In order to obtain uniform and precise measurements of line fluxes and equivalent widths, we subtracted the stellar continuum, with its absorption lines, from the galaxy spectra, using the penalized pixel fitting public code (pPXF; \citealt{Cappellari2012}). Specifically, the spectra are fitted with a linear combination of stellar template spectra from the MILES library (\citealt{Vazdekis2010}, included in the software package), which contains single stellar population synthesis models covering the full optical range with a resolution of FWHM=2.54 \AA.
We convolved the template spectra with a Gaussian kernel in order to match the spectral resolution of the observed VIMOS galaxy spectra, which have a lower resolution. We included low order multiplicative polynomials to adjust the continuum shape of the templates to the observed spectrum. In the fitting procedure, the spectra are shifted to the rest frame and strong emission features are masked out. The pPXF best-fit model spectrum is chosen through $\chi^2$ minimization. Objects whose best-fit reduced $\chi^2$ lies beyond 1$\sigma$ of the $\chi^2$-distribution were discarded ($\sim$19\%). The residual spectrum, obtained by subtracting the best-fit stellar model from the observed spectrum of each target, is then used to characterize the emission line features. As mentioned in Sec. \ref{sec:vipers}, we used [OII], [OIII] and H$\beta$ to identify Type II AGN through optical diagnostic tools. For each spectrum, we adopted as systemic redshift the one estimated from the [OIII]$\lambda$5007 emission line, and we performed a fit\footnote{The spectral analysis was performed with the python routine {\tt{scipy.optimize.curve\_fit}}.} of the stellar-subtracted spectra shifted to the rest-frame. We separately fit two spectral regions, focusing on the [OIII]-H$\beta$ lines and on the [OII] doublet. We adopted a linear function to model possible continuum residuals, while Gaussian components were used to reproduce the emission lines. We fixed the wavelength separations between the [OIII]$\lambda$5007, [OIII]$\lambda$4959 and H$\beta$ lines, and tied their broadening. The flux ratio of the [OIII] doublet is fixed to 1:3, according to the atomic parameters (\citealt{Osterbrock2006}). The [OII] emission line doublet is unresolved in our spectra and we fit its profile with a single Gaussian model, with three free parameters (normalization, centroid and sigma).
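The single-Gaussian fit of the unresolved [OII] doublet can be sketched with {\tt{scipy.optimize.curve\_fit}}, the routine quoted in the footnote; the spectrum below is synthetic and all numerical values are illustrative, not survey data.

```python
import numpy as np
from scipy.optimize import curve_fit

def oii_model(wave, cont0, slope, norm, centroid, sigma):
    """Linear continuum residual + single Gaussian for the unresolved [OII] doublet."""
    return cont0 + slope * (wave - 3727.0) + \
        norm * np.exp(-0.5 * ((wave - centroid) / sigma) ** 2)

# synthetic rest-frame stellar-subtracted spectrum (illustrative values)
rng = np.random.default_rng(0)
wave = np.arange(3650.0, 3810.0, 1.0)
truth = (0.5, 0.0, 4.0, 3727.0, 3.0)
flux = oii_model(wave, *truth) + rng.normal(0.0, 0.05, wave.size)

# fit with three Gaussian parameters (norm, centroid, sigma) plus the continuum
p0 = (0.4, 0.0, 3.0, 3726.0, 2.5)
popt, pcov = curve_fit(oii_model, wave, flux, p0=p0)
line_flux = popt[2] * popt[4] * np.sqrt(2.0 * np.pi)  # integrated Gaussian flux
```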
From the best-fit models, we derive the line fluxes and equivalent widths as spectral parameters. We note that 73\% of the VVDS sample presented here was already analysed by \cite{Lamareille2009}. We checked the consistency between our line measurements and those presented in \cite{Lamareille2009}, finding a fair agreement, with median absolute EW differences of 0.8 \AA, 2.2 \AA\ and 0.3 \AA\ for [OIII], [OII] and H$\beta$, respectively. \subsection{Identification of Type II AGN}\label{sec:selection} The demarcation proposed by \cite{Lamareille2010} to separate star-forming galaxies (SFG) and Type II AGN (shown as the blue curve in Fig. \ref{fig:BPT}) is the following: \begin{equation} \rm log\frac{[OIII]}{H\beta}=\frac{0.11}{log\frac{[OII]}{H\beta}-0.92}+0.85. \label{eq:AGN} \end{equation} The boundary used to distinguish between the Type II AGN and LINER regions (shown as the red dashed line in Fig. \ref{fig:BPT}) is: \begin{equation} \rm log\frac{[OIII]}{H\beta}= 0.95 \times log\frac{[OII]}{H\beta}-0.4, \label{eq:LINER} \end{equation} and the region where SFGs are mixed with AGN (shown as the red solid line in Fig. \ref{fig:BPT}) is given by: \begin{equation} \rm log\frac{[OIII]}{H\beta} >0.3. \label{eq:composite} \end{equation} Given the wavelength distance between [OIII] and H$\beta$ on one side, and the [OII] doublet on the other, these line ratios are sensitive to reddening. \cite{Lamareille2010} have demonstrated that the use of equivalent widths instead of line fluxes minimizes this problem, even if it does not eliminate it completely. We thus use EW ratios instead of flux ratios to minimize the effect of reddening. Fig. \ref{fig:BPT} shows the blue diagram for the VIMOS sample. Our selection of Type II AGN includes 812 objects. \begin{figure}[] \includegraphics[width=1.1\columnwidth]{./figure/BPT.eps} \caption{\small{Blue diagram (\citealt{Lamareille2010}) for the VIMOS sample.
Diamonds represent Type II AGN, color-coded according to their redshift. VIMOS galaxies with reliable emission line measurements are marked with grey dots. The blue curve shows the separation between star-forming galaxies and AGN (Eq. \ref{eq:AGN}), the red dashed line that between AGN and LINERs (Eq. \ref{eq:LINER}), and the red solid line that between star-forming galaxies and SF/AGN composites (Eq. \ref{eq:composite}).}}\label{fig:BPT} \end{figure} \subsection{Ancillary data} In order to carry out our study, we need to estimate galaxy properties such as stellar masses and star formation rates for the VIMOS AGN sample. The study of the broad-band spectral energy distribution (SED) is the method most commonly adopted to derive galaxy properties. To estimate the stellar masses of the selected AGN, we collect all available photometric data and fit them with galaxy$+$AGN templates. \subsubsection{VIPERS photometry}\label{sec:photometry} The VIPERS sample has been selected from the W1 and W4 fields of the CFHTLS, which provides magnitudes in the {\it{u$^*$,g,r,i,z}} photometric bands down to i < 22.5, corrected for Galactic extinction derived from the Schlegel dust maps (\citealt{Guzzo2014}, \citealt{Moutard2016}). The following additional photometric data are available for a subset of sources: \begin{itemize} \item NIR observations are available for 98\% of the AGN sample in the Ks band in the W1 and W4 fields and in the K$\rm_{video}$ band in the W1 field (\citealt{Moutard2016}). \item The VIPERS survey is also covered by GALEX observations in the FUV and NUV bands for 17\% of the AGN sample (\citealt{Moutard2016}). \item MIR photometry is available for 13\% of the AGN targets with Spitzer, from the Spitzer Wide-area InfraRed Extragalactic survey observations in the XMM-LSS field (\citealt{Lonsdale2004}). \item Photometric information in the WISE all-sky passbands is also available for 18\% of the AGN targets (\citealt{Wright2010}, VIPERS team).
\end{itemize} \subsubsection{VVDS photometry}\label{sec:photometry_VVDS} All the VVDS fields have been observed in the B, V, R, I filters as part of the VIRMOS Deep Imaging Survey (\citealt{McCracken2003}; \citealt{LeFevre2004}) with the CFH12K imager at CFHT. The following additional photometric data are available: \begin{itemize} \item u*, g', r', i', z' photometry from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS, \citealt{Cuillandre2012}) is available for 43\% of the AGN sample, and NIR photometric information from the WIRCam InfraRed Deep Survey in the J, H, and K bands (\citealt{Bielby2012}) and from UKIDSS in the J and K filters (\citealt{Lawrence2007}) for 38\% and 44\% of the AGN sample, respectively. \item FUV and NUV photometry with the GALEX satellite (\citealt{Arnouts2005}) is available for 2\% of the AGN sample, and MIR data with Spitzer (SWIRE survey, \citealt{Lonsdale2003}) for 3\% of the sample. \item Imaging with the Advanced Camera for Surveys Wide Field Channel on board the Hubble Space Telescope is available for 4\% of the AGN sample, in the four bands $B,V,I$ and $Z$ (\citealt{LeFevre2004b}). \end{itemize} \subsection{SED fitting analysis}\label{sec:SED} We have derived stellar masses and SFRs through SED fitting of the available multiwavelength photometry (see Sec. \ref{sec:photometry}) using the Code Investigating GALaxy Emission (CIGALE; \citealt{Noll2009}, \citealt{Boquien2019}, version 2020.0).\footnote{CIGALE can also handle upper limits.} CIGALE provides a multi-component SED fit that includes multiple stellar components (old and young), dust and interstellar medium (ISM) radiation, and AGN emission. The different components are linked so as to balance the radiation absorbed at UV-optical wavelengths with that re-emitted in the FIR.
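Conceptually, the fitting procedure compares each model SED of a grid with the observed photometry and then estimates the physical properties from the likelihood distribution over the grid. The following toy sketch illustrates the idea for the stellar mass; it is our own simplification, not the actual CIGALE implementation.

```python
import math

def likelihood_weighted_estimate(observed, errors, models):
    """Toy grid-based likelihood analysis: each model is a
    (model_fluxes, stellar_mass) pair; the property estimate is the
    exp(-chi2/2)-weighted mean over the model grid."""
    weights, masses = [], []
    for fluxes, mass in models:
        chi2 = sum(((o - f) / e) ** 2
                   for o, f, e in zip(observed, fluxes, errors))
        weights.append(math.exp(-0.5 * chi2))
        masses.append(mass)
    norm = sum(weights)
    # likelihood-weighted mean of the parameter over the grid
    return sum(w * m for w, m in zip(weights, masses)) / norm
```

A model matching the data perfectly dominates the weighted mean, while strongly discrepant models contribute negligibly.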
The components used in the fitting procedure are (i) the stellar emission, which dominates the wavelength range 0.3-5 $\mu$m, (ii) the emission by cold dust, which is heated by star formation and dominates the FIR, and (iii) the AGN emission, coming from the accretion disk, peaking at UV-optical wavelengths and reprocessed by the dusty torus in the MIR. In the fitting procedure, we have fixed the redshift at the value derived from the [OIII] emission line (see Sec. \ref{sec:analysis}). The models adopted for the SED fitting are the following: \begin{itemize} \item For the stellar models we adopted a delayed star formation history (SFH), or $\tau$-model, with varying e-folding time and main stellar population age, defined as: \begin{equation} \rm SFR(t) \propto\ t \times exp(-t/\tau) \end{equation} where $\tau$ is the e-folding time of the star formation. \item The SFH is convolved with the stellar library of \cite{Bruzual2003}, assuming the Chabrier initial mass function (IMF). The metallicity is fixed to the solar value, 0.02. We set the separation between the young and old stellar populations to 10 Myr. \item Dust extinction is modelled by assuming the \cite{Calzetti2000} law. We use the color excess E(B-V)$_*$ in the range of values listed in Table \ref{tab:parameters}. We assume that old stars have a lower extinction than the young stellar populations, by a factor of 0.44 (\citealt{Calzetti2000}). \item We adopted the \cite{Dale2014} templates to model the reprocessed emission in the IR from the dust heated by stellar radiation. These templates also include the contribution from dust heated by the AGN; we used them with the AGN contribution set to 0, since this component is modelled separately with the \cite{Fritz2006} templates.
The models represent the emission from dust exposed to different ranges of radiation field intensity, and the templates are combined to model the total dust emission, with the relative contribution given by a power law distribution dM$\rm_{dust}$ $\propto$ U$^{-\alpha}$dU, with M$\rm_{dust}$ the dust mass heated by the radiation field and U the radiation field intensity. The slope $\alpha$ was left free to vary in the range listed in Table \ref{tab:parameters}. \item To parametrize the AGN emission component, we used the models from \cite{Fritz2006}, which assume isotropic emission from the central AGN and emission from the dusty torus. The law describing the dust density within the torus varies along the radial and polar coordinates: \begin{equation} \rm \rho(r,\theta) = \alpha r^{\beta} e^{-\gamma|\cos(\theta)|} \end{equation} with $\alpha$ proportional to the equatorial optical depth at 9.7 $\mu$m ($\tau_{9.7}$), and $\beta$ and $\gamma$ related to the radial and angular coordinates, respectively. We fixed the parameters $\beta$ and $\gamma$ describing the dust distribution within the torus to the values reported in Table \ref{tab:parameters}. The geometry of the torus is described by the ratio between the outer and inner radii of the torus, R$\rm_{max}$/R$\rm_{min}$, and by the opening angle of the torus. We chose typical values as found in \cite{Fritz2006}; by fixing these parameters we avoid degeneracies among the templates. It is possible to provide a range of inclination angles between the line of sight of the observer and the torus equatorial plane, $\psi$, with values ranging from 0 for Type II up to 90 for Type I AGN. Another important parameter is the fractional contribution of the AGN emission to the total IR luminosity, frac$\rm_{AGN}$. We set a wide range of values to allow AGN contributions to the IR luminosity from as low as 5\% up to 95\% of the total.
\end{itemize} The photometric data are fitted with the models, and the physical properties are then estimated through the analysis of the likelihood distribution. In Fig. \ref{fig:stellar_mass_redshift} the distribution of stellar masses of the Type II AGN is shown in different redshift bins. We probed stellar masses, Log (M$\rm_{stellar}/ M_{\odot}$), in the range $\sim$8-12, with a median (mean) value of 9.5 (10.2). \begin{figure}[] \includegraphics[width=1.1\columnwidth]{./figure/Mstar_redshift_histogram.eps} \caption{Stellar mass distribution of the Type II AGN host galaxies of the VIMOS sample in different redshift ranges.}\label{fig:stellar_mass_redshift} \end{figure} \begin{table*} \setlength{\tabcolsep}{0.05pt} \centering \begin{threeparttable} \caption{CIGALE parameters used for the SED fitting.}\label{tab:parameters} \begin{tabular}{ccc} \hline \hline Parameter & Description & Value \\ \hline & Star Formation History - Delayed Model & \\ \\ Age & Age of the main stellar population & 500, 1000, 3000, 4000, 5000,\\ && 5500, 6000, 7000, 8000, 9000 Myr \\ $\tau$ & e-folding time of the main stellar population & 0.5, 1.0, 3.0, 5.0, 10.0 Gyr \\ \hline &\cite{Bruzual2003} Stellar Emission Model&\\ \\ IMF & Initial mass function & Chabrier \\ Z & Metallicity & 0.02 \\ Separation age & Separation between the young and the old stellar population & 10 Myr \\ \hline & \cite{Calzetti2000} and \cite{Leitherer2002} Dust attenuation model &\\ \\ E(B-V) & Colour excess of the young stellar continuum light & 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3 \\ UV bump & Amplitude of the UV bump & 0.0\\ Slope & Slope delta of the power law attenuation curve & 0.0 \\ Reduction factor & Reduction factor for the color excess of & 0.44\\ & the old population compared to the young one &\\ \hline & Nebular Emission model &\\ \\ U & Ionization parameter& 10$^{-2}$ \\ f$\rm_{esc}$& Escape fraction of Lyman continuum photons & 0\%\\ f$\rm_{dust}$& Absorption fraction of Lyman continuum photons & 10\% \\
\hline & \cite{Dale2014} Dust Module &\\ \\ $\alpha$ & Slope of the power law combining the contribution of different dust templates & 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 \\ \hline & \cite{Fritz2006} AGN Module & \\ \\ R$\rm_{max}$/R$\rm_{min}$ & Ratio of the maximum to minimum radii of the dust torus & 60 \\ $\tau_{9.7}$ & Optical depth at 9.7 $\mu$m & 1.0 \\ $\beta$ & Slope of the radial coordinate & -0.5 \\ $\gamma$ & Exponent of the angular coordinate & 0.0 \\ $\Phi$ & Full opening angle of the dust torus & 100 deg \\ $\psi$ & Angle between equatorial axis and line of sight & 0.001, 10.1, 20.1, 30.1, 50.1, 70.1 \\ f$\rm_{AGN}$ & AGN fraction & 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4,\\ && 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, \\ && 0.85, 0.9, 0.95 \\ \hline \end{tabular} \end{threeparttable} \end{table*} \subsubsection{Mock analysis} The reliability of the host-galaxy stellar masses computed through the SED fitting analysis can be assessed with a mock catalogue. The basic idea is to compare the stellar masses of the mock catalogue (true values), which are known exactly, with the values estimated from the analysis of the likelihood distribution. We use an option included in CIGALE to build a mock catalogue based on the best-fit model of each object, as derived in Sec. \ref{sec:SED}. A detailed description of the mock analysis can be found in \cite{Giovannoli2011}. Briefly, the best-fit SED model of each object is modified by adding a random Gaussian-distributed error to each flux measured in the photometric bands of the dataset, with the same standard deviation as the observed uncertainty in each band. The mock catalogue is then analysed in exactly the same way as the real observations. Fig. \ref{fig:mstar_mock} shows the comparison between the stellar masses derived from the mock analysis and the values estimated for the real sample of Type II AGN.
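The mock construction described above amounts to perturbing each best-fit model flux by its observed uncertainty; a minimal sketch of the idea (our own, not the CIGALE internals):

```python
import random

def make_mock_fluxes(best_fit_fluxes, flux_errors, seed=42):
    """Perturb best-fit model fluxes with Gaussian noise whose standard
    deviation matches the observed uncertainty in each band."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    return [f + rng.gauss(0.0, e)
            for f, e in zip(best_fit_fluxes, flux_errors)]

# illustrative best-fit fluxes and per-band uncertainties (arbitrary units)
mock = make_mock_fluxes([10.0, 5.0, 2.0], [1.0, 0.5, 0.2])
```

The mock fluxes are then re-fitted exactly like the real photometry, and the recovered stellar masses are compared with the known input values.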
The estimated and true values are well correlated, with a Pearson correlation coefficient for the linear regression of $\sim$0.98, indicating that the stellar mass parameter can be consistently constrained. \begin{figure}[] \includegraphics[width=1.1\columnwidth]{./figure/mock.png} \caption{Comparison between the true value of the stellar mass as derived from the mock analysis and the value estimated by SED fitting. The grey dashed line indicates the 1:1 relation between the parameters.}\label{fig:mstar_mock} \end{figure} \subsection{SFR} From the SED decomposition it would also be possible to derive the SFR; however, the lack of FIR coverage prevents us from retrieving a reliable SFR estimate from the SED (see \citealt{Ciesla2015}). An alternative SFR indicator is the [OII]$\lambda\lambda$3726,3729 doublet. The [OII] emission line is commonly used to measure the SFR in star-forming galaxies (e.g. \citealt{Kennicutt1998}, \citealt{Hopkins2003}, \citealt{Kewley2004}), as it is strongly excited by star formation, although it suffers from dust extinction. In galaxies with an active nucleus, lines of low ionization potential such as [OII] can be excited by both star formation and AGN activity; however, it has been observed that [OII] is mainly produced by star formation (e.g. \citealt{Ho2005}, \citealt{Zhuang2019}). As discussed in \cite{Silverman2009}, the [OII]/[OIII] flux ratio decreases with increasing [OIII] luminosity, and the slope of this relation is flatter for Type II AGN than for Type I AGN (see Fig. \ref{fig:LOII_LOIII}). This difference is explained by an additional contribution to the [OII] flux in Type II AGN due to on-going star formation. Previously, \cite{Kim2006} explained the enhanced [OII]/[OIII] ratios of the Type II quasars from \cite{Zakamska2003} (median value of Log [OII]/[OIII] = $-$0.12) as due to more prevalent star formation in Type II AGN. In Fig.
\ref{fig:LOII_LOIII} we show the [OII]/[OIII] luminosity ratio as a function of [OIII] luminosity for the VIMOS sample. The median Log [OII]/[OIII] value is $-$0.14, consistent with the Zakamska et al. sample. We note that the line luminosities are not corrected for extinction, so the line ratios can be considered as lower limits. We also plot the best-fit linear relations for SDSS Type I (dashed line) and Type II (solid line) sources as reported in \cite{Silverman2009}. Type II AGN in the VIMOS sample exhibit a slope similar to that of the SDSS Type II AGN and slightly enhanced [OII]/[OIII] ratios compared to the SDSS sample. This finding further justifies the use of this line as a SFR indicator (e.g. \citealt{Silverman2009}, \citealt{Kim2006}). \begin{figure}[] \includegraphics[width=1.1\columnwidth]{./figure/L_OII_LOIII_vs_LOIII.eps} \caption{\small{[OII]/[OIII] luminosity ratio as a function of [OIII] luminosity for the VIMOS Type II AGN. Dashed and solid lines represent the best-fit relations for SDSS Type I and Type II AGN at z<0.3, respectively.}}\label{fig:LOII_LOIII} \end{figure} Assuming that high ionization lines such as [OIII] are mainly powered by AGN activity (e.g. \citealt{Kauffmann2003a}), we adopted this line to remove the AGN contribution from the [OII] emission. Recently, \cite{Zhuang2019} found a fairly constant [OII]/[OIII] $\sim$ 0.10 for the Type II AGN contribution according to a set of photoionization models; we therefore subtracted 10\% of the [OIII] luminosity from the [OII] line. We then derived the SFR using the calibration of \cite{Kewley2004} (rescaled by a factor of 1.7 to account for the different IMF used): \begin{equation} \rm SFR_{[OII]} = (6.58 \pm 1.65) \times 10^{-42}\, (L_{[OII]} - 0.109\,L_{[OIII]})\ M_{\odot}\, yr^{-1} \label{eq:SFR} \end{equation} where L$\rm_{[OII]}$ and L$\rm_{[OIII]}$ are in units of erg/s. We probed SFRs in the range 0.01-38 M$_\odot$/yr, with a median (mean) value of 0.8 (1.3) M$_\odot$/yr.
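In numerical form, the AGN-corrected calibration of Eq. \ref{eq:SFR} reads as follows (luminosities in erg/s; the function name is ours):

```python
def sfr_oii(l_oii, l_oiii):
    """SFR (Msun/yr) from the AGN-corrected [OII] luminosity, using the
    Chabrier-rescaled Kewley et al. (2004) calibration: 10% of the [OIII]
    luminosity is attributed to the AGN and subtracted from [OII]."""
    return 6.58e-42 * (l_oii - 0.109 * l_oiii)

# e.g. equal [OII] and [OIII] luminosities of 1e41 erg/s give ~0.59 Msun/yr
sfr = sfr_oii(1.0e41, 1.0e41)
```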
\section{Results}\label{sec:results} \subsection{SFR-stellar mass plane}\label{sec:sfr_mass} In Fig. \ref{fig:SFR_Mstar}, we show the SFR-stellar mass relation for the Type II AGN host galaxies of the VIMOS sample. We indicate the star-forming MS relation at z=0.7, the mean redshift of our sample, from \cite{Schreiber2015} (solid curve), along with its scatter (0.4 dex, dashed lines). We rescaled both the SFRs and stellar masses of \cite{Schreiber2015} by a factor of 1.7 to account for the different IMF used (Salpeter vs. Chabrier). The bulk of the VIMOS sample populates the MS region, with a fraction of AGN host galaxies on and off the MS. At high stellar masses (> 10$^{10}$ M$_{\odot}$) almost all sources lie below the MS (see Fig. \ref{fig:SFR_Mstar}). Overall, Type II AGN host galaxies show a broader distribution of SFR than star-forming MS galaxies, consistent with previous studies (e.g. \citealt{Mullaney2015}). As discussed in \cite{Bongiorno2012}, optically selected Type II sources from the zCOSMOS-bright survey show properties similar to the VIMOS sample; this is not surprising, since both surveys cover similar volumes and depths. We add this sample in Fig. \ref{fig:SFR_Mstar} as green circles. These sources are selected through the blue diagram in the redshift range 0.50 < z < 0.92, with stellar masses and SFRs derived through SED fitting. In terms of stellar masses and SFRs they span the same range as the VIMOS Type II AGN host galaxies; for these sources, too, the MS locus at high stellar masses remains under-populated with respect to what is found for star-forming galaxies, suggesting that the distribution of Type II AGN hosts could differ from that of non-AGN galaxies. We investigate whether a fraction of Type II AGN could be missed by the adopted selection criteria. One possibility is that the ``missing'' fraction could reside in the composite locus of SF-Type II AGN.
Indeed, a high level of star formation can enhance the H$\beta$ flux, moving a Type II AGN down into the composite locus of the blue diagram. We therefore investigated the host-galaxy properties of the composite sources, as defined by the blue diagram, in the VIPERS sample. We collected their SED-based stellar masses and measured their SFR using the [OII]$\lambda$3727 emission line. We found that the composite galaxies actually exhibit stellar masses < 10$^{10}$ M$_\odot$ and SFRs slightly enhanced with respect to those observed for Type II AGN in a similar mass range. This indicates that the missing fraction of Type II AGN is not classified as composite. We also compare the VIMOS sample with the DR12 BOSS sample of Type II AGN (\citealt{Thomas2013}). Specifically, we used the galaxy properties (i.e. emission line measurements, BPT classification and stellar masses) derived by the Portsmouth group for BOSS DR12 (\citealt{Thomas2013}), who applied the blue diagram criterion to select a Type II AGN sample at redshift 0.5 < z < 0.9. We restricted our analysis to galaxies with [OIII], [OII] and H$\beta$ fluxes detected above 2$\sigma$ (as defined by the amplitude-over-noise ratio parameter). For consistency with the VIMOS sample, we measured the SFRs of the BOSS DR12 Type II AGN host galaxies from the [OII]$\lambda\lambda$3727,3729 emission line fluxes, subtracting the AGN contribution by using the [OIII]$\lambda$5007 line flux (see Eq. \ref{eq:SFR}). Stellar masses are calculated by the Portsmouth team from the best-fit SED (\citealt{Maraston2013}). They used two types of templates to derive stellar masses, i.e. passively evolving and star-forming models, based on the galaxy types expected according to the BOSS color cut. As presented in \cite{Thomas2013}, BOSS Type II AGN preferentially show a g-r color (strongly dependent on the star formation history of galaxies) intermediate between those observed for luminous red and star-forming galaxies.
Here we used stellar masses from the star-forming model, noting that they could be underestimated. A dust-extinction effect could also play a role: in highly star-forming (and high stellar mass) galaxies hosting an AGN, the H$\beta$ emission line could remain undetected because of dust attenuation, and the galaxy would then be excluded from our sample selection. This would result in a missing fraction of galaxies with high stellar mass. We therefore compare our sample with the Type II AGN from the BOSS survey. The bulk of the BOSS Type II AGN is found in host galaxies with stellar mass > 10$^{10}$ M$_\odot$, owing to the BOSS color cut favouring the most massive galaxies, and 17\% of them occupy the locus of MS and starburst galaxies. Despite the larger statistics compared to the VIMOS targets, the two samples are overall consistent, considering the small area covered by the VIPERS and VVDS surveys (24 deg$^2$ for VIPERS; 8.7 deg$^2$, 0.74 deg$^2$, and 512 arcmin$^2$ for VVDS-Wide, Deep and Ultra Deep, respectively) with respect to the BOSS survey ($\sim$10000 deg$^2$). \begin{figure}[] \includegraphics[width=1.1\columnwidth]{./figure/SFR_Mstar.eps} \caption{Star formation rate (SFR) as a function of stellar mass (M$\rm_{stellar}$) for the VIMOS sample (blue diamonds). The solid line represents the star-forming main sequence as found by \cite{Schreiber2015} and dashed lines mark the scatter of 0.4 dex. Upper limits are shown as grey diamonds.
Optically selected Type II AGN from the zCOSMOS survey (\citealt{Bongiorno2012}) and from BOSS DR12 are shown as green circles and grey contours, respectively.}\label{fig:SFR_Mstar} \end{figure} \subsection{Correlation between SFR offset from the Main Sequence and AGN luminosity}\label{sec:populations} AGN feedback could be responsible for the quenching of star formation in massive galaxies, with the AGN efficiency in driving outflows increasing with AGN luminosity (e.g. \citealt{Menci2008}, \citealt{Faucher2012}, \citealt{Hopkins2016}). To test this point, we examined the relationship between the distance from the MS and the AGN power. In Fig. \ref{fig:SFR_LOIII} we show the relative offset of the SFR from the MS relation of \cite{Schreiber2015} as a function of [OIII] luminosity, which can be considered a proxy of the AGN power. VIMOS Type II AGN host galaxies show different star formation properties with a clear dependence on stellar mass (color-coded), forming two distinct groups on this diagram: at fixed AGN power, Type II AGN host galaxies with M$\rm_{stellar}$ < 10$^{10}$ M$_{\odot}$ show higher star formation activity than more massive galaxies. To define the boundary of this bimodality, we divided our targets into five subsamples of different AGN power (indicated in Fig. \ref{fig:SFR_LOIII}, left panel). In each luminosity bin, we used a Gaussian kernel density estimation (KDE) to estimate the probability density function of the SFR offset and derive the separation between the two subsamples (see right panel of Fig. \ref{fig:SFR_LOIII}). We proceeded as follows: i) we fit two Gaussians to reproduce the bimodality of the KDE functions; ii) we derived the intersection point of the two best-fit Gaussians in each bin; and iii) we performed a linear regression on the intersection points. The best-fit line to these points, i.e. 0.54$\times$ Log (L$\rm_{[OIII]}$/{erg s$^{-1}$}) - 23.22, is shown in Fig.
\ref{fig:SFR_LOIII}, left panel, as a black solid line. Above the boundary line, 64\% of the VIMOS subsample occupies the same region as the star-forming (i.e. log($\rm SFR_{[OII]}/SFR_{MS}$) within $\pm$0.4 dex) and starburst galaxies (i.e. log($\rm SFR_{[OII]}/SFR_{MS}$) > 0.4), with stellar mass mostly below 10$^{10}$ M$_{\odot}$, and the remaining 36\% have SFR below the bulk of the MS galaxies but above the quiescent locus (sub-MS, $-$1.3 < log($\rm SFR_{[OII]}/SFR_{MS}$) < $-$0.4). Hereafter we will refer to this subsample as high-SF Type II AGN. Below the line threshold, instead, the diagram is occupied by massive targets: 3\% lie along the MS, 51\% in the sub-MS, and 46\% in the quiescent regime (i.e. log($\rm SFR_{[OII]}/SFR_{MS}$) < $-$1.3, e.g. \citealt{Aird2019}). Hereafter we will refer to this subsample as low-SF Type II AGN. Since our sample does not have H$\alpha$ and H$\beta$ within the observed spectral window, we could not compute the Balmer decrement and, as a consequence, did not correct the line luminosities for extinction. We explored the possible effect the extinction could have on the presence of the two populations. \cite{RosaGonzalez2002} show that the excess in the [OII]-based and UV-based SFR estimates is mainly due to an overestimation of the extinction resulting from the effect of underlying stellar Balmer absorptions in the measured emission line fluxes. They therefore constructed unbiased SFR estimators, which statistically include this effect. \cite{Kewley2004} found a strong correlation between the intrinsic [OII] luminosity and the color excess for the galaxies in the Nearby Field Galaxies Survey, deriving a direct relation between intrinsic and observed [OII] luminosity, although they emphasize that the relation should not be blindly applied to other galaxies.
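The boundary-finding procedure described above (a two-Gaussian fit to the KDE of the SFR offsets in each luminosity bin, followed by locating the intersection of the two components) can be sketched in Python as follows. This is an illustrative sketch under our own choices of function names and initial guesses, not the exact code used for the analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gaussian_kde


def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians used to model the bimodal KDE."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))


def ms_offset_boundary(offsets):
    """Fit a two-Gaussian model to the KDE of the SFR offsets in one
    luminosity bin and return the intersection of the two components."""
    grid = np.linspace(offsets.min(), offsets.max(), 500)
    dens = gaussian_kde(offsets)(grid)
    # Illustrative initial guesses: peaks near the lower/upper quartiles.
    p0 = [dens.max(), np.percentile(offsets, 25), 0.3,
          dens.max(), np.percentile(offsets, 75), 0.3]
    popt, _ = curve_fit(two_gauss, grid, dens, p0=p0, maxfev=10000)
    a1, mu1, s1, a2, mu2, s2 = popt
    # Intersection: point between the two means where the components cross.
    lo, hi = sorted([mu1, mu2])
    between = (grid > lo) & (grid < hi)
    g1 = a1 * np.exp(-0.5 * ((grid - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((grid - mu2) / s2) ** 2)
    return grid[between][np.argmin(np.abs(g1 - g2)[between])]
```

A linear regression over the per-bin intersection points then yields the boundary line quoted in the text.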
We have tested what happens to the distribution shown in Fig. \ref{fig:SFR_LOIII} when applying either the extinction correction by \cite{Kewley2004}, their eq. 18, or the recipe by \cite{RosaGonzalez2002}. In both cases, we still find the observed separation between low- and high-SF AGN, and we can therefore conclude that the bimodality does not depend on the extinction. In the following, we investigate whether the offset from the MS is related to the AGN power, by comparing it with the [OIII] luminosity. In Fig. \ref{fig:SFR_LOIII}, we show the median Log (SFR/SFR$\rm_{MS}$) in bins of [OIII] luminosity of the high- and low-SF subsamples, with brown and orange circles respectively, where the error bars indicate the 25th and 75th percentiles. We find a correlation between the relative offset of the SFR from the MS and the AGN power for both populations: at increasing AGN luminosities, Type II AGN hosts tend to have higher SFR. As a positive correlation exists between the [OII] and [OIII] luminosity, as a counter-check we performed the same analysis using the SFR as derived from the SED fitting, obtaining similar results but with a shallower slope, thus confirming the reliability of our findings (see stars in Fig. \ref{fig:SFR_LOIII}). Previous works examined the connection between the SFR and AGN activity, with controversial results claiming strong to weak or absent relations (e.g. \citealt{Azadi2015}, \citealt{Chen2013}, \citealt{Stanley2015,Stanley2017}, \citealt{Harrison2012}). The origin of these discrepancies could be related to sample selection effects, methods of estimating the SFR and the AGN luminosity, as well as the number statistics of the sample (e.g. \citealt{Harrison2012}). \cite{Harrison2012} reported that the SFR of z=1-3 AGN is independent of X-ray luminosity, used as an indicator of AGN activity.
This result is in contrast with that found by \cite{Page2012}, and the authors suggest that poor statistics is at least partially responsible for the disagreement at high luminosity between their work and that of \cite{Page2012}. \cite{Stanley2015} used 2000 X-ray detected AGN to investigate the SFR and AGN luminosity relation, in the redshift range 0.2 $<$ z $<$ 2.5 and with X-ray luminosity 10$^{42}$ < L$_{2-8\rm keV}$ < 10$^{45.5}$ erg s$^{-1}$. They used infrared SED decomposition (AGN + star formation components) to derive IR-based SFRs and X-ray luminosity as a probe of AGN power, finding a broadly flat SFR-AGN luminosity relation at all redshifts and all the AGN luminosities investigated. They argue that the observed flat relation is probably due to short time-scale variations in AGN luminosity (probed by X-ray luminosity), which can wash out the long-term relationship between SFR and AGN activity. \cite{Masoura2021} found a positive correlation between the MS offset and the X-ray luminosity of a sample of X-ray selected Type II AGN at 0.03 < z < 3.5. \cite{Zhuang2020} analysed a sample of 5800 Type 1 and 7600 Type 2 AGN at z < 0.35 to study the star formation activity based on the [OII] and [OIII] emission lines, finding a tight linear correlation between AGN luminosity (probed by [OIII] emission) and SFR. The [OIII] AGN indicator probes the AGN activity on longer timescales than X-ray luminosity, which traces the instantaneous AGN strength, and therefore the use of [OIII] may result in a stronger correlation between SFR and AGN luminosity. The positive correlation found between SFR and AGN activity supports the idea that the AGN and the star-formation activity in the host galaxy are sustained by a common fuelling mechanism, the large amounts of cold gas, and that the growths of the stellar mass and of the SMBH proceed concurrently (e.g. \citealt{Silverman2009}, \citealt{Santini2012}). However, at high stellar masses Type II AGN host galaxies show systematically lower SFR values.
This could indicate that the process of AGN growth is linked to the process of star formation in AGN host galaxies (e.g. \citealt{Matsuoka2015}, \citealt{Mullaney2015}, \citealt{Shimizu2015}). The AGN could work against star formation, decreasing the gas reservoir in several ways, such as mechanical heating and powerful outflows, moving the host galaxies of Type II AGN down to the quiescent locus in the SFR-M$\rm_{stellar}$ plane. \begin{figure*} \begin{minipage}{.5\linewidth} \centering \includegraphics[scale=0.45]{figure/SFR_SFRMS_LOIII_median.eps} \end{minipage}% \begin{minipage}{.5\linewidth} \centering \includegraphics[scale=0.38]{figure/density.eps} \end{minipage} \caption{(left panel) Distance from the MS (parameterised as SFR$\rm_{[OII]}/SFR_{MS}$) as a function of [OIII] luminosity (proxy of AGN power) for the VIMOS sample (diamonds), color-coded according to the stellar mass. Median values of the [OII]-based and SED-based SFR offsets in bins of [OIII] luminosity are represented as brown and orange circles and stars, below and above the line threshold, respectively (see the main text for details). The dashed lines delimit the locus of the MS ($\pm$0.4 dex). (right panel) Probability density function of the SFR offset in each luminosity bin (black solid lines), with superimposed the best-fit Gaussian components (black dashed lines) which reproduce the observed bimodality (see Sec. \ref{sec:populations}).}\label{fig:SFR_LOIII} \end{figure*} \subsection{Low-SF and high-SF Type II AGN properties} \subsubsection{[OIII] line shape} About 50\% of our Type II AGN host galaxies are located below the MS. Their lower than expected SFRs might be evidence of on-going quenching. Powerful AGN radiation is often invoked as one of the main mechanisms to halt star formation by ejecting the gas necessary to fuel it. The ejected gas can be traced at all scales, and in various gas components: blue-shifted absorption lines from the accretion disc (i.e. ultra-fast outflows; e.g.
\citealt{Tombesi2010}), blue-shifted emission line components produced by ionized gas in the broad line region, BLR (e.g., CIV; \citealt{Vietri2020} and references therein) and narrow line region, NLR (e.g., [OIII]; \citealt{Harrison2016}), and blue-shifted lines produced by outflowing cold molecular gas on galactic scales (e.g., CO; \citealt{Polletta2011}, and many others). Here, we investigate whether we find any evidence of AGN-driven outflowing gas by analysing the profile of the [OIII] emission line. Each population (as defined in Sec. \ref{sec:populations}) is divided into [OIII] luminosity bins as done in Sec. \ref{sec:sfr_mass}. We performed a median spectral stack, using the IRAF task {\it{scombine}}, resampling the spectra to a rest-frame wavelength grid starting from 3520 \AA\ with a step size of 4.29 \AA, corresponding to the wavelength resolution at redshift 0.7, the mean redshift of the sources studied in this paper. We also normalized each spectrum to the continuum from 4500 \AA\ up to 4600 \AA, where the spectrum is free of strong emission and absorption lines. We analysed the line profile of the [OIII]$\lambda$4959,5007 doublet, H$\beta$ and the [OII] doublet by fitting the lines with two models, considering a single and a double Gaussian, to search for a possible second broad and shifted component, indicative of the presence of an outflow. We used the same constraints as discussed in Sec. \ref{sec:analysis}. We adopted the double Gaussian model as best-fit when it satisfies the Bayesian Information Criterion (BIC, \citealt{Schwarz1978}), which uses differences in $\chi^2$ that penalize models with more free parameters. For both models we estimated the BIC, defined as $\chi^2$ + $k$ ln($N$), with $N$ the number of data points and $k$ the number of free parameters of the model. For each stacked spectrum we derived $\Delta$BIC = BIC$\rm_1$ $-$ BIC$\rm_2$, where BIC$\rm_1$ and BIC$\rm_2$ are derived from the models with one and two Gaussian profiles, respectively.
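As a minimal illustration, the BIC comparison between the one- and two-Gaussian fits can be written as follows; the default numbers of free parameters and the threshold value are our own illustrative assumptions, not a transcription of the actual fitting code.

```python
import math


def bic(chi2, k, n):
    """Bayesian Information Criterion: chi^2 + k ln(N) (Schwarz 1978)."""
    return chi2 + k * math.log(n)


def prefer_double_gaussian(chi2_1, chi2_2, n, k1=3, k2=6, threshold=10.0):
    """Favour the two-Gaussian model only when
    Delta BIC = BIC_1 - BIC_2 is at least `threshold`;
    the extra free parameters of model 2 are penalized by k ln(N)."""
    delta_bic = bic(chi2_1, k1, n) - bic(chi2_2, k2, n)
    return delta_bic >= threshold
```

A large improvement in $\chi^2$ is thus required before the more flexible double-Gaussian model is accepted.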
We favoured the fit with a single Gaussian profile when $\Delta$BIC < 10. \begin{figure*}[] \centering \includegraphics[width=0.45\textwidth]{./figure/OIII_profile_sup_spectra.png} \includegraphics[width=0.47\textwidth]{./figure/OIII_profile_sub_spectra.png} \includegraphics[width=0.45\textwidth]{./figure/OIII_profile_sub_bootstrap.png} \caption{Comparison of the stacked spectra, in the H$\beta$-[OIII] doublet lines wavelength range, in bins of [OIII]$\lambda$5007 luminosity above (left) and below (right) the line threshold as defined in Sec. \ref{sec:populations}. (lower panel) Stacked spectrum in the [OIII] luminosity bin Log (L$\rm_{[OIII]}/ erg\ s^{-1}$) > 42.5 of the low-SF Type II AGN with superimposed the 1$\sigma$ uncertainties estimated through a bootstrap resampling technique and the best fitting one-component Gaussian curve (black dashed line). The inset plot shows the best-fit double Gaussian model (magenta curve) and its line decomposition (green and blue Gaussian profiles refer to the narrow and broad best-fit components).}\label{fig:stack_spectra} \end{figure*} In Fig. \ref{fig:stack_spectra} we compare the spectral properties of the H$\beta$ and [OIII] doublet lines in each [OIII] luminosity bin and for each subsample. In all but the high luminosity bin of the low-SF Type II AGN hosts, the [OIII] line appears symmetric, as both the preferred single-Gaussian model and visual inspection suggest. Only in the highest luminosity bin (Log (L$\rm_{[OIII]}/ erg\ s^{-1}$) > 42.5) of the low-SF population is there a hint of asymmetry in the [OIII] line profile. Fig. \ref{fig:stack_spectra} (bottom panel) shows the spectrum (magenta line) and the best fitting single-component Gaussian (black dashed line).
To rule out that such an excess is compatible with errors, we estimated the uncertainties on the stack through a bootstrap resampling technique, creating 1000 realizations of the AGN stacked spectra with replacement, and derived the 1$\sigma$ uncertainties from the 84th and 16th percentiles of the bootstrap distribution, shown as a grey area in Fig. \ref{fig:stack_spectra} (bottom panel). Also in the fitting, a double Gaussian model is preferred, with the centroid of the second component nearly at the systemic redshift and a FWHM of 1260 km/s, after correcting for instrumental broadening (see inset plot in Fig. \ref{fig:stack_spectra}, bottom panel). This finding is in agreement with previous results that reported an increasing outflow component at increasing luminosity (e.g. \citealt{Mullaney2013}). The presence of outflowing gas in galaxies with stellar mass > 10$^{10}$ M$_{\odot}$ and their position in the SFR-stellar mass plane is qualitatively consistent with the evolutionary scenario, where the AGN is capable of driving outflows that could regulate the star formation and the baryonic content of galaxies. The line profile analysis indicates the presence of disturbed kinematics only in the high luminosity bin of the low-SF sample. Indeed, we do not find a similar result for the high-SF sample at the highest luminosity bin. This could be due either to the absence of an outflow in galaxies with stellar mass < 10$^{10}$ M$_{\odot}$ or, considering the unified model, to the fact that the outflowing material should emerge in a direction perpendicular to the plane of the obscuring torus (i.e. to our line-of-sight), resulting in a small projected velocity of the outflow and/or in a symmetric line profile, which can explain the symmetric profiles found for most of the stacked VIMOS sample (see \citealt{Mullaney2013}, \citealt{Harrison2012}).
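The bootstrap estimate of the uncertainties on the stacked spectrum can be sketched as below. This is a simplified stand-in for the actual pipeline; the array shapes and the function name are illustrative assumptions.

```python
import numpy as np


def bootstrap_stack_errors(spectra, n_boot=1000, seed=0):
    """spectra: (n_spec, n_wave) array of rest-frame, continuum-normalized
    spectra. Returns the 16th/84th percentile envelope (shape (2, n_wave))
    of the median stack over n_boot resamplings with replacement."""
    rng = np.random.default_rng(seed)
    n_spec = spectra.shape[0]
    stacks = np.empty((n_boot, spectra.shape[1]))
    for i in range(n_boot):
        idx = rng.integers(0, n_spec, n_spec)   # resample with replacement
        stacks[i] = np.median(spectra[idx], axis=0)
    return np.percentile(stacks, [16, 84], axis=0)
```

The grey band in the bottom panel of Fig. \ref{fig:stack_spectra} corresponds to such a 16th-84th percentile envelope.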
In case the outflow signature is unresolved, it could be that the outflows are not quenching the star formation in these systems or that the timescale is actually longer than the stage at which these objects are seen. Higher resolution spectroscopy becomes necessary to characterize subtle spectral features and disentangle between gravitational and non-gravitational motions, to get a deep insight into the AGN feedback in those systems. \subsubsection{Black hole masses and Eddington ratios} Previous studies have shown that there is a correlation between strong blue wings and large FWHM of line profiles originating in the NLR and the Eddington ratio, which describes the accretion mechanism of an AGN (e.g. \citealt{Woo2016}). The Eddington ratio is defined as: \begin{equation} \rm \lambda_{Edd}= \frac{L_{Bol}}{L_{Edd}} \end{equation} where L$\rm_{Edd}$ (= 1.27 $\times$ 10$^{38}$ M$\rm_{BH}$/M$_{\odot}$ erg s$^{-1}$, with M$\rm_{BH}$ indicating the BH mass) is the limit at which the outward radiation pressure from the accreting matter balances the inward gravitational pressure exerted by the BH, and $L\rm_{Bol}$ is the bolometric luminosity. BH masses for Type I AGN are usually estimated indirectly by using the virial theorem, which links the BH mass to the BLR radius and the gas velocity dispersion. Considering the H$\alpha$ emission line, the single-epoch relation can be written as (\citealt{Baron2019}): \begin{equation}\label{eq:bh_mass} \rm log \frac{M_{BH}}{M_{\odot}}=log\,\epsilon + 6.90 +0.54 \times log \frac{\lambda L_{\lambda} 5100}{10^{44}\ erg\ s^{-1}} +2.06 \times log \frac{FWHM_{H\alpha}^{BLR}}{10^3\ km\ s^{-1}} \end{equation} with $\epsilon$ the virial shape factor, $\lambda L_{\lambda} 5100$ the monochromatic AGN luminosity at 5100 \AA\ and FWHM$\rm_{H\alpha}^{BLR}$ the BLR component of the H$\alpha$ emission line. However, Type II AGN are viewed edge-on, preventing us from seeing the BLR, which is obscured by the presence of a dusty torus.
Therefore, in Type II AGN, BH masses cannot be estimated using the single-epoch mass determination, which requires a direct view of the BLR clouds. Indirect methods can be used, such as the well-known correlations between the BH mass and the host galaxy bulge stellar mass or stellar velocity dispersion. However, these relations are established for local inactive galaxies. Recently, \cite{Baron2019} found a correlation between the narrow L([OIII])/L(H$\beta$) line ratio and the width of the H$\alpha$ BLR component, linking the kinematics of the BLR clouds to the ionization state of the NLR as follows: \begin{equation} \rm log\frac{L_{[OIII]}^{narrow}}{L_{H\beta}^{narrow}}=0.58\pm0.07 \times log \frac{FWHM _{H_{\alpha}}^{BLR}}{km/s}-1.38\pm0.38\label{eq:ratio_fw} \end{equation} This power-law dependence holds for AGN-dominated systems with log([OIII]/H$\beta$) > 0.55. We therefore derive the BLR H$\alpha$ FWHM component for the 79\% of our targets exhibiting log([OIII]/H$\beta$) > 0.55. For the continuum luminosity we rely on the relation found by \cite{Baron2019}: \begin{equation} \rm Log \lambda L_{\lambda}5100 = 1.09 \times log L_{bol} -5.23 \end{equation} and on \cite{Heckman2004} for the bolometric luminosity, inferred from the [OIII] luminosity with no correction for dust extinction and applying a bolometric correction of 3500. \begin{figure}[] \center \includegraphics[width=1\columnwidth]{./figure/BH_edd.eps} \caption{Eddington ratio (left) and black hole mass (right) distributions of Type II AGN divided into two subsamples, according to the classification of their host galaxies as high-SF (blue) and low-SF (green). Solid and dashed lines represent the median values of the parameters.}\label{fig:BH_edd} \end{figure} We derive BH masses in the range $\sim$ 10$^{7-10}$ M$_{\odot}$ and $\lambda_\textup{Edd}$ $\sim$ 10$^{-3}$-0.5, with a median value of $\sim$0.08, consistent with what is found in other AGN samples (e.g.
\citealt{Lamastra2009}). We now explore whether there is an indication of a variation in the BH mass and Eddington ratio distributions between the two Type II AGN groups in Fig. \ref{fig:BH_edd} (parameter values for high-SF galaxies are shown as blue solid lines and for the low-SF sample as green dashed vertical lines). Despite the wide range spanned, differences between the BH mass and $\lambda_\textup{Edd}$ distributions are discernible. The median values for each parameter are shown as vertical lines in the corresponding colors and line styles. The median value of $\lambda_{Edd}$ derived for the high-SF galaxies is slightly higher than that derived for the low-SF galaxies. Furthermore, these galaxies are less massive and have in general lower BH mass than the low-SF galaxies. On the contrary, the latter sample shows larger values of BH mass, with a distribution pointing towards higher BH mass values, as shown by the M$\rm_{BH}$ distribution in Fig. \ref{fig:BH_edd}. To assess the difference between the distributions, we compute the Kolmogorov-Smirnov (K-S) test for the $\lambda_{Edd}$ and BH mass distributions of the two groups of Type II AGN. The null hypothesis is that the two samples are drawn from the same parent population. The K-S test was performed using the python routine {\tt{scipy.stats.ks\_2samp}}. For the $\lambda_{Edd}$ distributions we compute a statistic of 0.3 and a p-value of 2.6 $\times$ 10$^{-8}$, and for the BH mass distributions a statistic of 0.2 and a p-value of 4.5 $\times$ 10$^{-6}$. The very low probability value excludes that the two groups of Type II AGN are drawn from the same parent population, suggesting that the difference between these groups also relates to the AGN activity. It is instructive to compare these findings with what is found in the local Universe. \cite{Kauffmann2009} examined the dependence of the distribution of the Eddington ratio on the star formation history of SDSS Type II AGN.
They found that the Eddington ratio distributions can be different for actively star-forming and passive host galaxies. Specifically, they divided these two populations according to the break index D(4000) (\citealt{Balogh1999}), a useful diagnostic of the recent star formation history in these systems. The star-forming host galaxies show a log-normal distribution of Eddington ratios peaking at a few percent of Eddington, independent of black hole mass. This regime dominates the growth of BHs with mass < 10$^8$ M$_{\odot}$. The passive galaxies (D4000 > 1.7), instead, show a power-law distribution of Eddington ratios, with an amplitude decreasing with increasing black hole mass. This regime dominates the growth of BHs with mass > 10$^8$ M$_{\odot}$. This finding is in line with our evidence of different accretion rate distributions for high- and low-SF Type II AGN host galaxies. Finally, although the estimated Eddington ratios have large uncertainties, we did not find a clear dependence of the broadening of the [OIII] line on the Eddington ratio, since the only evidence of outflowing gas is found in the [OIII]-luminous low-SF galaxies, which are accreting at $\sim$5\% of the Eddington rate, a lower rate than that of the high-SF Type II AGN subsample, which shows on average Eddington ratios of $\sim$0.10. This could be interpreted as a final decay phase of the AGN activity in the [OIII]-luminous low-SF galaxies, where the outflowing gas persists but the AGN feeding mechanism is fading, as well as the star formation activity, likely due to AGN feedback. \section{Summary and conclusions}\label{sec:summary} In this paper we have used the VVDS and VIPERS optical spectroscopic surveys, carried out using the VIMOS spectrograph, to select and investigate the properties of those galaxies hosting an AGN at redshift 0.5 < z < 0.9.
We have analysed the emission line properties of the [OIII] doublet, H$\beta$ and the [OII] doublet and adopted the blue diagram of \cite{Lamareille2010} to distinguish Type II AGN from star-forming galaxies and LINERs, through the emission line ratios [OIII]/H$\beta$ vs [OII]/H$\beta$. Our main findings can be summarized as follows: (i) The masses of the host galaxies range from 10$^{8}$ to 10$^{12}$ M$_{\odot}$, with a median value of 10$^{9.5}$ M$_{\odot}$, and span a star formation rate range of 0.01-38 M$_{\odot}$/yr. The VIMOS sample with stellar mass < 10$^{10}$ M$_{\odot}$ mostly resides on the star-forming MS locus, as defined by \cite{Schreiber2015}, with a fraction of sources ($\sim$20\%) between the MS and the quiescent region having stellar mass > 10$^{10}$ M$_{\odot}$, indicating reduced levels of star formation. (ii) We find a bimodality in the SFR MS offset-AGN power plane (probed by the [OIII] luminosity), ascribed to two different populations in the VIMOS sample. We divide our Type II AGN sample into two groups according to the properties of their host galaxies: the high-SF one, with stellar mass < 10$^{10}$ M$_{\odot}$, occupying the star-forming MS region, and the low-SF one, with levels of star formation between the MS and the quiescent locus, and even lower, and stellar mass > 10$^{10}$ M$_{\odot}$. For both populations a positive correlation exists between the distance from the MS and the AGN power, which could reflect the available amount of gas that both triggers star formation and fuels the AGN activity. Despite this positive correlation, lower levels of star formation are found in low-SF Type II galaxies. (iii) AGN feedback may be responsible for reducing the supply of cold gas in host galaxies, at least for AGN-luminous systems.
Indeed, for the [OIII]-luminous low-SF galaxies we found a hint of outflowing gas, as probed by the asymmetric [OIII] line profile, which could be connected with the low SFR content found, possibly due to the effect of AGN acting on the ISM, expelling a certain amount of gas. These massive low-SF galaxies seem to be at their final AGN stage as indicated by their average Eddington ratio value ($\sim$5\% of Eddington rate). \begin{acknowledgements} We thank the anonymous referee for the helpful comments. GV acknowledges financial support from Premiale 2015 MITic (PI: B. Garilli). AP and KM acknowledge support from the Polish National Science Centre under grants: UMO-2018/30/M/ST9/00757 and UMO-2018/30/E/ST9/00082. GM acknowledges support from ST/P006744/1. The results published have been funded by the European Union's Horizon 2020 research and innovation programme under the Maria Skłodowska-Curie (grant agreement No 754510), the National Science Centre of Poland (grant UMO-2016/23/N/ST9/02963) and by the Spanish Ministry of Science and Innovation through Juan de la Cierva-formacion program (reference FJC2018-038792-I). Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 182.A-0886(H), 182.A-0886(N), 182.A-0886(K), 182.A-0886(R), 182.A-0886(G), 182.A-0886(C), 182.A-0886(B), 182.A-0886(O), 182.A-0886(P), 182.A-0886(J), 182.A-0886(D), 182.A-0886(I), 182.A-0886(Q), 60.A-9157(B). Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Programmes 070.A-9007 and 177.A-0837. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Interaction of atoms and molecules with intense, very short laser pulses results in many interesting phenomena, such as high-order harmonic generation (HHG), above-threshold ionization (ATI) or non-sequential multiple ionization (NSI). All these phenomena have been studied carefully since the 1980s with the use of both theoretical and experimental tools \cite{Becker12, agostini2004physics, krausz2009attosecond}. The experiments are becoming more and more refined, even allowing one to resolve the dynamics of electron wave packets at the attosecond time-scale \cite{Kubel17,Ossiander17}. Such a situation sets the bar high for the theory. Theoretical description in many cases requires a non-perturbative treatment and, eventually, ends with simplified modeling and computer simulations~\cite{Efimov18}. This happens because, despite the enormous increase in available computer resources, full {\it ab initio} quantum calculations of processes involving more than one electron are typically beyond reach. In view of the above, the Single Active Electron (SAE) approximation appears as a very powerful tool. In fact, one may relatively easily solve the full three-dimensional (3D) time-dependent Schr\"odinger equation (TDSE) for an atom with a single electron exposed to an external field with a given set of parameters \cite{Mosert16}, without the necessity of referring to restricted geometry models \cite{Efimov18}. Importantly, the external field may have parameters, i.e., amplitude, frequency, envelope, carrier-envelope phase (CEP) and often duration, in ranges that are used in experiments. Already for two-electron atoms similar calculations are very demanding with respect to computer resources. To the best of our knowledge this has been done only for helium \cite{Dundas99,Taylor03,Parker00,Emma11,Feist08,Pazourek12,Ossiander17}.
The proper comparison of numerical results from such full 3D calculations with experimental data imposes the usage of various averaging techniques, like focal volume or Gouy phase averaging \cite{hoff2017tracing}. The averaging, in turn, calls for large data sets covering specific ranges of parameters, resulting, in the course of time, in excessive computational demands. Thus, a judiciously chosen restricted geometry model may be a good trade-off, offering better agreement with experimental data at a lesser expense and granting enhanced insight. Indeed, numerical solutions of the TDSE in various simplified models lead to results closely resembling experimental data, especially for some quantities of interest, such as the dipole acceleration \cite{ciappina2012high}. In general, the interpretation of TDSE results requires the use of certain analytic methods such as the strong field approximation (SFA) \cite{Lewenstein94,amini2018symphony,peng2015tracing}. When applied to photoelectron momentum maps, it becomes possible to separate the individual processes and the trajectories that correspond to them. These trajectories interfere with each other and, ultimately, lead to a very complex final image. The reasoning can also be applied in reverse; the image encodes the dynamics dictated by the Hamiltonian and thus photoelectron spectroscopy can be done by the analysis of the image. Recently, K\"ubel~{\it et al.}~\cite{Kubel17} demonstrated a streak camera that allows one to temporally resolve strong field ionization induced by linearly polarized short pulses. They called their method sub-cycle tracing of ionization enabled by infrared, i.e., STIER.
The set-up used in the quoted experiment is a kind of pump-and-probe one, i.e., a few-cycle, intense, linearly polarized pulse in near-visible spectral range (VIS-pulse) induces ionization, whereas a moderately intense, mid-infrared pulse (IR-pulse) streaks photo-electrons allowing observation of the sub-cycle dynamics of strong field ionization. In particular, K\"ubel~{\it et al.} observed an asymmetry in the yield of low-energy electrons associated with the rescattering process. Interestingly, the respective asymmetry in the yield has not been fully reproduced by the authors through solving a one-dimensional (1D) TDSE, in spite of taking into account both the focal volume averaging (FVA) and integration over the Gouy phase. The discrepancy between experimental and computational results is ascribed to the observation that the 1D model cannot capture the details of the recollision processes accurately. Thus, the part of yield related to recolliding electrons is poorly modeled. Here, we study strong field ionization in two-color fields corresponding to the STIER experiment. We solve the 2D TDSE within SAE approximation and collect data sets of intensities, phases, and delays significant enough to allow for the focal volume averaging and the integration over Gouy phase in a similar manner as in \cite{Kubel17}. We present calculated 2D momentum distributions and directly compare them to previously unpublished experimental data. The obtained 2D momenta distributions display complex features which we also analyze within the SFA framework. Our results reveal a complex ring structure that arises from the interference of attosecond wave-packets produced at different half-cycle maxima. The role of multiple electron rescattering in the formation of the asymmetry feature described in Ref.~\cite{Kubel17} is confirmed. 
\section{Methods} \subsection{TDSE simulation} The main part of the calculations concerned with momentum distributions followed closely the method outlined in \cite{Kubel17}, but with upgraded dimensionality. Momentum distributions were found using 1D and 2D TDSE simulations performed in Cartesian coordinates and the length gauge, which similarly assume a simplified description of the external field (see Fig.~\ref{shortSignal}). The TDSE solver is based on the split-operator method and the Fast Fourier Transform algorithm, as implemented in software developed by us for other restricted dimensionality models~\cite{Prauzner2007,Prauzner08,Efimov18,Thiede18}. The values of the parameters used were $dt=0.05$ (evolution time step), $dx=\frac{100}{512}\approx 0.2$ (grid spacing), $N=28672$ (number of grid points in one direction). The ground state was found using propagation in imaginary time. Both pulses were linearly polarized with respect to the $z$ axis and the respective wavelengths and peak intensities were: ($\lambda_\mathrm{VIS} = 735$ nm, $I_\mathrm{VIS} = 7\times 10^{14}$ W cm$^{-2}$) and ($\lambda_\mathrm{IR}=2215$ nm, $I_\mathrm{IR}=3\times 10^{13}$ W cm$^{-2}$). \begin{figure}[h] \centering \includegraphics[width= 0.95\columnwidth]{shortsignal.pdf} \caption{An exemplary pulse shape of the IR + VIS field used for simulations (red lines, corresponding to $\phi=0$, $\tau=0[\tau_{\mathrm{IR}}]$) on top of part of the experimental pulse (blue lines; the long decaying tails have been cut out from this picture).} \label{shortSignal} \end{figure} The pulse shape has been identical to the one from the original paper of K\"ubel {\it et al.}~\cite{Kubel17}, namely: the IR-pulse is approximated by a $\cos$ carrier wave devoid of an envelope, with a significantly shorter duration (as compared with experiment) of 4.25 cycles of length $\tau_\mathrm{IR} \approx 7.4$ fs each and zero field strength at the beginning.
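A single propagation step of such a split-operator FFT scheme can be sketched in 1D as follows (atomic units, length gauge). This is a generic textbook sketch, not the actual solver used for the calculations reported here.

```python
import numpy as np


def split_operator_step(psi, x, k, dt, potential, e_field):
    """One split-operator step of the 1D TDSE in the length gauge:
    half kinetic step in momentum space, full potential + field step
    in position space, half kinetic step. Atomic units throughout."""
    half_kinetic = np.exp(-0.25j * dt * k**2)       # exp(-i (dt/2) k^2/2)
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi = psi * np.exp(-1j * dt * (potential(x) + e_field * x))
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    return psi
```

Since each factor is a pure phase, the step is unitary and the norm of the wave function is conserved to machine precision.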
The envelope of the VIS-pulse has a full width at half maximum (FWHM) of 5 fs (see Fig.~\ref{shortSignal}): \begin{equation} \bm{E}(t) = E_\mathrm{VIS}(t)\bm{e}_z+E_\mathrm{IR}(t)\bm{e}_z \end{equation} where \begin{eqnarray} \label{def:fields} E_\mathrm{VIS}(t) &=& F_\mathrm{VIS}f(t-\tau)\cos(\omega_\mathrm{VIS}(t-\tau)+\phi),\\ E_\mathrm{IR}(t) &=& F_\mathrm{IR}\cos(\omega_\mathrm{IR}t). \end{eqnarray} Here $F_i$ and $\omega_i$ stand for the field amplitude and frequency of the corresponding pulse, respectively, $f(t-\tau)$ is a Gaussian envelope of the VIS pulse, $\tau$ is the delay between the pulses and $\phi$ is the CEP of the VIS signal (in the later part of this article we will use the symbol $\phi$ and CEP interchangeably). \subsection{Post-processing and fitting data to experiment} The well-known affliction of laboratory strong field experiments is the inherent spatial averaging due to the non-uniformity of the laser beam cross-section and the geometry of the optical elements. The most straightforward recipe for adapting theoretical calculations is to average over intensities with the weight of inverse intensity \cite{strohaber2015highly}, in other words, calculating the integral \begin{equation} \label{vol-avg} \int_0^{I_0} dI\, P(I)/I. \end{equation} On the other hand, more advanced measures could be employed, taking into account the Gouy phase \cite{paulus2005measurement}. Surprisingly, in our case these averaging methods did not lead to an improvement over the non-averaged (single intensity) results; on the contrary, their use (especially accounting for the Gouy phase) caused an underestimation of the width of the momentum spectra when a single delay was considered. This can be understood as follows. In practice, the action of FVA can be reduced to mixing the thin (in terms of width) low intensity momentum distributions with the wide high intensity momentum distributions. Since the former usually obtain much higher weights, the averaged distribution appears thinner than the original one.
On the other hand, in the experiment, the synchronization jitter between the IR and VIS pulses and the focal geometry can effectively lead to uncertainties in the time delay, which broadens the distributions. Moreover, FVA becomes harder to implement correctly when two-color pulses are considered. Consequently, we have restricted our analysis to the simple volume-average recipe given by eq.~(\ref{vol-avg}) (FVA), delay ($\pm$0.8 fs) averaging (DA), or both (DFVA). \begin{figure}[ht] \centering \includegraphics[width= 0.95\columnwidth]{moments.pdf} \caption{Comparison of the dependence on CEP of the mean $p_z$ value of the momentum distribution ($\langle p_z \rangle$) between experiment (red, solid) and the 1D (panel A) or 2D (panel B) simulation (dashed and dotted lines). Notice that the simulations fit the same physical moment at delays separated by 2 IR periods ($\approx 14.8$ fs).} \label{moments} \end{figure} After averaging, the obtained momentum distributions were smoothed out using a median filter \footnote{The non-averaged data were smoothed with a median filter of size 10, whereas the focal-volume averaged (FVA) data were smoothed with a median filter of size 1 (the highly oscillatory behavior has been mostly canceled thanks to the FVA averaging).} and (point) resampled to the (smaller) resolution of the experimental results. Since the experiment provided data over a wide range of delays, but the absolute information about the CEP ($\phi$ of eq.~\ref{def:fields}) was not determined completely, the CEP dependences of the TDSE and experimental results had to be compared and matched. To aid our analysis we have employed a secondary measure of (dis)similarity, based on the least squares method for partially smoothed out and normalized central momentum distributions.
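As a toy illustration of the volume-average recipe of eq.~(\ref{vol-avg}) and of the narrowing effect described above, the following sketch averages hypothetical Gaussian spectra whose width grows with intensity (the model spectra are assumptions for demonstration, not the paper's data):

```python
import numpy as np

def focal_volume_average(intensities, spectra):
    """Discrete version of eq. (vol-avg): average precomputed spectra P(I)
    with weight 1/I over a uniform intensity grid, then renormalize."""
    weights = 1.0 / np.asarray(intensities)
    return (spectra * weights[:, None]).sum(axis=0) / weights.sum()

# Toy model: Gaussian momentum spectra whose width grows with intensity.
p = np.linspace(-2.0, 2.0, 401)
intensities = np.linspace(0.1, 1.0, 10)
spectra = np.array([np.exp(-p**2 / (2.0 * (0.2 + 0.3 * I) ** 2))
                    for I in intensities])

averaged = focal_volume_average(intensities, spectra)

# RMS width: the 1/I weighting favors the narrow low-intensity spectra,
# so the average comes out narrower than the highest-intensity spectrum.
width = lambda f: np.sqrt(np.sum(p**2 * f) / np.sum(f))
```

This makes the mechanism discussed above concrete: the averaged distribution is narrower than the single high-intensity one, which is why FVA alone can underestimate the experimental width.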
This automated method yielded the CEP shift and the experimental delay $0[\tau_\mathrm{IR}]$ (corresponding to the center of the VIS pulse coinciding with a maximum of the IR field, see Fig.~\ref{shortSignal}; also delay $\approx 5.87$ fs from \cite{Kubel17}) as the most consistent with our 2D results (Fig.~\ref{moments} B) for raw data and DFVA data. In the case of the 1D results (Fig.~\ref{moments} A), the raw CEP spectrum matched to delay $-2[\tau_\mathrm{IR}]$ (separated by 2 $\tau_\mathrm{IR}$ from the 2D match), but the DFVA data matched with the reversely scattered electrons (incorrect). This result suggests that automated matching of experimental data to low-dimensional models can be misleading and some additional information is needed for a successful match. Nevertheless, such problems become absent in the case of higher-dimensional models, as Fig.~\ref{moments} B suggests. \section{Results} \subsection{1D momentum distributions} Performing a full comparison of the 1D and 2D TDSE results at $\tau=0[\tau_\mathrm{IR}]$ (see Fig.~\ref{shortSignal}) with experiment, we notice (see Fig.~\ref{comp0} and compare with Fig.~4c of \cite{Kubel17}) good agreement at most CEPs both for data that are solely delay-averaged ($\pm 0.8$ fs) but otherwise raw (DA) and for delay and focal-volume averaged (DFVA) data. \begin{figure}[h] \centering \includegraphics[width= 0.95\columnwidth]{compc0.pdf} \includegraphics[width= 0.95\columnwidth]{compc14.pdf} \caption{Comparison of momentum distributions for $\tau=0[\tau_{\mathrm{IR}}]$, at different CEP values, for experimental data (blue, solid), nearby-delay averaged (DA) 1D simulation (yellow), 2D simulation (green, solid) and 2D simulation with focal-volume and delay ($\pm$ 0.8 fs) averaging (DFVA).} \label{comp0} \end{figure} Advantages of one average versus the other, seen previously in Fig.~\ref{moments}, are not evident at this point; however, the improvement over the 1D result is visible.
Indeed, the agreement is reasonable for both projected directions even without FVA over a range of CEP values and fixed delay, see Fig.~\ref{CEPpzpy}. \begin{figure}[h!] \centering \includegraphics[width=1.0\columnwidth]{ceppzpy.pdf} \caption{Comparison of non-averaged, smoothed, single delay (0$[\tau_\mathrm{IR}]$) momentum distributions for both axes obtained in simulation (A, C - left column) and experiment (B, D - right column) for different CEP values.} \label{CEPpzpy} \end{figure} Remaining discrepancies may be attributed to the low experimental resolution, leading to significant uncertainties in the fitting parameters, and to the limited number (3) of averaged delays. \subsection{2D momentum distributions} \begin{figure}[ht!] \centering \includegraphics[width=0.99\columnwidth]{comp2d.pdf} \caption{CEP averaged and smoothed 2D photoelectron momentum distributions for $\tau=0[\tau_{\mathrm{IR}}]$ parallel ($p_z$) and perpendicular ($p_y$) to the laser polarization direction for TDSE (A) compared with experiment (B).} \label{comp2d} \end{figure} A comparison of the CEP averaged photoelectron momentum maps predicted by theory (panel A) versus experiment (panel B) is presented in Fig.~\ref{comp2d}. Apart from the good agreement between the two results (the theoretical prediction has been smoothed out to aid the comparison), we observe a large asymmetry along the $p_z$ axis. The upper side ($p_z>0$) is much narrower in $p_y$ than the lower side and exhibits an asymmetric feature at $p_z = 0.2$ a.u., which emerges due to multiple electron rescattering events and the Coulombic interaction \cite{Kubel17}. In order to confirm this statement, in Fig.~\ref{ev2d} we present snapshots of the photoelectron momentum distributions (for fixed CEP and delay values) at time instances separated by multiple IR periods (corresponding to zeroes of the vector potential \cite{becker2018plateau}).
The obtained structures remain mostly fixed after the high-intensity VIS pulse is over (see times $t>1 [\tau_{\mathrm{IR}}]$ in Fig.~\ref{ev2d}), with the exception of the low-energy regions, for which (for convenience) we provide zoomed windows. Indeed, one can observe the formation of a holographic, shell-like structure concentrated around $p_z = 0.2$ a.u. The asymmetry builds up with each IR period through simultaneous horizontal ($p_y$) splitting and shifting towards lower $p_z$ values. This non-equilibrium steady structure emerges through the interference of forward and backward rescattered electrons accelerated by the IR field \cite{peng2015tracing, geng2014attosecond}. Comparing time $t=1 [\tau_{\mathrm{IR}}]$ of Fig.~\ref{ev2d} with times $t>1 [\tau_{\mathrm{IR}}]$, we see a significant increase in phase accumulation in the radial direction from $(p_y=0,p_z=0.2)$, leading to sharp drops in intensity visible in the zoomed regions, an effect again attributable to multiple rescatterings. The significance of this asymmetry is expected to grow with the wavelength of the IR field \cite{geng2014attosecond}. \begin{figure*}[h!t] \centering \includegraphics[width=2.0\columnwidth]{ev2d_.pdf} \caption{Momentum distribution snapshots at equal IR period intervals for an exemplary delay $=-1 [\tau_\mathrm{IR}]$, CEP $=0$. The overall distribution appears to be stable after the second cycle ($\tau_{\mathrm{IR}}$), with significant changes appearing in the LES area (a zoom of the area $p_z = (0,0.5), p_y=(-0.3,0.3)$ is presented).} \label{ev2d} \end{figure*} \begin{figure}[h!t] \centering \includegraphics[width=0.99\columnwidth]{ringev_less.pdf} \caption{Photoelectron momentum maps for individual CEPs. A) TDSE result, B) SFA prediction for ionization times $\{t_0, t_2\}$, C) SFA prediction for ionization times $\{t_0, t_1, t_2\}$ (details in text).
The diagrams in B and C present the IR vector potential (red solid line), the total electric field (green solid line) and the ionization times (dashed blue lines).} \label{ringEV} \end{figure} Returning to Fig.~\ref{comp2d}: at $p_z>0$ and around $p_y = \pm~0.2$~a.u., in both experiment and simulation, one can notice sidebands parallel to the polarization axis, also known as holographic structures \cite{huismans2011timea}. The feature is visibly less pronounced in the experiment than in the simulation due to the limited resolution (high-resolution experiments using the STIER technique are currently in preparation). In general, holographic structures can be attributed to the interference of two coherent wave packets ionized at two nearby (same quarter-cycle) instances of time. One of these wave packets (the signal) is assumed to rescatter off the parent ion before meeting the other (reference) wave packet. These kinds of processes occur frequently in this setup, as can be seen in the positive part of the $p_z$ axis in Fig.~\ref{ringEV}~A, which presents photoelectron momentum maps as a function of the VIS field CEP ($\phi$) value. The frequent occurrence of rescattering events is also the reason why such side lobes are visible in CEP averaged plots such as Fig.~\ref{comp2d}. The holographic structures seen in the detailed maps of Fig.~\ref{ringEV}~A are also heavily influenced by other sub-cycle effects, including the interference of wave packets ionized with half, one or more cycle delays. One of the most dominant traces of this is the ring structure centered at $p_z=0$ a.u. for CEP=$\pi$, with a first radius of approximately $0.25$ a.u. (see Fig.~\ref{ringEV}~A, third panel from the left). This ring structure is displaced along the $p_z$ axis with the change of the CEP value (see Fig.~\ref{ringEV}~A).
In experiments using the RABBITT technique, the amount of ring displacement was shown \cite{remetter2006attosecond, varju2006angularly} to be proportional, in some simpler models, to the area under the vector potential between two dominant ionization times. Arguably, the presented situation is considerably more complex than the setups occurring in the cited references. In particular, the vector potential is the sum of the IR and VIS contributions, i.e. $\bm{A}(t)=\bm{A}_{\mathrm{IR}}(t)+\bm{A}_{\mathrm{VIS}}(t)$, and therefore does not behave as a simple sine-like function. Nevertheless, given the versatility of the SFA \cite{amini2018symphony}, we suspect that a similar analysis might be valuable. Thus, resorting to an SFA analysis, we show that the CEP displacement of the ring is in fact due to interference between a few (2 or 3) major ionization times (see Fig.~\ref{heat_cep}) and only weakly dependent on the $\bm{A}_{\mathrm{VIS}}(t)$ contribution. \begin{figure}[h!t] \centering \includegraphics[width=.99\columnwidth]{heat_cep.pdf} \caption{Field intensity at the ionization times $t_n$ for different CEP values and $\tau=0[\tau_{\mathrm{IR}}]$. The dominant peaks throughout the whole CEP domain are $\{t_0,t_2,t_1\}$.} \label{heat_cep} \end{figure} \subsection{SFA analysis} The ionization amplitude $a_p$ and the action $S(t)$ are given by: \begin{eqnarray} \label{ap} a_p &=& -i \int_{-\infty}^{\infty}\bm{E}(t')\cdot \bm{d}(\bm{p}+\bm{A}(t')) e^{-i S(t')} dt', \\ S(t)&=&\int_t^\infty \left[ \frac{(\bm{p}+\bm{A}(t'))^2}{2} + I_p\right] dt', \label{actionS}\\ \bm{d}(\bm{p})&=&\frac{\bm{p}}{(\bm{p}^2+2 I_p)^3}, \end{eqnarray} where $I_p$ is the ionization potential. Assuming that the ionization takes place at the extrema of the total electric field, which in our case can be approximated by the extrema of the VIS field, we get: \begin{equation} t_n = (n-\frac{\phi}{\pi})\tau_\mathrm{VIS}/2, \label{timesCEP} \end{equation} for the most likely ionization times $t_n$.
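Eq.~(\ref{timesCEP}) can be checked directly; the snippet below evaluates the most likely ionization times for two CEP values (with $\tau_\mathrm{VIS}$ set to 1 for illustration):

```python
import numpy as np

def ionization_times(n_values, phi, tau_vis=1.0):
    """Most likely ionization times t_n = (n - phi/pi) * tau_VIS / 2."""
    return (np.asarray(n_values, dtype=float) - phi / np.pi) * tau_vis / 2.0

# For CEP = 0 the extrema sit on the half-period grid; increasing the
# CEP shifts every t_n by the same amount (here: t_cep0 = [0, 0.5, 1]
# and t_ceppi = [-0.5, 0, 0.5] in units of tau_VIS).
t_cep0 = ionization_times([0, 1, 2], phi=0.0)
t_ceppi = ionization_times([0, 1, 2], phi=np.pi)
```

The uniform shift of all $t_n$ with CEP is what moves the interference ring along the $p_z$ axis in the analysis below.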
Using this, equations (\ref{ap}-\ref{actionS}) can then be reduced to: \begin{align} \label{ap2} a_p ={}& \sum_{n} E_z(t_n)d_z(\bm{p}+\bm{A}(t_n))e^{-i S(t_n)},\\ \begin{split} S(t_n) ={}& -(\frac{\bm{p}^2}{2}+I_p)t_n\\ &- \int_0^{t_n} \left[ p_z A(t') + \frac{A(t')^2}{2} \right] dt'. \label{actionS2} \end{split} \end{align} The resulting probability $|a_p|^2$ is presented as a function of CEP in Figs.~\ref{ringEV} B, C. For the sake of clarity, above each momentum distribution in Fig.~\ref{ringEV} B and C, we present a diagram with the VIS+IR pulse (green lines) and the IR vector potential in the background (red lines). Blue dashed lines point to the ionization times taken into account in the analysis. \begin{figure}[h!t] \centering \includegraphics[width=0.99\columnwidth]{heat_delay.pdf} \caption{Field intensity at the ionization times $t_n$ for different delay values and CEP=0. The set of dominant peaks varies with the delay, leading to a less straightforward analysis.} \label{heat_delay} \end{figure} \begin{figure*}[h!t] \centering \includegraphics[width=0.92 \textwidth]{ringev2.pdf} \caption{Photoelectron momentum maps for individual delays (CEP=0). A) TDSE result, B) SFA prediction for ionization times at which the electric field surpasses a given threshold (details in text). The diagrams in B present the IR vector potential (red solid line), the total electric field (green solid line) and the ionization times (dashed blue lines).} \label{delays} \end{figure*} First, we take into account two ionization times $\{t_0,t_2\}$ (Fig.~\ref{ringEV}~B), for which the field points in the same direction. At CEP=0 one can notice the ring structure centered just above $p_z=1$, which intensifies while moving down towards lower $p_z$ values with the increasing value of the CEP. For CEP=$\pi$ the contributions from the ionization times $\{t_0,t_2\}$ become equal (see Fig.~\ref{heat_cep}), leading to a pronounced ring, centered at $p_z=0$ with a first radius of approx.
0.25 a.u., in agreement with the TDSE results. The rings are preserved even when the IR vector potential alone is considered ($\bm{A}_{\mathrm{VIS}}(t)$ neglected) in calculating $a_p$ according to eqs.~(\ref{ap2}-\ref{actionS2}) (results not shown). Moreover, the contribution of the term $A(t')^2/2$ has a small influence on the ring and can be dropped from eq.~(\ref{actionS2}). Thus, for readability, in the diagrams above each momentum distribution in Fig.~\ref{ringEV} B and C, only $A_{\mathrm{IR}}$ is shown for reference (red lines). In Fig.~\ref{ringEV}~C we expand the analysis to the case of three ionization times, $\{t_0, t_1, t_2\}$. The interference structures become richer and the ring acquires slight deformations. The addition of further ionization times does not bring clarity to the overall picture. Other effects, such as multiple rescatterings off the parent ion, provide additional structures which suppress or enhance the visibility of the ring and its surrounding features. When the delay is varied instead of the CEP value, eq.~(\ref{timesCEP}) changes to: \begin{equation} t_n = n\tau_\mathrm{VIS}/2 + \tau, \end{equation} while eqs.~(\ref{ap2}-\ref{actionS2}) stay unchanged. Now, in contrast with the varying-CEP case (see Fig.~\ref{heat_delay} and compare it with Fig.~\ref{heat_cep}), it is not trivial to select the ionization peaks contributing the most to the momentum maps pictured in Fig.~\ref{delays}~A. In fact, many such choices exist. Here we limit our analysis to ionization peaks for which the electric field intensity passes a chosen threshold ($|E(t_n)|^2>0.009$ a.u.) and present the results in Fig.~\ref{delays}~B. The abundance of ring-like features could be discouraging at first, but through careful inspection one can see that the vast majority of the ring-like structures of Fig.~\ref{delays}~B either can be found directly (as rings) or appear to steer the holographic structures found in the TDSE result (Fig.~\ref{delays}~A).
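A minimal numerical version of the reduced SFA sum, eqs.~(\ref{ap2}-\ref{actionS2}), restricted (as argued above) to the IR vector potential alone, might look as follows; the field parameters and ionization times used here are placeholders, not the paper's values.

```python
import numpy as np

def sfa_probability(p_z, t_ion, F_ir, w_ir, I_p=0.5):
    """|a_p|^2 from the reduced SFA sum over ionization times, keeping
    only the IR vector potential A(t) = -(F/w) sin(w t)."""
    A = lambda t: -(F_ir / w_ir) * np.sin(w_ir * t)
    E = lambda t: F_ir * np.cos(w_ir * t)
    d = lambda v: v / (v**2 + 2.0 * I_p) ** 3      # 1D dipole matrix element

    def integral(f, t_n, m=2000):                  # trapezoid rule, int_0^{t_n}
        ts = np.linspace(0.0, t_n, m)
        ys = f(ts)
        return np.sum((ys[1:] + ys[:-1]) * np.diff(ts)) / 2.0

    a = np.zeros_like(p_z, dtype=complex)
    for t_n in t_ion:
        int_A = integral(A, t_n)
        int_A2 = integral(lambda t: A(t) ** 2, t_n) / 2.0
        S = -(p_z**2 / 2.0 + I_p) * t_n - (p_z * int_A + int_A2)
        a += E(t_n) * d(p_z + A(t_n)) * np.exp(-1j * S)
    return np.abs(a) ** 2

# Placeholder parameters (atomic units), for illustration only.
p_z = np.linspace(-1.5, 1.5, 301)
prob = sfa_probability(p_z, t_ion=[-60.0, 60.0], F_ir=0.05, w_ir=0.02)
```

With two ionization times the phase difference between the two terms oscillates with $p_z$, producing the fringe (ring) pattern whose position tracks the area under the vector potential between the two times, as discussed above.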
Although our analysis is by no means exhaustive, we have shown how to understand one layer of complexity of the TDSE result and how other interference structures can be affected. \section{Discussion and summary} Through an increase in the dimensionality of the TDSE simulation we have found that the individual (per-CEP) 1D momentum distributions can indeed be substantially improved. However, fitting computational results to experimental ones requires caution, especially when comparing reduced-dimensionality models to experimental data. Averaging over the focal volume and Gouy phase needs to be done on par with averaging over ``nearby'' delays in order to retain the width of the momentum distributions. On the other hand, the individual 2D momentum distributions could not be directly compared due to the limited resolution of the experimental results. Using the CEP averaged results we have found good agreement between theory and experiment. Asymmetric features present in the experimental results have been traced to multiple rescattering effects in the IR field, visible and similar at all CEP values. The complex individual momentum maps consist of several kinds of structures: holographic lobes extending in the direction perpendicular to the polarization direction, ATI structures (affecting the holographies) and/or ring structures. In particular, an intense ring centered at $p_z=0$ for CEP=$\pi$ and traversing the $p_z$ axis with the change of the CEP value has been noted. With the use of the SFA we have shown that this ring originates from the interference of two or more wave packets born at different ionization times. In a somewhat less straightforward way this effect can also be seen when the delay instead of the CEP is manipulated. Together with holography, the ring structures record the electron dynamics on a sub-laser-cycle time scale, which may lead to applications in next generation photoelectron spectroscopy.
Complementary analysis from the perspective of high harmonic generation in the given setup is currently underway. \bigbreak \section*{Funding Information} This project has received funding from the EU's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 657544 (MK);\\ National Science Centre, Poland via project No. 2016/20/W/ST4/00314 (MM, JPB and JZ).\\ We also acknowledge the support of the PL-Grid Infrastructure. \bigbreak \bigbreak
\section{Introduction} Let $(X, \mathsf{d}, \mathfrak{m})$ be a compact metric measure space, that is, $(X, \mathsf{d})$ is a compact metric space with $\supp \mathfrak{m} =X$ and $\mathfrak{m} (X)<\infty$. There are several definitions of `lower Ricci bounds on $(X, \mathsf{d}, \mathfrak{m})$', whose study is now developing very quickly and widely. We refer to \cite{LottVillani} by Lott-Villani, \cite{Sturm06} by Sturm, and \cite{AmbrosioGigliSavare14} by Ambrosio-Gigli-Savar\'e as pioneering works. In this paper, we focus on two of them. One of them is the \textit{Bakry-\'Emery $\mathrm{(}\BE\mathrm{)}$ condition} \cite{BE} by Bakry-\'Emery, denoted by $\BE(K, N)$; the other is the \textit{Riemannian curvature dimension $\mathrm{(}\RCD\mathrm{)}$ condition} \cite{AmbrosioGigliSavare14} by Ambrosio-Gigli-Savar\'e (in the case when $N=\infty$), \cite{Gigli0} by Gigli (in the case when $N<\infty$), denoted by $\RCD(K, N)$. Both notions express, in a synthetic sense, that the Ricci curvature of $(X, \mathsf{d}, \mathfrak{m})$ is bounded below by $K$ and that the dimension of $(X, \mathsf{d}, \mathfrak{m})$ is bounded above by $N$. The $\BE(K, N)$ condition is roughly stated as follows: \begin{equation}\label{0} \frac{1}{2}\Delta |\nabla f|^2 \ge \frac{(\Delta f)^2}{N}+\langle \nabla \Delta f, \nabla f\rangle +K|\nabla f|^2 \end{equation} holds in a weak form for all `nice' functions $f$ on $X$ (Definition \ref{bedef}). It is known that if $(X, \mathsf{d})$ is an $n$-dimensional smooth Riemannian manifold $(M^n, g)$ and $\mathfrak{m}$ is the Riemannian (or equivalently, the Hausdorff) measure, then the $\BE(K, N)$ condition (\ref{0}) is equivalent to having $n\le N$ and $\mathrm{Ric}_{M^n}^g \ge K$, and that the $\BE(K, N)$ condition is also equivalent to certain gradient estimates on the heat flow, the so-called Bakry-\'Emery/Bakry-Ledoux gradient estimates. In general, the implication from $\RCD(K, N)$ to $\BE(K, N)$ always holds.
The converse is true after adding an extra property, the so-called `\textit{Sobolev to Lipschitz property}', which was introduced in \cite{Gigli0} (Definition \ref{def:RCDspaces}). From the viewpoint of geometric analysis, this property (together with $\BE$) plays the role of ensuring the coincidence between the analytic distance $\mathsf{d}_{{\sf Ch}}$ (that is, the distance induced by the Cheeger energy) and the original (geometric) distance $\mathsf{d}$. Moreover, the $\RCD$ condition also implies the Sobolev to Lipschitz property. Thus, the following equivalence is known: \begin{equation}\label{444444} \RCD(K, N) \Longleftrightarrow \BE(K, N) + `\mathrm{Sobolev\,to\, Lipschitz\,property}'. \end{equation} The RHS of (\ref{444444}) is also called the metric $\BE(K, N)$ condition. Thus, to keep the presentation short, we adopt the RHS of (\ref{444444}) as the definition of the $\RCD(K, N)$ condition in this paper (Definition \ref{def:RCDspaces}). We refer to \cite{AmbrosioGigliSavare15} by Ambrosio-Gigli-Savar\'e, \cite{AmbrosioMondinoSavare}, \cite{AmbrosioMondinoSavare16} by Ambrosio-Mondino-Savar\'e, and \cite{ErbarKuwadaSturm} by Erbar-Kuwada-Sturm for the details. In these observations, more precisely, `$\RCD$' should be replaced by `$\RCD^*$'. However, since the equivalence between $\RCD$ and $\RCD^*$ spaces has also recently been established in \cite{CavMil} by Cavalletti-Milman, we use the notation `$\RCD$' only, for simplicity. In this paper, we discuss the condition: \begin{equation}\label{22} \BE(K, N) + `\mathsf{d}_{{\sf Ch}}=\mathsf{d}'. \end{equation} One of the goals of this paper is to provide an example satisfying (\ref{22}) which is not an $\RCD$ space.
More precisely, for any two (not necessarily equidimensional) closed pointed Riemannian manifolds $(M_i^{m_i}, g_i, p_i)$ $(m_i \ge 2)$, the glued metric space $M_1^{m_1} * M_2^{m_2}$ at their base points, equipped with the standard measure, is a $\BE (K, \max \{m_1, m_2\})$ space, where $K:= \min \{ \inf \mathrm{Ric}_{M_1^{m_1}}^{g_1}, \inf \mathrm{Ric}_{M_2^{m_2}}^{g_2}\}$ (Example \ref{1091}). It is easy to check that this metric measure space does not satisfy the Sobolev to Lipschitz property; thus, it is not an $\RCD(L, \infty)$ space for any $L \in \mathbb{R}$. This tells us that (\ref{22}) does \textit{not} imply the expected Bishop-Gromov inequality (Remark \ref{yy6}), and that (\ref{22}) does \textit{not} imply the constancy of the local dimension. In particular, the glued space gives the first example with a Ricci bound from below in the Bakry-\'Emery sense whose local dimension is not constant. We point out a very recent result in \cite{BS18} by Bru\`e-Semola, which states that for any $\RCD(K, N)$ space, there exists a unique $k$ such that the $k$-dimensional regular set $\mathcal{R}_k$ has positive measure. This generalizes a result of Colding-Naber in \cite{CN} for Ricci limit spaces to $\RCD$ spaces. Thus, we see that the Sobolev to Lipschitz property is crucial for obtaining the constant dimensional property. Note that in \cite{KR}, Ketterer-Rajala constructed a metric measure space satisfying the measure contraction property (MCP), which also characterizes `Ricci bounds from below' in a synthetic sense, whose local dimension is not constant. Therefore, in general, MCP and BE spaces are very different from $\RCD$ spaces.
Moreover, we should pay attention to a similar sufficient condition in \cite{AmbrosioMondinoSavare16} by Ambrosio-Mondino-Savar\'e, the so-called local to global property, which, in our compact setting, states: if $(X, \mathsf{d})$ is a geodesic (or equivalently, length) space and there exists an open covering $\{U_i\}_{i \in I}$ of $X$ such that $U_i \neq \emptyset$ and $(\overline{U_i}, \mathsf{d}, \mathfrak{m}_{\overline{U_i}})$ satisfies the metric $\BE(K, N)$ condition, then $(X, \mathsf{d}, \mathfrak{m})$ satisfies the metric $\BE(K, N)$ condition. In fact, the glued example shows that the openness of the $U_i$ is essential: although $U_i:=M_i^{m_i}$ in $M_1^{m_1}*M_2^{m_2}$ satisfies the assumptions except for the openness property, the glued space does not satisfy the metric $\BE(K, N)$ condition for any $K, N$. In order to justify these claims, we study \textit{almost smooth compact metric measure spaces}. See Definition \ref{def:asmm} for the definition, which allows such spaces to have singularities of codimension at least $2$. Thus, compact (Riemannian) orbifolds with the Hausdorff measure are typical examples. The main result of this paper is then roughly stated as follows: if an almost smooth compact metric measure space satisfies the $L^2$-strong compactness condition and the gradient estimates on the eigenfunctions, then a lower bound on the Ricci tensor of the smooth part implies a $\BE$ condition (Theorem \ref{be}). By using this, we can give a necessary and sufficient condition for such a space to be an $\RCD$ space (Corollary \ref{corrcd}). The organization of the paper is as follows. In section $2$, to keep the presentation short, we give a very quick introduction to calculus on metric measure spaces. In section $3$, we study our main targets, almost smooth metric measure spaces, and prove the main results.
\textbf{Acknowledgement.} A part of this work was done during the author's stay at the Yau Mathematical Sciences Center (YMSC) at Tsinghua University. The author would like to express his appreciation to Guoyi Xu for his warm hospitality. He is also grateful to YMSC for providing him with a nice environment. He thanks Luigi Ambrosio, Nicola Gigli, Bangxian Han and Aaron Naber for helpful comments. Moreover, he thanks the referee for the careful reading of the manuscript and for the suggestions in the revision. Finally, he acknowledges the support of the Grant-in-Aid for Young Scientists (B), 16K17585, and of the Grant-in-Aid for Scientific Research (B), 18H01118. \section{$\BE$ and $\RCD$ spaces} We use the notation $B_r(x)$ for open balls and $\overline{B}_r(x)$ for $\{y:\ \mathsf{d}(x,y)\leq r\}$. We also use the standard notation $\mathrm{LIP}(X,\mathsf{d})$, $\mathrm{LIP}_c(X,\mathsf{d})$ for the spaces of Lipschitz, compactly supported Lipschitz functions, respectively. Let us now recall basic facts about Sobolev spaces in metric measure spaces $(X,\mathsf{d},\mathfrak{m})$; see \cite{AmbrosioGigliSavare13}, \cite{Gigli1} and \cite{Gigli} for a more systematic treatment of this topic. We shall always assume that \begin{itemize} \item the metric space $(X,\mathsf{d})$ is compact with $\supp \mathfrak{m} =X$ and $\mathfrak{m} (X)<\infty$ \end{itemize} for simplicity. The Cheeger energy ${\sf Ch}={\sf Ch}_{\mathsf{d},\mathfrak{m}}:L^2(X,\mathfrak{m})\to [0,+\infty]$ is a convex and $L^2(X,\mathfrak{m})$-lower semicontinuous functional defined as follows: \begin{equation}\label{eq:defchp} {\sf Ch}(f):=\inf\left\{\liminf_{n\to\infty}\frac 12\int_X(\mathrm{Lip} f_n)^2\mathop{}\!\mathrm{d}\mathfrak{m}:\ \text{$f_n\in\Lip (X,\mathsf{d})$, $\|f_n-f\|_{L^2}\to 0$}\right\}, \end{equation} where $\mathrm{Lip} f$ is the so-called slope, or local Lipschitz constant. The Sobolev space $H^{1,2}(X,\mathsf{d},\mathfrak{m})$ then coincides with $\{f:\ {\sf Ch}(f)<+\infty\}$.
When endowed with the norm $$ \|f\|_{H^{1,2}}:=\left(\|f\|_{L^2(X,\mathfrak{m})}^2+2{\sf Ch}(f)\right)^{1/2} $$ this space is Banach, reflexive if $(X,\mathsf{d})$ is doubling (see \cite{AmbrosioColomboDiMarino}), and separable Hilbert if ${\sf Ch}$ is a quadratic form (see \cite{AmbrosioGigliSavare14}). According to the terminology introduced in \cite{Gigli1}, we say that a metric measure space $(X,\mathsf{d},\mathfrak{m})$ is infinitesimally Hilbertian if ${\sf Ch}$ is a quadratic form. By looking at minimal relaxed slopes and by a polarization procedure, one can then define a {\it carr\'e du champ} $$ \Gamma:H^{1,2}(X,\mathsf{d},\mathfrak{m})\times H^{1,2}(X,\mathsf{d},\mathfrak{m})\rightarrow L^1(X,\mathfrak{m}) $$ playing in this abstract theory the role of the scalar product between gradients (more precisely, the duality between differentials and gradients, see \cite{Gigli1}). In infinitesimally Hilbertian metric measure spaces, the $\Gamma$ operator satisfies all natural symmetry, bilinearity, locality and chain rule properties, and provides an integral representation of ${\sf Ch}$: $2{\sf Ch}(f)=\int_X \Gamma(f,f)\,\mathsf{d}\mathfrak{m}$ for all $f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$. We can now define a densely defined operator $\Delta:D(\Delta)\to L^2(X,\mathfrak{m})$ whose domain consists of all functions $f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$ satisfying $$ \int_X hg\mathsf{d}\mathfrak{m}=-\int_X \Gamma(f,h)\mathsf{d}\mathfrak{m}\quad\qquad\forall h\in H^{1,2}(X,\mathsf{d},\mathfrak{m}) $$ for some $g\in L^2(X,\mathfrak{m})$. The unique $g$ with this property is then denoted by $\Delta f$ (see \cite{AmbrosioGigliSavare13}). From the point of view of Riemannian geometry, we will also adopt the following notation instead of $\Gamma$: $$ \langle \nabla f, \nabla g \rangle :=\Gamma (f, g), \, \quad |\nabla f|^2:=\Gamma (f, f).
$$ We are now in a position to introduce the $\BE(K, N)$ condition (see \cite{AmbrosioMondinoSavare}, \cite{AmbrosioMondinoSavare16} and \cite{ErbarKuwadaSturm}): \begin{definition}[$\BE$ spaces]\label{bedef} Let $(X, \mathsf{d}, \mathfrak{m})$ be a compact metric measure space, let $K \in \mathbb{R}$ and let $N \in [1, \infty]$. We say that $(X, \mathsf{d}, \mathfrak{m})$ is a \textit{$\BE (K, N)$ space} if for all $f\in D(\Delta)$ with $\Delta f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$, Bochner's inequality $$ \frac 12\Delta |\nabla f|^2 \geq \frac{(\Delta f)^2}{N} + \langle \nabla f,\nabla \Delta f\rangle + K|\nabla f|^2 $$ holds in the weak form, that is, \begin{equation}\label{eq:boch} \frac 12\int_X |\nabla f|^2\Delta\phi\mathsf{d}\mathfrak{m}\geq \int_X\phi\left(\frac{(\Delta f)^2}{N}+ \langle \nabla f,\nabla \Delta f\rangle + K|\nabla f|^2\right)\mathsf{d}\mathfrak{m} \end{equation} for all $\phi\in D(\Delta) \cap L^{\infty}(X, \mathfrak{m})$ with $\phi\geq 0$ and $\Delta\phi\in L^\infty(X,\mathfrak{m})$. \end{definition} In order to introduce the class of $\RCD(K,N)$ metric measure spaces, we follow the $\Gamma$-calculus point of view, based on Bochner's inequality, because this is the point of view most relevant to our proofs. We note, however, that the equivalence with the Lagrangian point of view, based on the theory of optimal transport, was first proved in \cite{AmbrosioGigliSavare15} (in the case $N=\infty$) and then in \cite{ErbarKuwadaSturm}, \cite{AmbrosioMondinoSavare} (in the case $N<\infty$). Moreover, the following definition should be written as $\RCD^*(K, N)$ spaces. However, since it is known by \cite{CavMil} that these are equivalent notions, we use the notation $\RCD(K, N)$ only, for simplicity. \begin{definition} [$\RCD$ spaces]\label{def:RCDspaces} Let $(X,\mathsf{d},\mathfrak{m})$ be a compact metric measure space, let $K \in \mathbb{R}$ and let $N \in [1, \infty]$.
We say that $(X, \mathsf{d}, \mathfrak{m})$ is an \textit{$\RCD(K, N)$ space} if it is a $\BE (K, N)$ space with the Sobolev to Lipschitz property, that is, \begin{itemize} \item{(Sobolev to Lipschitz property)} any $f\in H^{1,2}(X,\mathsf{d},\mathfrak{m})$ with $|\nabla f| \leq 1$ $\mathfrak{m}$-a.e. in $X$ has a $1$-Lipschitz representative. \end{itemize} \end{definition} We end this section by giving the definition of local Sobolev spaces: \begin{definition}[Sobolev spaces $H^{1,2}_0$]\label{def:loc sob} Let $U$ be an open subset of $X$. We denote by $H^{1,2}_0(U, \mathsf{d}, \mathfrak{m} )$ the $H^{1,2}$-closure of $\mathrm{LIP}_c(U, \mathsf{d})$. \end{definition} In the next section, the local Sobolev spaces will be used to localize global Sobolev functions to the smooth part via the zero capacity condition. \section{Almost smooth metric measure space} Let us fix a compact metric measure space $(X, \mathsf{d}, \mathfrak{m})$. \subsection{Constant dimensional case} \begin{definition}[$n$-dimensional almost smooth compact metric measure space]\label{def:asmm} Let $n \in \mathbb{N}$.
We say that $(X, \mathsf{d}, \mathfrak{m})$ is an \textit{$n$-dimensional almost smooth compact metric measure space associated with an open subset $\Omega$ of $X$} if the following three conditions are satisfied; \begin{enumerate} \item{(Smoothness of $\Omega$)} there exist an $n$-dimensional (possibly incomplete) Riemannian manifold $(M^n, g)$ and a map $\phi: \Omega \to M^n$ such that $\phi$ is a local isometry between $(\Omega, \mathsf{d})$ and $(M^n, \mathsf{d}_g)$, that is, for all $p \in \Omega$ there exists an open neighborhood $U \subset \Omega$ of $p$ such that $\phi|_U$ is an isometry from $U$ to $\phi(U)$ as metric spaces; \item{(Hausdorff measure condition)} the restriction $\mathfrak{m} \res_{\Omega}$ of $\mathfrak{m}$ to $\Omega$ coincides with the $n$-dimensional Hausdorff measure $\mathcal{H}^n$ on $\Omega$, that is, $\mathfrak{m} (A)=\mathcal{H}^n(A)$ holds for all Borel subsets $A$ of $\Omega$; \item{(Zero capacity condition)} $X \setminus \Omega$ has zero capacity in the following sense: $\mathfrak{m} (X \setminus \Omega)=0$ is satisfied and there exists a sequence $\phi_i \in C^{\infty}_c(\Omega)$ such that the following two conditions hold; \begin{enumerate} \item for any compact subset $A \subset \Omega$, $\phi_i|_A \equiv 1$ holds for all sufficiently large $i$; \item it holds that $0 \le \phi_i \le 1$ and that \begin{equation}\label{uuh} \sup_i\int_{\Omega} |\Delta \phi_i | \mathsf{d} \mathcal{H}^n<\infty. \end{equation} \end{enumerate} \end{enumerate} \end{definition} The zero capacity condition is a variant of the statement that the `$H^{1, 2}$-capacity of $X \setminus \Omega$ is zero', whose standard definition is given by replacing (\ref{uuh}) by \begin{equation}\label{ffff} \int_{\Omega}|\nabla \phi_i|^2\mathsf{d} \mathcal{H}^n \to 0 \quad (i \to \infty). \end{equation} See \cite{JM}. In particular, (\ref{ffff}) is satisfied if $\|\Delta \phi_i\|_{L^1} \to 0$. Compare with (2) of Proposition \ref{fun}.
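The assertion that (\ref{ffff}) follows from $\|\Delta \phi_i\|_{L^1} \to 0$ is a one-line integration by parts: since $\phi_i \in C^{\infty}_c(\Omega)$ and $0 \le \phi_i \le 1$,

```latex
\begin{equation*}
\int_{\Omega}|\nabla \phi_i|^2\,\mathsf{d} \mathcal{H}^n
  = -\int_{\Omega}\phi_i\, \Delta \phi_i\,\mathsf{d} \mathcal{H}^n
  \le \int_{\Omega}|\Delta \phi_i|\,\mathsf{d} \mathcal{H}^n
  = \|\Delta \phi_i\|_{L^1},
\end{equation*}
```

so $\|\Delta \phi_i\|_{L^1} \to 0$ indeed implies (\ref{ffff}).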
\begin{remark}\label{3332} Whenever we discuss `analysis/geometry on $\Omega$ locally', we can identify $(\Omega, \mathsf{d})$ with the smooth Riemannian manifold $(M^n, g)$ (thus, sometimes, we will use the notations $(\Omega, g), \mathrm{Ric}_{\Omega}^g$ and so on). Note that for all $p \in M^n$ and all sufficiently small $r>0$, $B_r^g(p)$ is convex and it has a uniform lower bound on Ricci curvature. In particular, the volume doubling condition and the Poincar\'e inequality hold locally. Thus, Cheeger's theory \cite{Cheeger} can be applied locally. In particular, the \textit{Lipschitz-Lusin property} holds for all $f \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ (this notion is equivalent to that of differentiability of functions introduced in \cite{Honda4}), that is, for all $\epsilon>0$, there exists a Borel subset $A$ of $\Omega$ such that $\mathfrak{m} (\Omega \setminus A)<\epsilon$ and that $f|_A$ is Lipschitz. Combining this with the locality of the slope in both theories, \cite{AmbrosioGigliSavare13} and \cite{Cheeger}, yields \begin{equation}\label{ppk} |\nabla f|(x) = |\nabla^g (f \circ \phi^{-1})|(\phi(x)) \quad \mathcal{H}^n\text{-a.e. } x \in \Omega, \end{equation} where the RHS denotes the minimal weak upper gradient in the sense of \cite{Cheeger}. Let us give a quick proof of (\ref{ppk}) for the reader's convenience. By the Lipschitz-Lusin property together with the localities of the slopes mentioned above, it suffices to check that, assuming $f \in \mathrm{LIP}(X, \mathsf{d})$, the LHS of (\ref{ppk}) is equal to $\mathrm{Lip} f$ for $\mathfrak{m}$-a.e. $x \in X$. Moreover, since it follows from \cite{AmbrosioGigliSavare13} that $|\nabla f|(x) \le \mathrm{Lip} f(x)$ for $\mathfrak{m}$-a.e. $x \in X$, let us check the converse inequality. Let $x \in \Omega$ and fix any sufficiently small $r>0$ as above.
Note that by \cite{Cheeger}, if $f_i \in \mathrm{LIP}(B_r(x), \mathsf{d})$ $L^2$-strongly converge to $f$ on $B_r(x)$, then \begin{equation}\label{eerf} \liminf_{i \to \infty}\int_{B_r(x)}(\mathrm{Lip} f_i)^2\mathsf{d} \mathcal{H}^n \ge \int_{B_r(x)}(\mathrm{Lip} f)^2\mathsf{d} \mathcal{H}^n. \end{equation} On the other hand, by \cite{AmbrosioGigliSavare14}, there exists a sequence $F_i \in \mathrm{LIP}(X, \mathsf{d})$ such that $F_i, \mathrm{Lip} F_i \to f, |\nabla f|$ in $L^2(X, \mathfrak{m})$, respectively. Applying (\ref{eerf}) for $f_i=F_i$ shows $$ \int_{B_r(x)}|\nabla f|^2\mathsf{d} \mathcal{H}^n\ge \int_{B_r(x)}(\mathrm{Lip} f)^2\mathsf{d} \mathcal{H}^n. $$ Since $x$ and $r$ are arbitrary, we have the converse inequality, $|\nabla f|(x) \ge \mathrm{Lip} f(x)$ for $\mathfrak{m}$-a.e. $x \in X$, which completes the proof. Similarly, the Sobolev space $H^{1, 2}_0(M^n, g, \mathcal{H}^n)$, which is defined in the standard way in Riemannian geometry (that is, the $H^{1, 2}$-closure of $C^{\infty}_c(M^n)$), coincides with $H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$. We will use these compatibilities below without further comment. \end{remark} From now on, we use the notation of Definition \ref{def:asmm} (e.g. $\Omega, \phi_i$) without further comment. \begin{proposition}\label{fun} Let $(X, \mathsf{d}, \mathfrak{m})$ be an $n$-dimensional almost smooth compact metric measure space. Then \begin{enumerate} \item $\phi_i \to 1$ in $L^2(X, \mathfrak{m})$ with $\sup_i \|\phi_i\|_{H^{1, 2}}<\infty$; \item the canonical inclusion map $\iota: H^{1, 2}_0(\Omega, \mathsf{d}, \mathcal{H}^n) \hookrightarrow H^{1, 2}(X, \mathsf{d}, \mathcal{H}^n)$ is an isometry. In particular $(X, \mathsf{d}, \mathcal{H}^n)$ is infinitesimally Hilbertian. \end{enumerate} \end{proposition} \begin{proof} Since $\phi_i(x) \to 1$ for $\mathfrak{m}$-a.e. $x \in X$ and $0 \le \phi_i \le 1$, applying the dominated convergence theorem shows that $\phi_i \to 1$ in $L^2(X, \mathfrak{m})$.
Moreover, since $$ \int_{\Omega}|\nabla \phi_i|^2\mathsf{d} \mathcal{H}^n =-\int_{\Omega}\phi_i\Delta \phi_i\mathsf{d} \mathcal{H}^n \le \int_{\Omega}|\Delta \phi_i|\mathsf{d} \mathcal{H}^n, $$ we have (1). Next, let us check (2). It is trivial that the map $\iota$ preserves the $H^{1, 2}$-norms (we identify $H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$ with its image under $\iota$ for simplicity). As written in Remark \ref{3332}, it also follows from the smoothness of $\Omega$ that $H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$ is a Hilbert space, and that $\phi_i f \in H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$ for all $f \in \mathrm{LIP}(X, \mathsf{d})$. Fix $f \in \mathrm{LIP}(X, \mathsf{d})$. Then, since $$ \int_X|\nabla (\phi_i f)|^2\mathsf{d} \mathfrak{m} \le \int_X\left(2|\nabla f|^2 + 2|f|^2 |\nabla \phi_i|^2\right)\mathsf{d} \mathfrak{m}, $$ we have $\sup_i\|\phi_if\|_{H^{1, 2}}<\infty$. Therefore, since $\phi_if \to f$ in $L^2(X, \mathfrak{m})$, Mazur's lemma yields $f \in H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$. In particular, \begin{equation}\label{rrf} {\sf Ch}(\phi+\psi)+{\sf Ch}(\phi-\psi)=2{\sf Ch}(\phi )+2{\sf Ch}(\psi ) \quad \forall \phi, \psi \in \mathrm{LIP}(X, \mathsf{d}). \end{equation} By \cite{AmbrosioGigliSavare13}, for all $F, G \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$, there exist sequences $F_i, G_i \in \mathrm{LIP}(X, \mathsf{d})$ such that $F_i, G_i \to F, G$ in $L^2(X, \mathfrak{m})$, respectively, and that ${\sf Ch}(F_i), {\sf Ch} (G_i) \to {\sf Ch}(F), {\sf Ch}(G)$, respectively. Then, letting $i \to \infty$ in the equality (\ref{rrf}) for $\phi=F_i, \psi=G_i$ with the lower semicontinuity of the Cheeger energy shows \begin{equation}\label{ss} {\sf Ch}(F+G)+{\sf Ch} (F-G) \le 2{\sf Ch}(F)+2{\sf Ch}(G).
\end{equation} Replacing $F, G$ by $F+G, F-G$, respectively, yields the converse inequality, that is, we have the equality in (\ref{ss}), which proves that $H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ is a Hilbert space. Thus, by \cite{AmbrosioGigliSavare14}, $\mathrm{LIP}(X, \mathsf{d})$ is dense in $H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$. Since we already proved that $\mathrm{LIP}(X, \mathsf{d}) \subset H^{1, 2}_0(\Omega, \mathsf{d}, \mathfrak{m})$, we conclude. \end{proof} \begin{remark} Recall that if $u_i$ $L^2$-weakly converge to $u$ in $L^{2}(X, \mathfrak{m})$ with $\sup_i\|u_i\|_{H^{1, 2}}<\infty$, then $u \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ and $\nabla u_i$ $L^2$-weakly converge to $\nabla u$. Although this statement was already proved in a general setting (e.g. \cite{AmbrosioStraTrevisan} and \cite{Gigli}; see also \cite{AmbrosioHonda} and \cite{Honda2}), let us give a proof for the reader's convenience. Mazur's lemma yields the first statement, $u \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$. To get the second one, since $\sup_i\|\nabla u_i\|_{L^2}<\infty$, it is enough to check that \begin{equation}\label{eer} \int_X\langle \nabla u_i, f\nabla h\rangle \mathsf{d} \mathfrak{m} \to \int_X\langle \nabla u, f\nabla h\rangle \mathsf{d} \mathfrak{m} \quad (i \to \infty) \quad \forall f, h \in C^{\infty}_c(\Omega). \end{equation} Indeed, \begin{align*} \int_X\langle \nabla u_i, f\nabla h\rangle \mathsf{d} \mathfrak{m} &=\int_Xu_i (-\langle \nabla f, \nabla h\rangle -f\Delta h)\mathsf{d} \mathfrak{m} \\ &\to \int_Xu (-\langle \nabla f, \nabla h\rangle -f\Delta h)\mathsf{d} \mathfrak{m} =\int_X\langle \nabla u, f\nabla h\rangle \mathsf{d} \mathfrak{m} \end{align*} which proves (\ref{eer}).
\end{remark} \begin{definition}[$L^2$-strong compactness] A compact metric measure space $(Y, \mathsf{d}, \nu)$ is said to satisfy the \textit{$L^2$-strong compactness condition} if the canonical inclusion $\iota: H^{1, 2}(Y, \mathsf{d}, \nu) \hookrightarrow L^2(Y, \nu)$ is a compact operator. \end{definition} It is well-known that there are several sufficient conditions for the $L^2$-strong compactness condition, for instance the PI condition (i.e. the volume doubling condition and the Poincar\'e inequality hold), which follows from the $\RCD(K, N)$ condition for $N<\infty$ (see for instance \cite{HK} for the proof of the $L^2$-strong compactness condition). However, in general, an $n$-dimensional almost smooth compact metric measure space does not satisfy the $L^2$-strong compactness condition even if $\Omega$ has a uniform lower Ricci bound. To see this, for any two pointed metric spaces $(X_i, \mathsf{d}_i, x_i)\,(i=1, 2)$, let us denote by $(X_1, \mathsf{d}_1, x_1) * (X_2, \mathsf{d}_2, x_2)$ the pointed metric space obtained by gluing them at $x_1=x_2$, that is, the metric space is $$ X_1*X_2:=(X_1 \bigsqcup X_2) /(x_1=x_2) $$ with the intrinsic metric, and the base point is the glued point. See \cite{BuragoBuragoIvanov} for details. Sometimes, we denote the metric space by $(X_1 *X_2, \mathsf{d})$, suppressing the base points for simplicity. \begin{example}\label{ddddd} Let us define a sequence of pointed compact metric spaces $(X_i, \mathsf{d}_i, x_i)$ as follows. Fix $n \ge 3$ and consider a sequence of flat $n$-tori: $$ \mathbb{T}_i^n:=\mathbb{S}^1(1/2^i) \times \mathbb{S}^1(1/2^i) \times \cdots \times \mathbb{S}^1(1/2^i) $$ with fixed points $p_i \in \mathbb{T}_i^n$, where $\mathbb{S}^1(r):=\{v \in \mathbb{R}^2;|v|=r\}$.
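For later use, note that when the glued pieces are geodesic spaces, the intrinsic metric on a one-point gluing admits an explicit description (this is a direct consequence of the definition, since any path joining the two pieces must pass through the glued point):

```latex
% Explicit form of the glued (intrinsic) metric on X_1 * X_2:
\mathsf{d}(a, b)=
\begin{cases}
\mathsf{d}_i(a, b), & a, b \in X_i,\\
\mathsf{d}_1(a, x_1)+\mathsf{d}_2(x_2, b), & a \in X_1,\ b \in X_2.
\end{cases}
```

In particular, each piece embeds isometrically into the glued space.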
Then, let $(X_1, \mathsf{d}_1, x_1):=(\mathbb{T}_1^n, \mathsf{d}_{\mathbb{T}^n_{1}}, p_1)$ and let $$ (X_{i+1}, \mathsf{d}_{i+1}, x_{i+1}):=(X_i, \mathsf{d}_i, x_i) *(\mathbb{T}^n_{i+1}, \mathsf{d}_{\mathbb{T}^n_{i+1}}, p_{i+1}) \quad \forall i \ge 1. $$ Then, let us denote by $(X, \mathsf{d}, x)$ the pointed Gromov-Hausdorff limit space of $(X_i, \mathsf{d}_i, x_i)$. Note that $(X, \mathsf{d})$ is compact, that $\Omega:=X \setminus \{x\}$ satisfies the smoothness with $\mathrm{Ric}_{\Omega}^g \ge 0$, and that there exist canonical isometric embeddings $\mathbb{T}_i^n \hookrightarrow X$ (we identify $\mathbb{T}^n_i$ with the image). Then, we consider the $n$-dimensional Hausdorff measure $\mathcal{H}^n$ as the reference measure $\mathfrak{m}$ on $X$. Let us check the zero capacity condition. It is trivial that $\mathcal{H}^n(X \setminus \Omega)=0$. For all $\epsilon>0$, take $\psi_{\epsilon} \in C^{\infty}(\mathbb{R})$ satisfying that $\psi_{\epsilon}|_{(-\infty, \epsilon]} \equiv 0$, that $0\le \psi_{\epsilon} \le 1$, that $\psi_{\epsilon}|_{[2\epsilon, \infty)} \equiv 1$, that $|\psi_{\epsilon}'| \le 100/\epsilon$ and that $|\psi_{\epsilon}''| \le 100/\epsilon^2$. Define $\phi_i \in C^{\infty}_c(\Omega)$ by $$ \phi_i(y):=\sum_{j=1}^i1_{\mathbb{T}^n_j}(y)\psi_{\pi /2^{i+10}}(\mathsf{d} (x, y)). $$ Then, since it is easy to see that for some universal constant $C_1>0$ $$ |\Delta \psi_{\pi /2^{i+10}}(\mathsf{d} (x, \cdot))| (y) \le C_12^{2i} \quad \forall j \le i, \forall y \in \mathbb{T}^n_j \cap \left(\overline{B}_{\pi/2^{i+9}}(x) \setminus B_{\pi /2^{i+10}}(x)\right), $$ we see that for all $j \le i$ \begin{align*} &\int_{\mathbb{T}^n_j}|\Delta \psi_{\pi /2^{i+10}}(\mathsf{d} (x, y))|\mathsf{d} \mathcal{H}^n \\ &=\int_{\mathbb{T}^n_j \cap \left(\overline{B}_{\pi/2^{i+9}}(x) \setminus B_{\pi /2^{i+10}}(x)\right)}|\Delta \psi_{\pi /2^{i+10}}(\mathsf{d} (x, y))|\mathsf{d} \mathcal{H}^n \le C_22^{(2-n)i}, \end{align*} where $C_2$ is also a universal constant. 
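For the reader's convenience, here is a sketch of the volume count behind the last estimate; it only uses the Laplacian bound on $\psi_{\pi/2^{i+10}}(\mathsf{d}(x, \cdot))$ together with the fact that the annulus has radius comparable to $2^{-i}$, so that its $\mathcal{H}^n$-measure is at most $C2^{-ni}$ for a universal constant $C>0$:

```latex
% |\Delta \psi| \le C_1 2^{2i} on an annulus of H^n-measure at most C 2^{-ni}:
\int_{\mathbb{T}^n_j \cap \left(\overline{B}_{\pi/2^{i+9}}(x) \setminus B_{\pi/2^{i+10}}(x)\right)}
|\Delta \psi_{\pi/2^{i+10}}(\mathsf{d}(x, y))| \,\mathsf{d}\mathcal{H}^n
\le C_1 2^{2i}\cdot C 2^{-ni} = C_2 2^{(2-n)i}.
```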
In particular, $$ \int_X|\Delta \phi_i|\mathsf{d} \mathcal{H}^n \le C_2i2^{(2-n)i} \to 0 \quad (i \to \infty), $$ which proves the zero capacity condition. Thus, $(X, \mathsf{d}, \mathcal{H}^n)$ is an $n$-dimensional almost smooth compact metric measure space. Let us define a sequence $f_i \in L^2(X, \mathcal{H}^n)$ by $$ f_i:=\frac{1}{\mathcal{H}^n(\mathbb{T}^n_i)^{1/2}}1_{\mathbb{T}^n_i}. $$ Then, it is easy to see that $f_i$ $L^2$-weakly converge to $0$ and that $f_i \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ with $\|f_i\|_{L^2}=\|f_i\|_{H^{1, 2}}=1$ (see also Example \ref{ccd}). Since $f_i$ does not $L^2$-strongly converge to $0$, the $L^2$-strong compactness condition does not hold. \end{example} It follows from standard arguments in functional analysis that if an infinitesimally Hilbertian compact metric measure space $(Y, \mathsf{d}, \nu)$ satisfies the $L^2$-strong compactness condition with $\mathrm{dim}\,L^2(Y, \nu)=\infty$, then the spectrum of $-\Delta$ is discrete and unbounded (each eigenvalue has finite multiplicity). Thus, we denote the eigenvalues by $$ 0=\lambda_1(Y, \mathsf{d}, \nu) \le \lambda_2(Y, \mathsf{d}, \nu) \le \lambda_3(Y, \mathsf{d}, \nu) \le \cdots \to \infty, $$ counted with multiplicities, and denote the corresponding eigenfunctions by $\phi_i^Y$ with $\|\phi_i^Y\|_{L^2}=1$. In what follows, we always fix an $L^2$-orthonormal basis $\{\phi_i^Y\}_i$ consisting of eigenfunctions. Moreover, it also holds that for all $f \in L^2(Y, \nu)$, \begin{equation}\label{ee} f=\sum_i \left(\int_Yf\phi_i^Y\mathsf{d} \nu\right) \phi_i^Y\quad \mathrm{in}\,L^{2}(Y, \nu) \end{equation} and that for all $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$, \begin{equation}\label{ee2} f=\sum_i \left(\int_Yf\phi_i^Y\mathsf{d} \nu\right) \phi_i^Y\quad \mathrm{in}\,H^{1, 2}(Y, \mathsf{d}, \nu). \end{equation} For the reader's convenience, we will give proofs of these in the appendix.
We are now in a position to give the main result: \begin{theorem}[From $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$ to $\BE(K(n-1), n)$]\label{be} Let $(X, \mathsf{d}, \mathfrak{m})$ be an $n$-dimensional almost smooth compact metric measure space. Assume that $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $L^2$-strong compactness condition, that each eigenfunction $\phi_i^X$ satisfies $|\nabla \phi_i^X| \in L^{\infty}(X, \mathfrak{m})$ and that $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$ for some $K \in \mathbb{R}$. Then, $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $\BE(K(n-1), n)$-condition. \end{theorem} \begin{proof} Let us use the same notation as above, that is, let $f_N:=\sum_{i=1}^Na_i\phi_i^X$, where $a_i:=\int_Xf\phi_i^X\mathsf{d} \mathfrak{m}$. Note that by (\ref{ee2}), $f_N, \Delta f_N \to f, \Delta f$ in $H^{1,2}(X, \mathsf{d}, \mathfrak{m})$, respectively, as $N \to \infty$. In the following, for all $h \in C^{\infty}(\Omega)$, the Laplacian $\mathrm{tr} (\mathrm{Hess}_h)$ defined in Riemannian geometry is also denoted by the same notation $\Delta h$, without further comment, because \begin{equation}\label{kkjjnnmmkk} \int_{\Omega}\langle \nabla h, \nabla \psi\rangle \mathsf{d} \mathcal{H}^n=-\int_{\Omega}\mathrm{tr} (\mathrm{Hess}_h)\psi \mathsf{d} \mathcal{H}^n, \quad \forall \psi \in C^{\infty}_c(\Omega) \end{equation} is satisfied and (\ref{kkjjnnmmkk}) characterizes the function $\mathrm{tr} (\mathrm{Hess}_h)$ in $L^2_{\mathrm{loc}}(\Omega, \mathcal{H}^n)$. Fix $N \in \mathbb{N}$. Then, let us prove that $|\nabla f_N|^2 \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ as follows. By our assumption on the eigenfunctions, we see that $|\nabla f_N| \in L^{\infty}(X, \mathfrak{m})$. Moreover, the elliptic regularity theorem shows that $f_N|_{\Omega} \in C^{\infty}(\Omega)$.
Since $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$, we have \begin{equation}\label{rr} \frac{1}{2}\Delta |\nabla f_N|^2 \ge |\mathrm{Hess}_{f_N}|^2+\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2 \quad \forall x \in \Omega. \end{equation} Thus, multiplying both sides by $\phi_i$ and then integrating over $\Omega$ shows \begin{align}\label{gg} &\frac{1}{2}\int_{\Omega}|\nabla f_N|^2\Delta \phi_i \mathsf{d} \mathcal{H}^n \ge \int_{\Omega}\phi_i\left( |\mathrm{Hess}_{f_N}|^2 +\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2\right)\mathsf{d} \mathcal{H}^n. \end{align} Since $|\nabla f_N| \in L^{\infty}(X, \mathfrak{m})$, the zero capacity condition and the inequality (\ref{gg}) imply $$ \limsup_{i \to \infty}\int_{\Omega}\phi_i|\mathrm{Hess}_{f_N}|^2\mathsf{d} \mathcal{H}^n<\infty. $$ In particular, $$ \int_A |\mathrm{Hess}_{f_N}|^2\mathsf{d} \mathcal{H}^n\le \limsup_{i \to \infty}\int_{\Omega}\phi_i|\mathrm{Hess}_{f_N}|^2\mathsf{d} \mathcal{H}^n<\infty \quad \forall A \Subset \Omega. $$ Thus, the monotone convergence theorem yields \begin{equation}\label{hhhj} \int_{\Omega}|\mathrm{Hess}_{f_N}|^2\mathsf{d} \mathcal{H}^n<\infty. \end{equation} On the other hand, since $\phi_i \in C_c^{\infty}(\Omega)$ and $f_N|_{\Omega} \in C^{\infty}(\Omega)$, we have $\phi_i |\nabla f_N|^2 \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$. Moreover, since \begin{align*} \int_X|\nabla (\phi_i|\nabla f_N|^2)|^2\mathsf{d} \mathfrak{m} & \le \int_X \left(2 |\nabla \phi_i|^2|\nabla f_N|^4 + 2|\nabla |\nabla f_N|^2|^2\right) \mathsf{d} \mathfrak{m} \\ &\le \int_X \left(2 |\nabla \phi_i|^2|\nabla f_N|^4 + 8|\mathrm{Hess}_{f_N}|^2 |\nabla f_N|^2\right) \mathsf{d} \mathfrak{m}, \end{align*} by (\ref{hhhj}), we have $\sup_i\|\phi_i|\nabla f_N|^2\|_{H^{1, 2}}<\infty$, which completes the proof of the desired statement, $|\nabla f_N|^2 \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$, because $\phi_i|\nabla f_N|^2 \to |\nabla f_N|^2$ in $L^2(X, \mathfrak{m})$.
We are now in a position to finish the proof. Let $\phi \in D(\Delta) \cap L^{\infty}(X, \mathfrak{m})$ with $\Delta \phi \in L^{\infty}(X, \mathfrak{m})$ and $\phi \ge 0$. Multiplying both sides of (\ref{rr}) by $\phi \phi_i$ and integrating over $X$ shows \begin{align}\label{gg2} -\frac{1}{2}\int_X\langle \nabla (\phi \phi_i), \nabla |\nabla f_N|^2\rangle\mathsf{d} \mathfrak{m} \ge \int_X\phi\phi_i \left(|\mathrm{Hess}_{f_N}|^2 +\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2\right)\mathsf{d} \mathfrak{m}. \end{align} Recall that $\phi_i \to 1$ in $L^2(X, \mathfrak{m})$ and that $\nabla \phi_i$ $L^2$-weakly converge to $\nabla 1=0$ with $\nabla (\phi \phi_i)=\phi_i\nabla \phi +\phi\nabla \phi_i$. Thus, we have $$ \mathrm{LHS}\,\mathrm{of}\, (\ref{gg2}) \to -\frac{1}{2}\int_X\langle \nabla \phi, \nabla |\nabla f_N|^2\rangle \mathsf{d} \mathfrak{m} = \frac{1}{2}\int_X |\nabla f_N|^2\Delta \phi\mathsf{d} \mathfrak{m}, $$ where we used $\|\nabla |\nabla f_N|^2\|_{L^2}\le 2\|\mathrm{Hess}_{f_N}\|_{L^2}\|\nabla f_N\|_{L^{\infty}}<\infty$ and $|\nabla f_N|^2 \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$. Moreover, the dominated convergence theorem yields $$ \mathrm{RHS}\,\mathrm{of}\, (\ref{gg2}) \to \int_X\phi \left(|\mathrm{Hess}_{f_N}|^2 +\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2\right)\mathsf{d} \mathfrak{m}. $$ Thus, combining these with letting $i \to \infty$ in (\ref{gg2}) shows \begin{align*} \frac{1}{2}\int_X |\nabla f_N|^2\Delta \phi\mathsf{d} \mathfrak{m} &\ge \int_X\phi \left(|\mathrm{Hess}_{f_N}|^2 +\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2\right)\mathsf{d} \mathfrak{m} \\ &\ge \int_X\phi \left(\frac{(\Delta f_N)^2}{n} +\langle \nabla \Delta f_N, \nabla f_N\rangle +K(n-1)|\nabla f_N|^2\right)\mathsf{d} \mathfrak{m}.
\end{align*} Here the last inequality follows from the pointwise estimate $(\Delta f_N)^2=(\mathrm{tr}\,\mathrm{Hess}_{f_N})^2 \le n|\mathrm{Hess}_{f_N}|^2$ on $\Omega$, by the Cauchy-Schwarz inequality. Therefore, letting $N \to \infty$ yields $$ \frac{1}{2}\int_X|\nabla f|^2 \Delta \phi \mathsf{d} \mathfrak{m} \ge \int_X\phi \left( \frac{(\Delta f)^2}{n} +\langle \nabla \Delta f, \nabla f\rangle +K(n-1)|\nabla f|^2\right)\mathsf{d} \mathfrak{m}, $$ which completes the proof. \end{proof} Let us apply Theorem \ref{be} to an explicit simple example as follows. \begin{example}\label{ccd} Let us check that the metric measure space $$(X, \mathsf{d}, \mathfrak{m}):=\left(\mathbb{S}^n(1) * \mathbb{S}^n(1), \mathsf{d}, \mathcal{H}^n\right)$$ satisfies the $\BE(n-1, n)$-condition ($n \ge 2$), where $\mathbb{S}^n(r):=\{v \in \mathbb{R}^{n+1}; |v|=r\}$. Let us denote by $\mathbb{S}^n_1(1)$ and $\mathbb{S}^n_2(1)$, respectively, the images of the canonical isometric embeddings $\mathbb{S}^n(1) \hookrightarrow X$ onto the first and the second sphere, respectively. Moreover, we denote by $p$ their intersection point. It is worth pointing out that $(X, \mathsf{d}, \mathfrak{m})$ is Ahlfors $n$-regular, which is easily checked. \textit{Being an $n$-dimensional almost smooth compact metric measure space.} Let $\Omega:=X \setminus \{p\}$. Then, it is trivial that $\Omega$ satisfies the smoothness condition with $\mathrm{Ric}_{\Omega}^g\ge (n-1)$ and that $\mathcal{H}^n(X \setminus \Omega)=0$. Let us use $\psi_{\epsilon}$ as in Example \ref{ddddd}. Then, by an argument similar to that in Example \ref{ddddd}, it is easy to check that the functions $\phi_i(x):=\psi_{i^{-1}}(\mathsf{d}(p, x))$ satisfy the zero capacity condition. Thus, $(X, \mathsf{d}, \mathfrak{m})$ is an $n$-dimensional almost smooth compact metric measure space.
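To be a bit more explicit about the zero capacity condition in this example, here is a sketch of the estimate; it uses only the bounds $|\psi_{i^{-1}}'| \le 100i$, $|\psi_{i^{-1}}''| \le 100i^2$ from Example \ref{ddddd} and the Ahlfors $n$-regularity noted above, with constants depending only on $n$:

```latex
% |\Delta \phi_i| \le C(n) i^2 on the annulus B_{2/i}(p) \setminus B_{1/i}(p)
% (where \psi_{i^{-1}}(\mathsf{d}(p, \cdot)) is non-constant), whose
% \mathcal{H}^n-measure is at most C'(n) i^{-n} by Ahlfors n-regularity, hence
\int_X |\Delta \phi_i|\,\mathsf{d}\mathcal{H}^n \le C(n)\, i^{2}\cdot C'(n)\, i^{-n}
 = C''(n)\, i^{2-n},
```

which is bounded in $i$ for $n \ge 2$, so that (\ref{uuh}) holds.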
\textit{Satisfying the $L^2$-strong compactness condition.} We remark that \begin{equation}\label{gvv} f1_{\mathbb{S}^n_j(1)} \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m}) \quad \forall f \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m}) \end{equation} and \begin{equation}\label{qqqqq2} h1_{\mathbb{S}^n_j(1)} \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m}) \quad \forall h \in H^{1, 2}(\mathbb{S}^n_j(1), \mathsf{d}, \mathcal{H}^n) \end{equation} are satisfied, which follow from the zero capacity condition together with a truncation argument, that is, applying the argument to the functions $\phi_i1_{\mathbb{S}^n_j(1)}(f\wedge L \vee -L)$ and $\phi_i1_{\mathbb{S}^n_j(1)}(h\wedge L\vee -L)$, letting $i \to \infty$ and then letting $L \to \infty$, shows (\ref{gvv}) and (\ref{qqqqq2}) (recall the proof of (2) of Proposition \ref{fun}). Let $f_i \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ with $\sup_i\|f_i\|_{H^{1, 2}}<\infty$. Put $f_i^j:=f_i1_{\mathbb{S}^n_j(1)} \in H^{1, 2}(\mathbb{S}^n_j(1), \mathsf{d}, \mathcal{H}^n)$. Then, since the $L^2$-strong compactness condition holds for $(\mathbb{S}^n_j(1), \mathsf{d}, \mathcal{H}^n)$, there exist a subsequence $i(k)$ and $f^j \in L^2(\mathbb{S}^n_j(1), \mathcal{H}^n)$ such that $f_{i(k)}^j \to f^j$ in $L^2(\mathbb{S}^n_j(1), \mathcal{H}^n)$ for all $j \in \{1, 2\}$. In particular, $f_{i(k)}=f_{i(k)}^1+f_{i(k)}^2 \to f^1+f^2=:f$ in $L^2(X, \mathfrak{m})$, which proves the $L^2$-strong compactness condition for $(X, \mathsf{d}, \mathfrak{m})$. \textit{Satisfying the gradient estimates on the eigenfunctions and the $\BE(n-1, n)$ condition.} Let $f$ be an eigenfunction, that is, $f \in D(\Delta)$ with $-\Delta f=\lambda f$ for some $\lambda \ge 0$. For any $h \in H^{1, 2}(\mathbb{S}^n_j(1), \mathsf{d}, \mathcal{H}^n)$, put $\hat{h}=h1_{\mathbb{S}^n_j(1)} \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$.
Then, since \begin{equation}\label{aasa} \int_X\langle \nabla f, \nabla \hat{h}\rangle \mathsf{d} \mathfrak{m} =\lambda \int_Xf\hat{h}\mathsf{d} \mathfrak{m} \end{equation} and $$ \mathrm{LHS}\,\mathrm{of}\,(\ref{aasa}) = \int_{\mathbb{S}^n_j(1)}\langle \nabla f, \nabla h\rangle \mathsf{d} \mathcal{H}^n, \quad \mathrm{RHS}\,\mathrm{of}\,(\ref{aasa}) = \lambda \int_{\mathbb{S}^n_j(1)}f h \mathsf{d} \mathcal{H}^n, $$ we see that $f|_{\mathbb{S}^n_j(1)}$ is an eigenfunction of $(\mathbb{S}^n_j(1), \mathsf{d}, \mathcal{H}^n)$. Thus, $|\nabla (f|_{\mathbb{S}^n_j(1)})| \in L^{\infty}(\mathbb{S}_j^n(1), \mathcal{H}^n)$, which implies $|\nabla f| \in L^{\infty}(X, \mathfrak{m})$. Therefore, we can apply Theorem \ref{be} to show that $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $\BE(n-1, n)$-condition. \textit{Coincidence between the distance $\mathsf{d}_{{\sf Ch}}$ induced by the Cheeger energy and $\mathsf{d}$.} Let us prove: \begin{equation}\label{kkm} \mathsf{d}_{{\sf Ch}}(x, y)= \mathsf{d} (x, y), \quad \forall x, y \in X, \end{equation} where \begin{equation}\label{honn} \mathsf{d}_{{\sf Ch}}(x, y):=\sup \left\{\phi (x)-\phi (y); \phi \in C^0(X) \cap H^{1, 2}(X, \mathsf{d}, \mathfrak{m}),\ |\nabla \phi |(z) \le 1\ \mathfrak{m}\text{-a.e. } z \in X\right\}. \end{equation} Let $x \in \mathbb{S}^n_1(1)$ and let $y \in \mathbb{S}^n_2(1)$. For any $\phi$ as in the RHS of (\ref{honn}), \begin{align}\label{hfhf} \phi (x)-\phi(y) &= \phi|_{\mathbb{S}^n_1(1)}(x)-\phi|_{\mathbb{S}^n_1(1)}(p) + \phi|_{\mathbb{S}^n_2(1)}(p)-\phi|_{\mathbb{S}^n_2(1)}(y) \nonumber \\ &\le \mathsf{d}_{\mathbb{S}^n_1(1)}(x, p)+\mathsf{d}_{\mathbb{S}^n_2(1)}(p, y)=\mathsf{d} (x, y), \end{align} where we used the fact that $\mathsf{d}=\mathsf{d}_{{\sf Ch}}$ in $(\mathbb{S}^n_j(1), \mathsf{d}_{\mathbb{S}^n_j(1)}, \mathcal{H}^n)$. Thus, taking the supremum in (\ref{hfhf}) with respect to $\phi$ shows the inequality `$\le$' in (\ref{kkm}).
To get the converse inequality, let $$ \phi(z):=(1_{\mathbb{S}^n_1(1)}(z)-1_{\mathbb{S}^n_2(1)}(z))\mathsf{d} (p, z). $$ Then, we see that $\phi \in \mathrm{LIP}(X, \mathsf{d})$, that $\mathrm{Lip} \phi (z)\le 1$ for all $z \in X$, and that $\phi(x)-\phi(y)=\mathsf{d} (x, y)$, which proves the converse inequality `$\ge$' in (\ref{kkm}). Similarly, we can prove (\ref{kkm}) in the remaining cases; thus, we have (\ref{kkm}) for all $x, y \in X$. \textit{Poincar\'e inequality and $\RCD (K, \infty)$ condition are not satisfied.} Assume that $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $(1, 2)$-Poincar\'e inequality, that is, there exists $C>0$ such that for all $r>0$, all $x \in X$ and all $f \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$, it holds that \begin{equation}\label{ll} \frac{1}{\mathfrak{m} (B_r(x))}\int_{B_r(x)}\left| f- \frac{1}{\mathfrak{m} (B_r(x))}\int_{B_r(x)}f\mathsf{d} \mathfrak{m} \right|\mathsf{d} \mathfrak{m}\le Cr\left(\frac{1}{\mathfrak{m} (B_r(x))}\int_{B_r(x)}|\nabla f|^2\mathsf{d} \mathfrak{m}\right)^{1/2}. \end{equation} Let $\phi:=1_{\mathbb{S}^n_1(1)}-1_{\mathbb{S}^n_2(1)}$. By (\ref{qqqqq2}), we have $\phi \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$. Then, by the locality of the slope, we have $|\nabla \phi|=0$ $\mathfrak{m}$-a.e. In particular, (\ref{ll}) yields that $\phi$ must be constant, which is a contradiction. For the same reason, for all $K \in \mathbb{R}$, $(X, \mathsf{d}, \mathfrak{m})$ does not satisfy the $\RCD(K, \infty)$-condition. \end{example} \begin{remark}\label{yy6} Example \ref{ccd} also tells us that (\ref{22}) does not imply the expected Bishop-Gromov inequality. In fact, under the same notation as in Example \ref{ccd}, letting $q$ be the antipodal point of $p$ in $\mathbb{S}^n_1(1)$ yields $$ \frac{\mathfrak{m} (B_{2\pi}(q))}{\mathfrak{m} (B_\pi(q))}=2 >1 =\frac{\mathcal{H}^n(B_{2\pi}^{\mathbb{S}^n(1)}(x))}{\mathcal{H}^n(B_{\pi}^{\mathbb{S}^n(1)}(x))} \quad \forall x \in \mathbb{S}^n(1), $$ which is a `reverse' Bishop-Gromov inequality.
Similarly, the $\BE(n-1, n)$ condition with `$\mathsf{d}=\mathsf{d}_{{\sf Ch}}$' does not imply the expected Bonnet-Myers theorem. \end{remark} \begin{corollary}[Characterization of $\RCD$ condition on almost smooth compact metric measure space]\label{corrcd} Let $(X, \mathsf{d}, \mathfrak{m})$ be an $n$-dimensional almost smooth compact metric measure space associated with an open subset $\Omega$ of $X$, and let $K \in \mathbb{R}$. Then, $(X, \mathsf{d}, \mathfrak{m})$ is a $\RCD(K(n-1), n)$ space if and only if the following four conditions hold: \begin{enumerate} \item the Sobolev to Lipschitz property holds; \item the $L^2$-strong compactness condition holds; \item any eigenfunction is Lipschitz; \item $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$ holds. \end{enumerate} \end{corollary} \begin{proof} By Theorem \ref{be}, it is enough to check the `only if' part. If $(X, \mathsf{d}, \mathfrak{m})$ is a $\RCD (K(n-1), n)$ space, then applying Gigli's Bochner inequality (for $H^{1, 2}_H$-vector fields) in \cite{Gigli} shows that for all $f, h, \phi \in C^{\infty}_c(\Omega)$ with $\phi \ge 0$, $$ \int_X\frac{\phi}{2}\Delta |f\nabla h|^2 \mathsf{d} \mathfrak{m} \ge \int_X\phi\left( |\nabla (f\nabla h)|^2 -\langle \Delta_{H}(f\mathsf{d} h), f\mathsf{d} h\rangle +K(n-1)|f\nabla h|^2\right)\mathsf{d} \mathfrak{m}, $$ which implies \begin{equation}\label{ttf} \frac{1}{2}\Delta |f\nabla h|^2 \ge |\nabla (f\nabla h)|^2 -\langle \Delta_{H}(f\mathsf{d} h), f\mathsf{d} h\rangle +K(n-1)|f\nabla h|^2 \quad \forall x \in \Omega \end{equation} because $\phi$ is arbitrary, where $\Delta_H:=\mathsf{d} \delta +\delta \mathsf{d}$ is the Hodge Laplacian acting on $1$-forms. In particular, since (\ref{ttf}) is equivalent to $\mathrm{Ric}_{\Omega}^g(f\nabla h, f\nabla h) \ge K(n-1)|f\nabla h|^2$ for all $x \in \Omega$, we have $\mathrm{Ric}_{\Omega}^g \ge K(n-1)$ because $f, h$ are also arbitrary. The Sobolev to Lipschitz property is part of the definition of a $\RCD$ space.
Moreover, as written previously, the $L^2$-strong compactness condition follows from the doubling condition and the Poincar\'e inequality, which are justified by the Bishop-Gromov inequality \cite{Sturm06} and by \cite{Rajala}, respectively. Finally, since the Lipschitz regularity of the eigenfunctions is satisfied by \cite{Jiang}, we conclude. \end{proof} \begin{corollary}[Another characterization of $\RCD$ condition] Let $(X, \mathsf{d}, \mathfrak{m})$ be an $n$-dimensional almost smooth compact metric measure space associated with an open subset $\Omega$ of $X$, and let $K \in \mathbb{R}$. Then, $(X, \mathsf{d}, \mathfrak{m})$ is a $\RCD(K(n-1), n)$ space if and only if the following four conditions hold: \begin{enumerate} \item $(X, \mathsf{d}, \mathfrak{m})$ is a PI space; \item the distance $\mathsf{d}_{{\sf Ch}}$ induced by the Cheeger energy is equal to the original distance $\mathsf{d}$; \item any eigenfunction $f$ satisfies $|\nabla f| \in L^{\infty}(X, \mathfrak{m})$; \item $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$ holds. \end{enumerate} \end{corollary} \begin{proof} Since the proof of the `only if' part is the same as that of Corollary \ref{corrcd}, let us check the `if' part. By Theorem \ref{be}, we see that $(X, \mathsf{d}, \mathfrak{m})$ is a $\BE(K(n-1), n)$ space. Thus, it suffices to check the Sobolev to Lipschitz property. Let $f \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m})$ with $|\nabla f|(x) \le 1$ for $\mathfrak{m}$-a.e. $x \in X$. Then, the telescope argument with the PI condition (c.f. \cite{Cheeger}) yields that there exists $\hat{f} \in \mathrm{LIP}(X, \mathsf{d})$ such that $f(x)=\hat{f}(x)$ for $\mathfrak{m}$-a.e. $x \in X$. Then, since it is proved in \cite{AmbrosioErbarSavare} that \begin{itemize} \item any $h \in H^{1, 2}(X, \mathsf{d}, \mathfrak{m}) \cap C^0(X)$ with $|\nabla h| \le 1$ $\mathfrak{m}$-a.e. in $X$ is $1$-Lipschitz, \end{itemize} we see that $\hat{f}$ is $1$-Lipschitz, that is, the Sobolev to Lipschitz property holds. Thus, we conclude.
\end{proof} \begin{remark} We should mention that very recently, a similar characterization of $\RCD$ conditions for \textit{stratified spaces}, which provide typical examples of almost smooth metric measure spaces, was proved in \cite{BKMR}. \end{remark} We end this section by giving a sufficient condition for the Sobolev to Lipschitz property. For that, let us introduce the definition of the segment inequality: \begin{definition}[Segment inequality] Let $(Y, \mathsf{d}, \nu)$ be a metric measure space such that $(Y, \mathsf{d})$ is a geodesic space. For a nonnegative valued Borel function $f$ on $Y$, define $$ \mathcal{F}_f(x, y) := \inf_{\gamma}\int_{[0, \mathsf{d} (x, y)]}f(\gamma (s))\mathsf{d} s, \quad \forall x, y \in Y, $$ where the infimum runs over all minimal geodesics $\gamma$ from $x$ to $y$. Then, we say that \textit{$(Y, \mathsf{d}, \nu)$ satisfies the segment inequality} if there exists $\lambda>0$ such that $$\int_{B_r(x) \times B_r(x)}\mathcal{F}_f(y, z) \mathsf{d}(\nu \times \nu) \le \lambda r \nu (B_r(x)) \int_{B_{\lambda r}(x)}f\mathsf{d} \nu \quad \forall x \in Y, \forall r>0, \forall f. $$ \end{definition} Cheeger-Colding proved in \cite{CheegerColding3} that if $(Y, \mathsf{d}, \nu)$ satisfies the volume doubling condition and the segment inequality, then the $(1, 1)$-Poincar\'e inequality holds (see also \cite{CheegerColding} and \cite{HP}). \begin{proposition}[Segment inequality with doubling condition implies Sobolev to Lipschitz property] Let $(Y, \mathsf{d}, \nu)$ be a compact metric measure space such that $(Y, \mathsf{d})$ is a geodesic space. Assume that $(Y, \mathsf{d}, \nu)$ satisfies the volume doubling condition and the segment inequality. Then, the Sobolev to Lipschitz property holds. \end{proposition} \begin{proof} Let $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$ with $|\nabla f| \le 1$ $\nu$-a.e. in $Y$.
As written above, since the $(1, 1)$-Poincar\'e inequality is satisfied, $f$ has a representative $\hat{f} \in \mathrm{LIP}(Y, \mathsf{d})$ by the telescope argument (see for instance \cite{Cheeger}). Thus, since we have $\mathrm{Lip}\hat{f} \le 1$ $\nu$-a.e., which also follows from \cite{Cheeger}, it suffices to check that $\hat{f}$ is $1$-Lipschitz. Let us take a Borel subset $A$ of $Y$ such that $\nu(Y \setminus A)=0$ and that $$\mathrm{Lip}\hat{f}(x) \le 1, \quad \forall x \in A.$$ Applying the segment inequality to $1_{Y \setminus A}$ yields that there exists a Borel subset $B$ of $Y \times Y$ such that $(\nu \times \nu ) ((Y \times Y) \setminus B)=0$ and that for any $(x, y) \in B$ and any $\epsilon >0$, there exists a minimal geodesic $\gamma$ from $x$ to $y$ such that $$ \int_{[0, \mathsf{d} (x, y)]}1_{Y \setminus A}(\gamma (s))\mathsf{d} s<\epsilon.$$ Therefore, since $\mathrm{Lip}\hat{f}$ is an upper gradient of $\hat{f}$, we have \begin{align}\label{21212121} |\hat{f}(x)-\hat{f}(y)| &\le \int_{[0, \mathsf{d} (x, y)]}\mathrm{Lip}\hat{f}(\gamma (s))\mathsf{d} s \nonumber \\ &= \int_{[0, \mathsf{d} (x, y)]} 1_{A}(\gamma (s))\mathrm{Lip}\hat{f}(\gamma (s))\mathsf{d} s + \int_{[0, \mathsf{d} (x, y)]} 1_{Y \setminus A}(\gamma (s))\mathrm{Lip}\hat{f}(\gamma (s))\mathsf{d} s \nonumber \\ &\le \mathsf{d} (x, y) + \sup_z \mathrm{Lip}\hat{f}(z)\, \epsilon. \end{align} Since $\epsilon$ is arbitrary and $B$ is dense in $Y \times Y$, (\ref{21212121}) yields that $\hat{f}$ is $1$-Lipschitz. \end{proof} \begin{remark}Let us give some remarks on related works. Note that in the following, the spaces are not necessarily compact. As we already used, Jiang proved in \cite{Jiang} gradient estimates on solutions of Poisson's equations (including eigenfunctions) in the setting of metric measure spaces under mild geometric conditions and a heat semigroup curvature condition (also called a weighted Sobolev inequality).
Bamler \cite{Bamler} and Chen-Wang \cite{ChenWang} independently proved such conditions (including the segment inequality) in their almost smooth settings. One interesting question is the following: if an $n$-dimensional almost smooth (compact) metric measure space $(X, \mathsf{d}, \mathfrak{m})$ satisfies that the induced distance $\mathsf{d}_g$ by $g$ on $\Omega$ coincides with $\mathsf{d}|_{\Omega}$, is the condition $\mathrm{Ric}_{\Omega}^g\ge K(n-1)$ equivalent to $(X, \mathsf{d}, \mathfrak{m})$ being an $\RCD(K(n-1), n)$-space? \end{remark} \subsection{Nonconstant dimensional case} In this section, let us discuss a variant of $n$-dimensional almost smooth compact metric measure spaces. Let us recall that we fix a compact metric measure space $(X, \mathsf{d}, \mathfrak{m})$. \begin{definition}[Generalized almost smooth compact metric measure space] We say that $(X, \mathsf{d}, \mathfrak{m})$ is a \textit{generalized almost smooth compact metric measure space associated with an open subset $\Omega$ of $X$} if the following three conditions are satisfied; \begin{enumerate} \item{(Generalized smoothness of $\Omega$)} for all $p \in \Omega$, there exist an integer $n(p) \in \mathbb{N}$, an open neighborhood $U_p$ of $p$ in $\Omega$, an $n(p)$-dimensional (possibly incomplete) Riemannian manifold $(M^{n(p)}, g)$ and a map $\phi: U_p \to M^{n(p)}$ such that $\phi$ is a local isometry between $(U_p, \mathsf{d})$ and $(M^{n(p)}, \mathsf{d}_g)$; \item{(Hausdorff measure condition)} for all $p \in \Omega$, take $U_p$ as above. Then, the restriction of $\mathfrak{m}$ to $U_p$ coincides with the $n(p)$-dimensional Hausdorff measure $\mathcal{H}^{n(p)}$; \item{(Zero capacity condition)} $X \setminus \Omega$ has zero capacity in the sense of Definition \ref{def:asmm}.
\end{enumerate} \end{definition} By an argument similar to the proof of Theorem \ref{be}, we have the following: \begin{theorem}[From $\mathrm{Ric}_{\Omega}^g \ge K$ to $\BE (K, N)$] Let $(X, \mathsf{d}, \mathfrak{m})$ be a generalized almost smooth compact metric measure space associated with an open subset $\Omega$ of $X$. Assume that $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $L^2$-strong compactness condition, that each eigenfunction $\phi_i$ satisfies $|\nabla \phi_i| \in L^{\infty}(X, \mathfrak{m})$ and that $\mathrm{Ric}_{\Omega}^g\ge K$ for some $K \in \mathbb{R}$. Then, $(X, \mathsf{d}, \mathfrak{m})$ satisfies the $\BE(K, N)$-condition, where $N:=\sup_pn(p)$. \end{theorem} \begin{example}\label{1091} By an argument similar to that in Example \ref{ccd}, we can easily see that for any two (not necessarily of the same dimension) closed pointed Riemannian manifolds $(M_i^{m_i}, g_i, p_i)$ $(m_i \ge 2)$ with $\mathrm{Ric}_{M_i^{m_i}}^{g_i} \ge K$ for some $K \in \mathbb{R}$, the metric measure space $$ \left(M_1^{m_1} * M_2^{m_2}, \mathsf{d}, \mathcal{H}^{m_1}\res_{M_1^{m_1}} + \mathcal{H}^{m_2}\res_{M_2^{m_2}}\right) $$ is a $\BE(K, \max \{m_1, m_2\})$ space with $\mathsf{d}_{{\sf Ch}}=\mathsf{d}$. More generally, similar constructions of $\BE(K, N)$ spaces by gluing embedded closed convex submanifolds $N_i^{n_i} \subset M_i^{m_i}$, which are isometric to each other, with $m_i-n_i\ge 2$ are also justified. \end{example} \begin{remark} In this paper we discuss only the \textit{unweighted} case, that is, the case when the restriction of the reference measure to the smooth part is the Hausdorff measure $\mathcal{H}^n$. Similar results are also obtained in the weighted case, $e^{-f}\mathsf{d} \mathcal{H}^n$, where $f \in C^{\infty}(\Omega)$, under suitable assumptions on $f$ by using the Bakry-\'Emery ($N$-) Ricci tensor (and the Witten Laplacian $\Delta_f$, respectively) instead of the original Ricci tensor (and the Laplacian $\Delta$, respectively).
However, we do not discuss the details because our main focus is to discuss some `flexibility' of the $\BE(K, N)$ conditions as in Example \ref{1091}, which is very different from the $\RCD(K, N)$ conditions, and to give a bridge between almost smooth spaces and noncollapsed $\RCD$ spaces introduced in \cite{PhGi}, as in Corollary \ref{corrcd}. \end{remark} \section{Appendix} Let $(Y, \mathsf{d}, \nu)$ be an infinitesimally Hilbertian compact metric measure space and assume that $(Y, \mathsf{d}, \nu)$ satisfies the $L^2$-strong compactness condition with $\dim L^2(Y, \nu) =\infty$. In this appendix, we will show that the spectrum of $-\Delta$ is discrete and unbounded, and that (\ref{ee}) and (\ref{ee2}) hold. Let us begin with proving the following lemma: \begin{lemma}\label{ww3} For all $\lambda \in \mathbb{R}$, let $E(\lambda):=\{ f \in D(\Delta);-\Delta f=\lambda f\}$. \begin{enumerate} \item If $\dim E(\lambda) \ge 1$, then $\lambda \ge 0$ (such a $\lambda$ is called an eigenvalue of $-\Delta$). \item $\dim E(\lambda)<\infty$ holds. \end{enumerate} \end{lemma} \begin{proof} Let us check (1). Taking $f \in E(\lambda)$ with $f \neq 0$ in $L^2(Y, \nu)$ yields $ \lambda =\frac{\int_Y|\nabla f|^2\mathsf{d} \nu}{\int_Y|f|^2\mathsf{d} \nu} \ge 0. $ To prove (2), with no loss of generality, we can assume $\dim E(\lambda) \ge 1$. Let us check that $(E(\lambda), \| \cdot \|_{L^2})$ is a Hilbert space. Take a Cauchy sequence $f_i$ in $E(\lambda)$. Let $f \in L^2(Y, \nu)$ be the $L^2$-strong limit function. Since $\|\nabla f_i\|_{L^2}^2=\lambda \|f_i\|_{L^2}^2$, $f_i$ is a bounded sequence in $H^{1, 2}(Y, \mathsf{d}, \nu)$. Thus, Mazur's lemma shows that $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$ and that $f_i$ converges weakly to $f$ in $H^{1, 2}(Y, \mathsf{d}, \nu)$.
Therefore, letting $i \to \infty$ in $$ \int_Y\langle \nabla f_i, \nabla g\rangle \mathsf{d} \nu=\lambda \int_Yf_ig\mathsf{d} \nu \quad \forall g \in H^{1, 2}(Y, \mathsf{d}, \nu) $$ yields $$ \int_Y\langle \nabla f, \nabla g\rangle \mathsf{d} \nu=\lambda \int_Yfg\mathsf{d} \nu \quad \forall g \in H^{1, 2}(Y, \mathsf{d}, \nu), $$ which shows $f \in E(\lambda)$, where the convergence of the left-hand sides comes from the polarization. Thus, $(E(\lambda), \| \cdot \|_{L^2})$ is a Hilbert space. Then, a similar argument using the $L^2$-strong compactness condition allows us to prove that $S(\lambda)$ is a compact subset of $E(\lambda)$, where $S(\lambda):=\{f \in E(\lambda); \|f\|_{L^2}=1\}$. Therefore, $\dim E(\lambda)<\infty$. \end{proof} \begin{lemma}\label{ww2} The set $\mathcal{E}(Y, \mathsf{d}, \nu):=\{\lambda \in \mathbb{R}_{\ge 0}; \dim E(\lambda) \ge 1\}$ is discrete. \end{lemma} \begin{proof} The proof is done by contradiction. Assume that there exists a sequence $\lambda_i \in \mathcal{E}(Y, \mathsf{d}, \nu)$ such that $\lambda_i \neq \lambda_j$ $(i \neq j)$ and that $\lambda_i \to \lambda \in \mathbb{R}$. Take $f_i \in E(\lambda_i)$ with $\|f_i\|_{L^2}=1$. Then, since $\|f_i\|_{H^{1, 2}}^2=\lambda_i$, by the $L^2$-strong compactness condition, with no loss of generality, we can assume that there exists the $L^2$-strong limit function $f$ of $f_i$. Thus, $\|f\|_{L^2}=1$. Moreover, a similar argument to that in the proof of (2) of Lemma \ref{ww3} shows $f \in E(\lambda)$. In particular, $\lambda$ is an eigenvalue of $-\Delta$. Let $\{g_j\}_{j=1, 2, \ldots, N}$ be an ONB of $E(\lambda)$. Since $g_j \perp f_i$ in $L^2(Y, \nu)$, letting $i \to \infty$ yields $g_j \perp f$. Therefore, $\{g_j\}_j \cup \{f\}$ is linearly independent in $E(\lambda )$, which contradicts that $\{g_j\}_j$ is a basis of $E(\lambda)$. \end{proof} \begin{lemma}\label{ww4} The set $\mathcal{E}(Y, \mathsf{d}, \nu)$ is unbounded.
\end{lemma} \begin{proof} Note that since $1 \in E(0)$, we have $\mathcal{E}(Y, \mathsf{d}, \nu) \neq \emptyset$. Assume that $\mathcal{E}(Y, \mathsf{d}, \nu)$ is bounded. Then, Lemma \ref{ww2} yields that $\mathcal{E}(Y, \mathsf{d}, \nu)$ is a finite set. By Lemma \ref{ww3}, there exists an ONB, $\{f_i\}_{i=1, 2,\ldots, N}$, of $ \bigoplus_{\lambda \in \mathcal{E}(Y, \mathsf{d}, \nu)}E(\lambda) (=:V). $ On the other hand, it is easy to see that the number $$ \lambda_*:=\inf_{f \perp V}\frac{\int_Y|\nabla f|^2\mathsf{d} \nu}{\int_Y|f|^2\mathsf{d} \nu} $$ is also an eigenvalue of $-\Delta$ and that there exists a minimizer $f_*$ of the right-hand side with $f_* \in E(\lambda_*)$ and $\|f_*\|_{L^2}=1$, where we used our assumption $\dim L^2(Y, \nu)=\infty$ for the infimum to make sense. Thus, since $f_* \in V$ and $f_* \perp V$, we have $f_*=0$, which contradicts $\|f_*\|_{L^2}=1$. \end{proof} Lemmas \ref{ww2} and \ref{ww4} allow us to denote the eigenvalues of $-\Delta$ by $$ 0=\lambda_1(Y, \mathsf{d}, \nu) \le \lambda_2(Y,\mathsf{d}, \nu) \le \cdots \to \infty, $$ counted with multiplicities. Fix corresponding eigenfunctions $\phi_i^Y$ with $\|\phi_i^Y\|_{L^2}=1$. \begin{proposition}\label{ww5} For all $f \in L^2(Y, \nu)$, we have (\ref{ee}). \end{proposition} \begin{proof} We first assume that $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$. For all $N \in \mathbb{N}$, let $f_N:=\sum_i^N(\int_Yf\phi_i^Y\mathsf{d} \nu)\phi_i^Y$ and let $g_N:=f-f_N$. With no loss of generality, we can assume that $g_N \not \equiv 0$ for all $N$. Then, since for all $i \le N$ \begin{align*} \int_Yg_N\phi_i^Y\mathsf{d} \nu&=\int_Yf\phi_i^Y\mathsf{d} \nu-\sum_j^N\left(\int_Yf\phi_j^Y\mathsf{d} \nu\right)\int_Y\phi_j^Y\phi_i^Y\mathsf{d} \nu \\ &=\int_Yf\phi_i^Y\mathsf{d} \nu-\int_Yf\phi_i^Y\mathsf{d} \nu=0, \end{align*} we have $g_N \perp V_N$, where $V_N:= \mathrm{span}\,\{\phi_i^Y\}_{i=1, 2,\ldots, N}$.
On the other hand, it is easy to see that the number $$ \lambda_{N+1}:=\inf_{h \perp V_N}\frac{\int_Y|\nabla h|^2\mathsf{d} \nu}{\int_Y|h|^2\mathsf{d} \nu} $$ coincides with $\lambda_{N+1}(Y, \mathsf{d}, \nu)$ (the inequality $\lambda_{N+1} \le \lambda_{N+1}(Y, \mathsf{d}, \nu)$ is trivial; the converse follows by checking that $\lambda_{N+1}$ is an eigenvalue of $-\Delta$, which is similar to the proof of Lemma \ref{ww4}). Therefore, we have $\|\nabla g_N\|_{L^{2}}^2 \ge \lambda_{N+1}(Y, \mathsf{d}, \nu)\|g_N\|_{L^2}^2$. Since \begin{align*} \int_Y|\nabla g_N|^2\mathsf{d} \nu &= \int_Y|\nabla f|^2\mathsf{d} \nu -2\int_Y\langle \nabla f, \nabla f_N\rangle \mathsf{d} \nu +\int_Y|\nabla f_N|^2\mathsf{d} \nu \\ &=\int_Y|\nabla f|^2\mathsf{d} \nu -\int_Y|\nabla f_N|^2\mathsf{d} \nu \le \int_Y|\nabla f|^2\mathsf{d} \nu, \end{align*} we have $\|g_N\|_{L^2}^2\le (\lambda_{N+1}(Y, \mathsf{d}, \nu))^{-1}\|\nabla f\|_{L^2}^2 \to 0$ as $N \to \infty$, which shows (\ref{ee}) in the case when $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$. Next, let us check (\ref{ee}) for general $f \in L^2(Y, \nu)$. Take a sequence $F_n \in H^{1, 2}(Y, \mathsf{d}, \nu)$ with $F_n \to f$ in $L^2(Y, \nu)$. Let $a_i:=\int_Yf\phi_i^Y\mathsf{d} \nu$, let $a_{n, i}:=\int_YF_n\phi_i^Y\mathsf{d} \nu$, let $f_N:=\sum_i^Na_i\phi_i^Y$ and let $F_{n, N}:=\sum_i^Na_{n, i}\phi_i^Y$. For all $\epsilon>0$, there exists $n_0$ such that $\|f-F_{n_0}\|_{L^2}<\epsilon$. Then, there exists $N_0$ such that for all $N \ge N_0$, we have $\|F_{n_0}-F_{n_0, N}\|_{L^2}<\epsilon$. Moreover, \begin{align*} \int_Y|F_{n_0, N}-f_N|^2\mathsf{d} \nu &=\sum_i^N(a_{n_0, i}-a_i)^2\\ &\le \sum_i(a_{n_0, i}-a_i)^2 \\ &= \sum_i\left(\int_Y(F_{n_0}-f)\phi_i^Y\mathsf{d} \nu\right)^2 \le \|F_{n_0}-f\|_{L^2}^2 \le \epsilon^2, \end{align*} where we used the fact that for any ONS, $\{e_i\}_i$, in a Hilbert space $(H, \langle \cdot, \cdot \rangle)$, we have $|v|^2 \ge \sum_i\langle v, e_i\rangle^2$ for all $v \in H$ (Bessel's inequality).
Therefore, for all $N \ge N_0$, $$ \|f-f_N\|_{L^2} \le \|f-F_{n_0}\|_{L^2}+\|F_{n_0}-F_{n_0, N}\|_{L^2}+\|F_{n_0, N}-f_N\|_{L^2} \le 3\epsilon, $$ which completes the proof. \end{proof} \begin{proposition} For all $f \in H^{1, 2}(Y, \mathsf{d}, \nu)$, we have (\ref{ee2}). \end{proposition} \begin{proof} Let $f_N:=\sum_i^Na_i\phi_i^Y$, where $a_i=\int_Yf\phi_i^Y\mathsf{d} \nu$. Then, $$ \|\nabla f_N\|_{L^2}^2=\sum_i^N\lambda_i(Y, \mathsf{d}, \nu)(a_i)^2. $$ On the other hand, \begin{align*} \int_Y\langle \nabla f, \nabla f_N\rangle \mathsf{d} \nu &=\sum_i^Na_i\int_Y\langle \nabla f, \nabla \phi_i^Y\rangle \mathsf{d} \nu \\ &=\sum_i^Na_i\lambda_i(Y, \mathsf{d}, \nu)\int_Yf\phi_i^Y\mathsf{d} \nu =\sum_i^N\lambda_i(Y, \mathsf{d}, \nu)(a_i)^2 =\|\nabla f_N\|_{L^2}^2. \end{align*} In particular, the Cauchy-Schwarz inequality yields $\|\nabla f_N\|_{L^2}^2\le \|\nabla f\|_{L^2}\|\nabla f_N\|_{L^2}$. Thus, since $\|\nabla f_N\|_{L^2}\le \|\nabla f\|_{L^2}$, we have $\sup_N \|f_N\|_{H^{1, 2}}<\infty$. Since $f_N \to f$ in $L^2(Y, \nu)$ as $N \to \infty$, Mazur's lemma shows (\ref{ee2}). \end{proof}
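The Hilbert-space mechanism behind Proposition \ref{ww5} (expansion of an arbitrary element along an orthonormal eigenbasis of the Laplacian) can be illustrated in a finite-dimensional toy model. The sketch below uses the graph Laplacian of a cycle on six vertices, whose eigenbasis is the real Fourier basis; the graph, the dimension and all variable names are our own illustrative choices and do not appear in the paper.

```python
import math

n = 6  # number of vertices of the cycle C_n

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    r = math.sqrt(dot(v, v))
    return [a / r for a in v]

# Real Fourier eigenbasis of the cycle Laplacian; eigenvalue 2 - 2cos(2*pi*k/n).
basis, eigvals = [], []
for k in range(n // 2 + 1):
    lam = 2 - 2 * math.cos(2 * math.pi * k / n)
    basis.append(normalize([math.cos(2 * math.pi * k * j / n) for j in range(n)]))
    eigvals.append(lam)
    if 0 < k < n / 2:  # sine partner exists except for k = 0 and k = n/2
        basis.append(normalize([math.sin(2 * math.pi * k * j / n) for j in range(n)]))
        eigvals.append(lam)

def laplacian(v):
    # (Lv)_j = 2 v_j - v_{j-1} - v_{j+1} on the cycle
    return [2 * v[j] - v[(j - 1) % n] - v[(j + 1) % n] for j in range(n)]

# Each basis vector is an eigenvector: L e = lambda e.
for e, lam in zip(basis, eigvals):
    Le = laplacian(e)
    assert all(abs(Le[j] - lam * e[j]) < 1e-12 for j in range(n))

# Finite-dimensional analogue of (ee): f = sum_i <f, e_i> e_i for arbitrary f.
f = [1.0, -2.0, 0.5, 3.0, 0.0, -1.0]
recon = [sum(dot(f, e) * e[j] for e in basis) for j in range(n)]
assert all(abs(f[j] - recon[j]) < 1e-12 for j in range(n))
print("eigen-expansion verified on the cycle graph")
```

In the compact metric measure setting of the appendix, the $L^2$-strong compactness condition plays the role that finite-dimensionality plays here: it guarantees a discrete spectrum and the completeness of the eigenfunction system.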
\renewcommand{\Im}{{\rm \,Im\,}} \renewcommand{\Re}{{\rm \,Re\,}} \title{Two-dimensional RCFT's without Kac-Moody symmetry} \author{Harsha R. Hampapura$^a$} \author{and Sunil Mukhi$^{a,b}$} \affiliation{$^a$ Indian Institute of Science Education and Research,\\ Homi Bhabha Rd, Pashan, Pune 411 008, India} \affiliation{$^b$ Yukawa Institute of Theoretical Physics\\ Kyoto University, Kyoto 606-8502, Japan} \emailAdd{[email protected]} \emailAdd{[email protected]} \abstract{Using the method of modular-invariant differential equations, we classify a family of Rational Conformal Field Theories with two and three characters having no Kac-Moody algebra. In addition to unitary and non-unitary minimal models, we find ``dual'' theories whose characters obey bilinear relations with those of the minimal models to give the Moonshine Module. In some ways this relation is analogous to cosets of meromorphic CFT's. The theory dual in this sense to the Ising model has central charge $\frac{47}{2}$ and is related to the Baby Monster Module.} \preprint{YITP-16-61} \keywords{Conformal field theory, Modular invariance, 3d gravity} \begin{document} \maketitle \section{Introduction} Conformal field theories in two dimensions \cite{DiFrancesco:1997nk} have a chiral symmetry algebra involving fields of spin $n\ge 1$. The conformal symmetry itself is generated by a spin-2 Virasoro algebra, while other algebras may or may not be present in general. In the holographic perspective \cite{Brown:1986nw} the spin-2 Virasoro algebra is the asymptotic symmetry of a spin-2 gauge theory, i.e. gravity, in the bulk. Likewise, a chiral algebra of spin-$N$ tells us that the bulk theory has spin-$N$ gauge fields. For $N=1$ this is the case of (Abelian or non-Abelian) gauge symmetry in the bulk.
In this paper we will focus our attention on rational conformal field theories (RCFT's) in 2d, for which the total number of characters is finite. We will study the characters of these theories, defined as: \begin{equation} \chi_i(q)={\rm tr}_{{\cal H}_i} q^{-\frac{c}{24}+L_0} \end{equation} where $q=e^{2\pi i\tau}$ and $\tau$ is the coordinate on the moduli space of the torus. The trace is taken over the Hilbert space ${\cal H}_i$ of chiral states above the $i$th primary state. The characters are typically in correspondence with primary fields (up to degeneracies) of the full chiral algebra of the theory. Within RCFT's, the presence of spin-1 algebras is particularly interesting as they give rise to affine theories (WZW models, in Lagrangian language) based on a Kac-Moody current algebra. These in turn generate a vast set of 2d CFT's via the famous coset construction \cite{Goddard:1984vk, Goddard:1986ee}. Conversely one may ask for RCFT's that have no spin-1 chiral algebra. It is known \cite{Belavin:1984vu} that the only case where one can have a finite number of primaries without {\em any} other algebra beyond the Virasoro algebra is when the central charge $c$ of this algebra is less than 1. The resulting models, called minimal models, are exactly solvable by virtue of having null vectors in the Virasoro module. They also have physical relevance, with the unitary series being related to RSOS models at criticality \cite{Andrews:1984af} and the $c=-\frac{22}{5}$ non-unitary model being related to the Lee-Yang edge singularity. The weaker condition that only a spin-1 algebra is absent (while arbitrary algebras of spin $N\ge 2$ are allowed) has not, to our knowledge, been classified. One interesting RCFT with this property is the ``Moonshine Module CFT'', which has a single character and is extremely interesting from the point of view of the mathematics of sporadic discrete groups \cite{Frenkel:1988xz}.
Below we investigate possible solutions to the requirement of no Kac-Moody algebra for theories with a small number (but greater than 1) of characters. We will find new theories that bear an intriguing relationship with the Moonshine CFT. One way to discover new RCFT's is to look for solutions to modular-invariant differential equations for their characters \cite{Anderson:1987ge,Eguchi:1988wh, Mathur:1988rx, Mathur:1988na, Mathur:1988gt}. In this approach one fixes a priori the number of characters of the desired theory, as well as an integer parameter $\ell$ describing the number of zeroes in moduli space of the Wronskian of the characters, and then writes down a general modular-invariant differential equation. For small $\ell$, this turns out to have a small number of arbitrary parameters. One then searches for those values of the parameters for which the solutions have a $q$-series expansion with non-negative integral coefficients. If this is verified to a sufficiently high order in the expansion then one has a candidate RCFT. With the available information, it is often possible to directly identify it as a WZW theory or coset theory \cite{Mathur:1988na, Hampapura:2015cea, Gaberdiel:2016zke}, thereby proving its existence as a CFT, and to reconstruct its fusion rules and primary correlators \cite{Mathur:1988gt}. How is the absence of a spin-1 current algebra reflected in CFT characters? Among all the characters there is a unique one called the identity character. In unitary RCFT's this is the one with the most singular behaviour as $q\to 0$, though it can be more tricky to identify in non-unitary theories. Its leading behaviour is $q^{-\frac{c}{24}}$, where $c$ is the central charge of the theory. Let us now look at the coefficient of the successive term, $q^{-\frac{c}{24}+1}$. This is the number of states created from the ground state of the theory by acting on it with a mode of index $-1$ of one of the symmetry generators. 
If $K(z)$ is a generic spin-$N$ field in the chiral algebra then the modes are given by $K(z)=\sum_{n\in \relax{\sf Z\kern-.35em Z}}K_n z^{-n-N}$. From considerations of non-singularity at the origin, it follows that: \begin{equation} K_{-i}|0 \rangle=0,\qquad i<N \end{equation} We see in particular that $K_{-1}|0\rangle=0$ for all generators of spin $N\ge 2$. Therefore the only way to have terms of order $q^{-\frac{c}{24}+1}$ in the identity character is to have currents $J^a_n$, and these terms must then have a $1-1$ correspondence with the states $J_{-1}^a|0\rangle$. Indeed, this approach was used in Refs.\cite{Mathur:1988na, Mathur:1988gt, Hampapura:2015cea, Gaberdiel:2016zke} to determine the dimension of the current algebra given the degeneracy of the first excited state in the identity character. It follows that if there are no spin-1 currents in the theory then the coefficient of $q^{-\frac{c}{24}+1}$ in the identity character must be zero. If we parametrise the identity character as: \begin{equation} \chi_0(q)=q^{-\frac{c}{24}}\Big(1+m_1 q+m_2 q^2+\cdots\Big) \end{equation} then this tells us that $m_1=0$. There is another way to have spin-1 currents, even if the identity character has $m_1=0$. Normally, characters are defined such that all states of integer dimension are included in the identity character. However it may happen that a theory has states of integer dimension but they are counted as descendants of some other primary (not the identity). Then they will appear in a distinct character built above a primary of integer dimension. If this primary has dimension 1 then it is a Kac-Moody current. Thus even with $m_1=0$ in the identity character, we have to ensure the absence of any dimension-1 primary in the theory by looking at the remaining characters. 
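The counting of currents by $m_1$ can be checked numerically on two standard vacuum characters. Below we expand (dropping the overall $q^{-c/24}$ prefactor) the $SU(2)_1$ WZW vacuum character, for which $m_1=3$ matches the three currents $J^a_{-1}|0\rangle$, and the $c=-\frac{22}{5}$ (Lee-Yang) vacuum character, for which $m_1=0$. This is our own numerical sketch; the standard theta-over-eta and Rogers-Ramanujan product formulas used here are not taken verbatim from this paper.

```python
N = 12  # truncation order in q (our choice)

def mul(a, b):
    # product of two power series truncated at order N
    out = [0] * N
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < N:
                out[i + j] += x * y
    return out

def inv_prod(ns):
    """q-expansion of prod_{n in ns} (1 - q^n)^{-1} to order N."""
    s = [1] + [0] * (N - 1)
    for n in ns:
        geom = [1 if k % n == 0 else 0 for k in range(N)]  # 1/(1-q^n)
        s = mul(s, geom)
    return s

# SU(2)_1 vacuum character: theta-series numerator sum_n q^{n^2} over eta-like product.
theta = [0] * N
m = 0
while m * m < N:
    theta[m * m] += 1 if m == 0 else 2
    m += 1
su2_1 = mul(theta, inv_prod(range(1, N)))

# Lee-Yang vacuum character: prod over n = 2,3 mod 5 of (1-q^n)^{-1} (Rogers-Ramanujan).
lee_yang = inv_prod([n for n in range(1, N) if n % 5 in (2, 3)])

assert su2_1[0] == 1 and su2_1[1] == 3      # m_1 = 3 = dim SU(2): three currents
assert lee_yang[0] == 1 and lee_yang[1] == 0  # m_1 = 0: no Kac-Moody currents
print("m_1(SU(2)_1) =", su2_1[1], " m_1(Lee-Yang) =", lee_yang[1])
```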
To summarise, in order to discover potentially new RCFT's that do not have a Kac-Moody algebra, we have to look for consistent sets of characters transforming into each other under modular transformations and having the property that the identity character has $m_1=0$, and we also have to ensure there is no spin-1 primary in the theory. Suppose first that the theory has a single character and a central charge of the form $c=24k$ where $k$ is an integer. Then of course the only requirement to have no Kac-Moody algebra is $m_1=0$. This can be explicitly solved as follows. The character must be a degree-$k$ polynomial in the Klein $j$-invariant. The $q$-expansion of $j$ is $j(q)=q^{-1}+744 + {\cal O}(q)$. We will find it more convenient to work with $J(q)=j(q)-744=q^{-1}+{\cal O}(q)$. Clearly a polynomial in $j$ is also a polynomial of the same degree in $J$. Now let us write: \begin{equation} \chi_0(q)=P_k(J)=\sum_{m=0}^k a_m J^m \end{equation} In order for the identity field to be non-degenerate we must have $a_k=1$, and for the coefficient of $q^{-\frac{c}{24}+1}=q^{-k+1}$ to vanish we require $a_{k-1}=0$. These two conditions give us an infinite set of potential characters for one-character theories without Kac-Moody symmetry. The simplest theory in this class has $\chi(q)=J(q)$ and $c=24$, and corresponds to the famous Moonshine Module. In this example the CFT associated to this character is known to exist and has been constructed, but this is not yet the case for arbitrary characters of the above form. In general, we must think of the above conditions as necessary but not sufficient for the existence of CFT's without Kac-Moody symmetry. Another interesting class of examples is provided by the $c<1$ minimal models. These are labelled by two integers $(p,p')$. 
Their central charge and primary conformal dimensions are as follows: \begin{equation} \label{minimalmod ch} c = 1 - \frac{6(p-p')^2}{pp'} \hspace{1cm} h_{r,s}=\frac{(rp'-sp)^2-(p-p')^2}{4pp'},\quad 1\le r \le p-1,~ 1\le s\le p'-1 \end{equation} The unitary case corresponds to $p'=p+1$ and in this case the characters are given by \cite{DiFrancesco:1997nk}: \begin{equation} \chi_{r,s}= K_{r,s} - K_{r,-s} \end{equation} where \begin{equation} K_{r,s} = \frac{q^{-\frac{c}{24}} } {\prod_{n=1}^\infty (1-q^n)} \sum_{n\in\relax{\sf Z\kern-.35em Z}} q^{\frac{(2np(p+1) +r(p+1) -sp)^2 -1}{4p(p+1)}} \end{equation} Using \eref{minimalmod ch}, the expression for the character becomes: \begin{equation} \chi_{r,s} =\frac{q^{-\frac{c}{24} + h_{r,s} } } {\prod_{n=1}^\infty (1-q^n)} \sum_{n \in \relax{\sf Z\kern-.35em Z}} q^{n^2p(p+1) + n(r(p+1)-sp)}\left(1-q^{(2np+r)s}\right) \end{equation} Evaluating the low-lying terms in the identity character, it is easy to verify explicitly that the first term above the ground state is absent. Indeed minimal models not only have no Kac-Moody symmetry, they have no other symmetry algebra besides the Virasoro algebra. For non-unitary minimal models the character formula above needs some modification, but the same conclusions hold. We would now like to search for other RCFT's, besides those in the above examples, that have no Kac-Moody symmetries. For this we will use the method of modular-invariant differential equations, applied to RCFT's with small numbers of characters. We will re-discover the relevant minimal models, and also ``dual'' theories whose characters obey bilinear relations with those of the minimal models to give the character of the Moonshine CFT. We will try to analyse what lessons can be learned from the existence of the latter types of characters. One of them, dual to the Ising model, has $c=\frac{47}{2}$ and has been previously studied by H\"ohn\cite{Hoehn:thesis}.
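The absence of the first excited level in a minimal-model identity character can be verified directly from the $K_{r,s}$ sum. The sketch below expands $\chi_{1,1}=K_{1,1}-K_{1,-1}$ for $p=3$ (the Ising model, $c=\frac12$), stripping the overall $q^{-c/24}$ prefactor; the truncation order and variable names are our own choices.

```python
from fractions import Fraction

N = 9   # truncation order in q
p = 3   # Ising: (p, p') = (3, 4)

def exponent(n, r, s):
    # ((2np(p+1) + r(p+1) - sp)^2 - 1) / (4p(p+1)); for p = 3, r = 1 these are
    # integers once the q^{-c/24} prefactor is stripped.
    num = (2 * n * p * (p + 1) + r * (p + 1) - s * p) ** 2 - 1
    e = Fraction(num, 4 * p * (p + 1))
    assert e.denominator == 1
    return int(e)

# Numerator: sum over n of the K_{1,1} terms minus the K_{1,-1} terms.
num = [0] * N
for n in range(-4, 5):
    for s, sign in ((1, 1), (-1, -1)):
        e = exponent(n, 1, s)
        if 0 <= e < N:
            num[e] += sign

# Multiply by 1/prod_{m>=1}(1 - q^m), i.e. the partition generating function.
part = [1] + [0] * (N - 1)
for m in range(1, N):
    for k in range(m, N):
        part[k] += part[k - m]

chi = [sum(num[j] * part[k - j] for j in range(k + 1)) for k in range(N)]

# Ising vacuum character: 1 + 0q + q^2 + q^3 + 2q^4 + 2q^5 + ...; m_1 = 0.
assert chi[:6] == [1, 0, 1, 1, 2, 2]
print("Ising identity character:", chi)
```

The vanishing coefficient of $q$ reflects the fact that $L_{-1}|0\rangle=0$ and there is no current to create a level-one state.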
\section{Modular-invariant differential equations} The characters of an RCFT arise as the independent solutions of a degree-$p$ modular-invariant differential equation in $\tau$. Such an equation must be of the form \cite{Anderson:1987ge,Eguchi:1988wh,Mathur:1988rx, Mathur:1988na, Mathur:1988gt}: \begin{equation} \left(D^p + \sum_{k=0}^{p-1} \phi_k(\tau) D^k\right)\chi=0 \label{modinveq} \end{equation} where $D$ is a covariant derivative to be defined below, and $\phi_k(\tau)$ is a modular function of weight $2(p-k)$ under SL(2,Z). The characters transform into each other under SL(2,Z) but they have zero weight. The derivative $D$ acting on them successively increases the weight by 2. It follows that every term in the above equation has modular weight $2p$, and the equation is therefore modular invariant. The covariant derivative is given by: \begin{equation} D \equiv \frac{\partial}{\partial\tau}-\frac{i\pi r}{6}E_2(\tau) \label{covdev} \end{equation} where $r$ is the modular weight of the object on which it acts, and $E_2(\tau)$ is a special Eisenstein series that transforms inhomogeneously under SL(2,Z) and thereby provides a suitable connection. In general $\phi_k$ need not be holomorphic, indeed they can be meromorphic even though the resulting characters are holomorphic. In fact the poles of $\phi_k$ are related to the zeroes of the Wronskian of the independent solutions $\chi_0,\chi_1,\cdots \chi_{p-1}$ of the differential equation by the following relation: \begin{equation} \phi_k(\tau)=(-1)^{p-k}\frac{W_k}{W} \end{equation} where the Wronskian determinants $W_k$ are defined in Refs.\cite{Mathur:1988rx, Mathur:1988na, Mathur:1988gt}. In searching for new RCFT's one therefore starts by choosing the number of characters $p$ as well as the number of zeroes of $W$, which is of the form $\frac{\ell}{6}$ where $\ell$ is a non-negative integer other than 1 (fractional zeroes are allowed due to the orbifold singularities of the torus moduli space).
The central charge and conformal dimensions of any RCFT satisfy the relation \cite{Mathur:1988na}: \begin{equation} \sum_{i=0}^{p-1}\left(-\frac{c}{24}+h_i\right)=\frac{p(p-1)}{12}-\frac{\ell}{6} \label{elldef} \end{equation} It is tedious but straightforward to verify that all the $c<1$ minimal models, unitary or otherwise, have $\ell=0$. The same is true of SO(N) and SU(N) WZW models, but there are also many known CFT's with $\ell\ge 2$, some of which appear in known discrete series and others are constructed in Refs.\cite{Naculich:1988xv,Hampapura:2015cea,Gaberdiel:2016zke}. We now use this approach of modular-invariant differential equations to investigate the existence of CFT characters without Kac-Moody symmetries for small values of $\ell$. \section{Theories without Kac-Moody symmetries} \subsection{Two-character theories} Let us start by fixing the number $\ell$ of zeroes of the Wronskian to be 0. The modular-invariant differential equation is simplest in this case, because the coefficient functions $\phi_k(\tau)$ are holomorphic everywhere in the interior of moduli space and therefore must be polynomials in the two Eisenstein series $E_4$ and $E_6$ (for definitions, see the Appendix). The most general homogeneous, modular invariant, second order differential equation is: \begin{equation} \label{2char de} \Big({\tilde D}^2 + \phi_1(\tau){\tilde D}+\phi_0(\tau)\Big)\chi=0 \end{equation} where ${\tilde D}=\frac{1}{2\pi i}D$ is the covariant derivative scaled for future convenience. Here $\phi_k$ are holomorphic and $\phi_1,\phi_0$ have modular weight $2,4$ respectively. It follows that $\phi_1=0$ and $\phi_0$ is proportional to $E_4$. Thus we have the differential equation: \begin{equation} ({\tilde D}^2 + \mu E_4)\chi=0 \label{leq.0} \end{equation} with $\mu$ a free parameter. 
In terms of ordinary derivatives this differential equation can be written as: \begin{equation} \Big({\tilde \del}^2-\frac16 E_2 {\tilde \del} + \mu E_4 \Big) \chi =0 \end{equation} where ${\tilde \del}=\frac{1}{2\pi i}\partial$. In Ref. \cite{Mathur:1988na}, this equation was solved by substituting the mode expansions of the characters, $\chi=\sum_{n=0}^\infty a_n q^{\alpha+n}$, and the Eisenstein series $E_a(\tau)=\sum_{k=0}^\infty E_{a,k}\,q^k$. The result is the following set of equations. First of all, if $\alpha$ is either of the two exponents then \begin{equation} \alpha^2-\sfrac16\alpha+\mu=0 \end{equation} Next, denoting the two roots of this equation by $\alpha_0,\alpha_1$ (where $\alpha_0$ is the exponent corresponding to the identity character and $\alpha_1$ corresponds to the non-trivial primary), we have: \begin{equation} \begin{split} \alpha_0+\alpha_1&=\frac16\\ \mu = \alpha_0\alpha_1 &=\alpha_0\left(\frac16-\alpha_0\right) \end{split} \label{muval} \end{equation} Note that $\alpha_0=-\frac{c}{24}$ and therefore our parametrisation of the identity character is: \begin{equation} \chi_0=q^{\alpha_0}(1+m_1 q+ m_2 q^2+\cdots) \end{equation} Going to next order in the series solution, and using the above results, we eventually get \cite{Mathur:1988na}: \begin{equation} m_1=\frac{24\alpha_0(60\alpha_0-11)}{5+12\alpha_0} \label{m.1.ell.0} \end{equation} To find theories without Kac-Moody symmetries we set $m_1=0$ in the above equation and solve for $\alpha_0$. This directly gives $\alpha_0=\frac{11}{60}$. Identifying this with $-\frac{c}{24}$ gives us $c=-\frac{22}{5}$. This is a well-known minimal model corresponding to $(p,p')=(2,5)$. There are no more solutions to $m_1=0$ at $\ell=0$, beyond the trivial case $\alpha_0=0$ corresponding to $c=0$. Next we look at the $\ell=2$ two-character theories, exhaustively studied in Ref.\cite{Hampapura:2015cea} (for earlier work on these theories see Ref.\cite{Naculich:1988xv}). 
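The algebra leading to the Lee-Yang solution is simple enough to check exactly. The snippet below verifies that $\alpha_0=\frac{11}{60}$ is the unique nontrivial root of $m_1=0$ and gives $c=-\frac{22}{5}$, and, as a sanity check of the $m_1$ formula itself, that $\alpha_0=-\frac{1}{24}$ (i.e. $c=1$) reproduces $m_1=3$, the dimension of $SU(2)$ for the $SU(2)_1$ WZW model. This is purely our own arithmetic cross-check.

```python
from fractions import Fraction

def m1(a):
    # m_1 = 24*a*(60a - 11)/(5 + 12a), with a = alpha_0 = -c/24
    return 24 * a * (60 * a - 11) / (5 + 12 * a)

a = Fraction(11, 60)
assert m1(a) == 0                       # the nontrivial root of m_1 = 0
assert -24 * a == Fraction(-22, 5)      # c = -22/5: the (2,5) minimal model
assert m1(Fraction(-1, 24)) == 3        # c = 1, SU(2)_1: three currents
print("alpha_0 =", a, " c =", -24 * a)
```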
In this case we will find something interesting. The value of $m_1$ for these theories is given by Eq.(5.26) of that reference in terms of another integer ${\tilde N}$. Putting $m_1=0$ one finds ${\tilde N}=144$, and indeed this value appeared in the list Eq.(5.27) of Ref.\cite{Hampapura:2015cea}. However this was then ruled out in that paper because the degeneracy at the second level above the identity turned out to be a {\em negative} integer. Computing the degeneracies of the identity character to very high powers in $q$, we find that except for the ground state, they are all negative integers. This makes it difficult to propose a physical meaning for this theory. However, it is still remarkable (and not the result of any prediction, since we do not know a candidate CFT for this case) that the degeneracies for the identity character are integral to very high orders in $q$, and we expect that this property persists to all orders. The central charge of this would-be theory is found by setting $m_1=0$ in Eq.(5.22) of Ref.\cite{Hampapura:2015cea}, from which one finds $c=\frac{142}{5}$. Using the fact that $\ell=2$ one easily finds that the nontrivial primary of this theory has conformal dimension $\frac{11}{5}$. Next the characters for this primary can be computed, up to normalisation, to any desired order in $q$ following the method of Ref.\cite{Hampapura:2015cea}, and one finds that to very high orders the degeneracies are positive rational numbers with a denominator that appears to be bounded. Thus for this character a suitable degeneracy factor for the ground state would render it consistent. These empirical facts lead to an intriguing observation. The above ``characters'' bear a close relation to those of the non-unitary $c=-\frac{22}{5}$ minimal model via a bilinear relation, analogous to the one recently found in Ref.\cite{Gaberdiel:2016zke}. Let us exhibit the precise relationship.
Denote the familiar $c=-\frac{22}{5}$ minimal model by ${\cal M}_{2,5}$ and let $\chi_0,\chi_1$ be its characters. Likewise, denote the (tentative) $\ell=2$ theory with ${\tilde c}=\frac{142}{5}$ as ${\tilde {\cal M}}_{2,5}$ and let ${\tilde \chi}_0,{\tilde \chi}_1$ be its characters. It is well-known that ${\cal M}_{2,5}$ has a primary with $h=-\frac15$, while we have just seen that ${\tilde {\cal M}}_{2,5}$ has a primary of dimension ${\tilde h}=\frac{11}{5}$. Putting all this together we find that $c+{\tilde c}=24$ and $h+{\tilde h}=2$. This is precisely the relation between a specific affine theory and the coset of a meromorphic $c=24$ CFT by that affine theory, proposed in Ref.\cite{Gaberdiel:2016zke} and justified with numerous examples. However, there is an important difference. In the case of Ref.\cite{Gaberdiel:2016zke}, one really had a coset construction. The numerator theories were meromorphic $c=24$ CFT's having a Kac-Moody symmetry (not affine theories, but rather modular-invariant combinations of characters of a subset of the integrable primaries). The denominators were affine theories having a Kac-Moody algebra that is a direct summand of the one in the numerator. The Kac-Moody algebras play a crucial role in enabling a definition of these generalised cosets, as explained in detail in Ref.\cite{Gaberdiel:2016zke}. But in the present case there are no currents and therefore no coset construction. However, in analogy with Eq.(2.7) of Ref.\cite{Gaberdiel:2016zke} we can still look for a bilinear relation between the characters $\chi_i,{\tilde \chi}_i$ described above and some single-character CFT. What theory should appear on the RHS of such a bilinear relation? It has to be a meromorphic $c=24$ CFT and must therefore appear in the list of Ref.\cite{Schellekens:1992db}. Since the pair of theories on the LHS have no Kac-Moody symmetry, the same property must hold for the RHS.
There is a unique meromorphic CFT with this property, namely the famous Moonshine CFT whose character is $J(q)=j(q)-744$. Thus we are motivated to suggest a bilinear holomorphic relation as follows: \begin{equation} \sum_{i=0}^{p-1}\chi_i(q){\tilde \chi}_i(q)= j(q)-744 \label{jbilinear} \end{equation} with $p=2$, the characters on the LHS being those of ${\cal M}_{2,5}$ and ${\tilde {\cal M}_{2,5}}$. This is a precise, testable formula and can only hold if all the (infinitely many) coefficients in the $q$-series match on both sides. We know $\chi_i$, the characters of ${\cal M}_{2,5}$, and we can also use the modular differential equation to compute the characters ${\tilde \chi}_i$ of the hypothetical theory ${\tilde {\cal M}_{2,5}}$ as explained above. Therefore we can check whether $\chi_i(q),{\tilde \chi}_i(q)$ obey \eref{jbilinear} to any desired order in $q$. At leading order the relationship holds due to the matching of exponents discussed above. Once we go beyond that, there is a subtlety: so far, we do not know the degeneracy of the nontrivial primary whose character is ${\tilde \chi}_1$. To fix normalisations, let the ground-state degeneracies be labelled $D_0,D_1$ for the characters of ${\cal M}_{2,5}$ and ${\tilde D}_0,{\tilde D}_1$ for those of ${\tilde {\cal M}}_{2,5}$, and let us use $\psi_i$ and ${\tilde \psi}_i$ to denote characters normalised so that the first term in the expansion is unity. We then have: \begin{equation} \chi_i(q)=D_i\,\psi_i(q),\qquad {\tilde \chi}_i={\tilde D}_i\,{\tilde \psi}_i(q),\qquad i=0,1 \label{bilintwo} \end{equation} One always has $D_0={\tilde D}_0=1$ from non-degeneracy of the identity.
Therefore in a general situation, the bilinear of interest is: \begin{equation} \sum_{i=0}^{1}\chi_i(q){\tilde \chi}_i(q)=\psi_0(q){\tilde \psi}_0(q)+D_1{\tilde D}_1\,\psi_1(q){\tilde \psi}_1(q) \end{equation} The expansion of $\psi_i$ is: \begin{equation} \psi_i=q^{\alpha_i}\left(1+m_1^{(i)}q+m_2^{(i)}q^2+\cdots\right) \end{equation} where $\alpha_0=-\frac{c}{24}$ and $\alpha_1=-\frac{c}{24}+h$. Note that the quantities previously called $m_1,m_2$ are now labelled $m_1^{(0)},m_2^{(0)}$ but we will revert to the simpler notation whenever there is no scope for confusion. A similar expansion holds for ${\tilde \psi}$. Now from the above relations between the central charges and conformal dimensions of the paired theories, we have: \begin{equation} \alpha_0+{\tilde \alpha}_0=-1, \qquad \alpha_1+{\tilde \alpha}_1=1 \end{equation} It follows that up to ${\cal O}(q)$, the above bilinear is equal to: \begin{equation} \begin{split} &q^{-1}\left(1+m_1^{(0)}q+m_2^{(0)}q^2\right)\left(1+{\tilde m}_1^{(0)}q+{\tilde m}_2^{(0)}q^2\right) +D_1{\tilde D}_1 q \\ &\quad = q^{-1}+(m_1+{\tilde m}_1)+(m_1{\tilde m}_1+m_2+{\tilde m}_2+D_1{\tilde D}_1)q \end{split} \end{equation} As promised, in the last line we have dropped the superscripts on $m_i,{\tilde m}_i$ because only those corresponding to the identity appear to this order. Now in the present case, $m_1={\tilde m}_1=0$ and also $D_1=1$ because minimal models have non-degenerate primaries. Recall that: \begin{equation} J(q)=j(q)-744=q^{-1} + 196884 q+\cdots \end{equation} and therefore to satisfy \eref{jbilinear} we must have ${\tilde D}_1+m_2+{\tilde m}_2=196884$. Since $m_2$ and ${\tilde m}_2$ are directly calculable using the differential equation, this determines ${\tilde D}_1$. Applying it to the case of ${\cal M}_{2,5}$ and $\tilde{\cal M}_{2,5}$, we find that $m_2=1,{\tilde m}_2=-164081$. This determines ${\tilde D}_1=360964$.
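The arithmetic fixing ${\tilde D}_1$ can be made completely explicit. The following sketch (our illustration in exact rational arithmetic, not code from any of the cited references) simply re-traces the relations of this paragraph:

```python
from fractions import Fraction

# Exponent matching for the pair M_{2,5} and its proposed dual:
c, c_dual = Fraction(-22, 5), Fraction(142, 5)
h, h_dual = Fraction(1, 5), Fraction(9, 5)
assert c + c_dual == 24 and h + h_dual == 2

# O(q) coefficient of J(q) = q^{-1} + 196884 q + ...,
# with m_1 = m1~ = 0 and D_1 = 1 for the minimal model:
m2, m2_dual = 1, -164081   # computed from the modular differential equation
D1 = 1
D1_dual = (196884 - m2 - m2_dual) // D1
assert D1_dual == 360964
```

The same one-line computation applies to any pair with $m_1={\tilde m}_1=0$ and $D_1=1$.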
A non-trivial check of this normalisation is that in the expansion of the non-identity character of $\tilde{\cal M}_{2,5}$, one finds fractional coefficients with denominators as large as 90241 (working up to ${\cal O}(q^{1000})$). Thus it must be the case that 360964 is divisible by 90241, and this is true (the ratio is 4). This means that with this choice of ${\tilde D}_1$, the non-identity character of $\tilde{\cal M}_{2,5}$ indeed has integer degeneracies. Despite the above check, we have not yet performed any actual test of \eref{jbilinear}. But now all quantities on the LHS are known, as we can compute the power series for $\chi_i,{\tilde \chi}_i$ to any desired order in $q$ and we have determined all the normalisations. We can then test \eref{jbilinear} order-by-order and we find that all the way to ${\cal O}(q^{1000})$ it works perfectly. To summarise, we have conjectured a bilinear relation combining two pairs of characters (one pair corresponding to a known non-unitary CFT and the other to a more exotic system with negative but integer degeneracies) into the character of the Moonshine CFT, and verified this conjecture to ${\cal O}(q^{1000})$. The significance of this construction is that it points the way to similar relations for {\em unitary} theories, where no negative degeneracies are present. Such relations cannot be sought within two-character theories because we do not know of any two-character {\em unitary} RCFT without a Kac-Moody algebra. However such theories do exist with $p\ge 3$ characters, where an infinite family is provided by the unitary minimal models, starting with the well-known Ising model. Hence we now turn our attention to the case $p=3$. We will repeat the procedures described above and find very analogous results. Note that, independent of the bilinear relation, we have successfully classified all possible two-character RCFT's with $\ell=0,2,3$ having no Kac-Moody algebra.
For $\ell=0$ this is the Yang-Lee theory, for $\ell=2$ it is the exotic dual discussed above, and for $\ell=3$ there are no candidates, as shown in Ref.\cite{Hampapura:2015cea}. \subsection{Three-character theories} The case of three-character theories, even with $\ell=0$, is not completely classified despite non-trivial progress in Refs.\cite{Mathur:1988gt,Gaberdiel:2016zke}. It is known that infinitely many such theories exist, in sharp contrast to the case of two-character theories with $\ell=0$. However the best-known infinite series corresponds to the SO(N)$_k$ WZW models, which are not of interest to us here. We shall now re-open the investigation into three-character theories with $\ell=0$, but focusing specifically on solutions without Kac-Moody symmetry. The modular-invariant differential equation in this case is \cite{Mathur:1988gt}: \begin{equation} \left(D_\tau^3 + \pi^2 \mu_1 E_4 D_\tau + i\pi^3 \mu_2 E_6\right)\chi(\tau)=0 \end{equation} In terms of ordinary derivatives the above equation becomes: \begin{equation} \label{3char-de} \Big(\partial^3_{\tau}-\frac{i\pi}{3}(\partial_{\tau}E_2)\partial_{\tau} -i\pi E_2\partial^2_{\tau} -\frac{2\pi^2}{9}E^2_2\partial_{\tau}+\mu_1 \pi^2 E_4\partial_{\tau}+i\mu_2 \pi^3 E_6\Big)\chi=0 \end{equation} As explained in Ref.\cite{Hampapura:2015cea}, one can use the Ramanujan identities to make such differential equations linear in the Eisenstein series.
Accordingly, we apply the following identity to \eref{3char-de}: \begin{equation} \frac{1}{2i\pi}(\partial_{\tau}E_2) = \frac{E^2_2-E_4}{12} \end{equation} as a result of which the equation becomes: \begin{equation} \Big(\partial^3_{\tau}+i\pi(\partial_{\tau}E_2)\partial_{\tau} -i\pi E_2\partial^2_{\tau} -\frac{2\pi^2}{9}E_4\partial_{\tau}+\mu_1 \pi^2 E_4\partial_{\tau}+i\mu_2 \pi^3 E_6\Big)\chi=0 \end{equation} Upon substituting the mode expansions we get the recursion relation: \begin{equation} \begin{split} \label{recursion-3char} &-8(n+\alpha)^3 a_n- 4\sum_{k=0}^{n}E_{2,k} a_{n-k} k(n-k+\alpha)+ 4\sum_{k=0}^{n}(n-k+\alpha)^2a_{n-k}E_{2,k} \\ &-\frac{4}{9} \sum_{k=0}^{n} E_{4,k}(n-k+\alpha)a_{n-k} +2\mu_1 \sum_{k=0}^{n}(n-k+\alpha)E_{4,k}a_{n-k}+\mu_2\sum_{k=0}^{n}E_{6,k}a_{n-k}=0 \end{split} \end{equation} For $n=0$ and $n=1$, we get the following polynomial equations in $\alpha$: \begin{equation} \label{3char-indicial} -8\alpha^3 + 4\alpha^2 -\frac{4}{9}\alpha + 2\mu_1\alpha+\mu_2=0 \end{equation} and \begin{equation} \label{3char-n1} \begin{split} &\left(-4E_{2,1}\alpha+4\alpha^2E_{2,1}-\frac{4}{9}E_{4,1}\alpha+\mu_2E_{6,1}+2\mu_1 E_{4,1}\alpha\right)\\ &\qquad\qquad\qquad + m_1\left(-24\alpha^2-16\alpha-\frac{40}{9}+2\mu_1\right) =0 \end{split} \end{equation} From these equations we immediately see that: \begin{equation} \label{3char-mu1} 2\mu_1 =\frac49 - 8(\alpha_0 \alpha_1+ \alpha_1 \alpha_2+\alpha_0 \alpha_2), \hspace{1cm} \mu_2 = 8\alpha_0 \alpha_1 \alpha_2 \end{equation} Using Eqs.(\ref{3char-indicial}), (\ref{3char-mu1}) and (\ref{3char-n1}) and substituting for the Fourier coefficients of the Eisenstein series (see the Appendix) we get: \begin{equation} \label{m1 for 3char} m^{(i)}_1 =\frac{24\alpha_i\big(20\alpha^2_i+(62\alpha_j -11)\alpha_i +62\alpha^2_j -31\alpha_j+1\big)}{(\alpha_i - \alpha_j +1)(4\alpha_i+2\alpha_j+1)},\quad j\neq i \end{equation} Note that for any chosen $i$, this equation holds for both values of $j$ different from $i$.
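The recursion \eref{recursion-3char} is straightforward to solve numerically. The following sketch (our own illustration, in exact rational arithmetic; the function names are ours) solves it for any one of the three characters; for the Ising exponents $(-\frac{1}{48},\frac{1}{24},\frac{23}{48})$, which will reappear below, it reproduces $m_1^{(0)}=0$ and $m_2^{(0)}=1$:

```python
from fractions import Fraction as F

def sigma(p, n):
    # divisor power sum sigma_p(n)
    return sum(d**p for d in range(1, n + 1) if n % d == 0)

# Fourier coefficients E_{2,k}, E_{4,k}, E_{6,k} of the Eisenstein series
def E2(k): return 1 if k == 0 else -24 * sigma(1, k)
def E4(k): return 1 if k == 0 else 240 * sigma(3, k)
def E6(k): return 1 if k == 0 else -504 * sigma(5, k)

def character(alphas, which, order):
    """Coefficients a_0 = 1, a_1, ..., a_order of the character with exponent
    alphas[which], solving the l = 0 three-character recursion."""
    a0, a1, a2 = alphas
    two_mu1 = F(4, 9) - 8 * (a0*a1 + a1*a2 + a0*a2)
    mu2 = 8 * a0 * a1 * a2
    def C(x):   # coefficient of a_n in the recursion (the k = 0 terms)
        return -8*x**3 + 4*x**2 - F(4, 9)*x + two_mu1*x + mu2
    alpha = alphas[which]
    a = [F(1)]
    for n in range(1, order + 1):
        s = F(0)
        for k in range(1, n + 1):
            x = n - k + alpha
            s += a[n - k] * (-4*E2(k)*k*x + 4*x**2*E2(k)
                             - F(4, 9)*E4(k)*x + two_mu1*x*E4(k) + mu2*E6(k))
        a.append(-s / C(n + alpha))
    return a

# Ising model, c = 1/2: no Kac-Moody currents means m_1^{(0)} = 0,
# and the degeneracy at the second level is m_2^{(0)} = 1.
ising = character((F(-1, 48), F(1, 24), F(23, 48)), 0, 4)
```

The division by $C(n+\alpha)$ is safe here because $n+\alpha_i$ never coincides with another exponent when the $h_i$ are non-integral.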
Let us now specialize to the case of identity character ($i=0$) and ask under what circumstances $m_1^{(0)}$ vanishes. By requiring \eref{m1 for 3char} to be zero we get the following relation between $\alpha_0$ and one of the other exponents, say $\alpha_1$: \begin{equation} \label{3char-alpha0} \alpha_0 = \frac{1}{40} \left(11-62 \alpha_1 \pm \sqrt{41+1116 \alpha_1-1116 \alpha_1^2}\right) \end{equation} Since $\alpha_0$ and $\alpha_1$ are rational linear combinations of the central charge and conformal dimensions of one of the primaries (both of which are rational), we conclude that they themselves are rational numbers. It follows that the discriminant in \eref{3char-alpha0} is the square of some rational number $p$. Solving this equation for $\alpha_1$ we get: \begin{equation} \alpha_1 = \frac{1}{186}\left(93 \pm \sqrt{31}\sqrt{320-p^2}\right) \label{alphap} \end{equation} The rationality of $\alpha_1$ forces $\sqrt{320-p^2}$ to be of the form $\sqrt{31}\,q$, where $q$ is some rational number. Squaring both sides of this equation gives us the following Diophantine equation: \begin{equation} \label{3char-diophantine} p^2+ 31q^2=320 \end{equation} This equation describes an ellipse. Thus, we see that rational points $(p,q)$ on this ellipse correspond to possible candidate $\alpha$'s describing 3-character theories with $\ell=0$ and without a Kac-Moody algebra. Of course these candidates, if found, would only have passed a low-level test and we would then have to determine their characters and check integrality of their coefficients to high orders before having any confidence that they exist as CFT's. This check is straightforward to perform because for $\ell=0$ and three characters, the exponents completely specify the differential equations and thereby the characters. We already know one solution to the above requirements that is definitely a CFT, namely the Ising model. This has $c={\frac{1}{2}\,}$ and conformal dimensions $\frac12, \frac{1}{16}$. 
It is easy to verify that the characters of this theory have $\ell=0$, which as we already pointed out is the case for all minimal models. And it has $m_1^{(0)}=0$, because minimal models have no Kac-Moody symmetry. We will find it useful to start by describing the Ising model as a rational point of the ellipse of \eref{3char-diophantine}. Indeed using Eqs.(\ref{alphap}) and (\ref{3char-diophantine}) we easily find that it represents the point $(p_0,q_0)=(\frac{37}{4},\frac{11}{4})$ on this ellipse. Using this as a ``base point'' we will search for other rational points on the ellipse. Let us consider a line with variable slope passing through the point $(p_0,q_0)$ and look for rational points where it intersects the ellipse. A line through $(p_0,q_0)$ can be parametrised as follows: \begin{equation} \label{pq} ( p,q)= (p_0 -\gamma t,q_0-t) \end{equation} where $\gamma$ is a real parameter. Given that $(p_0,q_0)$ are rational, $(p,q)$ will also be rational if $t, \gamma t$ are rational. This means that $\gamma$ in particular must be rational. Now substituting the above in \eref{3char-diophantine} permits us to solve for $t$ in terms of $p_0,q_0$ and $\gamma$. Putting this back in the above, we get (after excluding $t=0$): \begin{equation} \label{pqexpr} (p,q) = \Bigg( p_0-\gamma \left (\frac{2 \gamma p_0 + 62 q_0}{31 + \gamma^2}\right),q_0- \left (\frac{2 \gamma p_0 + 62 q_0}{31 + \gamma^2}\right) \Bigg) \end{equation} Thus we have solved the initial problem, that of finding all rational points on the ellipse. There is one such point for every rational $\gamma$. 
Next we use the recursive solution to find the second-level degeneracy in terms of the exponents $\alpha_i$: \begin{equation} \begin{split} m^{(i)}_2&=\Bigg[36 \alpha_i \bigg(2 + 3200 \alpha_i^5 - 339 \alpha_j + 2897 \alpha_j^2 - 8876 \alpha_j^3 + 8876 \alpha_j^4 + 80 \alpha_i^4 (-1 + 248 \alpha_j)\\ &\quad + 8 \alpha_i^3 (-277 - 718 \alpha_j + 6324 \alpha_j^2) + \alpha_i^2 (953 - 6136 \alpha_j- 17700 \alpha_j^2 + 61504 \alpha_j^3) + \\ &\qquad \alpha_i(-109 + 2702 \alpha_j - 5236 \alpha_j^2 - 13000 \alpha_j^3 + 30752 \alpha_j^4)\bigg)\Bigg]\times\\ &\qquad \Big[(1 + \alpha_i - \alpha_j) (1 + 4 \alpha_i + 2 \alpha_j) (2+\alpha_i-\alpha_j)(3+4\alpha_i+2\alpha_j)\Big]^{-1} \end{split} \label{3char-m2} \end{equation} Using Eqs.(\ref{alphap}) and (\ref{3char-alpha0}) we can express the exponents $\alpha_i$ in terms of $p,q$: \begin{equation} \label{alphaexpr} (\alpha_0,\alpha_1,\alpha_2) = \Big(-{\frac{1}{2}\,}+\frac{31q-3p}{120},{\frac{1}{2}\,}-\frac{q}{6},\frac12+\frac{(3p-11q)}{120} \Big) \end{equation} Substituting these values in \eref{3char-m2}, we obtain $m_2$ (as usual, this quantity without a superscript refers to the identity character) as a rational function of $\gamma$: \begin{equation} m_2= \frac{-(-5363 - 62 \gamma + 53 \gamma^2) (18480991 + 359538 \gamma - 116516 \gamma^2 - 1058 \gamma^3 + 21 \gamma^4)}{4 (31 + \gamma^2) (-31 - 9 \gamma + 6 \gamma^2) (-527 + 82 \gamma + 97 \gamma^2)} \label{mtwoeq} \end{equation} Our strategy is now to consider all rational points on the ellipse, i.e. all rational numbers $\gamma$, and ask which ones specify a non-negative integer $m_2$ via \eref{mtwoeq}. If they give fractional or negative $m_2$, they can be eliminated. This procedure will rule out all but a small number of cases. Accordingly, we searched for all rational solutions to \eref{mtwoeq} for values of $m_2$ ranging from 1 to 2,000,000. Solutions are quite sparse, with only nine possible values of $m_2$ in the range 1 to 100,000 and not a single one after that.
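The chain $\gamma \to (p,q) \to (\alpha_0,\alpha_1,\alpha_2)$ together with the evaluation of \eref{mtwoeq} can be packaged as follows (our own sketch in exact rational arithmetic; the function names are ours). For $\gamma=-13$ it reproduces the Yang-Lee exponents $(\frac{11}{60},-\frac{1}{60},\frac13)$ with $m_2=1$, and for $\gamma=7$ it gives $m_2=156$:

```python
from fractions import Fraction as F

P0, Q0 = F(37, 4), F(11, 4)      # the Ising base point on p^2 + 31 q^2 = 320

def point(gamma):
    # second intersection of the line through (P0, Q0) with the ellipse
    t = (2 * gamma * P0 + 62 * Q0) / (31 + gamma**2)
    return P0 - gamma * t, Q0 - t

def exponents(gamma):
    p, q = point(gamma)
    assert p**2 + 31 * q**2 == 320          # the new point stays on the ellipse
    return (F(-1, 2) + (31*q - 3*p) / 120,
            F(1, 2) - q / 6,
            F(1, 2) + (3*p - 11*q) / 120)

def m2(g):
    # the rational function of gamma giving the second-level degeneracy
    num = -(-5363 - 62*g + 53*g**2) * (18480991 + 359538*g - 116516*g**2
                                       - 1058*g**3 + 21*g**4)
    den = 4 * (31 + g**2) * (-31 - 9*g + 6*g**2) * (-527 + 82*g + 97*g**2)
    return num / den
```

A brute-force search then amounts to scanning rational $\gamma$ and keeping only those with `m2(gamma)` a non-negative integer.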
We suspect (but cannot rigorously prove) that these are all the solutions. We found six rational values of $\gamma$ for $m_2=1$, and two for each of the other allowed values of $m_2$. Once we have the values of $\gamma$ that solve \eref{mtwoeq}, we use equations \eref{pqexpr} and \eref{alphaexpr} to obtain the exponents $\alpha_i$. The central charges for these candidates can be computed as $c=-24\alpha_0$. It turns out that there are two different values of $\gamma$ for each set of exponents $\alpha_i$, with the roles of $\alpha_1$ and $\alpha_2$ exchanged. This can be explained by observing that \eref{alphaexpr} has a symmetry under the transformation $(p,q) \rightarrow (-\frac{93q}{20}-\frac{11p}{20},\frac{11q}{20}-\frac{3p}{20})$ which leaves $\alpha_0$ unchanged but exchanges $\alpha_1$ with $\alpha_2$. Therefore, for $m_2=1$ the six different values of $\gamma$ group into three pairs corresponding to three different sets of exponents $\alpha_i$. There is a candidate 3-character theory for each set. On the other hand for $m_2\ge 2$ we have a single set of exponents for each pair of $\gamma$ values, and therefore a single candidate theory. The results at this stage are exhibited in Table \ref{table-m2}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c| |c|c|c|c|} \hline No. 
& $m_2^{(0)}$ & $\gamma$ & $\alpha_0$ & $\alpha_1$ & $\alpha_2$ & $c=-24\alpha_0$ \\ \hline 1 &$1$ & $-13,15$ & $\phantom{-}\frac{11}{60}$ & $-\frac{1}{60}$ & $\frac{1}{3}$ & $-\frac{22}{5}$ \\ \hline 2 & $1$ & $-33, 47$ &$\phantom{-}\frac{17}{42}$& $-\frac{1}{42}$ & $\frac{5}{42}$ & $-\frac{68}{7}$ \\ \hline 3 & 1 & $-\frac{341}{37},\frac{31}{3}$ & $-\frac{1}{48}$ & $\phantom{-}\frac{1}{24}$ & $\frac{23}{48}$& $\phantom{-}\frac12$ \\ \hline 4 & 2 & $-\frac{217}{9},31$ & $\phantom{-}\frac{11}{30}$ & $-\frac{1}{30}$ & $\frac{1}{6}$& $-\frac{44}{5}$ \\ \hline 5 & 156 & $7,-\frac{19}{3}$ & $-\frac{1}{3}$ & $\phantom{-}\frac{2}{3}$ & $\frac{1}{6}$& $\phantom{-} 8$ \\ \hline 6 & 2296 & $-\frac{31}{7},\frac{93}{19}$ & $-\frac{2}{3}$ & $\phantom{-}\frac{1}{3}$ & $\frac{5}{6}$& $\phantom{-}16$ \\ \hline 7 & 49291 & $\frac{1}{3},-\frac{1}{17}$ & $-\frac{121}{84}$ & $\phantom{-}\frac{83}{84}$ & $\frac{20}{21}$& $\phantom{-}\frac{242}{7}$ \\ \hline 8 & 63366 & $\frac{31}{33},-\frac{31}{47}$ & $-\frac{59}{42}$ & $\phantom{-}\frac{43}{42}$ & $\frac{37}{42}$& $\phantom{-}\frac{236}{7}$ \\ \hline 9 & 63428 & $-\frac{403}{131},\frac{31}{9}$ & $-\frac{101}{105}$ & $\phantom{-}\frac{107}{210}$ & $\frac{20}{21}$& $\phantom{-}\frac{808}{35}$ \\ \hline 10 & 90118 & $\frac97,-1$ & $-\frac{41}{30}$ & $\phantom{-}\frac{31}{30}$ & $\frac{5}{6}$& $\phantom{-}\frac{164}{5}$ \\ \hline 11 & 96256 & $\frac{37}{11},-3$ & $-\frac{47}{48}$ & $\phantom{-}\frac{23}{24}$ & $\frac{25}{48}$& $\phantom{-}\frac{47}{2}$ \\ \hline \end{tabular} \caption{Solutions to \eref{mtwoeq}. The exponents $\alpha_i$ are obtained using Eqs.(\ref{pqexpr}) and (\ref{alphaexpr}). The given values of $\alpha_1,\alpha_2$ correspond to the first value of $\gamma$ exhibited, while they are interchanged if we use the second value.} \label{table-m2} \end{table} We now try to understand whether these candidates really exist as characters, and if so, to what CFT's they are associated. 
First of all, given the exponents $\alpha_i$ in Table \ref{table-m2}, we check the absence of any spin-1 primary (recall that the primary dimension is $h_i=\alpha_i-\alpha_0,~ i=1,2$). This rules out lines 5 and 6 of the table. Next, using these exponents we can evaluate the corresponding character as a $q$-series using the modular-invariant differential equation. Thereby we check to very high orders that the $m_n^{(0)}$ are non-negative integers. We also verify that the $m_n^{(i)}$ for the other two characters are non-negative rational numbers. We reject candidates that do not satisfy these consistency conditions. Using these criteria and carrying out this analysis on each line of Table \ref{table-m2}, we find that the entries for $m_2^{(0)}=49291$ and 63428 must also be rejected. The surviving candidates are those appearing in lines $1-4, 8, 10, 11$ of the table. Examining the exponents for these cases, we easily see that the cases in lines $1-4$ of the table correspond to known CFT's. The first three are, respectively, the minimal models for $(p,p')=(2,5),(2,7)$ and $(3,4)$ while the fourth one is the tensor product of two copies of the $(2,5)$ minimal model. Notice that case 1 is really a 2-character theory that has appeared as the solution of a 3-character differential equation (this means it has a spurious ``third character'' with which the first two do not mix under modular transformations, much as for the $E_8$ case discussed in Ref.\cite{Mathur:1988na}). This was already discussed as a 2-character theory in the previous section and we therefore ignore it in the present discussion. The others are all genuine 3-character theories. One of them, with $c={\frac{1}{2}\,}$, is the famous Ising model. Since these theories exist and satisfy all the criteria for which we have been searching, it is of course reassuring to find them. But the important question is whether there are any more.
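The spin-one exclusion used above is elementary to verify from the table entries; a small sketch of ours in exact fractions:

```python
from fractions import Fraction as F

# Exponents of lines 5 and 6 of the table (the c = 8 and c = 16 candidates):
candidates = {
    5: (F(-1, 3), F(2, 3), F(1, 6)),
    6: (F(-2, 3), F(1, 3), F(5, 6)),
}
for line, (a0, a1, a2) in candidates.items():
    dims = [a1 - a0, a2 - a0]   # primary dimensions h_i = alpha_i - alpha_0
    # each of these rows contains a spin-1 primary, i.e. a Kac-Moody
    # current, so the row is rejected:
    assert 1 in dims
```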
Remarkably it turns out that there are {\em precisely three} more theories in our list that are {\em not} minimal models. Moreover, they precisely satisfy the bilinear ``dual'' relation to the 3-character minimal models given in \eref{jbilinear} with $p=3$. To see the relations between the new and old cases, let us compare lines 2 and 8 of Table \ref{table-m2}. Note first that the central charges add up to 24. Next, each of the conformal dimensions $h_1=\alpha_1-\alpha_0$ and $h_2=\alpha_2-\alpha_0$ for these two lines adds up to 2. These are precisely the properties of the bilinear relation (and also of the novel coset construction of Ref.\cite{Gaberdiel:2016zke} to which it is analogous). The same properties hold when we compare lines 3 and 11 of the table, and lines 4 and 10. Thus we have found three more pairs of theories that may potentially satisfy a bilinear relation giving the Moonshine CFT. Again the proposed relations can be verified to high orders in $q$. We simply compute the characters of the potentially related pairs in Table \ref{table-m2} using the differential equations approach, multiply them pairwise, add them up and compare with $J(q)$ to each order. As before, this verification involves an ambiguity in the normalisations of the non-identity characters, since these are not determined by the differential equation. Compared with the 2-character case in the previous section, here we are studying 3-character theories so there are two undetermined primary degeneracies ${\tilde D}_1,{\tilde D}_2$. These are determined by imposing the bilinear identity to the first two nontrivial orders in $q$. Thereafter we check whether the identity continues to hold up to ${\cal O}(q^{1000})$. We find that each pair passes this test perfectly. One caveat is that we again encounter negative degeneracies when the original theory is non-unitary; in these cases the duals are ``exotic'' and it may or may not be possible to make sense of them as some kind of CFT's.
For the dual to ${\cal M}_{2,7}$, denoted $\tilde{\cal M}_{2,7}$, we have the exponents: \begin{equation} ({\tilde \alpha}_0,{\tilde \alpha}_1,{\tilde \alpha}_2)=\frac{1}{42}(-59,43,37) \end{equation} and the degeneracies of the non-identity primaries are $({\tilde D}_1,{\tilde D}_2)=(-715139,848656)$. For the dual to ${\cal M}_{2,5}\times {\cal M}_{2,5}$ the exponents are: \begin{equation} ({\tilde \alpha}_0,{\tilde \alpha}_1,{\tilde \alpha}_2)=\frac{1}{30}(-41,31,25) \end{equation} and the degeneracies are $({\tilde D}_1,{\tilde D}_2)=(615164,-508400)$. Notice that unlike the two-character case in the previous section, here {\em all} the degeneracies of the associated character are negative (equivalently, all are positive after extracting the overall negative degeneracy of the ground state). Thus it may be possible to make sense of these as theories with a fermion number\footnote{We thank Matthias Gaberdiel for this suggestion.}. This time we also have a unitary case, the Ising model, for which everything works perfectly. The ``dual'' theory with which it obeys a bilinear identity has $c=\frac{47}{2}$ and the degeneracies all turn out to be non-negative integers. Again we have determined the degeneracies and verified that the bilinear identity holds to high orders in $q$. For this case the exponents are: \begin{equation} ({\tilde \alpha}_0,{\tilde \alpha}_1,{\tilde \alpha}_2)=\frac{1}{48}(-47,46,25) \end{equation} and the degeneracies are $({\tilde D}_1,{\tilde D}_2)=(96256,4371)$. All in all, there is now a strong case that every minimal model with two or three characters has an associated ``dual'' CFT (exotic when the original theory is non-unitary, but normal when the original is unitary) which pairs with it to give the Moonshine Module. The resulting pairings are summarised in Table \ref{table-3char}.
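All three pairings can be checked at the level of exponents at a glance; the following sketch (ours, in exact fractions) verifies $\alpha_0+{\tilde \alpha}_0=-1$ and $\alpha_i+{\tilde \alpha}_i=1$, i.e. $c+{\tilde c}=24$ and $h_i+{\tilde h}_i=2$, for each pair:

```python
from fractions import Fraction as F

pairs = [
    # (exponents of the minimal model, exponents of its proposed dual)
    ((F(17, 42), F(-1, 42), F(5, 42)), (F(-59, 42), F(43, 42), F(37, 42))),  # M_{2,7}
    ((F(11, 30), F(-1, 30), F(5, 30)), (F(-41, 30), F(31, 30), F(25, 30))),  # M_{2,5} x M_{2,5}
    ((F(-1, 48), F(2, 48), F(23, 48)), (F(-47, 48), F(46, 48), F(25, 48))),  # Ising
]
for a, ad in pairs:
    assert a[0] + ad[0] == -1                        # i.e. c + c~ = 24
    assert a[1] + ad[1] == 1 and a[2] + ad[2] == 1   # i.e. h_i + h~_i = 2
```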
It is amusing to note that the values of $\gamma$ for each pair of models satisfying the bilinear relation are related by the inversion $\gamma\to-\frac{31}{\gamma}$. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline No & $m_2^{(0)}$ & $\gamma$ & $D_1$ &$D_2$ & Identification\\ \hline\hline 1 & $1$ & $-33,47$ & $1$ & $1$ & ${\cal M}_{2,7}$ minimal model \\ \hline 2 & 63366 & $\frac{31}{33},-\frac{31}{47}$ & $-715139$ & $848656$ & Dual of ${\cal M}_{2,7}$ \\ \hline\hline 3 & 1 & $-\frac{341}{37},\frac{31}{3}$ & $1$ & $1$ & ${\cal M}_{3,4}$ (Ising model) \\ \hline 4 & 96256 & $\frac{37}{11},-3$ & $96256$ & $4371$ & Dual of ${\cal M}_{3,4}$ \\ \hline\hline 5 & 2 & $-\frac{217}{9},31$ & $1$ & $1$ & ${\cal M}_{2,5}\otimes {\cal M}_{2,5}$ \\ \hline 6 & 90118 & $\frac97,-1$ & $615164$ & $-508400$ & Dual of ${\cal M}_{2,5}\otimes {\cal M}_{2,5}$ \\ \hline \end{tabular} \caption{$\ell=0$ three-character theories without Kac-Moody algebra. Here $m_2^{(0)}$ is the degeneracy of the second excited state in the identity character, $\gamma$ is the rational number in \eref{pq} and $D_1$, $D_2$ are the ground-state degeneracies of the non-trivial primaries.} \label{table-3char} \end{table} Again it is worth pointing out that, independent of the bilinear relation, we have successfully classified all possible three-character RCFT's with $\ell=0$ having no Kac-Moody algebra. There are precisely three such theories, one of them a normal CFT (which we discuss in the following section) and the other two ``exotic'' in the sense of having negative, but integer, degeneracies. \subsection{Relation to the Baby Monster} Clearly it is important to understand the ``new'' theories, namely entries $2,4,6$ of Table \ref{table-3char}.
We note that these candidates have large values of $m_2$. They also have large degeneracies for the non-identity primary. Since two of them are exotic (and therefore may or may not exist as CFT's) we will focus on the sole unitary candidate, the one related to the Ising model. One of its fascinating features is that the number 96256 occurs twice: once as the degeneracy of the second excited state in the identity character ($m_2$) and once as the degeneracy of one of the nontrivial primaries. The number 4371 is the dimension of the other primary. Both these numbers are related to the Baby Monster, the second largest sporadic group. Indeed, there is a Baby Monster Vertex Operator Algebra $V\relax{\rm I\kern-.18em B}^\natural$ \cite{Hoehn:thesis} with central charge $\frac{47}{2}$ whose character (the ``shorter Moonshine module'') has the expansion: \begin{equation} \chi_{V\relax{\rm I\kern-.18em B}^\natural}=q^{-\frac{47}{48}}\left(1+4371 q^{\frac32} +96256 q^2 +1143745 q^{\frac52}+\cdots\right) \end{equation} This can be rewritten: \begin{equation} \begin{split} \chi_{V\relax{\rm I\kern-.18em B}^\natural}&=q^{-\frac{47}{48}}\left(1 +96256 q^2+\cdots\right)+ q^{\frac{25}{48}}\left(4371+ 1143745 q+\cdots\right)\\ &=\chi_0(q)+\chi_2(q) \end{split} \end{equation} where the second line is a sum of two of the characters of our 3-character $c=\frac{47}{2}$ theory. Moreover the three characters ${\tilde \chi}_0,{\tilde \chi}_1,{\tilde \chi}_2$ of the $c=\frac{47}{2}$ theory discussed above appear in Eq.(4.13) of Ref.\cite{Hoehn:thesis} (see also Ref.\cite{Hoehn:Baby8}) as the modules $V\relax{\rm I\kern-.18em B}^\natural(0),V\relax{\rm I\kern-.18em B}^\natural(2),V\relax{\rm I\kern-.18em B}^\natural(1)$ respectively. The bilinear relation \eref{jbilinear} for this case will then decompose the dimensions of representations of the Monster in terms of those of the Baby Monster. 
The dual relation between the Ising model and the above theory seems to originate in the fact that the Moonshine Module is itself a sum of irreducible characters of 48 copies of the Ising model, as shown in Ref.\cite{Dong:1994}\footnote{We thank Matthias Gaberdiel for bringing this fact and this reference to our attention.}. Indeed this observation was relevant for the subsequent work of H\"ohn on the $c=\frac{47}{2}$ ``Baby Monster'' module. Nevertheless, the way we have discovered the Baby Monster as a three-character RCFT without a Kac-Moody algebra, which pairs with the Ising model to give the Moonshine character, could provide a simple method to reproduce some of the results of Ref.\cite{Hoehn:thesis}, add new perspectives and give rise to generalisations. Our results lead one to ask whether other minimal models have duals of the sort we have found. The next minimal model, ${\cal M}_{4,5}$ (the tri-critical Ising model) has 6 characters and a central charge of $\frac{7}{10}$. If it has a dual in our sense (with which it obeys a bilinear identity giving the Moonshine Module) then that theory must have six characters as well, and a central charge $c=\frac{233}{10}$. It is easy to verify using $\alpha_0+{\tilde \alpha}_0=-1,\alpha_i+{\tilde \alpha}_i=1, i=1,\ldots,5$ that the dual has $\ell=6$, i.e. the coefficients in its differential equation can have $\frac{\ell}{6}=1$ full singularity in moduli space. This allows a rather large number of independent coefficient functions, hence the free parameters in the differential equation cannot be determined completely by the conjectured exponents ${\tilde \alpha}_i$. Thus we have no obvious way of generating the $q$-series for the possible dual characters and verifying their integrality. The situation rapidly gets even more complicated for other unitary minimal models. In short, for the moment we have no way either to support or to exclude the existence of such duals for minimal models.
\section{Summary and Discussion} In this section we summarise the emerging picture and propose possible lines of further investigation. We have classified all two-character RCFT's with no Kac-Moody algebra for $\ell\le 3$ ($\ell=0,3$ were already done) and all three-character RCFT's with the same property for $\ell=0$ (the latter, under the assumption that our computation up to $m_2=2,000,000$ is sufficient). The restriction to low values of $\ell$ is due to the fact that the method of modular-invariant differential equations is most restrictive in these cases and allows efficient construction of candidate characters given only the critical exponents. Within this set of systems, we found the expected unitary and non-unitary minimal models (precisely those with two and three characters) as well as dual theories in every case, although the duals for non-unitary minimal models have negative integer degeneracies. Each model and its dual satisfies a bilinear pairing identity equating it to the Moonshine Module. This pairing is reminiscent of that recently found in Ref.\cite{Gaberdiel:2016zke} between an affine theory and the coset of a meromorphic theory by that affine theory, although affine Lie algebras play no role in the present case -- by construction. It would be interesting to know if more such pairs of theories exist. This structure is interesting from the mathematical point of view as well, since the unitary case we discovered in our approach is a known theory related to the Baby Monster module, and our pairing decomposes representations of the Monster into those of the Baby Monster. \section*{Acknowledgements} We would like to thank R. Loganayagam and Sameer Kulkarni for useful discussions and Matthias Gaberdiel for very helpful comments on a first draft of this manuscript. The work of HRH is supported by an INSPIRE Scholarship, DST, Government of India, and that of SM by a J.C. Bose Fellowship, DST, Government of India.
SM would also like to acknowledge the warm hospitality of the Yukawa Institute of Theoretical Physics, Kyoto University where this work was completed. We thank the people of India for their generous support for the basic sciences. \section*{Conventions and useful formulae} The relevant Eisenstein series used in this paper, normalised so that their first term is unity, have the series expansion: \begin{equation} \begin{split} E_2 &= 1 -24\sum_{n=1}^\infty \frac{n q^n}{1-q^n}=1-24\sum_{n=1}^\infty \sigma_1(n)q^n\\ E_4 &= 1 +240\sum_{n=1}^\infty \frac{n^3 q^n}{1-q^n}=1+240\sum_{n=1}^\infty \sigma_3(n)q^n\\ E_6 &= 1 -504\sum_{n=1}^\infty \frac{n^5 q^n}{1-q^n}=1-504\sum_{n=1}^\infty \sigma_5(n)q^n\nonumber \end{split} \end{equation} where \begin{equation} \sigma_p(n)=\sum_{d|n}d^p\nonumber \end{equation} $E_4$ and $E_6$ can be expressed in terms of Jacobi $\theta$-functions: \begin{equation} \begin{split} E_4&={\frac{1}{2}\,} \sum_{\nu=2}^4 \big(\theta_\nu(0|\tau)\big)^8\\ E_6&= \sqrt{E_4^3-\sfrac{27}{4}(\theta_2\theta_3\theta_4)^8}\nonumber \end{split} \end{equation} The explicit expansion of these series to a few finite orders is: \begin{equation} \begin{split} E_2 &= 1-24q -72 q^2 - 96 q^3 - 168 q^4\\ E_4 &= 1+240 q+2160 q^2+6720 q^3+17520 q^4\\ E_6 &= 1- 504q-16632 q^2 - 122976 q^3-532728 q^4\nonumber \end{split} \end{equation} \bibliographystyle{JHEP}
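As a quick cross-check of the expansions above (a sketch of ours, not part of the original text), the listed low-order coefficients follow directly from the divisor sums:

```python
def sigma(p, n):
    # divisor power sum sigma_p(n) = sum of d^p over divisors d of n
    return sum(d**p for d in range(1, n + 1) if n % d == 0)

E2 = [1] + [-24 * sigma(1, n) for n in range(1, 5)]
E4 = [1] + [240 * sigma(3, n) for n in range(1, 5)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, 5)]

assert E2 == [1, -24, -72, -96, -168]
assert E4 == [1, 240, 2160, 6720, 17520]
assert E6 == [1, -504, -16632, -122976, -532728]
```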
\section{Introduction and background information} \label{Introduction} \noindent The aim of this note is to present our research of recent years on varieties generated by (standard) wreath products of abelian groups~\cite{AwrB_paper}--\cite{wreath products Prilozh}, and to display some of the newest, as yet unpublished progress in this direction, concerning wreath products of non-abelian groups as well. In particular, we give a complete classification of all cases when, for abelian groups $A$ and $B$, their cartesian (or direct) wreath product generates the variety $\var{A} \var{B}$. The analog of this is also established for arbitrary {\it sets} of abelian groups. Since the cartesian wreath product $A \Wr B$ and the direct wreath product $A \Wrr B$ of any groups $A$ and $B$ always generate the same variety of groups~\cite[Statement 22.31]{HannaNeumann}, below we formulate all results for cartesian wreath products, keeping in mind that their analogs are also true for direct wreath products. \vskip5mm Wreath products are perhaps the most useful tool for studying product varieties of groups. The product $\U \V$ of varieties $\U$ and $\V$ is defined as the variety of extensions of all groups $A \in \U$ by all groups $B \in \V$. By the Kaloujnine-Krassner theorem every extension of $A$ by $B$ can be embedded into the cartesian wreath product $A \Wr B$~\cite{KaloujnineKrasner}. Take $A$ and $B$ to be some fixed groups generating the varieties $\U$ and $\V$ respectively. If \begin{equation} \label{EQUATION_general} \var{A \Wr B}=\U \V, \end{equation} then the extensions of $A$ by $B$ are already enough to generate $\U \V =\var{A} \var{B}$, and we can restrict ourselves to consideration of $\var{A \Wr B}$, which is easier to study than checking all the extensions in $\U \V$. Examples where this approach is used are too many to list, so let us recall the earliest ones (see Chapter 2 of Hanna Neumann's monograph~\cite{HannaNeumann} for references). 
Using the relatively free groups $F_\infty(\U)$ of infinite rank for varieties $\U$, we get an easy consequence of the Kaloujnine-Krassner theorem: \begin{equation} \label{EQUATION_free_groups} \var{F_\infty(\U) \Wr {F_\infty(\V)}}= \var{F_\infty(\U)} \var{F_\infty(\V)} =\U \V. \end{equation} Moreover, according to~\cite[Statement 22.23]{HannaNeumann}, in (\ref{EQUATION_free_groups}) the group $F_\infty(\U)$ can be replaced by any group $A$ generating $\U$. Namely: $$ \var{A \Wr {F_\infty(\V)}}= \var{A} \var{F_\infty(\V)} =\U \V. $$ Further, by the theorem of G.~Baumslag and the Neumanns~\cite{B3,B+3N}, the group $F_\infty(\V)$ can be replaced by any group that discriminates $\V$, because for any group $A$ generating $\U$ and for any group $B$ discriminating $\V$ the wreath product $A \Wr B$ discriminates, and hence generates, $\U \V$. Below, in Theorem~\ref{Baumslag's_converse_groups}, we obtain the converse of this for abelian groups. Let $B^{\infty}$ be the countably infinite direct power of a group $B$, and let $B^{k}= B \times \cdots \times B$ be the $k$-th direct power of $B$. Since $B^{\infty}$ discriminates $\var{B}$, for any generator $B$ of $\V$ we have: $$ \var{A \Wr B^{\infty}}= \var{A} \var{B^{\infty}}= \var{A} \var{B} =\U \V. $$ Another example is when $\U \V$ has finite basis rank $n$ (that is, $\U \V$ is generated by its $n$-generator groups) and $\V$ is locally finite. Then $$\var{F_t(\U) \Wr {F_n(\V)}}= \U \V,$$ where $t = (n-1)|F_n(\V)|+1$. \vskip5mm In this note we are mainly interested in the case of wreath products of abelian groups. The initial result in this direction was proved by G.~Higman (Lemma 4.5 and Example 4.9 in~\cite{Some_remarks_on_varieties}): if $C_p$ and $C_n$ are finite cycles of orders $p$ and $n$ respectively ($p$ a prime not dividing $n$), then $$\var{C_p \Wr C_n}= \var{C_p} \var{C_n}= \A_p \A_n,$$ where, as usual, $\A_n$ is the variety of all abelian groups of exponent dividing $n$. 
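The rank formula above can be made concrete: for $\V = \A_k$ the relatively free group $F_n(\A_k) = C_k^n$ has order $k^n$, so $t$ is easy to compute. The toy snippet below (the function name is ours) evaluates $t = (n-1)|F_n(\V)|+1$.

```python
def schreier_rank(n, index):
    # Schreier's formula: the kernel of F_n(UV) -> F_n(V) is relatively free
    # of rank t = (n - 1) * |F_n(V)| + 1, where |F_n(V)| is the order of the
    # finite relatively free group of V on n generators.
    return (n - 1) * index + 1

# e.g. for V = A_2 and n = 2 generators: |F_2(A_2)| = 2^2 = 4, so t = 5
t = schreier_rank(2, 2 ** 2)
```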
Higman used these results to study varieties which cannot be generated by their $k$-generator groups for any fixed $k$. C.H.~Houghton generalized this to the case of arbitrary finite cycles $A=C_m$ and $B=C_n$. Namely, $$\var{C_m \Wr C_n}= \var{C_m} \var{C_n} = \A_m \A_n$$ holds if and only if $m$ and $n$ are coprime (this result of Houghton is unpublished, but is repeatedly mentioned in the literature, in particular, by R.G.~Burns in~\cite[page 356]{Burns65} and~\cite{BurnsDiso}, and by Hanna Neumann in~\cite{HannaNeumann}). Some other cases can be obtained using discriminating groups. For example, the infinite cyclic group $C$ discriminates the variety $\A$ of all abelian groups, so for any abelian group $A$ the equality $$\var{A \Wr C}= \var{A} \var{C} = \var{A} \A$$ holds. Also, the infinite direct (or cartesian) power $C_p^\infty$ discriminates the variety $\A_p$, so we have: $$\var{A \Wr C_p^\infty}= \var{A} \var{C_p^\infty} = \var{A} \A_p.$$ On the other hand, $\var{C_p \Wr C_p^k}\not= \A_p \A_p$ because $C_p \Wr C_p^k$ is nilpotent and thus does not generate the non-nilpotent variety $\A_p \A_p$ for any positive integer $k$~\cite{HannaNeumann}. \section{Classification for wreath products of arbitrary abelian groups} \label{Classification} In~\cite{AwrB_paper, finitely generated abelian}, generalizing the above mentioned results on abelian groups, we gave a general criterion for varieties generated by wreath products of arbitrary abelian groups $A$ and $B$. Let $B_p=\{b \in B \, | \, |b| = p^i \text{ for some }i \}$ be the $p$-primary component of $B$. By Pr\"ufer's theorem, every abelian group of finite exponent, in particular $B_p$, is a direct product of finite cycles of prime-power orders: $B_p = \sum_{i\in I} C_{p^{k_i}}$. Denote by $k'$ the greatest of the exponents $k_i$. 
Then: \begin{Theorem}[Theorem 6.1 in~\cite{AwrB_paper}] \label{classification for groups} For arbitrary abelian groups $A$ and $B$ the equality $\var{A \Wr B} = \var A \var B$ holds if and only if: \noindent {\bf (1)} either at least one of the groups $A$ and $B$ is not of finite exponent; or \noindent {\bf (2)} $\exp A = m$ and $\exp B = n$ are both finite and, for every common prime divisor $p$ of $m$ and $n$, the $p$-primary component $B_p = \sum_{i\in I} C_{p^{k_i}}$ of $B$ contains infinitely many direct summands $C_{p^{k'}}$, where $p^{k'}$ is the highest power of $p$ dividing $n$.\end{Theorem} The analog of this criterion also holds for direct wreath products. The theorem shows that for abelian groups $\var{A \Wr B}$ may be distinct from $\var A \var B$ only if both $A$ and $B$ are of finite exponents $m$ and $n$ respectively, and there is a common prime divisor $p$ of $m$ and $n$ such that $B$ can be presented in the form $ B = ( C_{p^{k'}} \oplus \cdots \oplus C_{p^{k'}} ) \oplus B', $ where $p^{k'} \nmid \exp B'$. For example, for distinct primes $p$ and $q$: $$ \var{C_p \Wr [ C_q \oplus (C_p \oplus C_p) \oplus (C_{p^2} \oplus \cdots \oplus C_{p^2} \oplus \cdots ) ] } = \A_p \A_{q p^2}, $$ $$ \var{C_p \Wr [C_q \oplus (C_{p^2} \oplus C_{p^2} \oplus C_{p^2}) \oplus (C_{p} \oplus \cdots \oplus C_{p} \oplus \cdots ) ] } \not= \A_p \A_{q p^2}. $$ When the abelian groups $A$ and $B$ are finite, Theorem~\ref{classification for groups} immediately implies: \begin{Theorem} \label{Generalisation of Houghton's result for finite abelian groups} For arbitrary non-trivial finite abelian groups $A$ and $B$ of exponents $m$ and $n$ respectively, $ \var{A\Wrr B}= \var A \var B = \A_m \A_n $ holds if and only if $m$ and $n$ are coprime. \end{Theorem} Thus, the above mentioned result of Houghton is true not only for cyclic groups but for arbitrary finite abelian groups. This fact, however, seems to be known in mathematical folklore. 
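For groups of finite exponent, condition (2) of Theorem~\ref{classification for groups} is mechanical enough to encode. The sketch below is purely illustrative (the encoding of $B$ by its cyclic summands and the function name are ours) and reproduces the two examples above.

```python
from math import inf

def generates_product_variety(m, B_summands):
    """Condition (2) of the criterion for exp A = m and B of finite exponent.
    B_summands maps each prime p to a dict {k: multiplicity of C_{p^k} in B_p},
    where a multiplicity may be math.inf."""
    for p, ks in B_summands.items():
        if m % p == 0:            # p is a common prime divisor of m and exp B
            k_max = max(ks)       # p^{k_max} is the highest power of p dividing exp B
            if ks[k_max] != inf:  # need infinitely many summands C_{p^{k_max}}
                return False
    return True

p, q = 3, 2
# C_q + (C_p + C_p) + (C_{p^2} + C_{p^2} + ...):  equality holds
ok = generates_product_variety(p, {q: {1: 1}, p: {1: 2, 2: inf}})
# C_q + (C_{p^2})^3 + (C_p + C_p + ...):  equality fails
bad = generates_product_variety(p, {q: {1: 1}, p: {2: 3, 1: inf}})
```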
After our paper~\cite{AwrB_paper} was published in 2001, we presented it to Prof.~C.H.~Houghton and asked about the article in which his result on wreath products of cyclic groups was published. He confirmed that it was never published: when his advisor Hanna Neumann was preparing ``Varieties of Groups''~\cite{HannaNeumann}, C.H.~Houghton informed her about this result, and it was mentioned in the book. Later, however, some problem in the proofs was discovered, and the article was never published. \vskip5mm For any sets of groups $\X$ and $\Y$ their cartesian wreath product is defined by $\X \Wr \Y = \{ X\Wr Y \,|\, X\in \X, Y\in \Y\}$ (the direct wreath product $\X \Wrr \Y $ is defined analogously). In~\cite{Metabelien, wreath products Prilozh} we discuss the varieties generated by wreath products of sets of abelian groups. The following theorem gives a full classification for this case as well: \begin{Theorem}[Theorem 7.1 in~\cite{Metabelien}] \label{classification for sets} For arbitrary sets of abelian groups $\X$ and $\Y$ the equality $\var{\X \Wr \Y} = \var{\X} \var{\Y}$ holds if and only if: \noindent {\bf (1)} either at least one of the sets $\X$ and $\Y$ is not of finite exponent; or \noindent {\bf (2)} $\exp \X = m$ and $\exp \Y = n$ are both finite and, for every common prime divisor $p$ of $m$ and $n$ and for an arbitrary positive integer $s$, the set $\Y$ contains a group $B(s)$ such that the $p$-primary component $B(s)_p = \sum_{i\in I(s)} C_{p^{k_i}}$ of $B(s)$ contains at least $s$ direct summands $C_{p^{k'}}$, where $p^{k'}$ is the highest power of $p$ dividing $n$. \end{Theorem} Again, the analog of this criterion also holds for direct wreath products. The classification we obtained can also be used to derive other results, which are listed in~\cite{AwrB_paper}--\cite{wreath products Prilozh}. 
For example, the above mentioned theorem of Baumslag and the Neumanns~\cite{B3,B+3N} about wreath products with discriminating groups is, in fact, a {\it necessary and sufficient} condition for abelian groups: \begin{Theorem} \label{Baumslag's_converse_groups} For arbitrary non-trivial abelian groups $A$ and $B$ the wreath product $A \Wr B$ discriminates the variety $\var{A} \var{B}$ if and only if $B$ discriminates the variety $\var{B}$. \end{Theorem} The analog of this also holds for wreath products of sets of abelian groups. \section{Wreath products of finite groups in other classes} \label{groups_close} After we fully classified all the cases when the equality (\ref{EQUATION_general}) holds for abelian groups, it is natural to consider the same problem for non-abelian groups. Clearly, this is a much more complicated objective, as there is a continuum of varieties of groups, and there are non-abelian varieties in which all finite groups are abelian (see, for example, the result of A.Yu.~Olshanskii~\cite{Olshanskii all finite groups are abelian}), whereas the set of abelian and metabelian varieties is countable (see the theorem of D.E.~Cohen~\cite{CohenMetabelian}). We have posed these as problems in~\cite{AwrB_paper, Metabelien, Two problems}. \vskip3mm The simplest step to start with is consideration of the varieties $\var{A \Wr B}$ for not necessarily abelian, {\it finite} groups $A$ and $B$. For this case there is a necessary condition, based on a theorem of A.L.~Shmel'kin~\cite{ShmelkinOnCrossVarieties}, which strongly restricts the groups we should deal with: \begin{Proposition}[a corollary of Shmel'kin's theorem] \label{Shmelkin's Corollary} If the non-trivial varieties of groups $\U$ and $\V$ are generated by finite groups $A$ and $B$ respectively, then $ \var{A \Wr B}=\var{A} \var{B}=\U \V $ holds only if $B$ is abelian, $A$ is nilpotent, and the exponents of $A$ and $B$ are coprime. 
\end{Proposition} \begin{proof} By Shmel'kin's theorem $\U \V$ can be generated by a finite group if and only if $\V$ is abelian, $\U$ is nilpotent, and the exponents of $\V$ and $\U$ are coprime~\cite{ShmelkinOnCrossVarieties}. It remains to recall that the wreath product $A \Wr B$ is finite of order $|A|^{|B|}\cdot |B|$. \end{proof} Denote by $\Ni_{c,m}$ the variety of all groups of nilpotency class at most $c$ and of exponent dividing $m$. Clearly $\Ni_{c,m} = \Ni_c \cap \B_m$ holds. In this notation Proposition~\ref{Shmelkin's Corollary} can be formulated more concisely: {\it If (\ref{EQUATION_general}) holds for finite non-trivial groups $A$ and $B$, then $A \in \Ni_{c,m}$ and $B \in \A_n$, where $m$ and $n$ are coprime}. The condition of Proposition~\ref{Shmelkin's Corollary} is not sufficient. An example showing this for small values of the nilpotency class and of the exponents (thus, for a case ``very near'' to the abelian groups) is displayed in~\cite{Metabelien, wreath products Prilozh}: \begin{Example} \label{smallExample} Let $A=F_2(\Ni_{2,3})$ be the free group of rank $2$ in the variety $\Ni_{2,3}$. Then the exponents of $A$ and $C_2$ are coprime, but, nevertheless, $ \var{A \Wr C_2}\not= \var{A} \var{C_2} =\Ni_{2,3} \A_2. $ \end{Example} \begin{proof} Let us present $A$ as $ A=F_2(\Ni_{2,3}) =\langle x_1,x_2 \,|\, [x_1,x_2,x_1]=[x_1,x_2,x_2]=x_1^3=x_2^3=1 \rangle $ and define a group $R$ to be the extension of $A$ by the group of operators generated by the automorphisms $\nu_1,\nu_2\in \Aut{A}$ given by: $\nu_1 : x_1\mapsto x_1^{-1}, \hskip0mm \nu_1 : x_2\mapsto x_2; \hskip1mm \nu_2 : x_1\mapsto x_1, \hskip0mm \nu_2 : x_2\mapsto x_2^{-1} $. Clearly $\langle \nu_1,\nu_2 \rangle \cong C_2 \oplus C_2 \in \A_2$. As is shown in~\cite{BurnsDiso}, $R$ is a critical group, that is, a finite group which does not belong to the variety generated by its proper factors. 
Each of its proper factors, but not $R$ itself, satisfies the identity $ [[x_1,x_2],[x_3,x_4],x_5]\equiv 1. $ On the other hand, the wreath product $A \Wr C_2$ satisfies this identity because its second commutator subgroup lies in the center. Thus, $R \notin \var{A \Wr C_2}$. \end{proof} The following is a partial converse of Proposition~\ref{Shmelkin's Corollary} in the sense that even if (\ref{EQUATION_general}) does not hold for groups $A$ and $B$ satisfying Shmel'kin's conditions, it will hold if we replace $B$ by a suitable finite direct power: \begin{Proposition} \label{Nilpotent by abelian} Let $A \in \Ni_{c,m}$ be any finite group of nilpotency class at most $c>1$ and of exponent $m$, and let $B$ be any finite abelian group of exponent $n$, where $m$ and $n$ are coprime. Then $ \var{A \Wr B^{c+1}} = \var{A} \var{B} = \var{A} \A_n $ holds. Moreover, if $A$ generates $\Ni_{c,m}$, then: $ \var{A \Wr B^{c}} = \var{A} \var{B} = \Ni_{c,m} \A_n. $ \end{Proposition} We omit the case $c=1$ because in that case $\Ni_{c,m}$ is abelian, and we covered that case earlier. Also, notice that in the second equality we take a lower direct power of $B$. \begin{proof} Let us start with the second equality. As is proved in~\cite{Burns65, BurnsDiso}, under the current assumptions $\Ni_{c,m} \A_n$ has basis rank $c$, that is, this variety is generated by its $c$-generator groups. By~\cite[Proposition 21.13]{HannaNeumann} the $c$-generator free group $F_c(\Ni_{c,m} \A_n)$ is an extension of the $t$-generator free group $F_t(\Ni_{c,m})$ of the variety $\Ni_{c,m}$ by the $c$-generator free group $F_c(\A_n)=C_n^c$ of the variety $\A_n$, where the rank $t$ is calculated by Schreier's formula: $t = (c-1)|F_c(\A_n)|+1 = (c-1)n^c + 1$. By the Kaloujnine-Krassner theorem~\cite{KaloujnineKrasner} this extension is embeddable into the cartesian wreath product $F_t(\Ni_{c,m}) \Wr C_n^c$. 
Since the latter wreath product is still within the product variety $\Ni_{c,m} \A_n$, we get that $\var{F_t(\Ni_{c,m}) \Wr C_n^c} = \Ni_{c,m} \A_n$. Since $A$ generates $\Ni_{c,m}$, the group $F_t(\Ni_{c,m})$ can, by~\cite[Theorem 15.4]{HannaNeumann}, be embedded into the cartesian (in this case also direct, as $A$ and $t$ are finite) power $A^{A^t}$. Thus: $\var{A^{A^t} \Wr C_n^c} = \Ni_{c,m} \A_n$. Applying~\cite[Lemma 1.1]{AwrB_paper} to the case where $\X = \{A\}$, \, $\Y = \{C_n^c\}$ and $X^*=A^{A^t}$, we get that $A^{A^t} \Wr C_n^c \in \var{A \Wr C_n^c}$. It only remains to notice that if the exponent of $B$ is $n$, then the direct power $B^c$ contains a subgroup isomorphic to $C_n^c$. Combining these steps, we get: $$ F_c(\Ni_{c,m} \A_n) \hookrightarrow F_t(\Ni_{c,m}) \Wr C_n^c \hookrightarrow A^{A^t} \Wr C_n^c \in \var{A \Wr C_n^c} \subseteq \var{A \Wr B^c}. $$ \vskip4mm Turning to the proof of the first equality, notice that $\var{A}$ may be a proper subvariety of $\Ni_{c,m}$. This also includes the case when $A$ is abelian, in which an even stronger equality holds: $\var{A \Wr B} = \var{A} \var{B} = \A_m \A_n$. Being a locally finite variety, $\Ni_{c,m} \A_n$ is generated by its critical groups~\cite[Proposition 51.41]{HannaNeumann}. According to~\cite[Statement 2.1]{Burns65} all critical groups in $\Ni_{c,m} \A_n$ are at most $(c+1)$-generator. Each such critical group $K$ is an extension of a group $P \in \Ni_{c,m}$ by an at most $(c+1)$-generator subgroup $Q \in \A_n$. Assume the group $P$ is $l$-generator for some $l$ (it is finitely generated, as it is a subgroup of finite index in a finitely generated group). $K$ is embeddable into the wreath product $P \Wr Q$. The group $P$ is an epimorphic image of the free group $F_l(\var{A})$ of rank $l$. The latter is embeddable into the cartesian power $A^{A^l}$. So we have two differences from the previous case: $F_l(\var{A})$ may not be equal to $F_l(\Ni_{c,m})$, and $P$ may not be embeddable into $A^{A^l}$. 
But $P$ is still a factor of $A^{A^l}$, that is, an epimorphic image of a subgroup $S$ of $A^{A^l}$. Applying~\cite[Proposition 22.11]{HannaNeumann} (or~\cite[Lemma 1.1]{AwrB_paper}), we get that $P \Wr Q$ belongs to $\var{S \Wr Q}$. By~\cite[Proposition 22.12]{HannaNeumann} (or by~\cite[Lemma 1.1]{AwrB_paper}) we also have $S \Wr Q \in \var{A^{A^l} \Wr Q}$. We can again apply~\cite[Lemma 1.1]{AwrB_paper} to get that $A^{A^l} \Wr Q \in \var{A \Wr Q}$. It remains to notice that the exponent $n'$ of $Q$ is a divisor of $n$ and, thus, $Q$ is a subgroup of $C_{n'}^{c+1}$. The latter is a subgroup of $B^{c+1}$ because $B$ has exponent $n$ and contains at least one direct summand $C_n$ (clearly $C_{n'} \hookrightarrow C_n$). By~\cite[Proposition 22.13]{HannaNeumann} (or by~\cite[Lemma 1.2]{AwrB_paper}) we get that $A \Wr Q \in \var{A \Wr B^{c+1}}$. Combining these steps gives: $$ K \hookrightarrow P \Wr Q \in \var{S \Wr Q} \subseteq \var{A^{A^l} \Wr Q} \subseteq \var{A \Wr Q} \subseteq \var{A \Wr B^{c+1}}, $$ which completes the proof. \end{proof} In contrast with the previous example, we get: \begin{Example} \label{smallExampleContinued} If $A=F_2(\Ni_{2,3})$ is the group of Example~\ref{smallExample}, then: $ \var{A \Wr (C_2 \oplus C_2)} = \var{A} \var{C_2 \oplus C_2} =\Ni_{2,3} \A_2. $ \end{Example} \begin{proof} We just apply the second equality of Proposition~\ref{Nilpotent by abelian} and take into account that the basis rank of the variety $\Ni_{2,3} \A_2$ is 2 (see~\cite{Burns65, BurnsDiso}). \end{proof} Comparison of Example~\ref{smallExample}, Proposition~\ref{Nilpotent by abelian} and Example~\ref{smallExampleContinued} shows how different the situation can be when we pass from wreath products of abelian groups to wreath products of nilpotent groups with abelian groups. 
They also show that for finite groups satisfying Shmel'kin's condition, the equality $ \var{A \Wr B}=\var{A} \var{B} $ always holds if the group $B$ is ``large'' in the sense that it contains sufficiently many copies of $C_n$. Thus, the equality can fail only for ``small'' groups $B$, which makes their further study and complete classification a reasonable objective.
\section{Introduction} Toxic comments refer to rude, insulting and offensive comments that can severely affect a person's mental health, and severe cases can also be classified as cyber-bullying. They often instill insecurities in young people, leading them to develop low self-esteem and suicidal thoughts. This abusive environment also dissuades people from expressing their opinions in the comment section, which is supposed to be a safe and supportive space for productive discussions. Young children learn the wrong idea that adopting profane language will help them in seeking attention and becoming more socially acceptable. Hence, the task of flagging inappropriate content on social media is extremely important for creating a healthy social space. In this paper, we discuss how we leveraged AI to solve this task. The task is challenging because of the lack of useful data for training powerful models. We have used state-of-the-art transformer models pretrained on the MLM task and fine-tuned with different architectures. Furthermore, we discuss some creative post-processing techniques that help to enhance the scores. \section{Task Overview} \subsection{Problem Formulation} \emph{IIIT-D Multilingual Abusive Comment Identification} is an innovative challenge towards combating abusive comments on Moj, one of India's largest social media platforms, in multiple regional Indian languages. This paper addresses a novel research problem focused on improving the social space for members of the social media community. The focus area is improving abusive comment identification on social media in low-resource Indian languages. There are multiple challenges in dealing with multilingual text data. Some of the main challenges in this task include: \begin{itemize} \item There is a lack of resources about literature and grammar despite millions of native speakers using these languages. 
Building NLP algorithms with limited basic lexical resources is highly challenging. \item Not all Indic languages fall into the same linguistic family: (1) the Indo-Aryan family includes Hindi, Marathi, Gujarati, Bengali, etc.; (2) the Dravidian family consists of Tamil, Telugu, Kannada and Malayalam; (3) tribes of Central India speak Austric languages; (4) Sino-Tibetan languages are spoken by tribes of North Eastern India. \item The posts/comments on social media do not adhere to a particular format, grammar or sentence structure. They are short, incomplete, and filled with slang, emoticons and abbreviations. \end{itemize} \subsection{Data Description} In this challenge, the data provided was split into 2 parts: training data with 665k+ samples and test data with 74.3k+ samples. Key novelties around this dataset include: \begin{itemize} \item Natural language comment text data in 13 Indic languages labeled as Abusive (312k samples) or Not Abusive (352k samples). Fig. 1 shows that Hindi is the most common language. \item The data is human annotated. \item Metadata based explicit feedback - post identification number, report count of the comment, report count of the post, count of likes on the comment and count of likes on the post. \end{itemize} \begin{figure}[htp] \centering \includegraphics[width=7.8cm]{lang_dist.png} \caption{Language distribution in train dataset} \label{fig:lang_dist} \end{figure} \section{Model Building Approach and Evaluation} Since the introduction of the attention mechanism \cite{vaswani2017attention} and the groundbreaking BERT model \cite{devlin2019bert}, the NLP space has been revolutionized and state-of-the-art transformers have become the go-to option for almost all NLP tasks. For tackling this multilingual task, we chose the following models - \begin{itemize} \item \emph{XLM-RoBERTa} - It is a transformer-based masked language model trained on 100 languages, using more than two terabytes of filtered CommonCrawl data. 
\cite{conneau2020unsupervised} \item \emph{MuRIL} - A multilingual language model specifically built for Indic languages, trained on significantly large amounts of Indian text corpora with both transliterated and translated document pairs, which serve as supervised cross-lingual signals in training. \cite{khanuja2021muril} \item \emph{mBERT} - Multilingual BERT (mBERT) is a transformer based language model trained on raw Wikipedia text of 104 languages. This model is contextual and its training requires no supervision - no alignment between the languages is done. \cite{k2020crosslingual} \item \emph{IndicBERT} - This model is trained on IndicCorp and evaluated on IndicGLUE. Similar to mBERT, a single model is trained on all Indian languages with the hope of utilizing the relatedness amongst Indian languages. \cite{kakwani-etal-2020-indicnlpsuite} \item \emph{RemBERT} - This model is pretrained on 110 languages using a Masked Language Modeling (MLM) objective. Its main difference from mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and large output embeddings. \cite{DBLP:conf/iclr/ChungFTJR21} \end{itemize} The evaluation metric for this task is the mean F1-score. \subsection{Pre-training} Pre-training using the Masked Language Model (MLM) objective \cite{song2019mass} has been one of the most popular and successful methods for downstream tasks, mostly due to its ability to model higher-order word co-occurrence statistics \cite{sinha2021masked}. Since MuRIL is already a BERT encoder model pre-trained on Indic languages with the MLM objective, we performed pre-training only on the XLM-RoBERTa model (3 epochs for the Base variant and 2 epochs for the Large variant). We separated 10\% of the test data for evaluating the pre-training step. Table I summarizes this step. 
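For readers unfamiliar with the MLM objective, a simplified sketch of BERT-style token masking is shown below (15\% of positions become prediction targets; of those, 80\% are replaced by \texttt{[MASK]}, 10\% by a random token, and 10\% are left unchanged). This is an illustration of the objective, not our training code; the function and token names are ours.

```python
import random

MASK_TOKEN = "[MASK]"

def mlm_mask(tokens, vocab, rng, mask_prob=0.15):
    # Select ~15% of positions as prediction targets. labels is -100
    # (ignored by the loss) everywhere except at target positions,
    # where it stores the original token to be predicted.
    inputs, labels = list(tokens), [-100] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_TOKEN          # 80%: mask the token
            elif r < 0.9:
                inputs[i] = rng.choice(vocab)   # 10%: random replacement
            # remaining 10%: keep the original token
    return inputs, labels
```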
On average, the test F1 score on the downstream task was approximately 0.87 without performing MLM and 0.88 with MLM, when the fine-tuned model was trained with the same settings in both cases. This helped us conclude that MLM on the given data certainly helps the downstream task. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Validation loss for MLM pre-training} \label{mlm_pretraining} \centering \begin{tabular}{|c|c|c|} \hline Info & XLM-RoBERTa Base & XLM-RoBERTa Large\\ \hline Accelerator & Tesla P100 16GB & Tesla T4 16GB\\ \hline Time Taken (hr) & 11.26 & 35.3 \\ \hline Epoch 1 & 3.3819 & 3.0549\\ \hline Epoch 2 & 3.3222 & 2.8466\\ \hline Epoch 3 & 3.2946 & -\\ \hline \end{tabular} \end{table} \subsection{Data Augmentation} We performed data augmentation by adding transliterated data. We removed emojis from the text and then used uroman\footnote{\url{https://github.com/isi-nlp/uroman}} to generate 219114 additional transliterated samples. We also transliterated the test dataset and made the original plus transliterated dataset available on Kaggle \footnote{\url{https://www.kaggle.com/harveenchadha/iitdtransliterated}}. \subsection{Model training} We used 2 different model architectures: in the first, we fed the transformer outputs directly to the classification layer to obtain the probabilities; in the second, we added a custom attention head over the transformer outputs before computing the probabilities. We also experimented with input truncation lengths of 64, 96, 128 and 256, and finally decided to go ahead with 128. From Table II, we found that MLM and transliterated data had a positive impact on performance. Moreover, a custom attention head boosted the score too. XLM-RoBERTa was the best performer, followed by MuRIL. 
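The custom attention head mentioned above is a standard attention-pooling layer over the sequence of hidden states. A minimal NumPy sketch is given below; the weights, shapes and function name are illustrative, not our actual implementation.

```python
import numpy as np

def attention_pool(hidden, W, v):
    # hidden: (seq_len, dim) transformer outputs for one sequence.
    # Score each position, softmax over the sequence, and return the
    # attention-weighted sum: a single (dim,) vector fed to the classifier.
    scores = np.tanh(hidden @ W) @ v          # (seq_len,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over positions
    return weights @ hidden                   # (dim,) pooled representation

rng = np.random.default_rng(0)
dim = 8
hidden = rng.normal(size=(128, dim))          # e.g. truncation length 128
pooled = attention_pool(hidden, rng.normal(size=(dim, dim)), rng.normal(size=dim))
```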
We also observed a slight difference between GPU and TPU accelerators - models trained on GPU tend to perform much better. But since experimentation time on TPU was lower, we went ahead with it for most experiments. We used wandb\cite{wandb} to track most of our experiments, we are also releasing our experimentation logs \footnote{\url{https://wandb.ai/harveenchadha/iitd-private}} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Model Experimentation Summary} \label{model_exp_summary} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Id & Model & Data & Accelerator & Validation Strategy & CV & Test Score \\ \hline 1 & XLM-RoBERTa Base & Original & TPU v3-8 & 10\% split & 0.854 & 0.87674\\ \hline 2 & XLM-RoBERTa Large & Original & TPU v3-8 & 10\% split & 0.8601 & 0.8819 \\ \hline 3 & XLM-RoBERTa Base & Original + Transliterated & TPU v3-8 & 10\% split & 0.86 & 0.87989\\ \hline 4 & XLM-RoBERTa Large & Original + Transliterated & TPU v3-8 & 10\% split & 0.8626 & 0.88238 \\ \hline 5 & XLM-RoBERTa Large + MLM & Original & TPU v3-8 & 10\% split & 0.8631 & 0.88291 \\ \hline 6 & XLM-RoBERTa Large + MLM & Original + transliterated & TPU v3-8 & 10\% split & 0.863 & 0.88316 \\ \hline 7 & XLM-RoBERTa Base + MLM + Attention head & Original & Tesla P100 - 16GB & 4-Fold & 0.866075 & 0.8814\\ \hline 8 & XLM-RoBERTa Large + MLM + Attention head & Original + transliterated & RTX5000 24GB & 10\% split & 0.8669 & 0.88378 \\ \hline 9 & MuRIL Base & Original & TPU v3-8 & 10\% split & 0.8520 & 0.87539\\ \hline 10 & MuRIL Large & Original & TPU v3-8 & 10\% split & 0.8546 & 0.87821 \\ \hline 11 & XLM-R Base + MLM + last 5 hidden states average & Original & Tesla P100 - 16GB & 4-Fold & 0.8662 & 0.88149 \\ \hline 12 & XLM-RoBERTa + MLM & Original + transliterated & TPU v3-8 & 4-Fold & 0.86825 & 0.8872\\ \hline 13 & XLM-RoBERTa Large & Original & TPU v3-8 & 10-Fold & 0.85759 & 0.8829 \\ \hline 14 & RemBERT (Truncation Length = 64) & Original & Tesla P100 & 10\% split & 
0.8529 & 0.877\\ \hline 15 & RemBERT & Original & TPU v3-8 & 10\% split & 0.8397 & 0.8678\\ \hline 16 & mBERT & Original & TPU v3-8 & 10\% split & 0.8474 & 0.8724\\ \hline 17 & mBERT (Truncation Length = 64) & Original & Tesla P100 & 4-Fold & 0.8435 & 0.86327\\ \hline 18 & IndicBERT & Original & TPU v3-8 & 10\% split & 0.8306 & 0.8456\\ \hline \end{tabular} \end{table*} As mentioned above, we also experimented with other transformer models but did not achieve satisfactory results. RemBERT has outperformed XLM-RoBERTa in various tasks but underperformed in this case. mBERT also gave meagre results. Surprisingly, IndicBERT was the worst performer for this task. Note: we transliterated the test data as well, so whenever we trained a model with transliterated data, inference was done on both the original text and the transliterated text, and the probabilities were combined in a 7:3 ratio to form the final prediction. \subsection{Ensembling} We wanted to select diverse models so that each model makes different mistakes; by combining the learning of diverse models, we can get a better result. Hence, after a few experiments, we went forward with the following Ids from Table II - 2, 6, 8, 9, 12, 13. We gave equal weights to all the models and our test score was 0.88756. After changing the probability inference threshold to 0.55, the score increased to 0.88827. \subsection{Using Metadata} As mentioned previously, we were also provided with weak metadata, which was utilized in the following way - \subsubsection{Feature Engineering} Due to the variation in timestamps, there were different values of reports and likes for the same post in different samples. For example, consider a post with \emph{post\_index} = 1 that has 1 comment. When this comment was captured for the dataset, the like count and report count of the post were 5 and 10 respectively. 
After a few minutes, when a new comment was uploaded and captured for the dataset, the number of likes and reports had changed to 10 and 20 respectively. This means that there are now 2 samples with \emph{post\_index} = 1 in the dataset, but the report count and like count recorded for them are different due to the variation in the timestamp. In order to deal with this, we created mean-value aggregated features and maximum-value aggregated features for the report count and like count of posts as well as comments. We also added the character length of the comment, the token length of the comment and the probabilities from the ensemble as features. We used this data to train boosting classifiers. \subsubsection{Post-processing} As per our findings, increasing the threshold gave us a boost, and hence we decided to tweak the thresholds for each language. After experimenting with several thresholds, we found that the values in Table III gave the best score. All the threshold values that we tested were multiples of 0.05, so there must have been some edge cases getting misclassified; in order to account for that, we increased the predicted probabilities by 0.01. Table IV shows the performance of different boosting models. XGBoost and CatBoost gave the highest scores. We combined the probabilities from these models in an XGBoost-to-CatBoost ratio of 6:4 and achieved the final best score of 0.90005. 
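The post-processing described above amounts to a probability blend followed by a language-specific threshold with a small shift. A schematic version is shown below; the threshold values are a subset of Table III, and the function names and dictionary encoding are ours.

```python
def blend(p_xgb, p_cat, w=0.6):
    # combine model probabilities in an XGBoost-to-CatBoost ratio of 6:4
    return w * p_xgb + (1 - w) * p_cat

# per-language inference thresholds (subset of Table III)
THRESHOLDS = {"Hindi": 0.54, "Marathi": 0.6, "Gujarati": 0.4}

def classify(prob, language, shift=0.01, default=0.5):
    # add the 0.01 edge-case shift, then compare to the language threshold
    return int(prob + shift >= THRESHOLDS.get(language, default))
```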
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Language-wise Inference Thresholds} \label{lang_thresh} \centering \begin{tabular}{|c|c|} \hline Language & Threshold\\ \hline Marathi & 0.6\\ \hline Malayalam & 0.5\\ \hline Hindi & 0.54\\ \hline Telugu & 0.6\\ \hline Tamil & 0.5\\ \hline Odia & 0.5\\ \hline Gujarati & 0.4\\ \hline Bhojpuri & 0.6\\ \hline Haryanvi & 0.5\\ \hline Assamese & 0.45\\ \hline Kannada & 0.6\\ \hline Rajasthani & 0.5\\ \hline Bengali & 0.6\\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Post-processing Models Summary} \label{post_process_models} \centering \begin{tabular}{|c|c|c|} \hline Model & CV Average & Test Score\\ \hline LightGBM & 0.9051 & 0.89933\\ \hline XGBoost & 0.9051 & 0.89988\\ \hline CatBoost & 0.9052 & 0.89980\\ \hline \end{tabular} \end{table} \section{Conclusion} In recent times, social media has become a hub for information exchange and entertainment. The results from this paper can be used to create systems that flag toxicity and provide users with a healthier experience. Surprisingly, MuRIL, which is trained specifically on Indian text, did not perform well individually (in both the Large and Base variants), but it gave us an edge in the ensemble because it added to model diversity. For the same reason, we added non-pretrained models to the ensemble. The metadata in itself was very weak, but combining it with the transformer output probabilities improved our predictions substantially. Even small tweaks like increasing the probability by 0.01 and language-wise inference thresholds had a notable impact on building a better system. \section*{Acknowledgment} We would like to thank Moj and ShareChat for providing the data and organizing the Multilingual Abusive Comment Identification Challenge along with IIIT-Delhi. We would also like to thank Kaggle for providing access to TPUs and jarvislabs.ai for providing credits to train the final model. \bibliographystyle{plain}